The Oxford Scientist: Regeneration (#10)


Hilary Term 2022

Oxford University’s independent, student-produced science magazine.

regeneration


the Oxford Scientist

Regeneration

HAVE YOU THOUGHT ABOUT...

A CAREER AS A PATENT ATTORNEY?

An intellectually challenging and rewarding career option

What Does It Involve?

Training as a Patent Attorney is a career path that will enable you to combine your understanding of science with legal expertise. You will leave the lab environment yet remain at the cutting edge of science and technology, applying your knowledge and skill in a commercial context. You will help to protect intellectual property assets and grow businesses.

Sound Interesting?

J A Kemp is a leading firm of UK and European Patent and Trade Mark Attorneys with offices in London, Paris, Oxford, Cambridge and Munich. For more information on applications and a career with J A Kemp, please visit: www.jakemp.com.

Sam Parry, MBioChem in Molecular and Cellular Biochemistry, University of Oxford (2020)

Jenny Soderman, MChem in Chemistry, University of Oxford (2018)

In This Issue

5 Editor’s letter | L. Sophie Gullino
6 Treating the old with the new: stem cell therapies for Alzheimer’s disease | Helen Collins
8 The Wonders of Webb | Molly Hammond
10 On dosing the dozing | Simon Litchtinger
11 Neuroregeneration: how the birth of new neurons could revolutionise neurotechnology | Karolina Zvoníčková
12 Mimicking natural processes for urban regeneration | E.M. Ford
14 Listen to your gut | Elizabeth Mira Rothweiler
16 Moral agency and modern medicine | Isabella Giaquinta
18 Radiotherapy, regeneration and rams: how an accidental discovery revolutionised cancer medicine | Emma Durkin
19 AI fossil hunters | Sophie Berdugo
20 The regeneration of the Chemical Weapons Convention | Tamara Gibbons
22 Sleep: the underappreciated therapist | Harrison France
24 How technological and scientific advancements can help make sure COVID-19 is the last pandemic | Jeremy Ratcliff
26 Mathematical modelling for regenerative medicine: dream or reality? | W. Duncan Martinson
28 The role of science in waste | Halima Doski
29 Is regenerative farming the future of food? | Julia Johnstone
30 Reductionism in science: learnings from natural remedies | Hamzah Mahomed

Artwork by Peyton Cherry | Cover art by Matthew Kurnia



HT22 Team

Editor-in-Chief L. Sophie Gullino | Print Editor Gemma Bingham | Creative Director Cecilia Jay | Schools Coordinator Halima Doski | Schools and Tech Advisor Gavin Man | Marketing Director Molly Hammond | News Editor Sarya Fidan | Web Editor Helen Collins | Sub-editors Rhienna Morar, Anezka Macey-Dare, Ally Darnton, Katarina Jerotic, Dominic Clearkin, Natalie Stevenson, Mason Wakley, Anastasia Bektimirova, Ilke Delice, Sophie Berdugo, Elisabeth Rothweiler, Franziska Guenther, Molly Hammond | Creative Team Asia Hoile, Bianca Rasmussen, Matthew Kurnia, Daniel Coneyworth, Peyton Cherry, Sophie Park, Tanmayee Deshprabhu, Antonia Fern

Join the TT22 Team

Editorial

Editor-in-chief: Manages all aspects of the Oxford Scientist, and has the final say on all editorial and creative decisions.

Print Editor: Commissions and edits articles for the magazine, and leads the team of sub-editors.

News Editor: Commissions and edits articles covering the latest news in Oxford and beyond.

Web editor: Manages oxsci.org, and commissions, edits, and uploads web articles. Experience with WordPress is appreciated but not required.

Sub-editors: Edit and write articles for the magazine and website.

Creative

Creative Director & Team: Commissions and creates artwork for the magazine and online, and lays in the magazine. Experience in Photoshop, InDesign, or Affinity Publisher is appreciated but not required.

Broadcast Editor & Team: Creates high-quality digital content for our platforms. Experience in videography, photography, or podcasting software is appreciated but not required.

Marketing & Events

Marketing Director & Team: Manages our social media channels and mailing list, keeping our followers engaged and growing our digital presence.

Events Director & Team: Organises our Chalk Talks, socials, and the magazine launch event.

Business and Outreach

Schools Coordinator: Manages the Schools Writing Competition and liaises with the Business Team to sell subscriptions.

Business Director & Team: Manages external partnerships and secures advertisements for the website and magazine.

Other ways to get involved

Write for us: Join our Facebook contributors group for weekly commissions. We always welcome pitches and submissions for our website at web@oxsci.org. Our first call for pitches to the magazine will be announced at the beginning of next term.

Design for us: To produce artwork for the website or next print edition, email creative@oxsci.org.

Talk for us: Chalk Talks is looking for speakers! If you’re interested in giving a five-minute talk with only a whiteboard and pen, email editor@oxsci.org. Open to everyone, from undergraduate to PI.

Editor’s letter

We’re in a time of transition, awakening, and rebirth. As we emerge from the pandemic and move on to the next chapter, we are filled with hope to tackle new challenges. For this issue of The Oxford Scientist, we wanted to explore this transformative process, focusing on the anticipation, excitement, and positivity that characterise new adventures. ‘Regeneration’ captures the feeling of rebirth that we are experiencing, both in our personal lives and as a scientific community. Just as social events return, halls fill with laughter, and nature is revitalised after winter, science too is undergoing a revolution of its own.

Regeneration permeates all biomedical disciplines, to the extent that it lends its name to an entire field. Regenerative medicine focuses on repairing and restoring the normal functions of cells and tissues. At the forefront of this research are approaches employing stem cells—cells that can specialise into any cell in the body. In this issue you can read how stem cells can be studied using mathematical modelling and how these therapies could be applied to challenging degenerative conditions, such as Alzheimer’s disease. Additionally, advances in neurology suggest that in the future we could stimulate neurogenesis, the birth of new neurons, as a therapeutic tool for neurodegenerative diseases.

Even more controversial and astonishing is the story of the first person living with a genetically modified pig’s heart, the first successful animal-to-human transplant in a living person. This advance sparked an intense ethical debate, but it also opened a plethora of possibilities.

Fast-paced technological advances like these are revolutionising a variety of fields. In this edition, we have collected some striking examples, from the applications of artificial intelligence in archaeology to NASA’s newest telescope, which could break new ground in the realm of astrophysics.

Finally, we bring our thoughts back closer to home by exploring how we can incorporate regeneration into our daily lives. Our cities can be improved with nature-based solutions and novel approaches to waste reduction. On a more personal level, ‘regeneration’ can mean taking on new challenges, but also listening to our bodies, winding down and resting, allowing the restorative process of sleep, and taking steps to improve our physical and mental wellbeing.

I hope you enjoy this issue and that you share our hopeful enthusiasm for the future!

L. Sophie Gullino
Editor-in-Chief, HT22

Artwork by Antonia Fern



Treating the old with the new: stem cell therapies for Alzheimer’s disease

‘The next great advance in medical care will not be a magical pill, it will be a miraculous cell called the mesenchymal stem cell.’ Speaking at a TEDx event in Ashland, Oregon in 2019, American physician Dr Neil Neimark sang the praises of stem cell therapies and their great potential for treating disease. He is not alone in his optimism. Over the last few years, the number of studies investigating the ability of stem cells to treat notoriously difficult conditions has skyrocketed, with nearly 13,000 papers published in 2021 alone. Their promise is particularly great for degenerative diseases, where drugs typically fail to reverse decades of accumulated damage.

One such condition is Alzheimer’s disease. As the most common form of dementia, Alzheimer’s is estimated to affect over 50 million people worldwide. Although at first patients present with only mild memory problems, this progressive neurodegenerative condition rapidly produces severe cognitive deficits. Eventually, patients lose their ability to walk, talk, and look after themselves. Moreover, with a continually ageing global population, the number of people living with Alzheimer’s is expected to surpass 150 million by 2050, making it an intensifying global health crisis.

From Alois Alzheimer’s initial description of the disease in 1907 as a ‘peculiar disease of the cerebral cortex’, our understanding of its causes has grown enormously. We now know that the condition starts long before symptoms appear, with the accumulation of toxic amyloid beta molecules, which form sticky deposits called plaques on neurons in the brain. In the early stages of the disease, these plaques are particularly prevalent in the hippocampus, a region associated with learning and memory. They also trigger further harmful processes, including neuroinflammation and the aggregation of intracellular proteins into so-called tau tangles. These changes lead to the loss of synapses (the junctions through which neurons communicate), then of the neurons themselves, and eventually of large portions of brain tissue. This loss of neurons is ultimately what produces the decline in memory and cognitive function experienced by patients.

Despite this knowledge, the development of treatments for Alzheimer’s has stagnated. Until last year, the only drugs available for Alzheimer’s patients merely improved the symptoms of the disease in its early stages and had no effect on patient life expectancy. Although there was much excitement in 2021 when the United States Food and Drug Administration approved the use of Aducanumab, the first drug to target amyloid plaques, its long-term effectiveness remains undetermined. There are currently no approved therapies that tackle neuronal loss in Alzheimer’s disease.

This is where stem cells may be the ideal solution. Stem cells are undifferentiated cells found throughout life. They have two remarkable characteristics: the ability to self-renew, meaning they can indefinitely produce more undifferentiated cells, and the capacity to differentiate into various mature cell types. In the right environment, stem cells can become highly specialised, producing everything from cardiac muscle cells in the heart to neurons in the brain. This power could therefore be harnessed to treat Alzheimer’s disease by regenerating the neurons that have been lost.

Several different approaches to stem cell therapy have been trialled in animal models of Alzheimer’s so far. Embryonic stem cells come from the cluster of cells that develops in the days following egg fertilisation. These cells can differentiate into any cell in the human body, including specialised neurons. Researchers have shown that implanting human embryonic stem cell-derived neurons into the relevant brain regions of a mouse model of Alzheimer’s disease could improve memory and learning performance. They have also been successfully applied to other degenerative conditions, including macular degeneration (an age-related breakdown of part of the retina).

Mesenchymal stem cells, the cells Dr Neimark so vehemently advocates for, are produced later in development and are found in abundance in umbilical cord blood, as well as in adult bone marrow and fat. These cells can produce most adult cell types and have an enormous advantage over other stem cells: they can be harvested from and readministered to the same person, a process termed autologous transplantation. Mesenchymal stem cells have been successfully implanted into the brains of mice with Alzheimer’s pathology, where they reduced amyloid plaques, tau tangles, and inflammatory markers, as well as improving spatial memory.

Neural stem cells, a third option, are highly specialised and are found in only a few places in the adult brain, predominantly in the hippocampus. Although these cells have a much narrower developmental capacity, they can differentiate into different types of neurons as well as astrocytes, the support cells of the brain. Given their location in the memory-storing region of the brain and their natural propensity for producing neurons, neural stem cells are great targets for Alzheimer’s therapies. So far, studies have shown that their transplantation into the brains of rodents increases new neuron formation and improves brain function. Moreover, these transplanted cells release signalling molecules and growth factors that can stimulate the renewal of existing cells in the brain, in essence helping damaged tissue to repair itself.

Despite these early successes, stem cell approaches remain complex and have faced major controversy. Firstly, the innate ability of stem cells to self-renew and differentiate can lead to the formation of cancers where they are implanted. This is a particular problem with embryonic stem cells, which by nature divide exponentially, wreaking havoc if their differentiation is not controlled. Moreover, the immune system tends to reject foreign implanted cells, meaning the body will likely try to destroy the therapy before it has had time to take effect.

Then, of course, there are ethical concerns, especially regarding the use of embryonic stem cells. Some human rights groups and devout Christians oppose the use of embryonic stem cells because they come from fertilised eggs: if life begins at conception, harvesting stem cells prevents that embryo from becoming a life, and the Bible teaches ‘thou shalt not kill’. Even major funding bodies like the Alzheimer’s Society do not financially support studies on embryonic stem cell therapies, citing patient objections for this decision. Other religions take a different stance—embryonic stem cells are harvested before many Muslim scholars believe human life begins, and thus ‘research on human embryonic stem cells is permissible if they are obtained from in vitro fertilisation and are not viable’ (Supreme Council of Health in Qatar). However, the alternative, mesenchymal stem cells, have also faced controversy. Although adults can consent to the donation of their own stem cells, their collection from umbilical cord blood has angered many. The ethical dilemma is clear.

Due to their highly complicated nature and the debates about their use, there have only been a handful of clinical trials of stem cell therapies in Alzheimer’s disease. In 2015, a study of nine patients with mild Alzheimer’s disease reported that a single infusion of umbilical cord blood stem cells into the hippocampi was safe and produced no severe adverse effects after two years. In 2019, this was extended to three intravenous infusions of stem cells, but this trial is yet to publish its results. However, the safety of these approaches was questioned in 2019 when a study of 21 subjects given nine infusions of autologous stem cells (collected from fat tissue) reported side effects including cancer, pulmonary embolisms, and severe fatigue. Nonetheless, ongoing trials hope to optimise implantation methods and minimise negative side effects.

Artwork by Matthew Kurnia

One way to tackle these issues could be to compare Alzheimer’s to Parkinson’s disease, where stem cell therapies have advanced further and are now being tested more broadly. Here, stem cells are specifically differentiated into dopamine neurons to replace those lost to the disease. Nevertheless, these comparisons highlight an overarching concern that, like the drugs currently used to treat Alzheimer’s and Parkinson’s, stem cells are still unlikely to cure the neurodegenerative condition. Describing a patient who received stem cell therapy over 20 years ago, Parkinson’s researcher Dr Jeff Bronstein says, ‘he was a successful patient, but the disease keeps progressing.’ Dr Bronstein believes the mistake people make is to view stem cell transplantation as disease-modifying therapy. ‘It’s not. It has the potential to improve disease symptoms, but it can’t alter the course of disease.’ Nonetheless, many still have hope for the regenerative power of stem cells. According to Dr Tilo Kunath, a biologist at the University of Edinburgh, stem cell therapies ‘will have some teething problems at the beginning, as any new therapy would’, but he hopes for a future where improved implantation techniques could revolutionise the treatment of degenerative conditions. It is clear that much more research is needed into stem cell therapies before their true usefulness can be ascertained. However, what will likely be an even greater challenge is gaining public trust in these strange but powerful cells.

Helen Collins



The Wonders of Webb

Molly Hammond on how NASA’s newest telescope will change physics forever

The James Webb Space Telescope (JWST) is about to usher in a whole new era of astronomy. With its successful launch on Christmas Day and subsequent deployment, all that’s left now is a few months of meticulous alignment and focussing before the first new high-resolution images can be released. Largely seen as Hubble’s successor, the JWST is NASA’s next-generation space telescope. Hubble was deployed just over three decades ago and has provided vast amounts of incredible data, but researchers are ready for a new leap in space observation. While Hubble primarily operates in the visible light range, the JWST uses the longer-wavelength infrared region, allowing it to carry out research that other telescopes never could. The science goals of the JWST are organised into four key themes: the early universe, galaxy evolution, star life cycles, and other worlds. This article will dive into each theme, pulling out the most exciting possibilities and highlighting the transformative impact the JWST will have.

Early Universe

When we observe incredibly distant objects, we are observing structures increasingly close to the beginning of the universe. The light from these regions takes billions of years to reach Earth, so when we observe them, we see them as they looked billions of years ago. The expansion of the universe also produces an additional red-shifting effect. As the space separating every object in the universe grows, the distance between the wavefronts of light emitted from these objects increases, and we observe a longer wavelength. This means that light emitted by ancient stars in the UV and visible range will have been stretched out into the infrared by the time it reaches our Solar System, making the JWST perfect for observing the early universe. The JWST should be able to see back to 180 million years after the Big Bang. It is in this period that we believe the earliest generations of stars and galaxies began to form, giving us incredible insight into the processes that have shaped our universe.

Artwork by Tanmayee Deshprabhu

Galaxy Evolution

When we turn our telescopes to the sky, we can see hundreds of billions of galaxies scattered across the observable universe. Two very common types are spiral galaxies, like our own, and elliptical galaxies, bright blobs with smooth distributions of matter. Both types of structure took billions of years to develop through collisions and major merger events with other galaxies. But what did young galaxies look like? Images from Hubble show that they are clumpy, with most stars developing in densely packed knots of matter. How these chaotic, irregular structures changed over time into the majestic spirals and diverse range of galaxies we see today is one of astronomy’s great unanswered questions. The JWST will help us obtain valuable information on what the earliest galaxies were like, and on how the supermassive black holes at their centres influenced their future, allowing us to develop new models of galactic evolution.

New data from the JWST will also offer insight into how galaxies initially form. Current computer models suggest formation happens when large clumps of dark matter come together. Dark matter doesn’t interact with light at all: no emission, no reflection, not even absorption, making it incredibly difficult to detect. Its gravitational interaction with other bodies is the only way we can pinpoint its location in space. Astonishingly, when we do the calculations, it appears that approximately one quarter of our universe is made up of dark matter. By better understanding galaxy formation, in which dark matter appears to play a crucial role, we also hope to understand this mysterious substance a little more.

Star Lifecycles

Stars have been a major focus of astronomy since its inception. Only in the last hundred years, though, have we made the massive theoretical strides that brought us to where we are today. It was in 1938 that Hans Bethe showed how fusion powers stars, and it has only been a few decades since we realised that stars continually form in the universe. To understand how stars form and develop, we need to observe regions where stars are just beginning to form. Stars need to be surrounded by lots of dust and gas to pull in enough matter to begin the nuclear fusion that will power them for the millions to billions of years of their lifetimes. The dust in these nebulae makes it very difficult to observe young stars with current telescopes, as it blocks ultraviolet and visible light. However, longer-wavelength infrared radiation can penetrate these massive clouds, so with the JWST we will be able to observe many more of these early stars. These range from protostars that don’t yet have the heat and pressure for fusion to begin, to young stars with circumstellar discs and planetesimals just clumping together. It’s these early stages that we know so little about, yet they are vital to understanding how planetary systems form.

Other Worlds

NASA’s science goals for the JWST have changed significantly since the mission’s inception. Back in the 1990s when it was proposed, the study of exoplanets was an incredibly new field of research. In the last few decades, observations, most notably by the Kepler space telescope, have vastly transformed our understanding of these distant worlds. Hot Jupiters, super-Earths, warm Neptunes: these are all types of planets we didn’t even know existed when the JWST mission began. Today we have a whole catalogue of exoplanets to focus our observations on, and almost a quarter of the scientific proposals accepted for the JWST’s first research cycle concern these other worlds. The opportunity we now have to carry out this research is an incredible silver lining to the massive delays the project has experienced over the years. When investigating exoplanets, the JWST can provide data on cloud formation, temperature, and weather patterns. This will allow scientists to better model these planets in 3D and understand the factors shaping their climates. The JWST will also search atmospheres for ‘biomarkers’, chemicals such as methane that we only know to be formed by biological processes. All these diverse aspects of exoplanet research are likely to trigger a paradigm shift in our understanding of these other worlds.

Final Thoughts

The JWST is so powerful that when it becomes fully operational, we will undoubtedly break new, revolutionary ground in the realm of astrophysics. From the beginning of our universe to the processes shaping planets and stars to this day, the James Webb Space Telescope will have a huge impact on scientific knowledge, a sure reward for the decades of hard work and meticulous planning that have gone into this extraordinary mission.
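As a rough worked example of the wavelength stretching described in the Early Universe section (the redshift figure below is an approximate illustrative value, not from the article):

```latex
% Cosmological redshift: observed wavelength = (1 + z) times emitted wavelength
\lambda_{\mathrm{obs}} = (1 + z)\,\lambda_{\mathrm{emit}}
% A star shining roughly 180 million years after the Big Bang sits at about z \approx 20,
% so hydrogen Lyman-alpha light emitted in the UV at 121.6 nm arrives as
% (1 + 20) \times 121.6\ \mathrm{nm} \approx 2.6\ \mu\mathrm{m},
% i.e. in the near-infrared band the JWST is designed to observe.
```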



On dosing the dozing

Neuroregeneration:

WHY IS IT HARD TO STUDY NAPPING?

how the birth of new neurons could

Artwork by Antonia Fern Scientists can easily enlist volunteers for investigating the short-term benefits of napping. In a controlled trial, one group takes a supervised nap while a second group stays awake. Judging by their respective performance at specific tasks measured afterwards, the researchers can draw conclusions about the impact of a nap on a skill or behaviour of interest. As soon as a risk is concerned, however, purposefully exposing people to the presumed risk factor becomes very difficult ethically. Furthermore, because the effects in question are long-term, an experimental trial would be logistically challenging. A convenient solution is an observational study. In a prospective cohort study, for example, a large group of participants who do not have a history of heart disease are asked about their napping habits. Several years later, the scientists follow them up and count how many heart attacks and strokes happened among those who napped and those who did not. This can uncover associations between napping and heart disease. There are a few key limitations to this methodology. Firstly, the researchers rely on self-reporting, which can be inaccurate. Behaviour is also unique, so working in broad categories can be inappropriate. Secondly, there is the issue of confounding factors. A higher age could make people nap more and increase their risk for heart disease, but this does not mean that napping causes heart disease. These factors, if known, can be adjusted for. What we cannot know is whether an important one was missed. Over the years, data from hundreds of thousands of people has been gathered to examine risks posed by napping. While there are still more questions than answers, we do know that short, infrequent naps can be beneficial for health, whereas long naps every day may be detrimental. The amount of sleep at night also matters. 
Naps can improve the health of people who sleep too little, but might do more harm than good for those who sleep enough.

I

t is early afternoon. You have just handed in your essay after staying up late writing it. It would be a stretch to say you are proud of it, but at least it is finished. Nothing would be easier now than to drift into a brief nap, some shut-eye to boost your productivity. You are not alone. Although quantifying it is difficult, and there are cultural differences, you can assume that half the population, give or take, are fellow nappers. There are plenty of good reasons to nap. Research over decades has corroborated folk wisdom of reduced sleepiness, improved concentration, motor skills, emotional regulation, and even athletic performance, to name just a few. Napping can also help alleviate symptoms of sleep deprivation in people who do not sleep enough at night. Sleep inertia, that feeling of grogginess when waking up, may initially cloud some of the positive effects, but usually not too much after just a short nap. Nonetheless, not all that is known about napping is rosy. Many studies have found it to be associated with cardio-vascular disease, diabetes, and overall mortality. Such an inconsistency between benefits and risks may seem puzzling, so it is instructive to consider what different types of study design are used.

10

The internet is full of dos and don’ts about napping. These articles largely reflect well the conclusions of a particular piece of research or a medical consensus. The different methods by which the conclusions are obtained, however, are not normally presented in detail, yet they can be crucial to whether any sort of advice is merited. We understand well the benefits of naps. We also understand how excessive napping is associated with a range of negative outcomes, but we do not know whether that is a matter of predisposition or cause and effect.

revolutionise neurology

H

istorically, neurology has excelled in diagnostics but not in therapeutics. Neurologists have demonstrated an incredible ability to precisely pinpoint lesions that are responsible for a vast range of neurological conditions, such as Alzheimer’s disease and Parkinson’s disease, yet had little to offer in terms of treatment. For most of the last century, neurology has been dominated by the view that we are born with all the neurons we will ever have and that once damage is done, it can never be repaired. Even Santiago Ramon y Cajal—the father of modern neuroscience—said, ‘everything may die, but nothing can be regenerated’. This position is beginning to shift. Evidence is accumulating that neurogenesis, the production of newborn neurons, can occur in certain parts of the mammalian brain, creating a flurry of research into its regenerative capacity. As our understanding of neuronal structure, function, and development has advanced, so too has the idea of ‘neuroregeneration’, and its full potential remains to be revealed. Pioneering work by Joseph Altman in the 1960s made the ground-breaking discovery that some regions of mature guinea pigs’ brains were still undergoing neurogenesis. By injecting newborn guinea pigs with tritiated thymidine, a molecule used by cells for DNA synthesis, which has been labelled with radioactive tritium, Altman was able to detect which cells had only just formed through division. These radiolabelled dividing cells were neurons, suggesting that neurogenesis is still taking place in postnatal mammals’ brains even after they have reached maturity. This exciting finding brought a question to the fore: can we stimulate neurogenesis to replace any neurons that have been lost due to damage or disease? Despite the major controversy that has surrounded neurogenesis for decades, we now have evidence showing that this idea is not very far-fetched. 
Researchers have been able to isolate neural stem cell precursors (cells capable of forming neurons in cultures) from the adult nervous system of mice, including the cerebral cortex and spinal cord. This is particularly revolutionary because adult neurogenesis is normally confined to highly

specific regions in the brain: the hippocampus, which plays a major role in learning and memory, and the olfactory bulb, which transmits smell information from the nose to the brain. It has been hypothesized that neurogenesis in adults is typically limited to these brain regions because only specific parts of the adult brain contain an appropriate environment with growth factors that support neurogenesis, a so-called neurogenic niche. This hypothesis sparked a wave of encouraging research into these factors. The hope is to induce a widespread neurogenic niche that could allow neuroregeneration in many more brain regions, potentially providing a novel therapeutic tool to treat neurological conditions, such as neurodegeneration. Research shows that neurogenesis can also be stimulated in adult rodent models by traumatic or ischemic injury—damage caused by restriction of blood flow—such as those caused by strokes. It can even be stimulated in areas such as the cerebral cortex and spinal cord, where neurogenesis does not ordinarily occur. Although such spontaneous injury-induced neurogenesis is insufficient in the restoration of function in humans (most stroke patients do not fully recover), the administration of growth factors has been shown to promote the formation of new neurons from precursors grown in cultures. Glial cells (the immune cells of the central nervous system) located in the retina and cerebral cortex, were also found to have the capacity to be reprogrammed to differentiate into neurons following injury, providing additional possibilities for neurogenesis. Nevertheless, experimental neurogenesis is currently at the preclinical stage of research, and it is debatable whether any novel interventions could be adapted for human use in the future. There is also still some ambiguity regarding whether adult neurogenesis takes place in humans in the first place. 
Consequently, adult neurogenesis remains clouded in controversy and requires ample further research—as Ramón y Cajal put it, ‘It is for the science of the future to change this harsh decree’.

Karolina Zvoníčková

This is not an advice piece, and I will not try to tell you whether the time it took to read this would have been better spent cosily recovering from your essay night with a nap. While science is not quite sure whether napping is good or bad overall, such a question is likely ill-phrased. What unequivocally matters, though, is enough sleep... so sweet dreams! Simon Litchtinger




Mimicking natural processes for urban regeneration

Regeneration is a buzzword used frequently by city councils around England to describe schemes which aim to transform urban areas into sustainable, circular, greener places to live. This aim of regeneration, however, is seen by scientists as a serious interdisciplinary challenge. As populations continue to rise and urbanisation expands, the green spaces we have left are increasingly reduced. Our cities’ air has become polluted, and our water resource systems stressed and placed under high demand. These challenges are exacerbated by the impacts and uncertainties of anthropogenic climate change, which has given rise to more extreme weather events—for example, the more frequent and severe floods seen across England in recent years.

Regeneration schemes already have to tackle multiple factors such as air pollution and poor river water quality, so the added consideration of flood risk criteria creates a complex challenge. Is it possible to target all these issues with one solution? As the regeneration of urban areas is now seen as an interdisciplinary problem with a multitude of interlinked issues, a flexible and adaptable solution is required. The complexities of poor urban air quality, river water quality, and flood risk mean that Nature-Based Solutions (NbS) in urban areas are becoming more common—but are NbS the adaptable and flexible solution we need?

NbS describe structures which mimic natural processes: constructed wetlands, constructed reedbeds, and specialised vegetation planting. To date, NbS such as constructed wetlands have primarily been implemented to treat wastewater and agricultural and industrial runoff. These runoff sources, known as “point-source pollution”, are more straightforward to design NbS for. Urban runoff, known as “diffuse pollution”, is harder to tackle as it contains a multitude of different pollution sources such as sewer overflow, drain water, rain runoff from rooftops, and rubbish dumped in the streets.
NbS are artificial systems formed with planting soils, vegetation, and specific plants which can retain water and treat pollutants. These constructed systems, mimicking natural processes, are now being used to absorb and treat urban runoff before it enters a city’s river (usually the main drinking water supply source!). There is also further potential for these systems to retain small flood peaks (reducing flood risk) and for the vegetation to absorb carbon dioxide from the air.

The River Thames Basin, infamous for raw sewage release scandals and known to be at high flood risk, is the target of multiple regeneration attempts. Many catchments of the Thames have been given ‘poor’ water quality status under the standards of the EU Water Framework Directive. In response to this failure to meet standards, catchment areas such as Salmon’s Brook and Pymmes Brook in London have begun to undergo regeneration through the installation of NbS (under the Healthy River Challenge run by Enfield Council). In Salmon’s Brook, NbS structures have been installed to treat the raw sewage which enters the rainfall drainage system via historically misconnected sewers before it joins the brook and, eventually, the Thames basin. Historically misconnected sewers are a problem across London: years ago, sewer pipes and underground rainfall drainage systems were accidentally misconnected and now contain leaks, resulting in sewage contaminating the surface water drainage pipes which empty into water sources such as the River Thames. The NbS approach being used to tackle this has been deemed successful: a 2016 Environment Agency report presented results showing that the NbS reduced pollutants such as ammonia, nitrogen, and phosphate. However, there has been no follow-up study of water quality improvements over the long term, and no flood risk or air pollution reduction testing.

Whilst NbS may seem like an easy, natural, solve-all fix for many councils, amongst the scientific community there is concern over the lack of scientific and engineering standard testing of these structures. There is no design manual or handbook for different urban scenarios like there is for hard concrete structures such as flood defences, and there is not enough information on their lifespan, their structural resistance to different flood peaks, or their ability to remove pollutants and retain them over long periods of time. There is also the concern that, during severe flood events, the pollutants retained in the NbS may be leached back out into the environment.

NbS do offer an exciting opportunity for regenerating our urban spaces and making them more in tune with the natural environment; however, they require further research and careful implementation. For NbS to one day transform urban areas into more sustainable, circular, and green places to live, intensive research into their optimum design for different city scenarios is needed.

E.M. Ford

Artwork by Tanmayee Deshprabhu




Listen to your gut!

Dating back to Hippocrates’ teachings on the benefits of fibre-rich diets in 430 B.C., eating habits have traditionally been connected to health. What have we learned since then about diets, our gut, and its microbial inhabitants? In the 19th century, German paediatrician Theodor Escherich consolidated the study of the human gut microbiome; today the bacterium Escherichia coli is named after him. Henry Tissier studied the administration of beneficial bacteria in the human gut, and Ilya Metchnikov investigated the merits of lactic acid bacteria in fermented milk for a healthy life. Alfred Nissle isolated a bacterial strain, E. coli Nissle 1917, which he found antagonized the growth of harmful bacteria. Since the landmark publications of the Human Microbiome Project in 2012, scientists have been systematically investigating the gut: our gastrointestinal (GI) tract harbours approximately 100 trillion microorganisms, a number once estimated to exceed that of human cells tenfold (more recent estimates put the ratio closer to one-to-one). Our gut microbiome—a complex and dynamic population—consists of bacteria, yeasts, fungi, archaea, and viruses. We live in symbiosis with these microbiota, and a balance of “good” and “bad” organisms is crucial for our health.

The first weeks of life are of paramount importance in the formation of the microbiome. After birth the GI tract is rapidly colonized although, interestingly, microorganisms can already be found in the placenta pre-birth. Whether the vaginal flora influences the newborn’s microbiome during birth is currently being debated; however, it is known that newborns delivered by Caesarean section possess a less diverse microbiome. This is typically attributed to the sterile hospital environment, and by nine months of age the differences in the microbiome are negligible. The earliest microbiome populations are dominated by Bifidobacteria, which are specialized in the digestion of human milk sugars, a fact which highlights the symbiotic co-existence of bacteria and their human host.
By around age two and a half, the human microbiome resembles that of an adult in terms of its composition and function, notably in processes such as food degradation, vitamin synthesis, lipid metabolism, maintenance of the intestinal barrier, and the suppression of harmful microbiota.

Artwork by Daniel Coneyworth


Throughout our lives, factors such as diet, illness, or antibiotic treatment continually disturb the dynamic population of our gut. In adults, the ratio of Bacteroidetes (responsible for polysaccharide metabolism) to Firmicutes (involved in lipid metabolism) can be utilized as an indicator of microbiome and host health. Over 75% of the microbiome is formed by a stable core population of microbiota; an imbalance of this core, called dysbiosis, can harm the intestinal barrier and enhance inflammatory processes. Detrimental imbalances are thought to contribute to ageing as well as diabetes, cancers, and the pathogenesis of cardiovascular and neurodegenerative diseases. A significant decrease in microbial diversity has been found in aged individuals, a state which offers harmful species the opportunity to prevail. These changes are connected not only to mobility, diet, previous illness, and medication, but also to the individual’s genetics and geographical location.

The intestinal barrier is vital to the co-existence of host and microbiome, as it separates the two whilst maintaining a state of “immuno-acceptance”. Immune receptors in our gut detect microbial motifs and can, if needed, adjust the production of antimicrobial substances and induce inflammation. Recurring inflammation is harmful to the organism and leads to deterioration of the gut lining. As a consequence, the leakage of microbes and their products into the bloodstream can activate the systemic immune system.

With our ageing population, an understanding of Alzheimer’s disease is becoming more and more necessary. Researchers have examined the link between inflammation caused by microbiota and the formation in the brain of the amyloid plaques implicated in the development of dementia. They found that the gut microbiome influences the immune system, and through it the nervous system, via bacterial products. Certain microbial markers, such as lipopolysaccharides, which are anchored in bacterial membranes, were associated with the quantity of amyloid plaque in brain tissue. Current work focusses on identifying the bacteria involved, which could open up the possibility of preventative strategies.

Antibiotics are indispensable life savers and facilitate recovery from common infections. Nevertheless, most antibiotics are so-called ‘broad-spectrum’ antibiotics, which work more like a sledgehammer than a precision tool and tend to harm our whole gut microbiome.
Clostridium difficile is a bacterium populating the healthy human GI tract: under prolonged antibiotic treatment, this harmful species can gain the upper hand and excrete noxious toxins which damage the intestinal barrier and cause diarrhoea or more serious complications such as sepsis. It was long assumed that the healthy gut microbiome fully recovers after a course of antibiotics, but under prolonged drug treatment the initial microbiome composition might not be re-established. Pre-biotics (fibres that cannot be digested by the host but serve as nutrients for bacteria in the lower GI tract) and pro-biotics (formulations containing presumably beneficial bacteria for a healthy gut microbiome) are being investigated as preventive or supportive measures.

A healthy gut microbiome—what is that? In 2012, the International Life Sciences Institute North American Microbiome Committee commissioned academic, government, and industry experts to review this question. They concluded that a healthy microbiome population cannot be easily defined, but that it is, on the whole, more resilient to perturbations. The distribution of certain microbial species may elevate the risk of infections and diseases, although it is as yet unknown whether dysbiosis is a cause or a consequence of illness. Overall, high diversity in the microbiome is associated with health, and poor diversity with disease.

Our lifestyle and diet directly influence the gut microbiome and its metabolic end-products. Diets high in salt and refined carbohydrates, coupled with low fibre intake, are presumed to lead to a decrease in microbial diversity. Food type, quality, and quantity appear to be important for a healthy microbiome, as does physical exercise. Foods that are especially rich in fibre include fruits, vegetables, and grains. Despite the recommendation of around 30g of fibre per day for an adult, the average Western diet contains around 20g daily. If the fibre content of a banana is around 2g per 100g, then with an average weight of 118g per banana one would need to eat around 13 bananas to reach 30g of fibre—and plenty of water for sufficient hydration. Fluctuations in our microbiome due to diet can be reversed.

For pre-biotics, no causal connection to microbiome and host health can be made at this point, as fibre may be effective independently of an individual’s microbiome composition. For pro-biotics, mostly transient persistence of the pro-biotic formulation in the GI tract has been shown. The prevalent microbiome is known to be “colonisation resistant” against incoming, unfamiliar microbiota. Nonetheless, administered pro-biotics could influence the commensal bacterial species indirectly by modulating their transcriptional activity.

All in all, we still know little about our gut microbiome, and we are just beginning to understand the complex interplay between microbiome and host health. Ongoing studies are aiming to define a healthy microbiome and its biomarkers, whilst examinations of fibre, pre-, and pro-biotics are trying to elucidate the mechanistic link between microbiome and host. Furthermore, modern studies aim to understand the GI tract as a whole by investigating the mutual influence of genes, proteins, and metabolites of gut bacteria. The microbiome remains a fascinating field of research, and all evidence points toward the need to listen to our gut.

Elisabeth Mira Rothweiler



Moral agency and modern medicine: Playing God or doing good?

In recent years, regeneration has permeated the medical field. Regeneration is defined as a form of repair and renewal; at its core, however, it embodies the resilience to survive and the extension of life. This year, a team from the University of Maryland School of Medicine performed a pig-to-human organ transplant, one of the medical field’s most significant breakthroughs. The procedure marks an astounding first: a pig organ transplanted into a human capable of surviving and recovering (in 2021, kidneys from pigs were transplanted into two legally dead individuals who were supported by ventilators). This revolutionary operation is an example of xenotransplantation, the receipt of cells, tissues, or organs from a different species. In the 1990s, xenotransplantation became an intriguing conversation point, suggested as a way to circumvent the shortage of human allografts, or graft materials, readily available for transplantation. In recent years, the medical field has witnessed several defining moments for this technique, from baboon bone marrow to porcine liver-assist devices. However, full organ xenotransplantation clinical research trials have not yet begun, and procedures are rarely approved by regulating boards. The University of Maryland School of Medicine case was an anomaly, as the U.S. Food and Drug Administration granted an Emergency Use Authorization (EUA) for the procedure to move forward. EUAs are granted sparingly, but they let doctors use experimental techniques and treatment measures as a last resort if it means saving a life.

David Bennett, a 57-year-old with heart disease, was ineligible to receive a conventional organ transplant due to his terminal diagnosis; an experimental procedure was the only option left. On January 7 2022, Bennett became the first person to receive a genetically modified pig heart. Today, over a month since the operation, Bennett is still alive.

Many find the idea of having a pig heart disturbing. Others can barely wrap their heads around the fact that this is feasible with modern technology. Why a pig? Why not an ape, cow, or moose even? Well, a vital point in organ transplantation is that the human body’s natural immune response sometimes leads to rejection. Therefore, in considering xenotransplantation, scientists must prioritize finding a species that can be genetically altered to reduce the likelihood of a human rejecting the organ. It turns out pigs are the ideal candidate; their heart valves have been used in humans for years, and early insulin came from pigs too. In the human body, the most sensitive antigen triggering the rejection of tissues from non-human animals is called galactose-α1,3-galactose (Gal). To combat this immune response, genetically modified pigs which no longer express Gal (α1,3-galactosyltransferase gene-knockout pigs) were produced in 2003. This modification is deemed the fundamental platform for introducing further genetic alterations into the pig genome. However, arguably the most significant contributor to this procedure’s success is CRISPR-Cas9. Developed in 2012, CRISPR-Cas9 technology allows geneticists and medical professionals to edit parts of the genome by changing, adding, or removing sections of DNA. It has been lauded as one of the most revolutionary scientific innovations in history, and its pioneers were awarded the 2020 Nobel Prize in Chemistry for their discovery. Without this technology, Bennett’s operation would not have been possible; the pig from which Bennett’s heart came underwent ten changes to its genome, including the insertion of six human genes, all made possible through CRISPR-Cas9.

Artwork by Peyton Cherry

Bennett’s pig heart has been met with praise but also criticism. Procedures made possible by CRISPR-Cas9, like this transplant, have been adjudged an enormous threat, and rightfully so. From a bioethical standpoint, events like this pose many problems—from animal rights issues to concerns surrounding “playing God”. Some people argue that groundbreaking procedures like these cross the moral and ethical line. Conversely, this sort of pioneering work also offers hope for the future. Advances in modern medicine have saved countless lives, and professionals have become so skilled in therapeutic areas that extending life is no longer a hopeful fantasy. From gunshot wounds to organ failure, death is no longer assured, and many would argue that regenerative medicine’s present and future benefits are too great to be dismissed. Those who believe this tend to point to the serious shortfall of organs currently available: in the UK alone, 470 individuals died last year waiting to receive an organ transplant. Others go even further, suggesting that there is almost no aspect of modern life that is not some form of manipulation of the environment for human gain.

Xenotransplantation and similar procedures are certainly thought-provoking concepts. Permitting xenotransplantation on a larger scale would provide professionals with a new supply of viable organs for transplant and save many lives. However, advanced technology like this does not come risk-free; it requires heavy debate, regulation, and, if necessary, termination. It is too soon to know whether Bennett’s operation can be deemed a success, but the world must be ready to face the possibilities it presents.

Isabella Giaquinta

Artwork by Matthew Kurnia



Artwork by Asia Hoile

Radiotherapy, Regeneration, and Rams How an accidental discovery revolutionised cancer therapy

It’s almost impossible to imagine where cancer treatment would be now were it not for the revolutionary discovery of fractionation in the late 1920s. Struck by the possibilities of newly discovered x-rays, two French scientists, Regaud and Ferroux, set about a new experiment to sterilise rams using testicular x-ray irradiation. Their efforts, however, were hampered by severe skin reactions arising from the delivery of an extremely high single dose. They later found, by accident, that splitting the overall dose into smaller blocks, or fractions, both reduced the skin reactions and still permitted sterilisation. This discovery completely changed the course of cancer treatment, paving the way for radiotherapy to become one of the most prevalent and successful treatment methods.

Radiotherapy bombards tumour cells with ionising radiation, aiming to generate sufficient DNA damage to prevent replication. Whilst it may seem intuitive that blasting a tumour with as much radiation as possible would be the optimal approach, the skin and normal tissue surrounding the tumour are also exposed, causing extreme burning and blistering of the skin. However, normal tissue cells repair more quickly than tumour cells after radiation-induced damage, owing to the ability of their stem cells to repair, self-renew, and differentiate rapidly to regain function. Conventional treatment schedules deliver five small fractions per week for 4–7 weeks, which allows almost complete repair of damaged normal tissue between fractions. This explains exactly why fractionation is so effective: the inter-fraction time is insufficient for tumour cells to regenerate completely, allowing damage to accumulate before delivery of the next fraction. Once this method was acknowledged to be superior, fractionation regimes became the focus of major development.
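The trade-off described above, between fraction size and normal tissue repair, is conventionally quantified with the linear-quadratic model of radiobiology. The article does not name this model, so the following is a standard textbook sketch rather than the authors’ own analysis:

```latex
% Linear-quadratic model: fraction of cells surviving a single dose d,
% where \alpha and \beta are tissue-specific radiosensitivity parameters
S(d) = e^{-\left(\alpha d + \beta d^{2}\right)}

% Biologically effective dose of a schedule of n fractions of size d
\mathrm{BED} = n\,d \left( 1 + \frac{d}{\alpha/\beta} \right)
```

Because the quadratic term grows with the dose per fraction, tissues with a low α/β ratio (commonly quoted as around 3 Gy for late-responding tissues, versus around 10 Gy for early-responding tissues and many tumours) are disproportionately damaged by large fractions and disproportionately spared by small ones. Tumours that themselves behave like late-responding tissue, such as the breast and prostate cancers mentioned below, are correspondingly more sensitive to larger fractions, which is the standard rationale for hypofractionated schedules.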
Schedule alterations were introduced to account for variation in tumour type, irradiation volume, amount of normal tissue included, and the radiotherapy tolerance of that tissue. Many tissues have been categorised as early-responding tissues, exhibiting radiation-induced side effects less than 90 days post-treatment through rapid stem cell regeneration and tissue repair. Meanwhile, late-responding tissues generally display toxicity after this time, an issue which progressively worsens as the stem cell supply is decimated. By varying the dose per fraction, overall treatment time, and dose rate (amount of dose received per unit of time), these characteristics can be exploited to find an ideal treatment regime. For example, breast and prostate cancers fall into the late-responding tissue group and therefore respond positively to hypofractionation, a method which uses higher doses per fraction with longer inter-fraction periods to allow sufficient normal tissue repair.

The search for conformal radiotherapy treatments which might target a higher proportion of tumour cells has surged through enhanced medical technology developments, facilitating breakthroughs for the next generation of cancer treatments. One of Europe’s leading cancer centres, The Christie in Manchester, completed its first successful radiotherapy course for a head and neck cancer patient in late 2021 using the world’s first MRI-linac treatment. This machine not only captures high-quality images of the tumour shape before each dose (for precise tumour targeting), but also adapts the beam shape to respond to tumour changes or internal movements generated by breathing. The accurate targeting of this technique decreases the amount of normal tissue damage, requiring minimal stem cell regeneration, which in turn greatly reduces the toxicity experienced from stem cell depletion. Furthermore, radioisotopes such as radium-223 are being investigated as a targeted radiotherapy modality for advanced cancers. This procedure involves the administration of a radioisotope which can accumulate at the tumour site, for instance by being tagged to an antibody. Within the decay chain of radium-223, high-energy alpha-particles are emitted with very short ranges, permitting the delivery of large, targeted doses on their release. With toxicity to surrounding normal tissue and skin thereby reduced, the proportion of normal tissue damaged is minimal, as is the level of regeneration required.
This is just the tip of the iceberg in terms of new scientific breakthroughs in cancer treatment, but the future of radiotherapy, consisting of highly conformal, precise treatments aiming to minimise the proportion of normal tissue damaged, certainly seems promising. The progress over the century since Regaud and Ferroux’s breakthrough points to further novel and innovative fractionated treatments with a view to prolonging patient survival and mitigating unpleasant side effects. This progress has propelled two priorities into clear focus: the reduction of normal tissue damage and the encouragement of regeneration are the way forward for radiotherapy treatment.

Emma Durkin


AI FOSSIL HUNTERS

For many, the image that springs to mind when thinking of archaeologists, palaeontologists, and paleoanthropologists is the action-packed life of Indiana Jones or Dr Alan Grant from Jurassic Park. The reality of life in these fields is often far less fast-paced. Paleoanthropologists spend a vast amount of time sat uncomfortably, exposed to the elements, meticulously excavating field sites in search of the next big fossil. Sometimes, this hard work pays off. Last year, many exciting fossil finds were widely publicised and gained public attention, such as the discovery of the ‘Dragon Man’ in China and the Homo fossils from Nesher Ramla, Israel. Both of these fossils have rewritten the ever-evolving story of human origins. As for the allure of finding the last common ancestor of humans and chimpanzees, the hunt is still proving difficult.

The primary puzzle, though, is identifying where to stick the trowels in first—a challenge for anyone whose research relies on finding rare objects, whether human remains or ancient artefacts. Traditionally, the selection of the initial digging point was informed by surveys done by the researchers’ professional predecessors, or by their personal knowledge of the landscape. This, however, is very time intensive. In recent years, machine learning techniques have been designed as part of a concerted effort to update fossil discovery techniques, particularly in previously unexplored places. Machine learning is a branch of artificial intelligence (AI) which allows a computer programme to learn automatically from past data—and so far, this automation has been deployed with very promising results. For example, in 2012, an AI model was successfully developed and deployed in the hunt for vertebrate fossil sites in Wyoming, USA. Similarly, in 2014, an unsupervised machine learning technique called iso-clustering was successfully applied to fossil hunting in Utah.
Iso-clustering, short for “iterative self-organising clustering”, can predict promising locations based on information from just one or two known fossil sites—perfect for challenging research areas. The methodological progress of using unsupervised machine learning in the pursuit of fossils was recently pushed a step further. Research published in 2021 by Oxford researchers Professor Susana Carvalho, DPhil candidate João d’Oliveira Coelho, and collaborator Professor Robert Anemone (The University of North Carolina at Greensboro) applied the k-means algorithm to satellite images of Gorongosa National Park, Mozambique. This was the first study to apply this relatively simple algorithm to the search for new fossil sites. The k-means algorithm works by creating clusters of similar pixels within an image—here, satellite imagery of previously unexplored woodland. These clusters are then classified into landscapes with or without fossil-bearing sites. The model can then be tested by comparing its results to the “ground truth”: human-led checks of the model-predicted clusters for fossils. This method can handle a vast amount of information, and so has the potential to be scaled to huge projects where the first step is locating fossils.

Although still in its infancy, this approach can accurately locate areas of interest and has already assisted in the discovery of four new field sites in the scientifically important location of Gorongosa National Park. Finding new fossils in this location is of particular interest as the East African Rift System, spanning from Ethiopia to Mozambique, was central to early human evolution. Many important fossils and artefacts, including stone tools and Ardipithecus, Australopithecus, and Homo skeletal remains, have been uncovered in Tanzania and Ethiopia. However, the southern region of the Rift System has been largely neglected in paleoanthropological research because its densely vegetated landscape makes fossil discovery harder. Applying the automated technique has allowed researchers to access fossil sites in these new and hard-to-reach locations, and to conduct research addressing current evolutionary questions.
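The clustering step described above can be illustrated with a minimal from-scratch k-means in Python. The three “spectral band” values and the two synthetic land-cover types below are invented for the demonstration; they are not taken from the Gorongosa study, which used real satellite imagery and a subsequent classification step.

```python
import numpy as np

def kmeans(pixels, k, n_iter=50, seed=0):
    """Minimal k-means: group pixel feature vectors into k clusters.

    pixels: (n, d) array, e.g. one row of d spectral-band values per pixel.
    Returns (centroids, labels), where labels[i] is the cluster of pixel i.
    """
    rng = np.random.default_rng(seed)
    # Initialise centroids from k randomly chosen pixels.
    centroids = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each pixel to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(pixels[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned pixels.
        new_centroids = np.array([
            pixels[labels == i].mean(axis=0) if np.any(labels == i) else centroids[i]
            for i in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break  # converged: assignments no longer change
        centroids = new_centroids
    return centroids, labels

# Toy demonstration: two made-up "land cover" types with distinct band values.
rng = np.random.default_rng(1)
woodland = rng.normal([0.2, 0.6, 0.3], 0.05, size=(100, 3))
sediment = rng.normal([0.7, 0.5, 0.4], 0.05, size=(100, 3))
pixels = np.vstack([woodland, sediment])
centroids, labels = kmeans(pixels, k=2)
```

In the published pipeline, the analogous step runs over real satellite pixels; each resulting cluster is then compared against ground-truthed fossil sites to decide which landscape types are worth surveying.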
This k-means algorithm, and the use of artificial intelligence more generally, has huge potential for revealing fossils in locations paleoanthropologists may never have reached relying on previous site-knowledge alone, allowing researchers to spend less time searching and more time digging. The automation of fossil hunting techniques is launching the study of our ancient past right into the future.

Sophie Berdugo




The regeneration of the Chemical Weapons Convention

In March 2018, Sergei and Yulia Skripal were the victims of a chemical weapons attack whilst sat, unsuspecting, on a park bench in Salisbury, England. The incident was quickly classified as an attempted assassination, and whilst the Skripals survived, an innocent British citizen, Dawn Sturgess, later lost her life, and the impact on the local community was overwhelming. The substance used in the attack was identified as a variant of the Novichok family of compounds, a set of chemicals first synthesized by the Soviet Union during the Cold War. Most known information about Novichoks comes from Vil Mirzayanov, a Russian scientist-turned-whistleblower, in his book State Secrets.

Though the use of Novichoks (or any chemical) as a weapon is restricted by the Chemical Weapons Convention (CWC), only some specific chemicals are subject to stringent detection and verification methods. Until 2020, Novichoks were not listed under any of the Schedules of the CWC, and thus were not subject to detection when inspections were carried out by the Organisation for the Prohibition of Chemical Weapons (OPCW). This is despite the OPCW becoming aware of the existence of Novichoks as early as 2011; the Salisbury attack prompted a change. Within a year of the incident, two proposals were submitted to amend the Schedules of the CWC, adding compounds to the list of chemicals subject to detection and verification methods. The first, a joint proposal submitted by the USA, Canada, and the Netherlands, listed groups of chemicals solely related to the Novichok families. Information on these groups of alkylphosphonamidofluoridates and alkylphosphoramidofluoridates was taken from Mirzayanov’s book. The scope of this proposal covered most known variants of Novichok, including A-234, the substance used in Salisbury, though not all were listed. The second proposal was from Russia and contained five suggestions.
The first and second of these were similar to those of the joint proposal, but much more restrictive regarding the alkyl groups connected to the main organophosphate backbone of the chemical—only methyl and ethyl groups were covered. Suggestion three was one specific molecule, A-242, which is similar in structure to those already mentioned, but with a guanidine substituent.

Suggestion four, curiously, detailed sets of carbamate chemicals, which are completely unrelated to the Novichok compound used in Salisbury. Carbamates in general are compounds with many different legitimate uses, including as muscle relaxants and insecticides. Nevertheless, the specific examples given in the Russian proposal correspond to those patented by the US Army in the 1960s as chemical weapons. A question remains as to why the Russians decided to suggest carbamates as an amendment. Perhaps politics, as usual, was a major driving force, rather than science itself. If Novichoks, which were almost certainly synthesized and used by the Russians, were proposed for restriction, why not try to target the US arsenal of chemicals in retaliation?

The fifth and final item of the Russian proposal was once again a series of Novichok compounds; however, it was a theoretical series proposed by D. Hank Ellison in The Handbook of Chemical and Biological Warfare Agents. There was little evidence behind this general structure, and as such, in early 2019 the OPCW recommended the Russian proposal for rejection, owing to a lack of consensus on the fifth item. Russia subsequently submitted a modified proposal, which was adopted owing to the omission of the problematic fifth article. As of July 2020, the CWC lists suggestions one and two of the joint proposal (Mirzayanov's Novichoks), and suggestions three and four of the Russian proposal (Novichok A-242 and the families of carbamates). Overall, Novichoks were added to the Chemical Weapons Convention only after a blatant attack on British soil. Nevertheless, the process of this amendment has raised multiple questions about the inner workings of the CWC.

Firstly, one known Novichok variant remains unrestricted. Similar in structure to A-242, Novichok A-262 is widely speculated to be the compound used to poison Russian opposition politician Alexei Navalny in August 2020. Why was A-262 seemingly not even considered for addition to the CWC, and should it be now? Reliability of sources is also a critical component of decision-making for the OPCW. Item five of the original Russian proposal was deemed too speculative to be included, with not enough factual evidence backing the existence and toxicity of the substances suggested. Nonetheless, speculation should be taken seriously in many cases, especially when lives are potentially on the line. Thorough research could be conducted into rumoured compounds, and data compiled to assess the toxicity and viability of such compounds as chemical weapons. Scientific fact ought to be recognised and applied to political and legislative decisions. Another important point to consider is that the mode of chemical warfare has evolved since the use of mustard gas in World War I. While events in Syria in recent years have highlighted the continued use of chemical weapons in large-scale battlefield situations, smaller and more targeted attacks have come to light: Salisbury, the poisoning of Navalny, and the assassination of Kim Jong-nam in 2017 are all examples. In an age where a single inconspicuous vial of colourless liquid holds the power to kill thousands, detection and verification methods are under extreme pressure to deliver. Finally, just how much information should be contained within the CWC? No precursors to Novichok compounds are listed in the document, despite a sub-section being dedicated to substances that can be used to directly synthesize chemical weapons. If precursors were listed in the open-source legislation, would they run the risk of misuse by intelligent but ill-meaning persons who could use such information to create the restricted compounds? This is one of the reasons why Novichoks were not added to the CWC back in 2011. Nonetheless, in today's digitalised world, where so much information is at our fingertips, is this still a valid approach to take?

The 2020 amendment, the first since the entry into force of the CWC in 1997, not only remedies certain issues but also brings new ones to light. It is important to use this opportunity to learn and to adapt our legislative efforts. If we fail to start acting pre-emptively, rather than reactively, we run the risk of more devastating attacks further down the line: though the Salisbury attack had the potential to kill thousands, its one fatality is still one too many. We must work to fulfil the original intentions of the Chemical Weapons Convention: to completely eradicate the use of chemical weapons, 'for the sake of all mankind'.

Tamara Gibbons

Artwork by Peyton Cherry

The Regeneration of the Chemical Weapons Convention



the Oxford Scientist

Regeneration

Sleep: The Underappreciated Therapist

Harrison France awakens to the benefits of adequate rest

Artwork by Bianca Rasmussen

Oxford's culture can make the recommended eight hours of sleep a night hard to reach. When asked for her opinion on rest, the ex-president of Brasenose JCR İrem Kaki simply laughed before joking, 'sleep is for the weak'. We may all think we know the importance of a good night's slumber, but why do we need sleep? And why might getting an early night be more productive in the long run?

Deep Slumber

Sleep is characterised by two main stages. The first phase is non-rapid eye movement (NREM) sleep, considered to be a restorative period for the mind. In the brain, activity slows down, fluid flow increases, and harmful substances are washed out into the blood for breakdown elsewhere. Researchers have observed these changes by infusing fluorescent proteins into the brains of mice and measuring how long it takes for the glow to decrease. Such studies indicate that substances around neurons are cleared around twice as fast in sleeping mice. As the molecules eliminated include the harmful misfolded proteins β-amyloid and α-synuclein, the respective drivers of Alzheimer's and Parkinson's disease, sleep may have protective effects against these disorders.

More recently, scientists have started to move away from the brain-centric model of sleep. Overall, we see significant changes to energy consumption during slumber, with a decrease in the use of nutrients and increased production of new proteins and cellular components. The body enters an "anabolic state", a primarily regenerative environment promoting tissue repair, driven by signalling molecules such as growth hormone. Studies have shown that sleep deprivation is associated with reduced healing, delaying the repair of ulcers and increasing self-reported muscle fatigue after exercise, due to a combination of changes in signalling and immune function. Sleep also helps the immune system to "remember" germs and prevent infection, which further explains how it aids in wound repair.
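The mouse-imaging result above can be pictured as exponential decay of the fluorescent signal: clearance "twice as fast" corresponds to a decay constant twice as large, which halves the signal's half-life. A minimal sketch, using illustrative rate constants rather than measured values:

```python
import math

def half_life(k):
    """Half-life of an exponentially decaying signal s(t) = s0 * exp(-k * t)."""
    return math.log(2) / k

k_awake = 0.5           # hypothetical clearance rate (per hour) while awake
k_asleep = 2 * k_awake  # cleared roughly twice as fast during NREM sleep

print(half_life(k_awake))   # time (hours) for the glow to halve while awake
print(half_life(k_asleep))  # half that time during sleep
```

Doubling the clearance rate is exactly what "twice as fast" means here: the same fraction of the fluorescent tracer disappears in half the time.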


The Strange World of Dreams

Beyond the physical processes, sleep also has a role in our ability to think and generate memories. The second stage, rapid eye movement (REM) sleep, returns brain activity to near-waking levels and allows us to experience dreams. Throughout the night, we repeatedly transition between NREM and REM sleep based on our individual needs. Even so, following sleep deprivation, the physical restoration of NREM appears to be preferred, with brain waves remaining in this slow state. Though beneficial, the cost of prioritising NREM is a reduction in "dreaming" REM sleep, which curtails its emotional and cognitive merits. Dreams are thought to act as an amalgamation of recent events, interpreted without the constraint of the brain's prefrontal cortex, which controls planning, decision making, and goal-oriented behaviour. In dreams, REM sleep enables the brain to enter a hyperassociative state, improving pattern recognition. Selectively waking individuals during REM periods and immediately testing them with anagrams has been used to quantify REM-associated problem solving. Compared with waking from NREM sleep, the number of anagrams solved in 80 seconds increased by 32%, suggesting improved pattern identification in REM. Nevertheless, caution is needed in interpreting these data, as waking from REM only returned performance to the level seen in rested, awake subjects, rather than exceeding it as might be expected. This unanticipated result may be explained by differences in brain activity: while asleep, decreased function of the dorsolateral prefrontal cortex (an area associated with problem-solving while awake) should predict a drop in performance, so the observation that performance is retained suggests alternative areas activate to compensate.

Too Many Waking Nights?

Emotional stress, impaired memory, and lapses in concentration are all common effects of short-term sleep deprivation. Indeed, moderate-to-severe tiredness is implicated in 16% of all road traffic accidents, and loss of sleep can be an inducing factor for mental health problems in susceptible individuals, so the consequences are serious. In the long term, insufficient sleep has an even broader range of impacts on body and mind. Risks of cardiovascular diseases, including high blood pressure and heart attacks, are increased, and studies show that extended waking reduces the influence of the hormone insulin, impairing blood sugar control. This disturbance may explain the increased incidence of diabetes mellitus, a disorder involving poor glucose regulation, and of obesity in chronically sleep-deprived groups. Nevertheless, such effects require years of sleep loss, hence are not a cause for alarm for most of the population.

Just Keep Swimming

It isn't just humans who need sleep—virtually all species have comparable periods of rest.
The need for sleep can be a problem for marine mammals, however, who must regularly surface for air. Rather than forgo sleep entirely, animals such as dolphins have evolved an alternative strategy, further highlighting the importance of this rest. Recordings of brain activity reveal that dolphins, though they typically enter an inactive state, undergo NREM sleep one hemisphere at a time. The other side remains awake, but with reduced levels of activity, allowing the animal to watch for predators and ascend for breaths as necessary. After approximately two hours, the hemispheres swap roles, with the process continuing until the necessary rest has been achieved.

Out Like a Light

So, knowing the benefits of sleep and the dangers of missing it, how can we make sleeping easier? One of the biggest contributors to good sleep is a regular routine. Humans function with a 24-hour biological rhythm, which is well entrained within the brain. Light signals throughout the day, especially short-wavelength blue light and the changes in brightness occurring at dawn and dusk, help to keep the rhythm in phase, such that the influence of accumulating sleep-promoting chemicals in the brain is greatest at night. With a regular sleep schedule, the brain becomes better at matching bedtime and tiredness, decreasing the time it takes to nod off.

While blue light is known to be important, and is frequently cited by the media as a major cause of reduced sleep quality, the evidence that light from phones, laptops, and other devices can delay sleep is more limited. Controlled trials do suggest a relationship, but the delay from these forms of light is only around seven to ten minutes, which is biologically insignificant. Nonetheless, in individuals prone to sleep deprivation this may still have an effect. Blue light is capable of delaying melatonin, a hormone that helps prime the brain for entry to NREM sleep, for longer periods (~90 minutes), so if a person is already vulnerable to sleep disruption, light exposure may be the catalyst for insomnia. For the wider public, it may be the content of the device that opposes sleep initiation, rather than the light. Exposure to technology keeps the brain engaged and active, in contrast to the relaxed, restful state optimal for slumber.

Avoiding drugs and alcohol is another common recommendation. While alcohol may accelerate the onset of sleep, it has dramatic effects on the depth and quality of sleep cycles and can abolish REM sleep. Without this mentally restorative period in the night, it is common to remain tired on waking and to experience reductions in concentration. Alcohol-influenced sleep can be considered a form of sedation which lacks the majority of the benefits mentioned previously.

A Final Goodnight

It can be easy to neglect sleep, especially with Oxford's barriers to healthy sleep habits, but the benefits are hard to ignore. Adequate sleep helps our bodies and minds to grow and regenerate, providing a myriad of effects that improve our function. Sometimes, it might be worth considering missing that essay deadline, or avoiding that night out, in favour of a good kip. And if anyone complains, you can just show them this article.




How Technological and Scientific Advancements can help Ensure COVID-19 is the Last Pandemic

The SARS-CoV-2/COVID-19 pandemic has been the defining event of this generation. The virus has taken the lives of at least 5.6 million people, disrupted the education of nearly all school-aged children around the world, and dramatically reduced global economic stability. All of these will have lasting effects on the world through the next decade and beyond. As we begin to enter a stage of "living with the virus", individuals are rightly looking toward the next, inevitable pandemic and asking whether we are prepared or whether we will once again be caught unawares. It has become popular on social media to state that the world is just as (or even more) unprepared for the next pandemic. This is wrong. Yes, there are legitimate discussions around the erosion of public trust in governance and the rise of the anti-vax movement, but this discounts the astounding progress made by the scientific community during this period. The technological advancements wrought and lessons learned from the SARS-CoV-2 pandemic have already placed the world in a significantly better state to take on a novel biological threat.

Diagnostics and Sequencing

While I'm sure many readers would happily go the rest of their lives without performing another lateral flow test, the expansion of at-home diagnostic testing is here to stay. Increased investment in these technologies, and a more general improvement in understanding of the situations in which certain diagnostics (e.g., lateral flow, PCR, LAMP) are most useful, will lead to a revolution in the diagnosis of infectious diseases and broad community surveillance. Beyond the development of specific tests, better understanding of sample stability, medium (i.e. saliva versus nasal swabs), and combinations of old testing technologies with novel biochemistry (e.g. CRISPR-based nucleic acid detection) will allow more accurate tests to be developed for specialist labs more quickly.

"Sequencing" is the process whereby the genetic code of an organism—be it human, frog, or virus—is transformed into a string of letters through a variety of biochemical processes. Sequencing technologies can turn the contents of a clinical sample into human- and machine-readable code that allows scientists to gain a great deal of information about the pathogens inside. One issue with most first- and second-generation sequencing technologies, however, is that they require a priori knowledge of the pathogen targeted. While this information can come from the use of traditional diagnostics, such approaches are impractical for a novel pathogen for which traditional diagnostics do not yet exist. Oxford Nanopore offers a third-generation sequencing technology which can sequence targets through a pathogen-agnostic approach. Nanopore sequencing is still in its infancy, yet it has achieved remarkable improvements in speed and accuracy over the last few months alone. Of course, sequencing technologies are only useful for pandemic preparedness if they are available where the risk of zoonotic pathogen spillover is highest. Recently, global access has been championed through the financial support of the Rockefeller Foundation's Pandemic Prevention Institute and the Wellcome Trust. Accessibility has never been higher and, provided investment is sustained, will continue to grow.

Scalable Vaccine Platforms

The pandemic has helped turn the pharmaceutical companies Pfizer/BioNTech, AstraZeneca, and Moderna into household names due to the success of their COVID-19 vaccines—over 10 billion doses have been administered globally. The development of the COVID-19 vaccines is arguably the greatest scientific achievement of the past decade, not only for their record-breaking speed of development but also for their remarkably high efficacy pre-variants. Attempts to quantify the lives saved and disability-adjusted life years averted will fill research papers and theses for years to come. As easy as it is to celebrate the current vaccines, the success of the vaccine platforms themselves should be celebrated as well. With the ChAdOx1 nCoV-19 vaccine, Dr Adrian Hill and the Jenner Institute have finally shown that their viral vector can deliver. Similarly, with both Pfizer/BioNTech and Moderna, the world has finally achieved safe, effective mRNA vaccines for humans. These vaccine platforms are much more flexible for targeting new pathogens than traditional forms of vaccine development using inactivated or attenuated pathogens. mRNA vaccines are unlikely to solve all of the challenges of vaccinology, but they can certainly speed up the process. Funding further process improvements rather than specific vaccines is likely the best strategy for pandemic preparedness, but this approach is generally unappealing to investors (Professor Sarah Gilbert and Dr Catherine Green have a fantastic section addressing this issue in their book Vaxxers). The Coalition for Epidemic Preparedness Innovations is doing the best work in the field by funding high-risk, high-reward vaccine projects that will provide useful lessons for broader development in the field.

“If the current funding for pandemic preparedness is maintained or improved, there is little question that we will be better prepared than at the start of 2020.”

Therapeutics

Early in the COVID-19 pandemic, convalescent plasma—a blood product derived from patients who have recovered from infection—was widely used globally as an intravenous antiviral, despite scant evidence of who would benefit most from its administration. Convalescent plasma has the advantages of being (relatively) easy to collect from recovered individuals, having a strong safety record, and being rich in antibodies targeting the pathogen of interest. On the surface, one can be forgiven for having thought plasma would be a silver bullet. Nevertheless, after the conclusion of several large-scale trials in hospitalized individuals, recruitment for which was partially hampered by the granting of an emergency use authorization by the Food and Drug Administration in the United States, the evidence was fairly clear that convalescent plasma is nearly useless for treating those who are already hospitalized. Recent trials conducted in the United States and Argentina have demonstrated efficacy at preventing severe illness if plasma is given early in infection and at high titre (a measure of antibody concentration). In future pandemics, during the period when pathogen-specific antivirals are under development, convalescent plasma could serve as a first-line treatment for high-risk individuals if given shortly after infection. This would, of course, require access to accurate diagnostic tests.

While the University of Oxford is rightly celebrated for the development of the ChAdOx1 nCoV-19/AZD1222 vaccine, the RECOVERY trial led by Professors Sir Peter Horby and Sir Martin Landray may very well be the University's largest contribution to reducing COVID-19 mortality. The results of the trial, particularly in identifying the potential of dexamethasone, are estimated to have saved a million lives globally by March 2021. As a trial, RECOVERY was innovative for the speed at which it was established, its massive size (it leveraged nearly every hospital in the UK), its multi-armed approach, and its simplicity of implementation. Had more sites globally approached clinical trial design as RECOVERY did, there would have been far fewer underpowered studies publishing conflicting evidence, and more lives would have been saved. RECOVERY has set a benchmark for trial design in crisis situations.

As COVID-19 vaccines continue to be delivered around the world, society will begin to recover from this pandemic. If the current funding for pandemic preparedness is maintained or improved, there is little question that we will be better prepared than at the start of 2020. Nevertheless, progress will stall if the world fails to capitalize on this opportunity and aggressively support the field financially.

Jeremy Ratcliff




Mathematical modelling for regenerative medicine: dream or reality?

One of the most stunning qualities of mathematics is its universality. Besides describing properties of numbers and shapes, the application of maths to modern science underpins many significant discoveries and technological developments. Quantum mechanics and general relativity, for example, are both understood in the language of mathematics and have each impacted the design of GPS, spacecraft, and computers, to name but a few inventions. At the centre of this language lie structures known as mathematical models: sets of equations that translate our understanding of a given natural phenomenon into abstract "words" supplied by calculus, probability, geometry, or some other mathematical field. Although these models are inherently theoretical and, consequently, do not capture all aspects of reality, they can nevertheless be strikingly accurate. Newton's second law (force equals mass times acceleration), for instance, can be reformulated as a differential equation, a structure from calculus that describes how the position and velocity of a given object change. By solving the equation, we can predict with extremely high precision where the object will be at any future time.

While mathematical models have been extraordinary at describing the laws of physics and chemistry, their application in biology has proven much more challenging. This difficulty arises from multiple sources: the behaviour of cells, for example, results not only from their interactions with the environment, but also from unique internal properties, such as the set of genes that they express. Additionally, most biological systems have many interacting components that cannot all feasibly be tracked. Although these challenges mean that mathematical models in biology are usually poorer predictors than their counterparts in physics and chemistry, the models can nevertheless be useful to experimentalists. Indeed, they can help biologists explore different hypotheses in a tractable and cost-effective way, draw attention to potentially significant mechanisms, and guide experimental design.

These benefits can be readily seen in the field of developmental biology, where mathematical modelling has advanced our understanding of stem cell behaviour. Stem cells are a crucial biological system to investigate because their descendants form the basis of practically all adult organs, and they normally help to replenish tissues in the body. Consequently, they are key to developing practical and useful tools and treatments in regenerative medicine. Although mathematics has been applied to investigate multiple aspects of stem cell behaviour, I will focus on how the subject has helped discover new mechanisms guiding the migration of neural crest cells, a particular stem cell population.

Neural crest cells, which are common to vertebrates, emerge from a structure that eventually becomes the spinal cord. They migrate to specific areas throughout the organism, where, depending on their final location, they differentiate into skin cells, bone cells, or even peripheral neurons. It is no surprise, then, that disruptions to neural crest cell migration can have serious consequences for normal development, including death or disorders such as cranial malformation. What is surprising, however, is that such complications are rare. This quality of nature, known as robustness, suggests that cell movement is not entirely random but instead can be explained by cells' interactions with other cells and their environment—exactly the kind of behaviour that can be represented using mathematics.

Mathematical modelling has drawn from, and added to, experimental studies to suggest plausible mechanisms that confer robustness on neural crest cell migration. One specific hypothesis, developed after observing that cells contact each other frequently during migration, suggested that migration could arise from a process in which cells repel each other. Such cell-cell repulsion, known in this context as contact inhibition of locomotion, explained why cells avoided becoming densely packed in the early stages of migration, and why they moved towards less populated regions. Mathematical models developed from this hypothesis, however, did not always yield realistic results: cells would spread out in simulations and colonise regions not normally traversed by neural crest cells. This discrepancy between the theoretical model and the true biology led investigators to refine their hypothesis by reasoning that the cells should also attract each other over large distances. This "short-range repulsion, long-range attraction" scenario was able to produce realistic migrating collectives in simulations, inspiring experimentalists to search for molecules that attract, and are secreted by, neural crest cells. The prediction was eventually verified in frog embryos by the discovery of molecules such as stromal cell-derived factor 1 (Sdf1), which cells move towards. These experimental observations confirmed that neural crest cells could indeed attract each other by secreting Sdf1, and thereby demonstrated the power of mathematics in making experimentally useful predictions.

As the example above illustrates, mathematical models are not set in stone but are continually updated based on experimental results. As more data become available on neural crest cells, the mathematical models used to describe them will become better at anticipating their behaviour over larger time-scales. This in turn may aid the design of new technologies in stem cell engineering and regenerative medicine, as one must first have a reliable and accurate model to make an informed prediction before attempting to control where and when stem cells migrate and differentiate. In this way, by harnessing mathematics to accurately guide stem cell behaviours, researchers will draw closer to establishing regenerative medicine as a grounded, practical field with reliable tools and treatments.

W. Duncan Martinson
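The "short-range repulsion, long-range attraction" rule described in the article can be sketched as a toy agent-based simulation. The force law and every parameter value below are illustrative assumptions, not the published neural crest model: cells closer than a repulsion radius push apart (mimicking contact inhibition of locomotion), while cells within a larger attraction radius pull together.

```python
import numpy as np

def step(pos, dt=0.05, r_rep=1.0, r_att=5.0, k_rep=1.0, k_att=0.05):
    """One Euler step of a toy cell-migration model.

    Cells closer than r_rep repel each other (short-range repulsion);
    cells between r_rep and r_att attract weakly (long-range attraction).
    All values are illustrative, not fitted to neural crest data.
    """
    vel = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            d = pos[j] - pos[i]
            dist = np.linalg.norm(d)
            if dist < 1e-9 or dist >= r_att:
                continue  # no interaction beyond the attraction radius
            u = d / dist
            if dist < r_rep:
                vel[i] -= k_rep * (r_rep - dist) * u  # push apart
            else:
                vel[i] += k_att * u                   # pull together
    return pos + dt * vel

rng = np.random.default_rng(0)
cells = rng.uniform(0.0, 3.0, size=(20, 2))  # 20 cells scattered in a 3x3 box
for _ in range(200):
    cells = step(cells)
# With both forces active, the collective neither collapses to a point nor
# disperses: repulsion enforces spacing while attraction keeps it cohesive.
```

Removing the attraction term (set `k_att = 0`) reproduces the failure mode the article describes: the cells drift apart indefinitely instead of migrating as a cohesive group.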




The role of science in waste

Anyone who has conducted an organic synthesis lab knows the inexplicable mess and waste one regularly produces in an effort to achieve what seems like nothing. This fact, and others like it, leads to a bigger question: how do scientists contribute to the current global waste crisis?

Laboratory waste is under strict regulation in the UK, with many substances being toxic and having to be handled with the utmost care. Reducing waste and emissions in a lab can seem impossible due to a plethora of factors, including buildings being temperature-controlled, requiring a 24/7 energy supply, and relying heavily on single-use items during experiments. There is potential for modernising existing labs (e.g. improving the efficiency of buildings using solar panels); nevertheless, balancing the needs of the environment with those of scientific advancement is no small feat. Reputable organisations such as the International Institute for Sustainable Laboratories, and their benchmarking tools, are therefore crucial for the future of labs. Parameters that each lab can measure, such as its carbon footprint, allow for future-proofing, and the option to compare policies and practices with peers encourages collaboration.

There is a persistent rhetoric advocating the recycling of plastic waste, but many fail to realise that plastic recycling involves solvent-based processes and further incineration. Only about 16% of plastic waste is recycled, with the rest sent to landfill or incineration, meaning that most of the plastic we use regularly is made from raw materials. Recycling would be far more efficient if only a single type of plastic existed; at present, this is not the case. Scientists have put forward a proposal incorporating hyperspectral imaging technology: during the separation process, each type of plastic shows a unique signature under ultraviolet light, allowing for more effective sorting. This would be a step towards a circular economy, in which most of the plastic in use forms a 'closed loop', minimising the use of raw materials.

Modern climate activism has an emphasis on the actions of individuals. The truth remains that the majority of emissions and waste result from the behaviour of multinational corporations and a lack of government accountability. There have been several calls for a UN panel on chemical and physical waste, which would allow scientists to be at the forefront of policy changes. Nonetheless, with each country having varied laws around waste, it becomes easy for corporations to exploit countries with minimal regulation for profit. The persistent lack of appropriate legislation has led to shocking statistics: 16% of premature deaths worldwide in 2019 were attributed to waste mismanagement. A paper published in 2020 revealed how forced changes in the environment have impacted our health, from oxidative stress (an imbalance of oxygen species producing inflammation) to epigenetic alterations leading to changes in gene expression. Developing countries have been impacted more by contamination, as practices such as open dumping and burning are commonplace where public services are unreliable. The EU and USA each export 70-95% of their recycled plastics to China alone, a trade that prompted Chinese laws banning the import of 'foreign garbage'. There is a clear disconnect between the empty promises politicians make about the future of our planet and the actions being carried out against other nations.
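The sorting idea behind hyperspectral separation can be caricatured in a few lines: suppose each polymer fluoresces with a characteristic peak wavelength under ultraviolet light, and classify a sample by the nearest reference peak. The polymer names below are real plastic types, but the wavelengths are invented purely for illustration.

```python
# Hypothetical peak fluorescence wavelengths (nm) per polymer -- illustrative
# values only, not measured spectra.
REFERENCE_PEAKS = {
    "PET": 386.0,
    "HDPE": 412.0,
    "PVC": 437.0,
    "PP": 465.0,
}

def classify(peak_nm):
    """Assign a measured peak to the polymer with the closest reference peak."""
    return min(REFERENCE_PEAKS, key=lambda p: abs(REFERENCE_PEAKS[p] - peak_nm))

print(classify(390.0))  # closest to the PET reference at 386 nm
```

A real sorting line would compare whole spectra rather than single peaks, but the principle is the same: a distinctive optical signature per polymer turns separation into a classification problem.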

“Modern climate activism has an emphasis on the actions of individuals. The truth remains that the majority of emissions and waste are a result of the behaviour of multinational corporations and lack of government accountability.”

The solutions to waste are neither obvious nor simple. There is hope, however, with developments in technology leading to more efficient recycling and governments increasingly taking note of scientific guidelines. Many research groups have made great efforts to discover ways to improve our waste-management capabilities. Scientists do contribute significantly to the waste produced, yet they also play a critical role in improving institutions and guiding governments where possible. The fight for climate justice is one that cannot be fought without the great minds of science.

Halima Doski

Is regenerative farming the future of food?

The start of the New Year brings with it "Veganuary". Although people commit to it for many different reasons, environmental concerns are high on the list: as well as being responsible for 30% of global greenhouse gas emissions, agriculture also takes the heaviest toll on biodiversity in the UK. As agricultural usage accounts for 72% of the total land surface in the UK, large-scale systemic change is vital to help meet climate goals. Nevertheless, giving up meat and other animal products completely may not be the only option for those trying to be more conscious of the impact of their diet on the planet. It has been suggested that a focus on regenerative agriculture could be a viable alternative, with the term "Regenuary" even being coined in 2019 by Glen Burrows, co-founder of 'The Ethical Butcher'. Is this really the case?

Regenerative agriculture is a term used to describe farming practices that aim to increase biodiversity and sustainability whilst producing good quality products. These practices include promoting soil quality and minimising its disruption, increasing crop diversity, and allowing for a greater focus on animal welfare (such as raising animals on mixed pasture outdoors).

Whilst the claim that the UK is 30 years away from the end of soil fertility is an exaggeration, soils worldwide are eroding and thinning, and action is necessary. Disturbing and tilling soil causes erosion and releases carbon dioxide, so curbing these activities can promote carbon sequestration and improve soil quality. Moreover, a study of a regenerative farm in the US suggests that measures such as converting annual cropland to pasture, increasing the diversity of livestock, and reducing tillage and pesticide use can give beef a negative carbon footprint, owing to the extra carbon stored in the soil and vegetation. Improving soil quality has plenty of other beneficial effects too, including increased water retention, which helps protect against floods and provides resistance to droughts, thus strengthening the land’s resilience in the face of climate change. It can also enhance crop quality: soil richer in organic matter contains the vital nutrients that create a healthy environment for soil microbes. These microbes, in turn, help maintain good soil structure, are crucial for cycling the nutrients important to plants (such as nitrogen and phosphorus), and protect plants from disease.

A study of US cornfields showed that regenerative farms not only spend less on pesticides and fertilisers but can also sell crops for more thanks to organic premiums: regenerative farming can be economically beneficial too.

The EAT-Lancet report, which aims to develop universal global targets for the sustainable production of food and healthy diets, recommends a ‘planetary health diet’. This diet is mostly plant-based, including no more than 98g of red meat a week and small amounts of poultry and fish. In addition to being healthier for individuals, this shift is also necessary for the planet—not least to meet the goals set in the Paris Agreement. The report also discusses regenerative practices, stressing that these too are vital to achieving climate goals.

Of those who took part in “Veganuary”, 85% said they would permanently change their diet by at least halving their animal product intake, and 40% said they would stay vegan. For those who feel they cannot go cold turkey and cut out meat and dairy entirely, buying less and choosing produce from farms that use regenerative techniques could help decrease the impact of agriculture on biodiversity and the climate. Of course, a plant-based option is not always necessarily the most environmentally friendly choice, and regenerative practices are important across all areas of agriculture. In summary, although it is no alternative to reducing meat and animal product consumption, regenerative agriculture will be crucial to improving sustainability going forward.

Julia Johnstone

Artwork by Sophie Park




Reductionism in science: learnings from natural remedies

Reductionism is the idea that any system can be described as the sum of its parts: the body by its cells, the mind by the activity of its neurons, and therapeutic drugs by their chemical structure. The theory has roots in the Ancient Greek belief of Atomism and has shaped how science has developed ever since. Across scientific disciplines, we aim to break a system down into its most basic components in order to build a picture of the whole, ultimately to learn how to manipulate individual components to our advantage. Yet, as the frontiers of our technology march on, I would argue that we are outgrowing this philosophy.

Antibiotic resistance: learnings from honey

Antibiotic resistance speaks to the crux of the reductionism problem. Antibiotics were developed to address bacterial infection with a single, targeted solution, but our failure to consider the complex emergent properties at work has led to resistant strains of bacteria which render our treatments useless. While an antibiotic may be effective on the scale of an individual bacterium, on the scale of a bacterial colony, evolution ensures that its success is short-lived. It is well known that the global pipeline of antibiotics has dried up, yet, worryingly, bacterial infection remains a problem and drug-resistant infection is a growing issue.

To solve this, we might look to a species that has been producing effective antibiotics for far longer than we have: bees. Honey has been used in wound care for more than 3,000 years, yet only recently has Western medicine accepted it as an effective wound treatment. Manuka honey, a natural product native to New Zealand and Australia, is commonly used as a wound dressing owing to its antimicrobial properties. Its high sugar concentration draws water from bacterial cells, and an internal metabolite produces hydrogen peroxide in the


presence of water (though these alone don’t account for its entire effect). Researchers speculate that various phenolic compounds synergise with one another to produce significant antibiotic activity. This synergy was demonstrated in the lab by the significant antibacterial effect of rifampicin combined with Manuka honey, even at concentrations at which neither had an effect on the bacteria individually. The effect was shown to be more than simply additive, and it prevented resistance to rifampicin from emerging. Ultimately, this is part of the reason why bees aren’t experiencing an antibiotic resistance crisis: their evolution has favoured emergent solutions. By taking these learnings further, we might produce antibiotics that are cheaper and more effective than our current treatments, and support them through combination therapies that cannot be resisted.

Depression: learnings from magic mushrooms

The development of selective serotonin reuptake inhibitors (SSRIs) appeared to be a success story for reductionism. In the 1950s, the first two antidepressant drugs, iproniazid and imipramine, were found to target a wide variety of neurotransmitters in the brain. However, these drugs had numerous side effects, which prompted the development of Prozac, the first SSRI, since serotonin deficiency seemed to correlate with depression. We now know that there is far more behind mood disorders than the ‘chemical imbalance theory’ accounts for. Some patients respond better to drugs that target other neurotransmitters, while others don’t respond to treatment at all. Research has shown that psychotherapy alongside antidepressants, the current preferred treatment for depression, produces greater results than either treatment alone. The leading theory postulates that antidepressants prime the brain for change, so-called ‘neuroplasticity’, which makes therapy more effective.

Artwork by Sophie Park

Psychedelic-assisted psychotherapy, which employs compounds such as LSD and psilocybin, has been shown to encourage the formation of new neurons in the brain and facilitate neuroplasticity. These higher-order effects go unaccounted for in the simplistic view that considers only the neurotransmitter scale. With the emergence of ‘treatment-resistant depression’, there is an interesting parallel between our views of the body and the brain. While the use of SSRIs is unlikely to have led to treatment resistance in patients in the way seen above with antibiotics, it is clear that a reductionist approach to emergent systems in many ways fails to produce effective treatments.

Reductionism appears to be the natural response to understanding complicated systems, yet it is often ill-equipped to tackle multi-dimensional problems. There are alternatives to this approach, such as dialectical emergence: the idea that, though some phenomena are not reducible to their parts, they are nonetheless related to them. The need to consider a philosophy like this is exemplified by the difficulty of determining a protein’s 3D structure from its chemical structure alone; we could not understand protein behaviour by considering only its chemistry, so we developed biology to account for the majority of our observations. Taking such an emergent approach should become part of our training as scientists if we are to move beyond reductionism.

Hamzah Mahomed



