Issue 55 - Michaelmas 2022


Michaelmas 2022 Issue 55 www.bluesci.co.uk Cambridge University Science Magazine

FOCUS

Chip Design and More

Particle Physics . Computers . Green Energy . Weird Biology


Cambridge University Science Magazine

YOUR AD HERE BlueSci is published at the start of each university term and 3000 copies are circulated to all Cambridge University colleges and departments, and to our paid subscribers. However, this doesn’t come cheap, and we rely on our partners and paid subscribers to keep us going. We are also seeking new partnerships. If you would like to sponsor a student-run society teaching key science communications skills with reach across Cambridge University and beyond, we would love to hear from you!

Contact finance@bluesci.co.uk to subscribe to us or to secure your advertising space in the next issue


Contents

Cambridge University Science Magazine

Regulars

On The Cover 3
News 4
Reviews 5

Features

In Search of the Missing Jawbone – Solving an Evolutionary Riddle 6
Hayoung Choi unmasks the mysterious origins of the structures that were repurposed to build the middle ear

The Green Revolution – How a Jellyfish Transformed Cell Biology 8
Andrew Smith explores the origins of GFP and how it became the versatile tool that we know today

Lateral Flow Tests – Beyond COVID-19 10
Caroline Reid talks about pushing existing medical technology to its full potential

Is 3D Printing All It’s Cracked Up To Be? 12
Sarah Lindsay talks about the versatility of 3D printing

Generative Adversarial Networks 14
Shavindra Jayasekera discusses how competing algorithms can solve the problem of small biomedical datasets

FOCUS

Microprocessor Design – We’re Running Out Of Ideas 16
Clifford Sia discusses the challenges involved in building faster microprocessors

Pavilion: When Braille Meets Colours 22
Laia Serratosa walks along the fuzzy, dreamlike borders between science and everything else

The Decarbonisation Challenge 24
Clifford Sia explains the difficulty of making the world run on renewable energy

Frontiers at the Large Hadron Collider 26
Manuel Morales Alvarado writes about the importance of the work at the LHC in the progression of high energy physics and its impact on wider society

Nuclear Fusion – Harnessing the Power of Stars 28
Shikang Ni talks about the future of nuclear fusion energy and where we are now

Beyond The Periodic Table 30
Mickey Wong explores the origins of radioactivity and why this means that superheavy elements are unlikely to exist

Weird and Wonderful 32
Xenotransplantation . Bioreceptive Architecture . Moon Plants

BlueSci was established in 2004 to provide a student forum for science communication. As the longest running science magazine in Cambridge, BlueSci publishes the best science writing from across the University each term. We combine high quality writing with stunning images to provide fascinating yet accessible science to everyone. But BlueSci does not stop there. At www.bluesci.co.uk, we have extra articles, regular news stories, podcasts and science films to inform and entertain between print issues. Produced entirely by members of the University, the diversity of expertise and talent combine to produce a unique science experience.

President: Adiyant Lamba ............ president@bluesci.co.uk
Managing Editor: Georgina Withers ............ managing-editor@bluesci.co.uk
Secretary: Adam Dray ............ enquiries@bluesci.co.uk
Finance Officers: Amelie Lam, Katie O’Flaherty ............ finance@bluesci.co.uk
Subject Editors: Bethan Charles, Elizabeth English ............ subject-editor@bluesci.co.uk
Podcast Editors: Laura Chilver, Georgia Nixon & Mark Grimes ............ podcast@bluesci.co.uk
News Editors: Yan-Yi Lee ............ news@bluesci.co.uk
Webmaster: Clifford Sia ............ webmaster@bluesci.co.uk
Social Media and Publicity Officer: Andrew Smith ............ communications@bluesci.co.uk
Art Editor: Pauline Kerekes ............ art-editor@bluesci.co.uk



Issue 55: Michaelmas 2022
Issue Editor: Clifford Sia
Managing Editor: Georgina Withers
First Editors: Andrew Smith, Sarah Ma, Devahuti Chaliha, Leah Hurst, Bartosz Witek, Shikang Ni, Clifford Sia, William Guo Shi Yu, Adam Dray, Sarah Lindsay, Adiyant Lamba
Second Editors: Adam Dray, Emily Naden, Rhys Edmunds, Devahuti Chaliha, Saksilpa Srisukson, Sarah Lindsay, Sarah Ma, Shikang Ni, Clifford Sia, William Guo Shi Yu
Art Editor: Pauline Kerekes
News Team: Yan-Yi Lee, Lily Taylor, Sneha Kumar, Sung-Mu Lee
Reviews: Ems Lord, Benedetta Spadaro, Adiyant Lamba
Feature Writers: Caroline Reid, Andrew Smith, Mickey Wong, Shavindra Jayasekera, Shikang Ni, Manuel Morales Alvarado, Sarah Lindsay, Hayoung Choi, Clifford Sia
FOCUS Writer: Clifford Sia
Pavilion: Pauline Kerekes
Weird and Wonderful: Megan Chan, Bartek Witek, Barbara Neto-Bradley
Production Team: Clifford Sia, Georgina Withers
Caption Writer: Clifford Sia
Copy Editors: Andrew Smith, Clifford Sia, Adiyant Lamba, Georgina Withers
Illustrators: Caroline Reid, Sumit Sen, Pauline Kerekes, Biliana Tchavdarova, Barbara Neto-Bradley, Rosanna Rann, Mariadaria Ianni-Ravn, Sarah Ma, Duncan Shepherd
Cover Image: Josh Langfield

Pushing Boundaries

The relentless pace of scientific progress is one of the fundamental constants in our lives that we have come to depend upon, so much so that we tend to take for granted the gradual improvements that result from it. But what many fail to appreciate is the sheer complexity of sustaining that progress, as it is all too easy to focus on the achievement instead of the challenges that had to be overcome to get to this point. This is a problem that threatens the spirit of scientific collaboration, as the lack of understanding fosters a highly siloed environment that prevents innovations from being shared between different fields of research. We hope that this issue can remedy this to some extent, by providing our readers with an accessible glimpse into cutting-edge research in fields as disparate as biotechnology, computing, engineering, and physics.

Starting with the biological point of view, Hayoung Choi elaborates on the methods used by developmental biologists to reconstruct the telltale traces that identify evolutionarily linked structures, using the middle ear as a pedagogical example. Andrew Smith then discusses the humble green fluorescent protein and how it heralded a new era in cell biology by giving scientists a tool to directly analyse cellular processes. From a more practical perspective, Caroline Reid discusses how lateral flow tests are relevant beyond COVID as a cheap yet effective diagnostic assay, while Sarah Lindsay discusses how 3D printers can be modified to print biological structures. The role of artificial intelligence in solving practical problems is highlighted by Shavindra Jayasekera, who explains how generative adversarial networks make the small size of medical datasets less of a problem when training medical AIs. Of course, the relevance of faster computers here cannot be overstated, and so the FOCUS piece examines the challenges involved in building them.

As an aside, Pauline Kerekes in her Pavilion piece invites the reader to consider how art may be made equally accessible to sighted and to blind people. Clarke Reynolds is given as an example of how a piece of art may have dual meanings: by elevating Braille to an art form, he has been able to create art that has meaning to both the sighted and the blind person.

The next part of this issue builds upon the physical and technical nature of scientific progress, beginning with an article about the challenges of scaling up green energy to industrial scales. Manuel Morales Alvarado then goes on to describe the role of large particle accelerators in pushing the boundaries of particle physics, with reference to the world-renowned Large Hadron Collider. Shikang Ni then discusses the current state of the art in nuclear fusion, while Mickey Wong finishes up with a whimsical insight into the nature of radioactivity and how it arises from the fundamental laws of nature.

Finally, it is our hope that by introducing you, the reader, to a broad range of fields spanning the scientific frontier, you will be inspired to find out more about how you can contribute to these fields with your unique knowledge and insights. And, at the very least, to encourage you that it is indeed possible to make your field more accessible to outsiders through scientific communication, no matter how complex

This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License (unless marked by a ©, in which case the copyright remains with the original rights holder). To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/3.0/ or send a letter to Creative Commons, 444 Castro Street, Suite 900, Mountain View, California, 94041, USA.


Clifford Sia Issue Editor #55 Michaelmas 2022


On the Cover

With my cover design I tried to convey a feeling of some of the experiences that I thought were at the core of what it means to push boundaries. In testing and breaking through boundaries, the unknown is always waiting on the other side, and I wanted to portray the weightlessness and uncertainty of striding out into this domain. The astronauts in my picture are experiencing the same things as they drift untethered through space, having broken free of the structure they previously inhabited. Enormous abstract forms now surround them; their function, construction, or perhaps even intention, is incomprehensible to the astronauts. They can’t be certain if these forms will be useful, harmful, something they can manipulate or something that will manipulate them. Despite the immense unfamiliarity of their new surroundings, the astronauts’ curiosity compels them to explore and discover, boldly leaving behind their previous constraints in order to seize the opportunities that lie deeper in space. The forms are etched with electrical markings, as the full magnitude of the changes this technology has allowed would have been similarly difficult for early developers to comprehend. In further progressing this and other technologies, humanity places itself in the same position as the astronauts in the picture, navigating powerful and complex technological changes in order to secure a future outside our current boundaries

Josh Langfield (@aetherweather), Cover Artist



News

The Role of Horizontal Gene Transfer in Insects and its Possible Applications

Horizontal gene transfer (HGT) involves the transfer of genetic information between organisms, often of two different species. Genes can be transferred to insects via HGT from bacteria, fungi, viruses, and even plants. Many of these genes have important ecological functions in the host insects, impacting characteristics such as metabolism, adaptation, immunity, and courtship.

Research has found that lepidopterans, a group of insects containing butterflies and moths, have especially high levels of HGT-acquired gene expression, including a particular gene that impacts mating behaviours of species such as the diamondback moth. Diamondback moths are a prolific agricultural pest, causing significant loss of brassica crops. Through knockout of a specific HGT-acquired gene that contributes to courtship behaviours, it is possible to significantly reduce the number of courting attempts made by male diamondback moths, which could potentially be used in population control. These findings have implications for pest control via genetic modification of pest species like the diamondback moth; however, gene-drive modifications could have unintended consequences. The unintentional eradication of insect species could irreversibly impact dependent species or ecological functions, and the modifications could even spread beyond the target species. Therefore, scientists need to remain cautious about the applications of gene editing in insects. LT Original article: https://www.cell.com/cell/fulltext/S0092-8674(22)00719-X

Scientists Identify Two Viruses that Trigger Alzheimer’s Disease

Researchers from Tufts University and the University of Oxford have recently identified herpes simplex virus 1 (HSV-1) and the varicella zoster virus (VZV) as potential triggers of Alzheimer’s disease. The former is transmitted through mouth-to-mouth contact; the latter, commonly known as the chickenpox virus, can be transmitted via physical contact with the mucus, saliva, or skin lesions of an infected individual.

The study was conducted with a three-dimensional human tissue culture model that mimics neural activity. It was found that both HSV-1 and VZV typically remain dormant in neurons, but the re-infection or activation of VZV may lead to parts of the brain being inflamed. This could activate HSV-1, causing the amyloid beta and tau proteins to accumulate in the brain. This then paves the way to the type of cognitive damage and loss of neuronal function commonly observed in Alzheimer’s disease. As scientists have yet to pinpoint the cause of Alzheimer’s disease, the study’s significance lies in how it opens a new window to embark on this line of inquiry. The study also implies that the VZV vaccine, which protects against shingles and chickenpox, may play an important role in lowering the risk of cognitive disorders such as dementia. SK Original Article: https://pubmed.ncbi.nlm.nih.gov/35754275/

Novel Mapping of the Human Immune System Could Unveil New Targets in Medical Therapy

The human immune system is incredibly complex, with several different cell types that communicate via networks of proteins. These interactions are key to the understanding of immunological disorders and are also of significant interest to researchers investigating cancer biology; harnessing the power of the immune system can be an effective way to target tumours. Researchers from the Wellcome Sanger Institute and their collaborators have successfully created a novel and integrated map of the proteins involved in signalling within the human immune system. They used newly optimised protocols to screen for potential physical interactions between pairs of proteins and validated each interaction via extensive computational and mathematical methods. Whilst several previous attempts to characterise networks of immunological interactions have been made, this map is thought to be one of the first that allows each of these interactions to be viewed within different biological contexts. For example, the affinity with which a particular protein might bind to a receptor is not always constant, but may increase in a state of inflammation as cells of the immune system are activated. Furthermore, much existing network analysis focuses primarily on secreted proteins, often neglecting interactions that take place on cell surfaces. As well as increasing our understanding of how the immune system operates, there is hope that the new map of interactions may provide new insights for the pharmaceutical industry, including potential new drug targets. YYL & SML Original Article: https://www.nature.com/articles/s41586-022-05028-x

Check out our website at www.bluesci.co.uk, our Facebook page, or @BlueSci on Twitter for regular science news and updates



Reviews

Femtech Venture Creation Weekend – May 2022

Female health research has been chronically underfunded and is often treated as merely a niche subset of healthcare, but this is bound to change… In May 2022, the University of Cambridge hosted the first ever ‘Femtech Venture Creation’ event. Organised by the Judge Business School and the Cambridge Femtech Society, in partnership with Bayer Pharmaceuticals, it brought together students, scientists, entrepreneurs, investors, MBAs, and healthcare professionals to think about disruptive ideas to improve female health using technology (fem-tech). In teams, participants worked through the first stages of business planning. With a diverse range of concepts, from fertility to period care and comfy high-tech clothing to drug discovery tools, everyone worked tirelessly on their ventures with the guidance of mentors from the Cambridge entrepreneurship ecosystem. The Bayer Women’s Health talk inspired attendees to think about the future of female health, such as meeting needs in endometriosis, uterine fibroids, and menopause. Mini-lectures from the Judge Business School expert, Ann Davidson, equipped participants with toolkits to approach their innovative projects. The final pitches left the mentors ecstatic with the level of expertise and passion that characterised all involved. Seeing bright minds challenge the status quo in female healthcare was truly a refreshing sign of hope. BS

The ‘low threshold high ceiling’ approach to nurturing young mathematicians

At the peak of the pandemic, the University of Cambridge’s flagship NRICH mathematics outreach project (nrich.maths.org) reported over a million weekly pageviews of its school resources. These include mathematical problems and games, all with accompanying teachers’ notes, as well as articles for teachers focusing on mathematics and mathematical teaching. NRICH is widely recognised for its ‘low threshold high ceiling’ (LTHC) tasks, inspired by the pioneering MIT and Cambridge mathematician Seymour Papert. In practice, a LTHC task means everyone can get started, and everyone can get stuck. The resources are curriculum-mapped, meaning that the prior mathematical knowledge needed for each task is clear, so teachers can identify the most suitable activities for their classes. These will vary depending on the age and prior attainment of the learners, ensuring that everyone in the class can get started.

Learning to recognise what it feels like to be stuck, and having strategies to get yourself unstuck, is a crucial mathematical skill, and is part of becoming a resilient mathematician. Some undergraduates reflect on the shock of finding mathematics difficult for the first time at university! In summary, NRICH’s LTHC activities enable whole classes to work together on the same task, rather than different children working on different activities. When the ceiling is raised, it can be surprising what heights learners can achieve. EL

Stock image generated by Stable Diffusion

Prehistoric Planet (BBC, Apple TV)

Prehistoric Planet, BBC’s latest paleobiology TV series, puts the viewer in an immersive experience alongside creatures of the Maastrichtian Cretaceous period, 66 million years ago. Succeeding BBC’s paleo-media offerings such as Walking with Dinosaurs in the 2000s, this series combines ‘Planet Earth’-style depictions of ancient creatures, rendered beautifully in CGI, with the dulcet tones of David Attenborough. Streaming on Apple TV, Prehistoric Planet has a sizeable budget, and it shows: the dinosaurs largely look amazing. The show also excels in its whimsical and unique depiction of these creatures’ behaviour: whether that be the dramatic and hilarious depiction of Carnotaurus attempting to mate, or T. rex acting as a doting parent, this series provides a perspective on dinosaurs as animals rather than movie monsters. It can also be argued that dinosaurs aren’t even the main stars of the show. Rather, the other prehistoric creatures of the era, such as the pterosaurs that soar the skies, look fantastic in the photorealistic world created by the production team. All depictions in the series are based on scientific knowledge, although some criticisms have been levelled at the speculative nature of the depictions. Unlike other documentaries of modern-day natural life, the dinosaurs aren’t real, and we can’t know how they truly behaved, despite the incredible efforts of scientists. Each of the five episodes centres around an ecosystem such as ‘Forests’, ‘Deserts’ or ‘Ice Worlds’, and the stories told are engaging even if they seem hypothetical. Prehistoric Planet is a must-watch for paleo-media fans, and an engaging watch for any curious viewer. AL



In Search of the Missing Jawbone – Solving an Evolutionary Riddle

Hayoung Choi explores various approaches to solving the long-studied evolutionary riddle of the mysterious structures that were repurposed to build the middle ear

The land vertebrates, a lineage of animals to which we belong, evolved from bony fish, and underwent a number of adaptations for terrestrial life in doing so. Here, we will follow the evolutionary tale of the stapes, one of the bones in our middle ear that transduces sound waves from the air to the fluid-filled inner ear system. This is necessary because air has a much lower acoustic impedance than water; without the middle ear, sound would simply reflect uselessly off the inner ear. While we now know that the middle ear corresponds to parts of the gill arches in bony fish, a testament to the evolutionary link between us and our fishy ancestors, the journey scientists took to figure this out began in the 19th century, and involved multiple scientific breakthroughs and novel experiments.
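As an illustrative aside, the fraction of sound intensity reflected at a boundary between two media is ((Z2 - Z1) / (Z2 + Z1))^2, and a quick back-of-the-envelope sketch with textbook acoustic impedance values for air and water (standing in for the inner-ear fluid) shows just how much airborne sound would bounce straight off without an impedance-matching middle ear. The code below is a toy calculation, not part of the research described in this article.

```python
# Illustrative only: reflection of sound intensity at an air-to-water boundary.
# Impedance values are standard textbook approximations.

Z_AIR = 415.0       # acoustic impedance of air, in rayl (Pa*s/m)
Z_WATER = 1.48e6    # acoustic impedance of water, in rayl


def reflected_fraction(z1, z2):
    """Fraction of incident sound intensity reflected at a z1 -> z2 boundary."""
    return ((z2 - z1) / (z2 + z1)) ** 2


r = reflected_fraction(Z_AIR, Z_WATER)
print(f"Reflected at an air-water boundary: {100 * r:.2f}%")  # roughly 99.9%
```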


Originating from a filter-feeding ancestor, all vertebrates have the genetic potential to produce seven identical pharyngeal slits. In the course of evolving to live on land, each of these has diverged in its structure and function. The most well known is the first, or most anterior, pharyngeal slit, which became the jawbone, or mandibular arch, that defines the mouth. This was probably the most significant step in the history of vertebrate evolution, as it enabled the predation of other animals – especially as teeth evolved. The fate of the other arches was, however, more variable. Here, we will look at the fate of the second, or hyoid, arch.



This is a tale told by the fossils of ancestral vertebrates, spanning the transition from sea to land. In fish, the monolithic hyoid arch comprises multiple fused bones, which help open and close the link between mouth and gill cavities, synchronising their movement to guide the unidirectional flow of water past the gills for optimum extraction of oxygen. The amphibious fish Panderichthys, however, separated the hyoid arch into an upper and a lower component. While the lower hyoid arch still functioned as before, the upper portion broke off and became vestigial. It is clearly non-functional, as there is nothing for this bone to transduce against. Later, in an early limbed land vertebrate called Acanthostega, we see proof that the upper hyoid arch would ultimately become the stapes, with a smaller stapedial bone resting against the auditory capsule.

Of course, the existence of transitional fossils has massively helped our understanding of the evolutionary course of the stapes, but these had yet to be discovered at the dawn of the field of evolutionary biology two hundred years ago. Moreover, morphological evidence is by definition incomplete, as it offers only evolutionary snapshots at random points in time, with no hint as to what happened in between. For example, the incus and malleus in the mammalian middle ear lack a counterpart in amphibians and reptiles. Thus, even though Reichert and his early 19th century peers were quick to agree that these arose from the posterior part of the first pharyngeal arch, there was disagreement about where the stapes arose from.

Here, embryology has been helpful in filling in the blanks within the evolutionary history of the stapes. During embryogenesis, the same structure may change in radically different ways, allowing us to derive evolutionary homologies between apparently unrelated structures. It was one such painstaking sequential analysis of human embryos, done at the turn of the 20th century, that would conclusively prove that the stapes formed from a mass of undifferentiated tissue growing from the tip of the hyoid arch.

Beyond this, the discovery of ‘homeotic mutants’ was the ground for further breakthroughs in understanding the evolution of the stapes. These are genetic mutations that result in mutants with correctly formed structures in the wrong places. For example, fruit flies with Bithorax mutations develop a duplicated thorax segment. As each thorax segment forms intact wings, the resulting fly has two pairs of wings, not unlike a dragonfly. For the first time, scientists had a mechanistic link between individual genes and morphological differences, and by extension the anatomical variation between animals. As it turns out, these homeotic mutants have mutations in homeobox genes, which confer positional identity throughout the developing embryo. During embryogenesis, self-reinforcing gradients of expression form between different homeobox gene families, creating a coordinate system that endows each cell with the knowledge of its position within the embryo. This information is then accessible to genes controlling cell growth, specialisation, and death, enabling a mass of otherwise undifferentiated tissue to be sculpted into a functioning structure.
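As a toy illustration of how such gradients can act as a coordinate system, the sketch below lets cells along a single body axis read the local level of a decaying signal and switch on different identity programmes above different thresholds. The gradient shape and threshold values are invented purely for illustration and do not model any real gene.

```python
# Toy "positional information" model: a graded signal plus threshold readouts.
# All numbers are invented for illustration; no real gene is being modelled.
import math


def signal(position):
    """Exponentially decaying signal along a body axis running from 0 to 1."""
    return math.exp(-3.0 * position)


def identity(level):
    """Threshold readout: which identity programme a cell switches on."""
    if level > 0.6:
        return "anterior programme"
    if level > 0.25:
        return "middle programme"
    return "posterior programme"


for i in range(5):
    x = i / 4  # position of the cell along the axis
    print(f"position {x:.2f}: signal {signal(x):.2f} -> {identity(signal(x))}")
```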


In the case of the hyoid arch, it is the Dlx homeobox gene family that determines the fate of the upper and lower portions of the arch.

It was in this context that the concept of evolutionary developmental biology (Evo-Devo) arose, a new paradigm integrating molecular genetics, developmental biology, and evolutionary biology. Throughout the 21st century, homeobox and other genes have been extensively manipulated to help us understand how they determine morphology. What do we gain from doing so? One key reason is that there is less room for subjectivity in determining homology when genetic information is considered alongside morphological comparisons, allowing greater confidence in our historical reconstructions of the evolutionary process. The fact that mutations in these genes are the root cause of morphological differences gives us testable hypotheses to prove the existence of deep homologies that are only tenuously supported by other types of evidence. We see this in how provably related variants of the same gene (PAX6) drive the formation of eyes of different shape and structure across multiple distantly related species.

And yet, knowing that the delicate interplay of these gradients uniquely defines a positional coordinate system is one thing. A far harder task is to reconcile these gradients and how they work with the observed morphology. We do not yet fully understand the genetic changes underpinning the restructuring and subsequent repurposing of the hyoid arch. This brings us to the larger question of how the mammalian ear evolved, as the outer, middle, and inner ear are homologous to various disparate structures in fish. In fact, the hair cells within the inner ear are homologous to the external sensory lateral line in fish, posing the obvious question of how an external structure became an internal structure embedded within the skull.

Even so, knowing what happened is only half the battle. We do not yet know what evolutionary pressures could have conceivably reshaped a bony support into a tool of hearing, as there is no conceivable reason why the hyoid arch split into two in the first place. Perhaps, despite all appearances, the vestigial upper hyoid arch had a cryptic function in Panderichthys, or perhaps the split was the inadvertent result of other evolutionary pressures on the complex interplay between homeobox genes. Moreover, the homeobox genes are extremely ancient, dating back to the unicellular last universal common ancestor (LUCA) and thus predating the origins of multicellularity. What gave them the power to control body shape? Are they innately special, or just serendipitously useful genes from the annals of evolution? Whatever the reason, we at least know that we are on the right track to uncovering even more answers about the how and why of evolution

Hayoung Choi is a first-year undergraduate studying Natural Sciences at Peterhouse. Artwork by Mariadaria Ianni-Ravn.



The Green Revolution – How a Jellyfish Transformed Cell Biology

Andrew Smith explores the origins of GFP and how it became the versatile tool that we know today

Green fluorescent protein (more commonly known simply as GFP) is, unsurprisingly, a protein that glows green, and it is one of the most famous proteins in biology. It has been, and continues to be, used throughout a wide range of different fields in cell biology to understand when specific proteins are made, where they are in cells, and what they interact with. However, for a protein that is used so commonly in research, it has a rather humble origin: jellyfish. So how did GFP go from relative obscurity to leading a cell biology revolution?

THE HUNT FOR GFP | The story of GFP began in the early 1960s, when Osamu Shimomura was attempting to isolate a fluorescent protein from the jellyfish Aequorea victoria that glowed green. While this may conjure up images of bright green jellyfish floating in the sea, the jellyfish are actually transparent and do not usually fluoresce. To get them to light up, they need to be touched or shocked with electricity. Even then, it is only the rim of the jellyfish bell that glows green. Considering this, the importance that GFP would come to play hardly seems obvious from its origins.

After processing many jellyfish samples, Shimomura managed to isolate a fluorescent protein that glowed… blue. Yes, that is right, blue. The initial fluorescent protein identified from Aequorea victoria was not GFP. It turned out that this jellyfish uses two fluorescent proteins to glow green when stimulated. The first protein identified was named aequorin, and it glowed blue in the presence of calcium ions. Shimomura went on to identify the second protein, which would become known as GFP. He found that the wavelengths of light needed to activate GFP overlapped with those emitted by aequorin. Therefore, when the jellyfish are stimulated, calcium ions enter the cells that contain aequorin and activate it. The energy that is then emitted from aequorin is transferred to GFP, causing the GFP to emit green light.

Isolating GFP was a major step on the path to the revolution it would bring to cell biology. However, to really have an impact, GFP needed to be expressed in cells other than those of a jellyfish.

MAKING CELLS GLOW | Before GFP could be expressed in other organisms, its gene needed to be identified. To achieve this, Douglas Prasher and colleagues created chains of DNA nucleotides based on knowledge of GFP’s amino acid sequence. They then used these to isolate DNA encoding GFP from a library of DNA that encodes all the proteins made in Aequorea victoria.


At this time in the early 1990s, it was still unclear if specific enzymes were required for GFP to become functional. This question was solved by Martin Chalfie, who expressed GFP first in the bacterium Escherichia coli, and then in the microscopic worm Caenorhabditis elegans, using the GFP-encoding DNA from Prasher. By showing that fluorescent GFP could be produced even in bacteria, which are so distantly related to jellyfish, Chalfie demonstrated that no specific enzymes were required for GFP to function. In fact, oxygen is the only factor required for GFP to fluoresce in cells, as demonstrated by Roger Tsien when he expressed GFP in bacteria grown without oxygen. Subsequently, GFP would be expressed in numerous other model organisms, including yeast, fruit flies, and various mammalian cells.

MODIFYING GFP | Even though scientists could now readily express GFP in a variety of cell types, some researchers realised that they did not need to settle for the default properties of GFP. Tsien was one of the first people to start tinkering with the structure of GFP, by changing which amino acids were present at specific positions in the protein. This was possible because of the relatively stable structure of GFP, which can be modified in certain ways without losing its function. Tsien and colleagues mutated GFP to increase its brightness and to create variants that emit different colours, including blue, cyan, and yellow. The initial work by Tsien has been expanded by other researchers, leading to a variety of GFP variants with improved properties, including the ability to fold into the fluorescent form more efficiently at 37°C, and an optimised DNA sequence encoding GFP for more efficient expression in mammalian cells.

THE GFP TOOLBOX | Developing variants of GFP allowed scientists to create a toolbox of fluorescent proteins. But what is it that scientists can do with GFP and its variants that has revolutionised cell biology? After all, scientists were using fluorescent compounds bound to antibodies before GFP was discovered, which allowed them to visualise the location of proteins in cells. The significant difference is that, unlike fluorescent compounds such as fluorescein, GFP is a protein, so it can be expressed by cells. This difference is crucial to the importance of GFP.

To visualise proteins by fluorescence microscopy using fluorescent antibodies, cells need to be fixed. This is where cells are treated with a compound that crosslinks the different proteins in a cell together, effectively freezing a cell in its current state. Therefore, the cell is no longer alive, and the proteins cannot move. Using this method allows researchers to identify the locations of proteins only at specific times.

That is where GFP comes in. Scientists can now create a fusion of their protein of interest and GFP, which can be done without either protein losing its function. This means that fluorescence microscopy can monitor the location of the protein of interest within cells in real time. Scientists have taken this one step further by using a process called fluorescence resonance energy transfer (FRET) to identify whether two proteins, each fused to a different GFP variant, bind to each other. If they do, energy will be passed from one GFP variant to the other, causing the second GFP to fluoresce. (This is the same process that naturally occurs between aequorin and GFP in Aequorea victoria!) FRET has been instrumental in studying protein-protein interactions within cells.

Researchers have also used GFP in other ways to investigate these interactions, such as by developing split-GFP. This is where two proteins of interest are fused to different halves of the same GFP variant. Upon the two proteins interacting, the two GFP fragments can combine into a whole GFP and fluoresce. By using multiple types of split-GFP variants in the same cell, multiple protein-protein interactions can be visualised at the same time.

GFP has even been used to create biosensors such as calcium sensors. Tsien and colleagues fused two different GFP variants to opposite ends of a calcium-binding protein. When calcium ions enter a cell, the calcium-binding protein changes shape and causes FRET between the GFP variants, allowing the change in ion concentration to be visualised. GFP and its variants have been used in a variety of different types of experiments in addition to those above. As such, GFP has become an essential part of the cell biologist’s toolkit.

GFP’S LEGACY | Scientists have used variants of GFP across many different fields in cell biology. The wide-reaching impact of GFP was recognised by the 2008 Nobel Prize in Chemistry, which was jointly awarded to Shimomura, Chalfie, and Tsien. GFP has had such a wide impact and is used so commonly that it is easy to take its existence for granted these days. However, GFP would not be what it is today if it did not have the properties that make it so special and that allowed it to be developed further. Given its unusual origin in jellyfish, it may not even have been discovered in the first place! Looking back on its story, GFP is proof that breakthroughs across science can start from basic research in niche fields

Andrew Smith is a 4th year PhD student studying neurodegenerative disease at Christ’s College. Artwork by Sumit Sen.



Lateral Flow Tests – Beyond COVID-19

Caroline Reid talks about pushing existing medical technology to its full potential

The next generation of lateral flow tests (LFTs) will use a drop of blood to detect specific white blood cells. These disease-fighting cells, called neutrophils, help the body to fight infections. Immunocompromised people, such as those on chemotherapy, can have low levels of neutrophils. If they develop an infection and this is not treated rapidly with antibiotics, they are at greater risk of life-threatening complications like sepsis. However, patients on chemotherapy do not always have low neutrophil levels and therefore do not always need to immediately attend hospital for antibiotics if they believe they may be developing an infection. In fact, up to half of hospital visits made by chemotherapy patients thought to be developing an infection, around 50,000 per year in the UK, may be unnecessary: clinical assessments and lengthy blood tests showed that their neutrophil levels were normal and they were not at risk of sepsis.

Dr Pietro Sormanni’s group at the University of Cambridge develops technologies to discover antibody molecules used as detection markers on LFTs. Thanks to a newly established partnership with a Cambridge-based startup, 52 North Health, the group will now develop antibodies that will underpin LFTs to accurately detect an individual’s neutrophil levels and risk of sepsis in an at-home test.


Sormanni summarises the ethos behind this project: ‘Patients on immunosuppression therapy, like chemotherapy, are extremely vulnerable to infection. If they develop a little bit of fever or any sort of sign of potential infection they are rushed to the hospital and given a lot of antibiotics while doctors do a blood test which takes time. Sometimes this means that they didn't need any of those antibiotics if there was no biomarker for infection. This contributes to the rise of antibiotic resistance, is stressful for the patient, and costly to the healthcare system, so we are trying to turn the readout of this relatively lengthy blood test into a lateral flow test: a faster point of care.’

Dr Saif Ahmad, an academic consultant oncologist at Addenbrooke’s Hospital, has first-hand experience with this problem. In response to patient distress, he co-founded 52 North Health, the company developing this LFT, called NeutroCheck, which tests for neutrophils. ‘I realised that this was something I saw every day when I was on call in the hospital,’ commented Ahmad in a BBC interview in 2020. Since then, the company has been busy designing and testing this LFT. It takes around ten minutes to produce a result and will help patients and doctors quickly decide who needs antibiotics and hospital care. This has the benefit of reducing the rise in antibiotic-resistant bacteria, allowing resources to be focused on the sickest patients. It also increases the quality of life for patients, allowing them to stay out of the hospital with peace of mind.

‘We have tested our device performance from over 200 blood samples from Addenbrooke’s Hospital, Cambridge, and we have performed user testing with patients interacting with the device in around 40 individuals,’ said Ahmad. A clinical study will commence in 2023 at Addenbrooke’s Hospital to test the device and apply for UKCA and CE marking, labels indicating that high safety, health, and environmental protection requirements have been met.

LFTs were a huge component of the testing strategy used to manage COVID-19, and this public awareness has energised research into LFTs and their use in other diagnostics. The principle allows a target in a liquid to be detected quickly: in LFTs, a set of stationary antibodies binds the specific molecule, producing a visible line of dye, so a result appears within the time it takes for the target molecule to move along the strip. For the person using the test, this looks like a line appearing on the strip. Of course, the reality of designing this technology for medical use is a little more complicated than a drop of liquid on a stick. The real challenges lie in finding biomarkers that accurately reflect the clinical condition being tested for, in NeutroCheck’s case whether or not a patient needs to seek further treatment, and then in obtaining suitable antibodies that bind to these biomarkers.

Proteins, our complicated building blocks, provide both the challenge and the answer. Proteins make up around half of our dry body weight, and every single cell in the human body contains them. The human genome encodes more than 20,000 different proteins, most of which are expressed in all cell types, while a few are unique to specific cell types. To design LFTs that detect one cell type, like neutrophils, among the many that are present in the blood, antibodies have to be obtained that bind to one of these unique proteins. The Sormanni lab is going to tackle this problem using 3D computer modelling to predict which antibodies will react with a specific protein. Traditionally, this process would be done with experimental trial and error using lab resources and time, whereas a computer can make these predictions much more quickly. In the case of COVID-19, the LFT contains detection antibodies, initially modelled on a computer, that bind to the spike protein on the surface of the coronavirus, which is unique to this virus.


While faster than their human counterparts, computers still have their limitations. Recently, in the Baker Lab at the Institute for Protein Design at the University of Washington, there have been advances in computational design that can create proteins that bind to specific molecular targets in a similar way to antibodies. The program needs to be able to scan a target molecule, identify the potential binding sites, and then generate proteins that can target those sites. The protein that is generated by a computer might not exist in reality, so the computer then screens millions of proteins in its database to find the candidates that are the most promising, which are then further optimised in the laboratory.

‘When it comes to creating new drugs, there are easy targets and there are hard targets,’ said Dr Longxing Cao, who worked on the project as a PhD student and is now an assistant professor at Westlake University. ‘Even very hard targets are amenable to this approach. We were able to make binding proteins to some targets that had no known binding partners or antibodies.’

Designing a test that is no larger than a stick of chewing gum can require teams of scientists working together all over the world, as well as someone on the inside who knows which tests are needed. Diverse experiences and specialities, from doctors to computer scientists, are needed to figure out what tests are needed and how to make them. The future of LFTs is broad and exciting and will require problem-solvers from all walks of life and all specialities to help design them.

‘In principle,’ adds Sormanni, ‘you can get fancier and you can have different bands with different targets on a single lateral flow test. Or even bands that are printed with different densities so that you could get a kind of gradient that gives you information about the quantity of your target protein. So if you have a lot of protein, you will probably see a lot of bands. And if you have a little of your protein, you may only see very few bands.’

A device like NeutroCheck has the potential to streamline the healthcare journey for thousands of people. As Dr Ahmad showed, innovation can stem from the right person noticing a problem. When you step back from this article, what will you notice?

Caroline Reid started in physics and, although she left the equations behind, loves all things that bubble, beep, and bang in communications. Artwork by Caroline Reid.



Is 3D Printing All It’s Cracked Up To Be?

Sarah Lindsay introduces the many layers of 3D printing

The first 3D printer was built in the 1980s by Chuck Hull, and it has been suggested that the majority of households in developed countries will one day own their own 3D printer. Dropped and smashed your favourite mug? Not a problem, print another. Lost the remote control again? Hit start on the printer and you could have another one by morning. Recently, 3D printing has had a huge amount of attention: objects can be made cheaply and with very little waste. Its popularity is spreading from printing day-to-day household objects to industry and scientific discovery, providing an opportunity for engineers to collaborate with almost any other field, from construction to healthcare. Printing has developed considerably since the 1980s; in the early 2000s, Thomas Boland made the first bioprinter, which meant printing with live cells became a reality.

3D printing is the process of layering down material to build up a 3D object. The general process is to first make a model using computer-aided design (CAD) software, developed specifically to allow the design of any 3D object, or to use measurements taken from imaging such as MRI scans. The model then needs to be transferred to slicer software, which converts the file into a language the printer can understand. Here, decisions such as how sturdy the structure needs to be, or how cheaply it can be manufactured, are made through the infill pattern and density. If the structure needs to be sturdy, a higher infill density is required, but this leads to more material being laid down and therefore higher costs. Finally, the file is uploaded to the printer along with the appropriate ink, nozzles, and settings. Hit print and, depending on your materials and the size of the object, it will be generated anywhere from seconds to days later.

The variety of applications of 3D printing is vast, from aerospace and car manufacturing to scientific research and construction, often offering a more sustainable and cost-effective solution. The world’s first 3D-printed school has opened its doors to teach children in Malawi. The construction took just 18 hours and produced very little waste. The collaboration between 14Trees and Holcim has provided children with an opportunity to have an education. With more schools being built by 3D printing by organisations such as Thinking Huts, more and more children will get the opportunities they deserve.

Moving on to healthcare, customisation is key to any effective treatment method. We are constantly learning that healthcare needs to be individualised to the patient’s needs for the best outcomes. 3D printing provides an easy way to customise implants that go into the human body. One area that requires this personalisation is joint treatment. Researchers are working on treatments that can help avoid the inevitable and destructive total joint replacement, particularly for younger age groups. One way of doing this is to insert a metal plate to relieve the weight and pressure on the damaged area of the joint. This procedure is known as a high tibial osteotomy.


Engineers at the University of Bath’s Centre for Therapeutic Innovation have been working in collaboration with 3D Metal Printing Ltd. to achieve a tailor-made plate that has fewer complications than the standard one. They have 3D-printed medical-grade titanium-alloy plates designed from CT scans of patients, and have used computer modelling to compare the safety and risk of implant failure of the personalised plates with the standard ones. Although this study is currently in the computer-modelling stage, the plates have been approved for clinical trials.

The power of 3D printing is also shown through the creation of replica organs. These models can be made rapidly, cheaply, and in great detail. Surgeons at Guy’s and St Thomas’ NHS Trust used a 3D-printed model of a cancerous prostate not only to plan the procedure in advance, but also as a guide during surgery. This particular procedure was successfully performed in 2016 using minimally invasive robotic surgery. Despite the many advantages traditional robotic surgery has, the one drawback for surgeons is losing the ability to physically feel the tissue. Having a 3D-printed replica directly in front of the surgeons gives them their sense of touch back and enables them to determine precisely where to cut. This was the first time a model was taken into surgery and used to ensure vital components of the tissue were not damaged, and complications were minimised. The model itself was made from MRI scans of the patient’s prostate, and 3D-printed in a lab at St Thomas’ Hospital. It took 12 hours to print the model, and cost as little as £150 – £200.

Replica organ models can save countless lives, but can we print real organs? This technique, known as bioprinting, allows cells to be printed into any desired shape. It seems simple enough: design the model or, better yet, take images of the patient’s organs and then print. When it comes to printing biological tissue, however, there is a lot more to consider than the model. Firstly, what is available to print the cells in? The bioinks in which cells can be printed are very limited, as bioprinters work by forcing ink out of a nozzle and onto a print bed for cross-linking. The bioinks need to be shear-thinning, meaning that their viscosity decreases as they are forced through the nozzle, while still being capable of holding their structure once they hit the print bed. This very important property minimises the shear stress placed on cells during the printing process, and leads to a higher yield of live cells in the scaffold.
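To illustrate the shear-thinning behaviour just described, the following sketch uses a simple power-law (Ostwald-de Waele) model in which apparent viscosity falls as shear rate rises; the consistency index K and flow index n are made-up placeholder values rather than measurements of any real bioink.

```python
# Toy power-law model of a shear-thinning fluid: viscosity = K * shear_rate**(n - 1).
# K and N below are illustrative placeholders, not real bioink parameters.

K = 30.0  # consistency index, Pa*s^n
N = 0.4   # flow behaviour index; n < 1 means shear-thinning


def apparent_viscosity(shear_rate):
    """Apparent viscosity (Pa*s) of a power-law fluid at a given shear rate (1/s)."""
    return K * shear_rate ** (N - 1)


for rate in (0.1, 1.0, 100.0, 1000.0):  # from nearly at rest to inside the nozzle
    print(f"shear rate {rate:>7.1f} 1/s -> viscosity {apparent_viscosity(rate):8.2f} Pa*s")
```

In this toy model the ink is thick while sitting on the print bed but flows far more easily at the high shear rates found inside the nozzle, which is exactly the combination of properties described above.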


Secondly, there is a trade-off between cell survival and the resolution of the final print. Increasing the resolution means using a smaller nozzle to print finer structures, but this places greater shear forces on the cells, resulting in a higher proportion of cell death. A fine balance needs to be struck to ensure cells remain alive during the print, but also that the desired model can be achieved. Material choice is limited further still by the need for biocompatible materials, porosity for nutrient transfer, and a cell-friendly cross-linking method to stabilise the scaffold. Hydrogels are the ideal materials to use for bioprinting; they have all the required properties, and mimic the extracellular matrix of the cells’ natural environment, leading to a higher cell survival rate.

Further complexities arise when thinking about the intricacies of organs. Multiple cell types, with their unique extracellular matrices, are often required for a single organ. The majority of tissues in the human body require vascularisation for oxygen and nutrient transfer, and incorporating these vessels into a print can be complicated. On top of this, getting a printed organ to perform the complex tasks that natural organs carry out is a challenging feat. Scientists are exploring ways in which 3D bioprinting organs may be possible in the future, but for now, it is still very much in the research stage. With huge cost implications, not only for the printer itself, but also for the bioinks, the energy to run the printer, and the equipment, time, and skills required for the cells to produce a fully functioning organ prior to transplantation, it seems an impossible task. Even without the promise of printed organs for transplantation, 3D printing has revolutionised healthcare

Sarah Lindsay is a post-doc in the Department of Surgery. Illustration by Rosanna Rann.



Generative Adversarial Networks

Shavindra Jayasekera discusses how competing algorithms can solve the problem of small biomedical datasets

Machine learning is a data-hungry field. In order to reliably find patterns in data, machine learning algorithms require extremely large datasets. Some of the cutting-edge language models such as GPT-3 even feast on the whole of the Internet. However, despite the trend of ever-growing datasets as society becomes more and more digitised, biomedical imaging remains an exception to this rule. Data from medical scans is not only costly to obtain as it requires radiographers and specialist equipment, but it is also subject to strict privacy laws. Therefore, some medical datasets can have at most 100-200 images — in contrast, one of the most popular image datasets, ImageNet, contains over 14 million annotated images. This is a major bottleneck for researchers that wish to use machine learning in medical imaging.

One way to circumvent the lack of data is to augment the available images to produce slightly different data that can be added to the dataset. For instance, given a dataset of faces, we can change the eye colour or rotate and crop the images. However, this modified dataset does not capture the full variability of human faces, and any model trained on this dataset would not perform well if it sees faces that it has not previously encountered.

Alternatively, one can use generative adversarial networks (GANs). At its core, a GAN pits two neural networks, a discriminator and a generator, against each other to generate synthetic data that mimics an existing dataset. The generator network creates data which is then provided to the discriminator network. The discriminator network then has to determine whether it is real or generated. The discriminator tries to improve its judging ability, whereas the generator tries to trick the discriminator (hence, the name ‘adversarial’). Initially, the generator produces a random output and the discriminator guesses randomly, but eventually they learn from each other and the generator can produce convincing synthetic data. In an ideal scenario, the generator mimics the data so well that the discriminator can do no better than random guessing.

But why does this battle between networks result in the generator learning the characteristics of the real data? To explain this behaviour, consider the analogy of a rookie forger of Monet paintings and a novice art critic. Initially, the forger produces blobs and the critic cannot tell the difference between a real painting and a fake. However, the critic notices that the forger is using the wrong colour palette and gains an upper hand. As a result, the forger has to adapt and learn the correct tones, forcing the critic to find another feature to distinguish the forgeries from the real paintings. Gradually, the forger learns the defining aspects of a Monet painting such as the colours and brush strokes and then the more subtle aspects such as the choice of composition. Eventually, the forger can produce work that is indistinguishable from real Monet paintings.

Although GANs were only first created in 2014, in the space of a few years, their output quality has improved to the point that they can now generate hyper-realistic human faces. GANs can even tackle abstract tasks such as converting photos into Monet paintings and turning horses into zebras. In the context of medical imaging, GANs can create various synthetic datasets such as lesion data, MRI scans, and retinal images.
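To make the generator-versus-discriminator training described above concrete, here is a minimal sketch of a GAN training loop, assuming PyTorch; the ‘real’ data is a toy one-dimensional Gaussian, and the network sizes and hyperparameters are purely illustrative rather than anything used in the studies mentioned here.

```python
# Minimal GAN sketch: a generator learns to mimic toy 1-D "real" data while a
# discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

latent_dim = 8

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),                       # outputs one fake "sample"
)
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),         # probability that the input is real
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    real = torch.randn(64, 1) + 4.0         # stand-in for a real dataset: N(4, 1)
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator update: label real samples 1 and generated samples 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator call the fakes real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

When training goes well, the generator's samples drift towards the real distribution until the discriminator's output hovers around 0.5, the 'no better than random guessing' end point described above.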



Furthermore, image classifiers which use a combination of real and synthetic data tend to consistently perform better than those trained on real data alone on tasks such as tumour classification and disease diagnosis.

However, GANs are not yet a silver bullet for the problem of small datasets. Firstly, they are notoriously difficult to train. The success of a GAN relies on the delicate balance between the performance of the discriminator and the generator. If the discriminator is too good, the generator is not able to improve, because it has no chance of tricking the discriminator and so never learns. Likewise, if the discriminator is too weak, then it cannot differentiate the synthetic data from the real data, and so the generator is not under any pressure to improve. Another problem is that the generator might cycle between a handful of realistic outputs, thereby successfully tricking the discriminator but not producing outputs with similar variability to the real data. Finding ways to stabilise the training of GANs is a very active area of research.

GANs are also dependent on the quality of the data that they are trained on. In machine learning, the term ‘garbage in, garbage out’ refers to the idea that if an algorithm is trained with bad data, then its output will be equally nonsensical. This applies strongly to GANs that are being used to synthesise data. A study in 2018 showed that GANs can hallucinate features when trained poorly. The researchers trained a GAN to convert brain MRI images into CT scans, but trained it exclusively on images without tumours. The resulting algorithm created realistic CT scans but also removed any tumours from an image, which could be very dangerous if the algorithm ever saw clinical use.

Furthermore, GANs can reinforce systematic biases within datasets. A review of publicly available skin cancer image datasets in 2021 highlighted the severe lack of darker skin types in lesion datasets. Therefore, a GAN that creates sample lesion images is unlikely to adequately represent darker skin types. Indeed, studies that try to generate lesion data with GANs rarely take darker skin tones into consideration, and therefore only exacerbate the existing inequality. If this synthetic data is then used to train algorithms to diagnose skin cancer, the resulting algorithm would not have had exposure to darker skin tones during training, which would reduce its diagnostic accuracy in patients with darker skin.

Nevertheless, GANs are pushing the boundaries of machine learning in biomedical imaging. The ability to infer from smaller datasets is an important problem that has held back machine learning to date. With data augmentation techniques such as GANs, we can hope to see an explosion in applications of data-driven approaches to medical imaging problems in years to come

Shavindra Jayasekera studies maths at Trinity College. Artwork by Biliana Tchavdarova Todorova.




FOCUS


Microprocessor Design – We're Running Out Of Ideas

Clifford Sia discusses the challenges involved in building faster microprocessors

It used to be the case that you could go out to the store every year and buy a phone or computer twice as fast as your current one. And then it was every two years. Then three. Next thing you know, you're stuck using the same computer from 10 years ago because there isn't anything out there worth upgrading to. And now you find it getting increasingly sluggish when running the latest software. So, what gives?

BUILDING A BETTER SWITCH | Believe it or not, every single digital component relies on the fast and accurate switching of a sufficient number of switches to achieve the desired output. For a long time we were limited by the lack of a miniaturisable switch. You had, of course, electromechanical relays, and later, vacuum tubes, but they could only get so small and switch so fast. Later, it was found that semiconductors such as silicon or germanium behaved as switches when exposed to impurities in specific patterns and sequences, and thus were born diodes, transistors, thyristors, and so forth. It was only a matter of time before someone realised that it was possible to print these switches on the same piece of semiconductor using techniques borrowed from the world of lithography, and then to join them up with wires to make any arbitrary circuit. And after some stumbling around, the transistor was discovered to be increasingly power efficient the smaller it was built, due to a happy coincidence of the scaling laws driving its operation. So every 18 months or so, in accordance with Moore's Law, your friendly local fab would figure out a way to print smaller transistors on a slice of silicon. As these were necessarily smaller and more efficient, every year microprocessor design teams would scramble to use the extra transistors to make their designs go faster. Initially, there was heady progress, as even a clunker of a design could be reasonably fast as long as the transistors could switch fast enough to compensate for all the design flaws. But this did not last.

The first sign of trouble came when the transistors got too small to be printed. Lithography remained the only efficient way to print transistors en masse — you had light shining on a mask that cast a patterned shadow onto a photosensitive coating on the silicon wafer, rapidly creating any arbitrary pattern on the surface that you could use to guide subsequent processing steps. This worked fine until the individual lines on the mask became smaller than the wavelength of the light, at which point the mask essentially turned into an expensive diffraction grating that blurred everything. Initially, the fix was simple enough — just find a laser that could generate a shorter wavelength of light with enough power. But this is easier said than done.
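To get a feel for the diffraction limit described above, a common back-of-the-envelope estimate is the Rayleigh-style resolution formula, in which the smallest printable feature scales as the wavelength divided by the numerical aperture of the optics, times a process-dependent factor. The values below are ballpark figures chosen purely for illustration, not the parameters of any particular lithography machine.

```python
# Rough feature-size estimate: k1 * wavelength / numerical aperture.
def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.35):
    # k1 is a catch-all process factor; 0.3-0.4 is a commonly quoted ballpark.
    return k1 * wavelength_nm / numerical_aperture

print(min_feature_nm(193, 1.35))   # deep-UV immersion lithography: roughly 50 nm lines
print(min_feature_nm(13.5, 0.33))  # extreme ultraviolet: roughly 14 nm lines
```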


Each new wavelength required the development of new photoresists sensitive to it, new materials to block or reshape the light, and new equipment and new process flows to account for these differences. Eventually, it got to the point where it was just cheaper to work around the diffraction limit, whether by adding tiny fillets to the mask pattern to negate the effects of diffraction on the resulting light distribution, or by using the initial pattern as a template to create smaller patterns. This came to a head in recent years, when shrinking the wavelength into the extreme ultraviolet became the only sane way to continue building smaller structures reliably at an acceptable speed. However, this is inherently impractical, as the light involved is so energetic that everything, including air, is opaque to it, so the entire process must take place in a vacuum chamber using mirrors to control the light. The light itself ended up being generated by vaporising droplets of molten tin with a multikilowatt laser as they drip past a special collecting mirror. Moreover, to enhance the efficiency of the mirrors, the light could not hit them at more than a grazing angle, resulting in a narrow field of view that limited how much area could be exposed at any one time.

The next bit of trouble arose when the transistors became so small they started behaving like wires, and the wires became so small they started behaving like resistors. Replacing the aluminium in the wires with copper helped, although now additional care had to be taken to lay down barriers to keep the copper from diffusing into the silicon and destroying all the transistors built on it. But then the copper wires had to become so small that electrons could barely tell the difference between the wires and the empty space around them, causing their apparent resistance to shoot up disproportionately. Cobalt, indium, and molybdenum were tried, as their smaller grain boundaries compelled electrons to follow the wire boundaries more scrupulously, but their low heat conductivity, high coefficient of thermal expansion, and fragility proved no end of trouble for foundry companies. Meanwhile, the issue of transistors not working below a certain size was neatly sidestepped by placing them on their side and by using special coatings to enhance the electric fields there, among other tricks. However, these serve only to delay the inevitable. To be sure, improvements from the manufacturing side are still possible, but they become increasingly expensive and impractical. To drive further improvements in electrical performance, there has been a push to utilise vertical space more effectively. Hence, the transistors in use were first changed from simple 2D structures that could be printed onto a surface into sideways finFETs that had to be etched. And now manufacturers are proceeding to the logical conclusion of arranging these transistors as 3D stacks



of wires or sheets. This, too, can only improve the switching ability of the transistor so far, and now manufacturers are already looking into alternative ways of shrinking the wiring between these miniature transistors without increasing the resistance too much. Thus far, the approach has been to rewire them so that power can be delivered from directly above, or by distributing power upwards from the other side of the chip, where there are fewer constraints on how big the wires can get. But now we are faced with a highly complex process that has nearly impossible tolerances, is almost impossible to evaluate due to the sheer number of structures that have to be inspected, and requires more hardware investment for the same incremental increase in throughput. And so we are seeing a trend where performance continues to increase, albeit more slowly than before, without translating into a cost reduction for existing hardware. To make matters worse, these processes now take so much time, money and expertise to set up that only a few companies in the world remain capable of keeping up with the bleeding edge, and even then it takes these companies so long to respond to any change in demand that supply and demand are essentially uncorrelated. The result is regular boom and bust cycles, which is simply unsustainable in such a demanding industry. To some extent, we have seen manufacturers pushing back by requiring customers to prepay for capacity years in advance, but this only kicks the problem down the road.

BUILDING A BETTER PROCESSOR | Meanwhile, in the world of the chip designer, things started going wrong at about the same time. It used to be the case that designers relied solely on Moore's law for massive speed improvements, given that the first microprocessors made a lot of design compromises to compensate for the low number of transistors per chip available to them. Moreover, the low expectations of consumers at the time, relative to what was actually possible, meant that there was no real need to optimise these chips. Even then, there was obvious low hanging fruit when it came to expanding the capabilities of these early chips, and it was rapidly picked once it became possible to implement the necessary features. These included useful things such as adding internal support for numbers larger than 255, support for larger memory sizes, or the ability to execute instructions in fewer clock cycles. There was also a drive to integrate as many chips as possible into the central processing unit, so instead of a memory controller, a math coprocessor, and so forth, your central processing unit could now do all that and more. But it wasn't immediately clear where to go from there. The aforementioned toxic combination of high resistance and high switching power soon meant that chip designers were faced with the uncomfortable fact that they could no longer count on raw switching speed to drive performance, and that each increase in complexity had to be balanced against the increased power consumption. With general purpose processors hitting scaling limits, it now made sense to create specialised chips targeting specific workloads to avoid the overhead of general purpose


processors. And thus the concept of accelerator chips would emerge, the most prominent being the graphics processing unit. Throughout all this, the preferred processor design was also in flux, as opinions differed on how to push chip design further. One could revamp the instruction set to make it easier for the processor to decode what had to be done from the instruction code supplied to it, or one could give the programmer the tools to tell the processor how to run more efficiently. Others would prefer to spend their transistor budget on adding support for complicated operations such as division or square roots, so as not to rely on inefficient approximations of these operations using obscure arithmetic tricks. In what is now termed a complex instruction set architecture, all sorts of new instructions were added on an ad hoc basis to natively implement various simple programming tasks. It soon became apparent that decoding these complex instruction sets was extremely energy intensive, and there was then a push in the other direction, towards the fewest possible instructions that could be decoded in the simplest possible way, with the net result finding wide use to this very day in mobile computers.

Then there was the concept of tasking the programmer, or at least the programmer writing the compiler that converts programming code into machine readable code, with thinking about how to shuffle data around optimally. The very long instruction word (VLIW) paradigm makes this explicit by requiring the programmer to group instructions into blocks that are then executed simultaneously. But this belies the difficulty of finding instructions that can be executed simultaneously, and of keeping track of how long each instruction will take to complete. In a related approach, the single instruction, multiple data (SIMD) paradigm allows a single instruction to perform operations on multiple streams of data at the same time. Instead of adding single pairs of numbers at a time, you could now add entire arrays of numbers to each other in one go. While these instructions would see only limited uptake in general purpose processors, they found widespread adoption in specialised processors such as graphics processing units and digital signal processors, which target highly parallelisable workloads, such as image processing, that involve iterative computations on large amounts of data. The opposite was also considered, and computer architectures that could directly interpret high level code were built. But, as they locked you into a single programming language and were difficult to debug, they mainly exist today as a paradigm in which one can safely run untrusted code by executing it in a simulated computer that can only run code of a specific type.

And yet, painstaking design overhauls would continue to be made to general purpose processors of each design paradigm, and later to other specialised processors, to help make them faster. One early step was to break up each computation into smaller, simpler stages that could execute faster. However, this created a whole host of potential bottlenecks that now had to be considered when designing a chip in order to avoid leaving performance on the table.
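As a loose illustration of the SIMD idea mentioned above, the sketch below contrasts adding two arrays one element at a time with issuing a single array-wide operation. NumPy is used only as an analogy here: its array expressions typically dispatch to vectorised routines that process many elements per instruction, which is exactly the trade the SIMD paradigm makes.

```python
import numpy as np

a = np.arange(100_000, dtype=np.float32)
b = np.arange(100_000, dtype=np.float32)

# Scalar-style loop: one addition per iteration.
c_scalar = np.empty_like(a)
for i in range(len(a)):
    c_scalar[i] = a[i] + b[i]

# Array-style form: one expression covers every element, letting the
# hardware work on many of them at once.
c_vector = a + b

assert np.allclose(c_scalar, c_vector)
```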



Instead, for reasons of ease of use and compatibility, most general purpose processors would deal with the instruction decoding issue via the more conservative route of adding an additional stage to translate the instruction set into something more scalable. Processors could also now run multiple instructions simultaneously, allowing them to execute different parts of the same linear strand of code at the same time, and they also gained the ability to execute multiple programs simultaneously to take full advantage of available resources. Speculative execution was also introduced at this time, in which the CPU would guess how a decision would pan out and calculate the resulting implications even before the decision had been reached. Of course, if it guessed wrongly, there would be speed and security penalties. However, this could only be scaled so far, as this approach required numerous energy intensive connections to be made across different parts of the chip.

At the same time, processors also began to outperform the storage they ran from, as it turns out that reliably storing data is inherently slower than performing an operation, especially since the travel time of signals, limited by the speed of light, is no longer negligible at these scales and restricts how quickly information can be passed to the processor. Thus, it became necessary to add tiers of faster and nearer memory to the system, as improvements in manufacturing processes allowed the extravagant waste of millions of transistors on the processing die on something as mundane as storage. But this would again run into a wall, as large caches require more energy and take longer to access, while occupying expensive die area, all while the other parts of the chip had to be scaled up in order to utilise the additional bandwidth efficiently.

Thus dawned the multiprocessor era, as chip designers realised that instead of adding more complexity for marginal benefit, it sufficed to provide more processors per chip for a mostly linear increase in benefit. This wasn't always useful, as software had to be rewritten to take advantage of the extra threads, and even when developers did so, Amdahl's law limits the maximum speedup to the inverse of the proportion of the task that cannot be parallelised. While this is not an issue for massively parallel tasks acting on arrays, such as rendering or video encoding, for most desktop software or games only a 2–4x speedup can be seen. Ultimately, this would be limited by the amount of power needed to run all the cores at a reasonable speed, and by the fact that a large enough chip would eventually be impossible to manufacture due to the larger number of things that could go wrong in the manufacturing process.

The problem now is that chip designers and semiconductor manufacturers alike have painted themselves into a corner where there are no longer obvious ways to provide massive improvements over existing technology under existing constraints. As the old adage goes, one must pick between power, performance, and area (and hence cost) when designing a chip. Power can no longer be ignored, since the high resistance of the wires and plateauing improvements in transistor efficiency mean that only a fraction of the transistors on a microprocessor can be used at any given time, lest the entire chip melt.
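Amdahl's law, mentioned above, is easy to state in code. In the sketch below, the parallel fractions are made-up but representative values, chosen to show why most desktop workloads top out at a 2–4x speedup while rendering-style tasks keep scaling with more cores.

```python
# Amdahl's law: best speedup on n cores if a fraction p of the work parallelises.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

print(amdahl_speedup(0.75, 8))     # ~2.9x: a desktop-style workload on 8 cores
print(amdahl_speedup(0.75, 1000))  # ~4.0x: piling on cores barely helps
print(amdahl_speedup(0.99, 1000))  # ~91x: a highly parallel rendering-style task
```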


Thus, while achievable transistor density continues to increase, transistor utilisation is facing a hard wall, as it turns out that it is impossible to efficiently cool something putting out more heat per unit area than a nuclear reactor. There are ways around this, of course, but when we start talking about making chips so thin that flexibility becomes an issue, or more power hungry than a space heater, or drilling cooling channels into them, one can't help but raise an eyebrow. Nor can area be ignored, due to the limited availability of leading edge processes (the machines needed to make them happen can only be produced and installed so quickly) as well as the high cost involved in manufacturing each wafer.

RETHINKING THE PROCESSOR | Can we discard our constraints and start afresh? One natural solution is to adopt a heterogeneous computing approach, in which we split up a processor into a grab bag of specialised coprocessors to get the best of all worlds while keeping total chip cost within bounds. Thus, you would have a reasonably fast CPU for general computation, which offloads massively parallel tasks such as graphics processing to a GPU that is slower but capable of performing many concurrent computations, or to a DSP that performs basic image processing functions. Later, the need may emerge to incorporate still more chips to speed up different types of calculations, such as deep learning or cryptography operations, essentially heralding a return to the era of the coprocessor chip. Another approach, as shown by the Mill architecture, offloads the task of rearranging the input instructions to the programmer so that the processor can focus on sequential execution. Alternatively, the necessary high bandwidth links can be scaled up further in an advanced packaging paradigm. In the case of HBM memory, this was used to give processors faster access to memory, so that less time is spent waiting for new data, while 2.5D and 3D packaging has also allowed companies to pick the optimal process with which to print different parts of a chip. We can scale bandwidth further by bolting cache dies onto a processor, and in fact it makes sense to try and disaggregate processors into smaller chiplets that remain tightly interconnected. Among other benefits, defect rates would be reduced due to the chiplets' reduced complexity and the possibility of manufacturing them under more specialised conditions, while vertical stacking allows long, energy-hungry interconnections to be avoided. We can also rethink the strict von Neumann hierarchy, in which every data transfer must pass through the CPU and so results in unnecessary data transfers to and from it. Instead, approaches such as direct memory access allow us to bypass the CPU when passing data between devices. Meanwhile, in the in-memory processing paradigm, one tries to bring the data to be processed as close to the processor as possible, although its utility for more complicated workloads is limited by the expense of memory fast enough to make it worthwhile. In the processing-in-memory approach, this is taken to its logical conclusion by fully integrating the processor and the memory, but to date these approaches have yet to take off, as the processes for manufacturing fast processors and fast memory tend to be mutually exclusive.



RETHINKING THE SOFTWARE | In the end, all of these efforts would come to naught without software and developers to make use of all these new capabilities. There is a need for a complete rethink of software paradigms to take advantage of this brave new world emphasising parallelism and an economy of data flow. To some extent, there are new programming paradigms that provide an alternative to the classic linear control flow, such as graphics card programming interfaces that expose the tools needed for a programmer to run multiple copies of a simple program in parallel across the available compute resources. We also need to consider ways of improving the translation from programming language to machine code, as the compilers that perform it can always be improved, and all programming languages should provide well defined ways for the programmer to peel back and bypass the abstractions inherent in them in order to achieve speed and safety improvements. The shift towards ever greater abstraction in human-computer interactions has, to some extent, caused a lack of curiosity about low level computer design. This affects us at all levels, as it not only leads to a dearth of expertise on how to improve on existing microprocessor designs, but also creates a situation where inefficient code is written due to a lack of appreciation for the nuances of the underlying hardware. Programmers need to be made aware of potential inefficiencies in their code, and of the tools with which they can identify and fix these in a safe manner. Indeed, our failure to recognise this is leading to unsafe, slow code that threatens to undo the progress that we have made thus far in building faster computers.

And this is the situation we find ourselves in today. We have faster hardware, but it still falls far short of our ever increasing expectations of what a computer should do. This, in turn, is fed by the expectation that new hardware should be able to do more, even as that expectation results in complex, poorly optimised software that runs slowly on less capable hardware. And, with the manufacturing process becoming ever more complex and expensive, at some point there will be a reckoning when we need to rethink our expectations of what our hardware is capable of, and hopefully this will bring an increased appreciation of the underlying design and how it can be leveraged to its full potential. And then, just maybe, your devices might just stop getting slower with each update

Clifford Sia studied medicine at St. Catharine's College but happens to have a passing interest in computers and also helps run the BlueSci website. Artwork by Barbara Neto-Bradley.





Pavilion: When Braille Meets Colours

Many changes in a person's life can prompt thinking outside the box. In particular, our brains can adapt in various, unexpected, and creative ways after vision loss*. In this interview, Pauline Kerekes talks to Clarke Reynolds, a visually impaired artist who has invented a new form of visual art.

*For reference, see the books “An Anthropologist on Mars” and “The Mind’s Eye” by Oliver Sacks.



Could you tell us about your story: when did you get interested in art and when did your visual impairment start?

I lost sight in one eye at age 4. Around that time, my local school took me to a gallery that was close by. No one in my family is artistic but as soon as I entered this gallery, I knew I wanted to become a professional artist. I left school at the age of 14 due to kidney problems, went back to school at age 18 and got a diploma in art design and a degree in model making. I soon after found a job as a dental model maker, but after two years I started to get shadows in the other eye and doctors told me that eye was going blind. That was 11 years ago, and you’d think as a visual artist that would be the worst thing that could happen to you, but for me I thought I would still be an artist and it wouldn’t impact what I want to do. I have a very creative imagination and I thought the only things that matter are my mind and my hands.

In most cases, visually impaired people can have many different types of residual vision. How is it for you?

Yes, 93% of visually impaired people can actually see, just in different ways compared to sighted people. For instance, some people see at the peripherals, some see in lines, some see better or worse depending on the light conditions. My eyesight is like looking under water with light and shadows mingling together. Every day is different as my eyes adjust to light conditions: when the bright sunlight hits my eyes, they go black. I don’t know what normal sight is as I’ve only had the use of one eye since I was a young child. I’ve bumped into doorframes a lot, there is no depth perception for me for sure! Even though I did a degree using straight lines and grids for model making, I still don’t know what a straight line looks like... I rely on the materials to guide my art. I have 5% of true vision left, which is the top right in my left eye, and that’s why you see me bobbing my head around as I find the sweet spot!

Would you say you use your tactile sense more than sighted people do?

There is a misconception about the acuity of touch in blind people! We are used to moving our heads much more when we are blind, and we get what is called early degeneration of the vertebrae in the neck. That results in pins and needles in the fingers. So, my sense of touch is awful, I can’t feel very well at all. For instance, I can hold a very hot cup of tea without feeling it because my fingers are completely numb.

What type of art do you do?

I’ve always been a big fan of pointillism and dots are everywhere in my art. When I explain to people how I see, especially through the right eye, I say it’s like looking at a thousand dots. A few years ago, I learnt Braille and thought: why are dots and Braille not an art form? For the last three years, I’ve been known as the blind Braille artist. I created a colour coded version of Braille. It’s based on the idea of the association between a


pattern and a colour, so that the brain immediately thinks about the pattern when seeing the colour. I used colour theory to map out Braille letters. It's 26 colours, one colour per letter. When you see my art, you can decode it by eye with the colours, but you can also touch it and decode it using Braille. There’s a narrative in my art. In my last exhibition, called ‘Journey by Dots’, all the dots were neon-lit and the whole exhibition was in the dark, so you’re guided by the dots. My art comes in various sizes as I explore the nature of the patterns. In this last exhibition I painted twenty thousand dots. My aim is to exceed a hundred thousand dots in one piece, and even go beyond that.

Your work is based on colour theory. How do you perceive colours?

In my life I first lost the ability to see blue and green; that’s normal, these colours are the first to go in blind people. Then I lost the ability to see in the red spectrum, however I still have the memory of certain colours like pink. And yellow is the last colour people who are becoming blind can see (and the first colour new-born babies can see, interestingly!). To be able to still work with colours, I trained my brain to recognize tonal differences in the grayscale I see.

Becoming blind enhanced your creativity. This issue of BlueSci is about pushing boundaries: how would you say you push boundaries?

Yes! I had to become blind to start learning Braille as an artistic language. Now that I have lost my sight, I am what people call a ‘disabled artist’, and there is a big stigma attached to disability art in mainstream art. Pushing boundaries in terms of going beyond what society associates with disability: as a disabled artist I want to be recognized alongside people like Antony Gormley or Banksy. Pushing boundaries in terms of gallery accessibility too. A gallery is a creative space, so why should galleries be like libraries where people can’t talk? I want people to be able to talk in front of the art. ‘#talkInFrontOfArt’, that’s what I would say!

Conclusion

What was striking while talking to Clarke was his ever-renewed creativity, not limited by the visual constraints or learnt rules about space or dimensions that sighted people usually have. If the essence of an artist is to think out of the box, then the term disabled artist should be questioned: where is the disability when one can not only reach the level of creativity of other artists, but also create bridges so that anyone can access his art?

Pauline Kerekes is a post-doc in neuroscience at the physiology, development and neuroscience department in Cambridge who helps coordinate the art behind BlueSci. Photo credits to Duncan Shepherd. For more information on Clarke Reynolds, he has a personal website (https://www.seeingwithoutseeing.com) and he hosts a podcast called Art in Sight. Hear him explain his art at https://youtu.be/eZV8lf-mpIY.



The Decarbonisation Challenge

Clifford Sia explains the difficulty of making the world run on renewable energy

In recent years we have witnessed extreme heat driving up global temperatures; sparking record numbers of forest fires; and prompting drastic shifts in rainfall patterns, causing both droughts and floods. And it has been known for decades that the emission of greenhouse gases from the wholesale burning of fossil fuels is trapping heat in the atmosphere and driving these global climate shifts. Despite this, there is a relative lack of concern about taking action to halt our reliance on fossil fuels. Part of this inaction, however, can be attributed to the sheer extent to which existing infrastructure must be reengineered.

One of the foremost requirements for an economically sustainable transition is the need to ensure the stability of the electrical distribution grid. As nearly all power grids run on alternating current, current must be fed into the grid and withdrawn at the right points in time: any variation in frequency or voltage will interfere with this timing, with potentially disastrous implications for anyone connected to the grid. Thus, grid stability depends on a carefully choreographed flow of energy from producers to consumers, ensuring that power is supplied at exactly the rate it is consumed. In the past, this was not an issue, as responding to fluctuations in electricity demand was a simple matter of burning more or less fossil fuel. But now, sources of energy such as solar or wind have upended the old axiom that power plants always produce exactly as much energy as is expected of them, since their output is by definition dependent on the weather, which is, if anything, inversely correlated with energy demand. It is possible to curtail production from these sources when there is a surplus, but this represents a needless waste of the resources used to build that excess capacity, which may still be unavailable when it is most needed. This creates a situation where the price of energy fluctuates to a greater extent throughout the day, and even across the seasons, as prices fall whenever wind and solar are abundant and rise when they are scarce. These fluctuations can be so severe that prices may turn negative or increase by several orders of magnitude, and they can ultimately lead to power outages if they exceed the local grid's ability to compensate. We see, for example, a ‘duck curve’ in areas with high deployment of solar energy, where energy prices peak during the early morning and late evening, when solar energy is unavailable but electricity demand has started to increase.

Of course, these patterns of local oversupply and shortage could be mitigated to some extent with grid scale energy storage. However, since these imbalances can persist across days and months, it becomes necessary to explore different


types of energy storage in order to cost effectively target this issue over different time scales. Thus, for instance, grid scale batteries are ideal for blunting the impact of transient mismatches in supply or demand, but are expensive to build. Meanwhile, other techniques such as gravity or thermal storage can cost effectively store large amounts of energy, but are slower to respond and less efficient. These, however, can scale to the sizes needed to offset seasonal deficits in supply or demand. Instead, it is often more useful to improve the interconnectedness of power grids by constructing more electrical transmission lines, so that excess generating capacity in one place can be put to work to cover a deficit elsewhere. Moreover, the profitability of generating electricity from green energy sources such as solar, wind, or geothermal is determined by the immutable requirements of local geography, land availability, and so on, rather than by fossil fuel availability and the cost of delivering electricity. Thus, electrical transmission lines often have to be built to connect these new power stations to traditional sources of electricity demand. Even then, this may not always be possible for political or practical reasons. For example, the state of Texas has historically refused to integrate with surrounding power grids to avoid regulatory oversight, while Japan has two power grids with very limited interconnection capacity, as they run at different frequencies.

From a practical standpoint, the losses in long distance transmission lines also meant that it was always more economical to build power plants as close as possible to wherever the energy they produced would be required, and then to design transport infrastructure around the logistics of bringing the necessary quantities of fossil fuels there. One solution would be to operate transmission lines at ever higher voltages in order to minimise the current that flows through them, thus mitigating resistive losses without having to lower the resistance itself (which would require larger diameter cables that are more expensive and need stronger supports). However, as voltages approach the megavolt range, ever more care has to be taken to ensure adequate clearance around each wire in order to prevent arcing between wires, and components must be redesigned to handle the higher voltages. A better option has been the adoption of high voltage direct current for these transmission lines. This sacrifices some simplicity in the voltage conversion process in exchange for decreased resistive losses at a given voltage, as well as the ability to transfer power between unsynchronised grids.
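A toy calculation makes the voltage argument above concrete: for a fixed amount of power delivered, the current falls in proportion to the voltage, so the resistive loss, which grows with the square of the current, falls with the square of the voltage. The line resistance and power figures below are round numbers chosen for illustration only, not data for any real transmission line.

```python
# Approximate fraction of delivered power lost as heat in the line.
def loss_fraction(power_w, voltage_v, line_resistance_ohm):
    current_a = power_w / voltage_v                       # I = P / V
    return current_a**2 * line_resistance_ohm / power_w   # I^2 * R relative to P

P = 1e9   # 1 GW delivered (illustrative)
R = 1.0   # 1 ohm of line resistance (illustrative)

for V in (110e3, 400e3, 800e3):
    print(f"{V/1e3:.0f} kV: {loss_fraction(P, V, R):.2%} of the power lost")
```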



Yet another issue stems from the difficulty of reducing the carbon footprint of transportation. Weight and volume are at a premium, as larger and heavier vehicles are more complicated to build and power. In fact, fossil fuels are uniquely suited for this task, as they are very energy dense and are consumed as they are burned, meaning that vehicles powered by them can be small and light, with the overall weight of a vehicle decreasing as the fuel is used up. However, this results in a significant proportion of anthropogenic carbon emissions. Electrification of the transportation sector is one solution, as it enables the centralisation of energy production, which improves efficiency, and allows the decarbonisation of the transportation and electricity sectors to occur in lockstep. Even then, in applications such as planes or trucks where energy density is crucial, the energy and power density of batteries still leaves much to be desired. In these cases, it might be better to fuel them with synthetic fuels of reasonable energy density. We can, for instance, convert crops into biofuel, although the entire process is highly inefficient compared to what can be achieved with solar panels, and competes with human food production. We can also directly manufacture our own fuels, although this is itself problematic, as we currently lack the means to produce large quantities of synthetic fuels cost effectively without relying on fossil fuels as a feedstock. Just as the combustion of these fuels into carbon dioxide and water releases large quantities of energy, so the methods needed to regenerate these fuels require just as much energy.

Indeed, the prospect of decarbonising an economy is a daunting task that requires the reengineering of infrastructure that we take for granted, as well as the rethinking of the assumptions behind its construction. While it can be done, the cost and complexity of doing so mean that, in the absence of significant government intervention, market forces will conspire to result in inaction. And yet — as the increase in extreme weather events in recent years has shown, and will continue to show — the impact of climate inaction will only continue to grow until concrete steps are taken to initiate these fundamental changes

Clifford Sia studied medicine at St. Catharine's College. Artwork by Sumit Sen.



Frontiers at the Large Hadron Collider

Manuel Morales Alvarado writes about the importance of the work at the LHC in the progression of high energy physics and its impact on wider society

Close to the beautiful Jura mountains, on the border between France and Switzerland, we can find the Large Hadron Collider (LHC): the largest and most powerful particle accelerator ever built. The LHC is operated by CERN, the European Organization for Nuclear Research, and it consists of a 27-kilometre ring buried in a tunnel 100 metres underground, in which beams of subatomic particles collide head-on at almost the speed of light. Trillions of protons are set in motion with electromagnets 100,000 times more powerful than the Earth's magnetic field and go around the tunnel 11,000 times per second. At the LHC, with these magnitudes that challenge the imagination, high energy physicists are trying to understand the laws of the smallest, most fundamental constituents of matter.

High energy physics is the branch of physics that deals with elementary particles and their interactions through the four fundamental forces of Nature: electromagnetism (which describes the behaviour of charged particles and light), the strong force (which binds elementary particles to form protons and neutrons), the weak force (responsible for the radioactive decay of unstable particles), and gravity. Except for gravity (which is too weak to be probed at the LHC), all of these forces can be described by the Standard Model (SM) of particle physics. In the SM, these fundamental forces have carriers – called gauge bosons – and there are twelve particles of matter that interact through these carriers: six quarks (sensitive to the strong force) and six leptons (not sensitive to the strong force). The last piece of the SM is the Higgs boson, which is responsible for giving mass to the other elementary particles. The SM is one of the most successful scientific theories ever devised, but we know that it cannot be the whole story. There are still many big unanswered questions that the SM cannot explain, like how to describe gravity as a quantum theory, what dark matter is, why there is matter at all in the Universe, and many others.

At the LHC, high energy physicists test the SM using particle accelerators. Charged particles (protons, specifically) are accelerated through very powerful electric and magnetic fields, and then collided into each other at immense energies – the higher the energy of the collision, the better the resolution we have to study the process. The LHC hosts four main experiments that analyse these collisions: ATLAS, CMS, ALICE, and LHCb. They provide measurements of different particle processes to tackle different problems. ATLAS and CMS are general purpose detectors, as they study a very large range of subareas in particle physics, including different sectors of the SM and the properties of the Higgs boson, and also dark matter, extra dimensions, and physics beyond the SM. ALICE is specialised in the study of heavy-ion collisions that result in quark-gluon plasma: the state in which matter would have been


just after the Big Bang at the beginning of the Universe. Finally, the LHCb experiment is dedicated to studying the differences between matter and antimatter. These four experiments can be understood as extremely powerful ‘microscopes’ that help us understand the physics that is realised at scales of a millionth part of a millionth part of a millimetre. Although the SM has been tested time and again, it has not been clearly broken so far. We have not found convincing signals of supersymmetry (which proposes that every fundamental particle has a so-called superpartner with the same properties but different spin), signatures of dark matter, or extra dimensions.
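The link between collision energy and the length scale probed can be estimated with a simple uncertainty-principle argument: the resolvable distance is roughly hbar times c divided by the energy involved. The snippet below is only an order-of-magnitude sketch, and the example energies are illustrative choices rather than precise statements about any given measurement.

```python
HBAR_C_MEV_FM = 197.3  # hbar * c in MeV * femtometres

def probed_length_m(energy_gev):
    length_fm = HBAR_C_MEV_FM / (energy_gev * 1e3)  # convert GeV to MeV
    return length_fm * 1e-15                        # femtometres to metres

print(probed_length_m(100))     # ~2e-18 m: a millionth of a millionth of a millimetre
print(probed_length_m(13_600))  # ~1.5e-20 m at the full 13.6 TeV collision energy
```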

Still, as history has taught us, the ‘absence’ of discovery is a discovery in itself. Consider, for example, that Einstein’s theory of special relativity would not exist without the results of the Michelson-Morley experiment, which did not detect any signal of the aether – a supposed medium for the propagation of light. The importance of the LHC for the development and progress of particle physics since it started operating in September 2008 cannot be overstated. Its most famous discovery has been the detection of the Higgs boson: the final piece of the SM.


This particle, postulated almost 50 years before its detection by Peter Higgs, François Englert, and Robert Brout, is extremely important as it is responsible for giving mass to the other elementary particles. The experiments of the LHC provided definitive evidence to confirm its existence, but their discoveries do not end there. They have also found 59 new hadrons: new composite states of elementary particles (much like the well-known proton or neutron, but of far higher energy). The LHC has also studied charge-parity asymmetry, which describes how particles would behave if we swapped their electric charges (and other additional features, known as quantum numbers) and looked at them from an opposite orientation.

The LHC has been fundamental in the construction and consolidation of theories of fundamental interactions over the past decade. However, particle physics is not just about the past, but about the future as well. The LHC will continue to break records in the coming years. Now, after a three-year hiatus for maintenance and upgrades, it is back in operation. The so-called ‘Run 3’ has begun, and it will last for four years. In this time, particle collisions will be generated at the unprecedented energy of 13.6 trillion electronvolts, allowing us to probe smaller spatial scales and expand our knowledge about potential heavier particles and

effects that become visible at these higher energy collisions. Data will be collected in numbers never seen before: the experiments of the LHC will obtain more measurements in this individual Run than in all the previous runs combined, which will give us very robust statistics with which to probe our theories. But the story does not end there either. After Run 3, another upgrade, even more powerful, will take place: the High Luminosity LHC phase. It will increase the amount of data we collect from particle collisions by a factor of 10 and, once the era of the High Luminosity LHC is over, particle physicists are even discussing the idea of building a completely new collider: the Future Circular Collider (FCC). It would be a ring of about 100 kilometres in circumference and would run at energies ten times higher than the LHC in its High Luminosity phase.

Pushing boundaries at the LHC is not only useful in high energy physics, but also for the betterment of wider society. Intrinsic human curiosity comes with extra benefits other than just the pleasure of discovery and understanding, such as the development of new technology. Particle accelerators are extremely complicated, and the sophisticated machines and experiments at the LHC obviously cannot be ordered from a catalogue but have to be manufactured from scratch. These curiosity-driven endeavours end up producing some of the most advanced technology on Earth, which has then been taken up, developed, and made widely available by industry many times before. For example, this has happened with technology developed at CERN for the manufacture of semiconductors used in laptops and supercomputers, as well as extremely powerful big data processing and storage techniques. Electron and proton beam therapy, used all over the world, has also grown out of accelerator technology developed at CERN. Furthermore, if you are reading this online, you are benefiting from the World Wide Web, a system originally developed by CERN to share data.

Pushing the physical and technical frontiers at the LHC has meant the discovery of some of the most fundamental facts of our Universe and access to cutting-edge technology that would have been impossible to obtain otherwise. Let us keep testing and pushing the boundaries of what we know since, as the great polar adventurer Sir Ernest Shackleton would have said, ‘It is in our nature to explore, to reach out into the unknown. The only true failure would be not to explore at all’. We can only guess at what wonders may lie ahead

Manuel Morales Alvarado is a second year PhD student in the High Energy Physics group of the Department of Applied Mathematics and Theoretical Physics and member of the PBSP collaboration. Illustration by Caroline Reid.



Nuclear Fusion – Harnessing the Power of Stars

Shikang Ni talks about the future of nuclear fusion energy and where we are now

Nuclear fusion in the core of stars has allowed them to shine brightly for billions of years. About a century ago, the idea of producing energy by creating a miniature star on Earth sounded like something straight out of science fiction. Today, nuclear fusion has become the holy grail of limitless clean energy. While mankind has gotten pretty good at extracting energy from the chemical bonds in fossil fuels, we have reached a point where its by-products such as carbon dioxide are threatening to warm the planet beyond a point of no return. Hence, the development of renewable clean energy to cure our carbon addiction is more pertinent than ever; fusion energy could be the best antidote. Nuclear fusion essentially aims to recreate the condition of the Sun on Earth. The Sun produces energy by smashing hydrogen atoms together in its core under intense heat and gravity to form nuclei of helium, in what is called the proton-proton reaction. This reaction emits an enormous amount of energy, which powers the entire solar system. In laboratories, the ingredients for fusion are not just any hydrogen, but specific isotopes with extra neutrons, namely


deuterium and tritium. For nuclear fusion to occur, the nuclei must have enough energy to overcome the strong electrostatic repulsion between protons to get close enough for the attractive but short-ranged strong nuclear force to dominate and snap the nuclei together. Hence, we have to subject the hydrogen gas to immense pressures and extreme temperatures of up to 100 million degrees Celsius (6.5 times hotter than the core of the Sun), creating plasma in the process. The way to do it essentially involves squeezing and heating. One method is electromagnetic confinement that uses strong magnetic fields and microwaves together with neutral beam injection, which shoots highly accelerated neutral particles at the gas. In short, it is a microwave on steroids. Another method is inertial confinement which uses pulses from superpowered laser beams to subject the surface of a deuterium-tritium fuel capsule to tremendous pressure and temperature, causing it to implode and achieve fusion conditions. Once the plasma is created, it is suspended in space by strong magnetic fields created by superconducting electromagnets. The fusion reaction releases energetic neutrons, carrying 80% of the energy from the reaction with them. These neutrons are what is harvested to generate electricity.



Currently, there are two main models of nuclear fusion reactor: the Tokamak and the Stellarator, each with its own unique set of challenges and advantages. The key difference between them is their geometry. The Tokamak is toroidal in shape and symmetrical. The Stellarator, on the other hand, has a complex, helically twisted structure. Most fusion reactors now are Tokamaks. The largest at present is the Joint European Torus (JET) in Oxfordshire. Others include the International Thermonuclear Experimental Reactor (ITER), currently under construction in southern France, and the Experimental Advanced Superconducting Tokamak (EAST) in China. The most advanced stellarator is the Wendelstein 7-X (W7-X) in Germany. Fusion research is thus an ongoing international effort. ITER, a multinational project involving 35 nations, aims to create the world's largest and most powerful reactor. It is twice as large as JET and aims to produce ten times its input energy. As impressive as that may seem, the immediate goal of ITER is not energy production; this 22 billion dollar project is a grand science experiment designed to gather enough information to allow the next generation to build the DEMOnstration Power Plant (DEMO), which is industry-driven with the aim of commercialisation by 2050. To give an idea of the technological capability going into fusion reactors: large superconducting electromagnets are cooled with liquid helium to temperatures just shy of absolute zero, while just metres away lies the intensely hot plasma. The reactor has to be built to withstand the largest temperature gradient in the known universe.

Nuclear fusion, if realised, has many advantages over current alternative sources of energy. It is convenient to compare fusion with fission power. Fission is tainted by poor public acceptance, as various incidents in the past, particularly those at Fukushima and Chernobyl, have caused fear among the general public. In addition, it is plagued by the problems of radioactive nuclear waste disposal and the risk of nuclear proliferation. Fusion, on the other hand, does not run the risk of a meltdown, because if anything goes wrong the plasma expands and cools within seconds, forcing the reaction to stop. Put simply, it is not a bomb. It also releases four times more energy per unit mass than fission and four million times more than coal and gas. The only radioactivity produced is residual activity in the structural materials, which is short-lived and can be minimised with better design.

However, fusion has its fair share of technical and practical challenges. For fusion to be even viable in the first place, the output energy has to exceed the input heating energy, in what is termed ‘break-even’. The ratio of fusion power to input heating power is a value called the Q factor. A desirable Q factor would be greater than one. The best ratio so far is only 0.7, achieved by the National Ignition Facility (NIF) in August 2021. However, that fusion reaction only lasted for several billionths of a second. Clearly, another critical criterion is that energy production has to be sustained for longer. The current record is held by JET, where 59 MJ was produced over five seconds in February 2022. To give the numbers some meaning, 59 MJ is only sufficient to boil about 60 kettles' worth of water.
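The arithmetic behind these headline figures is straightforward, as the sketch below shows using the numbers quoted in the text; the inputs are illustrative sanity checks rather than official performance data.

```python
def q_factor(fusion_power_w, heating_power_w):
    # Q > 1 means more fusion power out than heating power in: 'break-even'.
    return fusion_power_w / heating_power_w

# JET's record shot: 59 MJ of fusion energy released over 5 seconds.
jet_avg_power_w = 59e6 / 5
print(f"JET record shot: about {jet_avg_power_w / 1e6:.0f} MW of fusion power on average")

# A shot returning 0.7 units of fusion power per unit of heating power
# (the NIF figure quoted above) still falls short of break-even.
print(f"Q = {q_factor(0.7, 1.0):.1f}, break-even reached: {q_factor(0.7, 1.0) >= 1.0}")
```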

Furthermore, a lack of tritium in nature poses a major barrier to sustainability. While deuterium is abundant in seawater, tritium occurs only in trace amounts in nature, mainly produced by the effect of cosmic rays on the outer atmosphere. It is also difficult to produce artificially, making it incredibly expensive and driving up the cost of fusion. Making matters worse, most materials cannot withstand irradiation by energetic neutrons and suffer microstructural damage, which necessitates regular repair and replacement, again driving costs up. Also, absorption of tritium by the material of the reactor wall results in fuel loss. In the worst case, a fusion reactor could run out of fuel before it even gets started. Truth be told, it is evident that fusion is still far from fruition.

As a result, there are many voices opposed to fusion, pointing out that it is nothing but science fiction: as basic research it has value, but to sell it as a technology that can solve the world's energy problem is just deceptive. Nuclear fusion research is an expensive gamble. Proponents of fusion want to turn the idea into reality and put forth the argument that ‘we have it when we need it’. Meanwhile, proponents of fission want to upgrade existing fission technology and change public perception, and climate advocates urge us to decarbonise with all the resources we have now before it's too late. At the end of the day, it is all a matter of time-scale. Renewable energy and nuclear fission are short to medium-term strategies to reduce reliance on fossil fuels. According to Kardashev, who famously came up with the Kardashev scale, mankind will eventually grow out of breaking mere chemical bonds as they become insufficient to satiate our ever growing need for energy. It is then that our endeavour to harness the power of stars will bear its fruits

Shikang Ni is a first-year undergraduate student studying Physical Natural Sciences at Wolfson College. Illustration by Caroline Reid.



Beyond The Periodic Table

Mickey Wong explores the origins of radioactivity and why this means that superheavy elements are unlikely to exist


Why are some elements stable and others unstable? One could well say that the stability of a nucleus is beholden to the laws of thermodynamics, and that radioactivity occurs when a loosely bound nucleus becomes more tightly bound. But it isn't obvious where this binding energy should come from, as an atomic nucleus should fly apart under the mutual repulsion of the positively charged protons within it. Moreover, as has been obvious for almost a century by now, the orbiting electrons that neutralise the nuclear charge do not cling to them that tightly, as the lightness of an electron conspires with Heisenberg's uncertainty principle to keep them from approaching the nucleus that closely.

Before we can begin to understand what is going on, we must first understand quantum field theory. As it turns out, in order to

accurately describe reality, we must first assume the existence of a handful of overlapping fields that each assign a few parameters to each point in space and time. These parameters interact in highly correlated ways, such that a vibration in any one field induces vibrations in the other fields at the same location, forming a mathematical construct that behaves identically to a particle. These coupled vibrations, or particles, can then travel in either direction in time, with the particles travelling backwards in time being interpreted as antimatter. Moreover, we find that identical copies of certain classes of these vibrations, termed fermions, by definition cannot approach each other infinitely closely, giving rise to what is termed the Pauli exclusion principle.

In this context, the fundamental forces of nature arise since whenever two particles approach each other sufficiently



closely, their combined vibrations cause the fields in the space between them to vibrate such that these particles experience a force between them. As these shared vibrations transfer momentum, they are considered particles, or force carriers, in their own right. The best known of these is the photon, which mediates electromagnetic interactions, but we also have W and Z bosons mediating weak interactions and gluons mediating the strong force. However, with the exception of the photon, the movement of these force carriers through space sets up dependent vibrations in the surrounding fields across the entire length of their passage. This has the effect of limiting the effective range of these fields, as these nascent vibrations require energy to set up, and together these vibrations are then even more likely to encounter, or even create, another particle before they can travel very far. This now allows us to understand the origins of the nuclear binding energy. We know that the protons and neutrons in a nucleus are sufficiently close together for their constituent quarks to experience the strong force. However, the gluons mediating the strong force interact so strongly with the fabric of spacetime that their effective range is very short, and so the bulk of these interactions can only take place between the quarks within a nucleon. Only a small proportion can involve a quark in a neighbouring nucleon, as sometimes an antiquark with appropriate properties may appear out of the maelstrom of vibrations flanking each gluon, that then creates a pion that may then travel towards and attract a quark in a neighbouring nucleon. Left unchecked, this would eventually cause all the quarks in a nucleus to fuse together into a singular blob of quark matter. However, the positively charged protons repel each other quite strongly as the photons mediating the electromagnetic interaction travel infinitely far, so each proton feels the repulsion of all other protons in the nucleus but only the attraction of neighbouring nucleons. The aforementioned Pauli exclusion principle also prevents this outcome, as any two nucleons of the same type trying to occupy the same point in space must take on different values of spin direction, angular momentum, and distribution in space. This requires each additional nucleon to be located further away from the centre of the nucleus, where they are more weakly bound, because there are fewer nucleons for them to interact with. From this, we now gain an insight into what makes a radioactive nucleus unstable. We see that in nuclear fission and in alpha decay, these loosely bound surface protons and nucleons can spontaneously bind together and form a separate nucleus. If this nucleus happens to form sufficiently far away from the parent nucleus, electrostatic repulsion may overpower the attraction to the parent nucleus from the strong force. This seals the deal by flinging both nuclei apart at high velocity. Even if nuclear fission does not occur, a sufficiently large imbalance in protons or neutrons may result in the outlying nucleons being sufficiently loosely bound that they can just fly off. Or, these loosely bound


Alternatively, these loosely bound protons and neutrons may release a W boson that bumps into a time-travelling positron, antineutrino, or even an orbiting electron instead of a fellow nucleon, and thus a proton may become a neutron or vice versa. We can now see why certain nuclei are radioactive, and why nuclei become progressively more unstable the heavier they get. In particular, for certain numbers of protons there is no number of neutrons that yields a nucleus stable enough to survive, in appreciable quantities, from its formation until its extraction and discovery on Earth. Such elements must therefore be made by nuclear fusion in a lab. This is easier said than done: not only are these highly charged nuclei harder to fuse, but the products are also more unstable than they need to be, because the lighter nuclei from which they are made do not carry enough neutrons to compensate for the increased mutual repulsion between the extra protons.

Still, for sufficiently large nuclei, the innermost nucleons may approach each other closely enough to undergo a phase transition into the aforementioned blob of up and down quarks. Just as an ice cube can melt, the quarks inside such a nucleus could begin to intermingle freely instead of remaining frozen within individual nucleons that only occasionally exchange a quark. Once formed, such a blob would be largely immune to most forms of decay, although there would still be a limit to how strongly charged it could become before electrostatic forces broke it apart or induced the emission of electrons or positrons. However, the size required is so large that it is extremely unlikely to be reachable in the lab, and even if such a nucleus could be assembled, it is unclear whether it would avoid decaying into smaller nuclei for long enough for the phase transition to take place. Instead, this process is theorised to happen within gravitationally bound nuclei such as neutron stars, where the lack of mutual electrostatic repulsion and the extremely long range of the gravitational force allow enough nucleons to be held together indefinitely. This also means, however, that we are unlikely ever to find naturally produced quark matter on Earth, as there is no conceivable process by which these star-sized nuclei could be broken into smaller pieces and find their way to us.

And thus we find ourselves back in the present situation, in which we have discovered all the elements we have been able to find in nature, together with a smattering of highly radioactive elements that we have been able to provably create through nuclear fusion.
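To put a rough number on that neutron shortfall (an editor's back-of-the-envelope estimate using the same semi-empirical formula as in the sketch above, so it ignores shell effects and is indicative only): minimising the nuclear mass over $Z$ at fixed $A$ gives the most stable proton number

$$Z_{\text{stable}}(A) \approx \frac{A}{2 + \frac{a_C}{2 a_A} A^{2/3}} \approx \frac{A}{2 + 0.015\,A^{2/3}},$$

where $a_C$ and $a_A$ are the Coulomb and asymmetry coefficients from that sketch. Element 118, oganesson, was first made by fusing calcium-48 with californium-249, giving a compound nucleus with $A = 297$ and $N = 179$; the formula would prefer $Z \approx 111$ at that mass, or roughly $N \approx 200$ for $Z = 118$, so the fusion product starts life a couple of dozen neutrons short of the trend and decays almost immediately.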

Mickey Wong graduated in 2015 with a degree in Music from the University of Cambridge

Artwork by Pauline Kerekes


Weird and Wonderful

Moon Plants

If, like me, you watched all of Netflix during the pandemic, you might remember a scene from Space Force where General Naird (Steve Carell) volunteers for a Lunar Habitat experiment. The astronauts-to-be are growing potatoes under conditions mimicking those on the moon and, much to General Naird's mid-meal disgust, this involves homemade fertiliser. Yes, I mean human faeces.

Earlier this year, scientists successfully grew plants in moon dirt for the first time (with no faecal matter involved). They used dirt collected during the Apollo 11, 12, and 17 missions and grew seedlings until they were 20 days old. On day 6, there were no notable differences between the moon-dirt seedlings and the non-moon-dirt controls. By day 20, however, the moon-dirt seedlings looked stunted and strangely similar to that houseplant you asked your friend to keep an eye on over the break. When the researchers sequenced the seedlings' gene transcripts, they found increased expression of genes linked to high levels of stress.

It seems unlikely that we'll be growing potatoes on the moon just yet, but it is hard to hear about this work without broader questions creeping to mind: what landscapes are available for agriculture and anthropogenic change? Which plants do we grow, and for whom? These questions can be challenging when considering the complexity of life on Earth, let alone across the atmosphere into the beyond. BNB

Artwork by Sarah Ma

Bioreceptive Architecture

Over 4 billion humans now live in urban areas, with numbers continuing to climb. Our landscapes are becoming dominated by a spiralling concrete expanse, the natural world often excluded to make room for more office space. The Bartlett School of Architecture at UCL combines scientific research with engineering to develop new bio-receptive materials, weaving greenery into our sterile structures.

High-density cities suffer from raised temperatures, humidity, water stress, and increased vulnerability to flooding. To tackle these challenges, architects have designed hydrophilic bio-concrete surfaces that efficiently absorb water and provide scaffolds for colonisation by poikilohydric organisms such as algae, lichens, and mosses. These tree-like facades can improve stormwater management and offset carbon dioxide, nitrogen, and other urban pollutants. Unlike traditional monoculture green walls, bio-concrete walls require no irrigation or maintenance, exploiting the natural resilience of the poikilohydric communities that have plagued structures for centuries. Instead of a sterile world of concrete and steel, future buildings should integrate plant colonisation into their superstructure. The phrase 'concrete jungle' could then become truly accurate, describing a breathing, photosynthetic forest city that protects us from the self-imposed perils of urban life. BW

Xenotransplantation

Earlier this year, a man with terminal heart disease in Baltimore, USA, received a pig-heart transplant, a landmark moment in medicine and a breakthrough in the development of xenotransplantation: the transplantation of cells, tissues, or organs between different species.

Rising life expectancy has led to growing numbers of patients experiencing chronic disease and end-stage organ failure. Whilst transplantation of human organs is an effective treatment, demand far exceeds supply: more than 100,000 people in the United States are reportedly on the waiting list for an organ transplant, and seventeen patients die each day while waiting. Xenotransplantation could therefore be a promising way to bridge the gap between the supply of and demand for organs, tissues, and cells. With advances in gene editing and immunosuppressive therapy, clinical xenotransplantation is becoming more viable; gene editing can, for instance, be used to remove genes in the donor animal that contribute to organ rejection. However, many animal rights groups oppose the use of animal organs for human transplants, arguing that animals have a right to live without being genetically manipulated for the sole purpose of organ harvesting. MC




