Spectra Magazine 2025

Page 1


WHITGIFT

SPECTRA

Michaelmas 2025

Social Media’s Control p26

Hydrogen Cars
What Happened? p29

Behaviourism

Living off the Sun

The solar powered sea slug p67

LIFE BEYOND EARTH

THE FERMI PARADOX What's really out there?

EDITORIAL

Hello, and welcome to this year's issue of Spectra.

In an age where headlines are dominated by artificial intelligence, climate breakthroughs, and the constant redefinition of the limits of human knowledge, this issue explores the future of science while also looking back at theories and the past. We explore everything from the creation of synthetic elements that stretch the boundaries of the periodic table, to self-healing materials that could revolutionise everyday life. This issue looks into space and immortal animals, and questions how science and society shape one another in a world that is progressing faster than ever. There are words from plenty of young researchers and passionate thinkers – proving that science isn't just something done in labs: it is being lived, questioned, and celebrated by students.

HUMAN HEALTH & THE BRAIN

P11 Past, Present and Future of Robotic Surgery

P14 Alcohol Related Neurological Diseases

P42 Materials of Dentures

P56 The Complex Biology of Epigenetic Modifications

EVOLUTION AND THE NATURAL WORLD

P8 Parasites

P67 The Solar Powered Sea Slug

P62 The Immortal Jellyfish

TECHNOLOGY & ENGINEERING FRONTIERS

P54 Superman Memory Crystal

P75 Self Healing Polymers

P4 3D Printing

P29 Hydrogen

DATA, AI & MATHEMATICAL THINKING

P22 Natural Language Processing

P44 Mathematical Approaches for Reducing Data Dimensionality

P32 Introduction to Projective Geometry

SPACE, PHYSICS AND THE UNIVERSE

P52 False Vacuum Theory

P58 The Fermi Paradox

P65 The Periodic Table

P48 Nerve Agents

P70 Pushing the boundaries of the Periodic Table

P26 Fluid Dynamics and Aerospace

3D PRINTING Could it revolutionise the future of design and manufacture?

Over the last few decades, 3D printers have developed rapidly in design, allowing for finer detail on prints, the use of stronger materials, faster print times, and a multitude of other innovative features. The outlook for 3D printing is incredibly promising, with new commercially and industrially available printers being launched, each with new features. Before we explore the finer details of 3D printers and their

design, we need to understand the history behind them. The idea of additive manufacturing and 3D printing emerged in the early 1980s, leading to the first 3D printer concept: the Stereolithography Apparatus (SLA) printer, which uses a guided laser to cure liquid resin layer by layer into a three-dimensional object (see above). This led to the invention of the

Fused Deposition Modelling (FDM) printer. This is now the most commonly used variant; it works by depositing layers of polymer, such as PLA or PETG, on top of each other to create a three-dimensional object.

Rapid Prototyping & Iterative design

3D printers are now used frequently in rapid prototyping and iterative design, with FDM printers mostly used for this because they are faster, cheaper, and generally create stronger parts than SLA or other printing methods. Rapid prototyping is the continuous cycle of prototyping, testing, and analysing a product under development in order to gradually improve its design; it eliminates most issues with a product and allows for cheap and easy design iterations. Iterative design is the process of refining a design over many models in order to create a final product. 3D

Printing allows for quick iteration of design, decreasing the time and cost of developing products.

Use of consumer level 3D printers

As 3D printers have developed in design, prices have fallen as a result of mass production and optimized design. Companies such as Bambu Lab, Creality and Elegoo have revolutionized consumer-level 3D printing by releasing various affordable and easy-to-use 3D printers. Over time, the price of these printers has dropped dramatically, from over £2,000 for a 3D printer in 2014,

to under £200 for a high-quality, entry-level printer in 2024. 3D printers are fantastic for anyone who loves to design products and create projects, as they allow for fast iterative design, with smaller prints usually taking around a few hours. I can personally attest to 3D printing at home, having designed several of my own products, iteratively designed parts for projects, and printed functional parts for use around the house, including gutter brackets and other helpful items. There is the occasional print failure, which can arise from various issues, but most are easy to correct. The

industry is growing rapidly, with 3D printers becoming increasingly common in homes thanks to falling prices, increasing quality and faster print times.

Use of 3D printers in industrial design and manufacture

Although 3D printers are not currently used much in manufacturing, they are becoming an increasingly popular choice over other methods of manufacture such as injection moulding. Many new, innovative industrial 3D printers have been

introduced over the last few years, which come with significant benefits over home-use 3D printers, including incredibly fast speeds, high build volume capacity, and the ability to print with new materials. These industrial printers use different technologies from consumer-level printers, which primarily use FDM and SLA technology; these allow for faster printing times and new materials to be printed. Nowadays, 3D printers not only use generic polymers such as PLA: both industrial 3D printers and many consumer-level 3D printers can

print a variety of different materials, including modern materials such as carbon-reinforced polymer and flexible materials such as TPU. Some industrial printers now use new methods such as Selective Laser Melting (SLM) and metal Fused Filament Fabrication (FFF) technology to print metals such as titanium, steel, and aluminium, although this process is currently slow and expensive. Industrial printers have high start-up costs, although these are falling over time. In the future, the manufacture of products with 3D printers is expected to explode in popularity among manufacturers as the technology develops; it also suits the Just In Time (JIT) manufacturing strategy, which aims

to fulfil orders as they come in – something 3D printers can do, as custom, unique objects can be created completely from scratch.

The future of 3D printing

The future of additive manufacturing technology is very promising. Companies are entering the market at a rapid pace, releasing high-quality, affordable 3D printers for both consumers and industrial clients. New materials such as carbon-reinforced polymer are under development, bringing strength comparable to metal. Industrial printers are becoming more prominent in the manufacture and design of products. Materials are becoming stronger and cheaper, and print times are decreasing rapidly: over the last 10 years, average print time has dropped significantly thanks to new technologies, and that trend is expected to continue. One of the drawbacks of 3D printing, the requirement of supports for overhanging sections, is expected to become less of an issue with new methods such as non-planar printing and new build plates with better adhesion. Methods of automating 3D printing have been developed; although they are not yet prominent in the industry, they could allow for continuous manufacture of parts once a reliable automated printer exists. New ways of monitoring prints, such as artificial intelligence (AI) and cameras, allow print failures to be detected automatically or manually by the user via apps. Consumer-level printers have brought along very simple-to-use software such as Bambu Studio and


Bambu Handy, allowing people with little printing experience to print products easily. Build volume capacities have increased significantly, allowing larger objects to be printed, although these require more filament and time. The innovation in 3D printing technology over the last few decades has also enabled functional medical applications, including cheaper and lighter prosthetics, which are expected to keep being used and improved in the future.

In conclusion, 3D printing and the additive manufacturing industry have grown enormously over the last 40 years and are expected to continue this momentum of growth as new, modern technologies are developed, including innovative printers with higher build volumes, faster speeds and higher quality; new materials which improve durability and strength while remaining affordable; and new applications of 3D printing in the manufacturing industry. 3D printing can be used to rapidly develop new products by iterative design, while also being able to continuously manufacture products on a large scale for any

use case. Fully unique and custom products can be created, with no extra cost to create a new, unique shape, unlike injection moulding, which requires expensive moulds. Overall, 3D printing has impacted the design industry greatly over the last 40 years and has begun to enter the product manufacture industry, a trend which will continue as technologies develop. It is safe to say that 3D printing will revolutionize the future of design and manufacture around the world. ~

PARASITES The “Invisible” Partners in Life

Biological Features of Parasites

The biological success of parasites stems from four principal characteristics. Through these adaptations, parasites have maintained evolutionary success across countless generations and ecosystems.

Attachment and Feeding Structures: These specialised anatomical adaptations, such as hooks, suckers, or stylets, facilitate a secure attachment to the host and ensure efficient nutrient extraction.

Rapid Reproduction and Genetic Flexibility: High fecundity and genetic adaptability allow parasites to persist despite environmental pressures and host defenses.

Immune Modulation: Parasites can manipulate host immune responses through a variety of strategies, including the secretion of immunomodulatory molecules, the mimicry of host antigens, or the induction of regulatory immune cells (Maizels and McSorley, 2014).

Complex Life Histories: Many parasites, particularly helminths and protozoa, require different hosts for various developmental stages. This dependency creates intricate ecological webs and drives host evolution.

Words: Hong Kiu Yeung

Parasites have long been perceived as malevolent agents of disease and suffering, featuring prominently in historical records of plagues, famines, and societal collapse. Diseases such as malaria, schistosomiasis, and trypanosomiasis have shaped human populations and public health policies for centuries. Yet, recent research challenges this negative narrative. Parasites are increasingly understood not only as pathogens but as essential regulators of ecosystems, immune systems, and even potential tools for medical innovation. This article explores the biological nature of parasites, their ecological and immunological importance, as well as their emerging applications in medicine.

Parasites are organisms that live on or within a host organism, obtaining nutrients at the host's expense and often causing harm (Gadallah, n.d.). They vary widely, from unicellular protozoa such as Plasmodium, to complex multicellular helminths like tapeworms and nematodes, and ectoparasites such as ticks and lice. Parasites have evolved remarkable survival strategies. They often display sophisticated mechanisms for attachment, such as the scolex of tapeworms or the hooked mouthparts of lice, which enable them to remain embedded within or upon the host. Furthermore, many parasites have complex life cycles involving multiple hosts, maximising their chances of

transmission and survival. The malaria parasite, which alternates between mosquitoes and humans to complete its development, is a perfect example (Meekums et al., 2015).

Perhaps most impressively, parasites are excellent at immune evasion. For instance, Trypanosoma brucei, the causative agent of African sleeping sickness, undergoes antigenic variation by altering its surface glycoproteins to escape immune surveillance (Kocahan et al., 2019).

The Importance of Parasites in Human Health

While parasites are often associated with morbidity and mortality, their role in human health is more nuanced. Chronic exposure to parasitic organisms during evolution has profoundly shaped the human immune system. Parasitic infections typically induce a regulated immune response characterised by the expansion of regulatory T cells (Tregs) and the suppression of inflammatory responses. This modulation helps prevent overactive immune responses, reducing the risk of autoimmune diseases like type 1 diabetes, multiple sclerosis, and inflammatory bowel

disease (Maizels and McSorley, 2014). In addition, intestinal helminths interact with gut microbiota, influencing bacterial composition and promoting gut health (Anthony et al., 2007). Last but not least, the hygiene hypothesis suggests that the decline in parasitic infections in industrialised societies correlates with increased incidences of allergies and asthma, proposing that early parasitic exposure may calibrate the immune system towards tolerance rather than hypersensitivity (Maizels and McSorley, 2014). Hence, parasites have played a vital role not merely as pathogens but also as architects of immune balance.

Medical Uses of Parasites

Recent medical research highlights several ways in which parasites — or their biological mechanisms — can be harnessed therapeutically:

Helminth Therapy: Clinical trials have explored the intentional introduction of helminths, such as Trichuris suis ova, into patients with autoimmune diseases. Helminth-derived molecules can promote the development of regulatory immune cells and mitigate conditions like Crohn’s

disease (a chronic inflammatory condition that can affect any part of the gastrointestinal (GI) tract from the mouth to the anus) and multiple sclerosis (a chronic autoimmune disease that affects the central nervous system, specifically the brain and spinal cord) (Maizels and McSorley, 2014; Kocahan et al., 2019).

Cancer Immunotherapy: Studies have revealed that parasitic infections can provoke strong type 1 immune responses, which are crucial for anti-tumor activity. Toxoplasma gondii has been investigated for its ability to stimulate the immune system to recognise and destroy cancer cells (Wang et al., 2023).

Vaccines and Immunomodulation: Understanding parasite immune evasion strategies informs vaccine design, especially through the study of innate lymphoid cells (ILCs) and pattern recognition receptors like Toll-like receptors (TLRs), which are pivotal in early immune activation (Maizels and McSorley, 2014).

These findings offer new avenues for therapies that are less invasive and potentially more natural than current synthetic pharmaceutical approaches.

The Role of Parasites in Nature

In ecosystems, parasites play a crucial role in maintaining balance and driving evolution. They act as natural population regulators by selectively targeting weaker or overabundant species, preventing any single species from dominating the ecosystem. This promotes biodiversity and strengthens ecosystems by fostering competition and resilience. For instance, parasites like ticks and lice can reduce overpopulated deer or rodent populations, indirectly supporting predators by ensuring a stable food web (Meekums et al., 2015). Moreover, host-parasite dynamics drive evolutionary adaptations on both sides; this ongoing co-evolution fosters genetic diversity and resilience across species. Furthermore, certain parasites can alter host behaviour to ensure their transmission. For example, the Ophiocordyceps fungus turns ants into "zombies," directing them to spread fungal spores before they die. Similarly, Toxoplasma gondii reduces rodents' fear of feline predators, which enhances the likelihood of completing its life cycle in a

cat host (Gadallah, n.d.). Parasites are thus integral to ecological stability and evolutionary innovation. They are far more than just agents of disease: from their biological complexities to their roles in medicine and ecosystems, they have significant impacts on life. While we should remain aware of their harmful effects, their benefits and contributions should not be overlooked. By understanding parasites, we gain insights into evolution, health, and the interconnectedness of all living organisms. Nature's balance, after all, often relies on even its most unexpected participants. ~

The Past, Present, and Future of

Robotic Surgery

Over the last few decades, many industries have experienced rapid development and massive structural change. The medical world is no exception, and developments across the field of medicine continue to this day. Today, we'll dive into a biography of robotic surgery: one of the most cutting-edge medical technologies of the modern era.

Overview

Robotic surgery or robotic-assisted surgery is a method that allows doctors to perform many

types of complex procedures with better precision, flexibility, and control than conventional techniques. Typically, robotic surgery is performed through minimally invasive incisions, also known as keyhole surgery, which has become the preferred choice of many surgeons over the last 20 years (Mayo Clinic, no date). The most widely used robotic surgical systems have several main components: a patient-side cart with mechanical arms attached to it; a console, where the operating surgeon controls the mechanical arms; and a vision system that provides the surgeon with a

high-definition, magnified 3D view of the surgical site (Mayo Clinic, no date).

The Procedure

(Cleveland Clinic, 2024)

1. The surgeon first makes small keyhole incisions on the patient; these incisions are known as ports.

2. The robotic arms attached to the patient-side cart are inserted through these ports.

3. Each robotic arm may have a camera or surgical instrument attached to it.

4. The camera provides a high-definition, magnified 3D view of the surgical site and relays this back to a 3D vision

system at the console for the surgeon to see.

5. The surgeon uses joystick-like controls and pedals to control the robotic arms.

6. An assistant (usually a nurse) is also present to change any medical instruments.

Benefits

(Cleveland Clinic, 2024)

Improved precision from robotic devices (mechanical arms) compared with a surgeon's hand, allowing better access to hard-to-reach places and eliminating a surgeon's hand tremor.

Better visualisation of the surgical site due to the magnification of the camera, which is displayed on the console.

Shorter hospital stays: robotic surgery is usually minimally invasive, so incisions are smaller and there is less tissue damage.

Less scarring: scars may have psychological effects, such as self-consciousness, as some individuals may find them 'unattractive'. Less risk of infection and blood loss, due to smaller incisions.

Risk

Human error whilst operating the technology: robotic surgery is a relatively new technology and not many surgeons are experienced with it. However, some medical schools are putting a larger emphasis on robotic training, but whether this will continue for all medical schools is unknown. (Cleveland Clinic, 2024)

Mechanical failure: while highly unlikely, the mechanical components of the system, such as the arms, instruments, or cameras could potentially fail. Therefore, specialists are always on site in the operating room to handle

these extremely rare events (Cleveland Clinic, 2024).

Electrical arcing: unintentional burns may occur from the cauterising device; this happens when current from the device leaves the robotic arm and is misdirected at surrounding tissue. However, newer and more improved robotic systems such as the da Vinci Xi offer warnings for a risk of arcing (Cleveland Clinic, 2024).

Common risks associated with surgery:

Risk of pneumonia due to anaesthesia (stomach acid entering the lungs); allergic reactions to medication; infection; bleeding; breathing problems.

PAST

The idea of combining robots and surgery had been theorised as early as 1967, but it wasn't until 1985 that the first robot, the PUMA 200, was used to perform a brain biopsy (the removal and examination of tissue to look for signs or causes of disease). In the following years, robots in surgery became more and more common, with robots such as ROBODOC assisting in orthopaedic surgery. Eventually, in 1996, the first completely robotic-dependent surgical system, ZEUS, was created by Computer Motion. This system was the first in which a doctor would control mechanical arms from behind a screen to operate on the patient (Morell et al., 2021).

PRESENT

In 2000, the da Vinci system was fully cleared for use by the FDA (Food and Drug Administration) in the US. Now, the da Vinci system is the spearhead of robotic surgery: with 60,000 trained surgeons and more than 10 million completed procedures, it can be used in many specialty surgeries, including cardiothoracic surgery, general surgery, neurological surgery, and many more. Over its millions of operations, robotic surgery has had a favourable prognosis (the likely outcome of a medical procedure), with a 95% success rate. Since the first use of the da Vinci system, many newer and more improved da Vinci-class systems with better technology and more advanced instruments have entered use.

FUTURE

For now, a surgeon is still required to operate a robotic surgical system, but when will robotic-assisted surgery become robotic-performed surgery? While we could theoretically pass the role of decision-making entirely to AI (Artificial Intelligence), the question of "what happens if something goes wrong?" will always linger when depending entirely on robots, especially in a matter as delicate as surgery. The future of surgery performed entirely by robots will rest largely on patients' trust and willingness (Morell et al., 2021).

In my opinion, robotic surgery, at least in the coming years, will likely still have humans in control – at least in a supervisory role, standing by in the event of an unlikely emergency. ~

Alcohol-Related Neurological Diseases:

Are you “Nervous” about having a drink?

Alcohol is estimated to have been first made around 7000 BCE in China; thus, it has long been embedded in societies across different cultural backgrounds. If you enjoy TV and movies, you may notice how characters typically end up suffering from memory blackouts and terrible headaches after a night wasting themselves at the bar, and whilst these masquerade as trivial occurrences, they may be symptoms of a more serious problem: alcohol-related neurological diseases – conditions affecting the central and peripheral nervous systems caused by the intake of alcohol. It is important, however, to first understand the relationship between alcohol and the brain before diving into the topic.

In Latino culture, drinking is often associated with masculinity, leading to high levels of alcohol consumption and displaying how drinking is considered a social norm in countries and societies where alcohol is often incorporated into celebrations and acts as a social lubricant. Alcohol is a distilled or fermented drink containing ethanol, and it is a well-known depressant of the brain, as well as a toxic and psychoactive substance. It mainly targets the central nervous system (brain and spinal cord), which is responsible for receiving, perceiving, and processing general sensory information whilst also generating responses. The brain is the most delicate organ in the body, comprising parts such as the cerebellum, frontal lobes, hippocampus, thalamus, and medulla; these play key roles in maintaining internal conditions, decision-making, forming thoughts and memories, muscle control, and many more vital processes. Alcohol damages nerve cells and interferes with communication pathways in the brain, severely undermining the CNS's ability to work and thus reducing the brain's effectiveness.

Words: Ryan Chiu

Wernicke-Korsakoff Syndrome

Of all alcohol-related neurological diseases, one of the most dangerous is Wernicke-Korsakoff Syndrome (WKS), which, while considered one condition, is separated into two stages: Wernicke's Encephalopathy and Korsakoff's Psychosis. Wernicke's Encephalopathy is an acute, severe brain disorder that, with early diagnosis and treatment, can be reversible. Symptoms include confusion, lack of muscle coordination, hypothermia, vision problems (for instance nystagmus and double vision), and coma. However, if not treated promptly, it can progress into Korsakoff's Psychosis, which is an irreversible condition. Individuals with the disease often present with memory impairments such as anterograde amnesia, hallucinations, repetitive speech and actions, emotional apathy, confabulation, and problems with decision-making. In people with WKS, there is permanent brain damage across a

variety of brain regions, notably the thalamus, hippocampus, hypothalamus, and cerebellum, impairing crucial functions and processes such as vision, movement, speech, memory, and sleep. WKS is caused by a lack of thiamine (vitamin B1), and alcoholism is its leading cause, as alcohol reduces thiamine absorption in the intestines; those who are malnourished, undergo kidney dialysis, suffer from colon or gastric cancer, or have AIDS are also at risk. The disease is diagnosed clinically based on the patient's medical history and by carrying out electrocardiograms (EKG) and magnetic resonance imaging (MRI) and computed tomography (CT) scans of the brain. Treatment includes intravenous administration of thiamine and glucose, oral supplements, and memory therapies. Serious cases may require intensive residential care.

Marchiafava-Bignami Disease (MBD)

Marchiafava-Bignami Disease (MBD) is a rare CNS disorder associated with chronic alcoholism, characterised by demyelination of the corpus callosum, which can extend into the hemispheric white matter, internal capsule, and middle cerebellar peduncle. In rarer cases, Morel laminar sclerosis is visible. The disease is caused by a deficiency in the B vitamins, and males between the ages of 40 and 60 are most affected. Nonetheless, the tempo of onset and clinical presentation of this disease vary and are often limited to non-specific features (motor or cognitive disturbances). Other symptoms are seizures, stupor or coma, (acute, subacute, or chronic) dementia, psychiatric disturbances, aphasia, apraxia, hemiparesis, and signs of interhemispheric disconnection. To diagnose MBD, toxicology screening, CT scans, complete blood counts, and serum measurements are used, but MRI scans are currently the best diagnostic tool. To treat MBD, vitamin B complex is administered, while thiamine, cobalamin, and folate supplements are also given intravenously for management. Still, some patients do not recover and die from MBD.

MRI scan of the brain

Foetal Alcohol Syndrome

The devastating effects of alcohol are not only harmful for drinkers but also apply to innocent foetuses. Foetal Alcohol Syndrome (FAS) is a severe form of foetal alcohol spectrum disorders (FASD), which are caused by the alcohol consumption of pregnant mothers. As foetuses cannot process alcohol like adults can, alcohol becomes more concentrated and prevents nutrition and oxygen from reaching vital organs. Children affected often present with specific facial features, impaired vision and hearing, smaller head and brain size, slow physical growth, poor coordination and balance, and changes in how the heart, bones, and kidneys develop. They may also have poor judgement, a short attention span, a lack of time awareness, issues with controlling large mood swings, challenges

with emotions in social interactions, and problems with managing life skills. Diagnosis of FAS requires monitoring for the symptoms mentioned above, and whilst there is no treatment, it can be prevented by stopping alcohol intake whenever pregnancy is a possibility and by contacting intervention services.

Alcoholic Neuropathy and Alcoholic Myopathy

Alcoholic Neuropathy is a form of nerve damage affecting both the central and peripheral nervous systems. It is typically permanent, as alcohol induces structural changes in the nerves. Nutritional deficiencies—particularly in thiamine, niacin, pyridoxine, folate, and vitamin E—exacerbate the condition. Common symptoms include dysesthesias (burning, tingling, and prickling sensations), muscle spasms, weakness, sexual dysfunction, gastrointestinal disturbances (nausea and vomiting), impaired speech, movement disorders, and dysphagia (difficulty swallowing). Treatment focuses on rehabilitation, alcohol abstinence, and nutritional support, especially through supplementation with B-complex vitamins. Alcoholic Myopathy is a progressive muscle disease caused by excessive alcohol consumption and manifests in two forms: acute and chronic.

Acute alcoholic myopathy is typically triggered by binge drinking and resolves within one to two weeks. However, it may cause rhabdomyolysis—a serious

condition where muscle breakdown products enter the bloodstream, potentially leading to kidney failure.

Chronic alcoholic myopathy results from long-term alcohol abuse, often affecting muscles in the hips and shoulders. Recovery can take several months after cessation of alcohol intake.

Symptoms include muscle atrophy, weakness, cramps, stiffness, fatigue, and dark-coloured urine. Unlike alcoholic neuropathy, alcoholic myopathy is generally reversible with appropriate intervention.

Both conditions share similar diagnostic approaches, including neurological examinations, blood tests, toxicology screening, and electromyography (EMG).

Alcohol Withdrawal Syndrome (AWS)

Alcohol Withdrawal Syndrome (AWS) affects individuals who abruptly stop or significantly reduce alcohol intake after prolonged use. Symptoms typically emerge within hours to a few days and include tremors, nausea, vomiting, anxiety, tachycardia (increased heart rate), sweating, headaches, insomnia, and vivid nightmares.

In severe cases, Delirium Tremens (DT) may develop, characterized by profound confusion, agitation, fever, seizures, hallucinations (tactile, auditory, and visual), and rapid breathing. Diagnosis relies on toxicology screening and physical assessments, such as monitoring for heart arrhythmias and dehydration.

Treatment involves administering benzodiazepines (e.g., chlordiazepoxide, lorazepam, alprazolam) and may be conducted at home or in a hospital depending on severity. Patients are also encouraged to engage in counselling to support long-term recovery.

Preventing Alcohol Misuse:

Alcohol Misuse refers to harmful or dependent drinking patterns, typically defined as consuming more than 14 units of alcohol per week (1 unit = 10 mL of pure alcohol).
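As a worked example using the standard UK formula: units = strength (ABV %) × volume (mL) ÷ 1,000. A 750 mL bottle of 12% wine therefore contains 12 × 750 ÷ 1,000 = 9 units – well over half the weekly guideline in a single bottle.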

At a global policy level, the World Health Organization’s SAFER initiative recommends five evidence-based strategies:

Strengthen restrictions on alcohol availability

Advance and enforce drink-driving countermeasures

Facilitate access to screening, brief interventions, and treatment

Enforce bans or comprehensive restrictions on alcohol advertising, sponsorship, and promotion

Raise alcohol prices via excise taxes and pricing policies

On an individual level, prevention begins with setting realistic personal goals to reduce consumption, using digital reminders, and replacing drinking habits with healthier alternatives such as exercise or social engagement. Maintaining open conversations with trusted individuals can also help promote accountability.

According to the World Health Organization, the European Region (9.2 litres per capita) and the Region of the Americas (7.5 litres per capita) report the highest levels of alcohol consumption globally. In 2019, 52% of men and 35% of women worldwide were regular drinkers, and approximately 7% of individuals over 15 years old were affected by Alcohol Use Disorder (AUD)—with 209 million diagnosed with alcohol dependence.

Alcohol misuse is responsible for around 3.3 million deaths annually, accounting for 6% of all global mortality. In addition to the disorders discussed above, alcohol is also linked to conditions such as alcohol-induced cerebellar degeneration, dementia, and other cognitive impairments.

While moderate alcohol consumption may offer some cardiovascular benefits, it is essential to recognize and avoid the risks of excessive drinking, which can lead to severe, sometimes fatal, neurological consequences. ~

Antidepressants Uncovered

The science, myths and future of mental health treatment

Imagine waking up every day on repeat, feeling like a rain cloud is constantly following you, extracting every essence of joy and hope in you. For approximately 280 million people worldwide, this is a common feeling. Depression – a condition which has ravaged the lives of countless people. Enter antidepressants: small, seemingly insignificant pills which have ignited numerous debates amongst scientists, saved lives, and caused some of the biggest misconceptions in the neurological field. Despite being among the most prescribed medications in the world, there are still lots of misconceptions about what they do and how they do it. The most controversial of these is the idea that depression is simply caused by a chemical imbalance – in particular, low serotonin. To tackle these myths, this article explores three areas: what antidepressants are, how a famous example works, and the myths that surround them.

What are antidepressants?

Antidepressants are a class of medications created to negate the symptoms of depression and other similar disorders. They do this by targeting chemical signalling in the brain; in particular, antidepressants commonly affect neurotransmitters such as serotonin, norepinephrine and dopamine. These chemicals serve as messengers between neurons at synapses and help control mood.

As seen above, these neurotransmitters cause nerve impulses to be carried along neurons, resulting in feelings or muscle contractions. For depression, it is the feelings produced by these neurotransmitters that concern us. In individuals with the condition, the processes controlling neurotransmitter release and reuptake may be dysregulated, which may lead to cognitive disruption. Antidepressants work by restoring these processes to normal. However, it would be incorrect to conclude that low levels of these neurotransmitters are therefore the problem.

Here are two common types of antidepressants:

Selective serotonin reuptake inhibitors (SSRIs) – these increase the serotonin levels in the brain by blocking its reabsorption into the neuron, this allows the transmitter to be able to send signals for longer.

Serotonin norepinephrine reuptake inhibitors (SNRIs) –these do the same thing as an SSRI but also targets the other neurotransmitter – norepinephrine.

However, as mentioned before, an antidepressant does more than just boost neurotransmitter level, they work

by enhancing the brain's neuroplasticity. Neuroplasticity is the ability of the brain to adapt and form new connections, including by creating new neurons. It occurs naturally within the brain – for instance, when a new memory is made. Depression is often linked to low neuroplasticity in areas such as the hippocampus; according to the National Institutes of Health, many people with depression have a smaller hippocampus.

Antidepressants work by promoting neuroplasticity, resulting in increased brain health and connectivity.

A famous example and how it works

One of the most widely used antidepressants, with approximately 14.8% of sales, is fluoxetine, more widely known as Prozac. It is an SSRI, and in this section we will look at what it does and how it affects neurotransmitters. We touched on neuroplasticity above and how it refers to the brain forming new neural connections and strengthening existing ones. Prozac has been shown to promote neurogenesis. In a paper published in the National Library of Medicine by David Samuels and co-authors, studies conducted on animal models show that Prozac increases the expression of brain-derived neurotrophic factor (BDNF) – a protein that promotes the growth of neurons. Low levels of this protein have been linked to reduced neuroplasticity. However, Prozac does

not cause an immediate change, even though it raises serotonin levels within the first few hours of taking the drug. The effects typically take a couple of weeks to appear, which supports the idea that depression is not simply due to low serotonin levels: Prozac only starts to work properly after a few weeks because the underlying neuroplastic changes take time. By increasing levels of BDNF, it allows new neurons to grow and join existing neural pathways. These changes are gradual rather than sudden. By promoting neuroplasticity, Prozac not only alleviates the immediate symptoms of depression but also helps the patient recover from longer-term changes to the brain.

Myths about depression and antidepressants

Antidepressants have come far in the last 30 years, but there are still various misconceptions about the illness and how the drugs work.

One of the key myths about antidepressants is that depression is simply caused by low serotonin – an idea far too simplistic to capture the complexity of the issue. Key evidence against it comes from a 2022 review in Molecular Psychiatry, which analysed many different studies and concluded that the evidence linking depression to low serotonin was not sufficient.

There are still many other myths about depression as a whole, such as the idea that depression is not a real illness; however, scientific studies of donated brains clearly show a biological element to depression, as evidenced by the smaller hippocampus and underdeveloped prefrontal cortex in people with depression. This does not mean that depression is purely genetic, as people who have experienced significant trauma can have changes to their brain due to its neuroplastic nature.

Finally, one of the largest and most prominent beliefs is that antidepressants are a one-size-fits-all solution; in reality, many of these drugs work for only about 50% of users. Prozac, for example, reports only a 54% success rate for major depressive disorder. These myths about the causes of depression persist because of their past simplicity: in the early days of diagnosing depression, the serotonin trend was easy to spot, and pharmaceutical companies relied on this idea for marketing. Phrases such as 'chemical imbalance' became the norm and encouraged treatment, but they also caused confusion and oversimplification of a very complex issue.

Antidepressants have come a very long way in improving mental health treatment, but it is important to remember that they are not magic pills that make you happy, as some may think. As we develop our understanding of how the brain works, we can revolutionise the future of antidepressants and make new drugs that help even more people. Many may ask what the purpose is of creating new drugs when we already have many that work; the answer is: to help everyone living with this condition find the relief they deserve. ~

Natural Language Processing (NLP)

How do computers understand us?

An overview of NLP

NLP is a branch of artificial intelligence and computer science that "allows computers to process and respond to written and spoken language in a way that mirrors human ability" (Britannica). The fields of computational linguistics, statistics, and machine learning (specifically, 'deep learning') are crucial in making NLP models work.

Nowadays, we can see the wide-ranging presence of NLP in everyday life. Apple's 'Siri', Amazon's 'Alexa', customer service chatbots on various websites, and the (in)famous ChatGPT all make use of some NLP model that acts as the bridge between the human on one side and the breadth of (some kind of) knowledge/information/data on the other. However, the usefulness of NLP doesn't end at communication between e.g. a chatbot and a human. For example, NLP is extensively used in email applications for spam filtering and fraud detection purposes. As a result, its importance can clearly be seen, especially in the modern era when these kinds of phishing attempts are becoming increasingly common.

Furthermore, one crucial task that NLP is often used for is “sentiment analysis”, in which a computer system is able to detect the tone of a given piece of text. Consequently, NLP can be used for applications such as monitoring brand sentiment and public opinion online regarding various issues.
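As a concrete illustration, here is a minimal Python sketch of sentiment analysis using NLTK's bundled VADER model – one of many possible tools, and the example sentences are invented:

    import nltk
    from nltk.sentiment import SentimentIntensityAnalyzer

    # One-time download of the VADER sentiment lexicon.
    nltk.download("vader_lexicon")

    sia = SentimentIntensityAnalyzer()

    # Each call returns negative/neutral/positive scores plus a 'compound'
    # score between -1 (very negative) and +1 (very positive).
    print(sia.polarity_scores("I absolutely love this product!"))
    print(sia.polarity_scores("The delivery was late and the box was damaged."))

A company could run a script like this over thousands of reviews and simply track the average compound score over time.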

But how does it all work?

A typical NLP model

It is worth noting that most processes/tasks listed below are possible only through the use of machine learning approaches, where a program is trained on a large data set to recognise certain patterns in that data using different machine learning algorithms – for instance, the ability to divide a string of words in an audio clip into individual words for analysis. I will not make machine/deep learning a focal point, but an awareness of this general idea is all that is required for the purposes of the rest of this article.

NLP usually consists of some form of the following steps (diagram provided by geeksforgeeks):

Step 1: Lexical (and morphological) analysis

The focus in this step is breaking down text into the smallest units possible for easier analysis and use later.

Text/speech processing

If an audio clip has been provided (e.g. through Siri), speech recognition, the ability to determine the textual representation of a given sound clip of speech, must be performed. Furthermore, speech segmentation is a necessary sub-task of speech recognition that allows for a string of words in a clip to be separated out for analysis.

Once the speech has been processed, or if a chunk of text has been provided, then word segmentation or tokenization must happen – the process of dividing a given piece of text into individual words called tokens. Tokenization results in the generation of a word index, in which each token/word is mapped to a numerical value (often in a dictionary format, as shown below). This also produces tokenized text, in which words are represented as numerical tokens (from the word index) for use in later processes by deep learning methods – a specific type of machine learning that utilises artificial neural networks and multiple layers of processing to "extract progressively higher level features from data" (Oxford Dictionary). The next sub-task that must be performed on both text chunks and speech clips is lemmatization. This is when inflectional endings of a word are removed and the base dictionary form – known as a lemma – is returned. For instance, 'running' would be converted into 'run', and 'better' would turn into 'good'. This makes processing of the text easier for the model.

Another common process involves stopword removal, in which common words without significant meaning are removed to sufficiently ‘clean’ the given text for easier analysis – this often includes words such as ‘a’, ‘the’, ‘and’, etc.
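To make Step 1 concrete, here is a minimal Python sketch using the NLTK library (the sample sentence is invented, and the exact resource names downloaded can vary between NLTK versions):

    import nltk
    from nltk.tokenize import word_tokenize
    from nltk.corpus import stopwords
    from nltk.stem import WordNetLemmatizer

    # One-time downloads of the tokenizer model and word lists NLTK needs.
    nltk.download("punkt")
    nltk.download("stopwords")
    nltk.download("wordnet")

    text = "The chefs were running to the kitchen"

    # Tokenization: split the text into individual word tokens.
    tokens = word_tokenize(text.lower())

    # Word index: map each distinct token to a numerical value.
    word_index = {word: i for i, word in enumerate(sorted(set(tokens)))}

    # Stopword removal: drop common words ('the', 'to', ...) that carry
    # little standalone meaning.
    stops = set(stopwords.words("english"))
    content_tokens = [t for t in tokens if t not in stops]

    # Lemmatization: return each word's base dictionary form (its lemma).
    lemmatizer = WordNetLemmatizer()
    print(lemmatizer.lemmatize("running", pos="v"))  # -> 'run'
    print(lemmatizer.lemmatize("better", pos="a"))   # -> 'good'
    print(content_tokens, word_index)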

Morphological analysis

The next sub-task of Step 1 involves dividing individual words up into the smallest possible units that still carry some meaning – also known as morphemes.

Morphological segmentation, in which words are separated out into individual morphemes, is done to make later (machine learning) processes easier.

This task first involves identifying the types of morphemes in a given word: a free morpheme is a part of a word/text that would make sense on its own – for example, the word 'chair' is a free morpheme. The other type of morpheme is known as a bound morpheme. This is a part of a text/word that would not make sense on its own and would need to be attached to free morphemes to convey any kind of meaning. For instance, the suffix (and bound morpheme) '-ing' needs to be connected to a free morpheme, 'cook', to form a meaningful word, 'cooking'.

Step 2: Syntactic analysis

The primary aim of this phase is to identify the structure and grammar of sentences in a given piece of text.

Part-of-speech tagging

This crucial sub-task involves identifying a word in a sentence as performing a specific part of speech (POS), which commonly include the likes of verbs, nouns, adjectives, etc.

However, a common difficulty with this stage is the fact that many words can have multiple possible parts of speech that they can serve as.

Take the word ‘fine’ as an example. Someone could say that they are ‘doing fine today’ (i.e. doing well), but you could also describe a piece of thread as ‘fine’ (i.e. thin). To overcome this, aforementioned deep learning programs are trained to identify the surrounding context of words in order to identify the best possible match.
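A small sketch of this step using NLTK's pre-trained tagger (the tags noted in the comment are the typical output, not guaranteed for every model version, and the sentences are invented):

    import nltk
    from nltk import pos_tag, word_tokenize

    nltk.download("punkt")
    nltk.download("averaged_perceptron_tagger")

    # The tagger uses surrounding context to resolve ambiguous words:
    # 'fine' is typically tagged JJ (adjective) in the first sentence
    # and NN (noun) in the second.
    print(pos_tag(word_tokenize("She sewed with fine silk thread")))
    print(pos_tag(word_tokenize("He paid a parking fine yesterday")))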

Grammar/syntax checking

In order to make sure that the given text is error-free, a program may compare given sentences against standard grammar rules for a language using the part-of-speech tagging process described above.

This sub-task is especially useful for applications such as machine translation and sentiment analysis (the ability to detect the tone of a piece of text).

Sentence breaking

Locating the boundaries of sentences – often marked by periods or other punctuation marks – within a given chunk of text is particularly useful for Step 4 (Discourse integration) in terms of finding relationships between various sentences (once again, in order to help the program identify the context surrounding a word or sentence).

Step 3: Semantic analysis

This stage focuses on deciphering the context of individual words.

Entity identification

This process consists of two smaller sub-tasks: named entity recognition and entity linking – an entity in a piece of writing is the name of a person, place, company, etc.

Named entity recognition (NER) is simply identifying these entities within a text. This may involve having the NLP program search the internet for a matching Wikipedia article, for example. Entity linking is a necessary subtask of the above. Many words can refer to many entities – the word ‘Paris’ in the name ‘Paris Hilton’ could refer to the capital of France or a person’s first name. In these cases, aforementioned deep learning algorithms are used to derive the correct entity from the context of the given sentence.
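As an illustration, here is a short sketch of named entity recognition using the spaCy library (assuming its small English model has been downloaded; the labels shown are typical, model-dependent output):

    import spacy

    # Requires: pip install spacy && python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Paris Hilton flew from Paris to London last May.")

    # Each recognised entity span carries a label such as PERSON or GPE
    # (geopolitical entity). This covers the NER half; full entity linking
    # to e.g. Wikipedia entries needs an additional component.
    for ent in doc.ents:
        print(ent.text, ent.label_)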

Relational semantics (semantics of individual sentences)

This process mainly involves a sub-task known as relationship

extraction, in which relationships are identified amongst named entities – e.g. who is whose brother/wife/husband/etc.

Word-sense disambiguation (WSD)

Many words can have multiple meanings – this process focuses on determining which meaning fits the given context best, once again, with the help of machine learning programs that have already been trained to do this specific task.
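NLTK ships a classic (if simple) WSD method, the Lesk algorithm, which picks the WordNet sense whose dictionary definition overlaps most with the surrounding words. Modern systems use trained models instead, but this sketch shows the idea:

    import nltk
    from nltk.tokenize import word_tokenize
    from nltk.wsd import lesk

    nltk.download("punkt")
    nltk.download("wordnet")

    # Disambiguate 'bank' using the rest of the sentence as context.
    context = word_tokenize("I went to the bank to deposit my money")
    sense = lesk(context, "bank")
    if sense is not None:
        # The chosen sense depends purely on definition overlap, so
        # simple methods like Lesk can still pick the wrong one.
        print(sense.name(), "-", sense.definition())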

Step 4: Discourse integration

In this phase, relationships between sentences in a text are evaluated to further derive context.

Coreference resolution

Anaphora resolution is the commonly used method for this task. Put simply, it involves matching up pronouns with the nouns or names to which they refer. Consider the sentences, ‘Daisy was ready to leave. She picked up her bag and left’. Using the above process, we would be able to appreciate the fact that ‘she’ and ‘her’ in the second sentence are referring to ‘Daisy’ from the first sentence.

Discourse analysis

This process simply determines the types of speech acts in a given piece of text – e.g. whether a sentence is a yes-no question, a statement, an assertion, etc.

Topic segmentation and recognition

In this sub-task we separate a given chunk of text into different sections based on common topics identified in these segments.

Step 5: Pragmatic analysis

Finally, the NLP model shifts the focus to understanding intentions behind word choices – i.e. the inferred meaning of a text rather than what has literally been written.

For example, suppose that someone said, ‘What time do you call this?’

This particular phrase could be interpreted very differently depending on the tone and the context of the situation – it could be an angry teacher questioning/mocking a student for being late (i.e. a serious tone), or it could be a friend jokingly remarking about you being late to a get-together. This stage is crucial for sentiment analysis and, as is especially relevant to the present day, chatbots such as ChatGPT in terms of being able to 'converse' realistically with the human on the other end.

Uses of NLP

After looking at all of those technical aspects, it’s time to look at the bigger picture: what is this actually used for?

We have touched upon a few of these already, but some real-world use cases include:

Sentiment analysis (as mentioned above - the ability to detect the tone of a piece of text as sad/angry/serious etc., which would be very useful for e.g. a company trying to quickly get the general feelings of customers in their reviews without having to read them all)

Grammatical error correction (very useful for programs such as Word and email services)

Machine translation (NLP is used in e.g. Google Translate to effectively understand what has been inputted)

Question answering (as in ChatGPT or a customer-service chatbot)

Text-to-image generation

Text-to-scene generation (i.e. creating a 3D model)

Text-to-video generation

Fraud prevention and spam detection in email applications

Conclusion

Whilst NLP is truly fascinating, it is not without its challenges, one of the biggest being ambiguity in human language. The varying meanings that words and sentences can have in many different contexts means that the correct interpretation relies on a very accurate and well-trained model, which is quite difficult.

In addition to technical challenges, there are ethical ones, too. Depending on the training data provided to NLP models, biases could be developed and perpetuated, which could lead to unwanted and, in some cases, potentially discriminatory outcomes (e.g. in the hiring process for a company).

Fundamentally, however, what we have so far in terms of NLP technologies is nothing short of extraordinary, and only the beginning of what will undoubtedly be an AI- (and, by extension, NLP-) dominated next few decades globally. I believe we will see many more ground-breaking technologies and applications realised in the future that will further increase computers' ability to understand mankind and, thereby, help us accomplish great things. ~

FLUID DYNAMICS and their impact on the aerospace industry

From the Navier-Stokes equation to turbulence in planes

Introduction

Most of us have been victims of turbulence at least once in our lives, whether on an aircraft or in a boat. This loosely used term, "turbulence", belongs to fluid dynamics: the study of the movement of liquids and gases in response to external forces exerted upon them by the environment. Fluid flow has a myriad of implications, ranging from maximising efficiency in air conditioning units to modelling blood circulation to inform the design of medical devices. Fluid motion can exhibit either laminar flow (a smooth, regular path of gas/liquid) or turbulent flow (regions of fluid moving irregularly with colliding paths). We will start by discussing a simplified version of the Navier-Stokes equation, Euler's equation.

Euler’s equation

This equation is derived from Newton's second law of motion and describes the motion of inviscid fluids (ideal fluids with zero viscosity). You have probably come across Newton's second law, which states that the acceleration of an object is directly related to the net force and inversely related to its mass: F = ma. Euler's equation describes how pressure, density and gravitational forces can cause acceleration changes in fluids. In order to understand it better, we need a fundamental grasp of Bernoulli's principle, which states that an increase in speed occurs simultaneously with a decrease in pressure or fluid potential energy. This is important as it generates lift, the force of flight caused by a pressure imbalance; in other words, it allows the airplane to be pushed upwards. To relate this phenomenon more explicitly to Bernoulli's principle: the air moving over the curved upper surface of the wing travels faster, leading to a lower pressure than that of the slower-moving air on the flat underside of the wing. Two conservation laws are crucial for Euler's equation: the conservation of mass and the conservation of momentum. Finally, an understanding of fluid flow proves useful. Euler's equation assumes that the fluid is homogeneous and incompressible (meaning mass density is constant) and that the flow is continuous and steady (it does not vary with time). The effect of pressure and velocity can be seen on the left-hand side of the equation, where DV/Dt represents the change in fluid velocity with respect to time. The ΔP term represents pressure variation and is crucial, as fluid naturally moves from a region of higher to lower pressure. This would hence

affect fluid motion and velocity patterns. The following section provides an insight into the multitude of practical applications of Euler's equation.
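For reference, Euler's equation for an incompressible, inviscid fluid can be written in the standard form (using the notation described above):

    ρ DV/Dt = −∇P + ρg,   where DV/Dt = ∂V/∂t + (V · ∇)V

Here ∇P is the pressure-gradient term (the pressure variation discussed above) and ρg accounts for gravity. Bernoulli's principle for steady flow along a streamline then takes the familiar form P + ½ρv² + ρgz = constant: where air over the wing moves faster (larger v), the pressure P must fall, producing lift.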

Practical applications of Euler’s Equation

To start with, one application of Euler's equation is analysing problems in flight dynamics. It can be used to simulate fluid flow over aircraft bodies, improve the aerodynamics of airplanes, optimise fuel consumption and even determine aircraft stability. As airplanes have advanced, more extreme manoeuvres at higher angles of attack are performed; the angle of attack refers to the angle at which the relative wind meets an aerofoil. A Euler solver (a computational tool that analyses fluid dynamics using the Euler equations) can be applied to both steady (constant flight conditions) and unsteady flows. Specifically, we will look at the determination of stability derivatives and flow at high angles of attack in steady flow – put simply, how aerodynamic forces and moments change when airplanes fly at different angles in constant flight conditions. In highly manoeuvrable aircraft, the vortex lift (the method by which highly swept wings produce lift at high angles of attack) created at high angles of attack allows for rapid turns and faster take-off. The Euler solver is able to predict positions of the vortex by analysing the flow pattern to determine vortex creation

and strength. The stability of these vortex systems can also be analysed. Determining the positioning of the vortex is crucial in flight dynamics and structural analysis. Axial velocity helps to indicate the stability of a vortex: a symmetrical axial velocity profile with a clear peak at the vortex centre indicates stability. Therefore, changes in axial velocity would impact the lift and stability of the aircraft. This is

relevant because high angles of attack in particular can lead to different vortex strengths and positions, which would affect stability. Euler's equation requires significantly less computational power than the Navier-Stokes equation, while still achieving good levels of accuracy. It is also important to note that Euler's equation is better suited to analysing supersonic aerodynamics, where viscosity is negligible.

Navier-Stokes equation

Previously, Euler's equation was described as a way to determine fluid dynamics; however, its main limitation is that it excludes viscosity, assuming inviscid fluids. Viscosity is important because it describes the internal friction experienced by a fluid, which influences how much energy is dissipated and the transition between laminar and turbulent flow. The Navier–Stokes equation is similar to Euler's equation in that the laws of conservation of mass and momentum are both central. Because solving the full Navier–Stokes equations is computationally unfeasible, the Reynolds-averaged Navier–Stokes (RANS) equations are commonly used, whereby fluid motion is broken down into two parts: the mean flow, which is the overall behaviour of the fluid, and the fluctuations happening within the fluid. Mathematically, this is written using the Reynolds decomposition, where ū is the mean velocity and u′ is the fluctuating component; substituting this and then time-averaging the values leads to the RANS equations. RANS eliminates the need to calculate the instantaneous flow field; however, it introduces the Reynolds stresses, an additional term which captures turbulence. The Reynolds stress tensor is a consequence of the Reynolds decomposition and averaging process used in deriving the RANS equations. The Reynolds stresses represent how turbulence impacts the overall fluid flow without calculating every specific eddy or swirl in the fluid. Applications of the RANS equations include aeroacoustics, where turbulence modelling can inform how much noise is generated by different aircraft engines.
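Written out, the decomposition and the extra term it produces are:

u = ū + u′

where ū is the time-averaged velocity and u′ the fluctuation (whose time average is zero), and the Reynolds stress tensor that appears after averaging is

τij = −ρ⟨ui′uj′⟩

with ⟨ ⟩ denoting the time average.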

HYDROGEN CARS What happened?

Remember a few years back when everyone was talking about hydrogen cars being the future? Cars like the Hyundai Nexo and Toyota Mirai were all the rage in mainstream media, praised as alternatives to dirty petrol cars in the same breath as battery-powered cars. But what happened? Why did battery cars succeed while hydrogen cars faded away?

How Do They Work?

Firstly, in order to understand why they failed, we need to understand how they work.

A hydrogen fuel cell turns the energy stored in hydrogen molecules into electricity. It does this by using two electrodes, a negative anode and a positive cathode, separated by an electrolyte. Fuel cells work in the opposite way to electrolysers: in an electrolysis reaction, a compound is broken down into separate molecules by putting in energy, but in a hydrogen fuel cell, energy is released by combining hydrogen and oxygen to make water. Hydrogen is supplied at the anode when the car is fuelled up, and oxygen is drawn from the air to the cathode. Hydrogen loses electrons at the anode, oxygen gains electrons at the cathode, and ions move through the electrolyte so that water forms at one of the electrodes, depending on the type of fuel cell. This is the overall equation:

2H2 + O2 → 2H2O

The type of fuel cell Toyota uses in the Mirai is the fuel cell we will be looking at – the PEM fuel cell. (See bottom right)

PEM fuel cells (proton-exchange membrane cells) are the main type of fuel cell used in the transport industry; they run at low temperatures while having a high power density and quick start-up times, making them ideal for use in cars. In a PEM cell, the hydrogen is oxidised at the anode with the aid of a catalyst, usually platinum on carbon supports to expose it to the reactant, and the electrons lost travel through the external load circuit. The hydrogen ions (protons) then move through the electrolyte membrane to form water on the oxygen side, where the oxygen is reduced by the platinum catalyst and the electrons arriving from the hydrogen. The current flowing around this circuit provides the energy used to make the car move.
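Splitting the overall reaction across the two electrodes of a PEM cell gives the half-equations:

Anode (oxidation): 2H2 → 4H+ + 4e−

Cathode (reduction): O2 + 4H+ + 4e− → 2H2O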

So, Why Don’t We Use Them?

Why aren't hydrogen fuel cells used in cars? Well, there are many reasons, and despite advantages such as short refuel times (the Mirai takes four minutes to fill up enough for 400 miles, according to Jean-Michel Billig, Stellantis CTO) and longer range, research into these cars has stagnated. There are three main reasons for this.

Infrastructure

Simply put, the infrastructure just isn’t there for hydrogen cars. As of December 2023, there were “16 operational hydrogen fuel stations across the UK” (Pulse Energy), and that number was going down due to low demand. If you can’t refuel a car, you can’t buy it, and nobody wants to make a car that nobody is buying.

Electric vehicles have overcome this issue over time, with the number of chargers across the country growing rapidly (72,594 as of November 2024, according to Zapmap), but it required heavy investment from the government and companies for the infrastructure to be jumpstarted, which isn't likely for hydrogen vehicles.

Production cost of hydrogen

Hydrogen production is also energy-intensive. It is mainly produced by natural gas reforming, where high-temperature steam is reacted with methane to produce hydrogen (steam-methane reforming), after which carbon monoxide and steam are reacted over a catalyst to form carbon dioxide and more hydrogen (the water-gas shift).

Steam-methane reforming:

CH4 + H2O (+ heat) → CO + 3H2

Water-gas shift:

CO + H2O → CO2 + H2 (+ small amount of heat)

This process emits carbon dioxide, which isn’t great for the environment. This counteracts the “greenness” of using hydrogen in order to not emit carbon dioxide whilst driving.

Although there is a green way to produce hydrogen (through electrolysis), it’s very energy inefficient and is not widely available.

Storing hydrogen is also expensive: as a gas it needs high-pressure tanks (350–700 bar), as a liquid it needs very cold temperatures (−253°C), and it requires careful isolation due to its flammability.

Battery-powered vehicles are better

Unfortunately for hydrogen, battery-powered vehicles are simply better suited to the personal vehicle market. Battery technology has improved steadily over time, with a more diverse range of models, better infrastructure and lower prices, making them more attractive to consumers and producers alike. The development of these vehicles snowballed, letting them take over the mainstream personal vehicle market thanks to economies of scale and regulatory incentives. It is also far easier to convert electricity into stored battery power than to produce, store and transport hydrogen. Overall, the future for hydrogen-powered vehicles doesn't look too bright. Battery electric vehicles took the market by storm, leaving hydrogen fuel cell technology in the personal vehicle industry in the dust. Hydrogen fuel cell cars haven't received the same love as battery-powered cars, and now that there is a cleaner alternative to petrol and diesel, development of the hydrogen fuel cell for use in cars has stagnated, leaving them a thing of the past.

Introduction to Projective Geometry

Geometry is a branch of mathematics that deals with the properties of figures in the plane or in space. The most common type, taught in every school, is Euclidean geometry: the x-y plane or x-y-z space we are all very familiar with. It is based on the premise that space is flat, and it includes measurements like length and angle. I am sure we all remember the equation for the distance between two points, or the gradient of perpendicular lines. But projective geometry is a type of non-Euclidean geometry with a completely different view of space, so we can forget everything we learned about geometry and start from the basics, again. Imagine looking at a picture: the picture itself is a 2D plane with lengths and angles all distorted from the original view, yet we can still immediately recognise the geometrical structure of the 3D space. How is this possible? It must be because there are geometrical properties that remain unchanged through the process. The picture can be seen as a projection of the original view, and projective geometry is the study of properties that are invariant under projections. Or, as artists call it, perspective: everything looks smaller in the distance.

Projective Transformations

Projective transformations allow us to project points from one plane to another. If there is a point P on plane π and you want to project it onto plane π′, choose any point in space, O, and draw a line through O and P; the intersection of line OP and plane π′, the point P′, is the projection of P onto π′. Similarly, because a line is a collection of points, lines can also be projected. Note that the planes do not need to be parallel to each other, although they are depicted as such in the diagram. This is basically how you take pictures: points in space are projected onto the plane of the camera. Remember from school that parallel lines never intersect? Well, here they do: parallel lines are seen to intersect at a point infinitely far away. The point O can also be infinitely far away, making the projection rays parallel to each other; that is called a parallel projection.

Cross Ratio

But what is the thing that does not change after projective transformations? If you look closely at the diagram above, you might notice that CA/CB = C′A′/C′B′: the ratio CA/CB has stayed constant after the projection. If you did see that, congratulations, you just discovered the invariance of the cross ratio. The cross ratio is defined as x = (CA/CB) / (DA/DB), where A, B, C, D are points on a line and direction matters; for convenience I will denote it x = (ABCD). It is constant for any four points undergoing a projective transformation. This can be proven simply using the area of a triangle, which we can find in two ways: ½b×h and ½ab·sinC. So, for triangles OCA, OCB, ODA and ODB we have:
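Equating the two area formulas for each pair of triangles (the perpendicular height h from O to the line is common to all four, so it cancels):

CA/CB = area(OCA)/area(OCB) = (OA·sin∠AOC)/(OB·sin∠BOC)

DA/DB = area(ODA)/area(ODB) = (OA·sin∠AOD)/(OB·sin∠BOD)

x = (CA/CB)/(DA/DB) = (sin∠AOC·sin∠BOD)/(sin∠BOC·sin∠AOD)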

We can see that the cross ratio depends only on the angles at O, and since these angles stay the same under a projective transformation, the cross ratio must also be constant. For the earlier case with only three points, since the planes are parallel we can say that point D is at infinity, therefore DA/DB = 1 and the cross ratio reduces to CA/CB. When you think about it, in a picture all the objects are scaled down by a certain ratio; that is the idea.

When the cross ratio of four points is −1, that is, when CA/CB = −DA/DB, points C and D divide the segment AB internally and externally in the same ratio, and C and D are said to harmonically divide the segment AB. For this to happen, point C lies between A and B and point D lies outside AB. An important conclusion is that if point D is at infinity, point C is the midpoint of AB. A complete quadrilateral can be used to generate points with cross ratio −1. A complete quadrilateral is a figure formed by any four straight lines, no three of which pass through the same point, like the figure here. If we draw all three diagonals, the four points on each diagonal have a cross ratio of −1. To prove this, we simply observe that, projected from point E, x = (IFHD) = (ABCD); projected from point G, x = (IFHD) = (BACD). So (ABCD) = (BACD). Since (ABCD) = 1/(BACD), we have the equation x = 1/x, so x = ±1. Because point C is between A and B, the cross ratio must be negative, so x = −1. Similarly, x = −1 for the other two diagonals as well.
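As a quick numerical check of this invariance, here is a short Python sketch (the centre O = (2, 5) and the four points are arbitrary choices, not taken from the diagrams) that projects four points on the line y = 0 onto the line y = x and compares cross ratios:

def cross_ratio(a, b, c, d):
    # Cross ratio x = (CA/CB) / (DA/DB), with signed coordinates on a line.
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

def project(x):
    # Central projection of the point (x, 0) from the centre O = (2, 5)
    # onto the line y = x; returns the x-coordinate of the image point.
    t = 3.0 / (x + 3.0)   # parameter where the ray O + t((x, 0) - O) meets y = x
    return 5.0 - 5.0 * t

A, B, C, D = 0.0, 1.0, 3.0, 6.0                # four points on the line y = 0
imgs = [project(p) for p in (A, B, C, D)]

print(cross_ratio(A, B, C, D))                 # 1.25
print(cross_ratio(*imgs))                      # 1.25: unchanged by the projection
print((C - A) / (C - B), (imgs[2] - imgs[0]) / (imgs[2] - imgs[1]))
# 1.5 vs 2.0: the simple ratio CA/CB changes, but the cross ratio does not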

Beyond

Projective geometry is another way of representing the world; it is like a more generalised version of Euclidean space. The above are only the basics. There is so much more to projective geometry: Desargues' Theorem, Pascal's Theorem, Brianchon's Theorem, representing conic curves, and hyperboloids. But this margin is too narrow to contain it. ~

BEHAVIOURISM

Control is an important theme in the world we live in today, but many of us are not aware of the ways in which every major institution, from the government to your school to multibillion-dollar companies, controls us in our day-to-day lives and on a large scale. When we delve into the details, the revelations may shock you. In this article we will delve into Behaviourism, a far-reaching concept that has disrupted and revolutionised the field of psychology, quickly becoming the dominant school of thought and even a philosophical doctrine. Its applications are already prevalent in today's society, in areas including technology, education, law enforcement and the general control of the masses. So continue reading to find out how and why humans are made to act a certain way without even knowing it.

What is Behaviourism?

First, we need to understand what Behaviourism is. Conceived by its founding father John B. Watson in 1913, Behaviourism can be simply defined as the theory that human or animal behaviour is based on conditioning rather than on thoughts or feelings (Cambridge Dictionary, 2019). The 'conditioning' mentioned here refers to operant and classical conditioning, the two different methods of controlling behaviour. Ivan Pavlov, a Russian physiologist (1849–1936), published work on classical conditioning, in which an organism exposed to a stimulus (say, dogs salivating at the sight of food) displays a reflex; once that stimulus has been associated with a new stimulus (the ringing of a bell before showing the food), the same reflex can be triggered by the new stimulus alone, even after the original stimulus is removed (ringing the bell would still make the dogs salivate, even if no food was ever brought).

John B. Watson

John B. Watson claimed that the then-prevalent study of the mind was fundamentally impossible, and that only observable behaviour can be studied, and thus controlled, by deriving insights from the data. His work on Behaviourism influenced the way psychological experiments are carried out today, the way behavioural therapy is conducted, classroom teaching, and the study of environmental influences on human behaviour. Watson once claimed that if he were given a dozen healthy infants to raise in his own specified environment, he could train each one to become any type of specialist he might fancy: a doctor, a lawyer, or even a thief, although he admitted he was 'going beyond the facts' with this claim. Watson was no stranger to controversy: the credibility of his theories was criticised by R. Dale Nance (1970) on many grounds, one being that they were founded on Watson's tough upbringing on a farm in South Carolina without a father, a rude awakening to the world and a loss of childhood that influenced his treatment of children as young adults and left him with little understanding of the nature of an unaffected child, or of the intricacies of child-rearing. One experiment which aroused significant controversy around him was the 'Little Albert' experiment, his unethical study which set out to prove classical conditioning. Watson and his assistant Rayner were able to condition a nine-month-old infant (named 'Little Albert') to be afraid of a white rat.

Initially, the boy wasn't afraid, but they began clanging an iron rod whenever they showed him the rat. The noise, an aversive stimulus, frightened Albert and made him cry, and over time the two stimuli (the rat and the clanging of the rod) became associated with each other, so that when the rat alone was shown to the infant, he immediately cried. This was widely considered a successful demonstration of classical conditioning in humans and has been cemented in scientific history. However, the fact that Watson never had the time to decondition the child (Albert was taken out of town immediately afterwards), together with the later discovery that Albert was reportedly mentally disabled, raised many ethical questions in the scientific community, causing people to question the integrity of the results and stirring up controversy.

B.F. Skinner

B.F. Skinner, often ranked the most influential psychologist of the 20th century and a pioneer of Behaviourism alongside Pavlov and Watson, developed his own type of behaviourism called 'Radical Behaviourism', which made a key distinction between behaviours: respondent behaviours are displayed reflexively in response to certain stimuli, while operant behaviours are displayed intentionally as a result of their consequences (positive and negative reinforcement). For example, a rat that learns to press a food button only while a sound is playing, because pressing is rewarded only then, is displaying operant conditioning. Skinner carried out most of his work in what he called a 'Skinner Box', a chamber used to record the behaviour of an organism undergoing operant conditioning (see above). Now that we've touched on the basics, how does this apply to us? We begin by looking at technology, specifically social media, the most relevant and perhaps most disturbing use of behaviourism. Social media applications such as Snapchat, TikTok and Instagram all have the same two purposes. The first is to give you a platform to connect with others and communicate with your social circle. The second, however, is to ward off any possibility of you ever leaving the app and to consume as much of your time as humanly possible.

How Social Media Controls You

How do they do this? The first way is by using a psychological phenomenon called herding. This is when, instead of acting perfectly rationally and solely on the information available to you, individuals act based on the actions of others, often following large groups into the same decisions due to the innate human desire to be part of a community; for example, buying a trending jacket rather than the one you liked the look of, in order to fit in with social norms. On social media, this translates into staying on the site for long periods simply because your friends are there and active, so you feel the need to stay caught up due to FOMO, the Fear Of Missing Out. It also drives traffic to certain types of content: some videos have many likes and similar ones far fewer, yet users will subconsciously choose to like the video with more likes, even if they prefer the other. This is a prime example of herding, and all 50 of the people I interviewed reported that they had experienced this effect, driven by the desire to feel part of a group.

The second way is through a form of physical conditioning. When you scroll, chances are you are sitting and therefore sedentary, so your body winds down as you spend time inactive. This has major implications, because even when you get bored of social media, or consider leaving to do something else, chances are that something requires energy, energy you no longer have after so much time spent staring at a screen while sedentary. As a result, you remain on the app, or you leave it but are now so mentally and physically drained that you do nothing productive that requires brainpower or output.

The third way is by exploiting your nature. Platforms like X are notorious for this, designing algorithms that deliberately surface outrageous content that is overwhelmingly offensive or shocking to you. This elicits a much more passionate response, retaining your attention for longer and sending you down rabbit holes arguing with people you think are wrong, or with posts claiming things that simply aren't true. Such posts are known as 'ragebait', made purely to troll you and get a reaction out of you.

The final way is through scrolling itself. You will notice that almost all platforms have adopted scrolling as their main feature (think Instagram Reels, TikTok, Snapchat Spotlight). This is by far the most powerful form of psychological manipulation used on these sites. A major way companies keep you present on their apps or websites is through an emphasis on good UX. UX is short for user experience and refers to design focused on the way a user interacts with an interface. The experience must be seamless, so that at any point the easiest option for the user is to remain on the site or app.

Therefore, when designer Aza Raskin developed the concept of infinite scrolling, it changed the world of technology forever. It works by greatly reducing the friction of staying on social media: rather than clicking on each page and waiting for it to load, which can take a while and becomes tedious on poor Wi-Fi, you can scroll down with the mere swipe of a finger and virtually no effort, and your brain is instantly rewarded with a rush of dopamine, the neurotransmitter of desire, as you experience another funny meme. There is no friction whatsoever. However, as you overload your nucleus accumbens (the region of the basal forebrain responsible for converting motivation into action) with large quantities of dopamine, it becomes desensitised over time, requiring larger and more frequent doses for the very same reward, which develops in you a short attention span and addictive behaviour towards social media, as you need more frequent and potent doses to achieve the same proverbial 'high'.

The results of infinite scrolling were so adverse that Raskin founded the Centre for Humane Technology, a non-profit organisation that pushes leaders to be accountable for, and address, the consequences of technology, alerts people to its impact on society, and encourages humane technology that is socially responsible and avoids exploiting users. He also apologised for inventing infinite scrolling in the first place, which is alarming to say the least. Furthermore, if you recall the work of B.F. Skinner, one concept he demonstrated was variable-ratio reinforcement, in which operant behaviours such as scrolling are reinforced almost irreversibly through positive reinforcement, i.e. a reward (of dopamine), delivered at random intervals. In the Skinner box, for example, a pigeon given food pellets for pressing a lever, but only at random intervals rather than every time, comes to desire and chase food far more than if the pellets arrived at regular, predictable intervals. This causes your dopamine to skyrocket, because you can't predict the outcome of the perceived positive activity (scrolling, and whether the next video will be good or not), and so you begin obsessively scrolling, noticing yourself becoming bored but still unable to exit the app because the next swipe might be a huge reward. Not to mention you start losing track of how much time is passing, and the fact that social media apps hide the clock in the top corner by default doesn't help matters. Having no way of knowing when you will 'win' and be rewarded also makes this addiction stand the test of time in almost all cases, which bears a striking resemblance to, you guessed it, gambling.

The Final Boss of Behavioural Conditioning and Manipulation

Gambling is considered by many to be the pinnacle of all addictions and the final boss of effective operant conditioning. In terms of being the easiest activity to start and the hardest to stop, gambling beats drugs and even alcohol. As a small test of this, I asked 50 people in my school which addiction they thought was worse: 44 said gambling and just 6 said alcohol. This is not only due to the high availability of gambling games to people of all ages, but also to the fact that you can easily continue your life as a gambling addict and remain in the cycle for extreme lengths of time, with significantly less bodily deterioration than alcohol or drugs cause. When you enter a casino, your brain is overwhelmed by the myriad of games on offer, almost all of which encourage the placing of hefty sums of money, and yet you might be surprised to hear that the most successful game in the entire casino is the one that asks for mere coins or single notes rather than wads of cash: the slot machine, raking in over 65% of the average American casino's revenue. (U.S. military-run slot machines alone earn $100 million a year from service members overseas.) Consider the standard slot machine. Players are encouraged to put a minuscule amount of currency into the slot and spin the wheels; if the same symbol shows three or four times across the wheels, you win a prize. The machine uses a random number generator (RNG) to determine whether you win and how big the win is. Players don't often win, because all slot machines apply Skinner's operant conditioning principle of variable-ratio reinforcement: players are kept in a perfect balance between tension, as they try to win, and release, as they reap the rewards of a win, never winning enough to be satisfied and always losing money overall, yet still trying to make it back, which is perfect for the casino to line its pockets at the player's expense. This exploits yet another psychological tendency, loss aversion: players can't leave the machine after losing, they must end on a win, so they keep playing. It is not uncommon for a player to enter a flow state while gambling, faced with a goal (to win) that is just out of reach yet seemingly attainable, in an activity that is entertaining and instantly gratifying when successful but also a challenge. This causes hours to pass like minutes, and the player to become extremely dedicated to, and focused on, beating the machine. This is why, in the long run, 'the house always wins', and this is how casinos make ludicrous profits from exploiting human behaviour.
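A variable-ratio schedule is easy to simulate. In this toy Python sketch (the 10% win chance, one-coin stake and eight-coin payout are invented numbers, not real slot-machine odds), the gaps between wins are unpredictable, yet the player reliably loses overall:

import random

random.seed(1)
balance = 0
wins = []

# Variable-ratio schedule: each spin pays out with a fixed probability,
# so the number of spins between rewards is unpredictable.
for spin in range(1, 1001):
    balance -= 1                   # each spin costs 1 coin
    if random.random() < 0.10:     # 10% chance of a win (invented odds)
        balance += 8               # 8-coin payout (invented)
        wins.append(spin)

gaps = [b - a for a, b in zip(wins, wins[1:])]
print("final balance:", balance)             # negative on average: the house wins
print("first gaps between wins:", gaps[:10]) # irregular and unpredictable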

The Government

It doesn't come as a surprise that the government uses behavioural insights to inform its legal and prison systems. The legal system uses operant conditioning to keep us in line, as breaking the law results in punishment. If you are caught speeding, you may be jailed overnight and fined; once you pay the fine and receive the points on your licence, your freedom is returned to you, leaving you perhaps relieved you don't have to live in jail, and certainly afraid to ever speed again. This is an example of stopping negative behaviour resulting in the removal of a negative stimulus, conditioning you to drive responsibly. The prison system works similarly: you may meet with psychologists regularly to help rehabilitate you and prepare you for reintegration into the world, and cooperation with guards generally results in privileges, such as a TV, more time outdoors and potentially parole (early release from your sentence). Poor behaviour, however, will see these privileges removed and your sentence increased. This helps improve prisoner behaviour to an extent, as inmates are eager to leave the prison. Even in the case of COVID-19, many people who were initially averse to taking the vaccine were pushed into it, as governments around the world used negative reinforcement, simply banning those without a vaccine from travelling to their countries as tourists: a simple and effective method of encouraging vaccination and limiting the spread of the virus.

Positive effects

However, this story is not all doom and gloom, as behaviourism has had a positive effect on education and other areas.

Educational institutions such as schools, colleges, sixth forms and universities use behavioural conditioning to get the best from their students. They use punishments to discourage bad behaviour (e.g. vaping), negative reinforcement to encourage conscious efforts to improve (e.g. withholding the privilege of leaving campus while punctuality is low), and positive reinforcement to acknowledge and reward those who consistently display the traits they desire. This results in schools full of students who want to thrive of their own accord. Skinner, when discussing how best to approach education, recommended choosing a topic, breaking it down into manageable tasks for the student, letting the student attempt each task, and positively reinforcing correct work. By continuing this process, ensuring the student is successful at each step before moving on, the topic can then simply be revisited every so often to ensure it remains in memory.

Another positive is that the government levies taxes on addictive or essential goods with negative externalities (negative spillover effects), such as fuel, cigarettes and alcohol, to raise money to improve the country, essentially deriving funding from people's addictions. Because these goods are price inelastic (increasing the price results in a less-than-proportional decrease in demand), people buy the product regardless of price, so profits can be maintained while money is raised for the government to spend on improving the country. One could argue, however, that the negative externalities of the goods might at times outweigh the revenue raised. Behaviourism is a vast field of psychology whose tools have huge implications: they can be used to inflict harm, as with social media, or to push for positive change, as in education. The work of Skinner, Pavlov, Watson and many other psychologists has undoubtedly had an astronomical effect on the world today, and hopefully, with time, their behavioural insights will be put to practical use for the greater good. ~

Words - Tommy Wu

Materials of Denture: What Is the Best Choice?

Have you ever worn dentures? I got my first denture at 14 years old, and it made me wonder what the best denture material would be if I ever needed them again. Perhaps you are wondering the same thing if you are looking ahead to your oral health. There are several options, each with advantages and disadvantages. Here, I will provide some information on the most widely used denture materials, how they differ, and which may be the best option for various requirements.

Acrylic Resin Dentures

Acrylic resin is the most common material used to make dentures nowadays, valued for its convenience, affordability and ease of fitting. Acrylic resin is light and can be accurately moulded to suit a patient's mouth, making it a popular choice for complete dentures, where fit with the mouth is critical. The flexibility of acrylic also allows for relatively simple repairs and modifications should the mouth alter over time. Acrylic dentures are an affordable choice for many patients, simply because they provide an acceptable compromise between cost and function. For people looking for an efficient and cost-effective way to treat tooth loss, acrylic dentures continue to provide a sensible and easily obtainable remedy.

Porcelain Dentures

Porcelain is typically the material of choice for those concerned about the look of their dentures. Porcelain dentures are designed to match the look and colour of the user's own teeth as closely as possible; they have a lustrous appearance and are hard and resistant to stains. There are disadvantages, though: porcelain is heavier than acrylic and can break if dropped. It also places additional pressure on the gums, which may be uncomfortable. Despite these drawbacks, porcelain remains a popular choice for those who want the look of natural teeth.

Metal-Based Dentures

Metal dentures (see below) are most commonly made of cobalt-chromium alloy, making them tough and durable. They are stronger and slimmer than acrylic and provide more support for the remaining natural teeth, helping to avoid further damage. However, some people are allergic to the metal, and the visible metal clasps are not aesthetically pleasing. Metal dentures are expensive but a suitable choice for people who want strength and stability.

Flexible Dentures

Flexible dentures are made of nylon and are therefore more comfortable, as the material moulds to the contours of the mouth. They are less noticeable because they have no metal clasps on show, which makes them suitable for people who worry about the look of their smile. Flexible dentures are not as strong as metal or porcelain, are easier for bacteria to build up on, and are harder to fix if they break. For look and feel, however, flexible dentures are a suitable choice.

The Future of Denture Materials

Looking ahead, dentures will benefit greatly from a range of technological advancements. Developments like 3D printing are already changing the manufacturing process by enabling quicker turnarounds and more individualisation to meet the needs of the specific patient. In addition, ongoing work on more resilient and biocompatible materials should significantly enhance the lifespan and functionality of dentures. Of particular interest is the new hybrid model combining traditional dentures and dental implants for improved patient satisfaction, stability and comfort. A key concern going forward will be making these advancements both accessible and affordable, particularly for lower-income and underserved populations.

Choosing the best denture material depends on a number of factors: budget, lifestyle and individual preference. Acrylic resin is cheap and can be moulded to suit any individual, making it the best choice for most people. Porcelain gives a realistic appearance but is heavy and fragile. Metal dentures are extraordinarily strong but perhaps not the best choice for people concerned about the way they look. Flexible dentures offer discreetness and comfort at the cost of strength. The best choice always depends on individual preference, and you should seek the advice of a dental professional on what would be best for you. ~


Mathematical Approaches for Reducing Data Dimensionality in Machine Learning

Machine learning aims to imitate human learning, gradually improving accuracy. This requires not only immense processing power but also vast amounts of data to be learned. This data often exists in high-dimensional forms, including text, images, sound, and so on. Processing such high-dimensional data efficiently is challenging; reducing its dimensionality offers significant benefits, enabling deeper analysis, faster computation, and improved model performance. But how is this raw data transformed into low-dimensional numerical representations? How does mathematics facilitate this data handling process?

Word Embeddings

Natural language processing (NLP), a major area of machine learning, enables computers to understand and process human language. Since computers don't understand language as humans do, they rely on word embeddings, which are representations of words as real-valued vectors. Word embeddings convert words into numerical form while preserving semantic and syntactic relationships. The word embedding process begins with preparing an embedding matrix in which each column corresponds to a word in the vocabulary. The values in the matrix usually start out random, but through training on large text corpora they are refined to capture the unique characteristics of each word. After training, each column represents the vector embedding of a word. These vectors live in high-dimensional spaces, capturing features like word meaning, context and interrelationships. For instance, GPT-3's word embeddings have 12,288 dimensions, which is strikingly large considering that GPT-2's are at most 1,600 dimensions. GPT-4, which is widely used today, is believed to have at most around 16,000 dimensions. This large number of dimensions is used to capture the complex relationships between words, but it demands enormous computational power and cost: to train GPT-3 from scratch, thousands of GPUs need to run for a month, which costs over $4.6 million. Although such vectors are high-dimensional, they can be projected into lower dimensions for visualisation and better understanding. For example, when visualised in three dimensions, the vector difference between "man" and "woman" is likely to be akin to that between "king" and "queen" (Figure 1), suggesting that a specific direction in the vector space encodes gender information. Furthermore, word embeddings enable computations such as measuring word similarity using cosine similarity, or performing vector arithmetic to find semantic relationships (e.g. queen − king ≈ woman − man). These mathematical properties make word embeddings critical to modern NLP.
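These properties are easy to play with in Python. In the sketch below, the three-dimensional "embeddings" are invented toy numbers purely for illustration (real embeddings have hundreds or thousands of dimensions learned from data):

import numpy as np

# Toy 3-dimensional "embeddings": invented for illustration only.
emb = {
    "king":  np.array([0.8, 0.65, 0.1]),
    "queen": np.array([0.8, 0.05, 0.1]),
    "man":   np.array([0.3, 0.7,  0.2]),
    "woman": np.array([0.3, 0.1,  0.2]),
}

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1 means same direction.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Vector arithmetic: king - man + woman should land near queen.
result = emb["king"] - emb["man"] + emb["woman"]
print(cosine_similarity(result, emb["queen"]))   # close to 1.0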

Principal Component Analysis (PCA)

Principal component analysis is a widely used mathematical technique for dimensionality reduction that simplifies complex, high-dimensional datasets. PCA transforms the original data into a new set of uncorrelated variables, called principal components, by identifying the directions in which the data varies the most. The variables must be uncorrelated because high correlation among independent variables can be problematic for causal modelling. These principal components capture the greatest variance in the data, meaning the most important information is retained while irrelevant or redundant information is discarded. For example, in a dataset with 10 features, PCA might reduce it to two or three principal components for ease of analysis, visualisation and machine learning, avoiding overfitting and excessive computational cost. PCA is implemented with the help of linear algebra and matrix operations, and it transforms the original dataset into a new coordinate system structured by the principal components. To find the principal components, PCA requires the covariance matrix, eigenvectors and eigenvalues. The covariance matrix is a square matrix that shows how the variables of the original dataset vary from the mean with respect to each other. For a dataset with three variables, it looks like this:
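Σ =
| Var(x1)      Cov(x1,x2)   Cov(x1,x3) |
| Cov(x2,x1)   Var(x2)      Cov(x2,x3) |
| Cov(x3,x1)   Cov(x3,x2)   Var(x3)    |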

This is a 3×3 covariance matrix of a dataset. The diagonal entries are the variances of each variable, whereas the off-diagonal entries are the covariances between pairs of variables. In general, for a dataset with n features, the covariance matrix is n×n.

Eigenvectors and eigenvalues are calculated from this equation.

Σv = λv

where Σ is the covariance matrix, v is an eigenvector, and λ is the eigenvalue associated with that eigenvector.

In PCA, eigenvectors and eigenvalues have a special meaning: eigenvectors indicate the directions of variance in the data, and eigenvalues quantify the amount of variance explained along the corresponding eigenvectors. Imagine a dataset with multiple features mapped out as a multi-dimensional scatterplot; the eigenvectors give the directions of variance in that scatterplot, and the eigenvalues measure how much variance lies along each of them. Two major components are calculated in PCA (Figure 2): the first principal component (PC1) and the second principal component (PC2). PC1 is the direction in space along which the data points have the highest variance; the larger the variability captured by PC1, the more features are retained from the original dataset. PC2 accounts for the second-highest variance in the dataset and is orthogonal to PC1, since it must be uncorrelated with PC1. The principal components are given by this equation:

Principal component = (Eigenvector) × (Original data)

Therefore, PC1 is calculated by multiplying the original data by the eigenvector with the largest eigenvalue, and PC2 is calculated using the eigenvector with the second-largest eigenvalue.

Finally, the data is transformed into the new coordinate system defined by the principal components, creating a new dataset that captures most of the information but exists in fewer dimensions. PCA can be used in many situations. For instance, in image processing, high-resolution images are made up of millions of pixels; applying PCA reduces the pixel data to a smaller dataset that captures essential visual patterns, such as edges and textures, while excluding noise and redundant information. This means that PCA not only reduces computational demands but also improves model performance by focusing on the most relevant data features. Built on linear algebra and eigendecomposition, PCA is a fundamental tool in machine learning and data preprocessing (see image above).
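A minimal NumPy sketch of these steps, using random numbers purely as a stand-in for a real dataset:

import numpy as np

# Stand-in dataset: 100 samples with 10 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))

# Step 1: centre the data and form the covariance matrix.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)          # 10 x 10 covariance matrix

# Step 2: eigendecomposition; eigenvalues quantify the variance
# explained along each eigenvector.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]       # largest variance first

# Step 3: project onto the top two eigenvectors to obtain PC1 and PC2.
components = eigvecs[:, order[:2]]
X_reduced = Xc @ components             # shape (100, 2)
print(X_reduced.shape)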

t-Distributed Stochastic Neighbour Embedding (t-SNE)

t-SNE is a non-linear dimensionality reduction algorithm for machine learning, embedding high-dimensional data in a low-dimensional space of two or three dimensions for visualisation. Specifically, it models each high-dimensional object by a two or three-dimensional point in such a way that similar objects are modelled by nearby points and dissimilar objects are modelled by distant points with high probability.

The t-SNE algorithm finds patterns in the data based on the similarity of data points' features; the similarity of two points is calculated as the conditional probability that point α would choose point β as its neighbour. Conditional probability (the probability of A given B) is:
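P(A|B) = P(A ∩ B) / P(B)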

t-SNE then tries to minimise the difference between these conditional probabilities (or similarities) in the higher-dimensional and lower-dimensional spaces, so that the data points are represented as faithfully as possible in the lower-dimensional space. It contributes to the creation of comprehensible visualisations of complex data. For example, Figure 3 shows t-SNE applied to cytometry data: the more iterations are performed, the clearer the dataset's characteristics become. Although t-SNE has some limitations, such as computational cost and potential difficulty in preserving global relationships, it is an innovative algorithm that overcomes the difficulty of projecting high-dimensional data onto a lower-dimensional space.
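In practice, t-SNE is rarely implemented by hand. A minimal sketch using the scikit-learn library (with random stand-in data; the perplexity shown is just a common default, not a recommendation) looks like this:

import numpy as np
from sklearn.manifold import TSNE

# Stand-in high-dimensional data: 200 points in 50 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))

# Embed into 2 dimensions; perplexity balances local vs global structure.
X_2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(X_2d.shape)   # (200, 2): ready to plot as a scatter diagram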

SUMMARY

Machine learning has a longer history than we might think. Although the PCA algorithm is widely used today, its earliest version was, surprisingly, proposed in 1901 by Karl Pearson. This historical background shows that mathematics is the foundation of machine learning. In spite of the recent remarkable advancements in computing power, the need for efficiency and simplicity remains. Mathematical approaches like PCA and t-SNE not only compress data for faster processing but also significantly improve the understandability of models. These algorithms transform large datasets into lower dimensions that are understandable by both computers and humans, just as a map does by extracting and highlighting the important features of a complicated environment. ~

NERVE AGENTS

Nerve agents are a group of synthetic chemicals, classified as organophosphates, that are designed to disrupt the normal functioning of the nervous system. They are highly toxic and act by interfering with the transmission of nerve impulses. Nerve agents are considered some of the deadliest chemical weapons ever developed and have been used in both warfare and acts of terrorism. These agents are usually colourless and odourless and can exist in liquid or vapour form, making them difficult to detect. An organophosphate is a compound containing a phosphoryl (P=O) group, in which the phosphorus atom is covalently bonded to oxygen atoms and typically also to organic groups (such as alkyl or aryl groups); the general structure can be written O=P(OR)3. These agents work by disrupting the way neurons transfer messages across the synapses between one another. The synapse is the very small gap between neurons. Neurons transfer information via electrical impulses, but these impulses can't cross the empty space of the synapse. To solve this problem, small chemicals called neurotransmitters diffuse across the gap, and on arrival at the post-synaptic neuron they stimulate another impulse. This process happens almost instantaneously and repeats hundreds of millions of times, until the impulse reaches an effector. Once the neurotransmitters have attached to the receptors on the post-synaptic neuron, they are broken down by enzymes in the synapse to avoid overstimulation. One such neurotransmitter is acetylcholine, which is made at the ends of nerve cells when the enzyme choline acetyltransferase catalyses a reaction between choline and an acetyl group.

Acetylcholine

Acetylcholine is broken down by the enzyme acetylcholinesterase. A nerve agent such as Novichok binds to this enzyme's active site, meaning acetylcholine can't be broken down while the site is occupied; with organophosphates this inhibition is effectively irreversible. The acetylcholine therefore keeps binding to its receptors and overstimulates the effector (which can be a muscle or a gland). Poisoning by a nerve agent leads to constriction of the pupils, salivation and convulsions, as well as involuntary urination and defecation, with the first symptoms appearing seconds after exposure. Death by asphyxiation or cardiac arrest may follow within minutes due to the loss of the body's control over its respiratory muscles. There is, however, an antidote called atropine, which works by blocking acetylcholine receptors, drastically reducing stimulation of the effector.

Novichok

Novichok refers to a class of nerve agents developed by the Soviet Union during the 1970s and 1980s as part of a secret chemical weapons programme. The name "Novichok" means "newcomer" in Russian, reflecting the fact that these agents were designed to be more potent and harder for NATO to detect. Novichok agents were developed by Soviet chemists including Vil Mirzayanov, a scientist who later exposed the programme's existence. The Soviet Union invested heavily in chemical weapons research during the Cold War, aiming to outdo existing nerve agents like VX and sarin. In terms of toxicity, VX (a British-discovered agent from the 1950s) is generally considered the most potent nerve agent, followed closely by the Novichok agents. Tabun, sarin and soman (all German, discovered in the 1930s and 1940s) are highly toxic but somewhat less potent than VX, as are the Chinese 'P and H series'.

Vladimir Putin and the clean-up at Salisbury

The potency of Novichok was demonstrated in March 2018 in Salisbury, where a former Russian double agent, Sergei Skripal, and his daughter, Yulia Skripal, were poisoned with it. Both survived, but tragically a local woman, Dawn Sturgess, died after spraying the agent on her wrist (it had been disguised in a perfume bottle). Another notable poisoning attributed to the Russian state came in 2020, when Alexei Navalny, a prominent Russian opposition leader, anti-corruption activist and critic of Putin, was poisoned with a Novichok agent (see right). Further back, the Tokyo subway sarin attack of 1995 left 13 people dead and over 1,000 injured after a cult group released the agent underground. Nerve agents were discovered accidentally in 1936 by the German chemist Gerhard Schrader. He was researching insecticides (which are often organophosphates) for the chemical giant IG Farben, which also synthesised mustard gas, explosives and the infamous Zyklon B gas that would be employed in the Holocaust a few years later. Schrader experienced the effects of his new nerve agent (subsequently named Tabun) first hand when a small drop of it spilled onto his lab bench. He didn't touch or go near it, but within minutes he and his lab assistant began to experience miosis (constriction of the pupils), dizziness and severe shortness of breath. It took them three weeks to recover fully.

The future of nerve agents is laced with both challenges and hope. While international treaties like the Chemical Weapons Convention and advances in detection technology provide some optimism, the potential for new chemical weapons development, use by non-state actors, and violations of disarmament agreements remain significant threats. Continued diplomatic pressure, scientific advancements in detection, and international cooperation will be crucial in ensuring that nerve agents become a relic of the past, rather than a weapon for the future. ~

FALSE VACUUM THEORY World Enders?

Typically, when we think of our problems, we naturally think small-scale, such as a test the next day. If we go more apocalyptic, we think of quasars or meteors, rogue black holes, rogue planets, and the list goes on. But these are all very large-scale wipe-outs. What if, instead, we focus on the atomic level?

The HIGGS-BOSON!

Although the Higgs boson seems harmless on its own, being part of the standard model of particles (it even has a Frank Ocean song named after it), it could kill us all with no warning. But how? As most of you will know, the Higgs boson is, very simply, responsible for mass through the Higgs field; however, that is not how it kills us all. Instead, it is something more despicable.

Standard model of elementary particles

ENERGY LEVELS

Since the beginning of the universe, it has been stable, but energy potential may ruin that, all thanks to the most popular physics buzzword: "quantum tunnelling". Every field wants to reach its ground state, where it is completely stable and has as little energy as possible. The best way to picture this is a ball on top of a hill: it wants to get to the bottom, where it is more stable, so it rolls down, and the bottom is the most stable point. We believe most of the universe has reached that state, sitting in its vacuum state with everything at ground level. This is where the Higgs comes into play. Imagine that the Higgs field is currently in a false vacuum, which we mistake for the ground state because it does not have enough energy to climb over to its true vacuum state. Via quantum tunnelling, however, it could still get there.
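A toy potential makes the hill picture concrete (this shape is purely illustrative, not the real Higgs potential):

V(φ) = λ(φ² − v²)² + εφ

For a small tilt ε, this curve has two dips: a slightly higher one (the false vacuum) and the lowest one (the true vacuum), separated by a barrier that the field can only cross by quantum tunnelling.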

THIS IS BAD

Now, this is where the name "false vacuum" comes from: once the Higgs reaches its true vacuum, it is theorised to stop exerting the Higgs field that gives particles mass. At that point, a bubble of true vacuum will expand through the universe at c, the speed of light. Why is this so bad? As mentioned before, the Higgs gives particles mass, and referring to the standard model diagram from earlier we can see that electrons have mass. If we get rid of that mass, the electron will no longer orbit, as its mass will be zero.

This means that every atom the bubble touches will become unstable, and the bubble will expand towards us at the speed of light, vaporising us within seconds and with no notice. This is much scarier than a mere meteor, which we could recover from; with a false vacuum there is no recovery, it is simply over. It is all hypothetical, though, so we may live, and the universe is so big that a decay may be happening entirely outside the observable universe, in which case we are safe, or so far away that we will be long dead before it arrives. ~

during the start of the Information Age. Beginning with magnetic tape (developed in 1951), data storage continued to evolve, and more complex technologies were created to accommodate the growing demand for data storage. These included the DVD, made in 1995, and cloud computing and storage (launched in 2006), where remote servers manage and protect your data. Demands are ever-growing, however, giving purpose to research into other, more unusual data repositories: quantum CDs, graphene-based SSDs, and the superman memory crystal, which has a lifetime that far outlasts all of the storage techniques listed above.

Superman Memory Crystal

Data is written as nanostructured dots within the glass via a femtosecond laser, a laser which emits ultrafast optical pulses of extremely short duration (10⁻¹⁵ seconds). A spatial light modulator shapes the light to create specific patterns of nanostructures within the quartz; a matrix of half-wave plates rotates the polarisation direction of the light to ensure it has the correct polarisation state when hitting the sample; and a Fourier lens focuses the femtosecond laser to ensure accurate etching of the glass. Writing of data is usually done on fused quartz.

Writing on fused quartz has several advantages. As well as being a great material to store data on, it can withstand temperatures of up to 1,000°C without deteriorating rapidly, so the crystal can preserve data for several future generations and be used for unmanned space exploration. A thin glass disc 12 cm in diameter can hold up to 360 terabytes of data, enough for 72 million photos; keep in mind that a DVD has the same diameter but can only store about 17 GB! The nanostructures in the glass can be preserved for several millions of years, provided the glass lattice remains intact. The "5D" property comes from adding size and orientation to the three standard spatial dimensions of the X, Y and Z planes (our 3D world): the orientation varies with the angle at which the light is shone when writing and the angle from which a user views the nanostructure when reading, while the size varies with the size of the nanograting when writing and the magnification of the microscope when reading. This multiplexing increases the amount of data stored in one spot, allowing a vast amount of information to be held in a stable medium, in addition to fast reading speeds.

Under the microscope

When the femtosecond laser is focused tightly onto the quartz, it produces nanostructures. When these are observed through a polarisation microscope, a phenomenon called birefringence occurs, allowing the 4th and 5th dimensions to be read. The quartz undergoes photon absorption, causing ionisation in the crystalline structure and forming a high-density electron plasma. Micro-explosions take place in the glass, forming high-contrast patterns that self-align as the silica relaxes. This process can take place several times if a user wants to carry out many read-write operations on the same glass.

Current progress and Future

These glass discs were recognised by Elon Musk, and in early 2018 one was sent to space on a 30-million-year orbit around the Sun, further proving its temperature resistance. Peter Kazansky and his team were the first to experimentally demonstrate the superman memory crystal, and SPhotonix are now commercialising it in the field of optical instrument component fabrication. Uses of this device include long-term data archiving to preserve important historical information; space exploration, since data can be read from it at any time; and possibly governmental data, since the current process of writing data to it is complex. Although still in its initial stages, it is projected to have a great impact on the future of data preservation.

Problems

Currently, the superman memory crystal is not perfect. The glass matrix takes a long time to write to and is prone to breaking if not handled properly. Although justifiable given its preliminary stage, the femtosecond lasering process is extremely expensive, and reading data back is difficult because specialised equipment is required. Moreover, one challenge scientists still need to address is reducing mistakes when data is read from the crystal: a 0.36% error rate was measured when reading 8,696 bits from the nanogratings, reduced to 0.22% when a different readout method from previous research was used. Only a few companies and universities (for example, Southampton University, Kyoto University and SPhotonix) are continuing this research and working to overcome the problems stated above.

Definitions

Birefringence – where a 'double image' is formed when light passes through a crystal, because the light entering the crystal is split into two beams with different polarisations. Multiplexing – the ability to store and retrieve multiple layers of data from the same physical space; in this case, it is done using the 4th and 5th dimensions.

Electron plasma – a short-lived state of matter consisting of many free-moving electrons. Optical instrument component fabrication – the manufacturing of fine-tuned optical instruments, such as microscopes. ~

The complex biology of Epigenetic Modifications, and how they cause diseases to be expressed in the genome

Epigenetics is the study of how your genes and the environment interact to change the way genes are expressed in the genome (the entirety of your genes). Epigenetic modifications relate to this by regulating gene expression through chemical changes to DNA. In this article, we will explore these modifications in detail and how they bring deadly diseases in your genes to life.

What are 'Epigenetic Modifications', and what types are there:

Epigenetic modifications change the genome without altering the DNA sequence, yet still affect gene expression. There are two main types of these modifications: DNA methylation and histone modification. Both play a pivotal role in how genes are expressed in the genome.

How does DNA methylation work, and how does this relate to diseases being expressed:

Within the mammalian genome, DNA methylation is an epigenetic mechanism involving the transfer of a methyl group onto the C5 position of cytosine (a nitrogenous base of DNA that is complementary to guanine). DNA

methylation is catalysed by a family of DNA methyltransferases (Dnmts) that transfer a methyl group from a molecule called S-adenyl methionine (SAM) to the fifth carbon of a cytosine residue to form 5-methylcytosine (5mc). The process of DNA methylation is mainly controlled by DNA methyltransferases, methyl-CpG binding proteins and other chromatin-remodelling factors. This process regulates gene expression by recruiting proteins involved in gene repression (e.g. PCR1/2 Polycomb repressive complexes) or by inhibiting the binding of transcription factor(s) to DNA. Aberrations in the DNA methyl system have an important role in human disease. For example, in cancer DNA methylation patterns are globally disrupted, with genome-wide hypomethylation and gene-specific hypermethylation events occurring simultaneously in the same cell. Loss of normal imprinting contributes to several other inherited

Epigenetics is the study of how your genes and the environment interplay to change the way in which genes are expressed in the genome (the entirety of your genes). Epigenetic modifications relate to this by regulating gene expression through chemical changes of DNA. In this article, we will explore these modifications in detail and how they bring deadly diseases in your genes to life.

genetic diseases in humans, including diseases such as: Beckwith-Wiedemann, Prader-Willi and Angelman syndromes.
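In mammals, methylation happens overwhelmingly at CpG sites, a cytosine immediately followed by a guanine. As a toy illustration of that one detail, finding candidate sites in a sequence might look like the sketch below; real methylation analysis works on bisulfite-sequencing data, not raw sequence scans, so this is purely illustrative.

```python
# Toy illustration: find CpG dinucleotides, the sites where mammalian
# DNA methyltransferases (Dnmts) add a methyl group to cytosine's C5.
def cpg_sites(sequence):
    """Return 0-based positions of the C in each CpG dinucleotide."""
    seq = sequence.upper()
    return [i for i in range(len(seq) - 1) if seq[i:i + 2] == "CG"]

# A made-up promoter-like fragment; dense runs of CpG ("CpG islands")
# near gene promoters are where hypermethylation tends to silence genes.
fragment = "ATTCGGCGTACGGATCGCGTA"
print(cpg_sites(fragment))  # -> [3, 6, 10, 15, 17]
```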

How does Histone Modification work, and how does this also relate to how diseases are being expressed:

Histones are proteins mainly found in eukaryotic cell nuclei. They provide structural support for chromosomes (by allowing the chromosome strands to wrap around them) and play a role in gene expression regulation, which is where modifications to them come into play.

Histone modification is an epigenetic mechanism, which interferes in the gene regulatory pathway. It’s a dynamic, reversible process with vital cellular consequences, including effects upon critical molecular pathways that may lead to cancer.

In detail, histone modifications are covalent and reversible post-translational modifications of the amino acids (amino acids join to form a polypeptide, which in turn can join to form proteins) that make up the core histone proteins in the nucleus. They are orchestrated by multiple enzyme-substrate complexes, which are responsible for the site-specific attachment and removal of chemical groups, such as methyl or acetyl groups.

There are four types of histone modification: acetylation, methylation, phosphorylation and ubiquitination. Acetylation removes the positive charge on the histones. Methylation transfers methyl groups onto the amino acids that make up the histones. Phosphorylation adds phosphate groups and establishes interactions with other histone modifications. Finally, ubiquitination refers to the process of adding (or removing) the small protein ubiquitin to histone proteins. Modifications to histones lead to the activation or repression of certain genes, affecting processes such as memory formation that are critical in neurological function and disease. Abnormal histone modification that represses genes which should be active is a common pattern in many genetic diseases.

Summary

In summary, epigenetic modifications alter which genes are activated or repressed without affecting the DNA sequence. Two such modifications have been discussed: DNA methylation and histone modification. DNA methylation works through methyl groups (–CH3 groups, a carbon atom bonded to three hydrogens) that act as signals along strands of DNA, turning some genes on and others off; this can contribute to disease when a disease-linked gene is wrongly activated. A similar route to disease exists for histone modification, which works by altering the structure of the histones that support the chromosomes containing our genes. This causes certain genes to be activated or repressed, and a wrongly activated gene can lead to a disease being expressed in the genome. ~

THE FERMI PARADOX What’s really out there?

Introduction

Dating from as far back as the ancient Greek and Roman civilizations, the subject of extraterrestrial life has been a heated topic of debate for humankind. Driven by the common human yearning to satiate curiosity and to venture forth into the unknown, mankind has continuously endeavoured to prove or disprove the existence of life beyond our home planet, Earth. Against this backdrop, the Italian-American physicist Enrico Fermi (see above) proposed an interesting paradox in the summer of 1950: the Fermi Paradox.

So what exactly is the Fermi Paradox?

The Paradox concerns the inconsistency between the lack of evidence for 'intelligent' extraterrestrial life and the apparently high likelihood of its existence (see Chain of Reasoning). The general argument is that, if the formation of complex, intelligent life is as readily permitted as the evidence available on Earth suggests, then the 'intelligent' extraterrestrial life we seek should be so common in our observable universe that it would be implausible for it not to have been detected yet.

‘There may be aliens in our Milky Way galaxy, and there are billions of other galaxies. The probability is almost certain that there is life somewhere in space.’

Chain of Reasoning

The following are some of the facts and hypotheses that together serve to highlight the apparent contradiction:

There are billions of stars in the Milky Way similar to the Sun.

With high probability, some of these stars have Earth-like planets in a circumstellar habitable zone.

Many of these stars, and hence their planets, are much older than the Sun. If Earth-like planets are typical, some may have developed intelligent life long ago.

Some of these civilizations may have developed interstellar travel, a step humans are investigating now.

Even at the slow pace of currently envisioned interstellar travel, the Milky Way galaxy could be completely traversed in a few million years (see the rough calculation after this list).

Therefore, since many of the Sun-like stars are billions of years older than the Sun, the Earth should have already been visited by extraterrestrial civilizations, or at least their probes.

However, there is no convincing evidence that this has happened.
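The 'few million years' in the chain above is easy to sanity-check with order-of-magnitude arithmetic; the probe speed and colonisation-wave parameters below are illustrative assumptions, not measured quantities.

```python
# Back-of-envelope: how long to cross (or colonise) the Milky Way?
GALAXY_DIAMETER_LY = 100_000   # rough diameter of the Milky Way's disc
PROBE_SPEED_C = 0.05           # 5% of light speed -- an assumed, modest figure

crossing_time_years = GALAXY_DIAMETER_LY / PROBE_SPEED_C
print(f"Direct crossing: {crossing_time_years:,.0f} years")   # 2,000,000 years

# A colonisation wave that pauses ~1,000 years at each stop every ~50 ly
# is slower, but still sweeps the galaxy on a similar timescale.
HOP_LY, PAUSE_YEARS = 50, 1_000
hops = GALAXY_DIAMETER_LY / HOP_LY
wave_time_years = crossing_time_years + hops * PAUSE_YEARS
print(f"Colonisation wave: {wave_time_years:,.0f} years")     # 4,000,000 years
```

Either way, the result is a tiny fraction of the billions of years by which many Sun-like stars predate our own, which is exactly what gives the paradox its force.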

This therefore gives rise to the Fermi Paradox as a major problem that must be solved if we are to suggest that there are indeed complex alien life forms in the vast expanse of the universe, and that we are not its only intelligent inhabitants.

Modelling the Fermi Paradox using the Drake Equation

To attempt a numerical estimate of the likelihood that extraterrestrial life exists, we can use the Drake equation, which links closely to the Fermi Paradox. It estimates the number N of detectable civilizations in our galaxy as the product of seven factors:

N = R* × fp × ne × fl × fi × fc × L

where R* is the average rate of star formation in our galaxy, fp the fraction of stars with planets, ne the number of potentially habitable planets per star with planets, fl the fraction of those planets on which life actually arises, fi the fraction of those that develop intelligent life, fc the fraction of those that release detectable signals into space, and L the length of time such civilizations remain detectable.
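A short sketch of how the estimate behaves in practice; every input value below is an illustrative assumption, not a figure from any particular study.

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# The first three factors are reasonably constrained by astronomy;
# the last four (f_l, f_i, f_c, L) are guesses, so try two extremes.
optimistic = drake(r_star=3, f_p=1.0, n_e=0.2, f_l=1.0, f_i=1.0, f_c=0.2, lifetime=1e6)
pessimistic = drake(r_star=3, f_p=1.0, n_e=0.2, f_l=1e-6, f_i=1e-3, f_c=0.1, lifetime=1e3)

print(f"Optimistic:  ~{optimistic:,.0f} civilizations")   # ~120,000
print(f"Pessimistic: ~{pessimistic:.0e} civilizations")   # ~6e-08, i.e. none
```

Swinging only the poorly known factors takes the answer from a crowded galaxy to an empty one, which is the whole difficulty.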

It is indeed the last four variables that have made attempts to estimate the number of advanced civilizations in our galaxy so difficult, and so subject to greatly differing results.

Further speculation

The first scientific meeting on the Search for Extraterrestrial Intelligence (SETI) produced an optimistic estimate of roughly between 1,000 and 100,000,000 civilizations in the Milky Way galaxy alone. Conversely, the work of two other scientists, Frank Tipler and John D. Barrow, suggested that there could exist fewer than one such civilization on average (so only humankind itself). It is recognised that this phenomenal disparity is largely due to the former using optimistic values for the last four variables in Drake's equation and the latter using pessimistic ones. This demonstrates that humans are as yet unable to determine whether alien life exists in our galaxy, let alone estimate its quantity. Any attempt to estimate the probability of extraterrestrial beings existing is made far harder by our lack of information on the subject: we are currently aware only of our own existence. Limited to this knowledge, scientists can merely guess the likelihoods of events whose mechanisms are not yet understood. This includes the likelihood of abiogenesis (the natural process by which life arises from non-living matter) on an Earth-like planet, with current estimates varying over many hundreds of orders of magnitude, making accurate predictions near-impossible.

Taking this all into account, the analytical work of three further researchers, namely Anders Sandberg, Eric Drexler and Toby Ord, suggests "a substantial ex ante probability of there being no other intelligent life in our observable universe". The credibility of this result is rather strong, since it was agreed upon by members of both the scientific and philosophical communities: the latter approaches the problem from a theoretical perspective, whilst the former works through more practical analysis, so the two arrive at the same conclusion from different directions.

So are we truly alone?

On a more optimistic note, the arguments presented so far only provide an insight into the vast and complicated nature of the likelihood of complex life existing beyond Earth. Indeed, as a species, humankind has only begun its lengthy journey to discover the peculiar secrets of the universe, and what this all entails.

Despite our current inability to determine whether we are the unique and sole instance of life in the cold, bleak emptiness of the universe, it is more than likely that we will one day find an exact and logically sound solution to the Fermi Paradox, hence uncovering the 'truth' about aliens.

As a species blessed with the motivation to satiate our curiosity through discovery, and the intelligence necessary to realise such ambitions, one must believe that it is merely a matter of time before we unveil the reality of life in our galaxy, and perhaps in the universe as a whole. ~


THE IMMORTAL JELLYFISH

Eternal Rebirth: How the Immortal Jellyfish Defies Death

Introduction

Turritopsis dohrnii, also known as the immortal jellyfish, is the only known biologically immortal species. Through its unique ability to transdifferentiate, the jellyfish can revert to earlier stages of its life cycle when damaged or under stress, effectively turning back time and allowing it, in principle, to live forever. From its origin in the Mediterranean Sea, the immortal jellyfish has spread around the world and can now be found in every ocean, showcasing the species' remarkable ability to survive in differing environments. Fundamentally, these regenerative capabilities, paired with the seemingly magical mechanism of reverse aging, have raised great interest within the scientific community and pose questions about the capabilities they might bring to the human race.

Discovery

The species T. dohrnii was first observed by scientists in the late 1800s. However, it was not until 1988, around 100 years later, that its 'eternal life' was discovered. The discovery was made accidentally by the marine-biology students Christian Sommer and Giorgio Bavestrello, who were conducting research on Turritopsis polyps, the polyp being an early stage in the jellyfish's life when it is attached to a hard surface. After leaving these in petri dishes, Sommer expected the jellyfish to mature and produce larvae; instead, days later, the petri dish contained many newly settled polyps. After further research and observation, the students realised that putting T. dohrnii under stress would lead to the jellyfish turning back into polyps, aging in reverse. This process of reverse aging led to the species being nicknamed the "immortal jellyfish".

Classification and Physical Characteristics

Turritopsis dohrnii is classed among the roughly 9,000 living species of the phylum Cnidaria. Going down the levels of classification places the jellyfish in the family Oceaniidae and the genus Turritopsis. Interestingly, none of the four other species within the genus Turritopsis, nor the 50 or so species within the family Oceaniidae, share the remarkable regenerative abilities of T. dohrnii, highlighting its unique place in the animal kingdom. Turritopsis dohrnii is much smaller than the average jellyfish, which may be key to its anti-aging mechanism. When fully grown, the jellyfish spans only up to 4.5 mm (about 0.18 in) across, roughly the size of your pinky finger's nail. Its small size means less energy is required for cellular regeneration and reprogramming, which is key to the rejuvenation of Turritopsis dohrnii. Furthermore, it can be easily identified by the bright red stomach visible within its transparent body. The jellyfish also has around 90 tentacles, which it uses to capture plankton for food and as a defence against predation.

Life Cycle and Reproduction

The life cycle of all species of jellyfish is fundamentally the same. It begins with an existing adult jellyfish (medusa). The medusa releases eggs and sperm into the water, where the two kinds of cell join via sexual reproduction to form a fertilised egg. This egg slowly grows into a small larva known as a planula. As the planula swims around, it eventually finds a solid surface, where it settles, develops a digestive system and begins to feed itself. Places where polyps can develop include the seabed or even large moving animals. It is interesting to note that jellyfish can also reproduce asexually through the polyp budding: the polyp rapidly forms genetically identical clones of itself. As the polyp slowly forms muscles and nerves, a small part of it breaks off to form an ephyra, an independent organism. As the ephyra grows and feeds, it eventually becomes a fully grown adult jellyfish, known as a medusa.

Biological Immortality

Turritopsis dohrnii is the only species known to have the innate ability to live, potentially, forever. Its existence predates the extinction of the dinosaurs, meaning that, biologically speaking, it is possible that a single jellyfish has lived for around 66 million years. However, scientists deem this highly unlikely, as the jellyfish is easily attacked and killed by predators and is very susceptible to disease.

The jellyfish's ability to 'live forever' stems from the cellular mechanism known as transdifferentiation, in which a specialised adult cell transforms into an entirely different type of specialised cell. When the medusa of a Turritopsis dohrnii jellyfish is damaged, stressed or dying of starvation, the specialised cells of the medusa can transdifferentiate back into the cells found in a polyp. Around 24 hours later, a newly developed polyp settles onto the sea floor. Effectively, the Turritopsis dohrnii can cycle between these two stages of its life cycle indefinitely, providing itself with biological immortality as it is continuously reborn. This form of immortality rests on the number of DNA-repairing and DNA-protecting genes the jellyfish carries; compared with normal jellyfish, the Turritopsis dohrnii has over double the number of these genes, allowing it to produce more restorative proteins. Furthermore, the jellyfish has unique genes associated with cell replication and stem cell formation. Specific genes under investigation include those of the PI3K-AKT pathway, which resemble the Yamanaka factors in their cell-regenerating abilities. The proteins involved are also very significant, with research on FOXO transcription factors showing their importance in the cell cycle, DNA repair and stress resistance. Scientists still have not fully understood the mechanisms this unique creature exploits, yet from these promising developments it seems the implications these tiny creatures bring to science could be vast.
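One way to picture the cycle is as a tiny state machine, with a single stress-triggered edge looping the medusa back to the polyp stage. The Python sketch below is purely an illustrative toy, not a biological model.

```python
# Toy state machine of the T. dohrnii life cycle. The "stress" edge is
# what makes it biologically immortal: a damaged medusa reverts to a polyp.
FORWARD = {"egg": "planula", "planula": "polyp",
           "polyp": "ephyra", "ephyra": "medusa"}

def next_stage(stage, stressed=False):
    if stage == "medusa":
        # Transdifferentiation: specialised medusa cells become polyp cells.
        return "polyp" if stressed else "medusa"
    return FORWARD[stage]

stage = "egg"
for stressed in (False, False, False, False, True, False):
    stage = next_stage(stage, stressed)
    print(stage)
# planula, polyp, ephyra, medusa, polyp (stress reversion), ephyra
```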

Implications for Science

The fascinating discovery of the Turritopsis Dohrnii has opened countless scientific corridors, including the one to biological immortality and

reverse aging. The findings from this research could potentially help humans, though not in exactly the same way, owing to the inherent differences between jellyfish and humans. Rather than granting humans immortality, the research aims to cure diseases associated with aging, including cardiovascular diseases, neurodegenerative diseases and even cancer. Although this research seems promising, many scientists still question our ability to use the mechanism of transdifferentiation for ourselves. When the jellyfish transdifferentiates, it is not clear whether the organism is still the same individual: while all the genes remain the same, if all the cells transdifferentiate, the organism's structure and molecular composition change in their entirety, potentially suggesting a completely different individual. Other scientific opportunities include regenerative medicine through the use of stem cells. The immortal jellyfish's ability to produce all the cell types of both a polyp and a medusa suggests potential for regenerating specific specialised cell types. If scientists can understand the cellular reprogramming techniques the Turritopsis dohrnii undergoes, they might have the chance to further develop stem cell research and help repair damaged cells and tissues within our bodies. Through such breakthroughs, this research could help treat patients with diseases such as diabetes, Parkinson's and even certain cancers. ~

(Above) Life cycle of an immortal jellyfish
What would happen if you made a periodic table out of 15 cm³ bricks, where each brick was made of the corresponding element?

Words - Harry Fisher

Of the 118 total elements, roughly 30 can be bought in pure form from hardware stores or online, from chemical suppliers such as the giant Sigma-Aldrich (owned by the Merck Group). Another few dozen can be scavenged by taking things apart, such as the americium in your smoke detector. All in all, it's possible to get samples of, give or take, 80 elements, though doing so would be a risk to your health, wallet and arrest record. The rest are too radioactive or short-lived to obtain more than a few atoms. But what if you did?

The periodic table has 7 periods. The first is boring: the cube of hydrogen would rise upwards and disperse, as would the helium. Period two is more interesting. The lithium would immediately tarnish, and beryllium is toxic, so you would have to be careful to limit the amount of beryllium dust becoming airborne. Boron and carbon would sit there doing nothing. The gases, oxygen, nitrogen and neon, would drift around, slowly dispersing. Moving back to element nine, fluorine, we already have problems. The pale-yellow gas would spread across the ground, and it is the most reactive and corrosive element in the periodic table. All the substances exposed to it (apart from the neon) would react and catch fire. If the fluorine encountered any moisture, it would form hydrofluoric acid. If you breathed in any fluorine, it would seriously damage or destroy your nose, lungs, mouth, eyes and, eventually, the rest of you. A gas mask wouldn't solve your problems either, as fluorine eats through many of the materials used to make them!

For the third period, the big problem would be phosphorus. While red phosphorus is reasonably safe to handle, white phosphorus spontaneously ignites with the oxygen in the air, burning with hot, hard-to-extinguish flames, all the while releasing poisonous gases such as phosphorus pentoxide. Sulphur wouldn't be a problem under usual conditions, but it is currently sandwiched between burning phosphorus on the left and fluorine and chlorine on the right, resulting in it catching fire and creating some horrific smells.


The inert argon is denser than air, so it would spread out and cover the ground. The ongoing fire would create all kinds of terrifying compounds with names like sulphur hexafluoride. If the experiment were done inside, you would be choked by toxic smoke and your building might burn down.

Period 4 gets worse, with friendly elements such as arsenic being introduced. The burning phosphorus and sulphur, as well as a few others, are joined by potassium, which tends to spontaneously combust and could ignite the arsenic, releasing large amounts of arsenic trioxide. The smell would be unbearable. Selenium and bromine sit next to each other and would react vigorously, causing the selenium to burn. The smell of burning selenium was described by the chemist Derek Lowe as 'making sulphur smell like Chanel'. The burning sulphur would then meet the bromine. At this point, the range of toxic compounds produced by the fire would be incalculably large. However, if the experiment were done from far enough away, you might survive.

As the fifth period is constructed, we encounter our first radioactive brick. Technetium is the lowest-numbered element to have no stable isotopes. The dose wouldn't be enough to kill you, provided you didn't touch it, breathe in the dust or wear it as a hat. Overall, the fifth period would be a lot like the fourth.

No matter how careful you were, the sixth period would kill you. It contains several radioactive elements, including promethium, polonium, astatine and radon. We don't know what astatine looks like, as its half-life is so short. Our cube would briefly contain more astatine than has ever been synthesised. I say briefly because it would immediately turn into a column of superheated gas. The heat alone would give anyone nearby third-degree burns, and the building would be demolished. Dust and debris containing astatine, polonium and other radioactive products would rain down from the cloud, rendering the downwind neighbourhood completely uninhabitable.

Building period 7 wouldn't do anyone any good. Most of its elements are so unstable that they can only be created in particle accelerators and don't exist for more than a few minutes. For example, if you had 100,000 atoms of livermorium (element 116), after a second you would have only one left. The period 7 bricks, nearly all of them transuranium elements, would decay radioactively, releasing huge amounts of energy almost instantaneously. The result would be like a nuclear bomb, and the flood of energy would turn you and the rest of the periodic table to plasma. A mushroom cloud would rise above your lab, reaching the stratosphere. The fallout would be horrific: debris would spread around the world, entire regions would be devastated, and the clean-up would go on for years. While collecting things and neatly organising them is fun, when it comes to elements, you do not want to collect them all. ~

THE SOLAR POWERED SEA SLUG

The Elysia chlorotica, commonly known as the eastern emerald elysia or by the nickname 'the solar-powered sea slug', is a species of sacoglossan sea slug found along the east coast of the United States. This exceptional creature has earned its nickname through its ability to photosynthesize, much as plants do. It achieves this by the process of kleptoplasty, in which the sea slug sequesters chloroplasts from the algae it feeds on. Once consumed, these chloroplasts remain functional for a few months within the slug's cells, allowing it to make use of sunlight to produce energy, hence 'solar-powered'. This adaptation not only provides the sea slug with an additional source of energy, allowing it to survive for up to 12 months in the absence of a food supply, but also gives it a coruscating emerald-green colour, blending in faultlessly with the algae it feeds on. The Elysia chlorotica's ability to incorporate the chloroplasts of the algae it feeds on is a striking example of the endless complexities of evolution and of symbiosis in the animal kingdom.

Distribution and Life Cycle

As mentioned previously, the Elysia chlorotica inhabits the salt marshes of North America's east coast, from Florida to Nova Scotia (Canada). This wide geographic range heavily influences the species' growth, with seasonal factors such as temperature and humidity dictating the timing of reproduction and photosynthesis and playing a crucial role in its life cycle. Like all sacoglossans, the Elysia chlorotica is a hermaphrodite, able to produce both female and male gametes. However, despite having the potential for self-fertilisation, the Elysia chlorotica chooses to mate with others of the same species, promoting the genetic diversity that is vital for the growth and variation of the population. Their lifespan can vary, but most live from several months to a year. The adults lay eggs in late spring; the larvae hatch from these eggs after 7-8 days and feed on the single-celled algae found in plankton. After metamorphosis, which is induced by feeding on the algae, the juveniles take on their characteristic green colour, due to the absorption of the algal chloroplasts, and become adults. This absorption process is known as kleptoplasty, allowing the sea slug to photosynthesize and produce its own energy.

Kleptoplasty

Kleptoplasty in gastropods allows sea slugs to capture intact, functional chloroplasts from algae, retaining them within specialised cells lining the mollusc's digestive diverticula. Elysia chlorotica acquires chloroplasts by consuming Vaucheria litorea, storing the chloroplasts in the cells that line its gut. Juvenile sea slugs establish the kleptoplastic endosymbiosis when feeding on algal cells, sucking out the cell contents and discarding everything except the chloroplasts. The chloroplasts then work as normal until they begin to degrade, for lack of the nutrients the alga would have provided them. If the sea slug found a way to produce its own nutrients to maintain the chloroplasts retained within these specialised cells, it would be able to survive on photosynthesis alone for a long period of time, which brings us to the potential existence of horizontal gene transfer.

The Link to Other 'Solar-powered' Sea Slugs

The Elysia chlorotica is not one of a kind, as there are other gastropods that utilise kleptoplasty, such as Costasiella kuroshimae or Elysia timida. Despite minor differences, these species share the same characteristics, including the ability to undergo kleptoplasty by feeding on algae, and they reside in similar habitats and environmental conditions. However, what makes the Elysia chlorotica stand out is its potential ability not only to take in chloroplasts for a short period and photosynthesize via kleptoplasty, but also to assimilate genes of the alga (Vaucheria litorea) into its own genetic structure through horizontal gene transfer. Researchers found a vital algal gene, psbO, within the sea slug which is identical to the one found in V. litorea. The gene was also present in the sex cells of the Elysia chlorotica. This implies that if the gene is functional within the sea slug, it not only confirms the slug is capable of horizontal gene transfer but could also explain why it retains chloroplasts for longer than other kleptoplastic sea slugs. This could be significant for other scientific fields too, but further research would be needed to prove it.

(Pictured) - Life Cycle of a Solar Powered Sea Slug

The Future of the Species and the Importance of Further Research

Researching the Elysia chlorotica could have far-reaching applications, for instance in fields such as immunology and gene therapy. However, over the course of the last 10 years, many researchers have either lost interest or moved on to other studies. The few scientists who remain committed to studying the many mysteries behind these creatures have expressed concern at the scarcity of many kleptoplastic organisms, including Elysia chlorotica, which are now found less and less often in their natural habitats.

National Geographic reports that a dedicated expert known as Krug, who studies Alderia, a related genus that consumes the same algae and lives in the same salt marshes as Elysia chlorotica, has suggested that 'the habitat could be suffering or growing increasingly ephemeral', and that nobody has conducted population studies on the species. This apparent decrease in population could be due to rising sea levels and other effects of global warming, which would most certainly affect a salt marsh.

Furthermore, they are difficult to raise in a controlled laboratory because they will only eat Vaucheria litorea, an alga that is difficult to harvest and that grows more slowly than the young sea slugs can consume it.

This highlights the vital need for more research into these scientific marvels, because too few people are dedicated to unveiling the secrets surrounding this species, such as the theory of its horizontal gene transfer, which only a few have tried to prove or disprove and which could be the start of a breakthrough, especially in the field of gene therapy.

In conclusion, with Elysia chlorotica facing population decline and researchers facing real obstacles in studying it, dedicating more attention to these creatures could uncover invaluable insights that would not only deepen our understanding of evolutionary biology but also lead to significant advancements in medicine and environmental science. Elysia chlorotica holds many untold secrets, and if more time and resources were dedicated to it, the scientific community could unlock the full potential of the 'solar-powered' sea slug. ~

PUSHING THE BOUNDARIES OF THE PERIODIC TABLE

Livermorium, tennessine and oganesson are three elements that you will never encounter. You probably did not even realise that these are the last three known elements of the periodic table. The reason you will never encounter them is that they were artificially created in a laboratory and existed on this planet, and maybe even in the universe, for only a split second. The first transuranium element was created in 1940, and since then 25 others have been discovered. This article focuses on the method currently used, but for context: elements 93 to 95 were discovered using a process called neutron capture; for elements 96 to 98, as well as 101, a cyclotron accelerated alpha particles, which were then made to bombard the previously heaviest known atoms; and elements 99 and 100 were discovered in observations of the detonation of the first hydrogen bomb. These methods no longer work because of the short half-lives involved: the target would not last long enough for the alpha particle to collide.

But how are elements created now? Currently there are four main centres leading the world in element discovery: the GSI Helmholtz Centre for Heavy Ion Research in Germany, the RIKEN Nishina Center for Accelerator-Based Science in Japan, the Joint Institute for Nuclear Research (JINR) in Russia and the Lawrence Berkeley National Laboratory in the USA. Each conducts its research slightly differently; for example, RIKEN have used a process called cold fusion, while at GSI they use a process called hot fusion. But for the most part the method they carry out is the same. The process has five steps: ionisation, ion acceleration, collision and fusion, isolation, and detection.

IONISATION

In the ionisation stage, the ions required for the experiment are created: the sample of the isotope that is to be accelerated and fired at the target is bombarded with high-energy electrons. The most common beam isotope is Ca-48, because its extra neutrons allow the resulting superheavy element to get closer to the "island of stability". The "island of stability" is the idea that there is a "magic number" of neutrons and protons that would allow for a comparatively stable superheavy element; the superheavy elements made so far have half-lives of milliseconds. The high-energy electrons are created by applying a large amount of energy to a filament wire, causing the wire to become very hot and emit them. The electrons need high energies because it is often necessary to create ions with +2 or even +3 charges, so that the ions interact more strongly with electric fields and can therefore be accelerated to higher velocities. These electrons bombard the sample; whenever one collides with an electron in an atom of the sample, it causes that electron to be "knocked out", and with the electron removed the atom becomes a positive ion. A domino effect follows as more electrons are fired into the sample, and the result is a mixture of positive ions and free electrons called a plasma.

ACCELERATION

To accelerate the ions, particle accelerators are used, either linear particle accelerators (LINACs) or synchrotrons, to increase the speed and energy of the ions until they are sufficient to overcome the Coulomb barrier. Most centres use a mixture of both; GSI, for example, use a LINAC for the initial acceleration and then a synchrotron to bring the ions to around 10% of the speed of light, as a synchrotron is more efficient at accelerating ions to higher energies.
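For a sense of the energies involved, the Coulomb barrier between beam and target nuclei can be estimated with the standard textbook approximation V = Z1·Z2·e²/(4πε₀r), taking the touching distance r = r0·(A1^(1/3) + A2^(1/3)). The sketch below, for a Ca-48 beam on a curium-248 target, uses rough constants and is illustrative only, not any laboratory's actual beam-energy calculation.

```python
# Rough Coulomb-barrier estimate for fusing Ca-48 with Cm-248.
# V = Z1*Z2*e^2 / (4*pi*eps0*r), with e^2/(4*pi*eps0) = 1.44 MeV*fm.
E2_MEV_FM = 1.44          # Coulomb constant times e^2, in MeV*fm
R0_FM = 1.2               # nuclear radius parameter, in femtometres

def coulomb_barrier(z1, a1, z2, a2):
    r = R0_FM * (a1 ** (1 / 3) + a2 ** (1 / 3))   # touching distance, fm
    return E2_MEV_FM * z1 * z2 / r                # barrier height, MeV

# Ca-48 (Z=20) beam on Cm-248 (Z=96) target, a livermorium-producing reaction.
print(f"{coulomb_barrier(20, 48, 96, 248):.0f} MeV")   # ~230 MeV
```

Hundreds of MeV per ion is why tens of metres of accelerator, and speeds around a tenth of light speed, are needed at all.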

LINEAR PARTICLE ACCELERATORS

The ions pass through a sequence of alternating electric fields. As they pass through more of these fields they gain more energy, and therefore their velocity increases until, used in conjunction with the synchrotrons, they reach speeds of around 10% of the speed of light. These machines can span over 100 metres in length, to ensure the velocities and energies are high enough to overcome the Coulomb barrier and to make further acceleration in the synchrotron effective.

SYNCHROTRONS

Unlike a linear particle accelerator, a synchrotron is cyclic, yet the ions are accelerated in the same way as in the LINAC, by an alternating electric field, with the addition of a constant magnetic field to guide the ions. As the momentum of the ions increases with their velocity, the strength of the magnetic field must change to keep the ions on the correct path. Magnetic fields are also used to steer and focus the ion beam, which is especially useful in synchrotrons, where the ions are accelerated around a circular path. The magnets used are electromagnets; this is advantageous because their strength can be controlled, and so adjusted as the momentum of the ions increases, which is important in keeping the ion beam on the right path. The magnet most often used is a quadrupole: a magnet with four poles, two north and two south, arranged opposite each other. This allows the beam to be focused both horizontally and vertically. For greater focusing and precision, sextupoles and octupoles can be used.

COLLISION AND FUSION

The target material varies with what you are trying to create, but it is often a heavy element such as curium or plutonium. Once the ions are travelling fast enough, they are extracted from the synchrotron and fired at the target. Targets are thin enough for ions to pass through, which ensures that enough interaction occurs with minimal energy loss. The targets also rotate: when the beam passes through the target it generates heat, which could melt it, and rotating the target distributes the heat evenly so that it does not. Since the beam is pulsed rather than continuous, the rotation is timed with each pulse, so each sample does not need to be too large. This reduces costs, since transuranium elements carry a very high price tag. If a collision is successful, the two nuclei fuse together, forming a new superheavy element. Because the chance of a successful collision is very low, only a few atoms of the superheavy element are created.

ISOLATION

When a superheavy element is created, it is mixed in with the beam ions that did not collide with the target and simply passed through the film. It therefore needs to be separated and isolated from all the ions that have not undergone fusion. A fragment separator is used to focus the beam again so that the superheavy-element ions can be picked out. The beam is focused using a series of magnets, often sextupoles or octupoles, which guide and focus the beam by bending the trajectories of the ions. Ions carrying less momentum per unit of charge are bent more sharply, while ions carrying more momentum per unit of charge bend less, causing the leftover beam ions and the heavier fusion products to separate. Since the composition of the original beam is known, the beam ions can be identified and filtered out along predetermined paths. The superheavy elements are then all that is left; they are implanted into a detection plate, often made from silicon. This plate measures their decay chains and half-lives, which allows the element to be identified.
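The underlying physics is that in a magnetic field an ion follows an arc of radius r = p/(qB), so ions differing in momentum per unit charge land on different paths. In the sketch below, the field strength, speeds and charge states are rough, illustrative assumptions, not the parameters of any real separator.

```python
# r = p/(qB): the bending radius of an ion in a magnetic field depends on
# its momentum-to-charge ratio, which is what the fragment separator exploits.
E = 1.602e-19      # elementary charge, C
AMU = 1.661e-27    # atomic mass unit, kg
C_LIGHT = 2.998e8  # speed of light, m/s

def bend_radius(mass_amu, charge_e, speed_frac_c, b_tesla=1.0):
    momentum = mass_amu * AMU * speed_frac_c * C_LIGHT  # non-relativistic p
    return momentum / (charge_e * E * b_tesla)

# Leftover Ca-48 beam ions vs a much heavier, slower fusion product. The
# charge states below are rough assumptions; the product's lower charge and
# near-equal momentum give it a visibly different arc.
print(f"beam ion: r = {bend_radius(48, 17, 0.10):.2f} m")    # ~0.88 m
print(f"product:  r = {bend_radius(296, 6, 0.016):.2f} m")   # ~2.45 m
```

Because the two arcs differ, the abundant beam ions can be steered into a dump along a predetermined path while the rare fusion products carry on towards the detection plate.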

(Above) The Fragment Separator FRS at GSI. (Below) One of the targets from RIKEN.

A new element is confirmed by its decay chain, which is predictable: looking for a specific element means looking to detect a specific decay chain. The half-life of each element in the chain can be measured and compared against known values, thanks to the characteristic half-lives and energies of the emitted radiation. Specialised sensors record the energy and half-life of each step in the decay chain. Detecting the chain gives concrete evidence that an atom of the new element has formed, since the chain can be predicted theoretically, allowing you to project back to its start and so identify the element created; being able to measure the half-life of the new element provides further evidence. Naturally, the decay chain cannot be detected just once: it needs to be detected multiple times in order to validate the claim and to ensure the experiment is reproducible.
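The identification logic can be sketched as matching each measured decay step against a predicted chain within tolerances. All the chain values and tolerances below are invented for illustration, not measured data.

```python
# Toy decay-chain matching: a candidate event is accepted only if every
# measured step matches the predicted (alpha energy, half-life) sequence.
PREDICTED_CHAIN = [   # (alpha energy in MeV, half-life in s) -- invented values
    (10.7, 0.06),     # step 1: new element decays
    (10.0, 0.5),      # step 2: daughter decays
    (9.5, 4.0),       # step 3: granddaughter decays
]

def matches(measured, predicted, energy_tol=0.2, hl_factor=3.0):
    e_m, t_m = measured
    e_p, t_p = predicted
    # Energies must agree closely; decay times only within a broad factor,
    # since individual decays scatter exponentially around the half-life.
    return abs(e_m - e_p) <= energy_tol and t_p / hl_factor <= t_m <= t_p * hl_factor

def is_candidate(event):
    return len(event) == len(PREDICTED_CHAIN) and all(
        matches(m, p) for m, p in zip(event, PREDICTED_CHAIN))

event = [(10.65, 0.04), (9.95, 0.9), (9.6, 2.5)]
print(is_candidate(event))  # True -> one more observation of the chain
```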

Now, it is all well and good being able to create these new elements, but a question at the forefront of your mind is probably: why? Why go through all this effort to have a single atom of an element last for less than a second? There is the argument that it is to understand the universe we live in and to expand scientific knowledge. For example, some transuranium elements, such as berkelium and curium, have been detected in Przybylski's Star. Furthermore, other previously undiscovered elements, such as technetium (element 43), the first human-made element, now have everyday uses in nuclear medicine. Superheavy elements may find practical uses in the future too, and if not, future elements discovered beyond them may. Currently, no new elements have been added to the periodic table since 2016, so what issues are we facing? For one, the process takes a great deal of time: it can often take years of experimentation and repeated attempts to successfully detect a single atom of a new element. Another issue is the scarcity of target materials (such as californium-249 and berkelium-249), which drives up the cost; berkelium, for example, sells for a mind-blowing $27 million per gram. This limits the scale of the experiments and therefore increases the time needed. Paired with the lack of immediate practical applications for the elements being discovered, this leaves little urgency for rapid discovery. There may be huge uncertainty around the creation of new elements, but one thing that is known for sure is that we will discover element 119 and beyond. The only question is when. ~

Have you ever watched something break, crack or splinter and wished it could magically glue itself back together? While nothing quite like that exists today, the answer of the future may be self-healing polymers: smart materials that can heal themselves back to their original properties following damage. The ability of a material to return to (or near to) its original state is an immensely desirable characteristic for a multitude of sectors, ranging from general household appliances to heavier machinery. One example would be reducing how often parts of a production process must be replaced, such as stopping the slow deterioration of phone batteries, allowing lower production costs for companies and cheaper prices for consumers in the long run. This is just one of the numerous benefits that self-healing polymers could provide to all aspects of life.

SELF-HEALING POLYMERS

Words - Archie Bradbury

You might be wondering, how are self-healing polymers made?

Self-healing properties are often the result of different interactions, both covalent and non-covalent, with all self-healing polymers requiring some form of external stimulus to prompt the healing. One way in which such polymers are created is the Diels-Alder reaction. This reaction and its associated mechanism are reversible, which helps provide the self-healing qualities: a by-product is a thermoreversible bond, which allows the polymer's properties to change when subjected to heat. The reaction itself is an addition reaction in which a diene (an organic compound containing two carbon-carbon double bonds) reacts with a dienophile (an electron-deficient alkene or alkyne that reacts with an electron-rich diene), creating a new cyclic compound that may have a host of properties distinct from the original molecules used to create it, including the thermoreversibility that allows the polymer to return to its original state.

Whilst Diels-Alder reactions are one way to create self-healing polymers, there are a multitude of other ways to develop self-healing bonds with other properties. One such idea is the creation of polymers with densely packed hydrogen bonds in repeating structures. This idea exploits the strength of hydrogen bonds, creating a strong polymer, while the repeating aspect of the structure means that if it is broken, placing the pieces together allows them to re-bond and hence restores the 'healing' property. The result is a structure that takes a lot of energy to break down, yet can repair itself when broken.

Now we know how self-healing polymers can be created, you might be asking how they can be used. Their current uses in today's society range from consumer-available products to medical equipment. One example is the rise of self-healing phone screens

Diels–Alder reaction

and screen protectors. This has been achieved through the direct incorporation of self-healing polymers within the chemical makeup of the screen protector itself. It is particularly effective on phone screens, which pick up minor scratches in daily use that can easily be repaired through a change in particle configuration within the structure. A screen made from self-healing polymers would allow small gouges and scratches to disappear, both visually and structurally, and so increase the lifespan of the phone. Furthermore, lessened screen damage may mean fewer phone repairs or upgrades, lowering the wastage of materials and hence pollution of the environment.

Another use of self-healing polymers is in the growing field of epidermal electronics - the science of creating electronic circuits for the skin - specifically in monitors and prosthetics; one primary way in which this may be achieved is through self-healing hydrogels (pictured right). A hydrogel is a large network of hydrophilic polymers (polymers that attract water due to their strong polarity) linked together through either covalent bonds or physical entanglement, giving the hydrogel distinctive semi-solid properties that make it similar to human tissue. As a result of this similarity to human tissue, and of properties such as responsiveness to environmental conditions, many scientists point to possible applications in drug-delivery systems, sensors, artificial organs and much more. These possibilities make self-healing hydrogels an essential tool for future medicine, with the potential to help in areas from reducing transplant wait times to enabling direct drug delivery to diseased areas of the body.

Along with benefits to both consumer goods and medical equipment, self-healing polymers also have immense implications for construction. The idea of self-healing materials in construction can be dated as far back as the Romans and their 'self-healing concrete'. This self-healing property was

achieved through a reaction between calcium oxide (colloquially known as 'quicklime') and seawater, creating a solid rock containing smaller pockets of unreacted calcium oxide. Because calcium oxide reacts with water, such as rainwater, these pockets can react and form new bonds, thereby healing the material. Although Roman 'self-healing concrete' is not directly related to self-healing polymers, it was one of the earliest renditions of the idea of self-healing materials, sparking curiosity in the area and arguably paving the way for the discovery of self-healing polymers. Though the Romans' concrete was an incredible feat of construction, there are many more modern uses of self-healing materials in the field, such as in the very roads we walk upon. One example is self-healing asphalt, which works by embedding hollow steel fibres in the asphalt that can be heated to high temperatures by rapidly changing magnetic fields. An alternating-current coil, carried by vehicles driving over the asphalt, induces changing magnetic fields that heat the steel fibres and hence briefly melt

the asphalt, allowing it to repair and fill any cracks. This method not only reduces repair and maintenance costs for roads, with the added benefit of fewer delays, but also reduces the overall wastage of asphalt thanks to its increased lifespan.

No material is solely beneficial, self-healing polymers included, and whilst their potential benefits are numerous, some drawbacks reduce their applicability today and make them a resource of the future rather than the present. One such drawback is the conditions a polymer requires to self-heal, such as temperature or pressure thresholds that must be met before healing occurs. For example, self-healing polymers produced by certain Diels-Alder reactions require high temperatures to trigger thermoreversibility, making them more difficult, or more expensive, to use in everyday products. Some self-healing polymers are also affected by ultraviolet radiation, which can reduce or remove their self-healing properties; photopolymers, for instance, are directly affected by ultraviolet light, which can change the polymer's properties. Another potential drawback is the repeatability of self-healing, with some polymers able to heal only once, or only partially rather than back to their original state; this poses a challenge wherever a polymer is required to self-heal multiple times. Along with this, some companies would be disadvantaged by implementing self-healing polymers in their products: if a product does not naturally break down or deteriorate, there will be fewer repeat customers, reducing potential profits. Television sets, for example, are sometimes said to include easily broken parts precisely so that customers must repurchase. Lower profits and fewer companies could, in turn, stagnate innovation within the market, harming both consumers and producers. Finally, one of the largest current problems with self-healing polymers is the trade-off against the mechanical properties of the polymer itself: in making a polymer self-healing, other traits of the original polymer, such as density or hardness, may be lost, reducing its overall applicability.

So, can self-healing polymers address all of these problems? For the time being, the answer may be no; however, we must recognise the advancements that have already been made and the progress that lies ahead. These self-healing materials may not yet be the ultimate solution to every issue we face. Nonetheless, as research continues and new advancements are made, one can foresee a future in which the application of self-healing polymers continues to increase and has an impact on every measure of human life. ~

Ryan Chiu – Alcohol Related Neurological Diseases

Healthline. Alcohol-Related Neurologic Disease: Types, Signs, Treatment. Available at: https://www.healthline.com/health/alcohol-related-neurologic-disease#neurologic-effects (Accessed 12 February 2025).

Juntunen, J. (1984). Alcohol, Work and the Nervous System. Scandinavian Journal of Work, Environment & Health, 10(6), pp. 461–465. Available at: http://www.jstor.org/stable/40965115 (Accessed 12 February 2025).

Cleveland Clinic (2022). Wernicke-Korsakoff Syndrome: Causes, Symptoms & Treatment. Available at: https://my.clevelandclinic.org/health/diseases/22687-wernickekorsakoff-syndrome (Accessed 12 February 2025).

National Institute on Alcohol Abuse and Alcoholism (2022). Wernicke-Korsakoff Syndrome. Available at: https://www.niaaa.nih.gov/publications/brochures-and-fact-sheets/wernicke-korsakoff-syndrome (Accessed 12 February 2025).

Muengtaweepongsa, S. (2023). Marchiafava-Bignami Disease: Practice Essentials, Background, Etiology and Pathophysiology. Medscape. Available at: https://emedicine.medscape.com/article/1146086-overview (Accessed 12 February 2025).

Luong, D. and Sambhaji, C. (2009). Marchiafava-Bignami Disease. Radiopaedia. Available at: https://doi.org/10.53347/rid-7232 (Accessed 12 February 2025).

Mayo Clinic (2024). Fetal Alcohol Syndrome – Symptoms and Causes. Available at: https://www.mayoclinic.org/diseases-conditions/fetal-alcohol-syndrome/symptoms-causes/syc-20352901 (Accessed 12 February 2025).

American Addiction Centers. Neurological Effects of Alcohol: Impact of Alcohol on the Brain. Available at: https://americanaddictioncenters.org/alcohol/risks-effects-dangers/neurological (Accessed 12 February 2025).

Ridgefield Recovery Center (2023). Alcoholic Myopathy: Causes and Treatment. Available at: https://www.ridgefieldrecovery.com/drugs/alcohol/alcoholic-myopathy/ (Accessed 12 February 2025).

Boskey, E. (2018). Alcohol Withdrawal Syndrome. Healthline. Available at: https://www.healthline.com/health/alcoholism/withdrawal (Accessed 12 February 2025).

World Health Organization (2018). Urban Health Initiative. Available at: https://www.who.int/initiatives/SAFER (Accessed 12 February 2025).

Forrest Zhu – Introduction to Projective Geometry

Courant, R. and Robbins, H. (1996). What is Mathematics? An Elementary Approach to Ideas and Methods. 2nd ed. Oxford: Oxford University Press.

Google Redirect (2025). Perspective Drawing. Available at: https://www.google.com/url?sa=i&url=https%3A%2F%2Fwww.pinterest.com%2Fbeeayetay%2Fperspectivedrawing%2F (Accessed 9 January 2025).

Harry Fisher – Nerve Agents and the Periodic Table

WUWM. Periodic Table of the Elements Turns 150. Available at: https://www.wuwm.com/podcast/lake-effect-segments/periodic-table-of-the-elements-turns-150 (Accessed 12 February 2025).

American Scientist. Nerve Agents: What Are They and How Do They Work?. Available at: https://www.americanscientist.org/article/nerve-agents-what-are-they-and-how-do-they-work (Accessed 12 February 2025).

Prachod Netrakar – Superman Memory Crystal

Kazansky, P. (2016). Nanostructures in Glass Will Store Data for Billions of Years. SPIE.org. Available at: https://spie.org/news/6365-eternal-5d-data-storage-via-ultrafast-laser-writing-inglass (Accessed 12 February 2025).

SPhotonix. 5D Memory Crystal – Launch Video. Available at: https://www.5dmemorycrystal.com/#belowfold (Accessed 12 February 2025).

Dark Web Deacon (2021). 5D Optical Storage – Superman Memory Crystal. YouTube. Available at: https://www.youtube.com/watch?v=q_obcZ5yfT8 (Accessed 12 February 2025).

Arch Mission Foundation. Superman Memory. Available at: https://www.archmission.org/5d-optical-memory (Accessed 12 February 2025).

Youngblood, T. (2016). 5D Data Storage: How Does it Work and When Can We Use it?. All About Circuits. Available at: https://www.allaboutcircuits.com/news/5d-data-storage-how-does-it-work-and-when-canwe-use-it/ (Accessed 12 February 2025).

Zhang, J., Cerkauskaite, A. and Drevinskas, R. (2016). Eternal 5D Data Storage by Ultrafast Laser Writing in Glass. ResearchGate. Available at: https://www.researchgate.net/publication/312605376_Eternal_5D_data_storage_by_ultrafast_laser_writing_in_glass (Accessed 12 February 2025).

University of Southampton (2016). Eternal 5D Data Storage Could Record the History of Humankind. Available at: https://www.southampton.ac.uk/news/2016/02/5d-data-storage-update.page?form=MG0AV3 (Accessed 12 February 2025).

YouTube (2016). Eternal 5D Data Storage. Available at: https://www.youtube.com/watch?v=ItNT9BGDB4o (Accessed 12 February 2025).

YouTube (2024). SPhotonix Launch Video. Available at: https://www.youtube.com/watch?v=okaRTU77FW8 (Accessed 12 February 2025).

Forrest Zhu – Introduction to Projective Geometry Courant, R. and Robbins, H. (1996). What is Mathematics? An Elementary Approach to Ideas and Methods. 2nd ed. Oxford: Oxford University Press. Google Redirect (2025). Perspective Drawing. Available at: https://www.google.com/url?sa=i&url=https%3A%2F%2Fwww.pinterest.com%2Fbeeayetay%2Fperspectivedrawing%2F Sammy Winson – 3D Printing Markforged. Additive Manufacturing History. Available at: https://markforged.com/resources/blog/additive-manufacturing-history Protolabs. Prototyping Technologies for 3D Printing: SLA vs FDM. Available at: https://www.protolabs.com/resources/blog/prototyping-technologies-for-3d-printing-sla-vs-fdm Unionfab (2023). 3D Printing Cost and Speed. Available at: https://www.unionfab.com/blog/2023/07/3d-printing-cost-and-speed Anish Thayalan – Antidepressants Uncovered Vythilingam, M. et al. (2002). Childhood Trauma Associated With Smaller Hippocampal Volume in Women With Major Depression. American Journal of Psychiatry, 159(12), pp. 2072–2080. doi: https://doi. org/10.1176/appi.ajp.159.12.2072

David, D.J. et al. (2009). Neurogenesis-Dependent and -Independent Effects of Fluoxetine in an Animal Model of Anxiety/Depression. Neuron, 62(4), pp.479–493. doi: https://doi.org/10.1016/j.neuron.2009.04.017
Drugs.com (2020). Prozac for Major Depressive Disorder Reviews. Available at: https://www.drugs.com/comments/fluoxetine/prozac-for-major-depressive-disorder.html

Seb Pabst – Natural Language Processing (NLP)

Wikipedia. Natural Language Processing. Available at: https://en.wikipedia.org/wiki/Natural_language_processing
Britannica. Natural Language Processing. Available at: https://www.britannica.com/technology/natural-language-processing-computer-science
Amazon AWS. What is NLP?. Available at: https://aws.amazon.com/what-is/nlp/
Geeks for Geeks. Phases of Natural Language Processing (NLP). Available at: https://www.geeksforgeeks.org/phases-of-natural-language-processing-nlp/
DeepLearning.ai. Natural Language Processing Resources. Available at: https://www.deeplearning.ai/resources/natural-language-processing/
Shelf.io. Challenges and Considerations in NLP. Available at: https://shelf.io/blog/challenges-and-considerations-in-nlp/

Mark Tang – Fluid Dynamics and Aerospace

Faber, T.E. (2019). Fluid Mechanics. Encyclopædia Britannica. Available at: https://www.britannica.com/science/fluid-mechanics
Physics LibreTexts (2016). 14.7: Fluid Dynamics. Available at: https://phys.libretexts.org/.../14.07%3A_Fluid_Dynamics
StudySmarter. Euler’s Equation Fluid: Dynamics & Derivation. Available at: https://www.studysmarter.co.uk/explanations/engineering/engineering-fluid-mechanics/eulers-equation-fluid/
Khan Academy (2023). Newton’s Second Law of Motion. Available at: https://khanacademy.org/.../newton-s-second-law-of-motion
NASA (2010). Principles of Flight: Bernoulli’s Principle. Available at: https://www.nasa.gov/.../bernoullis-principle-k-4-02-09-17-508.pdf
Testbook (2017). Euler’s Equation for the Motion of Liquid. Available at: https://testbook.com/question-answer/eulers-equation-for-the-motion-of-liquid-assu--5a0e828dc745e30fc3510973
Rashaduddin, M. and Waheedullah, A. (2017). Engineering and Technology. International Journal of Innovative Research in Science, 6. doi: https://doi.org/10.15680/IJIRSET.2017.0610021
Loewen, H. Stability Derivatives: What They Are and How They Are Used. Available at: https://www.micropilot.com/pdf/stability-derivatives.pdf
Leishman, J.G. (2023). Wing Shapes & Nomenclature. Eaglepubs.erau.edu, 31. doi: https://doi.org/10.15394/eaglepub.2022.1066.n20
Aviation Stack Exchange. What is Vortex Lift?. Available at: https://aviation.stackexchange.com/questions/21069/what-is-vortex-lift
Tandfonline. Fluid Mechanics Paper. Available at: https://www.tandfonline.com/doi/pdf/10.1080/16487788.2007.9635956
ScienceDirect. Reynolds-Averaged Navier-Stokes – Overview. Available at: https://www.sciencedirect.com/topics/engineering/reynolds-averaged-navier-stokes

Ethan MacPherson – Hydrogen Cars

Car and Driver. Hyundai Nexo. Available at: https://www.caranddriver.com/hyundai/nexo

Jimi Ikumawoyi – Behaviourism

Cambridge Dictionary (2019). Behaviourism. Cambridge.org. Available at: https://dictionary.cambridge.org/dictionary/english/behaviourism
Votaw, K. (2020). 1.6: Pavlov, Watson, Skinner, and Behaviorism. Social Sci LibreTexts. Available at: https://socialsci.libretexts.org/.../1.06%3A_Pavlov_Watson_Skinner_And_Behaviorism
Nance, R.D. (1970). G. Stanley Hall and John B. Watson as Child Psychologists. Journal of the History of the Behavioral Sciences. doi: https://doi.org/10.1002/1520-6696(197010)6:4%3C303::aid-jhbs2300060402%3E3.0.co;2-m
Baddeley, M. (2010). Herding, Social Influence and Economic Decision-Making: Socio-Psychological and Neuroscientific Analyses. Philosophical Transactions of the Royal Society B, 365(1538), pp.281–290. doi: https://doi.org/10.1098/rstb.2009.0169

Schwartz, D.G. (n.d.). How Casinos Use Math to Make Money When You Play the Slots. Forbes. Available at: https://www.forbes.com/sites/davidschwartz/2018/06/04/how-casinos-use-math-to-make-money-when-you-play-the-slots/
Thompson, J. (2021). What Does Infinite Scroll Mean for AdWord Users?. Peppermint. Available at: https://peppermintcreate.com/what-does-infinite-scroll-mean-for-adword-users/

Tommy Wu – Materials of Dentures

Arafa, K.A. (2016). Effects of Different Complete Denture Base Materials and Tooth Types on Short-Term Phonetics. Journal of Taibah University Medical Sciences, 11(2), pp.110–114. doi: https://doi.org/10.1016/j.jtumed.2015.11.003

Carlsson, G.E. and Omar, R. (2009). The Future of Complete Dentures in Oral Rehabilitation: A Critical Review. Journal of Oral Rehabilitation, 36(8), pp.629–640.
Dentures UK. Choosing the Right Denture Material for Your Lifestyle. Available at: https://www.denturesuk.com/denture-types/right-denture-material-lifestyle/
Huge Dental. Exploring Different Types of Artificial Denture Teeth Materials. Available at: https://www.hugedental.com/exploring-different-types-of-artificial-denture-teeth-materials.html
National Center for Biotechnology Information (2020). Types of Dentures. Institute for Quality and Efficiency in Health Care (IQWiG). Available at: https://www.ncbi.nlm.nih.gov/books/NBK279192/
MediCenter Dental. Guide to Types of Denture Materials. Available at: https://www.mediacenterdental.com/blog/guide-to-types-of-denture-materials/
Muhammad, N. et al. (2022). Characterization of Various Acrylate-Based Artificial Teeth for Denture Fabrication. Journal of Materials Science: Materials in Medicine, 33(2), p.17. doi: https://doi.org/10.1007/s10856-022-06645-8
Park, C. (2025). A Comprehensive Narrative Review Exploring the Current Landscape of Digital Complete Denture Technology and Advancements. Heliyon, 11(2), e41870. doi: https://doi.org/10.1016/j.heliyon.2025.e41870

Quest Dental (2023). The Different Types of Dentures and What Best Fits You. Available at: https://www.questdental.com/articles/the-different-types-of-dentures-and-what-best-fits-you
Singh, J. et al. (2011). Flexible Denture Base Material: A Viable Alternative to Conventional Acrylic Denture Base Material. Contemporary Clinical Dentistry, 2(4), p.313. doi: https://doi.org/10.4103/0976-237x.91795

van Noort, R. (2012). The Future of Dental Devices is Digital. Dental Materials, 28(1), pp.3–12. doi: https://doi.org/10.1016/j.dental.2011.10.014

Kyoto Kazami – Mathematical Approaches for Reducing Data Dimensionality

Wikipedia. Machine Learning. Available at: https://en.wikipedia.org/wiki/Machine_learning

Wikipedia. Dimensionality Reduction. Available at: https://en.wikipedia.org/wiki/Dimensionality_reduction
Wikipedia. Word Embedding. Available at: https://en.wikipedia.org/wiki/Word_embedding

Wikipedia. Word2Vec. Available at: https://en.wikipedia.org/wiki/Word2vec

Wikipedia. Natural Language Processing. Available at: https://en.wikipedia.org/wiki/Natural_language_processing

Wikipedia. Principal Component Analysis. Available at: https://en.wikipedia.org/wiki/Principal_component_analysis

Wikipedia. T-distributed Stochastic Neighbor Embedding. Available at: https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding
Google Code Archive. Word2Vec. Available at: https://code.google.com/archive/p/word2vec/

IBM. Principal Component Analysis. Available at: https://www.ibm.com/think/topics/principal-component-analysis
YouTube. t-SNE Visualisation. Available at: https://www.youtube.com/watch?v=wjZofJX0v4M

Belkina, A.C. et al. (2019). Automated Optimized Parameters for t-SNE Improve Visualization and Analysis of Large Datasets. Nature Communications. doi: https://doi.org/10.1038/s41467-019-13055-y

Hong Kiu Yeung – Parasites

Anthony, R.M. et al. (2007). Protective Immune Mechanisms in Helminth Infection. Nature Reviews Immunology, 7(12), pp.975–987.

Gadallah, M.M.S. (n.d.). Introduction to Medical Parasitology. ResearchGate. Available at: https://www.researchgate.net/publication/259670265_Introduction_To_Medical_Parasitology

Kocahan, T., Dede, S. and Kara, T. (2019). Toxicity of Parasites and Their Unconventional Use in Medicine. ResearchGate. Available at: https://www.researchgate.net/publication/334044249_Toxicity_of_parasites_and_their_unconventional_use_in_medicine

Maizels, R.M. and McSorley, H.J. (2014). Regulation of the Host Immune System by Helminth Parasites. Journal of Allergy and Clinical Immunology, 133(6), pp.1557–1566. Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC3969036/
Meekums, H., Hawash, M.B.F. and Nutman, T.B. (2015). One Health: Parasites and Beyond. Parasitology, 142(1), pp.1–6.
Wang, X. et al. (2023). Application of Toxoplasma gondii in Cancer Immunotherapy. Frontiers in Immunology, 14. Available at: https://pubmed.ncbi.nlm.nih.gov/36641293/
Public Health Image Library (PHIL). Details. Available at: [link unspecified]

Max Woodley – Pushing the Boundaries of the Periodic Table

Chapman, K. (2019). Superheavy: Making and Breaking the Periodic Table. 2nd ed. Bloomsbury.
GSI Helmholtz Centre. Ion Sources & Accelerators. Available at: https://www.gsi.de/en/researchaccelerators
Britannica School. Particle Accelerators. Available at: https://school.eb.co.uk/levels/advanced/article/particle-accelerator/108531
Wikipedia. Island of Stability. Available at: https://en.wikipedia.org/wiki/Island_of_stability
Wikipedia. Coulomb Barrier. Available at: https://en.wikipedia.org/wiki/Coulomb_barrier
Chemistry World. Superheavy Elements – Explainer. Available at: https://www.chemistryworld.com/news/explainer-superheavy-elements/1010345.article
LLNL Seaborg. Superheavy Element Discovery. Available at: https://seaborg.llnl.gov/research/superheavy-element-discovery
YouTube. Superheavy Elements Video. Available at: https://www.youtube.com/watch?v=RDvOOVH0AX4
YouTube. Heavy Nuclei Explainer. Available at: https://www.youtube.com/watch?v=z3oY-XHwss8&t=134s
Dictionary.com. Coulomb Barrier Definition. Available at: https://www.dictionary.com/

Archie Bradbury – Self-Healing Polymers

Nature (2016). Self-Healing Polymers. Available at: https://www.nature.com/articles/nature16989
ScienceDirect. Diels–Alder Reaction in Self-Healing Polymers. Available at: https://www.sciencedirect.com/science/article/abs/pii/S0141391019304211

Gabriel Islam – False Vacuum Theory

Wikipedia. Standard Model. Available at: https://en.wikipedia.org/wiki/Standard_Model

Umar Siad – The Complex Biology of Epigenetic Modifications

Rogers, K. and Fridovich-Keil, J.L. (2018). Epigenetics | Definition, Inheritance, & Disease. Encyclopædia Britannica. Available at: https://www.britannica.com/science/epigenetics
Moore, L.D., Le, T. and Fan, G. (2012). DNA Methylation and Its Basic Function. Neuropsychopharmacology, 38(1), pp.23–38. doi: https://doi.org/10.1038/npp.2012.112
Bannister, A.J. and Kouzarides, T. (2011). Regulation of Chromatin by Histone Modifications. Cell Research, 21(3), pp.381–395. doi: https://doi.org/10.1038/cr.2011.22
Wu, T. et al. (2023). Epigenetic Regulation of Neurotransmitter Signaling in Neurological Disorders. Neurobiology of Disease, 184, p.106232. doi: https://doi.org/10.1016/j.nbd.2023.106232

Zouali, M. (2020). Epigenetics of Autoimmune Diseases. In: Autoimmunity: From Bench to Bedside. Elsevier EBooks, pp.429–466. doi: https://doi.org/10.1016/b978-0-12-812102-3.00025-7

Yichen Zhao – The Fermi Paradox

CNRS News (2015). Fermi’s Paradox and the Missing Aliens. Available at: https://news.cnrs.fr/opinions/fermis-paradox-and-missing-aliens
Wikipedia. Fermi Paradox. Available at: https://en.wikipedia.org/wiki/Fermi_paradox
Psychology Today (2024). Image Resource. Available at: https://cdn.psychologytoday.com/sites/default/files/styles/article-inline-half-caption/public/field_blog_entry_images/2022-03/shutterstock_738535111.jpg
Geeks for Geeks (2024). Human Evolution Stages. Available at: https://www.geeksforgeeks.org/human-evolution-stages/
Cloudfront.net (2024). Image: Greg Rakozy – Starscape. Available at: https://dhjhkxawhe8q4.cloudfront.net/yup-wp/wp-content/uploads/2022/01/27150044/greg-rakozy-38802-unsplash.jpg

Kai Sun Yiu – The Immortal Jellyfish

Osterloff, E. (2019). Immortal Jellyfish: The Secret to Cheating Death. Natural History Museum. Available at: https://www.nhm.ac.uk/discover/immortal-jellyfish-secret-to-cheating-death.html
Rich, N. (2012). Can a Jellyfish Unlock the Secret of Immortality? The New York Times, 28 November. Available at: https://www.nytimes.com/2012/12/02/magazine/can-a-jellyfish-unlock-the-secret-of-immortality.html

Animal Diversity Web. Turritopsis dohrnii: Classification. Available at: https://animaldiversity.org/accounts/Turritopsis_dohrnii/classification/
Ling, T. (2023). The Secrets of the Immortal Jellyfish, Earth’s Longest-Living Animal. BBC Science Focus. Available at: https://www.sciencefocus.com/nature/immortal-jellyfish
American Museum of Natural History (2015). The Immortal Jellyfish. Available at: https://www.amnh.org/explore/news-blogs/on-exhibit-posts/the-immortal-jellyfish
Osborne, M. (2022). ‘Immortal Jellyfish’ Could Spur Discoveries About Human Aging. Smithsonian Magazine. Available at: https://www.smithsonianmag.com/smart-news/immortal-jellyfish-could-spur-discoveries-about-human-aging-180980702/
Kate, P. The Immortal Jellyfish: Facts & Photos. HubPages. Available at: https://discover.hubpages.com/education/immortal-jellyfish

Cyril Sze Chau Leung – The Past, Present, and Future of Robotic Surgery

Cleveland Clinic (2024). Robotic Surgery. Available at: https://my.clevelandclinic.org/health/treatments/22178-robotic-surgery

George, E. et al. (2018). Origins of Robotic Surgery: From Skepticism to Standard of Care. ResearchGate. Available at: https://www.researchgate.net/figure/Computer-Motions-ZEUS-in-an-operating-room_fig6_329469759

Mater Private Network (2024). Robotic Surgery. Available at: https://www.materprivate.ie/our-services/robotic-surgery
Mayo Clinic (n.d.). Robotic Surgery. Available at: https://www.mayoclinic.org/tests-procedures/robotic-surgery/about/pac-20394974
Morrell, A.L.G. et al. (2021). The History of Robotic Surgery and Its Evolution: When Illusion Becomes Reality. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10683436/

Alvaro Ciccia Rodriguez – The Solar-Powered Sea Slug

Cai, H. et al. (2019). A Draft Genome Assembly of the Solar-Powered Sea Slug Elysia chlorotica. Scientific Data. Available at: https://www.nature.com/articles/sdata201922

Cartaxana, P. et al. (2021). Photosynthesis from Stolen Chloroplasts Can Support Sea Slug Reproductive Fitness. Proceedings of the Royal Society B, 288(1963). doi: https://doi.org/10.1098/rspb.2021.1779

Main, D. (2018). Solar-Powered Slugs Hide Wild Secrets—But They’re Vanishing. National Geographic. Available at: https://www.nationalgeographic.com/animals/article/solar-powered-photosynthetic-sea-slugs-in-decline-news

Rafferty, J. (n.d.). Elysia chlorotica | Sea Slug. Encyclopaedia Britannica. Available at: https://www.britannica.com/animal/Elysia-chlorotica
Wikipedia. Elysia chlorotica. Available at: https://en.wikipedia.org/wiki/Elysia_chlorotica
Wikipedia. Kleptoplasty. Available at: https://en.wikipedia.org/wiki/Kleptoplasty

WHITGIFT SCHOOL

Spectra 2025 - ISSUE 3

Contributors:

Head of Publishing - Max McInnes

Chief Editor - Theo Chandler

Editor - Max McInnes

Editor - Mark Tang

Editor - Omi Lashev

Designer - Theo Chandler

Writer - Ryan Chiu

Writer - Anish Thayalan

Writer - Umar Siad

Writer - Jimi Ikumawoyi

Writer - Tommy Wu

Writer - Cyril Sze Chau Leung

Writer - Hong Kiu Yeung

Writer - Alvaro Ciccia Rodriguez

Writer - Kai Sun Yiu

Writer - Prachod Netrakar

Writer - Archie Bradbury

Writer - Sammy Winson

Writer - Ethan MacPherson

Writer - Seb Pabst

Writer - Kyoto Kazami

Writer - Forrest Zhu

Writer - Gabriel Islam

Writer - Yichen Zhao

Writer - Harry Fisher

Writer - Max Woodley

Writer - Mark Tang
