St. Olave's Academic Journal 2025


Table of Contents

Foreword – Shaun Abraham

Aviation Journal

Materials of the Future – Anwesha Ghosh Y13

Clear Air Turbulence – Ewan Butterworth Y13

The Forgotten Story of the Rotodyne – Abhinav Malladi Y11

Biology Journal

Phage Therapy: A revolutionary treatment for bacterial infections – Pavlo Kotenko Y12

Causes of Alzheimer's disease – Arunima Karve Y12

Chemistry Journal

Chemistry of Fentanyl – The Anaesthetic that has caused a Crisis in the US – Raphael Dadula Y13

Chemistry behind Fragrances – Vedika Tibrewal Y13

Computer Science Journal

Zero Trust Architecture: A Radical Rethink of Cybersecurity – Sahishnu Jadhav Y12

How do computers compute? – Mikhail Sumygin Y12

Economics Journal

The Rise of Islamic banking in the West – Aryen Adhikari Y12

Labour's GB railways and the historical clash between nationalisation and privatisation – Michael Bowry Y12

History Journal

The Dangers of Appeasement – Chris Choi Y13

India vs Pakistan – A Study in Sports Diplomacy – Shaun Abraham Y13

Machine Learning Journal

The Future for Robotics – Dev Mehta Y12

Game Theory to Machine Learning: SHapley Additive exPlanations – Fifi Siddiqui Y12

Maths Journal

Benford's Law: The Strange Predictability of Numbers – Shaurya Mehta Y12

Medics Journal

To Fight or to Fly? To Freeze or to Fawn? An Evolutionary Viewpoint – Sophie Li Y12

Hypoplastic Left Heart Syndrome – Hermione Kerr Y12

Modern Foreign Languages Journal

The Spanish Siesta – Freya Keable Y13

Multilingualism in Morocco – Vaidehi Varma Y12

Physics and Engineering Journal

The Physics of Time: From Newtonian Absolutes to Einsteinian Relativity – Anna Greenwood Y12

A Hunt for the Invisible – Eashan Rautaray Y12

Foreword

When I set out to edit this year’s Academic Journal, I perhaps naively thought that it would be a relatively simple task. I soon found this to be far from the reality. However, more than the editing itself, the greatest difficulty I faced was in choosing among the articles submitted to comprise this journal. It is testament to the talent, passion, and initiative of students at Saint Olave’s that for every article published here, I could easily have included many more of equal calibre, equally deserving of recognition.

From personal experience, I can confidently say that the greatest strength of this school is the depth of opportunity available for students – of which this journal is a product. Regardless of where one’s interests lie, there is a society here which indulges that interest, enabling students to delve deeper into their chosen niches and create such brilliant works as those represented in the coming pages. With topics ranging from the empirical methods of mathematics to the intangibilities of politics, from the molecular to the cosmic, and inspired by societies as varied as Aviation and Machine Learning, the diversity of passions at this school is plain to see.

I could go on for pages about how excellent the articles in this journal are, but it is probably easiest for me to let you discover that for yourself. I would like to thank all the students and society leaders who have contributed to this publication, and in doing so have further enriched the vibrant academic environment of this school. This journal is just a snapshot of the many society publications available on the school website, and I cannot encourage you enough to go explore those too in your free time. As it is, I hope you find the articles here as enjoyable as I did while editing them. There really is something for everyone.

Materials of the Future

Anwesha Ghosh Y13

High entropy alloys are, as the name suggests, a bit chaotic. To understand these odd materials, though, we first must understand the structure of normal alloys. I'm sure you all know what alloys are: a material that contains a primary metal and a small amount of another element. The inclusion of the other element disrupts the even layers of the primary metal and makes them harder to slide over each other, and hence the alloy can withstand greater forces without deformation. You'll most definitely be familiar with the diagram on the left, below; it is a reasonable visualisation in 2D, but in reality alloys have 3D crystalline structures like the diagram on the right:

As you can see, depending on the elements used, the alloys arrange themselves in a different crystalline structure to accommodate the relative sizes of each of the atoms. As a result, an alloy of metals with similarly sized atoms will have different packing to one whose atoms differ in size. The first type is known as a solid solution alloy (a good example would be brass, with Zn and Cu), and the second is known as an interstitial alloy (which applies to the Fe and C in steel). The two structures are displayed below:

Now, what does this all have to do with aviation? Alloys have greatly improved mechanical and physical properties compared to pure metals, and allow us to test the boundaries of what we thought possible. If we go back to the Wright Brothers, they used an aluminium alloy in their engine to save weight; later designers went further, moving from wooden frames to aluminium-alloy airframes, which were both lighter and able to withstand higher loading. Nowadays, we have exciting alloys like Inconel 625, termed a 'nickel-based superalloy that possesses high strength properties and resistance to elevated temperatures'. It is often used in hypersonic aircraft for exactly this reason! Another version of Inconel – Inconel X – was used to coat the hypersonic aircraft X-15 in order to withstand the effects of aerodynamic heating at the crazy speeds it flew at!

But wait! Alloys may have been cool at the time the X-15 was made (1958), but we now have a newer and better subgroup – high entropy alloys – waiting to be used. Now we've seen the crystalline structures of normal alloys, let's expand that to high entropy alloys! As you've seen, conventional alloys have a relatively regular crystalline structure. High entropy alloys (HEAs) contain five or more elements in roughly equal amounts, which results in high configurational entropy (which essentially means more disorder, because of the many different types of atoms). Because of this, HEAs are normally composed of elements with different crystalline structures, and end up looking something like this:

Contrary to what you might expect, this weird shape actually results in HEAs having higher strength, hardness, wear resistance, thermal stability and corrosion resistance compared to their conventional counterparts. In order to understand why this is the case, we need to take an even closer look at crystal lattices, namely at 'dislocations'. Due to natural variation, crystal lattices have points at which they are distorted by an additional plane of atoms squeezed in between the regular structure. When a shear force is applied along the crystal lattice, the dislocation shifts until it reaches a grain boundary, as displayed below:

Due to the irregular structure of HEA crystal unit cells, they have more grains. This means that when a shear force is applied, the dislocations have a shorter distance to shift before they hit a grain boundary, and thus the metal resists deformation.
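As an aside, the "high entropy" in the name can be made quantitative. For an ideal mixture of $n$ elements with mole fractions $x_i$, the molar configurational entropy is

$$\Delta S_{\text{conf}} = -R \sum_{i=1}^{n} x_i \ln x_i$$

For five elements in equal proportions (each $x_i = 1/5$), this gives $\Delta S_{\text{conf}} = R \ln 5 \approx 1.61R$, whereas a conventional dilute alloy with, say, 5% of a single alloying element manages only about $0.2R$. Alloys above roughly $1.5R$ are commonly classed as high entropy alloys – hence the name.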

Due to their unique combination of mechanical, thermal and physical properties, HEAs have various applications in aviation. One example is that HEAs may be used for high-temperature applications, such as in jet engines and hypersonic vehicles – so perhaps the X-15 could have lasted even longer if it had been coated with an HEA! The usage of HEAs in jet engines means that fuel can be combusted at higher temperatures, allowing more complete combustion, so more of the fuel's energy content is released.

Unfortunately, as with every emerging piece of technology, we're not quite there yet. HEAs are notoriously difficult to synthesise at scale whilst keeping costs low, so it isn't really feasible to use them in aircraft just yet. Their chaos lends itself to helpful properties, but it also means that we lack a clear understanding of the relationships between composition, microstructure and properties, and so can't really predict how they will behave in different conditions.

Clear Air Turbulence

Ewan Butterworth Y13

Turbulence

For anyone who has flown anywhere, you would be very lucky never to have experienced in-flight turbulence, even in its mildest form. Turbulence comes primarily as a result of moving air (in most cases wind) hitting and exerting forces on an aircraft, or of dramatic (and I mean very sudden) changes in air pressure that may knock it out of its original alignment. To understand this, think of the very basics of how an aircraft flies: the wings create lift by manipulating the air they travel through. If this air is suddenly disturbed by something like a gust of wind, the lift produced by the wings changes, causing the aircraft to be deflected along its vertical axis of movement, or deflected sideways if the wind comes from the side. Simples!

Now, this obviously exerts forces on the occupants of the aircraft as it is nudged about, which is what we feel when the aircraft experiences turbulence.

Because wind creates turbulence, we will look at what creates the wind. Most commonly, this will be weather – the formation of clouds like a cumulonimbus (a BIG cloud) or storms – and this is the turbulence we can see and predict. Pilots will be able to see this weather on their weather radars, if they can't see it out of their window (which they should be able to) or if they haven't been notified by air traffic control (ATC) about the weather they may encounter:

The patches of green, orange and red display the varying severities of the weather ahead on the weather radar

Obviously, the pilots can then ask ATC to be directed around the weather if it's very severe, and, apart from in the most critical cases, this will be approved. Naturally, if it's just a small cloud, the assumption is that there is minimal effect on the aircraft, and thus little concern is given to flying through small clouds (if you ever descend or ascend through clouds on a flight, you'll most likely experience small amounts of turbulence caused by the dramatic pressure changes within the cloud). It is generally only the most serious weather which demands avoidance:

A typical cumulonimbus cloud. These are clouds which pilots should make a concerted effort to avoid, such is their potential to cause severe turbulence

Aircraft can also encounter turbulent movements of air in scenarios like:

1. Mountain waves: why you'll find no aircraft over the Himalayas – https://en.wikipedia.org/wiki/Lee_wave

2. Thermals – https://wiki.ivao.aero/en/home/training/documentation/Turbulence (under convective turbulence)

3. Wake Turbulence – https://en.wikipedia.org/wiki/Wake_turbulence

All of these are central to how flight paths and plans are structured, as well as to the key principles of safe flight practised in aviation.

Clear Air Turbulence

However, flights like SQ321 can also come across what is known as clear air turbulence. This turbulence is essentially "unavoidable" because it is almost impossible to predict, at the speeds aircraft travel, when and where an aircraft might meet it. It occurs primarily between 22,000 and 39,000 ft, which is where most airliners take their cruising altitude, and as a result you are most likely to experience clear air turbulence in the cruise.

The turbulent air is formed at the point where bodies of air moving at widely different speeds meet, with shear arising from the "friction" between the two bodies. The reason clear air turbulence is so hard to detect is that, as the name suggests, it consists of entirely "clear" air, so it is almost impossible to see with the naked eye or detect by radar. Various methods can be implemented, but these can take a while to find the pockets of turbulence, so for the most part predictions are made based on the prevalence of the different factors that can cause clear air turbulence, in order to warn pilots. Mountain waves and thermals, listed earlier, are also forms of clear air turbulence, though these can be predicted more easily than the cause just described, which occurs most frequently near jet streams. We can visualise this using the displays on the next page.

Top: at 5,000 ft, you can see the rising thermals from the Sahara Desert, leading to a prediction of very high amounts of clear air turbulence

Bottom: at 30,000 ft, you can see predictions following the shapes of the jet streams

As you can also see from the picture above, I have marked the position where Singapore Airlines Flight 321 experienced clear air turbulence earlier this year, resulting in the death of a passenger, above the Irrawaddy Basin between Thailand and Myanmar. It seems from here that there is little clear air turbulence predicted, which contradicts what SQ321 went through, but it further emphasises the point that clear air turbulence is extremely hard to predict accurately, which is why it can be so dangerous to aircraft. My parents, from their days working in South-East Asia, flew between Thailand, Myanmar and Bangladesh very frequently, and they always mention that the region is turbulent, owing to the huge weather systems which can develop in those countries.
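For the curious, meteorologists do have a classic way of judging when wind shear is likely to break down into turbulence: the gradient Richardson number,

$$Ri = \frac{N^2}{\left(\partial u / \partial z\right)^2}$$

where $N$ is the buoyancy (Brunt–Väisälä) frequency of the stably stratified air and $\partial u/\partial z$ is the vertical wind shear. As a textbook rule of thumb, when $Ri$ falls below about 0.25 the shear overwhelms the stratification and the smooth flow can roll up into Kelvin–Helmholtz billows – one of the main mechanisms behind clear air turbulence near jet streams. The practical difficulty is measuring these gradients finely enough, in time and space, across a whole flight route.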

Clear air turbulence can range hugely in intensity, from barely noticeable to what SQ321 experienced, and likely beyond. Indeed, there are other famous cases of aircraft suffering severe clear air turbulence, which can be found through this link: https://en.wikipedia.org/wiki/Clear-air_turbulence#Effects_on_aircraft

The Forgotten Story of the Rotodyne

Abhinav Malladi Y11

On the 6th of November 1957, a new era of flight was about to begin, with a new, revolutionary aircraft built by the British plane manufacturer Fairey Aviation. This was no regular plane, nor a regular helicopter, yet it could perform Vertical Take-Off and Landing (VTOL) manoeuvres while flying internationally, carrying seventy-five people. It may look part plane, part helicopter, but in truth it is neither.

All of that cost airlines just 25p per seat mile of travel in today's money, or under 2p per seat mile at the time – cheaper than any helicopter. Even the colossal Airbus A380, with its modern technologies, huge size, and unbeatable range, is still only about half the price of this 1960s marvel to operate relative to its range and capacity. To clarify, a seat mile is a unit used to measure the cost efficiency of an aircraft: the cost of operating it per person per mile travelled (see the sketch below). What was this helicopter-plane hybrid super-aircraft? The Fairey Rotodyne.
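Here is that seat-mile arithmetic as a minimal Python sketch; the trip cost and distance are invented round numbers for illustration, not actual Rotodyne figures:

```python
operating_cost_pounds = 3_000   # assumed total cost of flying one trip
seats = 75                      # the Rotodyne's planned capacity
distance_miles = 200            # assumed length of the trip

# Cost per seat mile = total cost / (seats x miles flown)
cost_per_seat_mile = operating_cost_pounds / (seats * distance_miles)
print(f"£{cost_per_seat_mile:.2f} per seat mile")   # £0.20
```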

The Rotodyne was the perfect mix of these two forms of transport – a type of aircraft known as an autogyro, largely forgotten today.

What is an Autogyro?

An autogyro is essentially a plane with helicopter blades on top. However, these blades are not powered, so it cannot qualify as a helicopter – instead they take advantage of autorotation: the tendency of the blades to spin unpowered when the autogyro is moving forward. This contributes to lift. This same effect is why a helicopter's blades continue spinning even when power is lost, allowing it to land slowly, even slower than a parachute would let it. This is why autogyros were used for many years as scout planes or to deliver mail in the 1930s – a plane that could continue flying even at speeds where any wing would stall.

How did the Rotodyne work?

It is based on the autogyro concept but builds on it to gain VTOL capabilities while remaining efficient. It had short lifting wings designed to keep it flying during forward flight (assisted by the main rotor, which undergoes autorotation, creating more lift). This forward flight is maintained by the two main turboprops.

But how does it do VTOL? The main rotor is free-spinning and has no engine attached to it, after all. This is where tip jets come in. The turboprops run during VTOL; however, they run to produce compressed air for the "tip jets" instead of providing any meaningful thrust. This compressed air, along with some fuel, is pumped through the main rotor to miniature jet engines on the tips of each blade – allowing the rotor to be spun up during take-offs and landings without relying on autorotation, which usually requires some amount of forward flight. These tip jets would later become an issue.

Why did Fairey Aviation build the Rotodyne?

The Rotodyne was built upon the ideas behind the much smaller prototype testing aircraft, the FB-1 Gyrodyne. It was made to solve the biggest issues with inter-city-centre travel: road traffic and cost. Travel by car was far too slow for the businesspeople who needed to attend meetings in different cities with only a few hours to travel. The solution for many? Planes.

However, there was one (very large) issue: the airports were outside the city centres, and the transfer time between airport and city was only growing – often reaching or surpassing the flight time itself, which, on average, was only an hour.

You may have a specific transport option in mind by this point – one that can land almost anywhere, even on the rooftop of the very building the meeting is in. You are not alone, and this thinking resulted in an explosion of helicopter transport services.

However, helicopters lacked the range and speed for longer trips. Even where they were capable of certain journeys, the new helicopter airlines that had arisen from this demand were unable to turn a profit due to the high costs of operating a helicopter, instead relying on government subsidies that could fade away at any moment.

Then why aren't they in our skies?

Due to the nature of Britain's aircraft industry at the time (many small manufacturers), many companies did not have the budget to develop a new type of aircraft from scratch, and so prioritised designing for military uses so as to be awarded development grants from the government. The Rotodyne was built on this government funding. However, there were simply too many manufacturers for the government to give individual grants to, and this was also causing other logistical and economic problems. This meant Fairey Aviation, and practically every other manufacturer in the industry, was forced into a series of mergers, consolidating into just a few larger companies.

So how did the project die? In the worst way possible: a series of truly unlucky, unfortunate events.

The mergers left the Rotodyne competing with other helicopter projects within the same company, all trying to solve the same problem. The tip jets were simply too loud (113 dB!) for use in cities, and even with the development of mufflers cutting their loudness by 85%, they were still extremely loud. And worst of all, facing economic troubles, the British government suddenly pulled funding for the programme in 1962. All documents and the 40-passenger prototype (labelled the Type Y; the 75-passenger version was to be the Type Z) were destroyed, with only a few scale models and remnants left of the project as a whole.

Phage Therapy: A revolutionary treatment for bacterial infections

Pavlo Kotenko Y12

Disease has been an unending problem for humanity since its beginning. Many treatments and medicines have been used throughout history to fight disease, with the most significant breakthrough being the discovery of antibiotics in 1928. However, as overuse of antibiotics leads to rising numbers of MDR (multi-drug-resistant) and TDR (totally-drug-resistant) pathogens, scientists have been looking for a new treatment that could turn the tide and make MDR bacterial infections easier to fight; this is where phage therapy comes in.

Phage therapy utilises bacteriophages – also known as phages – a type of virus that replicates in bacteria and archaea, but not eukaryotes. This is key to their function because it means that, when introduced to the human body, they only target certain surface receptors, and therefore certain species of bacteria – making them specific and avoiding common downsides of conventional antibiotics, such as collateral damage to the gut microbiota or to cells that are part of the human body1.

Another advantage of using phages is their lytic cycle. Through the lytic cycle, the phages reproduce by hijacking bacterial cells. In this way, the reproduction of the viruses is also linked to their main method of fulfilling their function as an antibacterial treatment – every bacterial cell killed grows the numbers of the phages, allowing the introduction of just a small number of them to snowball into an effective treatment for most bacterial diseases2. However, some phages utilise a lysogenic cycle instead, meaning that the viral DNA is replicated within the bacterial cell hosts over many generations instead of being read immediately. The bacteria are still killed at the end of the cycle, but the delay between applying the phages and their bactericidal properties taking effect is too long for practical use as an antibacterial treatment. Therefore, only those species of phage that undergo a lytic cycle can be utilised for phage therapy3. Another possible issue with phages is that the immune system could kill off all the phages that have been administered before they reach the target bacteria and begin reproducing – they are, after all, also a foreign organism in the human body – making the treatment unsuccessful.
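To get a feel for how quickly this snowballing works, here is a toy calculation. The burst size of roughly 100 new phages per lysed cell is a typical textbook figure, while the starting dose and number of cycles are arbitrary illustrative choices; the model also ignores the immune clearance described above:

```python
phages = 1_000         # small initial dose administered
burst_size = 100       # new phages released per lysed bacterial cell (typical)

for generation in range(1, 4):
    phages *= burst_size   # each infected cell lyses, releasing a fresh burst
    print(f"after generation {generation}: {phages:,} phages")

# after generation 3: 1,000,000,000 phages. A small dose snowballs,
# provided enough target bacteria remain to infect.
```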

Eventually, the same problem arises with phage therapy as did with the overuse of antibiotics: the bacteria develop resistance to the phages, and they no longer perform as well, or at all. However, there are several ways in which phage therapy can be adapted to overcome this obstacle. The most common method is a "phage cocktail": instead of administering just one phage species to target a specific species of bacterium, many different species are used to target the bacterium in different ways (e.g. by attaching to different receptors). Phages are especially suited to these kinds of "cocktails" because of their sheer abundance in nature – bacteriophages are the single most common biological agent on Earth4. This makes it easy to find several different species of phage that target the same species of bacteria from just a small sample of phages. Using phage cocktails, the bacteria have far fewer opportunities to survive all the different phage species and evolve a resistance to all of them, making MDR bacteria that resist phage therapy far less likely to develop. Genetic engineering can also be utilised to make the phages more effective at killing bacteria – e.g. stimulating the formation of proteases that more efficiently destroy the structures of the bacterial cell.

If a phage cocktail is still not enough to deal with a bacterial infection, it can be combined with an antibiotic to make a very potent treatment. As the bacteria have had to adapt to the selection pressure of the bacteriophages, they tend to lose the ability to resist antibiotics, which are no longer a significant selection pressure during prolonged exposure to phages. Ultimately, this is the most powerful application of phage therapy – a kind of catch-22 which prevents bacteria from gaining any significant resistance to one treatment without losing their resistance to the other, effectively preventing the emergence of new MDR and TDR bacteria.

Bibliography

1. "Phage therapy: a new frontier for antibiotic-refractory infections" – Josh Jones – Royal College of Pathologists – 2024 – https://www.rcpath.org/resource-report/phage-therapy-a-newfrontier-for-antibiotic-refractory-infections.html

2. "Phage therapy: An alternative to antibiotics in the age of multi-drug resistance" – Derek M. Lin, Britt Koskella and Henry C. Lin – National Library of Medicine – 2017 – https://pmc.ncbi.nlm.nih.gov/articles/PMC5547374/

3. "Phage Therapy: Past, Present and Future" – Madeline Barron, PhD – American Society for Microbiology – 2022 – https://asm.org/articles/2022/august/phage-therapy-past,-present-andfuture

4. "Phages as lifesavers" – Claudia Igler – Biological Sciences Review, Volume 37 – 2024

5. "Phage therapy as a potential solution in the fight against AMR: obstacles and possible futures" – Charlotte Brives and Jessica Pourraz – Nature – 2020 – https://www.nature.com/articles/s41599-020-0478-4

Causes of Alzheimer’s disease

Arunima Karve Y12

Alzheimer’s disease is the most common type of Dementia. Therefore, I think it’s important to begin with a brief overview of Dementia to provide some context before exploring Alzheimer’s disease in more depth.

Dementia is an umbrella term for a collection of neurodegenerative diseases where the nerve cells (neurons) in the brain stop working properly. The causal mechanisms behind this, as well as the symptoms exhibited by the patient, vary according to the type of Dementia the patient is thought to have. There are four main types of Dementia: Alzheimer's disease, Vascular Dementia, Dementia with Lewy Bodies and Frontotemporal Dementia (listed in order of prevalence).

Three out of these, including Alzheimer’s Disease, are thought to be caused by abnormal deposits of proteins aggregating and disrupting neuron functioning. As these protein deposits increase in frequency and size, so does the internal brain damage caused by them, which also leads to a deterioration in symptoms. This is why Dementia is described as a ‘neurodegenerative’ disease.

The common symptom across all of these types of Dementia is a loss of cognitive functioning – thinking, remembering and reasoning – to the extent that the patient ends up struggling to complete basic daily tasks like walking, navigating and handling money. Contrary to popular belief, Dementia is not just memory loss and confusion. It can cause mobility issues [1], as well as a range of behavioural issues such as agitation, depression and even hallucinations. Late-stage Dementia patients may experience difficulty balancing and swallowing, and may have bladder control issues (incontinence).

In this article I am going to explore the proposed biological causal mechanisms behind Alzheimer’s disease, and how they are thought to cause its characteristic symptoms.

Alzheimer's disease has 2 main proposed causal mechanisms:

1. Beta amyloid plaques

2. Tau tangles

Beta amyloid plaques originate from the amyloid precursor protein (APP). This is a normal protein produced in the body which is important in the growth and repair of neurons. Usually, APP is broken down by two enzymes, known as alpha secretase and gamma secretase, to form soluble peptides, which are then broken down further and either removed from the bloodstream or recycled. However, if beta secretase teams up with gamma secretase in the place of alpha secretase, then the leftover fragment produced is an insoluble monomer called amyloid beta. These monomers are chemically sticky, and form beta amyloid plaques as they clump together. These plaques get in between neurons and disrupt neuron-to-neuron signalling, which impairs brain functions such as memory. They may also trigger an immune response and cause inflammation, damaging the surrounding neurons. This is an example of how the damage to the brain compounds in Dementia patients. Amyloid plaques can also deposit around blood vessels in the brain (which is known as amyloid angiopathy), weakening the walls of the blood vessels and increasing the risk of haemorrhage – the term for when a blood vessel bursts. If this occurs in the brain, the patient could develop vascular dementia as well, because a haemorrhagic stroke in the brain is a huge risk factor for it. The patient would then have mixed-diagnosis dementia and probably also worse symptoms. This is how the condition progresses: the damage caused to the brain worsens as more and more beta amyloid plaques build up.

It is currently theorised that these beta amyloid plaques outside the neurons lead to the activation of enzymes called kinases inside neurons, which cause the tau protein inside neurons to become hyperphosphorylated. Tau ensures that the microtubules (track-like structures that ship nutrients along the length of a cell) which make up the cytoskeleton of neurons do not break apart. As a result of becoming hyperphosphorylated, the tau protein changes shape and stops supporting the microtubules, instead clumping together to form neurofibrillary tangles inside the neurons. As neurons with tangles have non-functioning microtubules, they cannot signal as well as normal nerve cells, so they may undergo apoptosis (programmed cell death). As neurons die, the brain undergoes atrophy (it shrinks). The gyri (ridges of the brain) get narrower and the sulci (grooves between the gyri) get wider. Finally, the brain ventricles, which are fluid-filled cavities in the brain, get larger. We can see this physical change in MRIs and post-mortem examinations of patients with Alzheimer's.

So how exactly do all of these biological changes cause the symptoms experienced by an Alzheimer's patient? In order to answer this question fully, it's necessary to consider the functions of different parts of the brain.

The beta amyloid plaques and tau tangles are most commonly found in the temporal lobe (specifically in the hippocampus) and the parietal lobe. The temporal lobe is involved in language, memory, recognising faces, hearing and our sense of smell. The hippocampus is inside the temporal lobe, dealing with emotion as well as forming new memories and transferring them to our long-term memory store. This is why Alzheimer's patients struggle to form new memories: the neurons in the hippocampus have impaired function, so new memories are not transferred to their long-term memory stores. The parietal lobe controls how we react to our environment; it is the part of the brain where sensory information from the environment enters (the somatosensory cortex). If there is damage to the right parietal lobe, for example, then the person might have problems judging distances in three dimensions, and they may struggle to navigate stairs. You may be familiar with Alzheimer's patients describing shiny floors as "wet" – this is likely due to their parietal lobe being compromised.

An interesting point to consider is that some people have the same amount of amyloid beta plaques in their brains as someone with Alzheimer's, but don't actually have the condition in terms of the associated brain damage or characteristic symptoms [2]. These people are given the name "Super Agers" by scientists because they may live up to 100 with these beta amyloid plaques and experience no symptoms or compromise in their brain function. The mere existence of these "Super Agers" seems to contradict the current theory behind the cause of Alzheimer's. However, it has been found that these people have significantly lower levels of tau tangles in their neurons, so perhaps they are inherently resistant to the build-up of tau tangles.

Bibliography

[1] Tolea, M. I., Morris, J. C., & Galvin, J. E. (2016). Trajectory of Mobility Decline by Type of Dementia. Alzheimer disease and associated disorders, 30(1), 60–66, available at https://pmc.ncbi.nlm.nih.gov/articles/PMC4592781/pdf/nihms659303.pdf (date accessed 05/02/25)

[2] Gefen, T., Kawles, A., Makowski-Woidan, B., Engelmeyer, J., Ayala, I., Abbassian, P., Zhang, H., Weintraub, S., Flanagan, M. E., Mao, Q., Bigio, E. H., Rogalski, E., Mesulam, M. M., & Geula, C. (2021). Paucity of Entorhinal Cortex Pathology of the Alzheimer's Type in SuperAgers with Superior Memory Performance. Cerebral Cortex, 31(7), 3177–3183, available at https://pmc.ncbi.nlm.nih.gov/articles/PMC8196247/pdf/bhaa409.pdf (date accessed 05/02/25)

Chemistry of Fentanyl – The Anaesthetic that has caused a Crisis in the US

Raphael Dadula Y13

Introduction

In 1959, Dr Paul Janssen, a Belgian chemist who founded Janssen Pharmaceutica (now under Johnson & Johnson), became the first person to synthesise fentanyl. At the time, it was the most potent opioid (a class of pain-reducing drugs originally derived from the opium poppy plant), being over 100 times more powerful than morphine. It is now often used by the NHS to treat severe pain during or after an operation, and as an alternative when other painkillers are ineffective. Unfortunately, it has been misused, both through inappropriate prescriptions by doctors and by illegal manufacturers, especially in the United States. On 26th October 2017, US President Donald Trump declared the opioid crisis a national public health emergency. In 2022 there were an alarming 73,838 deaths from fentanyl overdoses, compared with only 1,295 deaths twenty years earlier.

Basic Chemistry of Fentanyl

Figure 1: Skeletal formula of Fentanyl

Molecular formula: C22H28N2O

IUPAC Name: N-phenyl-N-[1-(2-phenylethyl)piperidin-4-yl]propanamide

Boiling Point: 466°C

Melting Point: 83-84°C

Solubility in Water: 200 mg/L at 25°C

Density: 1.087 g/cm³

Fentanyl is made up of three main functional groups: two aromatic rings, one amine, and one amide.

When Dr Janssen set out to create a new analgesic, he thought that the piperidine ring (the ring that contains a nitrogen atom) was the most important part for forming one. He and his colleagues wanted to create stronger analgesics than morphine and meperidine, and decided to make more fat-soluble derivatives. Fentanyl has many hydrophobic regions, such as its benzene rings, allowing it to dissolve in non-polar solvents. It is only slightly soluble in water, owing to the lone electron pairs on its oxygen atom, which create regions of negative charge.

Why is it an effective painkiller?

As previously discussed, fentanyl falls under the class of opioid analgesics, so it mainly acts at the μ-opioid receptor. It can be absorbed through the skin as it has a low molecular mass and high lipid solubility. Opioid receptors are found throughout the central nervous system. Usually, endorphins activate these receptors to slow down the pain signal travelling along neurones. As opioids have a similar chemistry to endorphins, they too can help to delay the pain signal.

Below is the step-by-step mechanism of action of opioids on opioid receptors:

1. Opioid receptors stimulate inhibitory pathways which affect the PAG (periaqueductal grey) and the NRPG (nucleus reticularis paragigantocellularis).

2. More inhibitory neurones are stimulated, which causes more neuronal activity in the NRPG (situated in the brainstem).

3. More neurones containing 5-hydroxytryptamine and enkephalin are stimulated. These neurones are connected to the dorsal horn (located in the spinal cord).

4. There is less transmission of pain from the periphery to the thalamus, which is responsible for relaying signals to the cerebral cortex.

What makes fentanyl special is that analgesia may take effect just 1-2 minutes after intravenous injection. Tablets and lozenges take 15-30 minutes to work. Plasma concentrations of only 0.2 to 1.2 ng/mL of fentanyl are enough for somebody to be unable to feel pain. Tablets and lozenges wear off after 4-6 hours.

Why is fentanyl addictive and deadly?

When discussing pain, there are two main neurotransmitters to focus on: glutamate (an excitatory signal) and GABA (an inhibitory signal). Opioids inhibit the amount of GABA secreted towards the nucleus accumbens and other areas. The nucleus accumbens secretes dopamine, which is often known as the "happy hormone". Therefore, when opioids bind to opioid receptors, more dopamine enters the central nervous system. Fentanyl also suppresses the release of noradrenaline, which impacts digestion, wakefulness and more. As the body becomes more used to taking opioids, it may produce fewer opioid receptors, or they may become less responsive, leading the individual to take more fentanyl to produce the same effect. This causes noradrenaline levels to fall, so the body creates more noradrenaline receptors, making it dependent on opioids to maintain the balance. If the individual stops taking fentanyl, the body is far more sensitive to noradrenaline, causing stomach aches, fever and other excruciating symptoms. Even a simple activity such as wearing a shirt can be painful. Another problem is that fentanyl can cause respiratory depression: as fentanyl saturates the blood, breathing slows and less CO2 is exhaled, so the individual is unable to breathe properly.

Why is fentanyl popular among criminals?

The Drug Enforcement Administration says that just 2 mg of fentanyl can be lethal, so it would make sense to avoid selling fentanyl, as it leads to the death of many customers. However, it is cheap and highly addictive, which makes it good for business. This is why fentanyl can be found in drugs such as cocaine and counterfeit Xanax.

Figure 2: A schematic diagram of inhibitory pathways

Fentanyl is easy to synthesise thanks to precursors – building blocks used to form new chemicals – which is why these precursors are strictly regulated. For fentanyl, the main precursor provides the piperidine ring (which contains the nitrogen), together with at least one of the other functional groups mentioned earlier. However, the other functional groups can be slightly modified so that the product still has the same effect as regular fentanyl while avoiding the risk of legal trouble. Think about making ramen: using instant ramen is a lot easier and more convenient than making the soup from scratch. As illicit drug dealers only require small doses, it is easy for them to hide the chemicals in tiny containers with false labels.

A reporter from Reuters found that even a 12-year-old could make fentanyl, and that the process was as easy as making "chicken soup". To avoid anybody taking inspiration from this, we will only discuss how it is processed: the powder is made into pills which also contain sugars, other painkillers, and colouring. Shockingly, Reuters were also able to buy 12 fentanyl precursors, and most of the chemicals were easily mailed to them as ordinary packages. This would have been sufficient to create 3 million tablets, yet they only spent $3,607.18, with most of the money paid in Bitcoin. Some of the chemicals arrived under fake labels, such as hair accessories.

Conclusion

Fentanyl is still a common analgesic used in the NHS, and it has helped to relieve the pain of many patients. However, the UK government must remain vigilant to prevent a fentanyl crisis happening here. The drug could be a game changer in the illegal drug market, and it is important to raise awareness of the risks, so that the public and the police can avert the disastrous consequences it can bring.

Bibliography

1. Pharmaceutical Technology (2018). Fentanyl: where did it all go wrong? [online] Last accessed 2 August 2024: https://www.pharmaceutical-technology.com/features/fentanyl-gowrong/?cf-view

2. NHS (2023). About fentanyl. [online] Last accessed 2 August 2024: https://www.nhs.uk/medicines/fentanyl/aboutfentanyl/#:~:text=It's%20used%20to%20treat%20severe,the%20rest%20of%20the%20body

3. Vanker, P. (2024). Number of overdose deaths from fentanyl in the U.S. from 1999 to 2022. [online] Last accessed 2 August 2024: https://www.statista.com/statistics/895945/fentanyloverdose-deaths-us/

4. Stanley, T. (2014). The Journal of Pain, Vol. 15, No. 12 (December), pp. 1215–1226.

5. National Geographic (2017). This Is What Happens to Your Brain on Opioids | Short Film Showcase. YouTube. Last accessed 2 August 2024: https://www.youtube.com/watch?v=NDVV_M__CSI

6. Pathan, H. and Williams, J. (2012). Basic opioid pharmacology: an update. British Journal of Pain, [online] 6(1), pp. 11–16.

7. Institute of Human Anatomy (2022). Why Fentanyl Is So Incredibly Dangerous. [online] YouTube. Last accessed 3 August 2024: https://www.youtube.com/watch?v=LxyyvW_fcqw&t=367s

8. TED-Ed (2020). What causes opioid addiction, and why is it so tough to combat? – Mike Davis. YouTube. Last accessed 3 August 2024: https://www.youtube.com/watch?v=V0CdS128-q4

9. Ordonez, V. and Salzman, S. (2023). If fentanyl is so deadly, why do drug dealers use it to lace illicit drugs? [online] Last accessed: https://abcnews.go.com/Health/fentanyl-deadlydrug-dealers-lace-illicit-drugs/story?id=96827602

10. Chung, D., Gottesdiener, L. and Jorgic, D. (2024). Fentanyl's deadly chemistry: How rogue labs make opioids. [online] Last accessed 3 August 2024: https://www.reuters.com/investigates/special-report/drugs-fentanyl-supply-chain-process/

11. Tamman, M., Gottesdiener, L. and Eisenhammer, S. (2024). We bought everything needed to make $3 million worth of fentanyl. All it took was $3,600 and a web browser. [online] Last accessed 3 August 2024: https://www.reuters.com/investigates/special-report/drugsfentanyl-supplychain/

Chemistry behind Fragrances

Vedika Tibrewal Y13

Humans have always been attracted to the sense of smell: since the first recorded form of perfume – used by the Mesopotamians in religious ceremonies over 4,000 years ago – man-made scents have kept an undeniable presence in societies across the globe. A smell is essentially a light molecule that floats in air, generally produced by aromatic organic compounds (hence the name). Perfumes as we know them today mainly revolve around arenes, which give them a pleasing scent: these aromatic compounds consist of a cyclic, planar structure with an abundance of C–H bonds. Since C–H bonds are non-polar, hydrogen bonds are not formed between arene molecules, leaving nothing but weak London forces to hold them together. As a result, they have a high vapour pressure, causing gaseous molecules to diffuse into the atmosphere and release scent.

Although most compounds used in perfumes tend to be arenes, other compounds such as terpenes are used due to their high number of carbon-hydrogen bonds, an example being geranyl acetate – the natural, unsaturated compound responsible for the scent of roses. Terpenes were originally considered to fall under the 'aromatic' compound umbrella, but have since been rejected from the group because they do not fit Hückel's Rule, which requires an aromatic compound to have 4n+2 π electrons in a planar, conjugated ring – a quality almost exclusively reserved for cyclic compounds.
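As a quick worked example of Hückel's rule: benzene has six π electrons, and

$$4n + 2 = 6 \quad\Rightarrow\quad n = 1$$

an integer, so benzene qualifies as aromatic. An open-chain terpene may well contain several π bonds, but because it is not a closed, planar, fully conjugated ring, the electron count never even comes into play – which is why terpenes sit outside the aromatic family despite their fragrant behaviour.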

An example of a perfume everyone knows and loves is Chanel N°5. Ernest Beaux, a renowned chemist of the 1920s, used at least 80 substances to make a simple yet intense fragrance – this perfume was special because of its complex blend of natural essences along with synthetic aldehydes.

The main components of perfume include:

Oil

Oils are essential in creating perfumes, providing the primary scents and determining the longevity of a fragrance. These oils can be natural – extracted from flowers and spices, often via steam distillation – or synthetic, made in labs for unique scents and consistency. The oils are categorised into top, middle, and base notes that unfold over time, with top notes being light and quick to evaporate, middle notes giving character, and base notes providing depth and longevity. Perfumers blend different oils to create complex fragrances, with each oil contributing unique olfactory qualities. Some oils act as fixatives, such as musk – usually derived from the molecule Galaxolide – helping the scent last longer and remain stable over time. In the formulation process, perfume oils are diluted with carrier oils or alcohol to achieve the desired strength, with higher concentrations creating stronger fragrances. These oils contain functional groups such as alcohols, esters, aldehydes and ketones, which influence the characteristics of the scent.

Water

Figure 1: The molecular structure of Galaxolide

Figure 2: the different aromas coming off a single flower. Some are isomers, while others are completely different compounds; only a few have a major impact on the overall scent of the flower, while the rest are present in smaller amounts.

Like alcohols, water's polar nature allows it to dissolve floral and citrus oils. It also emulsifies the formulation – especially when creating colognes or lighter sprays. Water is an overlooked part of the perfume-making process, but it influences the diffusion of the fragrance on the skin – affecting how the fragrance unfolds over time.

The Diels-Alder Reaction

This diagram shows the different structures used in perfumes. Different rings, including aromatic, non-aromatic, and unsaturated hydrocarbons, are usually used as they can easily be obtained from natural resources and petrochemicals. The most common reaction mechanism used to synthesise such ring compounds is the Diels-Alder reaction – a cycloaddition of π reactants.

➢ A conjugated diene (in its cis conformation) and an alkene (the dienophile, contributing a π bond) react to form a six-membered ring.

➢ Two new carbon-carbon sigma bonds are formed, as well as a carbon-carbon double bond. The reaction is faster when the dienophile carries electron-withdrawing groups and the diene carries electron-donating groups.

➢ Three π bonds are broken.

The Diels-Alder reaction occurs at moderate temperatures, often with a catalyst, and is key both in synthesising fragrances and in other industries.

Fragrance chemistry is a unique blend of art and science. As our tastes continue to become more sophisticated and sustainability gains importance, the reactions involved in captivating our senses become ever more complex.

Zero Trust Architecture: A Radical Rethink of Cybersecurity

Sahishnu Jadhav Y12

The world of cybersecurity is ever-evolving, and few concepts have shaken it up quite like Zero Trust Architecture. If you have been following developments in network security, you might have come across the term "zero trust", which can sound rather dramatic. Indeed, it is a bold and somewhat radical idea: to trust no one and nothing, either inside or outside a network, without strict verification. This article will explore what Zero Trust Architecture is all about, why it emerged, and how it might shape the future of digital security. Although the idea may seem technical, the core principles are easy enough to grasp. With the right perspective, Zero Trust can be seen as a logical progression in the ongoing battle against cyber threats – one that invites us to question long-held assumptions and embrace a more dynamic, flexible way of defending our online spaces.

The Evolution of Traditional Security Models

Before delving into Zero Trust Architecture itself, it is worth looking at the context in which it was conceived. In earlier years, network security was largely based on what is often called a "castle-and-moat" approach. You can imagine an ancient fortress with thick walls, guarded gates, and perhaps a moat surrounding it. The idea was that anyone inside the fortress was effectively "trusted", whilst those outside were treated as potential attackers. The digital equivalent involved building a robust perimeter around a network using firewalls, intrusion detection systems, and similar tools. Once you were inside, you had a relatively open environment in which data and information flowed freely. This approach was considered adequate for organisations in which employees worked mainly at office locations, often using desktop computers connected directly to a local network. The fortress (or perimeter) was clear, and traffic crossing the boundary could be carefully filtered.

However, times have changed. Today, people work from home and coffee shops, or on the move, connecting to their corporate networks via remote connections and using personal devices like smartphones, tablets, and laptops. Cloud computing platforms host data in remote data centres, far away from a traditional on-premises network perimeter. Employees, contractors, partners, and even customers may require different levels of access to resources. Threats have also evolved, becoming more sophisticated and often originating from within the network itself – whether through malicious insiders or hijacked legitimate user accounts. Consequently, relying on a single well-guarded perimeter has become insufficient. Once an attacker slips inside by stealing credentials or exploiting a vulnerability, the trust-based nature of internal networks makes it easy to roam around, exfiltrating data or causing disruption.

The Emergence of Zero Trust

Recognising this challenge, cybersecurity experts began to question the premise that "inside" users automatically deserve trust. In 2010, an analyst at Forrester Research named John Kindervag popularised the term "zero trust". In essence, zero trust proposes that no user, device, or system should ever be inherently trusted. Instead, they must earn trust by repeatedly verifying their identity, privileges, and security posture. Organisations such as Google started adopting principles akin to zero trust to protect their internal systems, especially after experiencing high-profile attacks that exploited vulnerabilities in the traditional perimeter model.

The rise of cloud computing also accelerated the uptake of zero trust, because hosting data and services in someone else's data centre implied that the old fortress walls had become porous at best. When resources are scattered across different geographies, with users constantly connecting from unpredictable locations, trust must be re-evaluated every time. Although zero trust did not appear out of thin air – it built on decades of research and earlier security best practices – it is often viewed as a fresh perspective that cements the "never trust, always verify" ethos. This shift resonates with the increasingly decentralised digital ecosystem, where static boundaries no longer apply. Moreover, as regulatory requirements around data privacy and security become stricter, zero trust provides a structured, auditable framework for controlling and monitoring access.

Core Principles of Zero Trust Architecture

At the heart of Zero Trust Architecture lies a set of core principles that guide how resources are protected:

1. Never Trust, Always Verify: The baseline assumption is that no user or device is trustworthy by default. Every access request needs verification – ideally more than just a username and password. Multi-factor authentication (MFA) is a common requirement in zero trust systems, ensuring that an attacker cannot just guess or steal a single set of credentials and then freely wander about.

2. Least Privilege Access: Even after verifying identity, a user should only be granted the minimum amount of privilege necessary to perform their tasks. If someone only needs to read certain files, they should not gain access to modify them. This principle limits the scope of damage if an attacker manages to compromise an account.

3. Micro-Segmentation: Traditional networks might treat the internal network as one large open space. Zero trust, by contrast, segments the network into small zones or micro-perimeters. Each zone may contain a specific application or set of data. Access to one zone does not imply access to others, making it more difficult for attackers to move laterally.

4. Continuous Monitoring and Validation: Rather than a simple, one-off check at login, zero trust encourages constant monitoring of user behaviour and device health. If something odd is detected – such as an unusually large data transfer or access from an unfamiliar location – further validation can be demanded, or access can be blocked altogether.

These principles collectively encourage organisations to think differently about security. Instead of hoping to keep the “bad guys” out, zero trust presupposes that attackers might already be inside, or that they could easily waltz in if we are not vigilant. By layering multiple checkpoints and restricting privileges, zero trust makes it far harder for cybercriminals to exploit a single vulnerability or stolen credential to compromise an entire network.
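To make these principles concrete, here is a minimal sketch of a zero-trust style access decision in Python. Every name in it (User, Device, SEGMENT_ACL, authorise) is hypothetical, invented for illustration rather than taken from any real zero trust product:

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    mfa_passed: bool       # "never trust, always verify": checked every time
    roles: frozenset

@dataclass
class Device:
    patched: bool          # endpoint posture: is the device healthy?
    antivirus_ok: bool

# Micro-segmentation: each zone grants specific roles specific actions only
# (least privilege). Access to one zone implies nothing about the others.
SEGMENT_ACL = {
    "hr-database":   {"hr-analyst": {"read"}},
    "build-servers": {"developer": {"read", "write"}},
}

def authorise(user, device, segment, action):
    """Evaluated afresh on every request: no session-long trust is carried over."""
    if not user.mfa_passed:                           # verify identity
        return False
    if not (device.patched and device.antivirus_ok):  # verify device posture
        return False
    acl = SEGMENT_ACL.get(segment, {})
    return any(action in acl.get(role, set()) for role in user.roles)

alice = User("alice", mfa_passed=True, roles=frozenset({"hr-analyst"}))
laptop = Device(patched=True, antivirus_ok=True)
print(authorise(alice, laptop, "hr-database", "read"))    # True
print(authorise(alice, laptop, "hr-database", "write"))   # False: least privilege
print(authorise(alice, laptop, "build-servers", "read"))  # False: wrong segment
```

Note that authorise runs on every single request: passing it once confers no lasting trust, and a role that opens one segment says nothing about any other.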

Implementing Zero Trust: The Human Element

Despite zero trust being underpinned by technology, the human element is crucial. Organisations must ensure that staff, contractors, and even customers understand how the new security measures work and why they are necessary. A carefully considered zero trust strategy might involve regularly training employees in best practices like using strong passphrases, recognising phishing attempts, and reporting suspicious behaviour. It also typically requires changes in organisational culture. The notion that being "inside" the network confers inherent trust is deeply ingrained in many workplace environments. Shifting to a zero-trust mindset can sometimes lead to frustration if people find themselves repeatedly challenged for credentials or forced to navigate micro-segmented networks.

However, if introduced with clarity and accompanied by well-designed authentication processes, zero trust can actually feel less cumbersome for end users. Technology such as single sign-on (SSO) and adaptive authentication can minimise friction, asking for additional verification only when risk indicators arise – such as logging in from a new country or at an unusual hour. The trick is striking the right balance: you want robust security without making daily tasks unbearably tedious. Communication is key: if users understand why zero trust policies exist and how they protect sensitive data, resistance to change usually decreases.
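As a rough sketch of how such adaptive authentication might work, consider this toy risk scorer. The signals, weights and thresholds are all invented for the example; a real system would weigh hundreds of signals:

```python
USUAL_COUNTRIES = {"GB"}          # where this user normally logs in from
USUAL_HOURS = range(7, 20)        # 07:00-19:59 on a typical working day

def risk_score(country, hour, new_device):
    """Add up simple risk signals for one login attempt."""
    score = 0
    if country not in USUAL_COUNTRIES:
        score += 2                # logging in from a new country
    if hour not in USUAL_HOURS:
        score += 1                # unusual hour
    if new_device:
        score += 2                # unrecognised device
    return score

def required_checks(score):
    """Step up the verification demanded as the risk grows."""
    if score == 0:
        return ["password"]                        # low friction for low risk
    if score <= 2:
        return ["password", "mfa-prompt"]          # step-up verification
    return ["password", "mfa-prompt", "manual-review"]

print(required_checks(risk_score("GB", 9,  new_device=False)))  # ['password']
print(required_checks(risk_score("FR", 23, new_device=True)))   # full step-up
```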

Technological Building Blocks

Zero Trust Architecture does not usually rely on a single product or technology but rather an ecosystem of tools and frameworks that work together:

• Identity and Access Management (IAM): At the core, zero trust requires robust identity management. This goes beyond merely having a user database. It typically involves automated workflows for provisioning and deprovisioning accounts, enforcing MFA, and generating detailed access logs for auditing purposes.

• Endpoint Security: In a zero-trust world, each device is scrutinised for compliance with security policies. Is the device running updated antivirus software? Has it been patched recently? Is it jailbroken or running suspicious processes? Tools that can verify the health and posture of an endpoint device are essential to enforce trust on a per-connection basis.

• Network Segmentation and Firewalls: Proper segmentation requires next-generation firewalls capable of enforcing granular policies. These might operate at the application layer, recognising specific services and controlling traffic accordingly. Software-defined networking (SDN) can help create dynamic segmentation, spinning up or tearing down network "segments" in response to real-time conditions.

• Security Analytics and Monitoring: Continuous monitoring demands the collection and analysis of vast amounts of data. Security information and event management (SIEM) systems and advanced analytics platforms can help spot anomalies, detect intrusions, and take automated actions such as isolating a device or locking an account.

These components intertwine to create a multi-layered defence. For instance, consider a remote employee trying to access a company's internal database. The user must first prove their identity via MFA, then the device must pass an endpoint security check confirming it is patched and free of malware. Only then is a secure connection established to the specific micro-segment of the network hosting the database. Throughout this session, if any suspicious activity arises – like unusual download patterns – the session might be terminated or a secondary verification requested.

Potential Challenges and Criticisms

Like any paradigm shift, zero trust is not without its critics. One of the most frequent concerns is complexity. Setting up micro-segmentation, enforcing consistent policies across cloud and on-premises resources, and integrating advanced monitoring tools can be daunting and expensive. Smaller organisations may lack the technical expertise or budget to deploy a full-blown zero trust framework. On the other hand, many security vendors market "zero trust" solutions as if you can just buy a product and instantly become zero trust-compliant. In reality, zero trust is more of a philosophy, requiring continuous effort, planning, and maintenance.

Another sticking point is user experience. If implemented poorly, zero trust can hamper productivity by forcing endless re-authentications and restricting access so strictly that staff cannot do their jobs effectively. Careful tuning, user education, and the adoption of technologies that streamline authentication processes are essential for ensuring that security does not become a bottleneck.

Finally, whilst zero trust certainly raises the bar for attackers, it is by no means a magic bullet. A sophisticated attacker might still manage to compromise user credentials or exploit zero-day vulnerabilities. As with any defensive strategy, constant vigilance and a layered approach remain essential. Zero trust must be seen as part of a broader security posture – one that includes security awareness training, updated patching regimes, and incident response plans.

Real-World Applications and Success Stories

Perhaps the best way to understand zero trust is to look at how forward-thinking organisations have applied it. One notable example is Google’s BeyondCorp initiative. This approach eliminates the need for a traditional VPN (virtual private network), instead allowing employees to access internal applications from any location but only after proving their identity and device security posture. By treating every connection as potentially hostile, Google reduced the risk that a compromised internal network segment could endanger all its services.

Banks and financial institutions are also embracing zero trust, recognising the high stakes involved in data breaches. By aggressively segmenting their networks, they make it more difficult for cybercriminals to jump from one compromised account to other lucrative targets. Some hospitals and healthcare providers have started adopting zero trust to protect patient records, especially as medical devices and telehealth services expand. If a single medical device is hacked, zero trust principles ensure the attackers cannot automatically move on to the entire database of patient information.

These success stories highlight that zero trust does not have to be a nuisance if carefully implemented. Indeed, many organisations that have adopted it report improved visibility into their networks. They know exactly who is accessing what, from which device, at any given time. This visibility can lead to better auditing and compliance, making it easier to investigate suspicious incidents and demonstrate good security practices to regulators.

The Future of Zero Trust

As networks continue to sprawl across cloud services, remote work remains common, and data privacy regulations tighten, it is likely that zero trust will become an even more prominent aspect of cybersecurity. Developments in artificial intelligence and machine learning may further enhance the ability to monitor and respond to threats in real time. Imagine a system that, within milliseconds, analyses a user’s access request, checks hundreds of signals, calculates a risk score, and grants or denies entry accordingly, all without the user noticing more than a slight pause. As technology advances, zero trust could become more seamless, weaving itself into the fabric of everyday computing.

Nonetheless, challenges remain. For smaller organisations, the complexity and cost of zero trust might seem overwhelming. In response, many cloud providers are incorporating zero trust features into their managed services, offering simpler, more cost-effective pathways to adoption. Meanwhile, standards bodies and industry groups are working to publish guidelines and best practices, hoping to reduce the guesswork and help organisations avoid costly missteps.
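The risk-scoring idea mentioned above, weighting many signals into one score and comparing it to thresholds, can be illustrated with a toy sketch. The signals, weights and thresholds here are invented purely for illustration; a real system would use far more signals and learned weights.

```python
# Toy risk scorer: combines weighted signals into a single score,
# then grants, challenges, or denies accordingly. The signals,
# weights and thresholds are invented purely for illustration.

def risk_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of risk signals, each expected in [0, 1]."""
    return sum(weights[name] * value for name, value in signals.items())

def decide(score: float) -> str:
    if score < 0.3:
        return "grant"
    if score < 0.7:
        return "challenge"   # e.g. ask for a second factor
    return "deny"

signals = {"new_location": 1.0, "unusual_hour": 0.5, "device_known": 0.0}
weights = {"new_location": 0.4, "unusual_hour": 0.2, "device_known": 0.4}

score = risk_score(signals, weights)
print(score, decide(score))  # 0.5 -> "challenge"
```

The middle “challenge” band is what lets such a system stay unobtrusive: most requests are granted silently, and only ambiguous ones trigger extra verification.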

Cybersecurity is a continuous cat-and-mouse game, with defenders and attackers each evolving their methods. Zero trust, by taking a more granular and dynamic stance, is arguably the next logical step in this arms race. It pushes us to drop the illusion of a perfectly secure perimeter and instead focus on validating every request, every device, and every action. In doing so, it helps us adapt to a digital reality where threats can appear from anywhere, inside or out.

Conclusion: A Paradigm for the Next Generation

In many ways, Zero Trust Architecture is more a philosophy than a singular technology. It challenges traditional assumptions about where we draw our boundaries and who we consider “trusted”. By encouraging continuous verification, least privilege access, and microsegmentation, zero trust bolsters defences against both external and internal threats. For high school students looking to pursue computer science or cybersecurity, it is a fascinating example of how new models can emerge when old assumptions no longer hold. It also demonstrates that the human aspect – user understanding, organisational culture, and willingness to adapt – remains crucial even in a field as technical as cybersecurity.

As you contemplate further studies or careers in this domain, remember that zero trust will likely be at the centre of future cybersecurity designs. You might soon be helping companies implement these ideas, refining them with machine learning, or even inventing the next evolution of security paradigms. While zero trust may sound radical at first, it is ultimately about realism. In a world where data and connections flow in every direction, verifying identities, segmenting resources, and assuming that breaches are always possible can provide a more robust and forward-looking security foundation. This mindset of constant vigilance and flexible defence will undoubtedly guide the next generation of cyber professionals and perhaps shape your own journey into the world of cybersecurity.

How do computers compute?

ENGINE | The first computer... kind of

It’s really hard to pinpoint a single device as the earliest computer created – does an abacus, Sumerian clay tablet, or a gear-driven, hand-powered, mechanical ancient Greek model of the Solar System (search up the Antikythera mechanism) count?

Either way, Charles Babbage’s Difference Engine is a solid place to start.

Tired of reading massive tables that approximated complicated operations, such as logarithms and polynomial calculations, Babbage resolved to create a machine that did it all for him. Thus, in 1822, he announced his design of the Difference Engine, which was completely mechanical and made use of all sorts of parts we might expect to find in a windmill rather than a computer – gears, rods, ratchets, and so on. Numbers were represented using 10-toothed gears stacked in columns to represent the decimal system. However, all these incredibly intricate mechanical parts came at a price, and an exceedingly high one apparently, as the British government cut funding in 1833 (probably after realising they could have purchased twenty-two steam locomotives from Stephenson’s factory instead of funding a design which only ended up being one-seventh complete for the same sum of money).

While Babbage didn’t give up on his machines, going on to design the Analytical Engine and later the Difference Engine 2 in subsequent years, everyone else around him did, and he died having failed to find any more funding for his work. However, Ada Lovelace took interest in his work and designed some of the world’s earliest computer programs, along with predicting some potential uses of the machine, including making music and manipulating symbols – functions that modern computers still carry out today.

TABULATOR | An appetiser of what computers can do

1880. A frustrated inventor can't handle the inefficient and taxing process of counting and processing census questionnaires. Enter Herman Hollerith, 20 at the time, who invented one of the world’s first electromechanical automated counting machines. Inspired by passenger tickets pierced by conductors on trains, he used the concept to create a machine that used punch-cards to store and process the ever-increasing amounts of data generated by censuses.

It worked by lowering metal pins onto the cards; if there was a hole, then a current would be conducted to close a circuit. If a current was detected, this would register a 1; if there was none, a 0. As we know, this principle of using zeros and ones is the foundation of all the hardware that we know and love today.
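The read principle is easy to simulate: treat each position on the card as “hole” or “no hole” and read off a bit. The card encoding below is invented for illustration, not Hollerith’s actual column format.

```python
# Simulating the tabulator's read principle: a hole closes the
# circuit (1), no hole leaves it open (0). The card layout is
# invented for illustration, not Hollerith's actual encoding.

def read_card(row: str) -> list[int]:
    """'O' marks a punched hole; anything else is blank."""
    return [1 if slot == "O" else 0 for slot in row]

card = "O.O.O"            # holes in positions 0, 2 and 4
bits = read_card(card)
print(bits)                # [1, 0, 1, 0, 1]
print(sum(bits), "holes")  # tallying, much as the machine's counters did
```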

ENIAC | First-ever general-purpose electronic computer

The Electronic Numerical Integrator And Computer was created by a group of US physicists in consultation with mathematician John von Neumann during World War II. It was the first general-purpose, electronic, programmable computer. Contrary to the aforementioned tabulator, it used plugboards (a wall into which sockets can be inserted) to input instructions, meaning the speed of the machine was not limited by how quickly you were able to shove information into it. However, a massive drawback to this was that it could sometimes take days to rewire these plugs to create the desired program.

While it didn’t make use of binary, it was one of the first computers to use vacuum tubes, giving it the label of a first-generation computer. Vacuum tubes are tubes of glass with the air removed, and with electrodes inside, allowing the flow of current in them to be manipulated. By adjusting the voltage going into them, they can be made to act as switches. These were much more effective than anything mechanical gears could achieve, as they could act as fast as electrons could flow. This allowed the ENIAC to perform up to 5,000 additions per second.
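Since tubes (and, later, transistors) are just fast switches, the article’s title question can be answered in miniature: combine switches into logic gates, and gates into adders. Below is a sketch of a half adder, standard digital-logic material, written in Python purely as illustration of the principle rather than of ENIAC’s actual (decimal) circuitry.

```python
# How switches become arithmetic: model each gate as a function of
# two binary inputs, then wire two gates into a half adder. This is
# textbook digital logic, expressed in Python only for illustration.

def AND(a: int, b: int) -> int:
    return a & b

def XOR(a: int, b: int) -> int:
    return a ^ b

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Add two bits: XOR gives the sum bit, AND gives the carry."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} = carry {c}, sum {s}")
# 1 + 1 gives carry 1, sum 0 - i.e. binary 10, which is 2
```

Chain enough of these (full adders sharing carries) and you get the multi-digit addition that machines like ENIAC performed thousands of times per second.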

Left to right: Babbage’s Difference Engine, the Hollerith Tabulating Machine, ENIAC

TRANSISTORS | From big-room-sized to small-room-sized

Vacuum tubes, although very convenient compared to Babbage’s mechanical monstrosities, had one fatal flaw: they were massive (and used a bunch of unnecessary energy). If computers could never be shrunk beyond the size of an entire basement (as was the case with the ENIAC), any hopes of developing or even commercialising computers any further would be in vain and the industry doomed.

Thankfully, in 1947, Walter Brattain, John Bardeen and William Shockley invented the world’s first transistor. Transistors are made from semiconducting materials, most commonly silicon, and they work by acting as switches; similar to the vacuum tubes, depending on what is input into them, they either let current through or block it completely, allowing them to represent binary. Initially, most scientists, including our innovative trio, believed that the best transistors could do was perform extremely specialised functions – in other words, they were seen to be basically useless. However, very soon applications were found in radios, notably by the newly emerging company Sony, and soon after – you guessed it – computers.

Now called second-generation computers, the transistors in them, only a couple of millimetres long, allowed them to shrink massively in size. Computers, although still relatively large, now only had to take up a couple of cubic metres, such as the IBM 1620, or the DEC PDP-8, which you could (relatively speaking) stuff into your fridge if you really wanted to.

CIRCUITS | Let’s go smaller

It was on a dreary night of January that Robert Noyce beheld the accomplishment of his toils. In other words, he realised that instead of just making the transistors out of semiconductors, this could be applied to all the components, and thus the entire circuit could be made on a single chip, essentially packing all the components together very tightly. This would make it far quicker and more efficient to create computer parts, and also allow them to shrink even more. Thus the unitary, or integrated, circuit was born, and it was officially patented in 1961.

Now, the benefit of integrated circuits is that they act as one discrete component. Whereas before you would have a bunch of transistors, resistors, and capacitors connected by an even bigger bunch of wires, now these components are miniaturised and placed very close together, meaning much less power is used. This also improves processing speeds, as signals take less time to travel over shorter distances. Additionally, this makes computers cheaper to produce, as the parts are much easier to mass-produce: an integrated circuit can be made from just one single silicon crystal.

MICROPROCESSORS | ...even smaller

At this point, computers still used stacks of integrated circuits to make up CPUs. It was nice, and they were decently small, so much so that calculators and “minicomputers” began to gain commercial success. However, the release of the Intel 4004 in 1971 took it one step further – although it could only add and subtract, and only 4 bits at a time, the entire CPU was on one chip, called a microprocessor, instead of being assembled from discrete components. This made it even easier and cheaper to manufacture, thus allowing the tech industry to explode to the levels we can observe today, which is why we see computers pretty much everywhere nowadays – in our phones, AirPods, even smart fridges.
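To see what “only 4 bits at a time” means in practice, here is a small sketch of 4-bit addition: any result above 15 wraps around, with the overflow surfacing as a carry that software must handle in further steps. This is illustrative Python, not actual Intel 4004 assembly.

```python
# What a 4-bit ALU implies: results are kept modulo 16, and the
# overflow appears as a carry flag. Illustrative Python only, not
# real Intel 4004 code.

def add_4bit(a: int, b: int) -> tuple[int, int]:
    """Add two 4-bit values; return (result mod 16, carry flag)."""
    total = (a & 0xF) + (b & 0xF)
    return total & 0xF, 1 if total > 0xF else 0

print(add_4bit(9, 5))   # (14, 0) - fits in 4 bits
print(add_4bit(9, 8))   # (1, 1)  - 17 wraps to 1, with a carry out
```

Working on larger numbers meant chaining such 4-bit operations together, which is part of why wider (8-, 16-, 32-bit) microprocessors followed so quickly.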

TODAY | What happens next?

Computers are now able to be integrated into any device, allowing for instantaneous communication across the entire planet, intelligent flight systems within aeroplanes, or tracking of vital signs in medical machines. Yet without all the work done previously, they never would have been able to evolve into the incredibly powerful and incredibly small machines that we see and use every day in modern times. From this point technology can only bloom further; million-core supercomputers, foldable phones, dedicated AI chips and even quantum machines are only specks of sand in the sea of endless possibilities that await us in the years to come.

References

https://www.g2.com/articles/history-of-computers

https://www.sciencemuseum.org.uk/objects-and-stories/charles-babbages-difference-engines-and-science-museum

https://findingada.com/about/who-was-ada/

https://www.ibm.com/history/punched-card-tabulator

https://www.britannica.com/technology/ENIAC

https://www.computerhope.com/jargon/v/vacuumtu.htm

https://www.ericsson.com/en/about-us/history/products/other-products/the-transistor-an-invention-ahead-of-its-time

https://en.wikipedia.org/wiki/Transistor_computer

https://www.pbs.org/transistor/teach/teacherguide_html/lesson3.html#:~:text=Transistors%20are%20the%20main%20component,it%20off%20to%20represent%200.

https://www.ansys.com/en-gb/blog/what-is-an-integrated-circuit

https://etc.usf.edu/lit2go/128/frankenstein-or-the-modern-prometheus/2295/chapter-5/

https://www.ibm.com/think/topics/microprocessor

https://ethw.org/Rise_and_Fall_of_Minicomputers

https://computer.howstuffworks.com/microprocessor.htm

The Rise of Islamic banking in the West

Aryen Adhikari Y12

What lessons can western financial systems learn from an alternative approach to banking?

The term ‘teleology’ is derived from the Greek word ‘telos', which, when used by Aristotle, means ‘the end or purpose of action’. Objects, like actions, can have a telos as well, and knowing what the telos of an object is can reveal both what an object is and what makes it good. Teleologically, a knife is an object whose purpose is to cut, and the good knives are those that cut well. Perhaps we need to consider the 19th century American doctrine of Manifest Destiny to fully grasp this concept – it takes the axiom that ‘it is right and proper for the USA to annex all land westward to the ocean, because that is what land is for’.

For Aristotle, precisely the same thing is true for money. Once its function is understood, we will be able to understand both what money is and how it ought to be used. In Politics, Aristotle declares the telos of money to be a medium of exchange, which is all we must know to see both what money is and why it should not be lent at interest. ‘Money was intended to be used in exchange,’ Aristotle explains, ‘but not to increase at interest.’ Since money’s function is to serve as a way to facilitate exchanges whose end is the trade of useful and necessary commodities, those who exchange it not for goods, but instead for more money, ‘pervert the end that money was created to serve’, and so engage in ‘unnatural’ exchanges ‘of the most hated sort’. ‘Usury, which makes a gain out of money itself, and not from the natural object of it,’ is ‘of all modes of getting wealth […] the most unnatural’.

Clearly, this left fertile ground for the assault on usury which the Church would mount following its Christianization of the Roman Empire. This prohibition of interest was upheld as a key principle throughout the development of the Abrahamic faiths. Despite this, the rise of trade and commerce, paired with greater liberal economic thought during the Reformation, led to the growing recognition of credit as essential for economic development, which would eventually result in the practice of lending at a ‘reasonable’ rate becoming accepted within Europe.

‘O you who have believed, do not consume usury, doubled and multiplied, but fear Allah that you may be successful.’ (Surah Al Imran 3:130)

As the global economy modernized, however, Islamic financial systems adapted differently to enforce the prohibition of interest, or riba. This is possibly due to there having been less economic pressure on Islamic economies to introduce reforms, as they remained more agrarian and trade-based, with less reliance on complex financial instruments. Another contributing factor, however, is the lack of secularism and the intertwining of religious and government affairs, as seen through the prevalence of Sharia law – this law equates riba with injustice as it undermines fairness in transactions; it allows the lender to profit without any effort or risk, while the borrower bears all the risk.

The telos of money is thus re-emphasized: it exists as a tool for trade used to measure the value of commodities, not a commodity in itself. As such, the generation of wealth through conventional banking – the profit acquired via the difference between interest gained on loans given and interest paid on deposits taken – is seen as unethical and even sometimes deceptive, exacerbating social divisions as wealth accumulates without effort among those who already have it. In line with these moral principles, investment in haram industries such as alcohol, gambling or tobacco is strictly forbidden. This extends to the elimination of speculation, maisir, and uncertainty, gharar, covering most financial derivatives like futures and options as well as conventional insurance contracts, affecting the profitability and possible opportunities within Islamic finance.

In order to replace riba, a profit and loss sharing paradigm is adopted, predominantly based on the mudarabah (profit-sharing) and musharakah (joint venture) concepts of Islamic contracting, replacing the cost of capital as a predetermined fixed rate. Profit and loss sharing is a contract between two or more transacting parties, allowing them to pool their resources to invest in a project in which all contributors share both risk and reward. Due to these principles, Islamic banking relies on asset-backed financing to forge a strong connection between investment and the real economy – in agriculture, sukuk bonds and project financing, but especially real estate and lease-to-own agreements.

For example, the concept of mortgage financing differs greatly from western systems through the use of murabaha (cost-plus financing) – instead of loaning the necessary funds to the potential homeowner, the bank might buy the property from the seller and then re-sell it to the buyer at a profit, whilst allowing them to pay the bank in installments. In other words, the bank’s profit cannot be made explicit as a fixed interest rate would be, and therefore there are no additional penalties for late payments, which in the conventional system would lead to many issues for investors, creating a more equitable financial environment. A simple worked example is sketched below.
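As a worked illustration of the murabaha structure just described (all figures invented): the bank buys the property, resells it at a fixed, disclosed markup, and the buyer repays that fixed sale price in equal installments, rather than paying interest that compounds over time.

```python
# Worked illustration of a murabaha (cost-plus) home purchase.
# All figures are invented for illustration only.

purchase_price = 200_000   # bank buys the property at this price
markup_rate = 0.25         # fixed, disclosed markup agreed up front
term_months = 240          # 20-year repayment term

sale_price = purchase_price * (1 + markup_rate)  # bank resells at 250,000
installment = sale_price / term_months

print(f"Bank's sale price: £{sale_price:,.0f}")      # £250,000
print(f"Monthly installment: £{installment:,.2f}")   # £1,041.67
# The buyer's total obligation is fixed at the outset; unlike an
# interest-bearing loan, it does not grow with time or late payment.
```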

Perhaps a parallel can be found between the core morals at the foundation of Islamic banking and the recent growth of ethical investing, with sustainable equity and fixed income funds accounting for 7.9 percent of global assets under management. The similarity lies not only in the exclusion of harmful and haram industries (as highlighted by figure 1), but also, through the lens of public and welfare economics, in a shared sentiment of social justice and economic equality. Both systems declare that private market actions should contribute to the public good, maximizing social utility rather than individual profit alone. In other terms, this could lead to greater demand for policies that promote socially responsible investments, as both ideals discourage harmful externalities; these policies may manifest themselves as tax incentives, sustainable investments, or regulations that promote corporate social responsibility.

Despite its essential role in the progress of efficiency and equality in a society, 2.7 billion people (70% of the adult population) in emerging markets still have no access to basic financial services, and a great part of them come from countries with predominantly Muslim populations. However, Islamic banking addresses the issue of financial inclusion through the development of microfinance initiatives encouraged through the use of risk-sharing financial instruments. In Bangladesh, such initiatives have enabled thousands of rural entrepreneurs under Islami Bank Bangladesh Limited (IBBL) to grow their businesses and improve their livelihoods without resorting to high-interest loans. Individuals from lower socioeconomic backgrounds who may struggle to access traditional banking services are given much greater opportunities to contribute to the economy, whilst also aiding their personal welfare.

Financial inclusion can therefore be seen in the provision of services for underrepresented populations, especially when considering the Muslim population of almost four million in the UK. However, Al Rayan Bank, a pioneer of Islamic banking in the west, claims that approximately 80% of its fixed term deposit customers are non-Muslim. So, how did this alternative approach to banking first gain traction in the UK, the largest western hub for Islamic finance?

Growth in the UK

Islamic banking in the UK began in the 1980s to meet the demand from the Muslim community for Sharia-compliant banking services, starting with the establishment of Al Baraka International Bank in 1982, which laid the foundation for Islamic financial services in the country. However, it was not until 2004 that the UK saw its first fully licensed Sharia-compliant retail bank – the Islamic Bank of Britain (now Al Rayan Bank) – a significant milestone that opened the door for the future growth of Islamic financial services. The UK government’s willingness to accommodate Islamic finance reflected a broader strategy to support diversity in financial services and attract investments from Muslim-majority countries.

However, many regulatory hurdles stood in the way, as existing regulations were not designed to accommodate Sharia-compliant financial principles such as murabaha. Thinking back to mortgage financing, two separate transactions are often involved, as the bank must first purchase the property and then resell it to the buyer at a markup, which leads to double taxation through Stamp Duty Land Tax (SDLT), as the toy arithmetic below illustrates. To provide an equal footing with conventional banks, the British government introduced reforms in 2003 to remove the double SDLT costs, showing how initiative has been taken to integrate Islamic finance into the mainstream financial system.
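The double-taxation problem arises because the property legally changes hands twice under murabaha. The flat 5% duty rate below is invented purely to keep the arithmetic simple; actual SDLT is banded and has changed over the years.

```python
# Why murabaha attracted double Stamp Duty before the 2003 reform:
# the property legally changes hands twice. The flat 5% rate is
# invented purely for simple arithmetic; real SDLT is banded.

price = 300_000
markup = 0.2
duty_rate = 0.05

conventional_duty = price * duty_rate  # one transfer, taxed once
murabaha_duty = (price * duty_rate                       # seller -> bank
                 + price * (1 + markup) * duty_rate)     # bank -> buyer

print(f"Conventional mortgage duty: £{conventional_duty:,.0f}")  # £15,000
print(f"Pre-2003 murabaha duty:     £{murabaha_duty:,.0f}")      # £33,000
```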

This integration not only increases opportunities for Muslim households looking to gain greater access to financial services, but also enhances diversification within the financial market, especially when considering a focus on asset-backed investment across multiple sectors and sukuk bonds. As a result, more capital may be attracted domestically and internationally as there is a wider range of financial products available to investors, and the economy’s resilience to market volatility will also increase, encouraging sustainable growth. The stability achieved from this alternative approach to banking is evident when considering the comparatively insignificant impact of the 2008 financial crisis on Islamic banking services, possibly also due to the distancing from ‘toxic assets’ and speculative investments. In 2009, the official newspaper of the Vatican (L’Osservatore Romano) put forward the idea that ‘the ethical principles on which Islamic finance is based may bring banks closer to their clients and to the true spirit which should mark every financial service’.

Its growth was further facilitated in 2014 following the introduction of sukuk bonds and the Sovereign Sukuk (issued by the government) worth £200 million. Like bonds, sukuk have a maturity date and provide income flows over the life of the security with a payment at maturity to their holders. Unlike bonds, the value of sukuk is not based solely on the creditworthiness of the issuer, as holding sukuk shares represents ownership in tangible assets, usufruct, or services of revenue-generating issuers. Consequently, sukuk prices can vary both with the creditworthiness of the issuer and with the market value of the underlying asset.

Today, the Islamic banking sector in the UK is well-established and continues to grow. The total value of Islamic finance in Britain is estimated to be worth £5.3 billion, and the market is expected to expand further as demand for ethical finance grows, as seen with Al Rayan’s diverse customer portfolio. Given its successes and effective integration, other western countries such as Canada are observing the model’s viability and appeal. Canada’s Muslim population is a sizeable and growing 1.5 million, and the government has made some legislative efforts to allow the growth of Islamic banking, although many issues remain, such as double taxation on property transactions.

It is clear that an ever-increasing global Muslim population of 1.9 billion, paired with a growing regard for ethical investing and financial inclusion, will lead to a significant growth in demand for Islamic financial services. Can this growing demand reshape how banking systems operate globally?

Labour’s GB Railways and the Historical Clash between Nationalisation and Privatisation

One of Labour’s new flagship policies has been to renationalise the railways, in an attempt both to save money and to create a more efficient and effective railway service for people all around the country. The clash between nationalisation and privatisation on Britain’s railways has a long and rich history.

Background

There was a long period of time where Labour and the Tories clashed over this issue of nationalisation vs privatisation, for railways and countless other industries. Naturally, both ideas have major pros and cons, especially in relation to the complex issue of the railway system. The railways were first nationalised into a state-owned company called British Railways back in 1948 under Labour rule. This was part of the first wave of nationalisation of many British industries, including other areas such as the coal, gas and electric industries. Everything, including the actual rail, the trains and the franchises themselves, was state owned in an example of vertical integration – where a company controls different stages along the supply chain rather than just one specific level.

This all changed in 1994 when Conservative PM John Major decided to privatise the railways once again. John Major was the Prime Minister succeeding the famous (or infamous, depending on who you ask) Margaret Thatcher, who undid most of the Labour nationalisation, with the distinct omission of the railways. This was due to the success of the railways at the time: in many people’s opinion, they had one of the highest reputations in Europe during the 1980s. Thatcher felt no need to fix an industry which seemed to be working while running at relatively low government cost, remaining below £3 billion in subsidies throughout the 1980s.

However, in 1994, after Thatcher was kicked out of 10 Downing Street by her own party, John Major made the decision to privatise the railways, in a choice which many argue was political rather than what he thought was best for the railways, as he attempted to win over Thatcher supporters by continuing her legacy of privatisation across the country. The new system involved franchises, such as Thameslink and Southeastern, bidding to run different parts of the railways. They would then be given 5-year contracts to run the railways while having to reach targets set by the government called KPIs (Key Performance Indicators).

At this point it starts to get a little more complicated: franchises, track and train were no longer owned by the same companies. They were now separate entities. Rolling stock leasing companies (AKA ROSCOs) leased out trains to the franchises, and Railtrack companies dealt with the upkeep of the tracks. The reason this horizontally integrated system was needed was that, since franchises only had short contracts, it would not be viable for them to invest in long-term assets such as the tracks and the trains themselves, and so other private companies instead dealt with these elements. This privatised system lasted about 8 years before its failure.

Railtrack was renationalised in 2002 and succeeded by Network Rail following two train crashes, namely the Southall train crash (1997) and the Ladbroke Grove rail crash (1999). Not only were the Railtrack companies held responsible for the accidents, but they were also unable to provide adequate reasoning for the incidents and ways to prevent future accidents, due to most of their resources being outsourced. And so, in 2002, Network Rail was born: a government-owned company which deals with the upkeep of the rail to this day. This brings us to the present day, and so now would probably be a good time to outline Labour’s actual plans for railway renationalisation. As mentioned, the track itself is already state owned, and Starmer seems to have no intention of renationalising the ROSCOs, largely due to the high cost it would require to buy all the trains needed to be used on the railways. GB Railways and the renationalisation of the railways only refers to the railway franchises themselves, 40% of which are already government owned due to past bankruptcies and fraud.

The government intends to take over these franchises and turn them into one centralised company called GB Railways. This will be done at a relatively low cost, as the government plans to wait until the expiry of the franchises’ contracts, most of which expire within 12 months and all within 4 years. The most notable exception is the franchise Avanti West Coast, which, due to underperformance, the government wishes (and has threatened) to nationalise before its contract expires in 2027.

What are the reasons for the renationalisation of railways?

The main reason for this change in government policy was to reduce inefficiency by creating one central company, and to create a better railway service for the people of the UK. Labour hopes the nationalisation will help them to achieve these goals due to the following advantages of this economic policy. As you may be able to imagine, by having 15 or more different franchises instead of one centralised one, the system is much more complicated and inefficient. 15 franchises mean 15 management teams, one for each franchise – a waste of money which the government hopes to eliminate.

There’s also a whole aspect of the blame game whenever something goes wrong on the railways. When a delay occurs, franchises blame it on Network Rail or even other franchises, and people must be employed to work out who and what was actually at fault for even the most minor of delays. When the railways become nationalised and under one company, there will be no such problem. Another advantage of nationalisation is to stop rival companies from wasting money competing rather than focusing on improving their own service. An example of this, not particularly relevant for railways in the present but nonetheless interesting, was the ability to shut down needless lines and stations set up by rival companies battling for control of the same area.

Before the nationalisation of the railways in 1948, many towns would have two stations set up by rival companies. An example of this local to St Olave’s is Bromley. Bromley North was built in 1878, 20 years after the building of Bromley South. It was built by the ‘Southeastern’ franchise to rival the ‘London, Chatham and Dover Railway (LCDR)’ franchise, which had built Bromley South. There was absolutely no need for two stations so close to each other, and so nowadays Bromley North is practically redundant. The grouping of the railway companies in the 1920s and nationalisation in 1948 were able to deal with this particular inefficiency, closing many similar stations, such as in Tunbridge Wells.

The final advantage of nationalisation I’ll discuss is the fact that privately owned companies will always do what’s better for profit and not necessarily what’s better for the people. This is why many believe that services for the people of Britain, such as transport networks and (the recent topic of) the water companies, should be under state ownership, run by people who (supposedly) care about the people of the country and who will aim to create a better service rather than focusing on making profit.

With this policy, Labour hope to increase efficiency and improve the rail system for the people of Britain who, at large, are discontented with the current state of the railway system, especially given the huge increase in government subsidies of the railways in recent times to up to £7 billion – more than double that of pre-privatisation.

Nationalisation vs privatisation

The key arguments against this renationalisation, and the main advantages of privatisation, include the theory that privately owned businesses will always be more likely to succeed due to ‘entrepreneurial spirit’, and the risk that future decisions made in a state-run company may be made for political reasons rather than for the benefit of the corporation.

The first point is a more general and simplistic argument in support of privatisation. The theory is that entrepreneurial minds at the head of private companies will always be able to out-innovate and out-manage the highly qualified, yet non-entrepreneurial, managers and senior figures the government appoints to run publicly owned companies. Private companies have more freedom to innovate and make improvements to their enterprise, and so will end up creating a more efficient company despite the drawbacks that come with privatisation explained earlier.

In terms of the railways, this is largely a non-point. Despite technically being private, the railway franchises of today are extremely highly regulated – ironically, more so than before 1994. Franchises must ask government permission to raise fare prices, which is frequently rejected. Franchises must reach goals set by the government on how frequently their trains run. This was particularly relevant during the COVID-19 pandemic, when the government forced railway franchises to run their trains at a higher frequency than the companies would have wanted, despite this leading to losses due to most services being nowhere near full. The fact is that the privatised railway franchises don’t have anywhere near the regulatory freedom they would need to innovate, so the supposed loss of entrepreneurial spirit from a switch to a state-owned company is unlikely to make much of a difference.

The second advantage of privatisation – or rather, disadvantage of nationalisation – is that decisions made in a state-run company may not always be made for the benefit of the company itself but for political reasons. A perfect example of this came shortly after the first nationalisation of the railways in 1948.

One of the first major dilemmas for the railways after nationalisation was what type of trains it would run on its tracks. The use of diesel locomotives had been rising rapidly in many countries around the world, and switching from coal to diesel would likely have been beneficial for the railway industry itself. However, the decision was made to stick with coal-powered trains due to the country’s large coal mining industry, which the use of coal locomotives would help to support, rather than having to import oil from the Middle East or other foreign countries. Although in the short term there may not be any similar situations, at some point there will be a situation where the interests of the railways and the government conflict, and when this happens, I can almost guarantee that the needs of the railways will not be prioritised.

A final worry about the renationalisation, specific to the railways, is the risk of a decrease in government investment in the railways. As it stands, the government is contractually forced into putting money into the railways as it signs contracts with these private franchises. The worry many rail users have is that once these obligations disappear, the railways may see budget cuts when the government needs to save money – which, in light of Rachel Reeves’ recent comments about a £22 billion black hole in the nation’s finances, looks extremely likely.

Alternatively, an interesting view was brought to my attention by George Paterson, Senior Stakeholder Engagement Manager at Southeastern Railway. To simplify his lengthy title, his general job is to inform the public, rail user groups and campaign groups about the actions and plans of the Southeastern rail network. On September 19 (just soon enough for me to be able to make this late addition to this article) he gave a talk at one such campaign group, called Railfuture, which my father happens to be affiliated with.

As part of his job, he speaks regularly with senior figures at Southeastern and so was able to provide insight into the differences seen since Southeastern became state owned in 2021. He presented the view that since becoming state owned, the network had in fact received increased investment. This, he believed, was due to a greater trust that the money would be spent effectively by the government, since the company was now state owned rather than private. Therefore, it seems that it’s harder to predict what will happen to government investment in the railways after renationalisation, and so even harder to predict whether it will be a success or a failure.

Will renationalisation and GB railways be a success?

Onto the big question: there’s no doubt that having one centralised company will remove inefficiencies such as the multitude of management teams and employees needed to decipher who exactly was at fault for various railway delays. Furthermore, as mentioned before, there seems to be widespread discontent within the British public around the state of the railways. Labour obviously thought this would be a vote winner and, considering the outcome of the election, seem to have judged public opinion correctly.

However, by the same token, it seems that many couldn’t care less about who owns the railways. When I mentioned how Southeastern became state-owned in 2021, it may have come as a surprise to you, despite Southeastern likely being the railway franchise closest and most relevant to you. To be honest, it was a surprise to me as well, and to many of my mates who had been taking Southeastern trains to school for the last 5 years. 40% of all railways are already state owned due to poor performance or fraudulent finances (which was the case for Southeastern) – and is there a difference in their services compared to privatised services? Not from my personal experience. This may be because 50% of all delays are said to be caused by Network Rail which, as discussed earlier, is also already state owned, so there may not be as large a difference as many are expecting.

It also doesn’t help that Labour, as with most of their policies, have been extremely vague in how they will carry out their plans of renationalisation. They’ve said that they would wait for contracts to expire to create GB Railways, and that they would not be changing anything to do with ROSCOs, but past that they haven’t specified much at all, such as planned investment, changes in train frequencies, or how they will carry out the modernisation they have promised.

My previous points and the government’s last mess of a major railway plan (HS2) make it extremely easy to give a negative outlook, but I’ll choose to be positive. The railways need a substantial change, and judging by the way Labour seem to be rushing to make these changes, it seems that they must have a coherent plan to improve the railways. What they have said they would do so far seems intelligent (waiting for contracts to expire to avoid the costs of contract breaches, and not attempting to renationalise ROSCOs to avoid buying expensive trains), and the benefits in efficiency from this renationalisation cannot be overlooked.

Conclusion

Before privatisation in 1994, Britain had one of the best railways in the world; our railway now lags behind much of it. The past shows us what’s possible with a state-owned railway, and I’m extremely hopeful that we can reach those higher standards once again. Out of any industry, the railways are probably the one most suited to nationalisation. Most of the industry is already nationalised, and the rest is so tightly regulated that it might as well be. In theory we should see all of the benefits of nationalisation without the major negatives of leaving privatisation. Therefore, I very much believe that renationalisation and GB Railways is something railway users should be getting excited about – as long as the government doesn’t try to create the fastest railway in the world for no reason, leading to ridiculous overspending like with HS2.

The Dangers of Appeasement

Chris Choi Y13

History has repeatedly demonstrated the dangers of appeasement – the cowardly and defeatist policy of conceding to an aggressor's demands in the hope of avoiding conflict. Perhaps the most infamous example of this occurred in the lead-up to World War II, when British Prime Minister Neville Chamberlain and French Premier Édouard Daladier, seeking to maintain peace in Europe, allowed Adolf Hitler to annex parts of Czechoslovakia. This decision, made without Czechoslovakia's participation in the Munich Conference of 1938, only emboldened Nazi Germany, ultimately leading to greater territorial conquests and, soon after, global war. While Chamberlain and Daladier’s actions have been widely condemned, some historians argue that Britain used this period to rearm, recognising that war was inevitable. Nonetheless, appeasement failed spectacularly. Today, echoes of this failed strategy are emerging once again as the United States meets with Russia in Saudi Arabia – without Ukraine present – raising fears that history may be repeating itself.

In September 1938, Hitler demanded control over the Sudetenland, a region of Czechoslovakia with a significant German-speaking population. Rather than stand firm against this aggression, Britain and France sought a diplomatic solution, out of fear that a war of a similar scale to the Great War would break out again. The Munich Agreement, signed on September 30, 1938, granted Hitler his demands in exchange for promises of peace, notably without the consent of the Czechoslovak State. Chamberlain famously returned to Britain declaring “peace in our time,” waving a worthless piece of paper while Hitler laughed and made plans to gobble up the rest of Czechoslovakia. Predictably, by March 1939, Hitler had done just that. Appeasement wasn’t just a failure – it was a farce. This event exposed the fallacy of appeasement: giving in to an aggressor does not satisfy their appetite for power – it only encourages further expansion. This failure directly led to World War II. Hitler, emboldened by his easy victory in Czechoslovakia, turned his sights on Poland. When Germany invaded Poland on September 1, 1939, Britain and France had no choice but to declare war. The lesson was clear: appeasement does not prevent war – it delays it while making the aggressor stronger.

Fast forward to today, and here we go again. Russia’s invasion of Ukraine has led to a prolonged conflict with devastating consequences. The West initially responded with strong rhetoric, sanctions, and military aid, but as the war drags on, the weak-kneed politicians are getting nervous. Now, diplomatic efforts are being made to negotiate a resolution – but, in true Munich fashion, without Ukraine at the table. The U.S. and Russian officials meeting in Saudi Arabia might as well be standing on a balcony, waving a piece of paper and promising “peace for our time”. There are growing fears that parts of Ukraine may be ceded to Russia under the guise of securing peace. If this occurs, it would send a dangerous message: aggression is rewarded, and borders can be changed through force. Just as Hitler’s annexation of Czechoslovakia emboldened his later conquests, allowing Russia to retain Ukrainian territory could encourage further territorial ambitions – not just by Russia but by other revisionist powers worldwide.

And who are the ringleaders of this disgrace? Look no further than VPOTUS J.D. Vance and POTUS Donald Trump, two men who have made it their mission to turn American foreign policy into a circus act. Their rhetoric – such as calling the Ukrainian President a dictator and blaming the Ukrainians for the war – along with threats to pull the United States out of NATO, sends a dangerous signal, not just to Ukraine but to every American ally. This abandonment is especially galling given that the United States expected European solidarity after the attacks of 9/11. America did not stand alone then, yet today, these political charlatans would have us believe that isolationism is the way forward. It’s a pathetic, gutless betrayal, plain and simple.

And what about the so-called leadership in the Pentagon and the State Department? The U.S. Secretary of Defense Pete Hegseth and Secretary of State Marco Rubio have been about as useful as a chocolate teapot – blustering about democracy while offering half-baked support, dithering like frightened schoolboys who’ve forgotten their homework. Their indecision – or rather, their decision to allow Russia to perhaps even retain control of the areas it has invaded, to the extent of defending Russia – has emboldened the aggressor, signaling weakness when strength is needed most. If Ukraine is abandoned, it won’t just be a strategic blunder – it will be a moral failure of epic proportions, one that will stain America’s reputation for generations.

The lesson from history is clear: failing to stand firm against aggression does not prevent conflict – it invites more of it. The Munich Conference of 1938 showed that sacrificing smaller nations for the illusion of peace only delays the inevitable and empowers aggressors. If Western leaders today attempt a similar strategy with Ukraine, they may be repeating the mistakes of Chamberlain and Daladier, leading to even greater instability in the future.

The world must remember that true peace comes not from appeasement, but from deterrence. Strength and unity are the only ways to prevent further expansionist aggression. If history teaches us anything, it is that conceding land to aggressors does not buy lasting peace – it only sets the stage for greater conflict down the road. The current stance of American leadership, particularly from isolationist figures in the Republican Party, is not just weak – it is outright treacherous. They are betraying Ukraine, betraying NATO, and betraying every principle they claim to uphold. They are not leaders; they are appeasers, enablers of tyranny, standing idly by while history repeats itself.

If the US turns its back on Ukraine, it will not just be a stain on its record – it will be an act of cowardice that future generations will curse. And no amount of spin, no carefully-worded diplomatic statements, will erase the act of abandoning a nation fighting for its survival. When history looks back, let it record their names alongside the great appeasers of the past – Chamberlain, Daladier, and now, Vance, Trump, and every other masquerading leader. Appeasement is the defeatist and cowardly way out, and its practitioners have no place in history except as a warning to others.

India vs Pakistan- A Study in Sports Diplomacy

Shaun Abraham Y13

When faced with the prompt ‘Lessons Learnt from History’, the first instinct is to retrospect on history’s great failures. This is borne out of the generally reasonable, if somewhat idealistic, assumption that society has been ever advancing, building on lessons from past failures towards a mutually beneficial future. However, in an increasingly fractious modern climate of geopolitical tensions, environmental disasters, and social unrest, it would be foolish to assume that society has not taken a backward step at some point in its journey up to today. For all the failures in history we can learn from, there are also successes which we can, and far too often don’t, emulate. One such lesson, of particular relevance in today’s unstable geopolitical context, is in the forgotten art of sports diplomacy. Among the best examples of this is the cricket diplomacy employed between India and Pakistan from the 1980s to the late 2000s, set against a volatile backdrop of flaring border tensions and aggressive political posturing.

Background

The Partition of British India in 1947 into the independent states of Pakistan and India was a tumultuous period in the history of the subcontinent. Conflicts between the Muslim and Hindu ethnic groups dominating the two states respectively had left millions dead and tens of millions displaced. This fraught beginning to the two nations’ relationship, despite coexisting under British rule for more than a century preceding this, left any interaction between the two, however small, highly charged and subject to massive public interest on both sides of the border. Cricket matches between the two took on significance both as a peaceful outlet for the tensions between the two, but also as an opportunity for communities sundered by the Partition to come together and put aside their differences under the unifying banner of sport.

However, following the Indo-Pakistani war of 1965 – the largest engagement of armed vehicles and tanks since WW2 – and later the War of 1971 over the formation of Bangladesh from East Pakistan, cricket matches between India and Pakistan were suspended till 1978. Cricket was at that point structured such that bilateral international tours were the primary basis of the sport – with no leagues paralleling those in football or basketball – meaning this suspension had a large and definitive impact. The resumption of cricketing ties between the two from 1978 was thus an incredibly important event, particularly given the explosion of cricket’s already huge popularity in India (and by extension Pakistan) following their ODI World Cup win in 1983.

1987 Operation Brasstacks

In the latter months of 1986, leading into early 1987, India launched a mass mobilisation of its armed forces in the border state of Rajasthan, in a military simulation exercise codenamed ‘Operation Brasstacks’. Though India’s stated aim was to determine tactical nuclear strategy, the Pakistan Military viewed this operation as a rehearsal for a “blitzkrieg-like” infiltration of their borders, with over 500,000 Indian troops amassed within 100 miles of Pakistani territory. In response, Pakistan increased its own military presence on the border and put its nuclear installations on high alert in January 1987, with Pakistani foreign minister Zain Noorani overtly threatening India’s ambassador S.K. Singh with the infliction of “unacceptable damage” on India. Given India’s own nuclear capabilities, established from their 1974 ‘Smiling Buddha’ nuclear test, there was a genuine threat that these tensions could escalate into nuclear war.

Indian tanks deployed in Rajasthan, 1987

A cricket tour may seem inconsequential against such high stakes, but when in February of 1987 Pakistan toured India for a 5-match Test series, Pakistani President General Zia ul-Haq was invited to watch the 3rd Test in Jaipur – the state capital of Rajasthan – where he engaged in cordial discussions with Indian PM Rajiv Gandhi that helped precipitate the de-escalation of tensions. By March 1987 an agreement had been reached for troops to be withdrawn on both sides from the Rajasthan border and from the contested state of Jammu and Kashmir, with India inviting Pakistani statesmen to observe the conclusion of Operation Brasstacks as a marker of its peaceful intentions. Given the political ill will between the two nations, the organisation of a face-to-face summit between the two leaders would have proved difficult without the use of cricket as a facilitating medium. Despite simmering tensions between the two nations, the sheer public excitement around an Indo-Pak cricket tour meant their differences were temporarily put aside and the tour was able to go ahead as normal, making it a prime vehicle for diplomacy where more conventional pathways of discussion were hindered.

1999-2004 Tensions

Pakistani-backed militant movements in Indian-administered Kashmir from 1989 led once more to a cooling of relations between the two countries in the 90s. Cricketing relations between the two were maintained, albeit with matches being played at neutral venues such as Sharjah and Toronto, where matches remained highly anticipated and well attended by large audiences of expatriates. Between 1996 and 1999, Canada even hosted an annual Indo-Pak ODI series titled “The Friendship Cup”. However, Pakistani infiltration beyond the ‘Line of Control (LoC)’ into Indian territory in May 1999, beginning the Kargil War, saw bilateral cricketing relations cease once more, aside from a one-off encounter in the 2003 World Cup. Though India had claimed victory in the war by July of 1999, tensions between the two countries remained at a dangerous high over the next 5 years, exacerbated by numerous flashpoints and standoffs. Terrorist attacks on the Indian Parliament in 2001 and on an Indian army camp in 2002 (where the majority of victims were civilians) saw the two nations come to the brink of nuclear war, with India having set concrete plans in motion for ground assaults on Pakistan. It was only through the diplomatic efforts of the UK, US, Russia, France, and Japan that tensions finally began to ease in late 2002, with a ceasefire deal signed in November 2003.

The ensuing period of détente between the two nations remained fraught, with tensions liable to boil over at even the smallest provocation. In this context, India’s tour of Pakistan in 2004 was a landmark act of diplomacy, marking the first Indian tour on Pakistani soil in 15 years, and planned by the government rather than the central cricket board. It was seen as an attempt by Indian PM Vajpayee at burying hostilities with President Musharraf of Pakistan, despite the latter standing accused of orchestrating attacks on India just a few years prior. The tour required hitherto unseen levels of security for the Indian contingent, but they were greeted warmly by Pakistani crowds and players across the country – despite beating Pakistan in both the Test and ODI legs of the series (losses having previously inspired acrimonious reactions from both sets of fans). Indian fans and journalists were also given special ‘cricket visas’ to Pakistan to watch the tour, inspiring a cultural exchange and dispelling hostilities between the two nations at the grassroots level. Sending off the Indian team on their tour, PM Vajpayee had instructed them to “win hearts too, beside matches” and this was indeed the case, as the tour helped create a period of prospering cricketing relations and thawing diplomatic ones, with General Musharraf visiting India for a cricket match in 2006, and Pakistani players forming an integral part of the newly formed ‘Indian Premier League (IPL)’ in 2008.

2008 Onwards

Following the Pakistani-backed 26/11 terror attacks on Mumbai, relations between the two countries deteriorated once more, with Pakistani players expelled from the IPL and all bilateral tours between the two countries ceasing, apart from one Pakistani tour of India in 2012. Aside from that, India and Pakistan have only played each other since 2008 under the auspices of official tournaments, with Pakistan visiting India for the 2011 and 2023 World Cups, but themselves having to host any ‘home’ matches against India in neutral venues like Dubai. India’s domination of the cricketing market has in some senses been weaponised by the Indian Cricket Board against Pakistan, with India recently refusing to travel to Pakistan for the ongoing (as of February 2025) Champions Trophy, forcing them to accede to Indian wishes about where they play. Cricket matches between the two countries today garner fervent public attention due to their scarcity, and though they continue to serve as a source of camaraderie between fans and players, the political will to convert them into opportunities for governmental diplomacy is unfortunately far less prevalent than it once was.

Conclusion

Despite its death beyond 2008, there are many lessons to be learned from the cricket diplomacy between India and Pakistan across the 1980s and 2000s. Sport unifies international communities in a way few other mechanisms can, acting as an equaliser by subjecting all participants to the same rules on the pitch or court, the same competitive jeopardy, and the same end goal of victory. It serves as a reminder to viewers that international conflicts are not with the common people of the opposing country, who are fans or players in just the same way as them, but with their leaders, and thus helps prevent the escalation of political disagreement into wider racial or ethnic hatred. Many Indians and Pakistanis today still refer to each other as brothers and sisters, not just because of their shared past but because of a shared culture – particularly when it comes to cricket – that unifies them despite their differences. In today’s world, sporting relations are often the first casualty of international conflicts, with players banned from tournaments and championships not on their own demerit but on that of the country they represent.

India celebrating their ODI series win
Captains Rahul Dravid (left) and Inzamam ul-Haq

There is of course a case to be made for this – some conflicts are too pervasive to be set aside in the name of sport – but it is important to remember that sport can also be extended as an olive branch, or a vehicle for diplomatic discussion, as it was by India and Pakistan. Though the two no longer abide by a policy of sports diplomacy, we should learn from their past successes, and recognise that sport is not just a pastime, but holds value as a tool for peace and diplomacy.

Machine Learning Journal

The Future for Robotics

Dev Mehta Y12

How are robots shaping our future?

When thinking about the word “robot”, a tall humanoid machine with many parts resembling the human anatomy may come to mind. While these types of robots may be what sci-fi movies display, there are many more types than meet the eye. Modern-day robots come in many shapes and sizes, from tiny robots ready to aid in precise surgery to giant industrial arms designed to build the latest cars we know today. Robotics has made a huge leap forward over the last couple of decades.

Furthermore, with the advancement of artificial intelligence, some robots don’t need to have a physical presence to compute the data that a user provides. Having a large, diverse community of robots, able to accomplish a multitude of tasks, allows us humans to build and shape the future alongside the robotic revolution.

A brief history of robotics:

The idea of robots, or self-operating machines, dates to ancient times. The first idea of a robot took form as Talos, a giant bronze automaton* in Greek mythology, designed to protect the island of Crete and displaying the earliest recorded human imagination about autonomous machines. With the myth originating from around 700 BC, the idea of robots wouldn’t be recorded again for well over a millennium. Moving to the Middle Ages, we see more examples of robots being designed, specifically by Ismail Al-Jazari, an Islamic engineer who advanced mechanical engineering. In “The Book of Knowledge of Ingenious Mechanical Devices”, he described various mechanical devices, including a programmable musical automaton and an elaborate handwashing machine, and used gears and levers to automate many other tasks, forming the foundations for the robotics we would later see.

Left: An image of many robotic arms used to aid in the construction of a car

Right: An image of a handwashing machine from "The Book of Knowledge of Ingenious Mechanical Devices"

In the eighteenth century, Jacques de Vaucanson, a French inventor, created a mechanical duck that could eat, drink, and move. This duck demonstrated the potential of machines, inspiring future developments in automation and robotics. The term “robot” itself is just over 100 years old: it was first used in 1920 by Karel Capek, a Czech writer, in his play R.U.R. (Rossum’s Universal Robots). Capek’s work introduced the concept of machines designed to perform human-like tasks, sparking widespread interest and debate about artificial beings. In the mid-20th century, Isaac Asimov introduced the “three laws of robotics” in his science fiction stories, aiming to ensure robots serve humans ethically and safely, ultimately leading to discussions on robot ethics in the real world.

The first industrial robot was then seen just a few decades later. Created by George Devol and Joseph Engelberger, it was tasked with welding and handling materials. This was the first real step towards the modern robots we see used in factories today, and it paved the way for advanced manufacturing robots programmed for repetitive tasks.

The Science Behind Robots: How Do They Work?

With robots having a history dating back to 700 BC, we may ask how these automated machines are really made, especially seeing as they have been idolised for over two millennia yet only introduced into society within the last 150 years or so. Fundamentally, robots need three core capabilities in order to function: sensing, thinking, and acting.

1. Sensing – just like humans, robots need to be able to gather information from their environment to complete actions. They do this by collecting data from cameras, microphones or other sensory tools. Examples of sensing abilities include cameras creating 3D images to recognize objects and measure distances, microphones capturing sound to convert into data, or, in some robots, the ability to “smell” using specialized biological sensors. Most advanced robots use a multitude of sensors in order to make informed decisions about their surroundings, allowing them to produce a more useful output.

2. Thinking – once robots have collected useful data from their sensors, processing of the data begins. Robots process information using pre-programmed data and artificial intelligence (AI). Unlike humans, robots have to rely on stored data and algorithms to analyse situations, with some robots even using machine learning models to adapt to new scenarios over time. Currently, robots mostly run using predefined instructions, but research in these fields aims to enhance their decision-making with real-time adaptability.

3. Acting – with the instructions from the data ready to be executed, robots perform their actions using mechanical limbs, such as hands, arms, and legs. Some examples of robotic mobility are robots that walk on legs similar to humans or animals. Some robotic systems enable movement and manipulation of objects with precision, with multiple actions being carried out at once. A simple control loop tying these three steps together is sketched below.
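To make this concrete, here is a minimal, illustrative sense-think-act loop in Python (a sketch rather than code from any real robot; the sensor reading and motor command are hypothetical stand-ins for real hardware interfaces):

import random
import time

def sense() -> float:
    """Stand-in for a real sensor: return a distance reading in metres."""
    return random.uniform(0.0, 2.0)

def think(distance: float) -> str:
    """Pre-programmed rule: decide on an action from the sensed distance."""
    return "stop" if distance < 0.5 else "forward"

def act(command: str) -> None:
    """Stand-in for motor control: here we simply report the command."""
    print(f"motor command: {command}")

# The robot's control loop: sense, think, act, repeat.
for _ in range(5):
    act(think(sense()))
    time.sleep(0.1)

Real robots differ mainly in scale: the same loop runs continuously, with far richer sensors, models and actuators behind each of the three functions.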

With these three core capabilities forming the stem of all robots we use today, their manufacturing also plays an essential role in their reliability. Robots begin in the design phase, where engineers draft a blueprint using CAD (computer-aided design) software. These CAD models allow engineers to virtually simulate the real-world situations the robot would face without having to expend any materials. Prototypes of the robot are then created once the blueprints are satisfactory. These prototypes undergo rigorous testing to help identify flaws and ensure optimal functionality and safety before full-scale production. After all these tests are passed with satisfactory results, materials are selected, tailored to the robot’s function. This step is usually also followed by a series of tests to ensure that the material is suitable for the robot. Ultimately, the robot is then ready to be assembled and used in real-world situations. A primary example of this is the ReWalk robot, designed to help people with injured spinal cords to walk again. Having undergone many tests, the robot is made to be as safe as possible for the user, displaying the safety standards of the manufacturing industry.

Robots in Everyday Life: How are they helping us?

Left: An image of the first industrial robot built by George Devol and Joseph Engelberger

Right: An image of the ReWalk robot used to help people walk

Being in the middle of the robotic revolution, we encounter robots constantly, everywhere we go. Robots appear in all shapes and sizes, their functions more varied than their appearance. A prime example would be the Mars Curiosity rover. Built by NASA, the Curiosity rover was created to research the surface of Mars. One of its main achievements was discovering ancient riverbeds using a variety of robotic sensors. It had a rocker-bogie* suspension with six wheels and used mast cameras to capture high-resolution images and map 3D terrain. It also used ChemCam (chemistry and camera complex), a laser instrument for analysing the composition of Martian rock, which made the discovery possible. The Curiosity rover is one of many robots that demonstrate the aid robots provide, allowing us to develop our understanding of places humans are unable to reach. Another key example of robots aiding us is in the healthcare sector. The da Vinci surgical system is a robotic device used by surgeons to help in precision surgery. The system uses robotic arms controlled by a surgeon via a console. It is fitted with a magnified 3D vision display to aid in minimally invasive surgeries, which have higher accuracy and smaller incisions. These smaller incisions mean a decrease in recovery time, as less tissue needs to be repaired, displaying how robots can help humans reach higher levels of healthcare.

From a more general perspective, robots are used by almost everyone, reportedly more than 70 times in a single day, due to how crucial and integrated they are in today’s society. With more and more robots being developed and introduced into people’s day-to-day lives, this number is sure to increase rapidly. From supermarket inventory-checking robots to airport security robots, robotics has improved our lives for the better, allowing us to focus on more important tasks whilst they complete simpler, more time-consuming ones. This not only provides us with the opportunity to develop our understanding of the things around us, but even allows us to build better robots to one day help us with more complex tasks.

The Role of AI in Modern Robotics

Although we have seen the advancements made in robotics in the past, there is a limit to what can be achieved with a pre-programmed robot. This limit can be surpassed by AI. Artificial Intelligence (AI) is a branch of computer science that allows machines to mimic human intelligence by learning from data, recognizing patterns, and making decisions with minimal human intervention. Created through a combination of algorithms, AI development involves programming computers to recognize patterns and adapt to new information, allowing robots to produce outputs that suit their environment. This allows for many advancements in the field of robotics, as it means that a robot can provide a response tailored to the information it is given, rather than a generic answer. A commonly used example of AI is ChatGPT, an AI-powered conversational model developed by OpenAI which can understand and generate human-like text. It is built using a type of AI called NLP* (Natural Language Processing) and uses the GPT* (Generative Pre-trained Transformer) architecture. ChatGPT was trained by analysing vast amounts of text data from books, articles and online sources, learning grammar, facts, and conversational patterns in order to reproduce human-like text. The innovative responses provided by ChatGPT come from these trained models, which give a bespoke answer to each user’s question.

AI systems like ChatGPT are utilized in robotic mechanisms to provide more human responses, which in turn give the user a safer feeling when interacting with the robot. AI not only helps the individual robot in which it is being utilised, but also a greater community of robots. This is because much of the data a robot receives is saved to the cloud, allowing other AIs in development to test and train themselves on data from a robot that has already experienced a given scenario – essentially a teacher-student relationship.

Future Trends in Robotics: What’s Next?

As technology advances, robotics is set to improve our daily lives and revolutionize industries. We can already see the help it provides in our daily lives just by looking around us. We use so many new types of technology that it would eventually become a hassle to operate all of them repeatedly in a day. A possible solution to this problem could be to design more generalised cobots, using AI to help them work alongside us. Cobots are collaborative robots that work alongside humans to enhance productivity and efficiency. Each cobot would be equipped with sensors and advanced AI to give safe and useful outputs. With the advancement of AI, we would be able to put more trust into automated tasks such as self-driving taxis or automated identification. Cobots could help us reinvent many labour-intensive jobs to find far less strenuous solutions. A primary example would be developing cobots to work alongside firefighters or police officers in order to prevent human lives being put in danger.

Another way robotics could evolve as we leap into the future is by filling jobs many humans aren’t willing to do. Robots could be designed to undertake tasks such as helping in parts of healthcare where nurses are in short supply. They could range from simple patient-monitoring systems to full-fledged cobots working to provide for and care for patients’ needs. Whilst this strategy is already a work in progress, many more advancements are to come as new health risks are identified in the future.

As robotics continues to evolve, we can expect to see even more integration of robots into our daily lives. From AI-powered cobots that enhance our productivity to robots that take on essential challenges, the possibilities for innovation are endless. Whilst there are some ethical and technical challenges to overcome, the future of robotics promises a world where humans and machines collaborate seamlessly, making our lives safer, more efficient, and more relaxed. As technology advances, one thing is certain: robots will continue to shape the future, pushing the boundaries of what we once thought possible.

Game Theory to Machine Learning - SHapley Additive exPlanations

Fifi Siddiqui Y12

Are you blindly trusting the predictions your machine learning models provide? In a world increasingly driven by data and potential ‘hallucinations’, understanding how these models arrive at their conclusions is crucial. This article explores Shapley values - a concept rooted in game theory - and how they can shed light on the inner workings of machine learning models. By delving into SHAP (SHapley Additive exPlanations), we’ll uncover how individual features contribute to predictions, going on to show through code that Los Angeles has some of the most expensive house prices in California! Through this exploration, we aim to enhance our understanding of model transparency and explainability, ensuring that we don’t just accept predictions at face value.

What are Shapley Values?

Shapley values originate from game theory and aim to provide a fair solution to the following question: “If we have a coalition C (i.e. a group of cooperating members that work together) that collaborates to produce a coalition value V, how much did each individual member contribute to the final result?” To use a practical example, this may be used when trying to determine how much of the profit generated by a group of company employees each employee deserves, based on their individual contribution (you may be able to see the ML model application here). This seems relatively simple in theory, but is often made more complicated by interacting effects between members, where certain permutations can cause members to appear to contribute more than their total individual contributions. Therefore, we compute Shapley values for each member of the coalition to try to generate a fair answer to the question.

How are they calculated?

To compute the Shapley value for a member (call them member 1) of a coalition, we sample a coalition containing member 1 and then compare it to the coalition formed by removing member 1. By comparing the respective coalition values (V), we find the marginal contribution of member 1 to the coalition consisting of just the other members. We then enumerate all such pairs of coalitions that differ only on whether member 1 was included and look at the marginal contributions for each. The mean marginal contribution is the Shapley value of that member. We then compute the Shapley value for each member of the coalition, and we’ve found a fair solution to our original question. Mathematically, the process looks like this:
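In standard notation, for a coalition N with value function v, the Shapley value of member i is

\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|! \, (|N| - |S| - 1)!}{|N|!} \left( v(S \cup \{i\}) - v(S) \right)

i.e. the average, over all orders in which the coalition could be assembled, of the marginal contribution v(S ∪ {i}) - v(S) that member i makes on arrival.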

How does this relate to ML?

SHapley Additive exPlanations, or ‘SHAP’, essentially reframes the Shapley value problem from one where we look at how the members of a coalition contribute to a coalition value, to one where we look at how individual features contribute to an ML model’s outputs. With a clear understanding of Shapley values, we can now approach explainability in ML as a means to clarify the model’s processes from input to output, addressing the black box problem via increased transparency. However, you may wonder what the ‘Additive’ component in ‘SHapley Additive exPlanations’ is all about.

Additive?

Lundberg and Lee, in the paper ‘A Unified Approach to Interpreting Model Predictions’ [1] where they first introduced SHAP, define an additive feature attribution as follows: if we have a set of inputs x and a model f(x), we can define a set of simplified local inputs x’, which usually involves turning a feature vector into a discrete binary vector, where features are either included (1) or excluded (0), e.g. three features (A, B, C) → binary vector = [1,0,1], showing A and C to be included and B to be excluded.

We can also define an explanatory model g(x’), ensuring that:
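In the paper’s notation, with M simplified features, g is a linear function of the binary vector:

g(x') = \phi_0 + \sum_{i=1}^{M} \phi_i x'_i

where \phi_i is the attribution assigned to feature i and \phi_0 is the base value, and g(x') must match f(x) whenever x' corresponds to the original input x.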

Further Desirable Properties of the Additive Feature Method

There are three further desirable properties of such a method. Local Accuracy: restates one of our previous conditions for g(x’):
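f(x) = g(x') = \phi_0 + \sum_{i=1}^{M} \phi_i x'_i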

Missingness: states that if a feature is excluded from the model, then its attribution must be equal to zero, so that the only factor affecting the explanatory model’s output is the inclusion of features, i.e.
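x'_i = 0 \implies \phi_i = 0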

Consistency: states that if a feature’s contribution changes, the feature effect (the attribution in the explanatory model) cannot change in the opposite direction. Lundberg and Lee ‘propose SHAP values as a unified measure of feature importance’ and show that only SHAP satisfies all three properties: if the feature attributions in our additive explanatory model are chosen to be the Shapley values of those features, then all three properties are upheld.
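Formally, in the paper’s notation, for two models f and f': if f'_x(z') - f'_x(z' \setminus i) \ge f_x(z') - f_x(z' \setminus i) for all binary inputs z', then \phi_i(f', x) \ge \phi_i(f, x).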

Problems? Combinatorial Explosion

This is all well and good when we have a model that operates over 4 features, as we only have to sample 2^4 = 16 coalitions, but a problem occurs when we have a model that operates over 32 features, as we now have to sample 2^32 ≈ 4.3 billion coalitions. This dramatic increase in the number of coalitions illustrates the concept of combinatorial explosion, which ‘occurs when a huge number of possible combinations are created by increasing the number of entities (features in this case) which can be combined’, forcing us to consider a constrained set of possibilities when we consider related problems.

In this context, even one new feature significantly increases the number of possible combinations that must be evaluated, making it computationally infeasible to analyse all coalitions exhaustively once we reach double-digit feature counts. This complexity ultimately makes it significantly harder to derive meaningful insights from a model using Shapley values.

Solution - the Shapley Kernel

So, how do we fix this? Enter the Shapley Kernel: essentially a means of approximating Shapley values using far fewer samples and a weighted linear regression, to mitigate this complexity issue. It works by passing various permutations of the data point we are trying to explain through the model. Since it isn’t feasible to simply remove a feature from an ML model, we define a background dataset (B) that contains a set of representative data points the model was trained over. We then fill in our omitted features with values from the background dataset, whilst holding the features that are included in the permutation fixed to their original values. We then take the average of the model output over all of these new synthetic data points as our model output for that particular feature permutation, called ȳ.

Once we have a number of samples computed in this way, we can formulate this as a weighted linear regression with each feature assigned a coefficient. The weighting is done like this:
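In the paper’s notation, a coalition sample z' with |z'| of the M features present is given the Shapley kernel weight

\pi_{x'}(z') = \frac{M - 1}{\binom{M}{|z'|} \, |z'| \, (M - |z'|)}

which places the most weight on the smallest and largest coalitions, where a single feature’s effect is easiest to isolate.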

The returned coefficients are equivalent to the approximated Shapley values, and we have drastically reduced the computational burden, enabling us to obtain reliable approximations of Shapley values even in high-dimensional spaces!

The Code

So far, everything has been very theoretical and hard to visualise in practice, so here is a Python implementation testing the contributions of different attributes of houses (e.g. median income, house age, number of rooms) in California neighbourhoods to their house prices, using an XGBoost regressor model for price prediction:

Implementing SHAP
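The article’s original listing is not reproduced here, but the following minimal sketch captures the workflow it describes, assuming the shap and xgboost packages and scikit-learn’s California housing dataset:

import shap
import xgboost
from sklearn.datasets import fetch_california_housing

# Load the California housing data (8 features, including MedInc, HouseAge,
# Latitude and Longitude) and train an XGBoost regressor on house prices.
data = fetch_california_housing(as_frame=True)
X, y = data.data, data.target
model = xgboost.XGBRegressor(n_estimators=200, max_depth=4).fit(X, y)

# Build a SHAP explainer (a tree explainer is selected automatically for
# tree-based models) and explain the prediction for a single data point.
explainer = shap.Explainer(model, X)
explanation = explainer(X.iloc[[0]])

# Local accuracy check: base value + sum of SHAP values = model output.
print("Original Model Prediction:", model.predict(X.iloc[[0]])[0])
print("Explanatory Model Prediction:",
      explanation.base_values[0] + explanation.values[0].sum())

# Per-feature SHAP values for this one prediction.
for feature, phi in zip(X.columns, explanation.values[0]):
    print(f"{feature}: {phi:.6f}")

(The exact SHAP values quoted below depend on the data point and model settings chosen.)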

Results and Analysis - What am I looking at?

Firstly, notice that the ‘Original Model Prediction’ and ‘Explanatory Model Prediction’ are exactly the same - showing that we have successfully achieved local accuracy - so both models make sense!

Now comparing the SHAP values for each feature, we discover that the highest positive contributor to house prices is, surprisingly, Longitude (SHAP value of 0.751849). Taking a look at a map of California, we see that Los Angeles, known for its high house prices, has a high longitude value - cool, right? This could potentially suggest that, in California, the higher the longitude, the more expensive the home. On the flip side, an interestingly small negative contributor is the age of the house (SHAP value of -0.007581), possibly implying that in California newer homes are more desirable.

These SHAP values clearly show us how the different house attributes have influenced the XGBoost regressor’s predicted output, leading to some pretty interesting results! All in all, while it does take some time to set everything up and understand the process, the knowledge we gain is incredibly valuable, and I hope that next time you’re playing around with an ML model, you remember to leverage SHAP values to better understand your predictions.

References

[1] Lundberg, S. and Lee, S.-I., 2017. ‘A unified approach to interpreting model predictions’. arXiv preprint arXiv:1705.07874.

[2] KIE, 2021. ‘Shapley Additive Explanations (SHAP)’. Available at: [https://youtu.be/VB9uVx0gtg?si=TxUH4W63RXL5gKgI] (Accessed: December 2024)

[3] Krippendorff, K., (n.d.). ‘Combinatorial Explosion’. Web Dictionary of Cybernetics and Systems. Principia Cybernetica Web. Available at: [https://web.archive.org/web/20100806122506/http://pespmc1.vub.ac.be/ASC/COMBIN_EXPLO.html] (Accessed: December 2024)

Benford's Law: The Strange Predictability of Numbers

Shaurya Mehta Y12

The leading digit of a number is defined as its first non-zero digit. For example, the leading digit of 123456 is 1, and for 0.086 it’s 8. Let’s say you have the free time to find the population of every country in the world, and you make a list of the leading digits of all these populations. Then you ask yourself: if I pick a number randomly from this list, what’s the probability of it being a 1? Well, there are 9 possible digits, so 1/9 ≈ 11%, right? Not exactly.

Benford's Law, a surprising pattern in numbers, states that in most naturally occurring datasets, the first digit is much more likely to be 1 than any other number. In fact, if you repeated the process of picking a number from the list, the probability of picking a 1 would come out to about 30%, with the probability of picking a given digit decreasing as the digit increases. This counterintuitive result has powerful applications in fraud detection, data science, and even forensic accounting.

The History of Benford’s Law

Here's a graph comparing the distribution of leading digits from the country population data with the probabilities expected by Benford’s Law. Although they’re not quite the same, the validity becomes apparent when you see the difference between the probabilities of picking 1s compared to 9s.

The origins of Benford’s Law date back to 1881, when American astronomer Simon Newcomb happened to notice that the pages of logarithmic tables were more worn at the start than at the end, meaning that numbers with smaller leading digits were being used more often. He published his findings in a paper pointing out this disparity, but it went largely unnoticed. It wasn’t picked up on until 57 years later, when, in 1938, Frank Benford, a physicist at General Electric, independently rediscovered the pattern. He tested it on a variety of datasets - population numbers, physical constants, addresses, and more - confirming that the law held across diverse sources. Since Benford had the proof to back his claims, the phenomenon became associated with his name.

The Math Behind It

Benford’s Law states that the probability of a number having leading digit d (where d ranges from 1 to 9) is described by the equation below:
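P(d) = \log_{10}\left(1 + \frac{1}{d}\right)

For d = 1 this gives log10(2) ≈ 0.301, while for d = 9 it gives log10(10/9) ≈ 0.046.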

Crazily, numbers that start with 1 are more than six times more likely to occur than those that start with 9!

Something even cooler about Benford’s Law is the fact that it holds in any base. The same formula applies, where d is the leading digit in the given base b:
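P(d) = \log_{b}\left(1 + \frac{1}{d}\right), \quad d \in \{1, \ldots, b - 1\}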

For example, if we consider the law in base 2, we can see that the only possible leading digit is 1, which we can verify by computing log₂(1 + 1/1) = log₂ 2, which equals 1, demonstrating that 1 being the leading digit is the only possible outcome. Doing the same for a larger base, base 16 (or hexadecimal), we get log₁₆(1 + 1/1) = log₁₆ 2 = 0.25, or a 1 in 4 chance of a leading digit of 1 when picking from a set.

Why Does Benford's Law Hold?

After reading about Benford’s Law, I found that although it seemed reasonable, I still couldn’t get why it works. Looking online, I saw that there were different reasons given for why it occurs basically everywhere. Benford’s Law holds regardless of the units that quantities are measured in, something known as ‘scale invariance’, which is why it appears in so many contexts. Furthermore, if you have a dataset that follows the law, then multiplying by scalars will conserve the distribution of leading digits. Finally, most data arises from multiplicative growth, so these numbers span several orders of magnitude, leading to a logarithmic distribution of digits. However, none of these reasons really gets into the why of things.

This explanation really helped me visualise things. Let’s say you start with the number 1 and count upwards: the next instance where the leading digit is a 1 occurs at 10, so 9 numbers later. However, if we start with 2, the next number with 2 as a leading digit is 20, which occurs 18 later. Finally, if we go up to 9, we see that 90 occurs 81 numbers later. Basically, the gap until the next instance of the same leading digit grows the higher you go, making it less likely that a randomly encountered number has that digit as its leading digit. We can see this for larger numbers as well - a gap of 8,001 between 1999 and 10000 compared to a gap of 80,001 between 9999 and 90000.
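As a quick empirical check (an illustration, not from a published dataset), the leading digits of the first thousand powers of 2 - a classic multiplicative-growth sequence - track Benford’s distribution closely, which a few lines of Python confirm:

from collections import Counter
from math import log10

# Count the leading digits of 2^1 ... 2^1000.
counts = Counter(str(2 ** n)[0] for n in range(1, 1001))

for d in range(1, 10):
    observed = counts[str(d)] / 1000
    expected = log10(1 + 1 / d)
    print(f"digit {d}: observed {observed:.3f}, Benford predicts {expected:.3f}")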

Applications in the real world

Since humans anticipate that digits will appear with equal frequency, fraudulent financial reports will typically deviate from Benford’s Law. Auditors leverage this principle for identifying tax evasion, election fraud, and fabricated corporate reports. When a company’s reported revenues don’t fit Benford’s expected pattern, it is a red flag for further investigation.

The most famous, or infamous, example of this happened in 2001 with a company called Enron. They had been claiming massive profit margins and had forecasted huge returns, and so, with the support of investors, expanded largely on promises. At the peak of their success, in February 2001, Enron’s share price was sitting at about $80 a share. As a result of their success, investigations were launched to validate or invalidate their legitimacy. Benford’s Law was applied to various financial records, including revenue figures, expenses and accounts receivable, and the data showed too few 1s and far too many 9s. This wasn’t completely conclusive, but it gave the government confidence to launch more detailed investigations, where they ultimately found convincing evidence of data falsification. Enron filed for bankruptcy that December, and corporations have been wary of the power of Benford’s Law ever since.

Conclusion

Benford’s Law is a lovely illustration of how maths reveals underlying patterns in common data. From forensic analysis of financial data to the study of natural phenomena, the surprising prevalence of smaller leading digits gives us a powerful instrument for detecting anomalies and making sense of numerical trends in the world around us.

To Fight or to Fly? To Freeze or to Fawn? An Evolutionary Viewpoint

Sophie Li Y12

Ever wished for superhuman strength, like that of Batman?

In a sense, people are already capable of going beyond their physical boundaries because of the hormone adrenaline (or epinephrine), which gives the body access to a store of strength and energy that even the fittest athletes cannot match (14). Released during the fight-or-flight response, adrenaline initiates a highly orchestrated, near-instantaneous series of chemical changes that flood the body with intense sensations: heart rate increases, pupils dilate, muscles tense, and sweating intensifies (1)(13). The body may become paralysed with fear, speech may be impaired, and a sense of dread may overwhelm the individual. This automatic response, first described by Walter Bradford Cannon in 1915 as "the necessities of fighting or flight" triggered by a general discharge from the sympathetic nervous system, is specifically designed to enable rapid reactions to danger. This response is emblematic of the body preparing either to fight, flee, or freeze in order to assess the situation before taking action; it is an ancient, instinctual mechanism, refined over millennia of evolution, embedded in our genetic code to optimise survival in environments once fraught with constant danger.

The Origins of Physical Survival Instincts: Adapting to a World of Predators

Supported by various experiments and a myriad of research approaches, Darwin’s Theory of Evolution, derived from studying the variations among species on the Galápagos Islands off the coast of Ecuador, remains a time-honoured concept within the biological sphere of genetics and inheritance. Thus, with the central foundation of natural selection and “Survival of the Fittest” in mind, it can be hypothesised that an evolutionary drive facilitated the development of the phenotype of the acute stress response (Darwin, 1859). Commonly referred to as the fight-flight-freeze-or-fawn response (FFFF), it is a relic, most likely emblematic of past ancestors’ adaptations to survive in a primal environment where humans were not yet the apex predator. Early Homo sapiens, often outmatched in strength and speed by predators like bears, tigers, and lions, naturally evolved the ability to respond quickly and efficiently, allowing the body to prepare itself for FFFF (10). To understand how it evolved, it is useful to explore it at a molecular level.

The entire process begins in the brain, specifically in the amygdala and the hypothalamus. The amygdala is responsible for processing emotions such as fear, distress, and anger, while the hypothalamus regulates vital functions like heart rate, body temperature, and breathing. Together, these two brain regions form a coordinated system: the amygdala detects and identifies a threat, and in response, the hypothalamus signals the sympathetic nervous system to switch on, triggering an almost instantaneous physiological response in which breathing quickens (18). This drives an increase in heart rate, enhancing blood circulation to ensure that adrenaline is quickly delivered throughout the body, allowing for a rapid and efficient response (17).

Figure 1. The position of the amygdala and hypothalamus in comparison to the rest of the brain. Own adaptation.

Approximately 500 ng of adrenaline is released by the adrenal glands, as communicated by the splanchnic nerves, into the blood. Once in the blood, the adrenaline binds to two types of receptors: the alpha-adrenergic receptors (AAR) and the beta-adrenergic receptors (BAR), which are present within nearly all types of cells in the body, allowing different organs made up of these cells to be controlled.

Figure 2. A diagram showing the location of the adrenal glands and where they secrete adrenaline to. Own adaptation.

Initially, organisms developed adrenergic receptors that responded broadly to adrenaline, helping prioritise blood flow to vital organs by constricting vessels in less critical areas (e.g., the digestive system), as per the role of AAR. As mammals evolved with higher metabolic rates and increased activity, more specialised adrenergic systems emerged, including beta-adrenergic receptors in the heart, lungs, and muscles. Beta-1 receptors (B1R) play a crucial role in this (4). When adrenaline binds to B1R in the sinoatrial node, it accelerates the rate of depolarisation, leading to faster electrical impulses and an increase in heart rate (chronotropy). In cardiac muscle cells, B1R enhances calcium availability, strengthening the force of heart contractions (inotropy) (4). Additionally, B1R in the atrioventricular node speeds up electrical conduction (dromotropy) (4), ensuring rapid and coordinated atrial and ventricular activity. Insulin release is also halted to ensure high blood glucose levels (further increased by the hormone cortisol (14)) are maintained and not stored as glycogen in the liver. These adaptations maximise oxygen delivery to cells in organs supporting FFFF, allowing more respiration to take place there and thus more ATP to be made available in these areas of the body. Pupil dilation also occurs as a response to stress, triggered by the sympathetic nervous system. This mechanism enhances vision by allowing more light to enter the eyes, helping individuals assess their surroundings and better identify potential threats. Additionally, sweating is triggered, cooling the body in preparation for physical labour, further optimising the body’s ability to react efficiently in a high-stress situation.

From Instinct to Strategy: Evolving More Complex Responses

Over time, the FFFF system has evolved even further, with new responses emerging, such as the freeze response (10). The freeze response likely evolved after the fight-or-flight response and is more pronounced in humans and higher animals due to the critical role of cognitive processing in assessing and responding to threats. On a basic level, when an animal faces a predator it cannot defeat, freezing can help it avoid detection or blend into the environment, a tactic often referred to as "playing dead" (11). Over time, especially in more complex vertebrates like mammals, the freeze response became more sophisticated. It involves the inhibition of movement and heightened alertness, enabling the individual to pause and evaluate the situation (11). This momentary delay is particularly advantageous when fight or flight are not viable options, as it allows the organism to assess whether the threat is real, how immediate it is, and whether a non-confrontational approach, such as hiding or fleeing, might be more effective (16).

In humans, the freeze response is supported by advanced brain structures, particularly the prefrontal cortex (PFC) (16), which facilitates complex decision-making, social reasoning, and the ability to consider long-term consequences. The PFC has developed remarkably in humans compared to other species and, as a result of its associations with various superior functions, has been referred to as the “organ of civilisation” (2). A small part of the PFC is located at the base of the frontal lobes, just above the eye sockets, and is known as the orbitofrontal cortex (OFC) (3). Primarily, it is involved in emotional regulation and the modulation of reactive aggression (3). Although decision-making is not mediated by the OFC alone, it still acts as a critical structure in the neural system subserving decision-making, guided by reward probability. This reward-guided learning is a process through which organisms gather information about stimuli, actions, and situations that predict positive outcomes, and adjust their behaviour when encountering a new reward or when outcomes exceed expectations (3). Damage to the OFC is often seen in research to lead to difficulty dealing with stressful situations, and thus the concept of freeze-or-fawn may not be as prominent in such individuals (3).

As humans evolved, so did their emotional intelligence and social cognition, leading to the development of the fawn response, where the instinct to appease a threat became a survival strategy. Rather than escalating a confrontation, early humans may have learnt to defuse tension by adopting submissive or pacifying behaviours, allowing them to avoid direct combat. Biologically, this response involves a shift in autonomic nervous system activity, particularly in the parasympathetic branch (the part of the nervous system that controls the body when at rest), which promotes relaxation and de-escalation.

Figure 3. A size comparison between the brains of four different species; the area highlighted in red is the prefrontal cortex. Adapted from “Skills for a Social Life” by P. Churchland, 2011, What Neuroscience Tells Us about Morality. Taken from Orbitofrontal cortex and aggressive behavior in children ages 11 to 13, Journal of Basic and Applied Psychology Research, 2020: p.8.

Figure 4. The area of the PFC that is the OFC, outlined in blue. Taken from Orbitofrontal cortex and aggressive behavior in children ages 11 to 13, Journal of Basic and Applied Psychology Research, 2020: p.8.

According to Polyvagal Theory, which emphasises the role of the vagus nerve and its two branches, the autonomic nervous system (ANS) regulates physiological and behavioural responses to stress, safety, and social interactions (Porges, 1995). The vagus nerve has two primary branches: the ventral vagal complex and the dorsal vagal complex (15). The ventral vagal complex, which is associated with social engagement and calming functions, helps regulate heart rate, breathing, and facial expressions, facilitating a state of relaxation and promoting feelings of safety. This system is vital for engaging in social interactions and forming bonds, as it enables individuals to remain calm and connected in safe environments, thereby reducing the likelihood of resorting to a fight response.

On the other hand, the dorsal vagal complex, an older and more primitive system, is linked to the body’s freeze or shutdown response during extreme stress or threat. This branch slows heart rate and induces immobilisation, which can help the individual avoid detection or harm in life-threatening situations.

Together, these two vagal branches represent an adaptive mechanism that allows humans to modulate their responses to threats, either by engaging socially (ventral vagal) or by protecting themselves through immobilisation (dorsal vagal) (7). Over time, the ventral vagal complex played a significant role in enhancing social bonds, cooperation, and emotional regulation, crucial for survival in complex, socially driven environments - this is the fawn response.

The Double-Edged Sword of Memory and Fear

Despite this, the development of FFFF may not necessarily always be a positive contribution to the body within modern society.

One effect of adrenaline is enhanced long-term memory via the hippocampus. In theory, this was most likely driven by the need to recognise threats and prey. When a stimulus occurs that is similar to a past experience of a threat, patterns are identified, and this helps to switch our body automatically into FFFF mode (7). Back in ancient civilisation, this would have been advantageous, acting as a defence mechanism. But now, within modern society, where our environments are generally safer, this response can cause maladaptive effects such as Post-traumatic Stress Disorder (PTSD) and Attention Deficit Hyperactivity Disorder (ADHD) (13). Subconsciously, the human body is put into survival mode when it may not necessarily be needed - active avoidance, hypervigilance, freezing - and this leads to long-term side effects.

In PTSD, the hypothalamic-pituitary-adrenal (HPA) axis, a complex set of interactions between three key components of the endocrine system (the hypothalamus, the pituitary gland, and the adrenal glands), is dysregulated, and the adrenal glands secrete cortisol. This stress hormone focuses energy on dealing with the stressor, increasing blood sugar and brain function (7). Unlike healthy patients, those with the disorder experience persistently low or high levels of cortisol within the blood, reducing the body to a constant state of nervousness (5). With this, PTSD is found to contribute to chronic stress and a diminished quality of life in many individuals (12). A nationwide study found that PTSD was a risk factor for suicide: in a cohort of 3,194,141 individuals, 22,361 (0.7%) were diagnosed with PTSD, and of those, 192 (0.9%) died by suicide. Individuals diagnosed with PTSD are twice as likely to die by suicide as those without PTSD (12).

Conclusion

In its essence, the FFFF response is an evolutionary artefact of our ancestors’ constant struggle for survival in a world where threats were often physical and immediate. Over time, as species evolved, particularly humans and higher animals, these responses became more specialised and sophisticated, incorporating cognitive processing to assess the situation and determine the best course of action. However, recontextualised by today’s society, this once vital mechanism now occasionally misfires in response to the stressors of contemporary life - be it the pressures of work, social anxieties, or financial concerns. The modern riddle remains for time to solve: as psychological disorders emerge, will this ancient survival mechanism continue to prevail within society, evolving to address contemporary mental health challenges? Or will it undergo a mutation, gradually fading from human inheritance over generations?

References

(1) Acute stress response: Fight, Flight, Freeze, and Fawn (no date) WebMD. Available at: https://www.webmd.com/mental-health/what-does-fight-flight-freeze-fawn-mean (Accessed: 24 December 2024).

(2) Anatomy and connectivity of prefrontal cortex (PFC) in the... (no date) ResearchGate. Available at: https://www.researchgate.net/figure/Anatomy-and-connectivity-of-prefrontal-cortex-PFC-inthe-human-and-monkey-brain-A_fig1_283974050 (Accessed: 24 December 2024).

(3) Amygdala and orbitofrontal reactivity to social threat in individuals with impulsive aggression | Request PDF (no date) ResearchGate. Available at: https://www.researchgate.net/publication/6591452_Amygdala_and_Orbitofrontal_Reactivity_to_Social_Threat_in_Individuals_with_Impulsive_Aggression (Accessed: 24 December 2024).

(4) Alhayek, S. (2023) Beta 1 receptors, StatPearls [Internet]. Available at: https://www.ncbi.nlm.nih.gov/books/NBK532904/ (Accessed: 24 December 2024).

(5) Asalgoo, S. et al. (2016) Posttraumatic stress disorder (PTSD): Mechanisms and possible treatments - Neurophysiology, SpringerLink. Available at: https://link.springer.com/article/10.1007/s11062-016-9559-9 (Accessed: 24 December 2024).

(6) McCarty, R. (2016) The fight-or-flight response: A cornerstone of stress research, in Stress: Concepts, Cognition, Emotion, and Behavior. Available at: https://www.sciencedirect.com/science/article/abs/pii/B9780128009512000042 (Accessed: 24 December 2024).

(7) Bethan_admin (2023) Exploring the nervous system: Part II, Khiron Clinics. Available at: https://khironclinics.com/blog/exploring-the-nervous-system-partii/#:~:text=In%20dangerous%20situations%2C%20the%20vagus,regulate%20their%20responses%20to%20danger. (Accessed: 24 December 2024).

(8) Cannon, W. (1932) Wisdom of the Body. United States: W.W. Norton & Company. ISBN 978-0-393-00205-8.

(9) Cleveland Clinic (2024) What happens during fight-or-flight response?, Cleveland Clinic. Available at: https://health.clevelandclinic.org/what-happens-to-your-body-during-the-fight-orflight-response (Accessed: 24 December 2024).

(10) Fight-or-flight response (2024a) Wikipedia. Available at: https://en.wikipedia.org/wiki/Fight-or-flight_response (Accessed: 24 December 2024).

(11) Fight-or-flight response (2024b) Wikipedia. Available at: https://en.wikipedia.org/wiki/Fight-or-flight_response (Accessed: 24 December 2024).

(12) Fox, V. et al. (2021) Suicide risk in people with post-traumatic stress disorder: A cohort study of 3.1 million people in Sweden, Journal of Affective Disorders. Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC7758737/#sec0012 (Accessed: 24 December 2024).

(13) Cherry, K. (2024) The fight-or-flight response prepares your body to take action, Verywell Mind. Available at: https://www.verywellmind.com/what-is-the-fight-orflight-response-2795194 (Accessed: 24 December 2024).

(14) Klein, S. (2013) The 3 major stress hormones, explained, HuffPost UK. Available at: https://www.huffingtonpost.co.uk/entry/adrenaline-cortisol-stress-hormones_n_3112800 (Accessed: 24 December 2024).

(15) Porges, S.W. (2022) PVT background + criticism, Polyvagal Institute. Available at: https://www.polyvagalinstitute.org/background (Accessed: 24 December 2024).

(16) Roelofs, K. (2017) Freeze for action: Neurobiological mechanisms in animal and human freezing, Philosophical Transactions of the Royal Society B: Biological Sciences. Available at: https://pmc.ncbi.nlm.nih.gov/articles/PMC5332864/#:~:text=Freezing%20is%20not%20a%20passive,rigidity%20in%20behavioural%20stress%20reactions. (Accessed: 24 December 2024).

(17) Understanding the stress response (2024) Harvard Health. Available at: https://www.health.harvard.edu/staying-healthy/understanding-the-stress-response (Accessed: 24 December 2024).

(18) User, G. (2024) The effects of epinephrine and norepinephrine in the fight or flight response, CSFJ. Available at: https://csfjournal.com/volume-6-issue-3-1/2023/11/22/theeffects-of-epinephrine-and-norepinephrine-in-the-fight-or-flight-response (Accessed: 24 December 2024).

Hypoplastic Left Heart Syndrome

Hermione Kerr Y12

Deriving from the Greek ‘hupo’, meaning under, hypoplastic left heart syndrome (HLHS) refers to an underdeveloped left side of the heart, which is fatal if left untreated. The syndrome is congenital, meaning that it is present from birth. It occurs in fewer than 2 per 10,000 live births (Siffel et al., 2015). Severe symptoms become apparent within a few days of birth.

Babies that are diagnosed with this condition often present with a small left ventricle, a blocked or malformed mitral valve and a narrow aorta (Great Ormond Street Hospital). This causes the blood to have a vastly different journey around the circulatory system to most people’s. Deoxygenated (blue) blood flows through the right atrium, right ventricle and pulmonary arteries to the lungs as it is meant to, but the story is quite different for the left side of the heart. Oxygenated (red) blood fills the left atrium but is unable to pass to the left ventricle due to the dysfunctional mitral valve.

Instead, the blood travels through the foramen ovale. This is a remnant of when the baby was receiving oxygen from the mother through the umbilical cord. As it was not breathing for itself, the blood had no need to pass through the lungs, so bypasses (holes) developed to make the circulation more efficient. These bypasses are present in all foetal hearts, with or without HLHS (Little Hearts Matter, 2022).

The foramen ovale connects the two atria, so the red blood moves through the septum to the right atrium and mixes with the blue blood on its way to the lungs. The only way for the oxygenated blood to reach the body is through another of the bypasses - the ductus arteriosus, between the pulmonary artery and the aorta (Little Hearts Matter, 2022).

This results in a very inefficient system, as the right ventricle has to work twice as hard to pump blood to both the lungs and the body; plus, there is less oxygen travelling through the aorta, as it carries a mixture of red and blue blood. But the system holds for a few days or weeks, until the holes begin to close over; that is when the symptoms begin, as the baby rapidly becomes oxygen deficient.

The symptoms include blue/grey skin, lips and fingernails; rapid, difficult breathing; poor feeding; cold hands and feet; weak pulse; and drowsiness compared to other babies. Without treatment, they can go into shock, which presents as exacerbated versions of the above symptoms plus cool, clammy skin and dull eyes (Mayo Clinic, 2018). If these symptoms appear, it is imperative that the child is taken to hospital immediately, where tests will be done to identify the issue.

Often, congenital heart diseases such as hypoplastic left heart syndrome are picked up on ultrasound scans of the foetal heart while the baby is still in the womb. However, if it was not spotted then, what would doctors do to confirm a diagnosis?

First, they would listen to the baby’s heart using a stethoscope to check for a murmur. Murmurs signal the presence of turbulent blood, which means that blood is not following the correct path around the circulatory system and is mixing or swirling around in places. A heart murmur can be heard as a whooshing sound between heartbeats.

This would then be checked with an echocardiogram (Mayo Clinic, 2018), which, similar to the ultrasound, uses sound waves to make pictures of the heart. It uses the Doppler effect to colour-code the blood depending on its direction in relation to the probe. This allows doctors to identify where the turbulent blood is, as the colours would be mixing. The echo can confirm a diagnosis of hypoplastic left heart syndrome, as the left ventricle, aorta and mitral valve would be underdeveloped.

So once HLHS is confirmed, what is the path of treatment? Ultimately, all of the options are palliative, meaning that they don’t fix the problem but instead mitigate the symptoms.

Treatment can start with medications (e.g. alprostadil) to keep the bypasses open for longer, and the baby may be connected to a ventilator and tubes for fluids and food (Mayo Clinic, 2018). But for the best chance of survival beyond infancy, surgery is necessary. If the baby’s foramen ovale has already closed or is too small, they will first have an atrial septostomy (Mayo Clinic, 2018), which uses catheters and a balloon to create a larger hole between the atria. (Therefore, not all patients require this surgery.)

The first surgical stage for all HLHS patients is the Norwood procedure, which is done at around 2 weeks old. Surgeons rebuild the aorta and connect it to the right ventricle and pulmonary arteries so it can receive the (partially) oxygenated blood and distribute it to the body. A shunt is also added to provide the lungs with blood. The surgery can be modified or hybridised, where a stent is placed in the ductus arteriosus and bands are put on the pulmonary arteries to reduce flow to the lungs. (Mayo Clinic, 2018) (Little Hearts Matter, 2022) (Children’s Heart Federation, 2022)

The second surgical stage comes at 4-6 months and is called the hemi-Fontan procedure. This removes the shunt to the lungs from the previous surgery and connects the superior vena cava to the pulmonary artery, meaning that the lungs receive blood from the vein instead of the shunt. This lessens the work of the right ventricle, as it is pumping mainly just to the aorta, while deoxygenated blood from the head and arms goes straight to the lungs. (Mayo Clinic, 2018) (Little Hearts Matter, 2022) (Children’s Heart Federation, 2022)

At 3-4 years old the third, and hopefully final, surgical stage is performed - the Fontan procedure. This connects the inferior vena cava directly to the pulmonary arteries, meaning that the rest of the deoxygenated blood goes directly to the lungs rather than through the right ventricle. This can be done externally, to bypass the heart completely, or internally. The result is a much more efficient system, as there is hardly any mixing of red and blue blood anymore, so symptoms like blue/grey skin usually disappear after this surgery. (Mayo Clinic, 2018) (Little Hearts Matter, 2022) (Children’s Heart Federation, 2022)

In some cases, a heart transplant may still be necessary even after the Fontan procedure, with one study finding that the rate of Fontan failure is nearly 30% at 20 years, with an underlying diagnosis of HLHS as the primary risk factor (Konstantinov, Schulz and Buratto, 2022). This option isn’t offered straight away, as there are very few hearts small enough, meaning that the babies would die before one became available. Furthermore, having a transplant as a back-up option after surgery has better outcomes (Little Hearts Matter, 2022). For these reasons, a transplant is usually only done on patients 8 years old and over, but it depends on how the surgeries went.

What does life with hypoplastic left heart syndrome look like? Well, even though treatments and surgeries have seen major developments in recent years, many children die before they can answer that question. The following information shows the outcomes for the 742 babies who underwent surgery for HLHS in England and Wales between 2006 and 2017, with information about their survival included up to 2020. One of the children had a heart transplant in this timeline. (Little Hearts Matter, 2022)

This fits with previous studies showing that, with surgical intervention, there was a 52% chance of survival (Siffel et al., 2015), although if the child lives past one year, this goes up to 90%. However, considering a child that does live beyond infancy, what will they experience?

One side effect can be stunted development. A lack of oxygenated blood to the body can hinder growth, resulting in children hitting developmental milestones much later than normal. A lack of oxygen to the brain can cause learning difficulties and neurological problems, as the brain cannot mature properly; educational challenges are present in a third of children with a one-pump heart.

Throughout their life, people with hypoplastic left heart syndrome may experience low energy levels and other symptoms of heart problems. This is because the Fontan procedure, although it increases efficiency massively, does not solve the inherent issue with the system. These side effects can be reduced by regular exercise to increase stamina. However, there is a whole range of side effects, like infections or blood loss, that come with having high-risk major open-heart surgery at an incredibly early age.

A fact I found curious is that cardiologists advise patients with Fontan systems against ever getting tattoos or piercings. This is because of the risk of infection, which, if it did happen, could have catastrophic effects on the surgical areas of their heart.

Where is research into HLHS going right now? Currently, scientists are looking into the genetic causes of the disease, as not much is known about how hypoplastic left heart syndrome arises. All that is confirmed is that the children or siblings of sufferers of congenital heart diseases have an increased risk. Researchers are sequencing the genomes of people with HLHS and their parents and then running experiments to try to identify the genes involved (News-Medical, 2023).

Our understanding of hypoplastic left heart syndrome has come a long way in the past 50 years: the creation of the Fontan procedure in 1971 (Talwar et al., 2020) has greatly improved the life expectancy of people with the syndrome. Survival probability went from 0% in 1979–1984 to 42% in 1999–2005 (Siffel et al., 2015). I am excited for the future of research into this condition, which will help many more children survive into adulthood.

References

Mayo Clinic (2018). Hypoplastic left heart syndrome - Diagnosis and treatment. [online] (accessed 29.01.25) https://www.mayoclinic.org/diseases-conditions/hypoplastic-left-heart-syndrome/diagnosis-treatment/drc-20350605

Little Hearts Matter (2022). Hypoplastic Left Heart Syndrome. [online] (accessed 29.01.25) https://www.lhm.org.uk/hypoplastic-left-heart-syndrome/

Great Ormond Street Hospital. Hypoplastic left heart syndrome. [online] (accessed 29.01.25) https://www.gosh.nhs.uk/conditions-and-treatments/conditions-we-treat/hypoplastic-left-heart-syndrome/

Children’s Heart Federation (2022). Hypoplastic Left Heart Syndrome (HLHS). [online] (accessed 29.01.25) https://chfed.org.uk/how-we-help/information-service/heart-conditions/hypoplastic-left-heart-syndrome-hlhs/

Konstantinov, I.E., Schulz, A. and Buratto, E. (2022). Heart transplantation after Fontan operation. JTCVS Techniques. [online] (accessed 30.01.25) https://pmc.ncbi.nlm.nih.gov/articles/PMC9195631/

Siffel, C., Riehle-Colarusso, T., Oster, M.E. and Correa, A. (2015). Survival of Children With Hypoplastic Left Heart Syndrome. PEDIATRICS, 136(4), pp. e864–e870. [online] (accessed 30.01.25) https://pmc.ncbi.nlm.nih.gov/articles/PMC4663985/

News-Medical (2023). Researchers find new genes that contribute to hypoplastic left heart syndrome. [online] (accessed 30.01.25) https://www.news-medical.net/news/20230717/Researchers-find-new-genes-that-contribute-to-hypoplastic-left-heart-syndrome.aspx

Talwar, S., Marathe, S.P., Choudhary, S.K. and Airan, B. (2020). Where are we after 50 years of the Fontan operation? Indian Journal of Thoracic and Cardiovascular Surgery, 37(S1), pp. 42–53. [online] (accessed 02.02.25) https://pmc.ncbi.nlm.nih.gov/articles/PMC7858722/

The Spanish Siesta

Freya Keable Y13

Cuando pensamos en la cultura española, a menudo nos vienen a la mente las siestas. Esta pausa a media tarde, arraigada en siglos de tradición, representa no solo una pausa en el día, sino un reflejo más amplio de los valores españoles en torno al descanso, la comunidad y el equilibrio. Estoy seguro de que a todos nos gustaría participar de este aspecto de la cultura española, pero ¿cómo surgió esta práctica?

La palabra «siesta» procede de la expresión latina «hora sexta», que se refiere a la sexta hora de luz del día, alrededor del mediodía. Antiguamente, los trabajadores de las zonas rurales descansaban a mediodía para escapar del agobiante calor del sol ibérico. Este hábito práctico pronto se convirtió en una norma cultural, profundamente arraigada en la sociedad española.

En la era preindustrial, la siesta servía para descansar del trabajo y para que las familias se reencontraran y compartieran las comidas. Incluso en la transición de España a una sociedad urbanizada, la tradición perduró, adaptándose a los estilos de vida modernos de diversas maneras.

Actualmente, la siesta se practica menos, pero sigue siendo un concepto apreciado, sobre todo en las ciudades pequeñas y las zonas rurales.

En los centros urbanos, las presiones de los horarios de trabajo modernos han reducido la prevalencia de las pausas prolongadas a mediodía. Sin embargo, muchas empresas siguen manteniendo un largo periodo para comer, y algunos comercios cierran por la tarde antes de volver a abrir por la noche.

Los estudios científicos también han dado credibilidad a los beneficios de la siesta: se ha demostrado que una siesta corta de 20 a 30 minutos mejora la función cognitiva, reduce el estrés y aumenta la productividad. Estos hallazgos han despertado un renovado interés por la siesta, y algunas empresas incluso han introducido espacios de siesta designados para los empleados.

La siesta representa algo más que dormir: es un reflejo del énfasis que pone España en vivir bien y dar prioridad a la calidad de vida. A diferencia de muchas culturas que valoran la productividad constante, en España se celebra la idea de bajar el ritmo para disfrutar del momento. Esta filosofía se extiende a otros aspectos de la vida española, como el ritmo pausado de las comidas y la importancia de las reuniones sociales.

Además, la siesta encarna un sentido de adaptabilidad. Las famosas cenas tardías y la vibrante vida nocturna de España son posibles en parte gracias a este descanso del mediodía. La flexibilidad para conciliar trabajo, familia y ocio es un rasgo distintivo de la cultura española.

La siesta ha sido a menudo malinterpretada por los forasteros como símbolo de pereza. Sin embargo, los españoles la consideran una costumbre práctica e incluso productiva. Lejos de ser ociosa, la siesta responde a la necesidad histórica de España de armonizar el trabajo con las exigencias de un clima caluroso y largas jornadas agrícolas.

Curiosamente, otros países han adoptado prácticas similares, desde el «riposo» italiano hasta la cultura china de la siesta después de comer. Esta perspectiva global subraya la necesidad humana universal de descanso y el valor de respetar los ritmos naturales.

Aunque la siesta ya no sea una rutina diaria para todos los españoles, sigue siendo un poderoso símbolo de su identidad cultural. En un mundo acelerado, la siesta nos recuerda la importancia del equilibrio y el bienestar. Tanto si se practica tradicionalmente como si se reinventa para la era moderna, la siesta sigue inspirando a quienes buscan un modo de vida más armonioso.

La siesta, como la propia España, es una mezcla de tradición e innovación. Es un testimonio de los valores perdurables del descanso y el rejuvenecimiento, y ofrece lecciones que resuenan mucho más allá de las fronteras de España.

When we think of Spanish culture, siestas often come to mind. This mid-afternoon break, rooted in centuries of tradition, represents not just a pause in the day but a broader reflection of Spain’s values around rest, community, and balance. I’m sure that we would all like to partake in this aspect of Spanish culture, but how did this practice come to be?

The word "siesta" comes from the Latin phrase "hora sexta", which refers to the sixth hour of daylight, around noon. In the past, workers in rural areas would take a midday rest to escape the oppressive heat of the Iberian sun. This practical habit soon evolved into a cultural norm deeply ingrained in Spanish society.

In the pre-industrial era, the siesta served as a respite from labour and a time for families to reconnect and share meals. Even as Spain transitioned to an urbanised society, the tradition endured, adapting to modern lifestyles in various ways.

Today the siesta is less widely practised, but it remains a cherished concept, especially in smaller towns and rural areas. In urban centres, the pressures of modern work schedules have lessened the prevalence of extended midday breaks. However, many businesses still observe a long lunch period, and some shops close in the afternoon before reopening in the evening.

Scientific studies have also lent credibility to the siesta’s benefits. A short nap of 20 to 30 minutes has been shown to improve cognitive function, reduce stress, and enhance productivity. These findings have sparked renewed interest in the siesta, with some companies even introducing designated nap spaces for employees.

The siesta represents more than just sleep: it is a reflection of Spain’s emphasis on living well and prioritising quality of life. Unlike many cultures that prize constant productivity, Spain celebrates the idea of slowing down to enjoy the moment. This philosophy extends to other aspects of Spanish life, such as the leisurely pace of meals and the importance of social gatherings.

In addition, the siesta embodies a sense of adaptability. Spain’s famously late dinners and vibrant nightlife are made possible in part by this midday rest. The flexibility to balance work, family, and leisure is a hallmark of Spanish culture.

The siesta has often been misunderstood by outsiders as a symbol of laziness. However, Spaniards view it as a practical and even productive custom: far from being idle, the siesta aligns with Spain’s historical need to harmonise work with the demands of a hot climate and long agricultural hours.

Interestingly, other countries have adopted similar practices, from Italy’s "riposo" to China’s post-lunch napping culture. This global perspective underscores the universal human need for rest and the value of honouring natural rhythms.

While the siesta may no longer be a daily routine for all Spaniards, it remains a powerful symbol of their cultural identity. In a fast-paced world, the siesta offers a reminder of the importance of balance and well-being. Whether practised traditionally or reimagined for the modern age, the siesta continues to inspire those seeking a more harmonious way of life.

The siesta, much like Spain itself, is a blend of tradition and innovation. It stands as a testament to the enduring values of rest and rejuvenation, offering lessons that resonate far beyond the borders of Spain.

Multilingualism in Morocco

Vaidehi Varma Y12

Le Maroc a une culture pleine de diversité et de différences. Un exemple de ces différences est la langue (en fait, le nombre de langues !). Bien qu’il n’y ait que deux langues officielles (l’arabe standard moderne (ASM) et le berbère), environ 40 % des Marocains parlent français et 21 % parlent espagnol. Seuls 15 % parlent anglais. Il y a aussi l’arabe marocain (connu aussi sous le nom de Darija), qu’on parle généralement à la maison.

Les différentes langues

L’alternance codique, c’est quand on utilise deux langues (ou plus) dans les conversations. Il y a des termes spécifiques pour les langues utilisées au Maroc : le Darija est un exemple de langue vernaculaire, c’est-à-dire une langue parlée par les gens ordinaires. Bien sûr, l’ASM est une langue officielle. Il y a aussi des langues de prestige, comme le français.

Le contexte historique du multilinguisme

Le Maroc a une histoire vraiment compliquée, et le colonialisme a joué un rôle non négligeable dans l’histoire de ses langues. Il y a des centaines d’années, les Romains ont envahi le Maroc (et une grande partie de l’Afrique aussi). En fait, l’origine du mot « berbère » est le mot latin « barbare ». C’est pourquoi les Marocains préfèrent se décrire comme Amazighs. Je crois que les langues amazighes (le tamazight, le tachelhit et le tarifit, par exemple) sont incroyables : elles ont survécu depuis 2000 av. J.-C. aux côtés de l’arabe et des autres langues. Cela montre la persistance et la ténacité de ces langues, et contribue énormément à la diversité linguistique.

Les contextes sociaux

Ici, on peut voir une carte du Maroc. Avant le 20ème siècle, le Maroc était un royaume indépendant. Mais pendant les guerres mondiales, la plupart du Maroc était gouvernée par la France, tandis que dans le nord il y avait des parties gouvernées par l’Espagne (en fait, en 2005, un cinquième des Marocains pouvait parler espagnol). Cette complexité a contribué à la scène linguistique actuelle ; donc, même maintenant, il y a beaucoup de diversité linguistique. Finalement, en 1956, le Maroc a obtenu son indépendance. Mais l’effet de cette colonisation est encore présent et visible aujourd’hui.

La formalité du contexte affecte énormément la langue parlée. Par exemple, on parle Darija dans les situations informelles. Cependant, si on était dans un bureau ou si on parlait à des supérieurs, on parlerait français ou arabe. (Comme je l’ai dit auparavant, bien que le français ne soit pas une langue officielle, il est largement parlé.)

Comme dans presque tous les pays, il y a bien sûr des dialectes et des variations différents selon la région. En général, ces variations affectent le plus les langues vernaculaires, en particulier dans les endroits ruraux. Dans le nord (comme à Tanger), il est plus probable d’entendre des mélanges d’espagnol, et du français dans les grandes villes (comme Rabat).

De nombreuses universités utilisent le français comme première langue. Ça peut causer des problèmes pour les étudiants qui utilisent l’ASM ou le Darija dans leur vie quotidienne, car ils doivent étudier en français. Ces problèmes peuvent tourner autour du changement de langues, mais aussi de l’identité. Cependant, dans les écoles primaires et les collèges, c’est l’arabe qui est enseigné. À cause de ça, l’alternance codique est une compétence essentielle.

Dans les médias :

Maintenant, la télévision utilise le Darija de plus en plus. Je crois qu’on essaie de rendre la télévision plus accessible à tous les Marocains. Malgré l’augmentation du Darija dans le divertissement, le français reste la langue dominante dans les médias et les journaux télévisés. Cependant, même dans ces contextes, le français est souvent mélangé avec l’arabe afin d’avoir un ton courant.

L’alternance codique est répandue dans toute la musique marocaine, particulièrement dans la musique populaire auprès des jeunes. Ces chansons incluent des mélanges de français et d’arabe, et même d’anglais quelquefois. Je pense que ça reflète l’identité culturelle hybride des jeunes Marocains.

Le fossé générationnel, en termes de langue, est accentué par l’influence des médias mondiaux sur les jeunes. Souvent, les générations âgées voient le Darija comme pire que le français pour les situations formelles.

Conclusion

En conclusion, je pense que le mélange des langues parlées au Maroc représente l’identité culturelle des jeunes. Je pense qu’il est essentiel d’avoir le multilinguisme pour garder l’histoire du Maroc et pour protéger sa culture – particulièrement dans les régions plus urbaines (où la mondialisation peut pousser les jeunes à arrêter de parler les langues vernaculaires, et donc on les perd). Cependant, l’inclusion des langues européennes (le français et l’espagnol) aide à moderniser le Maroc et à créer des relations internationales.

Morocco has a culture full of diversity and differences. One example of these differences is the language (actually the number of languages!). Although there are only 2 official languages (Modern Standard Arabic (MSA) and Berber), around 40% of Moroccans speak French, and 21% speak Spanish. Only 15% speak English. There’s also Moroccan Arabic (also known as Darija) and generally it’s spoken at home.

The different languages

Code-switching is when we use two languages (or more) in conversations. There are specific terms for the languages used in Morocco: Darija is an example of a vernacular language, a language spoken by ordinary people. Of course, MSA is an official language. There are also ‘prestige’ languages, like French.

The historical context of multilingualism

Morocco has a very complicated history, and colonialism has played a significant role in the histories of its languages. Hundreds of years ago, the Romans invaded Morocco (and much of Africa too). In fact, the origin of the word “Berber” is the Latin word “barbarian”, which is why Moroccan people prefer to describe themselves as Amazigh. I think that the Amazigh languages (Tamazight, Tachelhit, and Tarifit, for example) are incredible: they have survived since 2000 B.C. alongside Arabic and other languages. This shows the persistence and tenacity of these languages, and really helps linguistic diversity.

Here, we can see a map of Morocco. Before the 20th century, Morocco was an independent kingdom. But during the world wars, the majority of Morocco was governed by France, while in the North there were parts governed by Spain (in fact, in 2005, one in five Moroccans could speak Spanish). This complexity has contributed to the current linguistic landscape, so even now there is lots of linguistic diversity. Finally, in 1956, Morocco gained its independence, but the effect of this colonisation is still present and visible today.

Social contexts

The formality of the context hugely affects which language is spoken. For example, one would speak Darija in more informal situations. However, if one were in an office or speaking to superiors, one would speak French or Arabic. (As I said before, although French isn’t an official language, it is widely spoken.) Like almost every country, Morocco of course has different dialects and variations in each region. In general, these variations affect the vernacular languages the most, particularly in rural areas. In the North (like Tangier), it’s more likely that mixes of Spanish will be heard, and French in big cities (like Rabat).

Many universities use French as the main language. This can cause problems for students who use MSA or Darija in their daily lives, as they must study in French. These problems can revolve around changing languages, but also around one’s identity. However, in primary and secondary schools, Arabic is taught. Because of this, code-switching is an essential skill.

In the media:

Nowadays, TV uses Darija more and more. I think broadcasters are trying to make TV more accessible for all Moroccans. Despite the increase in Darija in entertainment, French remains the dominant language in media and news programmes. However, even in these contexts, French is often mixed with Arabic in order to strike a conversational tone.

Code-switching is widespread throughout Moroccan music, particularly in the music that’s popular with young people. These songs include mixes of French and Arabic, and even English sometimes. I think that this reflects the hybrid cultural identity of young Moroccans. The generational gap, in terms of language, is widened by the influence of global media on young people. Often, the older generations see Darija as ‘worse’ than French in formal situations.

Conclusion

In conclusion, I think that the mix of languages spoken in Morocco represents young people’s cultural identity. I think it is essential to maintain multilingualism, to preserve Morocco’s history and protect its culture – particularly in more urban areas, where globalisation can cause young people to stop speaking vernacular languages, so we lose them. However, the inclusion of European languages (French and Spanish) helps to modernise Morocco and build its international relations.

The Physics of Time: From Newtonian Absolutes to Einsteinian Relativity

Anna Greenwood Y12

From a young age, our understanding of time is founded upon the idea of seconds, minutes, and hours as absolute quantities, a concept introduced by Newton in his 1687 work Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy). Newton’s assertion that time flows uniformly and universally, though seemingly logical, was later contradicted by a rational consequence of Einstein’s theory of Special Relativity: Time Dilation.

Time Dilation can be defined as a physical phenomenon in which time passes differently for observers in different inertial reference frames. For example, time moves slower (or ‘dilates’) for an observer who is in motion relative to another observer. This idea, which seemingly contradicts all instinctual understanding of our universe, comes as an extension of Einstein’s two-postulate formulation, which forms the basis of his theory of Special Relativity. The first postulate states that the laws of physics are the same in all inertial frames of reference (an inertial reference frame being a coordinate system in which Newton’s principle of inertia is obeyed). For instance, the simple phenomenon of bouncing a ball on a train will not differ whether the train is at rest or moving at constant velocity. The second postulate asserts the invariance of c: the speed of light in a vacuum is the same in every inertial reference frame (i.e. the same for all observers regardless of their motion). This second postulate diverges more sharply from our everyday observations, as you would expect the physics of the speed of light to parallel the physics of other moving objects. By way of example, two cars each travelling at 70 mph in opposite directions on a motorway have a relative velocity of 140 mph. Similarly, you might expect the light from a torch shone from the front of a spaceship travelling at 0.9c to travel at 1.9c. And yet, Einstein theorised that the light will still travel at the speed of light, whether measured from the frame of reference of an observer on the spaceship or from a stationary frame of reference beside it. This ‘speed limit’ of the universe, 299,792,458 m/s, is the speed of any massless particle’s motion in a vacuum.

But how does this affect time? The phenomenon of light being unable to exceed a certain speed leads to various paradoxes in the world of physics. One of these can be illustrated through the attempt to standardise time with ‘pneumatic clocks’. Following French success with SI units, an idea of ‘astronomically authorised time’ was attempted using the pulsing of compressed air and seismograph machines. This attempt to coordinate clocks gave rise to various inconsistencies, especially when extended to larger distances, as timing signals would arrive at near clocks sooner than at far clocks. This meant that moving the central clock would produce different results, and as such no two events could be truly ‘simultaneous’. This can be exemplified by a simple experiment with two people equidistant from a lightbulb, on opposite sides of it. At rest, both people will witness the light switching on at the same time, because the speed of light is the same and they are at equal distances from the source. Similarly, the same circumstances on a moving train will yield the same result: if the train is moving at constant velocity, it is an inertial reference frame, and as such Einstein’s first postulate applies (the laws of physics are the same). However, from a reference frame outside the train (for example, on the platform), it appears that the two people witness the light switching on at different times. This is because the light ray travelling in the same direction as the train’s motion has further to travel than the ray moving against it, and since the speed of light cannot exceed 299,792,458 m/s this takes more time (v = s/t). Physicists resolved this problem with a simple idea: everyone measures time correctly; it just depends on the point of view. But for these measurements to be useful, we need some way to switch between different points of view. We do this using Lorentz transformations, of which Time Dilation is just a special case.
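To make the special case concrete, here is a minimal Python sketch (an illustration added here, not part of the original article) computing the Lorentz factor and the dilated time for a moving clock; the chosen speeds and function names are illustrative assumptions, and gravitational effects are ignored.

```python
import math

C = 299_792_458.0  # speed of light in a vacuum, m/s

def lorentz_gamma(v: float) -> float:
    """Lorentz factor gamma = 1 / sqrt(1 - v^2 / c^2) for a speed v in m/s."""
    if not 0.0 <= v < C:
        raise ValueError("speed must satisfy 0 <= v < c")
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def dilated_time(proper_time: float, v: float) -> float:
    """Time elapsed in the 'stationary' frame when proper_time elapses
    on a clock moving at speed v (time dilation: t = gamma * t0)."""
    return lorentz_gamma(v) * proper_time

# An airliner at ~250 m/s: gamma exceeds 1 by only ~3.5e-13, which is why
# nanosecond-precision atomic clocks are needed to see the effect at all.
print(f"gamma at 250 m/s: {lorentz_gamma(250.0):.15f}")

# A spaceship at 0.9c: one hour on board corresponds to ~2.29 hours outside.
v = 0.9 * C
print(f"gamma at 0.9c: {lorentz_gamma(v):.3f}")
print(f"1 h on board = {dilated_time(3600.0, v) / 3600.0:.2f} h outside")
```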

As previously mentioned (and probably over-iterated), velocity = displacement / time, or v = s/t. For a massless particle moving in a vacuum, velocity cannot exceed 299,792,458 m/s, and yet rays of light, such as those from our lightbulb on a moving train, must travel greater distances in the same time. To resolve this discrepancy in measurements, Einstein suggested an adjustment to Newton’s law so that time itself (from the reference frame of the moving object) slows down to compensate for the increase in distance. This allows the speed of light to remain constant. It also implies that faster movement in space is equivalent to slower movement in time: when another reference frame is moving relative to you, time in that reference frame slows down relative to the time you measure. This can be illustrated using light clocks (two parallel mirrors with a light beam ‘bouncing’ between them). A light clock at rest measures time through the movement of light through one oscillation; if this light clock is then set moving at constant velocity, an oscillation will take greater time (because the ray has further to travel, moving diagonally as opposed to directly up and down). However, time will only slow down in the moving light clock from a reference frame outside the clock. From a reference frame inside the clock (moving with the same velocity), the time taken for the light ray ‘bouncing’ between the mirrors does not differ from when the clock was stationary.

Evidence for Time Dilation is difficult to accumulate, as its effects are only clear when moving at a speed close to the speed of light. In 1971, the Hafele-Keating experiment flew four caesium-beam atomic clocks around the world aboard commercial aircraft. The clocks were then compared to a previously synchronised atomic clock which had remained stationary on Earth. The elapsed times on the clocks differed by around 16 nanoseconds, just as Einstein’s formula, γ = 1/√(1 − v²/c²), had predicted. Another example of Time Dilation is observed in the lifetimes of muons travelling at relativistic speeds (as opposed to those at rest). Muons are particles similar to electrons but with a mean lifetime of only 2.2×10⁻⁶ seconds. Muons in motion are observed to take longer to decay (because time moves slower for them).
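As a hedged worked example of the muon observation (the numbers below are illustrative, not taken from the article): a muon at 0.995c has γ of roughly 10, so its 2.2 μs rest-frame lifetime stretches to about 22 μs in the lab frame, letting it travel roughly ten times further than a naive calculation allows.

```python
import math

C = 299_792_458.0        # speed of light, m/s
MUON_LIFETIME = 2.2e-6   # muon mean lifetime in its rest frame, s

def gamma(beta: float) -> float:
    """Lorentz factor for a speed given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

beta = 0.995                                # an illustrative relativistic speed
lab_lifetime = gamma(beta) * MUON_LIFETIME  # dilated lifetime in the lab frame

naive_range = beta * C * MUON_LIFETIME      # range if time did not dilate
dilated_range = beta * C * lab_lifetime     # range with time dilation

print(f"gamma           = {gamma(beta):.1f}")                     # ~10
print(f"lab lifetime    = {lab_lifetime * 1e6:.1f} microseconds") # ~22
print(f"range (naive)   = {naive_range / 1000:.2f} km")           # ~0.66 km
print(f"range (dilated) = {dilated_range / 1000:.2f} km")         # ~6.6 km
```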

The concept of time, once thought to be absolute and universal, has been fundamentally redefined by Einstein’s theory of Special Relativity. Time Dilation challenges our intuitive understanding of the universe and reveals that time is not a fixed constant but a variable quantity that depends on the relative motion of observers. The importance of Einstein’s postulates is demonstrated through thought experiments such as the moving light clock, and through real-world evidence such as the Hafele-Keating experiment. Einstein’s theory of Special Relativity highlights the intricacies of the relationship between space and time and continues to shape how we perceive and measure the passage of time in our ever-expanding universe.

A Hunt for the Invisible

Eashan Rautaray Y12

415 years ago, a brilliant mind observed the heavens above – and noted four bright objects in the sky, orbiting around Jupiter. He also began to theorise about the orbits of planets and moons. Half a century later, a renowned Cambridge natural philosopher mathematically confirmed the attraction between two objects of mass. Even more ingeniously, he managed to link the laws of motion of everyday objects to those of celestial bodies, theorised by an astronomer decades before. I am, of course, referring to the great physicists Galileo, Newton and Kepler. Thus began a cycle of scientific theories, proofs and equations. Einstein himself said that he owed his visionary work to those who came before him (Maxwell and Faraday, for instance).

A strange way to start a discussion on the undiscovered, but it nonetheless creates a contrast for one of the most challenging problems facing physicists today: dark matter. When observing a typical galaxy, Einstein’s general relativity tells us that objects are moving too fast, rotating with such speeds that the gravitational forces from the visible (baryonic) matter should not be able to hold the galaxy together. A similar effect in galaxy clusters implies the existence of some undetectable phenomenon with extra mass, aptly named dark matter (DM) or non-baryonic matter. It does not interact via the electromagnetic force, which makes it undetectable by conventional means and observable only through its gravitational effects on visible matter.

That’s about all we know, so let’s begin our hunt for it.

Part 1: Identifying candidates

WIMPs (Weakly Interacting Massive Particles) have long been considered the leading candidate for dark matter due to their theoretical foundations in supersymmetry and their predicted properties, which align with cosmological observations. These particles are characterised by their weak interactions with ordinary matter [1], which makes them difficult to detect [2].

Supersymmetry (SUSY) is an extension to the Standard Model, which describes the fundamental forces and particles, including bosons (integer spin) and fermions (half-integer spin). It was suggested to prevent the Higgs boson’s mass from becoming unrealistically large. The Minimal Supersymmetric Standard Model (MSSM) suggests each particle has an (as yet undiscovered) counterpart: a particle differing in spin by ½. Neutralinos (mixtures of the zino, photino and higgsino, the superpartners of the Z boson, photon and Higgs boson) are a hypothetical explanation for DM.

WISPs (Weakly Interacting Slim Particles) are another class of candidate. Axions are a type of WISP: hypothetical elementary particles that arise from theories attempting to solve the strong CP problem in quantum chromodynamics. They are considered a compelling dark matter candidate due to their potential to be produced in large quantities during the early universe and their extremely low mass, which would allow them to behave as cold dark matter; they could be detected through their interactions with electromagnetic fields [3].

Sterile neutrinos are a proposed extension of the Standard Model, theorized to interact only through gravity and possibly the Higgs mechanism. These particles are particularly preferred as dark matter candidates due to their predicted production mechanisms and their ability to explain the observed neutrino masses and mixing patterns. Sterile neutrinos could also lead to interesting astrophysical signatures, such as X-ray emissions from their decay [2], [3]

Or perhaps we could modify the laws of gravity. Though unpopular, MOND (Modified Newtonian Dynamics) posits that the force of gravity behaves differently on galactic scales compared to smaller, planetary scales. This modified understanding of gravity can account for the same observations attributed to dark matter [4].
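To illustrate what MOND changes (a sketch added here, not the article’s own material): in the deep-MOND regime, where accelerations fall below the empirically fitted constant a0 ≈ 1.2×10⁻¹⁰ m/s², the circular speed tends to a constant given by v⁴ = G·M·a0, producing the flat rotation curves that Newtonian gravity with visible matter alone cannot. The galaxy mass below is an illustrative assumption.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10       # MOND acceleration scale, m/s^2 (empirical fit)
M_SUN = 1.989e30   # solar mass, kg
KPC = 3.086e19     # one kiloparsec in metres

def v_newton(mass: float, r: float) -> float:
    """Newtonian circular speed from the visible mass alone: v = sqrt(GM/r)."""
    return math.sqrt(G * mass / r)

def v_mond_deep(mass: float) -> float:
    """Deep-MOND asymptotic speed: v^4 = G * M * a0, independent of radius."""
    return (G * mass * A0) ** 0.25

M = 5e10 * M_SUN  # illustrative baryonic mass of a spiral galaxy
for r_kpc in (5, 10, 20, 40):
    r = r_kpc * KPC
    print(f"r = {r_kpc:>2} kpc: Newtonian {v_newton(M, r) / 1000:6.1f} km/s, "
          f"deep-MOND {v_mond_deep(M) / 1000:6.1f} km/s")

# Newtonian speeds fall off as 1/sqrt(r); the deep-MOND speed stays flat,
# mimicking the effect usually attributed to a dark matter halo.
```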

Part 2: Establishing detection methods

Direct detection refers to experiments that aim to observe the effects of dark matter particles, particularly weakly interacting massive particles (WIMPs), as they collide with nuclei in a detector located on Earth. Although WIMPs are hypothesized to only interact through gravitational and weak forces, various experiments have been developed to attempt to detect them directly. These include cryogenic crystal detectors, such as those used by the Cryogenic Dark Matter Search (CDMS), which utilize very cold germanium and silicon crystals to detect vibrations caused by WIMP interactions [5]. Despite the challenges, direct detection remains a necessary complement to indirect detection methods, as it provides a means to confirm the existence of dark matter candidates under the prevailing theoretical framework.
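For a sense of scale (a back-of-envelope estimate, not from the article): elastic-scattering kinematics give a maximum recoil energy of E_R = 2μ²v²/m_N, where μ is the WIMP-nucleus reduced mass. The sketch below, with an assumed xenon target and a typical galactic speed, shows why such detectors must resolve recoils of only tens of keV.

```python
# Back-of-envelope maximum recoil energy for elastic WIMP-nucleus scattering:
#   E_R(max) = 2 * mu^2 * v^2 / m_N,  mu = reduced mass of WIMP and nucleus.
# Masses are in GeV; speeds are converted to fractions of c.

C = 299_792_458.0  # m/s

def max_recoil_kev(m_wimp_gev: float, m_nucleus_gev: float, v_ms: float) -> float:
    beta = v_ms / C
    mu = m_wimp_gev * m_nucleus_gev / (m_wimp_gev + m_nucleus_gev)
    return 2.0 * mu ** 2 * beta ** 2 / m_nucleus_gev * 1e6  # GeV -> keV

M_XE = 122.0    # a xenon nucleus, roughly, in GeV
V_GAL = 220e3   # typical galactic WIMP speed, m/s

for m_wimp in (10, 50, 100, 500):
    print(f"m_WIMP = {m_wimp:>3} GeV -> E_R(max) ~ "
          f"{max_recoil_kev(m_wimp, M_XE, V_GAL):5.1f} keV")

# A ~100 GeV WIMP deposits at most a few tens of keV, hence the need for
# ultra-low-background detectors with keV-scale thresholds.
```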

Indirect detection seeks to observe the products of dark matter annihilations or decays occurring far from Earth, particularly in regions where dark matter is expected to accumulate, such as the centres of galaxies and galaxy clusters. These regions typically contain minimal baryonic matter, which reduces background noise from standard astrophysical processes [6]. Researchers focus on detecting excess gamma rays, which are produced as a result of WIMP annihilations or through interactions of charged particles with ambient radiation. Current experiments also explore antiprotons and antideuterons as potential signatures of dark matter annihilation. Recent studies have introduced novel methodologies to search for dark matter, including the use of quantum devices.

Researchers at the SLAC National Accelerator Laboratory propose that these devices could be tuned to detect what they refer to as thermalised dark matter, a form that may have been present on Earth for an extended period [7]. This represents a shift from traditional galactic dark matter searches, which focus on dark matter entering from space. Additionally, the Light Dark Matter eXperiment (LDMX) aims to use primary electron beams to produce light dark matter in fixed-target collisions, offering unique sensitivity to sub-GeV dark matter candidates [8]. This broadened approach highlights the expanding landscape of experiments required to explore as many dark matter candidates, with differing masses and properties, as possible.

Part 3: Evidence and results

Neutralinos are, in many models, the lightest supersymmetric particle (LSP) and a WIMP. Conservation of R-parity (supersymmetric particles carry R-parity −1) ensures the LSP cannot decay into lighter particles. The Wilkinson Microwave Anisotropy Probe (WMAP) measures the cold dark matter relic density to be 0.1126 [9]. Minimal supergravity (mSUGRA) [10] is a constrained SUSY model characterised by universal scalar and gaugino masses, trilinear couplings and the ratio of Higgs vacuum expectation values (lots of big words that essentially assume idealised conditions to get the most accurate value). When the neutralino mass is close to that of the next-to-lightest supersymmetric particle (NLSP), co-annihilation processes, for example with charginos, reduce the relic density, bringing it closer to WMAP values.

DarkSUSY [11] is a computational tool used to model neutralino properties. It calculates the relic density with the Boltzmann equation. When constrained-MSSM and mSUGRA constraints are applied to the DarkSUSY model, the relic densities come out similar. This is strong evidence for SUSY’s relevance to the dark matter search, thanks to its detailed treatment of co-annihilation and resonance annihilation, and its ability to include collider limits and indirect detection bounds. However, both models assume a thermal freeze-out (still our current interpretation of the aftermath of the big bang): neutralinos were in thermal equilibrium in the early universe, annihilating into and being produced from Standard Model particles at equal rates. As the universe expanded, neutralinos could no longer annihilate efficiently, leaving behind a ‘relic density’ determined by their annihilation cross-section. XENONnT and LUX-ZEPLIN [12], [13], [14] are direct-detection experiments for WIMPs. They employ dual-phase xenon time projection chambers (TPCs) and look for the nuclear recoil produced by neutralino-nucleus interactions. Over 280 days, with sensitivity to masses of around 30 GeV and above and the most stringent limits yet on spin-independent interactions, they have found no signal, and, like ATLAS, they push the hypothetical mass ranges ever higher.
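The freeze-out logic can be caricatured with the textbook order-of-magnitude relation Ωh² ≈ 3×10⁻²⁷ cm³ s⁻¹ / ⟨σv⟩ (a sketch under standard assumptions, not a DarkSUSY calculation): a weak-scale annihilation cross-section lands close to the WMAP value of 0.1126 quoted above, the so-called ‘WIMP miracle’.

```python
# Textbook freeze-out approximation: the relic density is inversely
# proportional to the thermally averaged annihilation cross-section,
#   Omega h^2 ~ 3e-27 cm^3 s^-1 / <sigma v>.
# This caricatures the full Boltzmann-equation treatment used by DarkSUSY.

OMEGA_PREFACTOR = 3e-27  # cm^3 s^-1, standard order-of-magnitude constant

def relic_density(sigma_v: float) -> float:
    """Approximate Omega h^2 for an annihilation cross-section in cm^3/s."""
    return OMEGA_PREFACTOR / sigma_v

# A weak-scale cross-section (~3e-26 cm^3/s) gives Omega h^2 ~ 0.1,
# close to WMAP's 0.1126:
for sigma_v in (1e-25, 3e-26, 1e-26):
    print(f"<sigma v> = {sigma_v:.0e} cm^3/s -> Omega h^2 ~ "
          f"{relic_density(sigma_v):.3f}")
# Larger cross-sections annihilate away too much dark matter;
# smaller ones leave too much behind.
```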

Better and quicker progress has been made with indirect searches: the Fermi-LAT gamma-ray telescope and VERITAS look at galactic centres to identify DM interactions [15]. Hypothetical particles like sterile neutrinos or Kaluza-Klein particles [16] align with the observed characteristics of DM and interact via forces ranging from the weak force to gravity alone. Their mass range is broader (from keV to TeV) than that of SUSY particles, and ongoing observations haven’t encountered as many null results.

Similarly, axions and WISPs [17], [18] can help in the search for DM. Axions play a role similar to the SUSY candidates. They were proposed as a solution to the strong CP problem in quantum chromodynamics (the question of why the strong interaction appears to respect charge-parity symmetry, which led to more complex ideas about mesons). Their production mechanisms, such as misalignment or topological defects, also agree with the WMAP relic density [19]. Axions interact primarily via their coupling to photons and gluons, leading to reduced backgrounds in experiments compared to WIMP searches like LUX-ZEPLIN. However, CERN’s Axion Solar Telescope (CAST) [20] has not recorded a detection yet.

Taking a step back from all these complicated words (blame CERN and the universities for their terrible naming systems): where are we actually poised in this hunt right now? Poorly, if truth be told. Lots of money and resources have been invested, and almost all results have been null. What we can conclude, however, are the hypothetical mass ranges for each particle – if they even exist – and these provide a framework for the future.

Part 4: The future

It is inevitable that we discuss the future of our theories and candidates: more facilities, and more insight into annihilation cross-sections. Significant strides have been made in the design and execution of liquid xenon direct-detection experiments, with upcoming initiatives like DARWIN [21] aiming to surpass the limitations of current technologies. The Pancake facility has been instrumental in testing new detector components, such as hermetic time projection chambers (TPCs), to minimise background noise from radon emanation. The EDELWEISS collaboration [21] is also pursuing innovative techniques using Transition Edge Sensors (TES) [21] to enhance sensitivity to light DM particles, achieving promising results in background reduction. Notice that a lot of these efforts focus on collaboration and methods of data collection rather than the underlying methodology itself.

However, different approaches can yield varying results, and reliance on a single method may not adequately address the complexity of the problem. Consequently, experimentalists often quote results from multiple approaches to mitigate potential disputes and to provide a broader understanding of the findings.

Thus the problem of dark matter remains just that: a problem. No one elegant solution, no single equation, no apple from a tree and no star in the sky. Reflecting on the problem makes us appreciate the lengths that modern physicists must go to, and the stringent standards they must satisfy, in order to find any evidence for or against their theories.

References

[1] Paul, B. (2024) In the Hunt for Dark Matter, it is Harder for WIMPs to Hide – UCLA Division of Physical Sciences. https://physicalsciences.ucla.edu/in-the-hunt-for-dark-matter-it-is-harder-for-wimps-to-hid

[2] Jacob (2023) Illuminating Dark Matter. https://www.simonsfoundation.org/event/illuminating-darkmatter-2023/.

[3] Dark Matter Candidates from Particle Physics and Methods of Detection (no date). https://ar5iv.labs.arxiv.org/html/1003.0904

[4] Stuart, C. (2024) What could dark matter be? Five key theories. https://www.skyatnightmagazine.com/spacescience/what-could-dark-matter-be

[5] Davis, J.H., McCabe, C. and Bœhm, C. (2014) 'Quantifying the evidence for dark matter in CoGeNT data,' Journal of Cosmology and Astroparticle Physics, 2014(08), p. 014. https://doi.org/10.1088/1475-7516/2014/08/014.

[6] Grube, J. and the VERITAS Collaboration (2012) 'VERITAS limits on dark matter annihilation from dwarf galaxies,' AIP Conference Proceedings, pp. 689–692. https://doi.org/10.1063/1.4772353.

[7] Angelides, N. (no date) XLZD Collaboration. https://xlzd.org/.

[8] UCLA Dark Matter 2023 (2023). https://indico.cern.ch/event/1188759/timetable/?view=standard

[9] Chattopadhyay, U., Corsetti, A. and Nath, P. (2003) WMAP constraints, Susy Dark matter and implications for the direct detection of susy, arXiv.org. Available at: https://arxiv.org/abs/hep-ph/0303201

[10] Baer, H. et al. (2003) Updated reach of the CERN LHC and constraints from relic density, B->s gamma and a(MU) in the MSUGRA model, arXiv.org. Available at: https://arxiv.org/abs/hep-ph/0304303

[11] Gondolo, P. (2004) ArXiv:astro-ph/0406204v1 8 Jun 2004. Available at: http://arxiv.org/pdf/astroph/0406204

[12] Aad, G. et al. (2018) The ATLAS experiment at the CERN large hadron collider, Repositorio Institucional. Available at: https://ri.conicet.gov.ar/handle/11336/64823

[13] (No date) First Dark Matter Search Results from the LUX-ZEPLIN (LZ ... Available at: https://link.aps.org/doi/10.1103/PhysRevLett.131.041002

[14] (No date) First WIMP search results from the XENONnT experiment released by the Aprile lab! Department of Physics, Columbia University. Available at: https://www.physics.columbia.edu/news/first-wimp-search-results-xenonnt-experiment-released-aprile-lab

[15] Weakly interacting massive particle (2024) Wikipedia. Available at: https://en.wikipedia.org/wiki/Weakly_interacting_massive_particle

[16] (No date a) Kaluza-Klein Dark matter. Available at: https://arxiv.org/pdf/hep-ph/0207125.pdf

[17] (No date a) Supersymmetric Dark matter candidates. Available at: https://arxiv.org/pdf/1001.3651.pdf

[18] (No date) New Journal of Physics article. Available at: https://iopscience.iop.org/article/10.1088/1367-2630/11/10/105006/pdf

[19] Day, C. (2023) The search for wimps continues, Physics. Available at: https://physics.aps.org/articles/v16/s106

[20] (2019) Ntua. Available at: https://dspace.lib.ntua.gr/xmlui/bitstream/handle/123456789/55174/simatou_isidora_diploma_thesis%20.pdf?sequ ence=1

[21] After 30 years of R&D, Breakthrough announced in Dark Matter Detection Technology, definitive search to begin for axion particles (no date) UW News. Available at: https://www.washington.edu/news/2018/04/09/admxdetection-technology/

[22] Misiaszek, M. and Rossi, N. (2024) Direct detection of dark matter: A critical review, MDPI. Available at: https://www.mdpi.com/2073-8994/16/2/201

[23] Results from the LZ Dark Matter Search (no date) Bulletin of the American Physical Society. Available at: https://meetings.aps.org/Meeting/APR23/Session/U05.3
