



As a biologist, the concept of DNA is ingrained in my psyche, just as a specific trait or characteristic is coded in each individual’s genes. I am regularly asked what makes the RGS community so special, so distinctive, and this edition of The Annual goes a long way to answering that question. The RGS has existed for well over 500 years; the School has innovated, evolved and developed and yet – irrespective of the time – our values have endured and, of these, as well as our pride in our culture of inclusivity and respect, scholarship has defined who we are and what we do. It is in our DNA.
In this light I was particularly proud that our Inspection earlier in the year identified this particular aspect as one of the two ‘Significant Strengths’ we were awarded. The ISI Inspection report noted: “The strong academic culture leads to pupils who readily engage in critical thinking and deep learning, and display intellectual curiosity.” This culture, they felt, permeated every element of school life. As I read through this edition of The Annual, this passion for learning absolutely shines through. The depth, diversity and complexity of the research impress; however, it is the sheer enthusiasm and excitement for learning which highlights that, far from being mere rhetoric, our philosophy of Scholarship for All remains a reality.
I would like to take this opportunity to congratulate my Head of Scholarship, Mrs Tarasewicz, and all those students who have contributed to The Annual; I hope that all who read it are inspired. Scholarship is in our DNA at the RGS and I have absolute confidence that it remains, and will continue to remain, at the very heart of all we do, and all we strive for.
Dr JM Cox Headmaster
I am delighted to introduce the 2025 edition of The Annual, celebrating an incredibly exciting year of Scholarship amongst our younger students here at the RGS.
Scholarship is one of our core school values and we are extremely proud of the culture of Scholarship for All that we have created, encouraging all of our students to develop intellectual curiosity and creativity, as well as academic ambition. This was recognised as a ‘Significant Strength’ by the inspectors when they visited in January 2025 and it is certainly something that has been embodied by the students that you will encounter in the following pages.
For several years, the Senior Independent Learning Assignment (the ILA) has been a flagship part of our Scholarship for All programme, offering students in the Lower Sixth the opportunity to research a topic of their own choosing, with subject specialist support from a member of the teaching staff.
This year, for the first time, we were very excited to launch the Junior ILA and so to extend this opportunity to members of the Third and Fourth Form. Initial interest far surpassed our expectations, with almost 80 students signing up. In total, 40 students successfully submitted finished projects, representing remarkable individual and collective achievements. All 40 of these students were invited to share their projects with a guest list of staff, parents and fellow students at the Junior ILA Celebration Evening, culminating in the presentation of the inaugural Junior ILA awards.
The very best of this year’s Junior ILAs have been published in full over the following pages, including all of the students whose projects were Commended and Highly Commended, as well as those produced by our award winners. With such a remarkable range of titles, I am sure you will agree that there is something to capture everyone’s interest.
At this point I need to also pay tribute to the remarkable team of 21 Sixth Form students who acted as mentors for their younger counterparts. This support of students by students was the main point of difference between the Junior ILA and the Senior ILA, where supervisors are drawn from the staff body. It turned out to be one of the most successful and inspiring features of the programme and I am incredibly grateful to the mentors for sharing their time, enthusiasm and experience with the younger pupils. Several mentors attended the Celebration Evening where they celebrated their mentees’ achievements and were also themselves celebrated.
Finally, I would like to say a huge thank you to Georgina Webb, our Partnerships and Publications Assistant, who has been responsible for putting together this beautiful publication.
I do hope that you enjoy reading it.
Mrs Henrietta Tarasewicz Head of Scholarship
With thanks to: Mr Dunscombe, Mr Wright and Miss Goul-Wheeker for acting as judges at the Junior ILA Celebration Evening, and to Mrs Webb for producing this publication.
This essay was commended at the Junior ILA Celebration Evening
Motorsport has a variety of applications, one of which is the technology it passes down to the cars that people drive every day. This innovative technology allows drivers across the world to be safer and faster. The purpose of this research essay is to find out how much, or how little, motorsport affects road cars, and how motorsport is useful to the car industry. We found that most of the technology came from motorsport. However, there have been other influences: the aerospace industry, for example, provided the automobile industry with anti-lock braking and new composites.
Cars are the most widely used form of transport in the world. (Armstrong, 2022) Ever since cars were invented in 1886, motorsport has been a testbed for road-car technology. The intense competition between racing teams to research and develop breakthroughs in automotive engineering has led to improved performance, efficiency and safety for drivers worldwide.
R&D is an abbreviation of Research and Development, and motorsport and R&D go hand in hand. Whether it is the top level of Formula 1 or the tough tracks of the World Rally Championship, motorsport relies on constant innovation. This ongoing progress helps improve performance, promotes sustainability, and makes the sport more entertaining for fans around the world.
Aerodynamics is one of the most important aspects of a car: if a car is not streamlined, it will not be very efficient. For example, a Tesla Model 3 has a drag coefficient (Cd) of 0.203 (Rawlins, 2024) and will do 132 MPGe (Miles per Gallon equivalent) (Brown, 2023), whereas the Mercedes EQXX (0.16 Cd) will do around 282 MPGe (unknown, unknown) with a smaller battery. Even a tiny difference in the shape of a car can therefore be very costly.
Figure 1: CFD (Computational Fluid Dynamics) predicts airflow around a car. (Red = high pressure, Blue = low pressure)
The Panhard CD LM64, made for Le Mans, is one of the most aerodynamic circuit cars ever built, with a drag coefficient of 0.12. (Gitlin, 2020) The Eco-Runner 8 is the most aerodynamic car ever, with a drag coefficient of only 0.045. (Gitlin, 2020) To put that into context, the most aerodynamic shape in nature is a teardrop, which has a drag coefficient of 0.04. (Tallodi, 2024)
The EQS by Mercedes is the most aerodynamic road car tested, with a drag coefficient of 0.20. (Harrison, 2021) The drag coefficient is worked out with the equation Cd = Drag Force / (0.5 × ρ × A × V²), where ρ, A and V stand for fluid density, reference area and velocity, respectively. From this equation there are a few ways to decrease drag; one example is reducing the frontal area. (Explained, 2023) Cars that have been influenced by aerodynamics researched in motorsport, such as the Prius or the Model 3, have a frontal area of around 2.2 m². (Explained, 2023) Motorsport has aided this development by researching CFD (computational fluid dynamics) and wind tunnels for these companies to use.
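The drag-coefficient equation can be rearranged to estimate the drag force acting on a car at a given speed. The short sketch below does this for the Tesla Model 3 figures quoted in this essay (Cd ≈ 0.203, frontal area ≈ 2.2 m²); the sea-level air density of 1.225 kg/m³ is an assumed standard value, not a figure from the essay.

```python
# Drag force, rearranged from Cd = F / (0.5 * rho * A * v^2):
#   F = 0.5 * rho * Cd * A * v^2
# Cd and frontal area for the Tesla Model 3 are taken from the essay;
# the air density is an assumption (standard sea-level value).

RHO = 1.225  # kg/m^3, assumed sea-level air density

def drag_force(cd: float, area_m2: float, speed_ms: float) -> float:
    """Aerodynamic drag in newtons."""
    return 0.5 * RHO * cd * area_m2 * speed_ms**2

# Model 3 at roughly motorway speed (30 m/s is about 67 mph):
print(round(drag_force(0.203, 2.2, 30.0), 1))  # ~246.2 N
```

Because speed is squared, doubling the speed quadruples the drag force, which is why streamlining matters so much at motorway speeds.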
The first wing in motorsport appeared when Michael May, a driver and engineer, came to the Nürburgring on 27 May 1956 in his Porsche 550 Spyder fitted with an upside-down aeroplane wing. On the straights the wing would lie flat, producing less drag, but in the corners he would tilt it to generate more downforce. (Mansell, 2023)
An example of active aerodynamics in motorsport is DRS in Formula 1: a section of the rear wing opens, allowing air to pass straight through and therefore reducing drag. Active aerodynamics can also be found in road-going vehicles. In the Porsche Panamera, a section of the rear deck opens and expands into a rear wing, generating more downforce. In higher-performance cars such as the Bugatti Chiron, the rear wing does not have to open: it can simply rise on hydraulics, and it can also serve as an air brake. The Porsche 911 GT3 RS has an F1-style rear wing, where the fixed wing carries a flap that tilts up to allow air to pass through it.
Rally cars use fixed wings, as they prioritise cornering over top speed because of the number of corners. We can clearly see this in homologated road cars such as the Subaru WRX STI and the Mitsubishi Lancer Evolution. However, fixed rear wings do not appear only on homologation-special road cars: the Mercedes-AMG A45 S has a small rear wing, for example, as does the Honda Civic Type R.
A hybrid car offers the best of both worlds: the instant response of an electric car and the top-end speed and range of a petrol car. In F1, energy otherwise lost under braking is recovered and deployed back as power; this is called the Energy Recovery System (ERS). An example of a road car with ERS is the Porsche Taycan. (McDee, 2022) Porsche has a World Endurance Championship team which helps its R&D for road cars, and, like many other brands, it has found that energy recovery is one of the most efficient ways for a car to run.
In both circuit racing and rally racing, the cars are equipped with a sequential gearbox rather than a stick-shift manual or an automatic. A sequential gearbox is quicker to use: it is more accurate than an automatic (if used correctly) and quicker than a manual, because paddle shifters do not require the driver to take their hands off the steering wheel and gears are easy to change. (Unknown, Unknown) In road cars, paddle shifters are quite normal in higher-performance models, even in cheaper variants such as the A45, because it is safer not to take your hands off the wheel. If you are racing with these cars, the paddle shifters could save fractions of a second, as it is easier to shift gear. Motorsport aided the development of paddle shifters because time lost between gear changes slowed the cars down.
A difference between F1 and road cars lies in the transmission: where road-going cars with paddle shifters typically have a dual-clutch transmission (DCT), F1 cars use a single carbon multi-plate clutch with a sequential gearbox. Unlike road-going cars, they run straight-cut gears that combine with a clever Engine Control Unit (ECU) for a smoother and quicker shift. Road cars run a synchromesh gearbox, with gears that are cut at an angle and use a synchroniser ring to match gear speeds between shifts. (Dobie, 2024)
Figure 2: The difference between helical/synchromesh (bottom) and straight cut gears (top).
Turbochargers first entered motorsport in 1952 with the Cummins Company’s 6.4L diesel Indy 500 engine. (Writer, 2018) However, the intake did not filter the air, debris got inside the engine, and the car retired from the race. Turbos use exhaust gas to spin a turbine, compressing the intake air so that it is denser, which enables the engine to make more power. Insufficient exhaust gas causes turbo lag. Turbos are common in road cars like the Golf GTI, GR Yaris and Honda Civic Type R, as they increase power without ruining fuel economy, helping the cars pass ever-stricter emissions rules while delivering the power consumers want. They are also easy to purchase and install.
Suspension serves vastly different purposes in racing and commercial use. In circuit racing such as F1 or WEC, the manufacturers’ main concern is a stable aerodynamic platform that delivers a constant level of downforce. If the platform moves, the air underneath the floor at the back of the car speeds up and generates extra downforce unevenly, and that imbalance may cause the driver to spin off. This differs from road cars, where comfort is the main priority; this is done by making sure the suspension soaks up bumps and potholes. In racing cars, the driver is not given the comforts of a road car, only as much comfort as they need to drive. In rally cars, however, the suspension must endure gravel, dirt and snow, so it must be very rugged and must absorb the impacts from jumps. They do this by making each wheel independent of the others, so the wheels can adapt easily to their surroundings. Even though there are a vast number of homologation specials, none of them use rallying suspension because of how extreme the sport is.
The tyres play a big part in a car’s cornering because they are what contacts the ground; this is called mechanical grip. Slick tyres are the standard in drag racing: with no grooves, the contact with the ground is greater, so there is more mechanical grip. Even though slick tyres are not road legal, they give tyre companies a good understanding of how to make a summer tyre for commercial use in dry conditions. The grooves can be thinner and shallower, as there is not much water to push away, which means a larger contact area with the tarmac and therefore more grip. An example of a high-performance summer tyre is the Michelin Pilot Sport Cup 2R. This tyre has very few grooves and is meant for dry track days; it was used on the Mercedes-AMG ONE to break the Nürburgring Nordschleife road-legal record.
In rallying, the sidewalls of the tyres need to be strong because loose gravel could puncture the tyre, and the tyres are also studded for better grip in snow and particularly on ice. Most road-legal snow tyres are restricted to 1-millimetre studs to avoid damage to the tarmac. This experience helps manufacturers create winter tyres that can withstand freezing conditions.
The steering wheel plays a crucial role in performance optimisation: in circuit racing, all the controls are on the steering wheel. This makes it safer for drivers and quicker to switch between different power or battery modes. These features are now integrated into most modern cars, which put climate-control or radio buttons on the wheel; this is safer because the driver does not have to take their hands off the wheel and can keep looking at the road. Such controls have been implemented in cars since around 2007.
In the 1950s and 1960s, when F1 first came into the world as a recognised motorsport, F1 cars were similar to road cars. Of course, there were differences between the F1 cars and normal cars, but most components and fuel sources were the same. In 1950 there were very few fuel regulations; the FIA’s 1950 rules did not even specify which fuels to use. (FIA, 1950) At the start of F1, a lot of teams were using fuel mixtures containing benzene and methanol. (Nyberg, 2000)
In 2018 the first car was made from bio-components, after F1 had already used bio-components such as bioplastics. (biopolylab, 2024) The reason this car could be made was the testing, and the reassurance from F1, that it could work. The F1 teams had been instructed to use 5.75% bio-components in their fuel; however, this stopped once other, better sustainable fuel choices became available.
At present, cars emit more carbon than is captured in producing their fuel, so they are not carbon neutral. The biggest impact we have seen in the UK is E10 fuel, which was introduced after motorsport had trialled it successfully and which is now the standard petrol in the UK. (Transport, 2021)
Benzene is a light-yellow liquid and it is harmful to humans, as exposure leads to a higher chance of leukaemia. (Unknown, 2024) This was extremely dangerous for the mechanics. Methanol is similarly hazardous: it can cause CNS depression (when the body’s neurological functions slow down) and potential blindness, and it is dangerous whether inhaled or ingested. (GOV, n.d.) This also applied to normal cars of the era: their fuel contained smaller amounts of these dangerous substances, but it was still hazardous.
Moving on to the future, F1 knows it must make a change, so it will introduce carbon-neutral fuels. Carbon-neutral fuels work in the sense that the amount of carbon emitted when the fuel is burned equals the amount captured when the fuel is made, so no net carbon is added to the atmosphere. The new F1 fuel could have an enormous impact on the automobile industry, as F1 has said that its cars will run on 100% sustainable, carbon-neutral fuels from the 2026 season. (Barreto, 2022)
When cars were first introduced, they were mainly made with steel bodies, and to this day a lot of cars are made from steel because it is one of the best materials for a car in terms of cost, safety and reliability. However, there are some materials that are better.
Carbon fibre is arguably the best material from which to make a car: it is up to ten times stronger than steel while being five times lighter. (unknown, 2024) Carbon fibre was first used in the 1981 Formula 1 season; today it is the most used material in Formula 1 cars, and it has been used in many regular road cars because of its extremely good safety capabilities. (Mackenzie, 2011) Examples of road cars that use carbon fibre at a relatively low price are the Alfa Romeo 4C and the BMW i3; the 4C has carbon fibre in its chassis for safety. (eurocompulsion, 2016) There is one drawback, though: it is currently extremely expensive to make, and companies cannot profit from selling cars made from carbon fibre because of it.
Magnesium was important in the early days of motorsport: first used in the 1920s, it was one of the most used materials. About a decade later, magnesium began to appear in commercial vehicles such as the Volkswagen Beetle, which contained about 20 kg of the material. (Association, unknown) Magnesium was preferred over steel because it reduces weight and, like carbon fibre, offers a better strength-to-weight ratio. (Keronite, 2023) One modern example of magnesium in use is the Land Rover Defender, which makes clear use of the material in its front bumper. (Association, unknown)
One of the most used safety features is the roll cage: a specifically engineered structure that protects the occupants in the case of a rollover. (Wikipedia, 2024) When cars started to get faster in the 1970s, the FIA, which controls the rules of motorsport worldwide, mandated in 1971 that all race cars must have a roll cage or roll bar. (Unknown, 2023) Later, in 1989, the Mercedes-Benz R129 introduced a system that automatically deploys a roll bar whenever the car rolls over. Many other cars adopted this system, such as the Peugeot 307 CC, Volvo C70, Jaguar XK and many more. (Wikipedia, 2024)
Another widely used safety feature is the anti-lock braking system (ABS). ABS makes sure the wheels do not lock under heavy braking, greatly reducing the risk of skidding and in return offering better control of the car. (unknown, 2023) This feature was not introduced by motorsport: it was first used in UK aviation and then on motorbikes. It was introduced into racing cars in the 1960s and then into normal cars, and it is now one of the most common features in a car. Motorsport may not have designed ABS, but it did help its development, through manufacturers wanting to reap the benefits of the technology before it was banned from racing.
In conclusion, there are many motorsport innovations that transferred over to the commercial market for the better. Whether performance-based, safety-based or comfort-based, motorsport has shaped the way cars are produced and optimised. It has also transformed other industries, such as public transportation, with new fuels and further research into electrical power, while keeping the roots of F1 intact.
However, some of the innovations have arguably been for the worse, or not worth putting into cars: sports suspension, for example, is stiffer and less comfortable than regular suspension (unknown, unknown), and materials such as carbon fibre are rare and expensive to produce. Despite this, the car industry mainly takes the useful things which make our lives easier, and that leads to cars that are incredibly good and only getting better.
We will never know how long it would have taken the automotive industry to devise these developments without the help of motorsport, or indeed whether these ideas would ever have come about without the test bed of motorsport and the hours of R&D put into the cars. Taking in all the arguments and examples, we can safely conclude that the motorsport R&D process has helped to advance the automotive industry in many different ways.
1. Armstrong, M., 2022. How the World Commutes. [Online] Available at: https://www.statista.com/chart/25129/ gcs-how-the-world-commutes/[Accessed 15 2 2025].
2. Association, I. M., unknown. IMA. [Online] Available at: https://www.intlmag.org/page/app_automotive_ima
3. Barreto, L., 2022. formula1. [Online] Available at: https://www. formula1.com/en/latest/article/formula-1-on-course-to-deliver100-sustainable-fuels-for-2026.1szcnS0ehW3I0HJeelwPam
4. biopolylab, 2024. [Online] Available at: https://biopolylab. com/2020/08/bioplastics-in-the-automotive-industry/
5. Brown, R., 2023. Electric car mpg: Top brands compared. [Online] Available at: https://www.energysage.com/ electric-vehicles/mpg-electric-vehicles/ [Accessed 11 1 2025].
6. Dobie, Z., 2024. Are Formula 1 cars manual?. [Online] Available at: https://www.drive.com.au/caradvice/ are-f1-cars-manual/[Accessed 16 2 2025].
7. eurocompulsion, 2016. ALFA ROMEO 4C CARBON FIBER CHASSIS INFORMATION. [Online] Available at: https:// shopeurocompulsion.net/blogs/technical-articles/ alfa-romeo-4c-carbon-fiber-chassis-info
8. FIA, 1950. historicdb.fia. [Online] Available at: https://historicdb.fia.com/sites/default/files/ regulations/1481637297/annexe_c_1950_gbr_web.pdf
9. Gitlin, J. M., 2020. These streamliners are the world’s most aerodynamic cars. [Online] Available at: https://arstechnica. com/cars/2020/05/teardrops-and-wind-tunnels-a-lookat-the-worlds-most-aerodynamic-cars/#:~:text=As%20 far%20as%20circuit%20racing,mile%20Mulsanne%20 Straight%20in%20mind. [Accessed 17 December 2024].
10. GOV, U., n.d. GOV.UK. [Online] Available at: https:// www.gov.uk/government/publications/methanolproperties-incident-management-and-toxicology/ methanol-toxicological-overview#:~:text=Methanol%20 is%20toxic%20following%20ingestion,are%20 subsequent%20manifestations%20of%20toxicity.
11. Grmusa, N., 2023. What Does Kompressor Mean in Mercedes-Benz Cars? [Online] Available at: https://carpart.com.au/blog/what-does-kompressor-mean-in-mercedes-benz-cars [Accessed 5 1 2025].
12. Harrison, T., 2021. The Mercedes EQS is the most aerodynamic series production car ever. [Online] Available at: https:// www.topgear.com/car-news/electric/mercedes-eqs-mostaerodynamic-series-production-car-ever[Accessed 22 2 24].
13. Ingram, A., 2024. Volkswagen Golf - MPG, CO2 and running costs. [Online] Available at: https://www.autoexpress. co.uk/volkswagen/golf/mpg#:~:text=Officially%2C%20 the%201.5%20TSI%20petrol,similarly%20achievable%20 on%20longer%20runs.[Accessed 16 December 2024].
14. Keronite, 2023. [Online] Available at: https://blog. keronite.com/selecting-the-right-lightweightmetal#:~:text=Magnesium%20is%20extremely%20 light%3A%20it,savings%20in%20applications%20using%20it.
15. Mackenzie, I., 2011. BBC, carbon fibres journey from racetrack to hatchback. [Online] Available at: https:// www.bbc.co.uk/news/technology-12691062
16. Mansell, S., 2023. When Formula 1 Used AEROPLANE Wings, s.l.: s.n.
17. McDee, M., 2022. Porsche details Taycan energy recovery systems. [Online] Available at: https://www.arenaev. com/how_exactly_porsche_taycan_generates_energy_ when_braking-news-392.php#:~:text=Porsche%20 equipped%20its%20Taycan%20with,while%20still%20 maintaining%20great%20efficiency. [Accessed 22 2 2025].
18. Mitchell, S., 2022. Tech Explained: Formula 1 MGU-H. [Online] Available at: https://www.racecar-engineering.com/articles/ tech-explained-formula-1-mgu-h/[Accessed 16 2 2025].
19. Myintree, C., 2022. Porsche's GENIUS New Turbo Design. [Sound Recording] (Overdrive).
20. Nyberg, R., 2000. atlasf1. [Online] Available at: https:// atlasf1.autosport.com/evolution/1950s.html
21. Rawlins, P., 2024. These are the 12 most aerodynamically efficient EVs on sale today. [Online] Available at: https:// www.topgear.com/car-news/electric/these-are-12-mostaerodynamically-efficient-evs-sale-today [Accessed 11 1 2025].
22. review, G. R., 2024. Porsche 911 GT3 RS review. [Online] Available at: https://www.topgear.com/car-reviews/ porsche/911-gt3-rs [Accessed 23 2 2025].
23. Tallodi, J., 2024. 10 of the most aerodynamic cars ever made. [Online] Available at: https://www.carwow.co.uk/best/mostaerodynamic-cars#gref [Accessed 17 December 2024].
24. Transport, D. f., 2021. E10 petrol explained. [Online] Available at: https://www.gov.uk/guidance/e10-petrol-explained
25. Unknown, 2022. Mercedes-Benz Vision EQXX Just Smashed Its Own EV Range Record. [Online] Available at: https:// carbuzz.com/news/mercedes-benz-vision-eqxx-justsmashed-its-own-ev-range-record/ [Accessed 16 2 2025].
26. unknown, 2023. [Online] Available at: https://www. rac.co.uk/drive/advice/road-safety/what-are-antilock-brakes-abs-and-how-do-they-work/
27. Unknown, 2023. [Online] Available at: https:// mightycarmods.com/blogs/news/the-history-of-rollcages?srsltid=AfmBOorVgxm6ZdlX-3j00uhzJhqKNgMMlrYjrRqk8RWV7saBUYVgt3jM
28. Unknown, 2023. The Most Efficient Car Ever Created? Mercedes EQXX. s.l.:s.n.
29. Unknown, 2024. kbc. [Online] Available at: https:// www.kbcylinders.com/news/comparing-carbonfiber-and-steel-durability-and-weight/
30. Unknown, Unknown. Manual vs. automatic transmission. [Online] Available at: https://www.progressive.com/answers/ manual-vs-automatic-transmission-cars/ [Accessed 22 2 2025].
31. Unknown, unknown. mercedes. [Online] Available at: https://www.mercedes-benz.com/ en/innovation/concept-cars/vision-eqxx/
32. Unknown, 2024. NCI. [Online] Available at: https://www.cancer. gov/about-cancer/causes-prevention/risk/substances/benzene
33. Unknown, n.d. biopolylab. [Online] Available at: https:// biopolylab.com/2020/08/bioplastics-in-the-automotive-industry/
34. Wikipedia, 2024. [Online] Available at: https:// en.wikipedia.org/wiki/Roll_cage
35. Writer, S., 2018. History of the Turbocharger. [Online] Available at: https://grassrootsmotorsports.com/articles/ history-turbocharger/ [Accessed 28 12 2024].
This shows us how much cars are used across the world. As we can see, cars easily top every list, which shows that cars are incredibly important to most people. From: Chart: How the World Commutes | Statista
In this diagram, we can see that at every angle of attack, both downforce and drag went up as speed went up. This means that a reference speed is needed when quoting downforce. There is not yet a set standard, but most companies specify the speed at which they make a claim about downforce; for example, the GT3 RS generates 860 kg of downforce at 177 mph. (review, 2024) From: Relationships between vehicle speed and downforce and drag | Download Scientific Diagram
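Downforce, like drag, grows with the square of speed, so a quoted figure such as 860 kg at 177 mph can be scaled to other speeds. The short sketch below illustrates this: assuming a standard sea-level air density of 1.225 kg/m³ (an assumption, not a figure from the essay), it backs out the implied lift-coefficient-times-area term from the GT3 RS claim and shows that halving the speed quarters the downforce.

```python
# Downforce follows F = 0.5 * rho * Cl * A * v^2, so it scales with v squared.
# The GT3 RS figure (860 kg of downforce at 177 mph) is taken from the essay;
# the air density is an assumed standard sea-level value.

RHO = 1.225  # kg/m^3, assumed air density
G = 9.81     # m/s^2, gravitational acceleration (to convert kg to newtons)

v_claim = 177 * 0.44704   # 177 mph in m/s
force_claim = 860 * G     # 860 kg of downforce expressed in newtons

# Implied Cl * A from the claimed figure:
cl_a = force_claim / (0.5 * RHO * v_claim**2)

# At half the speed, downforce drops to a quarter:
v_half = v_claim / 2
force_half = 0.5 * RHO * cl_a * v_half**2
print(round(force_half / G))  # 215 kg, exactly a quarter of 860 kg
```

This squared relationship is why downforce claims are meaningless without a stated speed.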
This essay was commended at the Junior ILA Celebration Evening
Like the story of David and Goliath, Bill Ackman’s company Pershing Square Capital Management was always the underdog compared with financial giants such as Citadel and Jane Street. Yet the ambitious Harvard MBA graduate refused to be deterred and raised his first pool of capital at the age of 26. He made strategic activist positions and trades that have now amassed him a personal net worth of around $9 billion at the time of writing. He may be a controversial and disliked figure to some, and to others an inspiration, but whatever one believes, he will go down as one of the greatest investors ever to have walked the planet. So, what acquisitions did this investing mogul make to establish his own financial legacy?
Inspired by Benjamin Graham’s The Intelligent Investor and by Warren Buffett, Ackman pursued a career in investing using the principles preached by the two. These principles led Ackman to look for companies with predictable cash flows, healthy balance sheets, good long-term competitive advantages and reliable management. However, to compete with larger and more established hedge funds, Ackman realised that he needed a much higher rate of return than the strategies he had cultivated from Buffett and Graham would deliver. In the modern world, people have learnt to find these reliable businesses, meaning such companies are typically overvalued, preventing investors such as Ackman from maximising their returns. This realisation led Ackman to acquire shares in businesses that were undergoing periods of financial hardship but were ultimately recoverable, ensuring he purchased the asset at an undervalued price. His first attempt at implementing this strategy was with Wendy’s, a fast-food chain that was in financial difficulty. Ackman noticed that Wendy’s and its subsidiary Tim Hortons were two different business models and believed that if they operated independently, both would have a more prosperous future. To achieve this, Ackman needed to pressure the management into executing a spin-off. Since Ackman was a minority shareholder with access to only $3 million in capital, the CEO did not answer his calls. Ackman refused to accept rejection and approached Blackstone, which at the time had an investment banking arm, to obtain a fairness opinion on what Wendy’s would be worth after the spin-off. Blackstone reported that Wendy’s would be worth about 80% more. Ackman then emailed Blackstone’s report to Wendy’s, and just six weeks later Wendy’s and Tim Hortons were two separate companies operating individually, just as he had desired. Ackman later sold his shares for a hefty profit and went on to develop the activist investing style he is famous for today. What Ackman did with Wendy’s demonstrated not only his ability to spot opportunities that few others can, but also his ability to turn them into reality.
Much like Warren Buffett, Ackman’s investment research revolves heavily around finding a company’s moat. A moat is simply a long-term competitive advantage, and it is vital for fundamental investors to consider. If a company has a strong, durable moat, it will be less affected by economic turmoil and more immune to disruption from competitors. This is why fundamental investors, though there are exceptions, typically stay away from software-related businesses: the low barrier to entry allows small technology companies to disrupt established firms within a short period of time. Microsoft-backed OpenAI and Google’s Gemini arguably had the best products in their industry until DeepSeek, a much smaller Chinese company, created a far cheaper yet highly capable product. DeepSeek’s product caused chaos amongst Wall Street shareholders, leading to a crash in big technology stocks such as Google, Microsoft and Nvidia and proving just how susceptible the technology industry is to disruption. To uncover whether a company has a durable moat, Ackman reads its publicly available SEC filings, such as 10-K or 8-K reports, as well as talking to experts in the industry. In Warren Buffett’s own words: “The key to investing is determining the competitive advantage of any given company and, above all, the durability of the advantage”. Since Ackman strongly believes in investing in companies with a strong moat, he generally avoids commodity businesses, as the only way they can outperform their competitors is by offering products at cheaper prices than the rest of the market, sacrificing profit margins.
This section presents case studies of two of Ackman’s major wins, revealing the investment strategies and principles he has utilised to reap such returns.
Chipotle is one of his most notable wins: Ackman purchased shares in the third fiscal quarter of 2016 and still owns 28.82 million shares worth approximately $1.7 billion, an overall profit of around 660% at the time of writing. Ackman was initially intrigued by Chipotle when its stock dropped by around 50%. The fast-casual restaurant chain was undergoing a food-safety crisis, as a lack of vital systems had led to customers falling sick. Ackman also noticed that Chipotle was being managed poorly, making sloppy decisions without essential systems in place. Ackman then carried out fundamental analysis to see whether Chipotle had a strong and durable moat. His finding was that Chipotle had a very high barrier to entry and was not vulnerable to disruption from competitors. The moat that Chipotle has is unlike that of other fast-food chains: they make food fresh in front of the consumer’s eyes and use very high-quality ingredients, while simultaneously offering food at relatively low prices. This is very hard to replicate, due to the numerous relationships and negotiations Chipotle has made and maintained for decades with small farmers to obtain such high-quality food at a low price. Ackman accepted that Steve Ells, the founder of Chipotle, had done an excellent job of getting the company to where it was when he acquired shares, but realised that if Chipotle were to expand, it needed a more experienced CEO to clean up the mess the company was in. Ackman pressured Chipotle’s board to replace Steve Ells with Brian Niccol, which turned out to be a great appointment: Niccol implemented the required systems and increased Chipotle’s profitability, thus increasing shareholder value. This investment is a classic example of one of Ackman’s most used strategies: acquiring shares of a poorly managed company with a durable moat and using activism to fix the company’s issues and improve its profits, directly increasing shareholder value.
According to Investopedia, Ackman’s investment in General Growth Properties will go down as “one of the best hedge fund trades of all time”, turning $60 million into an astonishing $1.2 billion. In 2008, General Growth Properties’ share price fell from around $60 to $0.34. This was due to the management’s aggressive use of debt: short-term loans funded new malls, and those malls were then used as collateral to borrow even more money. When the devastating financial crisis of 2008 hit, it was very hard for the company to refinance, since lenders were unwilling to replace the existing loans. The company was left drowning in approximately $27 billion of debt and on the verge of bankruptcy. All the board members of GGP except Ackman were talking about filing for bankruptcy under Chapter 7 of the bankruptcy code, which would have forced the company to liquidate its assets and wiped out the shareholders. Ackman did not accept that this had to be the company’s fate: the malls’ net operating income had been increasing every year, and the only obstacle was the debt they were in, suggesting GGP had not made fundamental mistakes so much as been put in this situation by the market. Therefore, Ackman argued that it would be better for the company to file for bankruptcy under Chapter 11, as the shareholders would then retain their shares and the company’s debt could be restructured. GGP went to court, where the judge agreed to a Chapter 11 filing after GGP proved that its assets had been worth more than its liabilities ahead of the crisis, implying that they would be worth more than the liabilities again once the crisis was over. The company went on to restructure its debt, extending the maturity dates of its loans, which allowed it to keep its real estate and its shareholders to keep their stock. The stock price immediately went up from $0.30 to $1 after the debt restructuring, and GGP continues to operate to this day, having increased its net profit margins and paid off most of its existing debt thanks to Ackman’s financial sharpness. What this investment shows about Ackman’s strategy is that, similarly to Warren Buffett, he sticks to industries he is well-versed in. Ackman understands real estate like the back of his hand, having worked for his father, whose company supplied mortgages to real estate developers and investors. His lateral idea to file for bankruptcy under Chapter 11 therefore stemmed from his thorough understanding of real estate and of methods for managing excessive amounts of debt.
In addition, when Ackman announced he was to acquire a stake in a company that was about to file for bankruptcy, many of his clients were almost certain he would lose their money. Ackman still went in with $60 million, demonstrating the confidence that comes from a detailed understanding of his investments.
Warren Buffett claims that the first step to investing is not losing money. At the same time, what separates a great investor like Ackman from the average investor is not only a difference in knowledge but also that Ackman and other top investors have mastered the art of being indefatigable. A huge part of Ackman’s investment strategy is simply being persistent and persevering. The worst investment decision of Ackman’s career was in a company called Valeant Pharmaceuticals. He entered his position at around $180 per share, and it was reported that he fully exited the position in 2017 for a loss of around $4 billion.
Ackman made an investment that deviated from his core principles, since he did not fully understand the industry or the business. The pharmaceutical industry is a very complex and volatile one, as Ackman was to find out. Valeant Pharmaceuticals’ business model was to acquire other companies in its industry, or their drugs, and to increase the prices of those drugs after ensuring they were protected by patents. However, the management was influenced by another activist investor on the board and made poor decisions, which meant the market no longer trusted them. When shareholders sold their stock, it prevented the business from acquiring further low-cost drugs, leading to a further decline in the stock price. The Valeant loss, however, was not the worst of it, as it led other hedge funds to believe that Ackman was going out of business and would be required to liquidate all his holdings. Other hedge funds began to short sell the holdings listed in Ackman’s portfolio. To make matters worse, Elliott, another activist hedge fund, acquired a 25% stake in Ackman’s public company to push it towards liquidation so they could profit from short selling the stocks held in its portfolio. Only a small percentage of people would attempt to continue operating in this situation, yet Ackman’s stubbornness kicked in and he refused to be shut down by his competitors. Ackman, a loyal customer of JP Morgan, borrowed $300 million to regain control of his public company and prevent activist investors from forcing him to liquidate. Ackman learnt his lesson about deviating from his core principles, reported back to his clients on plans for future investments, and since then the trajectory has mostly been uphill. Whether one invests or not, a lesson can be learned from Ackman: when everything seems to fall apart, rationality is required more than ever, and one must not simply panic and crumble under pressure but attempt to find a judicious solution. In the case of Ackman, had he simply given up, he would be far from the successful investor we know today.
At a foundational level, Ackman’s multi-billion-dollar investment strategy involves seeking out a poorly managed business with a durable moat, in an industry he understands, and using activist methods to fix its problems. Ackman has also underscored the importance of resilience in the finance industry and of not allowing failures to define one’s ability to invest. As stated before, while his public stance on social issues may be controversial, there is no denying that he will go down as one of the smartest and most inspirational investors to have ever lived.
This essay was commended at the Junior ILA Celebration Evening
“Learn from yesterday, live for today, hope for tomorrow.” – Albert Einstein
Given the diminishing supplies of fossil fuels on Earth, and the lack of a high-output, reliable method of obtaining energy, the prospect of using a single, 15-metre-diameter device to power an entire city (Editors of Wikipedia, n.d.) has attracted the attention of several countries, tens of billions of dollars of investment per device, and already over six decades of research. (User, 2023) This essay aims to discuss the feasibility of using tokamak fusion reactors as a sustainable energy source in the near future.
Nuclear fusion is the process by which smaller, unstable atomic nuclei are combined into a larger, more stable nucleus (and sometimes extra high-speed nucleons, which carry most of the energy from the reaction). It is important to distinguish this from nuclear fission, which instead involves splitting large, unstable atomic nuclei apart. The fusion reaction currently of most interest to researchers is that of fusing deuterium and tritium (both isotopes of hydrogen) into helium-4 and a neutron (called the D-T reaction), (US Department of Energy, n.d.) due to its low activation energy (the thermal energy required for the nuclei to fuse) compared to other reactions and its high energy output relative to input.
On paper, nuclear fusion is on average 4 million times more efficient than fossil fuels at producing energy, and 4 times more so than fission, (ITER, n.d.) with the added benefit that fusion produces more energy than is put in (due to some of the reactants’ mass being converted to energy, as described by the equation E = mc²), meaning that it can become a self-sustaining process.
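As a rough check of the energy yield of the D-T reaction itself, the released energy can be computed from the mass defect. The atomic masses below are standard textbook reference values, not figures from the essay’s sources:

```python
# Energy released by one D-T fusion event, from the mass defect (E = mc^2).
# Masses are standard reference values in unified atomic mass units (u).
M_DEUTERIUM = 2.014102  # u
M_TRITIUM   = 3.016049  # u
M_HELIUM4   = 4.002602  # u
M_NEUTRON   = 1.008665  # u

U_TO_MEV = 931.494  # energy equivalent of 1 u, in MeV

mass_defect = (M_DEUTERIUM + M_TRITIUM) - (M_HELIUM4 + M_NEUTRON)
energy_mev = mass_defect * U_TO_MEV

print(f"Mass defect: {mass_defect:.6f} u")
print(f"Energy released: {energy_mev:.1f} MeV")  # ~17.6 MeV per reaction
```

The roughly 17.6 MeV per reaction, against the few eV per molecule released by burning a fossil fuel, is where comparisons of "millions of times more efficient" come from.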
There are two main subcategories of fusion reactors – magnetic confinement fusion (MCF) and inertial confinement fusion (ICF). (ricketycricket, 2023)1 Currently, MCFs are the most promising candidate for a feasible fusion reactor, whilst ICFs are used more as general testing facilities for a wide range of fusion technologies. (Editors of Wikipedia, n.d.)
Of all MCF designs, the tokamak, developed by the Soviet Union in the 1960s, is leading the efforts in fusion energy research; for example, the Experimental Advanced Superconducting Tokamak (EAST) in China recently set a new world record2 for plasma confinement time of 17 minutes and 46 seconds on January 20th this year, (Writer, 2025) over double the previous record of 6 minutes and 43 seconds, (Pester, 2025) which was also held by EAST. Whilst alternative MCF designs exist, tokamaks are by far the most researched and have the most operation data, as a result of six decades of experimentation. Given the current goal of simply getting a functional fusion reactor up and running as quickly as possible, tokamaks are the best option moving forward.
A tokamak works by injecting the gaseous hydrogen isotope fuels into a toroidal vacuum chamber and heating them to 150 million °C, (EUROfusion, n.d.) ten times the temperature at the core of our Sun, (Editors of Wikipedia, n.d.) using various external sources, which can vary greatly between reactors. These conditions are necessary because these reactors are essentially creating an artificial star on Earth, and they must make up for being unable to replicate the extreme pressures at the core of a star by having even more extreme temperatures. (Lea, 2022)
1 (User, 2023)
2 This record was held by China at the time of writing (February 7th); however, the French Alternative Energies and Atomic Energy Commission (CEA) announced on February 20th that they achieved a new plasma confinement time of 22 minutes in their WEST tokamak (Sharwood, 2025)
Under these conditions, electrons are stripped from their atoms and allowed to wander freely, leaving behind an electrically charged plasma. The nuclei are now able to combine in the plasma and fusion occurs; the produced neutrons gain kinetic energy from the reaction and escape unimpeded from the plasma due to their neutral charge. (User, 2024) (Aman, 2013) They go on to be absorbed on the walls of the vacuum chamber by neutron blankets, which transfer most of their kinetic energy to the vaporisation of water. Similarly to a conventional power plant, the produced steam drives a turbine connected to a generator, and electricity is produced. (Aman, 2013)
The world’s largest tokamak, the International Thermonuclear Experimental Reactor (ITER) in France, is under construction and expected to be operational by 2039. It will be able to contain a volume of plasma over ten times greater than other reactors (ITER, n.d.) and plans to be the first tokamak to achieve a burning plasma (a higher efficiency mode of the plasma). (User, 2025)
Although tokamaks are currently the most feasible fusion reactor design, there are nevertheless still several barriers to their widespread use. The five most limiting are: confining a pure, stable plasma; protecting the reactor from the extreme operational environment; heating the plasma to sufficient temperatures; achieving an energy output-to-input factor (Q) greater than 1; and securing a reliable supply of tritium to fuel the reaction. (User, 2025)
The first of these, confinement, is the hardest aspect of a tokamak’s design. It is threatened by turbulence and imperfections within the plasma itself (called magnetohydrodynamic (MHD) instabilities), which cause positive feedback loops that disrupt confinement. (User, 2025)
MHD instabilities come in five main forms. (User, 2025) Kink instabilities occur when plasma currents, the flow of ions within the plasma, are too strong, which deforms their shape and can lead to weakened confinement or a disruption (a sudden, complete loss of confinement). (User, 2025) Ballooning instabilities occur when higher-than-usual plasma pressure at the edge of the plasma causes some areas to bulge out, which allows energy to escape and reduces efficiency. (User, 2025) Tearing mode (TM) instabilities occur when plasma imperfections cause the magnetic field lines in the plasma to break and reconnect incorrectly, forming magnetic “islands” that allow some particles to leak through, which similarly allows energy to escape and reduces efficiency. (User, 2025) Edge-localised modes (ELMs) occur when particles are ejected from the plasma edge (the outermost layer of the plasma, with the least stable conditions), which can damage the first wall (the layer of the vacuum chamber in contact with the plasma). (User, 2025) (Editors of Wikipedia, n.d.) Finally, neoclassical tearing modes (NTMs) occur when the pressures in the core of a plasma cause bootstrap currents (self-induced currents) (Editors of Wikipedia, n.d.) to form, which give rise to similar issues to TM instabilities. (User, 2025)
Although most of these instabilities could technically be eliminated by removing the plasma currents, this is not feasible. Plasma confinement requires both a toroidal and a poloidal magnetic field (which can be thought of as the x and y axes respectively of the toroidal vacuum chamber); (Editors of Wikipedia, n.d.) the tokamak’s external magnets provide the former, but the plasma currents self-induce the latter. (User, 2025) Other MCFs, like stellarators, can operate without a plasma current by placing all the magnets necessary to confine the plasma externally. (User, 2025) While these designs offer more stable confinement, because everything needed to control the plasma is externally provided, they often sacrifice confinement and input-energy efficiency as a result, because the plasma currents’ self-induced magnetic fields are not exploited. (User, 2025)
To mitigate plasma instabilities, one of the simplest solutions is to elongate the vacuum chamber in one direction, so that its cross-section is more of an ellipse than a circle. (User, 2025) The flatter shape reduces kink and ballooning instabilities by distributing pressure more evenly, provides more magnetic stability from the better alignment of the plasma with the field lines, and increases the overall pressure of the plasma. Large tokamaks like ITER, JET (the Joint European Torus), and EAST all use an elliptical vacuum chamber for this reason. (User, 2025)
A second way to reduce the formation of plasma instabilities is to actively adjust the magnetic field live during operation. This method is too complex for humans to perform reliably, so researchers at the National Fusion Facility in California are investigating the use of AI deep reinforcement learning (DRL). (US Department of Energy, 2025) After training, their AI was able to take inputs from hundreds of places on the tokamak and respond by adjusting the magnetic field in real time. The further development of AI could revolutionise tokamak operation and facilitate the elimination of plasma instability.
Should a disruption occur from a failure to confine the plasma, there may be a sudden spike in thermal energy, powerful eddy currents from the loss of the plasma current, and/or the emission of high-energy electrons, all of which can damage the first wall. (User, 2025) Scientists at JET are experimenting with massive gas injections (MGIs), a gaseous mixture of noble gases and deuterium added to the vacuum chamber during a disruption to quench (rapidly cool and dissipate) the plasma, (Kruezi, et al., 2011) controlling the release of thermal energy, hindering the paths of the high-energy electrons, and preventing the formation of eddy currents. (User, 2025) ITER will use a technique called shattered pellet injection (SPI), which involves shattering cork-sized frozen pellets of hydrogen and neon and injecting the fragments into the vacuum chamber (ITER, n.d.) to achieve a similar but stronger effect than MGI.
Nevertheless, even with measures like MGI and SPI in place, the tokamak, especially the first wall, still requires thorough protection from the intense heat and high-energy particle flux of operation. To tackle this issue, tokamaks use an actively water-cooled tungsten divertor placed at the bottom and/or top of the first wall, which acts as a shield layer between the plasma and the first wall, as well as an exhaust pipe through which waste and impurities, like ash or unfused fuel, are extracted, thus increasing the thermal efficiency of the plasma. (ITER, n.d.) (Liu, et al., 2009) The divertor does not need to cover the entire first wall, because the temperature and particle flux are most intense at the top and bottom of the chamber (the thermal power there can reach up to 20 MW/m²), (ITER, n.d.) due to the magnetic field lines having a nonzero divergence. (User, 2025)
For the remainder of the first wall, a layer of neutron blankets to absorb the neutrons suffices; almost no highly damaging particles (like helium-4 nuclei or electrons) reach here, because the plasma does not physically touch the first wall during confinement, and charged particles cannot escape the magnetic field. The neutron blankets have the additional, aforementioned role of converting the kinetic energy of the neutrons bombarding them into thermal energy.
The successful confinement of the plasma depends on two systems that must work together – the cryogenic cooling and the auxiliary heating systems.
The several thousand tons of superconducting magnets (usually a niobium alloy) (User, 2025) that confine the plasma need to be cryogenically cooled to a few degrees above absolute zero (User, 2025) to preserve their properties of low resistance and high magnetic flux (ITER’s magnets can reach 13 T). (ITER, n.d.) As in other fields that require superconducting magnets, like the Large Hadron Collider (LHC), liquid helium is typically the go-to cryogenic fluid, (User, 2025) cycled to every area of the tokamak via a pipe network.
Tokamaks employ a wide range of methods to heat the plasma to the required 150 million °C. Neutral beam injection (NBI), the most flexible and reliable, involves shooting a beam of high-energy neutral particles into the vacuum chamber, which are ionised by the electrons in the plasma. Due to the unprecedented size of the NBI device needed for ITER, an entire testing facility (called the Megavolt ITER Injector and Concept Advancement (MITICA)) has been built in Italy to run experiments separately before integration. Other methods, like radio frequency (RF) heating and electron cyclotron resonance heating (ECRH) (which involves heating electrons using microwaves so that they transfer their thermal energy to the nuclei in the plasma), are more efficient and result in more stable plasmas than NBI, but their complexity means that it makes more sense to use NBI for the initial prototypes.
The ultimate aim of all these heating and cooling technologies, coupled with the divertor, is to achieve a burning plasma – the holy grail of tokamak operation. A burning plasma is one so hot that over 50% of its heating comes from the fusing particles within it; in the case of the D-T reaction, the particle in question is helium-4. This means that far less heating needs to be supplied by auxiliary methods, which increases Q. A burning plasma is so sought-after because the conditions required for it, described by the Lawson criterion as a sufficiently high temperature, density, and confinement time, are incredibly difficult to achieve, given the largely unsolved issue of plasma instability, which causes the plasma to lose significant amounts of energy with nothing in return.
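The 50% self-heating threshold connects directly to Q. In the D-T reaction, roughly 3.5 MeV of the 17.6 MeV released is carried by the helium-4 nucleus, which stays in the plasma and heats it, while the rest escapes with the neutron; these are standard textbook figures rather than numbers from the essay’s sources. A small sketch of the relationship:

```python
# Relationship between Q (fusion power out / auxiliary heating in) and the
# burning-plasma condition for the D-T reaction. Illustrative sketch only.
ALPHA_FRACTION = 3.5 / 17.6  # share of fusion energy carried by helium-4,
                             # which stays in the plasma and heats it (~20%)

def alpha_heating_fraction(q: float) -> float:
    """Fraction of total plasma heating supplied by fusion alphas at a given Q.

    Auxiliary heating power is taken as P_fus / Q; alpha self-heating is
    ALPHA_FRACTION * P_fus.
    """
    p_fus = 1.0                       # normalised fusion power
    p_aux = p_fus / q                 # external (auxiliary) heating
    p_alpha = ALPHA_FRACTION * p_fus  # self-heating from alphas
    return p_alpha / (p_alpha + p_aux)

for q in (1, 5, 10, 50):
    print(f"Q = {q:>2}: alpha heating supplies {alpha_heating_fraction(q):.0%}")
```

Running this shows self-heating crossing the 50% mark at roughly Q = 5, which is why Q = 5 is often quoted as the burning-plasma threshold.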
Another big issue is that of acquiring the fusion fuels. One of the hydrogen isotopes required, deuterium, can be industrially extracted from seawater very easily and cost-effectively. (Arnoux, 2011) Tritium, the other isotope, is extremely rare in nature (its natural abundance is 0.0000000000000001%), (Editors of Encyclopedia Britannica, n.d.) but the most feasible option is to produce it by “breeding” it from lithium using the neutrons released by the main fusion reaction. (ITER, n.d.) JET currently uses tritium breeding, and ITER will join it soon. As it happens, tokamaks already have a fully functional, modular neutron absorption system – the neutron blankets – so some of these blankets are simply replaced with tritium breeding modules. (Editors of Wikipedia, n.d.) (ITER, n.d.)
Since tritium is radioactive, it must be disposed of properly. Tritium has a half-life of 12.32 years and decays via beta minus emission into helium-3, which is stable. This puts it in the category of low-level nuclear waste, (User, 2025) especially given that the amounts of tritium needing disposal will be in the tens of grams, (UK Atomic Energy Authority, n.d.) and a few millimetres of aluminium is enough to block all the radiation. Unreacted tritium from the vacuum chamber can be extracted using the divertor and either cycled back into the chamber or disposed of in the above way.
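The decay arithmetic behind this classification is straightforward; a sketch of the standard radioactive decay law with the half-life quoted above:

```python
# Fraction of a tritium sample remaining after t years, given its 12.32-year
# half-life (decay law: N/N0 = (1/2) ** (t / half_life)).
HALF_LIFE_YEARS = 12.32

def fraction_remaining(t_years: float) -> float:
    return 0.5 ** (t_years / HALF_LIFE_YEARS)

for years in (12.32, 50, 100, 123.2):
    print(f"after {years:>6.1f} years: {fraction_remaining(years):.4%} remains")
```

After ten half-lives (about 123 years), less than 0.1% of the original sample is left, which is why tens of grams of tritium is a far more tractable waste problem than long-lived fission products.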
This safe disposal protocol means that practically zero contamination will reach the environment, a huge improvement on the long-term storage required for fission reactors’ nuclear waste. Since no other fuels used in tokamaks are radioactive, and the plasma does not get anywhere near hot enough to form heavier, fissile elements, fusion could be the first energy source to be fully green and sustainable, but without the various downsides that typical sources of this nature come with.
The extraction of said fuels poses minimal risk to the environment too. Deuterium is extracted from seawater, tritium is produced in-situ, and lithium is largely produced from seawater (although some of it is still mined). (Editors of Wikipedia, n.d.) Therefore, fusion energy eliminates the environmental and habitat damage issues associated with fossil fuels and hydroelectricity, whilst remaining free from the fetters of fuel scarcity and weather dependency. Moreover, fusion energy could catalyse the completion of the shift from obtaining lithium via mining to via the electrolysis of seawater, which reduces potential worker exploitation and improper disposal of waste in poorer countries.
The operating costs of tokamaks will be far lower than those of other sources; they require little to no human intervention to operate, given the aforementioned role of AI in plasma confinement (there will be fewer salaries to pay, and the cost of the reactor will be recovered over time). Furthermore, cheap production would result in lower energy prices for consumers and may begin a U-turn in the negative public opinion of nuclear-based energy sources (a legacy of fission reactor meltdowns).
Finally, as has been mentioned, fusion is one of the most efficient energy sources that humans are currently capable of harnessing: 4 million and 4 times more efficient than fossil fuels and fission respectively. (ITER, n.d.) Hence, the amount of fuel required (and consequently its cost) is also brought down.
In conclusion, tokamaks currently face several barriers to their widespread use, including plasma instability, tritium breeding, and achieving Q > 1. Once the prototypes have overcome these challenges, the use of tokamaks to generate sustainable energy could pave the way for innovations in other fields of science. However, it is unlikely that tokamaks will be ready to take on this role in the next few years, given the need to sway public opinion in the short term and the high prices that come from the technology's novelty. Consequently, tokamaks likely won't be able to contribute to current global energy issues, like rising energy costs or net zero, but can still hopefully contribute positively to a future of clean, cheap, and sustainable energy after one or two more decades of research.
This essay won the Junior ILA award in the Third Form category
Back in October of 2024, my family and I were on holiday. Accompanying us was the game of Dobble, from which my research began. In particular, my father and I became quite intrigued by the nature of the game, the ‘mono-match’ principles of it, and how it was constructed. Over the course of my investigations, I discovered an unexpected flaw within the game!
Included in a tin of Dobble are 55 cards, each with 8 symbols illustrated on it. Furthermore, every card has exactly 1 symbol in common with every other card, and the aim of the game is for players to be the quickest to identify the matching symbol between a newly played card and the previous one. Surprisingly, only 57 different symbols are used in total. Here are some Dobble cards:
There are many ways to create a ‘mono-matching’ game like Dobble, a couple of which are listed below:
1.1 Find the Carrot
One remarkably straightforward way of making a ‘mono-matching’ game is to first pick a number of cards and a number of symbols to be drawn on each card. Next, establish a common symbol or link running through every card, such as the carrot symbol in Dobble. Finally, fill in the remaining symbols with unique images. Technically, this does create a usable ‘mono-matching’ game; however, it would become fairly boring and monotonous after a few rounds (since the only symbol being sought would be the carrot), and we could call it Find the Carrot!
Dobble is a much more compact, efficient, ‘mono-matching’ game than Find the Carrot!
As just previously established, method 1.1 is quite inefficient and monotonous, whereas Dobble’s method is largely the opposite. Consider a game with n symbols allocated to each card. To make explaining easier, I will use n = 3, although n = 8 is used in the actual game of Dobble. Since n = 3, set the first card to ‘ABC’ (I will use letters to represent symbols on cards).
After having constructed ABC, to make a functioning ‘mono-matching’ game where n = 3, more symbols will need to be added. However, the next card to be constructed must have exactly one similar symbol to card ABC. Without loss of generality, a reasonable next card to formulate would be the card ‘ADE’, as it has the common link of symbol ‘A’ to card ABC.
In Dobble, each symbol has 8 appearances across all the cards. Helpfully, this figure also equates to the number of symbols on any one card. As I illustrate in the next section, if we want to create a ‘mono-match’ game like Dobble (where n = 3), each symbol must occur precisely 3 times throughout all cards. Now that we know this, we simultaneously know that another card incorporating ‘A’ must be crafted: ‘AFG’. To find the remaining cards, one must continue to logically construct cards that include 3 of the symbols A-G until no more can be assembled.
To help you visualise this, I have created a step-by-step guide below:
0. For context, this is what the sheet of paper looks like at the start of the guide with the three rows representing our first three cards:
1. Since we already have 3 ‘As’ in our pack, we will not be needing to use them further to construct new cards:
2. Firstly, one must pick a set of 3 letters, one from each row, ensuring that there are no ‘double links’ (where two cards have two common symbols). I have picked the combination ‘BDF’:
3. Next, another card needs to be visualised, so that all three cards including the symbol ‘B’ have been found. The singular other card with ‘B’ (which only has ‘B’ in common with the other cards) is the card ‘BEG’:
4. Since all permutations incorporating ‘B’ have been identified, we must now find all permutations incorporating ‘C’. To ensure that each card maintains exactly 1 symbol in common with each of the other cards, the last two symbols of each ‘C-Card’ must firstly be dissimilar to the last two symbols of each ‘B-Card’ and secondly, must not be situated on the same row. One combination that follows both rules is ‘CDG’:
5. For this set of cards, it is now clear to see that the only other ‘C-Card’ permutation which follows the rules is ‘CEF’, as any other combination would result in a ‘double link’:
6. Now every symbol appears 3 times throughout our deck, indicating that the hunt has concluded:
7. Therefore, when creating a ‘mono-match’ game where n = 3, the 7 cards needed are: ABC, ADE, AFG, BDF, BEG, CDG, and CEF. Each pair of these cards has exactly one letter in common.
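The manual hunt above can be mimicked with a short brute-force search (a sketch of my own, not the method used by Dobble’s designers): greedily accept any 3-letter card that shares exactly one letter with every card accepted so far.

```python
# Brute-force version of the card hunt: build a 'mono-match' deck over the
# symbols A-G by greedily accepting any 3-symbol card that shares exactly
# one symbol with every card already chosen.
from itertools import combinations

symbols = "ABCDEFG"
deck: list[frozenset[str]] = []

for candidate in combinations(symbols, 3):
    card = frozenset(candidate)
    if all(len(card & other) == 1 for other in deck):
        deck.append(card)

print(["".join(sorted(card)) for card in deck])
# -> ['ABC', 'ADE', 'AFG', 'BDF', 'BEG', 'CDG', 'CEF']
```

Scanning candidates in alphabetical order happens to recover exactly the 7 cards found by hand above, though a greedy search like this is not guaranteed to find a maximum deck for every n.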
To start off, I realised that the number of symbols on each card set a limit in terms of how many cards I was able to make. To investigate this, I carried out a few experiments, setting 2, 3, and 4 as the number of symbols on each card. As above, I substituted symbols for letters (A, B, C, etc). Each string of letters represents a card. My results are shown below:
n = 1 (1 card): A
n = 2 (3 cards): AB, AC, BC
n = 3 (7 cards): ABC, ADE, AFG, BDF, BEG, CEF, CDG
n = 4 (13 cards): ABCD, AEFG, AHIJ, AKLM, BEHK, BFIL, BGJM, CEIM, CFJK, CGHL, DEJL, DFHM, DGIK
Following this data, I created a sequence from the ‘Maximum number of cards’ column: 1, 3, 7, 13. I then sought the nth term of this sequence, conjecturing a formula for the maximum number of cards in a ‘mono-match’ game given n (the number of symbols on each card).
From the table in section 2, we can see that the total number of unique symbols across every card is equal to the maximum number of cards. When we take n = 3, for instance, every symbol is included in the 3 cards ABC, ADE, and AFG. On these cards there are n^2 symbols in total (not necessarily different). When one omits every repeated symbol (every ‘A’) from the 3 cards, we are left with n^2 – n symbols (since the number of As = n = 3). Lastly, one ‘A’ needs to be added back to the total of unique symbols, otherwise the total will be 1 less than it should be. Therefore, the conjectured formula for the nth term is n^2 – n + 1, where n is the number of symbols on each card. When substituting the values of 1, 2, 3, and 4 for n, the outcomes are as expected (1, 3, 7, and 13).
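As a quick sanity check (a sketch of my own), the conjectured formula reproduces the sequence found by hand and predicts the values for the commercial games:

```python
def max_cards(n):
    """Conjectured maximum deck size for a 'mono-match' game
    with n symbols per card: n^2 - n + 1."""
    return n * n - n + 1

# Matches the sequence 1, 3, 7, 13 found by hand for n = 1..4...
assert [max_cards(n) for n in (1, 2, 3, 4)] == [1, 3, 7, 13]

# ...and predicts 57 cards for Dobble (n = 8) and 31 for Kids Dobble (n = 6).
print(max_cards(8), max_cards(6))  # 57 31
```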
In Dobble, there are 8 symbols illustrated on each card. When 8 is inputted into the formula, the result is 64 – 8 + 1, which equates to 57. However, as I mentioned earlier, there are only 55 cards in Dobble. If the maximum number of cards that Dobble could include is 57, why are there only 55 cards in a tin? What are the two extra cards? Is there a specific reason why Dobble is ‘incomplete’? I will answer these burning questions in due course.
Almost immediately after I had theorised that Dobble was ‘incomplete’, I found myself subconsciously on the hunt for the remaining 2 cards. To satisfy my curiosity, I created an Excel spreadsheet to investigate the 55 included cards and how they could help me find the missing 2. A small excerpt of the spreadsheet is shown below (although it does extend to row 69 and column CF):
From the table in section 2, it is clear that the number of symbols on each card equals the number of times any specific symbol appears throughout all the cards. Because Dobble uses 8-symbol cards, each symbol therefore appears 8 times.
Since there are 2 missing cards, I knew that when adding up all the symbols, 14 of them would have only 7 occurrences each, and one symbol a mere 6 occurrences (as the two missing cards must share a common symbol). These missing appearances tell me which symbols sit on the unknown 2 cards.
I used ‘1s’ within the spreadsheet so I could simply sum up the number of occurrences. Once I had summed up all the symbols on the provided cards, the symbols drawn on the missing 2 cards were revealed. However, it was still necessary to find the correct permutations of the symbols between the two cards, as an incorrect placement of them on the cards could lead to some cards sharing more than one symbol, completely invalidating the game.
To determine which symbols to assign to which cards, I used a SUMPRODUCT formula. A SUMPRODUCT formula is the addition of the products of the elements of two (or more) arrays.
For instance, the SUMPRODUCT of the first and second columns would be (2 x 2) + (3 x 1) + (3 x 6), which is equal to 4 + 3 + 18 = 25. SUMPRODUCT can be used for more than 2 columns of data.
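Excel's SUMPRODUCT is simply a sum of element-wise products, so the worked example above can be reproduced in a line of Python (an illustrative sketch of my own):

```python
# Columns from the worked example: (2, 3, 3) and (2, 1, 6).
col1 = [2, 3, 3]
col2 = [2, 1, 6]

# SUMPRODUCT = (2 x 2) + (3 x 1) + (3 x 6)
print(sum(a * b for a, b in zip(col1, col2)))  # 25
```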
However, before utilising the SUMPRODUCT function, I allocated the 15 symbols between the two cards (in a column titled ‘Card 56’ and one titled ‘Card 57’). Once I had assigned the 15 symbols, I applied the SUMPRODUCT function separately to each card column paired with the Card 56 column, and to each card column paired with the Card 57 column.
The target SUMPRODUCT result is 1 for every combination. Since the spreadsheet contains only blanks (zeroes) and 1s, a result of 1 means the multiplication ‘1 x 1’ took place exactly once, showing that there is precisely one symbol in common between the two selected cards; every other multiplication (‘0 x 1’ or ‘0 x 0’) results in 0, while multiple matches would give a result of 2 or more.
The first trial of the SUMPRODUCT function showed that my arrangement of the 15 symbols was wrong, as some cards ended up with 0, 2, or even more matching symbols with Cards 56 and 57. After some trial and error with the composition of the last 2 cards, every SUMPRODUCT calculation finally resulted in 1: I knew what the missing 2 cards were.
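The same hunt can be replayed on the small n = 3 deck from earlier, which mirrors Dobble's 55-of-57 situation. This sketch (my own, not the original spreadsheet) counts each symbol's shortfall and then runs the SUMPRODUCT-style test, written here as a set intersection, to pick the correct split of the recovered symbols between the two missing cards:

```python
from collections import Counter
from itertools import combinations

n = 3
# An 'incomplete' toy deck: the n = 3 deck from earlier with two cards removed.
deck = ["ABC", "ADE", "AFG", "BDF", "BEG"]

# In a complete deck each symbol appears n times; count each symbol's shortfall.
counts = Counter("".join(deck))
deficit = {s: n - counts[s] for s in "ABCDEFG" if counts[s] < n}
print(deficit)  # {'C': 2, 'D': 1, 'E': 1, 'F': 1, 'G': 1} -- C is the shared symbol

# The SUMPRODUCT test: a candidate pair is correct only if every pair of
# cards in the completed deck shares exactly one symbol.
def valid(card_a, card_b, deck):
    cards = deck + [card_a, card_b]
    return all(len(set(x) & set(y)) == 1 for x, y in combinations(cards, 2))

print(valid("CDG", "CEF", deck))  # True  -- the correct missing pair
print(valid("CDF", "CEG", deck))  # False -- CDF and BDF would share two symbols
```

Just as in the spreadsheet, an incorrect placement of the recovered symbols fails the test by producing a ‘double link’.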
By means of my extensive data sheet, I had finally deciphered the last 2 cards, complete with correctly placed symbols. They are as follows:
Card 56: Cactus, Daisy, Ice Cube, Maple Leaf, Person, Question Mark, Snowman, T-Rex
Card 57: Dog, Exclamation Mark, Eye, Hammer, Ladybug, Lightbulb, Skull, Snowman
(Snowman is the one symbol the two cards share.)
It bemused me as to why Dobble appeared to be incomplete. Researching online, I found numerous theories as to why Dobble includes only 55 cards instead of the maximum 57. However, some of the theories, when examined, seem completely irrational. In this paper, I will only discuss the somewhat feasible theories, in order of credibility.
Out of all the speculations on this topic, this argument seems the most likely. It revolves around the idea that playing-card printers use 55-card sheets. Therefore, for Dobble to print all 57 cards, the manufacturers would have to use extra sheets, raising the production cost. This is simply not worth it for a mass-manufactured game.
However, the single flaw in this theory is that Dobble cards are not rectangular like playing cards, but circles. Circular cards might not be printed in the same number and formation as rectangular cards because of their cumbersome shape. This means that, for instance, the cards could potentially be printed in 3 sheets of 19, which would render this theory invalid as no extra sheets would need to be used. Overall, this hypothesis is relatively attractive, if not for one modest flaw.
Perhaps the creators and manufacturers of Dobble believed that the game would flow more smoothly with only 55 cards? The omitted cards certainly make dealing easier, as 54 is divisible by 1, 2, 3, 6, and 9 (in Dobble, one card is taken out at the start of every game). However, 56 (the number of cards dealt if the game were complete) is divisible by 1, 2, 4, 7, and 8, which is arguably an even better selection of numbers, based on regular gathering sizes. On the whole, this postulation is largely reasonable but does, again, have a puzzling fault, ultimately leading to its downfall.
Some people (however few) take an alternative perspective on this topic. According to them, the tin/packaging is simply too small. I find this argument extremely hard to sympathise with, as 2 playing cards take up almost no volume whatsoever, not to mention the minuscule extra height they would add. Hence, this theory is last on my list of credibility.
Aside from the original game, there are several other variants of Dobble. Some examples are Dobble Disney, Dobble Animals, and Dobble 1 2 3. However, there is one version which adds a further example of incompleteness: Kids Dobble. Instead of having 8 symbols printed on each card like the original game, Kids Dobble has just 6. Substituting 6 for n in our earlier formula for maximum card count (n^2 – n + 1) gives 36 – 6 + 1, or 31. Therefore, the greatest number of cards that Kids Dobble can include is 31. But alas, the Dobble manufacturers again do not include every card in the game, as it contains only 30!
Interestingly, Dobble Disney and Dobble Animals also include only 55 cards inside the tin (like the original game). This fact leads me to believe that the first theory I covered is the most plausible, considering that leaving out cards appears to be a common theme through every version, possibly due to some sort of manufacturing constraint.
Dobble, at first glance, is nothing more than a simple matching game. However, when examined at a deeper level, the intricacies and complexities of the game's construction begin to reveal themselves. My research took an unexpected turn and led me to discover that, whilst being ‘efficient’, Dobble is also incomplete; and I am now glad to have found the missing cards!
1. Parker, M. (Stand-up Maths). 2022. How does Dobble (Spot It) work? [Video] YouTube.
2. Why are there only 55 cards in a deck of Spot It and not 57? BoardGameGeek.
3. Dobble Jeu De Carte: Promotion et meilleur prix 2025 (image source).
This essay was commended at the Junior ILA Celebration Evening
Appeasement attracts a great deal of contrasting views, from those who believe that it was a complete disaster to those who try to explain the logical reasons why the British and French governments adopted the policy. As will be evidenced, my view is similar to the latter: a more sympathetic appraisal of why it was, at the time, a good idea. In hindsight, it seems this policy was an abysmal mistake that allowed Hitler to become extremely powerful, to the extent
that Britain and France were almost weak in comparison. However, it must be considered that, at the time, appeasement looked like a wise plan for the British and French, and one that could easily have proved very beneficial. This view can be supported by various ideas, and this essay will explore: the need for time to rearm, the previous horrors of the Great War, the unfairness of the Treaty of Versailles, Hitler standing up to Communism, British economic problems in the
aftermath of the Great Depression, and the lack of support from the British Empire and the USA. These are the main reasons why it is most compelling to believe that appeasement was not a bad policy at the time. However, it will still be argued that appeasement, even at the time, was an unwise endeavour, considering there was no reason to trust Hitler, something the public could see even then. This view will be further supported by the growth the policy allowed Germany to undergo, the effect it had on the USSR and the encouragement it gave Hitler. (Walsh, 2001)
First though, it is vital that some context is established regarding the policy of appeasement. At the end of the Great War, the Allies introduced the Treaty of Versailles, which imposed a maximum army size of 100,000 for Germany while also forbidding conscription, meaning that soldiers had to be volunteers. (Britannica, 2024) The restrictions on the German army were particularly frustrating for a country whose military strength had grown so significantly since the Franco-Prussian war of 1870-1871, (King’s College London, 2020) though perhaps the most damaging clause was the forced acceptance of sole blame for the Great War, one that attacked German reputation and broke their spirits completely. In addition, they lost significant land, particularly to Poland, while they were required to pay £6.6 billion in many instalments, a challenge for an economy already broken by the long war. (Britannica, 2024) The public anger at such conditions, which was seemingly somewhat justified, led to Hitler’s rise to power, as he promised better conditions and that he would violate the unfair treaty. This gave rise to the policy
of appeasement: the idea that, to limit any conflict, Britain and France should not interfere with Hitler’s actions and should attempt to satisfy his desires to a reasonable extent. This policy meant that Hitler’s Germany could slowly grow in strength, starting with significantly increasing its army past the limit in 1933, before remilitarising the Rhineland in 1936, with the Anschluss between Germany and Austria in 1938 and the invasion of Czechoslovakia in March 1939 also happening without any significant reaction by Britain and France. (Walsh, 2001)
It seems convincing that the policy of appeasement was not a good idea because, even at the time, British citizens could see that Hitler could not be trusted, and putting faith in his words was unwise. For example, in October 1938, soon after the Munich Agreement was signed, Hitler stated that he did not intend to seize any further territory once given control of the Sudetenland. (Gottlieb, 2024) In the view of Prime Minister Neville Chamberlain, this would bring ‘peace for our time’, yet 93% of the British population correctly believed Hitler was lying about his intentions for the rest of Europe. (Modern History Review, Hodder magazines, ‘Peace for our time’, 2022) These beliefs were held in spite of
the Anglo-German Declaration being signed a day after the Munich Agreement, which is said to state that the British and German people would ‘never go to war with one another again’. (Neville, 2006) Therefore, one must question whether appeasement was a good idea at all, since it relied on trust in Hitler’s promises: appeasement could only work if Hitler defied the Treaty of Versailles without carrying out any extreme actions that would leave the British and French with no option but war. When Germany invaded Poland, trusting Hitler was proven a mistake, but even without this hindsight, a fair proportion of the population accurately predicted deception by Hitler, making it clear that, as Hitler had previously gone back on his promises, he would continue to do so. For example, Hitler had already gone back on his promise that he did not desire the entirety of the Sudetenland, which should have stopped the British and French governments from believing he would take no further land after the
Sudetenland, yet the British and French still pursued appeasement. (Walsh, 2001) Ultimately, any trust in Hitler was completely misplaced as he had already shown that he was perfectly happy to violate international agreements, as he had done when remilitarising the Rhineland in 1936 and increasing the size of the German army. The peace in Europe which appeasement was intended to maintain could only happen if Hitler did not eye complete dominance and had some limits. This links to the idea that appeasement was fundamentally based on the idea of trusting Hitler, something that even at the time was evidently a bad idea, making the policy of appeasement an incorrect choice. However, it must be noted that this does not act as much of a criticism of the earlier stages of appeasement because initially, without hindsight, it would have been challenging to know that Hitler was not trustworthy, with 43% of the population not wanting to support Czechoslovakia after its invasion, whereas only 33% were in favour of military intervention. (Christie, 2011)
In addition, it seems plausible that appeasement was a bad policy because it aided Germany in reaching the point where they were even stronger than the British and the French. It can be seen that this was the case since, every year from 1936-1939, Germany produced more aircraft than Britain. By 1939, when war broke out, both countries had similarly high numbers, but the gap in 1937, from around 5500 aircraft to approximately 2250, was extremely significant. (Walsh, 2001) By showing a blatant disregard for the actions of Germany, the British and French had made a catastrophic mistake since, without enforcing the restrictions in the Treaty of Versailles, the German armed forces could grow massively. This meant that, by the time war did break out, the huge gap in strength that had existed between Britain and Germany at the end of the Great War had all but disappeared. To some extent, after Hitler had decided to try to conquer the entirety of Europe, a war was inevitable, though a quick victory for the Allies could easily have been possible had it not been for appeasement, which allowed German army strength to increase before the war began. For example, the Treaty of Versailles had imposed a limit of six battleships on the German navy yet, as part of appeasement, the British signed the Anglo-German Naval Agreement with Germany in 1935 to allow Germany more battleships, and therefore more control of the seas. (Gottlieb, 2024) This meant that, by 1939, Germany had around 70 battleships, only eight fewer than Britain, this being only one example of the significant power Germany gained. If an alternative approach to appeasement had been taken, and Hitler had been crushed before he became such a significant threat, then it is quite possible a catastrophic war could have been avoided, which is what appeasement failed to do. This links to the idea that appeasement was a bad policy as it allowed Germany to become
stronger, to the point where they had a good chance in a war. (Walsh, 2001)
Furthermore, it can be argued that appeasement was a bad policy because it clearly had an adverse effect on the USSR. It was public knowledge that Hitler intended to expand eastwards into the USSR, something that was frightening for Stalin. Hitler was killing communists in Germany and publicly spoke against Communism, meaning the USSR knew that it was a target. The policy of appeasement was even more concerning to Stalin, who knew the British and French were not keen to stand up to Hitler and were unlikely to offer any significant protection. (Walsh, 2001)
Somewhat concerned, Stalin signed an agreement with the French, stating that help would be offered if Germany invaded, but the policy of appeasement had shown the inactivity of the French and the British in matters concerning Hitler, meaning Stalin signed the Nazi-Soviet Pact in 1939, and simultaneously exited negotiations with the British and French to form a Triple Alliance against Hitler. It is quite plausible that Stalin, considering he initially tried to form an alliance against Hitler, was against Hitler’s actions, but the policy of appeasement meant that the only way Russia would be safe from German threat was an alliance with Hitler. (Roberts, 2018) The Nazi-Soviet Pact then gave confidence to Hitler that he could invade Poland, as even if Britain and France fulfilled their promise to go to war over Poland, he would have significant support, while he could eventually invade the USSR too. Despite disagreeing with Russian Communism, Britain and France should have considered the effect appeasement obviously would have on the USSR, before implementing this policy. In this way, the policy of appeasement was a very bad idea since it frightened the USSR, as they were worried that they had little protection
from Germany, meaning they felt compelled to sign the Nazi-Soviet Pact, something that led on to the Second World War. This links to the idea that the policy of appeasement was not a good idea.

Finally, appeasement is viewed negatively because it offered huge encouragement to Hitler. When one examines Hitler’s actions, it can clearly be seen that as each risk he took went unpunished, he started to take even greater risks. For example, he started with small violations of the Treaty of Versailles in 1933, by increasing the army size, before remilitarising the Rhineland in 1936; yet by 1939 he had already invaded Czechoslovakia and launched an invasion of Poland. (Gottlieb, 2024) The greatest issue was that, through appeasement, the British and French took up a position of weakness, opting to let Hitler break the Treaty of Versailles, with the complete lack of action giving Hitler the licence to keep taking risks. It must be understood that Hitler was under pressure from the whole of Germany to fulfil his promises of making Germany strong once again, so he had little choice but to risk being reprimanded by Britain and France; when these countries chose not to act, he could carry out a more significant action, again without punishment. This fearful position was taken up because of the suspected strength of Germany, yet in the view of the British historian AJP Taylor, German strength was actually only 45% of what it was assumed to be. When Hitler’s weak Germany got away with violating some terms of the Treaty of Versailles, he was encouraged to take more risks, eventually leading to the invasion of Poland and the Second World War, largely because of appeasement.
(Walsh, 2001)
Nonetheless, the stronger argument is that the policy of appeasement was a good idea at the time, since Britain, along with France, was weak. Firstly, the British needed to buy time to rearm, since they had only two army divisions ready to fight in January 1938, before the Munich Crisis. (Sir Knox, 1938) Even allowing for the lesser German strength mentioned by AJP Taylor, Chamberlain was the first British Prime Minister to begin the rearming process, and certainly did not believe British forces to be ready for war. Clearly, appeasement gave Germany time to become stronger, but it is likely the British needed this extra time even more, so that they were actually ready for a war. Secondly, the British and French were in a poor financial situation after the Great Depression, which began in 1929, and were therefore not in a position to enter an expensive war. This is proven by the huge debts Britain had to pay and the significant unemployment in the country, with rates soaring to 30% at their peak. (Christie, 2011) For the British, solving these issues would have been more of a priority than the actions of the Germans, particularly given signs of domestic unrest such as the Jarrow Crusade of 1936. (Quinn, 2024) This meant that they had to prioritise their own issues over intervening in Hitler’s actions, and were afraid to provoke him, in case it started a war that they could not afford to fight. This links to the idea that appeasement was a good policy, because the British might otherwise have entered a war they were not prepared for, in terms of their military and financial state.
In addition, it is quite compelling to believe that appeasement, at the time, was a reasonable policy because British and French politicians would naturally be keen to avoid the suffering that was experienced in the Great War. This is proven because the French lost 1.4 million soldiers, while the British lost 885,000, clearly demonstrating the catastrophic nature of the war. (Kiger, 2023) At the time, it would certainly have seemed to most that, to avoid a war, appeasement was the best choice, because it would give little reason for Hitler to go to war; the politicians could not have guessed that Hitler would take it to the point where war was a necessity. The government would have wanted to keep the population happy, and a war of destruction and death would have been an utterly terrifying prospect, particularly after the recent horrors, meaning appeasement was the best option. Also, if the British were of the opinion that there would be a war without appeasement, which would be understandable, it is fair to say appeasement was a good policy, because the British may have struggled in the war, as it was unclear whether the rest of the British Empire would support Britain. For example, in India, Gandhi was leading protests such as the Salt March in the 1930s, (Kidson, 2011) while Canada was also seemingly growing tired of British control, suggesting that perhaps there would not be much support in a war. In the Great War, over 3 million soldiers from the British Empire contributed, meaning a war without this extra support would be challenging. (National Army Museum) The Empire’s support was not the only concern, but the policy of appeasement can be
made more explicable by the idea that the British were understandably distracted by events elsewhere, such as the Italian-Abyssinian War, which took place at the same time as Hitler remilitarised the Rhineland. This means that appeasement was not a bad policy because it allowed Britain to focus on the matters that concerned them most, such as the troubles in the British Empire: in the words of Sir Thomas Inskip, the Defence Coordinator, the aim was to reduce "the scale of our commitments and the number of our potential enemies". (Goodlad, 2013) Britain would also not have had the support of the USA in another war, because American leaders were desperate not to be involved and, remembering that the USA’s entry into the Great War was decisive, the British may well have struggled without American assistance. This links to the idea that, considering the circumstances of the 1930s, appeasement was a good policy.
Furthermore, perhaps appeasement was a good choice of policy because, to some extent, Hitler and his initial actions were in Britain's interests. For example, Hitler had a thorough dislike of Communism, and estimates suggest he had around one million people killed for their political views, with many of these people being communists. (Sacks, 2022) At first, Hitler was actually helping
Britain in some ways and was certainly not their main concern, because Communism was spreading rapidly, and Britain was more worried about the threat Stalin and his philosophies posed to global peace. Hitler and Germany provided a significant buffer separating the USSR from much of Europe, and many held the view ‘Better Hitlerism than Communism’, with the 1936 intervention by the USSR in the Spanish Civil War (a power struggle between right-wing Nationalists and left-wing Republicans) also raising questions over Stalin’s plans. Because of this, appeasement was the right policy choice, as Communism was more of a concern than Hitler, and Hitler’s actions sent a strong message against Communism that was useful to Britain. Appeasement was a good policy because it meant that, while not approving of all of Hitler’s actions, Britain would not go to war with a nation which, in some respects, was helping their views to be heard. Finally, Britain’s actions had to reflect their beliefs, and many would agree that most British people believed the Treaty of Versailles to be too harsh in many respects. The British particularly disliked the reparations, which would completely destabilise the struggling German economy, while they also felt placing full blame on Germany was wrong. When they signed an agreement with Germany in 1935, allowing Germany to break the Treaty of Versailles and expand its naval presence, they proved that they too thought a limit of six battleships unfair. Ultimately, while trying to punish Germany, the Allies had imposed far too many restrictions, to the point where Hitler may have been correct in viewing the Treaty of Versailles as wrong, meaning that it was right not to fight against Hitler’s violation of a treaty which was largely based around the French desire for revenge.
Even David Lloyd George, the British Prime Minister at the time of the treaty, predicted another conflict within 25 years as a repercussion of its harshness. (Short, 2013) However, once Hitler started invading countries and territories which had never been part of Germany before the Great War, he had gone too
far, and the policy of appeasement was rightly ended. Nonetheless, up to this point, Hitler was going against clauses Britain was not in support of, meaning Britain had no need to act and this links to the idea that appeasement was a good policy.
Thus, whilst hindsight does provide plenty of criticisms for appeasement, the arguments justifying appeasement are more persuasive. Undeniably, it seems Britain was relatively weak and frightened of a large-scale war, making it clear that appeasement was a good idea. Many of the arguments for appeasement being a terrible policy can be criticised by the idea of the benefit of hindsight. This principle suggests that most of the sources know the outcome of appeasement, and that it turned out badly, making appeasement far easier to criticise when, at the time, it seemed far more likely to succeed than it did to fail, suggesting appeasement was a good idea. For example, the point regarding appeasement scaring the USSR would have been a challenge to predict at the time, though it can now be said that this was a definite impact of appeasement. Therefore, there are few negatives of appeasement that would have been clear at the time, while there was a plethora of benefits of pursuing the policy of appeasement that can still be seen now, and were obvious at the time, such as prioritising the British economic struggles and avoiding the horrors of the Great War, making it clear appeasement, despite leading to World War Two, was the correct choice of policy at the time.
1. Britannica, 2024. Treaty of Versailles. Available at: https://www.britannica.com/event/Treaty-of-Versailles-1919 [Accessed 9 December 2024].
2. Christie, N. 2011. Appeasement in the 1930s. Hindsight, 21(2). Available at: https://magazines.hachettelearning.com/magazine/hindsight/21/2/appeasement-in-the-1930s/ [Accessed 24 December 2024].
3. Gottlieb, J. 2024. Britain – Appeasement, 1930-1939. How can we understand appeasement in context? [Video] MASSOLIT. Available at: https://massolit.io/courses/appeasement [Accessed 9 December 2024].
4. Goodlad, G. 2013. Was Britain’s appeasement policy a mistake? Modern History Review, 8(4). Available at: https://magazines.hachettelearning.com/magazine/modern-history-review/8/4/was-britains-appeasement-policy-a-mistake/ [Accessed 24 December 2024].
5. King’s College London, 2020. The Franco-Prussian War 150 years on: A conflict that shaped the modern state. Available at: https://www.kcl.ac.uk/the-franco-prussian-war-150-years-on [Accessed 9 December 2024].
6. Kidson, A. 2024. A profile of Gandhi. Modern History Review, 7(2). Available at: https://magazines.hachettelearning.com/magazine/modern-history-review/7/2/a-profile-of-gandhi/ [Accessed 23 December 2024].
7. Kiger, P.J. 2024. How many people died in World War 1? History.com. Available at: https://www.history.com/news/how-many-people-died-in-world-war-i [Accessed 4 January 2025].
8. Knox, Sir A. 1938. Army Estimates, 1938. Hansard, 332 (10 March). Available at: https://hansard.parliament.uk/Commons/1938-03-10/debates/d061e4a2-a0cb-49c7-8fbe-16371dba1aa3/ArmyEstimates1938 [Accessed 24 December 2024].
9. Neville, P. 2006. The Dirty A-word: Appeasement. History Today, 56(4). Available at: https://www.historytoday.com/archive/dirty-word-appeasement [Accessed 24 December 2024].
10. National Army Museum, 2024. The Commonwealth and the First World War. Available at: https://www.nam.ac.uk/explore/commonwealth-and-first-world-war [Accessed 24 December 2024].
11. Quinn, R. 2024. The UK in the 1920s and 1930s. Hindsight, 34(2). Available at: https://magazines.hachettelearning.com/magazine/hindsight/34/2/the-uk-in-the-1920s-and-1930s/ [Accessed 16 December 2024].
12. Roberts, G. 2018. The Nazi-Soviet Pact. Modern History Review, 21(2). Available at: https://magazines.hachettelearning.com/magazine/modern-history-review/21/2/the-nazi-soviet-pact/ [Accessed 16 December 2024].
13. Sacks, A.J. 2022. Nazism’s Political Victims Should Never Be Forgotten. Jacobin. Available at: https://jacobin.com/2022/01/nazism-political-communist-socialist-victims-world-war-two-history [Accessed 3 January 2025].
14. Short, P. 2013. Was the Treaty of Versailles too harsh? Hindsight, 24(1). Available at: https://magazines.hachettelearning.com/magazine/hindsight/24/1/was-the-treaty-of-versailles-too-harsh/ [Accessed 14 December 2024].
15. Walsh, B. 2001. Modern World History. 2nd ed. London: Hodder Education.
This essay was commended at the Junior ILA Celebration Evening
The modern banking system is essential to global finance, driving economic growth through credit, wealth management, and financial inclusion. However, it also poses risks: the possibility of financial crises, the centuries-old issue of financial inequality, and unethical practices. This paper explores the modern banking system's dual role as both a catalyst for development and a source of instability, using historical and contemporary examples. While banking fosters progress, its risks highlight the need for strong regulation and ethical oversight to ensure financial stability and fairness.
The modern banking system is a network of financial institutions that allows individuals, businesses, and governments to manage money, credit, and investments. As the backbone of the global economy, it facilitates transactions, savings and lending on a gargantuan scale.
The banking system has been cultivated for millennia. Its ancient roots trace back to civilizations such as Mesopotamia (modern-day Iraq), where temples served as safe places for deposits and moneylending. Babylon, Greece, and Rome provided rudimentary banking services, such as loans and currency exchange. Italian city-states like Venice and Florence were also pioneers of early banking: families like the Medici set up banking houses offering loans, deposits, and services for international trade. Double-entry bookkeeping (recording each transaction twice, leading to fewer mistakes and more transparency with the consumer) revolutionized financial record-keeping, laying the groundwork for modern banking.
The emergence of central banks was marked by the founding of the Bank of England (BoE) in 1694, with similar institutions opening later. In the 19th century, the Industrial Revolution created demand for more robust financial systems to support large-scale enterprises, and joint-stock banks appeared, allowing multiple investors to pool resources, share risks, and earn dividends. The 20th century saw the establishment of institutions such as the Federal Reserve System, the central bank of the United States, in 1913. Finally, in the 21st century, the invention of the internet and mobile technology paved the way for online banking and fintech solutions. The word 'fintech' is simply a combination of the words 'financial' and 'technology'; it describes the use of technology to deliver financial services and products to consumers, making banking more accessible and efficient. Blockchain technology and cryptocurrencies have introduced decentralized alternatives that challenge traditional banking systems.
The Modern Banking System provides businesses, individuals and governments with loans which are used for infrastructure, technology and innovation. Governments can inject this money into the economy, boosting GDP via the multiplier effect, whereby an initial injection causes a much larger increase in GDP than the injection itself. China borrowed heavily from both state-owned and international banks to fund large-scale infrastructure projects, and has used loans to develop extensive railway networks both domestically and in other countries through its Belt and Road Initiative. For example, China lent $62bn to countries like Pakistan for the China-Pakistan Economic Corridor (CPEC), which includes the construction of highways and railways (Khan, 2022). The Modern Banking System thus provides crucial tools like loans and credit which can be used to increase economic growth and development in general.
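As a rough illustration of the multiplier effect (the figures below are invented for the example, not taken from any source cited here), the simple Keynesian multiplier can be sketched as:

```latex
% Simple Keynesian multiplier: each pound injected is partly re-spent.
% Assume a marginal propensity to consume (MPC) of 0.8:
k = \frac{1}{1 - \mathrm{MPC}} = \frac{1}{1 - 0.8} = 5
% So an initial injection of, say, \pounds 10\text{bn} of bank-financed
% spending could raise GDP by up to
k \times \pounds 10\text{bn} = \pounds 50\text{bn}.
```

This is why loan-financed government spending can increase GDP by far more than the amount borrowed.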
The Modern Banking System is a blessing because it encourages savings and wealth management. Banks offer savings accounts, fixed deposits and interest-bearing financial tools. For example, Saudi Arabia evaded Dutch disease (a sudden boost in one sector of the economy leading to corruption or excessive inflation) by using wealth management tools to create the Saudi Public Investment Fund (PIF). The PIF had $700bn in assets under management as of July 2022, has created 500,000 direct and indirect jobs, and has established 79 companies, such as the Saudi Coffee Company (Saudi Public Investment Fund). Banks offer wealth management options like savings accounts to channel money into productive investments, enabling economies, corporations and individuals to flourish financially.
Cross-Border Transactions and Currency Exchange
The Modern Banking System also offers cross-border transactions and currency exchange through networks such as SWIFT (Society for Worldwide Interbank Financial Telecommunications) and CIPS (Cross-Border Interbank Payment System). Both offer networks and letters of credit that allow for seamless international trade. China, as a major exporter, has effectively used CIPS to facilitate cross-border payments: CIPS has 168 direct participants and 1,461 indirect participants. Among the indirect participants, 1,072 are from Asia (including 560 from the Chinese Mainland), 252 from Europe, 56 from Africa, 31 from North America, 19 from Oceania, and 31 from South America (Wikipedia Contributors, 2025). Without these systems, globalization and international commerce would be nearly impossible, making the modern banking system a blessing.
Digital Banking and Financial Inclusion in South America
Technologies such as mobile apps, ATMs and online portals allow banking in remote and deprived areas. In South America, Nu, an online bank, serves over 110 million customers, many of whom had no access to banking before. Around 20.7 million Brazilians have obtained their first credit card within the last five years (Nu). By using digital technology to offer no-fee credit cards and digital banking, Nu has gained significant traction, and millions of Brazilians now have an easy and efficient way to handle their money. By increasing accessibility, banks reduce economic inequality and promote participation in the economy for rich and poor alike.
Microfinance and Empowerment
Modern banks, especially microfinance institutions, offer microfinance and small loans which empower the poor, providing credit to individuals who lack capital. The Grameen Bank in Bangladesh, founded by Muhammad Yunus, has reached 94% of all villages in Bangladesh and provided over $39.79 billion in loans (Grameen Bank). These microfinance loans have enabled impoverished individuals, particularly women, to start businesses and generate income.
Its group lending model ensures high repayment rates and fosters community accountability. Beyond income generation, microfinance improves access to education, healthcare, and housing while promoting financial inclusion and women's empowerment, making it a vital tool for sustainable development.
Financial Crises
The Modern Banking System is a curse because when banks act irresponsibly, the result can be a financial crisis. The 2008 Global Financial Crisis occurred due to subprime mortgages (mortgages given to individuals with tarnished credit reputations or a lack of affordability), issued after the Federal Reserve cut interest rates following the September 11 attacks, which had instilled fear in the US markets and hammered US stocks down. These low interest rates made it easier for consumers to get a mortgage, so major banks wrongfully lent out subprime mortgages. This fuelled a housing bubble which at first increased house prices drastically as more demand swept into the market; when the bubble popped, due to subprime mortgages defaulting, the price of houses plummeted. These risky loans had been packaged into mortgage-backed securities (investments similar to bonds; each MBS is a share in a bundle of home loans and other real estate debt) and sold to investors who did not fully understand the risks. Financial institutions relied heavily on leverage, which amplified losses when the housing bubble popped, and weak regulatory oversight allowed excessive risk-taking. The world's dependency on the American markets meant that the crisis spread quickly around the globe. As banks failed and panic set in, liquidity (the ability of a company or an individual to settle short-term liabilities easily and on time) dried up, causing widespread economic turmoil and a severe global recession. When poorly regulated, banks can destabilise entire economies and sometimes even the world economy. Unfortunately, the modern banking system also encourages excessive reliance on credit and debt. Easy access to credit encourages overspending, leading to unsustainable personal and national debt. Greece suffered a severe economic downturn during the 2008 financial crisis due to excessive borrowing, alongside fiscal problems stemming from a lack of revenue caused by tax evasion (Johnston, 2023). While credit can drive growth, unregulated borrowing can lead to financial disaster.
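To see why leverage amplified losses so severely, a rough worked example helps (the figures here are illustrative, not taken from any source cited in this essay):

```latex
% Leverage ratio = assets / equity.
% A bank holding \$90 of assets against \$3 of shareholder equity:
\text{Leverage} = \frac{\text{Assets}}{\text{Equity}} = \frac{90}{3} = 30
% At 30:1 leverage, a fall in asset values of just
\frac{1}{30} \approx 3.3\%
% wipes out the bank's equity entirely, rendering it insolvent.
```

This is why even a modest fall in house prices was enough to push highly leveraged institutions into failure.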
In times of economic uncertainty, fear causes customers to withdraw funds, destabilising banks, which use customers' deposits to operate (a practice known as fractional reserve banking). This is known as a bank run. During the Great Depression, bank runs began due to low consumer confidence: people thought their bank might fail, or was near failing, because many small banks had lent substantial portions of their assets for stock market speculation and were virtually put out of business overnight when the market crashed (History.com Editors, 2018). Trust is the foundation of banking, and its loss can trigger a collapse.
Profit-driven practices and exploitation are another issue modern banking has brought about. To meet aggressive sales targets and boost profits, Wells Fargo bank opened millions of unauthorised accounts in customers' names without their consent. These fake accounts were created to meet sales quotas, often resulting in fees for customers and damaging their credit scores. While the bank profited from the increased number of accounts, it led to significant harm to customers and damaged the bank's reputation. (Meagher, 2023)
The Modern Banking System can exacerbate inequality, as wealthier individuals and corporations benefit from better services, lower interest rates, and investment opportunities. Wealthy individuals can access private equity funds, which often require minimum investments of £1 million or more, while the average investor can only participate in publicly traded stocks or mutual funds with much lower returns. The minimum requirement at Bank of America, HSBC and JP Morgan ranges from $2 million to $10 million to access their private banking (Morgan, 2024). Banks widen economic inequality by catering to the privileged.
The Modern Banking System is a curse because large banks take excessive risks knowing they will be bailed out by governments; this is known as a moral hazard. Banks such as NatWest were bailed out of bankruptcy by the British government for billions of taxpayers' pounds in 2008, and the government also stepped in to take a significant stake in Lloyds Bank (News, 2025). These bailouts cost taxpayers billions and are one of the reasons for the UK's 'lost decade' (a lengthy period of economic stagnation). As the government tried to recover the bailout money, austerity measures were introduced, cutting public spending and raising taxes. This led to a prolonged period of economic stagnation, high unemployment, and reduced public services. The recovery was slow, with banks reluctant to lend, stifling business growth and worsening social inequality. The bailouts also eroded public trust in both the government and the financial system.
The modern banking system can be seen as a curse due to its potential to cause widespread financial instability and worsen economic inequality. Financial crises such as the 2008 Global Financial Crisis, which wiped around $16 trillion (about $49,000 per person in the US) from the global economy (Kosakowski, 2023), show how banks' reckless lending and profit-driven motives can devastate economies and harm millions. Additionally, the system often favours the wealthy, offering them better terms while burdening low-income individuals with high interest rates and hidden fees, perpetuating financial inequality. Banks' excessive focus on profits has also led to unethical practices such as predatory lending and 'too big to fail' bailouts by governments, which place the burden of financial mismanagement on taxpayers. However, the modern banking system provides valuable tools that can catalyse economic growth: loans, savings, wealth management and cross-border transactions have transformed many economies from outdated agriculture-based economies into modern industrial and services-based ones, like the UK and USA. Furthermore, savings and wealth management tools have allowed many oil- and gas-rich nations to avoid Dutch disease and maintain, and even increase, their wealth. At a more microeconomic level, the modern banking system allows for individual financial management, as with Grameen Bank, a microfinance institution that empowers poverty-stricken individuals and gives them a platform to start businesses and generate an income through microfinance loans. Nevertheless, the 2008 crisis, which erased over $16 trillion from US households and caused millions to lose their jobs and homes, highlights the risks and, with the top 0.7% controlling nearly 43% of global wealth (Neatle, 2017), the system perpetuates economic inequality.
Without regulation and reform, the modern banking system remains more of a curse than a blessing.
1. Khan, S. (2022). The China-Pakistan Economic Corridor: A Flashpoint of Regional Competition. [online] Available at: https://www.lse.ac.uk/ideas/Assets/Documents/The-China-Pakistan-Economic-Corridor.pdf [Accessed 23 Feb. 2025].
2. Saudi Public Investment Fund. Public Investment Fund Program. [online] Vision2030.gov.sa. Available at: https://www.vision2030.gov.sa/en/explore/programs/public-investment-fund-program
3. Wikipedia Contributors (2025). Cross-Border Interbank Payment System. [online] Wikipedia. Available at: https://en.wikipedia.org/wiki/Cross-Border_Interbank_Payment_System [Accessed 23 Feb. 2025].
4. Nu (n.d.). About Nu. [online] Nu International. Available at: https://international.nubank.com.br/about/ [Accessed 23 Feb. 2025].
5. Grameen Bank (2024). Grameen Bank – Bank for the Poor. [online] Grameenbank.org.bd. Available at: https://grameenbank.org.bd [Accessed 16 Feb. 2025].
6. Johnston, M. (2023). Understanding the Downfall of Greece's Economy. [online] Investopedia. Available at: https://www.investopedia.com/articles/investing/070115/understanding-downfall-greeces-economy.asp [Accessed 23 Feb. 2025].
7. History.com Editors (2018). Bank Run. [online] History. Available at: https://www.history.com/topics/great-depression/bank-run
8. Meagher, P. (2023). The Wells Fargo Fake Accounts Scandal: A Comprehensive Overview. [online] Learnsignal. Available at: https://www.learnsignal.com/blog/wells-fargo-fake-accounts-scandal-overview-2/ [Accessed 23 Feb. 2025].
9. Morgan, K. (2024). Private Banking: Benefits, Requirements, and How It Works. [online] Unbiased. Available at: https://www.unbiased.com/discover/banking/what-is-private-banking [Accessed 15 Feb. 2025].
10. News, A. (2025). Top UK Banks Could Avoid Government Bailout If They Fail – BoE. [online] Morningstar UK. Available at: https://www.morningstar.co.uk/uk/news/AN_1722948577862922400/top-uk-banks-could-avoid-government-bailout-if-they-fail-%E2%80%94-boe.aspx [Accessed 23 Feb. 2025].
11. Kosakowski, P. (2023). The Fall of the Market in the Fall of 2008. [online] Investopedia. Available at: https://www.investopedia.com/articles/economics/09/subprime-market-2008.asp [Accessed 16 Feb. 2025].
This essay was highly commended at the Junior ILA Celebration Evening
Humans have always been fascinated by exploration, from the days of Christopher Columbus to putting man on the moon in the 1960s. With the onset of climate change and the very real possibility that Earth will, one day, become uninhabitable, scientists have been looking to Mars as an alternative. However, is martian exploration worth the vast amount of money and effort it would entail? Some have suggested that we should be
content with Earth, although travelling to Mars has been an intriguing prospect since the 1940s. NASA is currently working on projects with the aim of sending humans to Mars by 2040. In this essay, I will present an overview of the challenges involved in colonising Mars along with possible solutions, and discuss the pros and cons of putting people on Mars.
First, I would argue that there needs to be a quicker way to get to Mars: current spacecraft like NASA's Orion capsule would take nine months to get there, a journey time which, in itself, poses challenges to humans. If a Kevlar cable with a counterweight on one end is put into orbit, small shuttles could latch onto the end and be launched to Mars at high speeds. This concept, known as a skyhook, could shorten the journey to Mars to between three and five months.
Mars must then be supplied with power. Because Mars is further from the Sun than Earth, solar power is only around 40% as effective as on Earth, or less when factoring in the martian dust storms that frequently cover the surface of the planet. Wind and geothermal power would not work either: Mars has a very thin atmosphere and its core has stopped producing heat. The only reasonable option would be to transport nuclear fuel and a reactor to Mars to power the first colony. Fossil fuels could be brought instead, but they are less ideal because of the lack of the oxygen required to burn them. Another solution, albeit a slightly more impractical one, would be to attach a turbine to Phobos, one of the moons of Mars, and lower it down into the thin atmosphere. Phobos orbits only about 6,000 km above the martian surface, closer than any other moon in the solar system. The orbital speed of Phobos would produce more than enough energy to power Mars. However, this solution would require a system for getting the energy from Phobos down to the martian surface.
The next problem lies in the design of the habitat itself. Because the atmosphere has only 610 pascals of pressure (much less than Earth's 101,300 pascals), humans will have to live in pressurised chambers. Flat walls, corners and edges would not be able to withstand the pressure differences and would break, so the living spaces would have to be spherical or cylindrical. An alternative option could be to build habitats underground; on the surface, the radiation on Mars would require the base to be covered in concrete or other materials that could shield people from gamma radiation. Unfortunately, outside the colony radiation would still be present, and robots would have to do most of the outdoor tasks. The machines would also have to be repaired frequently, as the extremely fine martian dust would likely damage the equipment. Dust on Mars is also very dry, and would therefore stick to things through electrostatic attraction. If it were carried inside the base, it could be breathed into people's lungs and might eventually prove fatal.
Mars also has a gravitational pull of only 0.38g, less than 40% of Earth's, which means that people's bones would weaken, as observed in astronauts who visit the International Space Station. A study of heart muscle tissue in low gravity has shown that the regular beating of the heart is disrupted by the lack of gravity, and other muscles in the body would also degrade. To prevent serious tissue damage, humans on Mars would have to exercise a lot, more than most people on Earth seem willing to do! Lastly, humans need food and water. Water should be obtainable from the polar ice caps, but food would be much more difficult. The soil on Mars is too alkaline and does not contain enough nitrogen to grow plants. A better solution would be to use a technique called aquaponics, in which the excretion from fish is turned into fertiliser by microorganisms, allowing plants to grow.
Plants are also able to filter water, which helps to keep the fish alive. The plants and seafood would offer humans on Mars a healthier, more balanced diet, but obviously, fish species would have to be transported there from Earth in the first instance. Soil decontamination is expensive and takes a lot of time and effort, so ordinary agricultural techniques would most likely be far less effective on Mars than on Earth.
A base on Mars would initially not be able to support itself without aid from Earth. In the future, a more practical, self-sufficient option would be available: terraforming. The final result of this concept would be making the surface of Mars mild enough for humans to live on, just like Earth. This would be massively expensive and time consuming, but would give humanity a second planet to live on. However, in order to achieve this, the first problem to overcome would be the lack of a magnetic field on Mars. Solar wind from the Sun carries gamma radiation, which damages cells and significantly increases the risk of getting cancer. Earth’s core produces a magnetic field which deflects the radiation away from the planet. Mars, unfortunately, does not have a magnetic field, and that means that half of gamma radiation from the Sun hits the Martian surface; 50 times the amount that
hits Earth. Even inside a habitat, people would be subjected to enough radiation that, after three years, they would have exceeded NASA's radiation dose limits for an astronaut's entire career. This is why a base would have to be covered in concrete, one of the few materials that can block gamma radiation. Mars is both smaller than Earth and further from the Sun, so a smaller magnetic field would suffice. One way to create this would be to place a ring in space containing very powerful magnets, powered by nuclear reactors, in order to divert radiation. The ring would have to be on the sun-facing side of Mars at all times during its orbit, which means that it would have to sit at the Mars-Sun L1 point – a point in space where an object feels an equal gravitational pull from the Sun and Mars.
As stated before, the thinness of Mars' atmosphere is a problem for humans on the planet. To make Mars habitable, it must not only have a thicker atmosphere; the atmosphere must also be made up of the right mixture of elements, and the planet must have a suitable temperature. Currently, Mars averages only -63°C. The easiest way to increase the planet's temperature would be to use greenhouse gases to cause global warming on Mars (similar to climate change on Earth but, in this case, warming up the planet is the goal). However, that problem cannot be solved until the atmosphere is sufficient. Roughly 500 million years after Mars formed (around 4 billion years ago), the planet had liquid water on its surface and a thick atmosphere. However, UV radiation removed the atmosphere, and most of the gas is now locked up in the iron oxides in the martian rocks, which give the planet its red colour. The main problem is that the only way to separate the oxygen from the iron is thermolysis, which means that the rocks must be heated to almost their melting point. This would require lasers twice as powerful as anything currently on Earth, powered by solar power for 50 years. The result would be an atmosphere that is almost 100% oxygen at roughly one fifth of the pressure on Earth.
In order for humans to breathe on Mars, the remainder of the atmosphere must be nitrogen. Earth's atmosphere is roughly 21% oxygen, 78% nitrogen and 1% argon; with too much or too little oxygen, humans would not be able to breathe. The easiest way to obtain nitrogen would be to take it from the atmosphere of Titan, the largest of Saturn's moons, and deposit it on Mars. This would take over a generation, especially since the nitrogen must first be vacuumed out of the Titanian atmosphere before being sent to Mars via electromagnetic propulsion. This could work by encapsulating the gas in magnetic containers and propelling them to Mars; once they arrived, they would be broken apart, releasing the nitrogen into the atmosphere. The final problem would be increasing the temperature: using the lasers from before, the ice at Mars' poles could be boiled to release water vapour, a very effective greenhouse gas. Some may even fall as rain, which would help to decontaminate the soil and prepare it for life.
In order for Mars to be able to harbour animals and humans, it first must have a biosphere with self-sufficient ecosystems. Ideally, the first
organisms to live on Mars would be phytoplankton, which photosynthesise and turn carbon dioxide into oxygen. They were vital in the formation of life on Earth and would also remove some carbon dioxide from the air, helping to ensure that a runaway greenhouse effect does not occur (which would leave Mars in a similar state to Venus). Phytoplankton are also important because they sit at the bottom of the ocean food chain on Earth: they are fed upon by zooplankton, which are eaten by fish, which are, in turn, eaten by sharks, whales and other sea creatures. Next, life must be sustained on land. Before plants can be grown, the soil must be fertilised with bacteria and fungi. The plants found on Earth on volcanic islands will be best suited to the charred martian surface; they will prepare the soil for other types of plant, which will fertilise more soil, creating a chain reaction of plants preparing soil for other plants. Once Mars has an abundance of plant species, animals can be introduced, forming a balanced ecosystem after more than 100 years. Only then can humans settle on the planet and live permanently.
Nevertheless, colonising – and eventually terraforming – Mars would be extremely costly and take a very long time. Another problem with terraforming Mars would be the ethics of the situation. Firstly, it would be very difficult to prevent the disruption of indigenous martian life (although there are currently no signs of life on Mars). Secondly, there would be the problem of potentially polluting the martian surface: there are already 12 pieces of space junk on Mars, and settling there would increase the severity of the contamination. On the other hand, the development of the technologies needed to terraform Mars would also benefit humanity in other ways. If Earth ever became uninhabitable, mankind could repeat the process learnt in the colonisation of Mars to make Earth viable again. Having two habitable planets also increases the length of time that humanity is likely to survive: if either planet is significantly damaged, its inhabitants will still be able to survive on the other. Going to Mars is an exciting concept, and even though it may not be necessary for the survival of the human race, colonising it could potentially be done. But would anyone actually be willing to go
and leave their family and life behind for good? In 2013, the Mars One Foundation received more than 165,000 applications for a one-way trip. This seems like a lot of people, but it remains to be seen how many of them would really have gone ahead if the opportunity had presented itself. NASA also carried out research last year to determine whether humans could psychologically cope with the stresses of living on Mars: four people lived for 378 days in Mars Dune Alpha, a sealed 'martian' habitat in Texas. Whilst this experiment was a success, the exact conditions and emotions could never be fully replicated as, deep down, each participant would have known that it was just an experiment that would come to an end, and that they would not be confined for the rest of their lives.
It will likely cost trillions of dollars to colonise Mars. From an ethical viewpoint, one could argue that this money would be better spent making Earth a better place – solving climate change, curing suffering and disease and ending world hunger and poverty. Despite the vast difficulties, expense and hurdles that not only going to, but living on Mars will entail, history has shown that mankind’s desire to explore makes this a real possibility.
1. Terraforming Mars with Neil deGrasse Tyson.
2. Human mission to Mars - Wikipedia.
3. Building a Marsbase is a Horrible Idea: Let’s do it!
4. How to Terraform Mars - WITH LASERS.
5. Yes, scientists are actually building an elevator to space - Fabio Pacucci - YouTube.
6. How long does it take to get to Mars? | Space.
7. Space Elevator – Science Fiction or the Future of Mankind?
8. How Will SpaceX Bring the Cost to Space Down to $10 per Kilogram from Over $1000 per Kilogram? | NextBigFuture.com
9. 1,000km Cable to the Stars - The Skyhook - YouTube.
10. Phobos - NASA Science.
11. Munroe, R. 2019. how to. Published by John Murray.
12. Mars habitat - Wikipedia.
13. Life on Mars? | Smithsonian.
14. Colonization of Mars - Wikipedia.
15. Low Gravity in Space Travel Found to Weaken and Disrupt Normal.
16. Rhythm in Heart Muscle Cells | Johns Hopkins Medicine.
17. Magnetic field of Mars - Wikipedia.
18. Life on Mars? | Smithsonian.
19. Jones, H.W., NASA Ames Research Center, Moffett Field, CA, 94035-0001.
20. These People Want to Go to Mars (and Never Come Back) | Space.
21. Four Humans, Who Were Living "On Mars", Finish Year-Long Mission.
THEO O'DONNELL
This essay was commended at the Junior ILA Celebration Evening
On the 26th of December 1991, the Soviet Union, which had existed since 1922, finally fell due to the impact of a number of key factors. The current President of Russia, Vladimir Putin, controversially called its collapse "the biggest catastrophe of the century". His unsuccessful attempts to restore it to what it once was demonstrate how the Soviet Union could realistically no longer exist, and even that its demise could reasonably be argued to have been inevitable. Although the immediate circumstances of the final days of the union are complex, it is possible to identify the key underlying causes which ultimately meant the fall of the Soviet Union was truly inevitable.
The key near-term cause of the fall of the Soviet Union was the introduction of political reforms and, in particular, Perestroika (the reform of the economic and political system). From 1989, the leader at the time, Mikhail Gorbachev, decided to allow elections with a multi-party system, permitting fairer elections, where votes were honestly counted, for the first time since 1921. This experiment in greater democracy started to run out of control, with the effect that "one country in the region after another cast aside its communist rulers" (Brown, 2011), causing a divide between states and resulting in Communist influence in the Soviet Union becoming weaker than ever before. As well as this, Hamburg (2008) states that Perestroika allowed some prices of goods 'not to be set by the central planners, but by negotiation between enterprises', meaning that the state maintained less control; he also argues that Gorbachev believed workers 'should have more voice in the management of the factories'. Brown (2015) identified another important aspect of the Perestroika reforms: the weakening of restrictions on contact with the West, with people now allowed to travel outside of the Union. The Soviet Union now had less control over its citizens, and their exposure to Western values and consumer goods led to greater pressure for further democratic reforms. In this way, Perestroika created both more pressure for democratic reform and more ability to exert that pressure.
Greater democracy started to run out of control with the effect that ‘one country in the region after another cast aside its communist rulers’ (Brown, 2011) , causing a divide between states and resulting in Communist influence in the Soviet Union becoming weaker than ever before.
Another reform which was just as important as Perestroika was the so-called process of Glasnost (becoming more open to the West and allowing more open, public discussion of political matters within the Soviet Union). In the words of Kenez (2017), "the decisive step in the ultimate demise of the Soviet system… was the introduction of openness in discussing the past as well as the problems facing contemporary society." When Glasnost was introduced, it resulted in greater freedom of speech. Millington (2020) states that 'news outlets could lay bare the failings of the Soviet system and the Communist Party', with proper reporting on major Soviet incidents such as the Chernobyl nuclear power plant meltdown, as well as "rising rates of alcoholism and infant mortality rates" (Janos, 2023). When these reports were published, many citizens no longer believed that they could be governed safely or that they could trust the Soviet authorities. Millington's article further notes that Gorbachev said in 2006 that Chernobyl and the resulting media fallout were the real cause of the collapse of the Soviet Union, which highlights that the collapse in belief in the system was a major factor in the ultimate demise of the union.
A further reason was the Soviet Union's failure to conquer Afghanistan. Grau and Jalali (2002) state that 14,453 Soviet soldiers died and 53,753 were wounded, a result both of the poor equipment of the Soviet and allied Afghan troops and of the United States supplying advanced weapons to the Afghan rebels. Continuous defeats meant that the USSR's forces were eventually ordered by Gorbachev to withdraw. The Soviet Union had failed to capture a country the size of Texas, which lowered the country's general morale and its status as a major world power. The political leaders, according to Gary Hamburg (2008), had become 'soft' due to their own lack of war experience, and their thinking no longer prioritised the weapons sector, meaning that they had become politically weaker. In particular, the steady returning flow of injured and disillusioned servicemen, and of families losing loved ones, stoked a growing resentment which was allowed to burn brighter through the Perestroika and Glasnost reforms.
The Soviet Union was made up of 15 separate and distinct republics covering an area two and a half times the size of the United States. A union so huge and diverse was always going to be a challenge to hold together. The impact of the failures in Afghanistan provides a good example of this: Muslim-majority areas within the Soviet Union, such as Azerbaijan and Turkmenistan, were angered by the invasion, which fed into the growing cause of militant Islam. The strength of these developments would later be seen in the events of September 11, 2001, as well as in the Chechen wars. Even in the 1980s, though, these powerful forces were already placing great pressure on the Soviet Union and its economic, political and military ties, and would ultimately help push these states to split from the Union.
Another factor in the USSR's fall was the failure of Communism and the rise of nationalism, first in the states of the Warsaw Pact. The collapse of Communism in those states, particularly as it was largely peaceful, inevitably put pressure on the Soviet Union and prompted dissidents to ask why such changes should not happen in Moscow if they could happen in Warsaw and Prague. The immediate cause of the end of communism in Eastern Europe was the resurgence of a nationalism that had been suppressed since the end of the Second World War. The prime example of this was in Poland, where in 1980 a nationalistic trade union called Solidarity was established and quickly gained 10 million members (Nelsson, 2019). Although the authorities attempted to control it by banning it, public pressure was too strong, and in 1989 it entered talks with the Polish government to form a coalition, thereby turning the country anti-Communist. The effects of this anti-communist takeover soon began to spread into the Soviet Union, and the 1980s saw growing nationalist movements within several republics, including Ukraine, Armenia and the Baltic States.
As far back as 1974, Alexander Solzhenitsyn claimed that the ideology of Communism was both ‘wrong and exhausted’
The power of ideas and ideals was also a key factor in the fall of the Soviet Union. The growing disillusionment of the Soviet peoples, the inspiration from other countries, especially those in the West, and the greater freedom of expression described above meant that many people simply no longer believed in the idea of Communism as they once had. As far back as 1974, Alexander Solzhenitsyn claimed that the ideology of Communism was both 'wrong and exhausted', and his ideas, and similar ones, spread through the Union. This explains the argument made just after the fall of the Soviet Union that "Communism was dying from its legitimacy" (Hassner, 1990).
A further key factor which caused the collapse of the Soviet Union was the fierce policy of Western nations, particularly the USA, towards the USSR, and the severe economic effects which followed within the Soviet Union. In 1981, Ronald Reagan began investing massively in defence systems and missiles, and funded more research into better weapons than ever before. Sempa (2004) writes that Reagan had told a journalist that "the Soviets lacked an economic wherewithal to compete in an all-out arms race with the West". According to Braithwaite (2020), even the Soviet Chief of Staff, Nikolai Ogarkov, admitted that the Soviet Union "will never be able to catch up with [the Americans] unless we have an economic revolution". At the same time, a range of geo-political factors (including increased oil production in the United States) drove down the price of oil, with Soviet revenues collapsing as the price fell from $120 a barrel to only $24 a barrel shortly before the end of the USSR.
The resulting fall in crucial foreign exchange reserves made it harder to acquire technology and even basic food. The Soviet economy began to freefall, and by the time the Soviet Union collapsed, the 1992 CIA World Factbook reported that its GDP had fallen to half that of the United States. Alongside this assertive economic policy came a deliberate, even aggressive, arms build-up by the United States. In order to try to match this spending, the USSR spent a vast amount of money on its armed forces; Eduard Shevardnadze, Gorbachev's foreign minister, said that the country had spent a quarter of its GDP on its military. This imposed further stresses on the Soviet economy, and indeed Kenez (2017) has stated that the burden of this on the economy "was five times greater than that on the US economy". The resulting unbalancing of the economy towards arms production meant that many of the resources which could have supported Gorbachev's planned transition of the Soviet Union's economy into a market economy were instead forced into the defence sector. The resulting economic hardships, and lengthening queues in shops, further compounded the loss of belief in the system at all levels.
Another key reason was the Soviet leadership's reluctance to use force. Despite Soviet ground forces comprising over two million personnel (according to Zickel and Keefe, 1991), which would most likely have been able to halt major protests in areas of Eastern Europe such as Poland with ease, these were left unused. The same was true of the forces of the KGB. Instead of ordering troops to be deployed, Gorbachev announced that the USSR would not object to Hungary opening its borders with East Germany. This was perceived as weakness and signalled a policy of non-interference, which led to the domino effect that ended the East European regimes. When more countries in Eastern Europe became anti-Communist peacefully (with the exception of Romania), it highlighted that the Soviet Union had failed to enforce its might on protestors and demonstrate control. The Soviet leadership itself appeared to hold the view that force could have saved the communist regimes of Eastern Europe. This was a clear contrast to the actions of the Communist Chinese government in dealing with protests around the same time, such as in Tiananmen Square in 1989. It is also unlikely that the West would have intervened: facing a highly militarised and nuclear-armed USSR, its options would likely have been limited to economic and sporting sanctions. Indeed, President George H. W. Bush did not appear to want to precipitate the end of the Union when he warned of the threat to the USSR from "suicidal nationalism" in his speech to the Ukrainian Supreme Soviet in 1991. However, although a crackdown was possible, it was always unlikely given that there was no desire amongst the Soviet elites to organise one. That is the underlying reason why the protests in Moscow at the end of the USSR did not meet the same unsuccessful end as those in East Germany in 1953, in Hungary in 1956, in Czechoslovakia in 1968 or in Poland in 1980.
To conclude, a combination of reasons eventually led to the demise of the USSR, including Glasnost, Perestroika, Western aggression, economic failure, the invasion of Afghanistan and the lack of military force. Following AJP Taylor's rule that 'nothing is inevitable until it happens', it was certainly not inevitable that the Soviet Union would end specifically on the 26th of December 1991; stronger leadership could have prolonged the political status quo for a while longer. However, much like the leadership at the time of the Russian Revolution, it would have been very hard for the old system to survive for more than a few months or years, and therefore the view of this essay is that the USSR could have survived a little longer through strong leadership, but it was still inevitable that it would eventually fall.
The USSR could have survived a little longer through strong leadership but it was still inevitable that it would eventually fall.
1. Reform, Coup and Collapse: The End of the Soviet State (2011), Professor Archie Brown, BBC.
2. A History Of The Soviet Union from the Beginning to its Legacy (2017), Peter Kenez, Cambridge University Press.
3. American Diplomacy, Volume IX (2004), Francis P. Sempa, Columbia University.
4. Letter To The Soviet Leaders (1974), Alexander Solzhenitsyn, Collins/Harvill.
5. Soviet Union: A Country Study (1991), Raymond Zickel and Eugene Keefe, Washington DC Library of Congress.
6. The Soviet-Afghan War: Breaking the Hammer & Sickle (2002), Lester W. Grau and Ali Ahmad Jalali, VFW Magazine.
7. Could The Soviet Union Have Survived? (2020), Rodric Braithwaite, James Rodgers, Joanna Lillis and Richard Millington, History Today.
8. 1992 CIA World Factbook (1992), Central Intelligence Agency.
9. Was The Soviet Union’s Collapse Inevitable? (2023), Adam Janos, History.com.
10. “Communism: A coroner’s inquest” (1990), Pierre Hassner, Journal of Democracy.
11. The Birth Of Solidarity (2019), Richard Nelsson, The Guardian.
12. The Rise and Fall of Soviet Communism: A History of 20th Century Russia (2008), Gary Hamburg, The Great Courses.
This essay was the joint winner of the Junior ILA award in the Fourth Form category
"Today's science fiction is tomorrow's science fact"
Isaac Asimov
The essay that you are currently reading has travelled through time. That is not just in the boring sense that most people assume, of simply moving into the future as we all are: the series of binary digits that make it up have been sent through a combination of electrical signals and electromagnetic waves, which is a more interesting and different kind of time travel. The difference would be the same if you ran up and down a corridor with the essay in your hands. Why? Time travel is not what you think. *(see relativity, 2.2)
The purpose of this essay is to examine and explain time travel in everyday life, and then to evaluate its uses for the future: science fact or fiction, and the justification for each. It is therefore important to clarify the title. It asks how time travel could be used to make fiction a reality, and in what ways it is misrepresented.
The essay explores time travel through three topics: entropy, relativity and quantum mechanics. These will be broken down into sub-topics, each explained and then evaluated in italics at the end of the section. There will also be a separate topic of black holes, used as an example of the extreme conditions that time endures and the natural phenomena that result.
When looking at the topic, it is important to first know that time can be warped in the future direction, but it can never go backwards. *(see entropy, 1.2) This is one of the ways in which time travel in science fiction is wrong.
The main argument of this essay will be to show how difficult it is to utilise the ways in which time is warped, and to use time travel commercially in, say, a spaceship. It will be shown that this is fiction after an evaluation of each topic. Time travel happens everywhere, but science fiction makes it seem strange and distant.
Before you see how you are a chaotic time traveller, it is important to consider the first thing that comes to mind when time is mentioned: a clock. What does a clock actually measure? My grandfather’s grandfather clock works with weights, springs and a pendulum. Clocks generally use springs because of Hooke’s law, which you may have had the pleasure of learning about for your GCSEs: the spring releases its energy at a constant rate, and therefore the ticks are evenly spaced apart. The same principle applies to the grandfather clock, but with a pendulum. However, the cup of tea *(see entropy, 1.2) that you might drink next to the grandfather clock in the living room may tell you much more about travelling through time than the clock itself. Namely, it raises a question: what is time?
"Entropy is the price of structure"
Ilya Prigogine
1.1
Entropy is often described as the tendency of disorder to increase. However, a better definition is the tendency of energy to spread out. 'Disorder' suggests randomness and chaos due to some kind of movement or causal effect, but this is not quite what entropy is, as described by the third law of thermodynamics: ‘The entropy of a closed system at thermodynamic equilibrium approaches a constant value when its temperature approaches absolute zero.’ This shows that at absolute zero, where there is no energy, there is no entropy, because entropy is the energy spreading. Does this therefore mean that if we were at absolute zero, there would be no time for us? Technically yes; however, we cannot get anything to reach absolute zero, as we would then know it has zero speed and could also measure its position. *(see Heisenberg’s uncertainty principle, 3.2) In addition, in 1865 Rudolf Clausius formulated the second law of thermodynamics, which essentially states that all energy will tend towards a uniform, spread-out state.
1.2
Entropy is the medium by which time is measured, because the spreading out of energy only occurs in one direction: entropy only increases. For example, in metals, heat is transferred through conduction, meaning that the vibration of one atom sets its neighbour vibrating, and so on. When a cold and a warm piece of metal are placed together, heat always transfers from the hot to the cold. This is simply statistical: the hot piece will vibrate the atoms in the cold piece more than the other way around, until the vibration is evenly distributed. It is never statistically likely that a solid piece of metal suddenly gets hot on one side and cold on the other. The fact that entropy moves in only one direction is why you cannot go back in time; going back in time would go against the very definition of entropy. It is also important to note that the higher the temperature, the more concentrated the energy. For example, a hot cup of tea holds concentrated energy, which it spreads into the room as heat and vapour.
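The statistical argument about the two pieces of metal can be sketched in a few lines of Python. This is only a toy model, a minimal sketch and not real thermodynamics: each 'atom' holds whole packets of energy, and packets hop between randomly chosen atoms with no rule at all favouring flow from hot to cold — yet the energy spreads out anyway.

```python
import random

# Two blocks of 50 "atoms" each: one hot (10 energy packets per atom),
# one cold (none). At every step a random atom passes one packet to
# another random atom. Heat flow hot -> cold emerges statistically.
random.seed(1)

hot = [10] * 50   # 500 packets in total
cold = [0] * 50   # 0 packets
atoms = hot + cold  # place the blocks in contact

for _ in range(200_000):
    i = random.randrange(len(atoms))  # donor atom
    if atoms[i] == 0:
        continue                      # nothing to give
    j = random.randrange(len(atoms))  # recipient atom
    atoms[i] -= 1
    atoms[j] += 1

left = sum(atoms[:50])   # energy now in the originally hot block
right = sum(atoms[50:])  # energy now in the originally cold block
print(left, right)       # roughly 250 / 250: the energy has spread out
```

Running the loop the other way, so that energy re-concentrates on one side, essentially never happens: that is the arrow of time in miniature.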
1.3
The history of the universe, from the big bang to the heat death, shows us that the direction of entropy is one-way. The universe was in a relatively low-entropy state right after the big bang, and entropy has increased from there. The low entropy was due to gravity holding everything together at the enormous density of the very early universe.
1.4
Furthermore, in the middle of this process from big bang to heat death, as entropy spreads out, life occurs. How and why? Consider, for example, humans using energy to drive cars. Photons from the sun cause natural phenomena to occur on Earth, such as plants undertaking photosynthesis; dinosaurs eat these plants, then mysteriously die and form oil; humans refine this oil into petrol and can now successfully drive to the supermarket. All of these steps spread energy out by using it, releasing low-energy photons back out from the Earth. We are so good at this that some people believe it is why we exist: if there is a lot of highly concentrated energy in an area, you will get better and better dissipaters arising to use this energy and increase the entropy, until eventually you get life. As Jeremy England said, “You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant.”
1.5
At the very end of the universe there will be the heat death (according to Drake, 2025), when all the useful states of energy will have been used and everything will be spread out uniformly; there will be no time, and everything will look the same backwards as forwards. I would say that this is a long time away, but time doesn’t seem to be so rigid. *(see relativity, 2.2)
Entropy ensures that we cannot go back in time. Nor would it be a good idea to try to tamper with entropy in order to time travel, at least for now, as we are not even at level one of the Kardashev scale, meaning that we are poor at harnessing energy, and because entropy is the bringer and pursuer of life.
"Time is an illusion. Lunchtime doubly so"
Douglas Adams
2.1
Before we had a proper understanding of what time was, Newton said that time was absolute. However, this changed once Einstein proposed relativity. Special relativity describes objects moving at constant speeds through space.
2.2
However, when changing speeds are introduced, the theory becomes general relativity, which also describes gravity. In addition, Einstein proposed space-time. This is important as it shows that space and time are woven together: as one is warped, bent, stretched, squeezed or travelled through, so is the other.
2.3
An example of this is Einstein’s thought experiment of two observers, one in the middle of a train and another on the station. The person on the station sees two bolts of lightning, one at the front of the train and one at the back, strike simultaneously. However, the person on the train sees the lightning at the front strike first, as they are moving towards it. Therefore, the timing of events is different based on the location and speed of the observer. This is called the relativity of simultaneity, and the closely related effect of moving clocks running slowly is called time dilation.
2.4
In fact, we are always moving through time differently from other people. This is further developed by an understanding of the speed of light, because, according to Hossenfelder (2025), we travel through time at the speed of light. Einstein established that the speed of light is the fastest that anything can move in the universe. Or is it? *(see Quantum Mechanics, 3.4) Very fast movement bends the fabric of space-time, causing time to pass more slowly, and so it follows that the faster you move, the slower your life goes by. This bending of space-time is often visualised as a bowling ball on a trampoline, which may pull other objects closer. If you were moving at the speed of light, there would be no time; however, anything with mass cannot move at the speed of light, as its mass would become infinite, in line with E = mc². Additionally, another interesting property of a cosmic speed limit is that when you see a star’s light, it has travelled through time to reach you, so you are always looking into the past. Even when you see a tree in your garden, you see it in the past, a past that you can never reach. The world around you is not the one you are living in; it is the past of it.
Could we use special relativity to time travel? It could be effective if you wanted to age less and live longer by warping space-time, and we could also use it to travel faster as an interstellar colony. Relativity is everyday time travel. Gravity can warp space-time so much that you could become older than your parents. Some truly amazing things can be achieved with gravity.
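The 'ageing less' effect can be put into numbers with the standard Lorentz factor from special relativity. The spaceship speed below is purely illustrative:

```python
import math

C = 299_792_458  # speed of light in m/s

def time_dilation_factor(speed: float) -> float:
    """Lorentz factor gamma = 1 / sqrt(1 - v^2 / c^2).

    A moving clock ticks slower by this factor relative to a
    stationary observer.
    """
    return 1.0 / math.sqrt(1.0 - (speed / C) ** 2)

# For every year experienced on a spaceship travelling at 90% of the
# speed of light, roughly 2.3 years pass for those left behind.
gamma = time_dilation_factor(0.9 * C)
print(f"gamma at 0.9c = {gamma:.3f}")  # gamma at 0.9c = 2.294
```

Note how gentle the effect is at everyday speeds: at airliner speed (~250 m/s) gamma differs from 1 only in the thirteenth decimal place, which is why we never notice this everyday time travel.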
"Everything we call real is made of things that cannot be regarded as real"
Niels Bohr
3.1
According to relativity, it is impossible to go faster than the speed of light. However, introducing quantum mechanics adds a level of peculiarity, because very small things can appear to behave very oddly: there is a level of uncertainty. This is why there is a principle for dealing with quantum mechanics: Heisenberg’s uncertainty principle.
3.2
The uncertainty principle essentially means, as described by Poojary (2015), that you cannot precisely know both the momentum of a subatomic particle and its position at the same time. This limits precision and is fundamental to all of quantum mechanics. The formula for this is ΔxΔp ≥ ℏ/2.
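To get a feel for the numbers, the formula can be evaluated for an electron pinned down to a nanometre. This is an illustrative calculation using standard physical constants:

```python
HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s
M_ELECTRON = 9.109e-31    # electron mass, kg

# Suppose we pin down an electron's position to 1 nanometre...
delta_x = 1e-9  # metres

# ...then Heisenberg (delta_x * delta_p >= hbar / 2) forces a
# minimum uncertainty in its momentum:
min_delta_p = HBAR / (2 * delta_x)          # ~5.3e-26 kg*m/s
min_delta_v = min_delta_p / M_ELECTRON      # ~58,000 m/s

print(f"momentum uncertainty >= {min_delta_p:.2e} kg*m/s")
print(f"speed uncertainty    >= {min_delta_v:.0f} m/s")
```

In other words, merely locating the electron to atomic scales leaves its speed uncertain by tens of kilometres per second, which is why the uncertainty matters for atoms but is utterly invisible for tennis balls.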
3.3
Another example of uncertainty is superposition, famously illustrated by Schrödinger’s cat, although people have typically misunderstood this. A superposition is when a particle must be considered to be in multiple states at once. Schrödinger was troubled by this and set out to show, on a larger scale, that it did not seem to make any sense. He imagined a box containing a cat and a particle in a superposition which will, at some point, do something, for example trigger poison to be released. By this definition, the cat would have to be considered both dead and alive.
3.4
However, one of the strangest topics in quantum mechanics is quantum entanglement, in which the spin of fundamental particles (their angular momentum and orientation) seems to be determined faster than light. The orientation of a particle can be in line with or opposite to the measuring device, so when measuring it, the probability of a given result is fifty-fifty. For entanglement, if a pair of particles is prepared correctly, spontaneously out of energy, then if one is measured with one spin, the other will always have the opposite. The spins cannot both have been set the same in advance, as that would violate the conservation of angular momentum: the spins have to cancel. This means that faster-than-light communication seems to occur between one measured particle and its corresponding partner, because by measuring one you instantly seem to know the other, no matter the distance. How can this be? Would this not violate the laws of relativity? What do you think? This is one of the strangest properties of quantum mechanics, and it is why Einstein called it ‘spooky action at a distance’ in 1935.
Very strange things happen on small scales. Humans are not very small and therefore do not experience these things, and we cannot harness them on a large scale, because once a system becomes large enough it is no longer quantum. Even if we were to utilise quantum effects, we would have to accept a level of uncertainty. As for entanglement, could we use it to communicate faster than light? The answer is no, because the result at each detector is random, a fifty-fifty: you cannot choose what the other side will read before they measure, so no message can be sent.
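The 'fifty-fifty at each detector, yet perfectly opposite' behaviour described above can be mimicked with a toy simulation. To be clear, this is only an illustration of why the correlation carries no usable signal; a simple local model like this cannot reproduce every real quantum prediction, as Bell's theorem shows:

```python
import random

# Simulate many entangled pairs measured by two observers, Alice
# and Bob. Each pair gives opposite spins, but each observer alone
# just sees a random 50/50 stream -- so no message is transmitted.
random.seed(42)

alice_results = []
bob_results = []
for _ in range(10_000):
    spin = random.choice(("up", "down"))  # Alice's measurement outcome
    alice_results.append(spin)
    # Bob's particle always shows the opposite spin.
    bob_results.append("down" if spin == "up" else "up")

# Each side alone sees roughly half "up", half "down"...
print(alice_results.count("up") / len(alice_results))  # ~0.5

# ...yet comparing the two records shows perfect anti-correlation.
assert all(a != b for a, b in zip(alice_results, bob_results))
```

Only when the two records are brought together (by ordinary, slower-than-light means) does the correlation become visible, which is exactly why entanglement does not break relativity.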
“The past is a foreign country; they do things differently there.”
LP Hartley
4.1
Black holes are an example of the laws of the universe that bind time to sensibility being driven to the limit of possibility. Appropriately, then, this section will follow an example of a person having an unfortunate encounter with a black hole, and so will be slightly different from the others. There will still be an evaluation, in italics at the end, of black holes' use for time travel in a future, science-fiction sense. The example, however, explains what would happen in a fictional scenario rather than offering an explanation tested by time: it is more modern theoretical work that could be fact or science fiction, just as the utility of time travel for humans could be.
4.2
The example will be of a lost astronaut called Fatuous. He is part of a mission to further study black holes, and is not very cut out for this job; as such, he knows nothing about black holes. So, when he drops his sandwich into one, he goes diving in after it. Fortunately for the researchers, he happens to be immortal and can tell them what he sees. He does not become ‘spaghetti’. What happens to him?
4.3
Before delving into this scenario, what is a black hole? The first photograph of a black hole, that of M87, was captured in 2017; you may have seen it. A black hole is formed when gravity overcomes the pressure created by nuclear fusion, either because of the star's size or because less fusion is taking place. Fusion is the process by which stars emit heat and light. The form the compressed star takes depends on its size. When the star is relatively small, it may become a white dwarf, in which the vibration of particles that have been pushed closer together creates a pressure that equalises with gravity. Next is a neutron star, formed when the star is so big that the pressure created by the particles is not enough, because the fastest they can move is the speed of light; protons and electrons therefore have to join to form neutrons and neutrinos. (Neutrinos are fundamental particles, meaning they cannot be broken down.) The mass limit beyond which white dwarfs collapse into neutron stars was proposed by Subrahmanyan Chandrasekhar in 1930. Beyond even this mass, black holes are formed.
4.4
The anatomy of a black hole can vary. The basic anatomy is a singularity, a point where all mass has been squashed into infinite density, and the event horizon, the point of no return from which not even light can escape; these will be looked at further in our scenario. Depending on type and size, there may be an accretion disc, where matter and hot gas spin around the black hole in an orange-red ring. Black holes have coronas, which are arcs above them, and may have particle jets, with matter streaming away from the black hole. They also emit Hawking radiation, in which mass becomes energy which can be radiated away. The black hole in this scenario is a spinning one.
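For a simple non-spinning black hole (the spinning one in this scenario is more complicated), the size of the event horizon follows from the standard Schwarzschild radius formula, r_s = 2GM/c². A short sketch:

```python
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
C = 299_792_458  # speed of light, m/s

def schwarzschild_radius(mass_kg: float) -> float:
    """Event-horizon radius of a non-spinning black hole: r_s = 2GM / c^2."""
    return 2 * G * mass_kg / C**2

SUN_MASS = 1.989e30     # kg
EARTH_MASS = 5.972e24   # kg

# The Sun squeezed inside ~3 km, or the Earth inside ~9 mm,
# would become a black hole.
print(f"Sun:   {schwarzschild_radius(SUN_MASS) / 1000:.1f} km")
print(f"Earth: {schwarzschild_radius(EARTH_MASS) * 1000:.1f} mm")
```

The formula makes vivid just how extreme the compression is: nothing in ordinary experience packs a planet's mass into a marble.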
4.5
Fatuous pushes off the space station after his sandwich, very fast. As he approaches the black hole, the other astronauts see him slowing down until he reaches the event horizon, where they see him stop. He slowly gets redder and redder until he disappears. Fatuous, however, sees himself going straight through: he passes through the horizon extremely fast. Just as he is doing this, he sees light and tries to run towards it. He fails, and takes a bite of the sandwich he was reunited with through his futuristic, one-way helmet. Then he is pulled into the inner horizon. In here he sees the singularity; it is in the shape of a ring. He is able to move freely, and decides to move through the singularity. After this he is pushed out of it into a different universe. Gravity pushes here. He finds this very odd and loses his appetite.
4.6
What is going on here? To explain this, it is important first to understand light cones and Penrose diagrams, named after Roger Penrose. A light cone is a graph of space and time with the speed of light at a forty-five degree angle; it shows all of the possible places you could go in the future. (This is like how the observable universe is all we can see, because light has to travel back to us.) There is also an opposite light cone showing everything that could have happened in the past. A Penrose diagram shows space-time when it is curved, including when it is so curved that there is a black hole. When Fatuous reaches the event horizon, the photons that carry the image of him are stuck on the horizon, as this is the exact point where the flow inwards is the same as the speed of light. The reason he appears redder and redder is redshift, according to Howard and Dobrijevic (2023): the waves of light are stretched as they fight against the force of gravity to escape. For Fatuous himself, however, passing through is perfectly normal. Inside the horizon nothing escapes, as the inward flow is greater than the speed of light; this can be visualised as a circular waterfall, where beyond a certain point the effort to escape becomes too great. When considering the singularity, Penrose diagrams become very helpful, because they show that the singularity is not a point in space but a moment in time; for some black holes, the singularity can be forever in your future, just out of reach. In Penrose diagrams, the universe and a black hole are depicted together in one helpful map. Scientists combine light cones and Penrose diagrams, and by doing so they see that in the past of the black hole there is its opposite, a white hole, which pushes everything out of it just as a black hole attracts things. This is what Fatuous feels as he is pushed out. It is important to note that if this were not a spinning black hole, he would have gone into a parallel universe, where two people from different universes could meet inside a black hole. Instead he goes into an opposite universe; universes of this type and their opposites alternate infinitely. At the centre of the alternate universe, black hole, white hole and our own universe is a wormhole, and it is unstable. This is why, when Fatuous tries to run across it, he cannot: he can only run a finite amount, limited by the speed of light, whereas the wormhole is not, and it will ‘pinch off’. Kip Thorne and Michael Morris found some wormhole structures that work mathematically but are physically very difficult, which is why wormholes are described as ‘not naturally occurring’. This is where the boundary between science and science fiction blurs, and it may be the science fiction that forms the new science. Only the future will tell.
Could black holes be harnessed by humans to master gravity and travel through universes? Maybe, given a great deal of time. There are problems, such as ‘spaghettification’ before even arriving at the singularity, which may always lie in your future, and the only way to test whether there are other universes is to jump in. We could certainly not use wormholes. However, one thing is certain: there is more to come. We do not know what we could do with black holes, because we do not fully understand them yet.
In this essay, the unlikelihood of time travel in a science-fiction sense has been explained through the exploration of four topics, each of which highlights a fundamental reason why such time travel is unlikely. The key reasons are summarised as:
Entropy can only increase, fixing the arrow of time in one direction.
Relativity limits the speed at which anything can travel.
Quantum mechanics allows uncertainty in the nature of particles and their interactions with time; however, this does not carry over to large-scale interactions.
Black holes are hostile to humans and therefore difficult to utilise.
"All we have to decide is what to do with the time that is given us."
J.R.R. Tolkien
To conclude, it may seem as though our travelling through time is very limited, but this is not the case: we have many possibilities within these rules. I believe that time travel would be so difficult to harness that, as a species, we should have as little to do with it as we can. Moreover, the universe seems to have laws prohibiting anything that contradicts itself, which rules out most science fiction to do with time travel; relativity and entropy both exemplify this. Yet the human imagination, and our yearning for what we do not already understand and control, is formidable, and it would not be surprising if we managed something more impressive in the endless search for the unattainable. For now, we have time travel in an everyday, interesting sense, which people do not seem to appreciate for its bizarre nature. We should enjoy that, and marvel at the curious nature of everything around us, but still keep looking for what comes next, however we get there.
1. N.B. all websites and papers are cited in the text
2. Cox, B. and Forshaw, J. (2022). Black Holes: The Key to Understanding the Universe. First Edition. England: William Collins.
3. Hawking, S. (1988). A Brief History of Time. First Edition. England: Bantam Dell Publishing Group.
4. Prigogine, I. and Stengers, I. (1984). Order Out of Chaos. First Edition. France: Bantam New Age Books.
5. Adams, D. (1979). The Hitchhiker's Guide to the Galaxy. First Edition. England: Weidenfeld & Nicolson.
6. Hartley, L.P. (1953). The Go-Between. First Edition. England: Hamish Hamilton.
7. Tolkien, J.R.R. (1954). The Fellowship of the Ring. First Edition. England: George Allen & Unwin.
8. The Most Misunderstood Concept in Physics – Veritasium, YouTube, 2023.
9. A better description of entropy – Steve Mould, 2016.
10. What is the difference between Special Relativity and General Relativity? - World Science Festival, 2015.
11. Quantum Entanglement & Spooky Action at a Distance – Veritasium, 2015.
12. Something Strange Happens When You Follow Einstein's Math – Veritasium, 2024.
TOBY BECKINGHAM
This essay was highly commended at the Junior ILA Celebration Evening
Machine learning and artificial intelligence are becoming increasingly controversial subjects, with disinformation, questions and confusion often surrounding them. One cause of this is a lack of precision in the terminology used; another is the Uncanny Valley phenomenon, the most common cause of fear of robots (Barratt, 2021). The Uncanny Valley phenomenon describes an instinctual distrust of objects with human-like tendencies and behaviours: the more an object behaves like a human, the more distrust is created, up until the point at which the object seems fully human, with no distinguishing features, and is trusted. There is therefore an instinctual distrust of AI, which may put people off using it, believing it to be inferior to humans and therefore a threat to humans.
There is, in theory, no reason that an AI could not create pieces of music of equal or greater merit than humans, and this is understandably alarming: one's job or hobby could be replaced by a machine. However, with the correct guidelines and usage, AI can be a tool to help musicians and composers create better music, and to perform and learn more effectively, enhancing not only pedagogy but also performance and the creation of new music.
A graph demonstrating the 'Uncanny Valley' phenomenon.
Musical automata, machines that can independently perform music, have existed since around 270 BC, long before Palestrina and mediaeval music. The first references to musical automata can be found in Ancient Greece, where Ctesibius and Heron of Alexandria described designs such as mechanical singing birds (Chen, Ceccarelli and Yan, 2018). These automata are relevant not only as a part of musical history but also as a source, allowing musicologists to realise ornaments in a historically accurate manner (Fuller, 1983). Often these machines are more valuable than inferences drawn from other sources, such as analogous passages or ornament tables like that written by Carl Philipp Emanuel Bach, Johann Sebastian Bach's son. This is because one can see the notes, the rhythms and the performance style directly, without relying on inference or guesswork. Whilst automatic mechanical instruments should not be relied upon as a sole source for historical performance practice, since pins can become bent, broken or distorted and rhythmic precision cannot always be assured, they can tell us much about their time. Mozart's Fantasia in F minor KV 608 is an excellent example of the benefits of mechanical instruments. Furthermore, leaps and stretches impossible for human hands become possible, allowing thicker textures and more dexterity and precision, particularly with attacks and releases, which are said to be 'everywhere uniform' (Fuller, 1983). Whilst these often lack the 'subtle variations in touch of a skilled keyboard player', this could indicate a stylistic fashion, or a weakness in tonal quality and in the skill of the pinners. Thus, musical automata have been useful in the past as players and as historical sources, from the smallest music box to the barrel organ and the Hornwerk of Salzburg.
AI is relatively modern in comparison and, as a rapidly evolving technology, is highly unpredictable. Often, due to imprecision in the media, technical terms are mislabelled or misused. For example, deep learning is often used as a substitute for GPT, or generative pre-trained transformer. Machine learning is often used as a substitute for AI, and whilst the two have similarities, they differ in multiple ways. Artificial intelligence is an overarching branch which encompasses all of the aforementioned terms, and is defined by the Oxford English Dictionary as
The capacity of computers or other machines to exhibit or simulate intelligent behaviour; the field of study concerned with this.
(Oxford English Dictionary, 2023)
Alan Turing laid the foundations of the field in his 1950 paper Computing Machinery and Intelligence (Turing, 1950), proposing an answer to the question 'Can machines think?' (the term 'artificial intelligence' itself was coined later, by John McCarthy in 1955). Turing proposes a game, known as the imitation game, in which a machine attempts to pose as a human and an interrogator has to guess which of the two players is the human and which the machine. A machine that can successfully fool a human interrogator into believing that it is human has passed the Turing test, and has thus exhibited or simulated human behaviour.
However, one criticism of this is that we should not try to simulate human behaviour, as human behaviour is prone to flaws, such as greed, ignorance and making mistakes. Despite this, making mistakes is less relevant in music, because a mistake is only a note that does not obey the current rules which the rest of the music follows.
To quote Jacob Collier: 'That's not a wrong note, you just lack confidence' (Collier, 2021). In effect, a wrong note is not wrong; it simply does not suit the rest of the music, or goes against what the ear expects. Going against what the ear expects is not truly wrong either, as it can create surprise or drama, adding to the excitement of the music. However, it may be exceedingly difficult to avoid mistakes, as it is humans who create AI, and so there is potential for mistakes to be passed down into the model, as well as from the data set it is fed. Mistakes are frequently found in GPT models, such as the chatbot ChatGPT, as well as in other types of AI, including image generators, which may replicate biases present in their datasets. Two of the most relevant criticisms of artificial intelligence are those expressed by Lady Ada Lovelace and by Professor Jefferson.
The first criticism of artificial intelligence is Lady Lovelace's criticism of Babbage's Analytical Engine, and thus of artificial intelligence generally. This states that the Analytical Engine 'has no pretensions to originate anything. It can do whatever we know how to order it to perform.' Lovelace's criticism is often paraphrased as 'a machine can never do anything new', which is quickly met by the counterargument that nothing is truly new: 'If I have seen further, it is by standing on the shoulders of giants', to (supposedly) quote Sir Isaac Newton. On this view, one only ever expands on what already exists, and even a machine can analyse something that already exists by finding patterns (as in deep learning) and presenting them in a suitable way. Another variation is that 'a machine cannot surprise us'. Turing gives an excellent response to this in Computing Machinery and Intelligence (Turing, 1950): the objection rests on the common but false assumption that 'as soon as a fact is presented to the mind, all consequences of that fact spring into the mind simultaneously with it', so that there is 'no virtue', and no surprise, in a machine working out the consequences of given data and general principles. This assumption does not hold true even for humans, for otherwise scientists should not be credited with their discoveries: the data was available to them, so all they had to do was work it out. Thus Turing presents a multitude of responses to Lady Lovelace's criticism, and overall these responses are satisfactory.
Additionally, AI is often distrusted because it mimics human behaviour, absorbing a large sample set of data and replicating it. Mimicry is often looked down upon by humans: we regard those who mimic others as sub-human, or lacking in character. An example is the clones in Kazuo Ishiguro's novel Never Let Me Go, who mimic people they see on television or in the street because they feel they lack distinct characters of their own, owing to systematic oppression and the suppression of individualism. The clones try to develop discernible characters in a desperate attempt to be normalised into society, based on a false hope that they can integrate into it. As humans, therefore, we regard mimicking AI as inferior, because it is only copying, and in Western culture copying is seen as a lack of originality, of far less merit than original, creative thinking.
Professor Jefferson's argument is that 'Not until a machine can [...] compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols could we agree that machine equals brain [...]'. This is more relevant; however, music is not only a means of expressing emotion. It can be used as part of a ceremony, to validate a ritual, to promote social stability, or even as an outlet for negative emotions (Ball, 2010). Music has often been compared to a language. A machine therefore needs an independent purpose for composing the music, which cannot merely be that its user has told it to, and it should understand what it is trying to convey, and why. If a musical AI can fulfil these criteria, then it is truly intelligent. If it cannot, then it is just a system: an input (a prompt) is taken, music is generated systematically by a process or formula, and the result is output.
Algorithmic generation is where an input follows a formula, or fixed set of rules, and parameters are modulated or randomised to achieve a desired result without human intervention. Algorithmic generation is therefore not intelligent: it does not understand why it has an input, and its algorithm is pre-determined, meaning that it has not independently or creatively devised the formula. Algorithmic generation is particularly well suited to strict, rules-based styles such as Fux's counterpoint. However, it would struggle to replicate a composer like Bach, who tends to follow the rules of counterpoint but occasionally breaks them to achieve a more emotionally impactful composition. Indeed, modern technology such as the Google Bach Doodle struggles to replicate a Bach chorale, not due to a small sample size, but due to an algorithm that does not properly understand the rules associated with Bach chorales (Doornbusch, 2019).
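Rule-based generation of this kind can be sketched in a few lines. The example below is a hypothetical toy, not Fux's full rule set: it harmonises a cantus firmus note against note, allowing only intervals treated as consonant and picking randomly among the legal options, which is exactly the "fixed rules plus randomised parameters" pattern described above.

```python
import random

# Interval classes (in semitones, mod 12) treated as consonant in this toy:
# unison/octave, minor/major thirds, perfect fifth, minor/major sixths.
CONSONANT = {0, 3, 4, 7, 8, 9}

def counterpoint(cantus, seed=0):
    """Generate a note-against-note line above a cantus firmus.

    Each generated note must form a consonance with its cantus note;
    among the legal choices, one is picked at random. Fixed rule set,
    randomised choice, no learning and no 'understanding'.
    """
    rng = random.Random(seed)
    line = []
    for note in cantus:
        # Search intervals from a minor third up to a major tenth above.
        candidates = [note + i for i in range(3, 17) if (i % 12) in CONSONANT]
        line.append(rng.choice(candidates))
    return line

cantus_firmus = [60, 62, 64, 62, 60]  # C D E D C, as MIDI note numbers
print(counterpoint(cantus_firmus))
```

Every run obeys the rules, yet the program cannot explain why parallel fifths sound hollow or when a dissonance would be expressively justified, which is the limitation the paragraph above describes.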
This loosening of rules continued into the Romantic period, where freedom of expression and emotion became a predominant theme, in response to the radically changing political atmosphere of the time, seen for example in France during the French Revolution. Freedom of expression and the abolition of the regulation of music allowed the establishment of the Paris Conservatoire, leading to music that was freer in structure and less strictly bound by rules (Clifford, 2022). By enlarging orchestras and pushing extremes, composers were able to highlight the grandeur and emotional extremes of their pieces. If these emotions cannot be replicated by a machine, then a machine would struggle to create music in this style, but could, theoretically, emulate the product, if not the emotions that the composers themselves felt.
However, one must remember that the performer, the composer and the listener all have an impact on the music. The composer tries to frame emotions or ideas in the music; the performer must then bring out these ideas in a way that the listener can understand and pick up on. A composition by a machine therefore has to contain ideas that the machine came up with independently, or was inspired by, rather than being told to generate a piece based on an idea the user had. Likewise, a machine performance has to understand the emotions and ideas of the composer in order to convey them meaningfully to the listener, who must then internalise the ideas conveyed by performer and composer. The machine cannot merely emulate an idea or emotion; it must understand its meaning in order to convey it effectively.
If a truly intelligent program or being could be created, and could understand how and why to create music, then do we, as the creators of that program or being, have the rights to its music? This issue has become increasingly common with musical AI because, at a mathematical level, music consists of a finite number of combinations of rhythm, pitch and other elements within a given amount of time. Many of these combinations are very similar, and so could be regarded as the same thing, or as plagiarism. This raises the question of what counts as plagiarism, for an AI draws upon a data set in order to generate content, and companies that use neural networks often spend millions of dollars developing their datasets, which therefore often remain undisclosed: only the AI (as an API), not the data set, can be used. (OpenAI, 2024) The result is that the creator of the data cannot know whether they are being plagiarised, because AI can be used to hide plagiarism by breaking ideas down into constituent parts, such as phrases, ideas and words, and then rephrasing them. AI checkers rely on AI writing in a certain way, with a particular style, utilising common phrases. Additionally, students have a
tendency to rely on AI without fact-checking, even though an AI's data set often contains mistakes, meaning that it can produce mistakes, or mix data in an incorrect way that a human would not. (Darvishi et al., 2023) However, provided that when AI is used it is cited as AI, with the program and the name of the technology sourced, it is possible to avoid the issues with plagiarism, as long as the data sets are obtained ethically and legally. Additionally, if AI output carried a blockchain record, or could only be exported in a file format proving that it came from AI, this would address the previous problems as well as enforcing honesty among students, composers and performers. It does not, however, avoid the problem of AI reducing the user's ability to think critically. This can be highly problematic from an ethical point of view: if a human user does not think about the content being produced, and absorbs it without considering that it could be wrong, then the machine could be used to spread propaganda. Unthinking humans absorbing corrupt information have enabled some of the worst crimes in human history, such as the Holocaust. A corrupted AI data set is not merely a philosophical concept or ethical issue: there are numerous examples, such as the Tay chatbot in 2016, which began spreading racist and sexually charged messages after mimicking human users who were targeting a specific vulnerability in the chatbot, causing Microsoft to remove the service 16 hours after its release. (Lee, 2016)
Therefore, when we use AI, we have to think critically about the information it gives us, and should not treat it as simply the perfect solution to a problem. From a musical point of view, music has often been created to carry political messages. Shostakovich, who disagreed with the Soviet regime, fought against antisemitism by including Jewish folk song and themes in his music, such as the Piano Trio No. 2 in E minor and the Symphony No. 13, which sets text memorialising the Jewish victims of a Nazi massacre whilst denouncing Soviet antisemitism. (Carnegie Hall, 2024) Another famous example of Shostakovich's political activism through music is his Symphony No. 4, which was not performed until 1961, eight years after Stalin's death, as it would certainly have angered Stalin for being 'formalist' and 'overly pessimistic'. The Soviet Composers' Union effectively banned any music that was pessimistic, dissonant or critical of the state. Some interpret this symphony as a response to the oppression of the arts, as it was composed in 1935-1936, only a year after many filmmakers, musicians and writers were imprisoned for speaking up against the Soviet regime. However, not all politically charged music opposes oppression. Nazi propaganda music, such as the Horst-Wessel-Lied, the official Nazi anthem, was used to rouse crowds, and formerly communist melodies were adopted into Nazi propaganda music to appeal to the working class. After Goebbels turned Wessel into a martyr, the march Wessel wrote became a symbol of rising up against the Communist Red Front and of inspiring patriotism. (Longerich et al., 2015) This becomes immensely problematic when music created by AI is politically charged because its data set is politically charged, and the user of the program does not even realise it. Fortunately, censorship AI exists in an attempt to prevent offensive themes from permeating the music: by comparing datasets of music with and without offensive themes, and learning the difference, it can be used to remove offensive themes from datasets and to purge data which could generate them. In an effort to show how some of these
issues affect musicians, a survey was released to a sample of 18 participants. Due to the small sample size, and the fact that all of the participants were 18 or under, 83% were male and all attended the same school, the results cannot be representative of the wider musical population: the sample lacks diversity. Nevertheless, there was an interesting split over whether these musicians believed AI to be a significant threat. One third of the sample disagreed with the statement that AI is a significant threat for composers, 27.8% agreed, and 22.2% were neutral. This suggests that there is no clear consensus among musicians on whether AI is a significant threat, particularly when 83.3% believed that AI currently has problems and two thirds believed that generative AI will have problems in the future. The belief that AI has problems, or is sub-human, is reflected in the fact that all of the respondents said they were unlikely to use AI in their music in the future, and may also contribute to the participants' uncertainty over whether AI truly is a significant threat.
95% of respondents said that AI-generated work should legally carry a stamp so that it can be recognised. This suggests that musicians would like to differentiate between human and AI work, and perhaps that they regard AI work as less merit-worthy than human work. Additionally, the sample believed that AI's most useful role was as a tool that amateur musicians and composers can use to improve, rather than a threat. 72% of participants had heard of AI being used in music, and 69% believed that it sounded 'like a useful tool' and was a viable alternative for mixing and post-production (not the initial creative aspects of music-making). These trends are echoed in other surveys, such as the Ditto Music survey, which found that musicians are more likely to use AI for mastering and artwork than for songwriting. (Parsons, 2023)
In conclusion, AI can affect musicians in many different ways, from fear, to distrust, to being regarded as a useful tool. The majority of musicians believe that AI content should be marked as AI-generated, perhaps in an attempt to value human content more highly, decreasing the merit of AI works, or because they believe AI content to be inferior to human-created content. Even the language we use about AI differs from how we would speak of a sentient human: AI is referred to as 'it', or a 'tool', suggesting that humans wish to differentiate between self and other. This can be explained as a natural instinct, since tribal humans had to discern between allies and enemies in order to survive. It also feeds the 'Uncanny Valley' effect, which leads many musicians to mistrust AI. (Geue, 2021)
According to the surveys mentioned above, AI permeates musicians' lives in a variety of ways but is yet to make a real impact, with most musicians regarding it as a tool to aid teaching and improve compositions. Most musicians do not believe that it is yet sophisticated or human enough to compose an original piece of music in place of human writing. However, merely because AI is not currently a threat does not mean that it never could be, and we should remain aware of the ever-growing nature of machine learning, thinking logically and cautiously about how we move into this new era of intelligence.
1. Ball, P. (2010). The Music Instinct. Oxford University Press.
2. Barratt, E. (2021). How psychologists are using robots to study the 'uncanny valley'. [online] BPS. Available at: https://www.bps.org.uk/research-digest/how-psychologists-are-using-robots-study-uncanny-valley [Accessed 31 Jan. 2025].
3. Carnegie Hall (2024). A Guide to Shostakovich's Symphonies. [online] Carnegiehall.org. Available at: https://www.carnegiehall.org/Explore/Articles/2024/11/07/Shostakovich-Symphony-Guide
4. Chen, Y.-H., Ceccarelli, M. and Yan, H.-S. (2018). A Historical Study and Mechanical Classification of Ancient Music-Playing Automata. Mechanism and Machine Theory, 121, pp.273–285. doi:https://doi.org/10.1016/j.mechmachtheory.2017.10.015
5. Clifford, B. (2022). Classical music, privilege, and ghosts of the French Revolution. [online] OUPblog. Available at: https://blog.oup.com/2022/07/classical-music-privilege-and-ghosts-of-the-french-revolution/
6. Collier, J. (2021). That's not a wrong note, you just lack confidence. [online] www.youtube.com. Available at: https://www.youtube.com/watch?v=meha_FCcHbo
7. Darvishi, A., Khosravi, H., Sadiq, S., Gašević, D. and Siemens, G. (2023). Impact of AI Assistance on Student Agency. Computers & Education, [online] 210, p.104967. doi:https://doi.org/10.1016/j.compedu.2023.104967
8. Doornbusch, P. (2019). Google's Bach Doodle and Other Online Tools for Algorithmic Music Instruction. College Music Symposium, [online] 59(2), pp.1–3. doi:https://doi.org/10.2307/26902595
9. Fuller, D. (1983). An Introduction to Automatic Instruments. Early Music, [online] 11(2), pp.164–166. doi:https://doi.org/10.2307/3137828
10. Geue, L. (2021). From robots to primates: Tracing the uncanny valley effect to its evolutionary origin. [online] Available at: http://essay.utwente.nl/87564/
11. Lee, P. (2016). Learning from Tay's Introduction. [online] Microsoft Blog. Available at: https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/#sm.00000gjdpwwcfcus11t6oo6dw79gw
12. Longerich, P., Bance, A., Noakes, J. and Sharpe, L. (2015). Goebbels: A Biography. New York: Random House.
13. OpenAI (2024). OpenAI Platform. [online] Openai.com. Available at: https://platform.openai.com/docs/overview
14. Oxford English Dictionary (2023). Artificial Intelligence (n.). [online] www.oed.com. Available at: https://www.oed.com/dictionary/artificial-intelligence_n?tab=meaning_and_use#38531565
15. Parsons, L. (2023). 60% of Musicians Are Already Using AI to Make Music. [online] press.dittomusic.com. Available at: https://press.dittomusic.com/60-of-musicians-are-already-using-ai-to-make-music
16. Turing, A. (1950). Computing Machinery and Intelligence. Mind, 59(236), pp.433–460.
17. Ord-Hume, A.W.J.G. (1983). Cogs and Crotchets: A View of Mechanical Music. Early Music, [online] 11(2), pp.167–171. doi:https://doi.org/10.2307/3137829
TOM CHERTKOW
This essay was commended at the Junior ILA Celebration Evening
The prospect of humans living on planets other than Earth is familiar to many; it is a favourite of billionaires and sci-fi writers alike, yet could it really happen? Could we see people set foot on Mars in the next few decades, or is the whole concept overblown by the media? These questions, among many others, will be addressed here. Firstly, what does it really mean for humanity to be an 'interplanetary civilisation'? Put simply, it requires a permanent human presence on at least one planet other than Earth. Since the term refers specifically to planets, a presence on the Moon would not qualify, although it may be a key stepping stone to reaching another planet, as will be discussed later. Much of this essay will therefore focus on the next most discussed prospect: Mars. Various areas surrounding this question will be explored, namely the physical limitations, the technological issues, the cost, and the health risks.
Although among the more obvious challenges, the physical properties of the planets and their locations are nevertheless important to discuss. It is not only the distance between the planets that is the issue, but how much it varies. At its closest approach, Mars is less than 56 million km from Earth, but at its furthest it is almost 400 million km away.1 This variable distance affects both the time information takes to travel between the planets and how long passes between the most efficient opportunities to travel there. The issue will be most difficult in the short term: once there is some relatively self-sufficient infrastructure on Mars, it will be able to operate as its own entity with little intervention from Earth, but for the initial stages this is one of the key physical challenges.
Martian launch windows (the most efficient times to launch) occur only roughly every two years and two months (780 Earth days).2 This means that any community established there would have to last at least that long without support from Earth. One solution would be to set up a base on the first mission there. This base would likely be much less glamorous than sci-fi depicts: probably some underground tunnels to avoid radiation, with gardens and food stores in addition to other life-support systems for temperature and air. It is nevertheless a vital step towards permanent infrastructure on another world. Another solution would be to terraform Mars (manipulate its environment to resemble Earth's). This is significantly more costly than simply landing a base there, but humanity does have the technical capability.3 In Mars' case specifically, terraforming would likely involve releasing CO2 from the ice at the poles to act as a catalyst for global warming, raising the temperature until it is suitable for life. Establishing an ecosystem would follow, before humans eventually moved there.
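The 780-day figure is the synodic period of Earth and Mars, which follows directly from their orbital periods. A quick sketch of the arithmetic, assuming sidereal years of roughly 365.25 and 687 days:

```python
EARTH_YEAR_DAYS = 365.25
MARS_YEAR_DAYS = 687.0  # Mars' sidereal orbital period, in Earth days

# The synodic period is how long Earth takes to "lap" Mars: the two
# planets' angular rates (revolutions per day) differ by 1/synodic.
synodic_days = 1 / abs(1 / EARTH_YEAR_DAYS - 1 / MARS_YEAR_DAYS)
print(round(synodic_days))  # ≈ 780 days, matching the launch-window spacing
```

The same reasoning explains why any Martian settlement must be self-sufficient for at least one synodic period between resupply opportunities.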
1 Encyclopaedia Britannica (2025) ‘Mars – Basic astronomical data’ available at Mars - Red Planet, Orbit, Moons | Britannica (accessed 29 January 2025)
2 Douglas W. Gage (2013) ‘Humans to Mars: Stay Longer, Go Sooner, Prepare Now’ available at Humans to Mars: Stay Longer, Go Sooner, Prepare Now on JSTOR (accessed 4 February 2025)
3 James S.J. Schwartz (2013) ‘On the Moral Permissibility of Terraforming’ available at On the Moral Permissibility of Terraforming on JSTOR (accessed 7 February 2025)
Another effect of the variable distance between Mars and Earth is the time it takes information to travel between them, since it is limited by the speed of light. Assuming the internet spreads to the rest of the solar system as we expand, the round-trip delay in communications between Earth and Mars would vary from 6 to 11 minutes when the planets are on the same side of the Sun, and from 40 to 44 minutes when they are on opposite sides.4 Because the cause of this issue is the finite speed of light, it is a problem that can never truly be solved, adding extra complication to the existing challenges.
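A short calculation, using the distances quoted earlier, shows where figures of this kind come from (the numbers below reproduce round-trip signal times at the two extremes):

```python
C_KM_S = 299_792.458  # speed of light in km/s

def round_trip_minutes(distance_km):
    """Round-trip light delay across a given Earth-Mars separation."""
    return 2 * distance_km / C_KM_S / 60

print(round_trip_minutes(56e6))   # closest approach: ~6.2 minutes
print(round_trip_minutes(400e6))  # near maximum separation: ~44.5 minutes
```

A "conversation" with a Martian base is therefore impossible in real time; communication would resemble an exchange of messages rather than a phone call.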
Another challenge associated with travelling to other planets is the technology that enables it. This includes both the rockets themselves, which are not yet able to carry the large structures required to set up a base, and any potential future infrastructure to make space travel easier.
Rockets are clearly pivotal to any expansion beyond Earth, as they are simply the best form of propulsion yet developed for travel through a vacuum. Yet there is still a plethora of possible improvements, from SpaceX's reusable rockets, which significantly cut costs, to entirely different forms of propulsion; it is clear that the technology must develop further. One of the main limiting factors of current rocket designs is the proportion of size and mass that must be devoted to the propulsion system, leaving very little for the payload. A potential solution currently being considered is a ground-based system (GBS) which would propel the rocket to space without the rocket needing any propulsion system of its own.
These GBSs could take many forms, ranging from space elevators to magnetic propulsion systems. One approach is ‘beamed energy propulsion’, in which a laser fired from the ground at a rocket above heats the air or a solid propellant there, generating thrust. Another, far better-known approach is the space elevator, in which a satellite in orbit lifts a payload up to it on a platform; however, no materials strong enough for this task currently exist, so it remains purely theoretical. There are also concepts for partially ground-based systems, in which magnetic or other propulsion would launch a rocket towards space, where it would then use conventional fuel to reach orbit. However promising these systems may sound, as yet they have only been used in small, enclosed tests, and the technology is far from ready for any real space travel.5
Having got the rocket to space, the next step is to get it to Mars. One suggestion is a space station that uses the gravity of Earth and Mars to slingshot between them, enabling regular travel between the two planets.6 Experts suggest that the station would encounter the planets roughly every 2.7 years, with a transit time of around six months. Because using a planet's gravity in this way, known as a gravity assist, requires very little fuel to keep the station on the correct orbital path, it would allow inexpensive travel from planet to planet.
Another potential issue with expansion into space is the economic cost to any government or independent body wishing to invest in it. NASA (the National Aeronautics and Space Administration) spent just over 0.1% of the GDP of the United States in 2016, a figure equivalent to roughly $19 billion or £15 billion. Moreover, this was a period of relatively little interest in expansion into space; during the space race of the 1960s, the United States was spending upwards of 0.7% of GDP on NASA.7 These costs currently make it unsustainable for anybody seriously to consider expansion into space, with space agencies instead focusing on cutting the costs of existing technology.
5 J. Coopersmith (2012) ‘Affordable Access to Space’ available at Affordable Access to Space on JSTOR (accessed 12 February 2025)
6 J. Oberg, B. Aldrin (2000) ‘A Bus Between The Planets’ available at A BUS BETWEEN THE PLANETS on JSTOR (accessed 14 February 2025)
7 M. Weinzierl (2018) ‘Space, The Final Economic Frontier’ available at Space, the Final Economic Frontier on JSTOR (accessed 16 February 2025)
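The GDP percentages above translate into dollar amounts directly. A quick sketch, assuming a 2016 US GDP of roughly $18.7 trillion (an assumed round figure, not taken from the source):

```python
US_GDP_2016 = 18.7e12  # assumed approximate US GDP in 2016, USD

nasa_2016 = 0.001 * US_GDP_2016    # ~0.1% of GDP -> about $19 billion
apollo_peak = 0.007 * US_GDP_2016  # the 1960s peak share applied to 2016 GDP

print(f"2016 budget at 0.1% of GDP:   ${nasa_2016 / 1e9:.0f} billion")
print(f"1960s share of the same GDP:  ${apollo_peak / 1e9:.0f} billion")
```

The gap between the two lines of output gives a sense of how far spending has fallen, in today's terms, from its space-race peak.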
A lot of these changes were mentioned when discussing the technology, whether reusable rockets or propulsion systems that can be used repeatedly for different launches. One that has not yet been mentioned, however, is the prospect of the Moon as a checkpoint en route to destinations farther afield. A permanent Moon colony is already being developed by NASA as part of its Artemis program,8 and there have been suggestions to use such a colony to refuel rockets on their way to destinations like Mars. This works because the Moon lacks an atmosphere and has far weaker gravity, making ascents to orbit much more efficient than they would be from Earth and leaving a surplus of fuel. This could dramatically cut the amount of fuel a rocket must carry, and therefore the cost. A mission to Mars following this model would work as follows: first, the rocket would launch from Earth with enough fuel to reach the Moon; once there, it would either land at the base or rendezvous in orbit with a craft from the base; it would then be refuelled, with fuel either brought to the Moon or mined there (although the latter comes with complications, as will be discussed shortly), before continuing to its ultimate destination.
While it may seem that space exploration is entirely an economic negative, with some equipment costing $2.5 billion, there are industries that would benefit as well, for example mining. Various asteroids contain valuable metals such as iron, gold and the platinum-group metals: osmium, iridium, rhodium, ruthenium, palladium and platinum.9 These metals sell for huge sums today, due to their rarity on Earth, potentially making expansion into space economically viable. However, some argue that because so few asteroids have these resources easily accessible, an asteroid-mining industry could quickly become a monopoly. There are also questions to be raised about the legality of such an industry, with the Outer Space Treaty stating that “Outer space, including the moon and other celestial bodies, is not subject to national appropriation by claim of sovereignty”,10 essentially forbidding any party from claiming ownership of space. This treaty has been signed by 105 member states of the UN.
8 NASA (2025) ‘Missions – NASA’ available at Missions - NASA (accessed 18 February 2025)
9 R. E. Loder (2018) ‘Asteroid Mining: Ecological Jurisprudence Beyond Earth’ available at ASTEROID MINING: ECOLOGICAL JURISPRUDENCE BEYOND EARTH on JSTOR (accessed 20 February 2025)
10 A. M. Leon (2018) ‘Mining For Meaning: An Examination of the Legality of Property Rights in Space Resources’ available at MINING FOR MEANING: AN EXAMINATION OF THE LEGALITY OF PROPERTY RIGHTS IN SPACE RESOURCES on JSTOR (accessed 20 February 2025)
11 BBC (2019) ‘The Planets’ ep.2 ‘The Two Sisters – Earth and Mars’ 38:00 available on BBC iPlayer (accessed 21 February 2025)
Another challenge of settling permanently on planets other than Earth is the health risks associated with a different environment. One risk of particular concern on Mars is radiation. This is caused by the solar wind: large streams of charged particles ejected from the sun. The solar wind is very dangerous because it can ionise and kill living cells, and can even strip away parts of an atmosphere. On Earth we are protected by the magnetic field, which forms a barrier blocking the charged particles; Mars has no such protection, making radiation a severe threat there.11 This can be solved in the short term by simply building underground, with the Martian soil blocking most of the radiation, but that would prevent any sustained activity on the surface, at least without large-scale terraforming.
Mars also poses the additional challenge of temperature. It is theorised that around 4 billion years ago Mars had near-perfect conditions for life, with liquid water and temperatures of around 25°C, but due to a variety of factors, including the loss of gas from its atmosphere and the resulting climate change, the average temperature now sits at around -65°C, far colder than Earth's average of around 15°C. This can be fixed easily enough in small bases with heating systems, but it is simply not feasible to heat the planet on the scale often portrayed by the media.
In conclusion, whilst I fully believe that, given infinite time, humans will expand into space, I do not expect it to happen any time soon. This is due to the multitude of challenges any organisation would face simply to get there in the first place; while the vast majority of these can be overcome, the cost of doing so would be immense. Furthermore, in the current global climate, with conflict and rising tensions commonplace, and with increasing awareness of the threat to our own planet from climate change, space expansion is simply not a priority for anyone with the power to pursue it. That said, with progress in space exploration increasingly being made in the private sector, by billionaires rather than governments, it is possible that those involved will not be deterred by the cost and will press on regardless.
1. Encyclopaedia Britannica (2025) ‘Mars – Basic astronomical data’ available at Mars - Red Planet, Orbit, Moons | Britannica (accessed 29 January 2025).
2. Douglas W. Gage (2013) ‘Humans to Mars: Stay Longer, Go Sooner, Prepare Now’ available at Humans to Mars: Stay Longer, Go Sooner, Prepare Now on JSTOR (accessed 4 February 2025).
3. James S.J. Schwartz (2013) ‘On the Moral Permissibility of Terraforming’ available at On the Moral Permissibility of Terraforming on JSTOR (accessed 7 February 2025).
4. Douglas W. Gage (2014) ‘Stepping Stones, Detours and Potholes on the Flexible Path to Mars’ available at Stepping Stones, Detours, and Potholes on the Flexible Path to Mars on JSTOR (accessed 6 February 2025).
5. J. Coopersmith (2012) ‘Affordable Access to Space’ available at Affordable Access to Space on JSTOR (accessed 12 February 2025).
6. J. Oberg, B. Aldrin (2000) ‘A Bus Between The Planets’ available at A BUS BETWEEN THE PLANETS on JSTOR (accessed 14 February 2025).
7. M. Weinzierl (2018) ‘Space, The Final Economic Frontier’ available at Space, the Final Economic Frontier on JSTOR (accessed 16 February 2025).
8. NASA (2025) ‘Missions – NASA’ available at Missions - NASA (accessed 18 February 2025).
9. R. E. Loder (2018) ‘Asteroid Mining: Ecological Jurisprudence Beyond Earth’ available at ASTEROID MINING: ECOLOGICAL JURISPRUDENCE BEYOND EARTH on JSTOR (accessed 20 February 2025).
10. A. M. Leon (2018) ‘Mining For Meaning: An Examination of the Legality of Property Rights in Space Resources’ available at MINING FOR MEANING: AN EXAMINATION OF THE LEGALITY OF PROPERTY RIGHTS IN SPACE RESOURCES on JSTOR (accessed 20 February 2025).
11. BBC (2019) ‘The Planets’ ep.2 ‘The Two Sisters – Earth and Mars’ 38:00 available on BBC iPlayer (accessed 21 February 2025).
12. BBC (2019) ‘The Planets’ ep.2 ‘The Two Sisters – Earth and Mars’ 13:40 available on BBC iPlayer (accessed 21 February 2025).
13. DK (2016) ‘It Can’t Be True! 2’ (accessed 22 February 2025).
Aaron Shahi/Riaan Verma How Does Motorsport R&D Benefit the Automotive Industry?
Adrian Bahari Assessing the Role of Artificial Intelligence in Diagnostic Medicine.
Albert Churchill An Investigation Into the Success of the International Criminal Court.
Alex Jones What Effect Will Unmanned Underwater Vehicles Have on the Future of Underwater Warfare?
Ayush Rao What Is Bill Ackman’s Multi-Billion-Dollar Investment Strategy?
Can Görgüner How Does VAR Affect Football?
Damin Lee How Does Fast Fashion Affect Climate Change?
Dario Alampi How Has the Behaviour and Attitude of Tennis Players Changed Over Time?
Devansh Panda Is Space Exploration Essential For Our Future or an Exorbitant Diversion From Overcoming Earth’s Critical Challenges?
Edward Haley How Conspiracy Theories Originated During the Cold War in America.
Eric Zhang Assessing the Feasibility of Using Tokamak Fusion Reactors as a Sustainable Energy Source in the Near Future.
Ethan Logue Determining the Feasibility of Nuclear Fusion.
Fin Burns How Did the Recent US Election Affect the Economy?
George Clubley The Evolution of Roller Coaster Launches.
George Holmwood Attitudes Towards Asylum Seekers and Solutions.
George Lye To What Extent is the English Civil War the English Revolution?
George Short How Has Saudi Arabia Changed Sports?
Hugh Bayne What Is the Effect of Caffeine on Synapses?
Leo Shaw Dobble - Brilliant But Flawed.
Luke Barrett Why was Propaganda Such an Important Weapon for the Nazis Between 1920 and 1945?
Matt Slater What Factors Have Led to the Development of a N/S Divide in the United Kingdom?
Matthew Wall How Did Trump Win?
Rajvir Mangat Was The Policy Of Appeasement (1930s) a Good Idea?
Rayan Abbas The Impact of Technological Advancements in the Past 50 Years on Medicine.
Rex Morgan Addressing the UK’s Faltering Democracy.
Saif Mian Is the Modern Banking System a Curse or a Blessing?
Tate Brooker Is Going to Mars Worth it?
Theo Odhams What Makes a Good Board Game?
Theo O’Donnell Why Did the Soviet Union Collapse?
Thomas Aczel Is Time Travel Fact or Science Fiction?
Thomas Tallis What Factors Have Contributed to the Recent Dramatic Rise of the Far Right Movement Across Europe?
Toby Beckingham How Does Artificial Intelligence Affect Music and Modern Musicians?
Tom Chertkow Could Humanity Become an Interplanetary Civilisation?
Valentine Wallin Following the Retirement of Concorde in 2003, Will Supersonic Commercial Aviation Make a Viable Return To Modern Life?
Will Batlin Will Cryptocurrencies Entirely Replace Government-Made Money and Become Part of Everyday Use?
William Courts Should We Allow Tourism in Environmentally Vulnerable Areas?
William Staveley How Social Media Has Changed the World of Marketing And Branding.
Zayan Ahmad The Role of Mathematics in the Progression Of Society.