FOOTSTEPS

Welcome to this year’s collection of research essays written by the Lower Fifth Academic Scholars. Each topic in this magazine was chosen by its author as an area of research they were curious about and wanted to explore in more depth, ranging from folklore to gun control and from social media to black holes. The articles are engaging reads thanks to the depth of the research behind them. Many shed light on prevalent subjects and invite the reader to reflect on the significance of what they have learnt.
The Lower Fifth research essays provide a great opportunity for everyone to develop their research skills in an area that they are passionate about, and it is evident how much the Lower Fifth enjoyed doing this through the quality of the work they have produced.
We would like to take this opportunity to thank everyone who has contributed their essays. They truly make for a fascinating read, and it has been a pleasure to follow your thoughts through your research.
We hope you enjoy reading them as much as we did.
Best Wishes,
Tacita Rhys Williams and Jocelyn Yue, Heads of Academic Scholars 2023-2024
Welcome
I am delighted to welcome you to the second edition of Footsteps, the Lower Fifth Academic Scholars’ essay collection. The aim of the Scholars’ Programme at Downe House is to foster a culture of academic endeavour where enjoyment of learning is unlimited. Academic Award Holders have a wide range of choice and opportunities available to have their intellectual life enriched through stimulating and substantive academic endeavours, and they take a lead in encouraging and sharing their love of learning with their peers.
One of the key events in the Academic Scholars’ Programme is the Lower Fifth Research Essay Seminar. We meet once a term for a seminar, for which a group of award holders write and submit an essay in advance of the meeting. The focus of the seminar then becomes discussing and debating issues arising from the essays. It really is a highlight of each term to have the opportunity to meet with the pupils, to share ideas, and for them to take time out of their busy timetables to talk about topics they really care about; topics that go beyond the curriculum and that provide a forum to learn for the joy of it! I have really enjoyed seeing where our discussions have led and how the pupils listen, engage and challenge each other, whether on gun control laws in America or the role and impact of military interventions.
In writing the essays, our scholars are developing research skills that will set them up for independent study in the not-so-distant future: EPQs, A Level NEAs and, looking further ahead, university dissertations. All pupils in the Lower Fifth can of course build on these skills by taking part in the Foote Essay Competition, and many gain inspiration to read beyond the curriculum from the Elective Programme or from the wealth of academic enrichment opportunities on offer to students at Downe House. The Scholars’ Research Seminar essays enable academic award holders to take those extra few ‘footsteps’ on their academic journey and really stretch and challenge themselves and their peers.
Sincere thanks go to Tacita Rhys Williams and Jocelyn Yue, Heads of Scholars Seniors, who have edited this collection of essays and worked incredibly hard to provide a platform to showcase the work of the academic award holders at Downe House. And of course, thank you to the authors of the essays in Footsteps. I hope you enjoy reading it.
Mrs Maria Reichardt Head of Academic Scholars

What were the impacts of the drug Thalidomide?
HAO YUN (HEDY) JIANG
I CAME ACROSS THE TOPIC OF THALIDOMIDE WHILST TALKING WITH SOME OF MY FRIENDS, AND I WAS INTERESTED TO FIND OUT MORE ABOUT THE CRISIS AND THE SCIENTIFIC EXPLANATION BEHIND HOW BIRTH DEFECTS, SUCH AS BRAIN DAMAGE, HEART CONDITIONS AND PROBLEMS WITH THE EYES, WERE CAUSED. MY ESSAY THEREFORE COVERS THE GENERAL IMPACTS OF PUBLIC CONSUMPTION, THE CHEMICAL STRUCTURE OF THALIDOMIDE, HOW IT IS BROKEN DOWN INSIDE THE BODY AND ITS BIOLOGICAL EFFECTS. I ALSO RESEARCHED AND COMPARED DRUG REGULATIONS BOTH BEFORE AND AFTER THE EFFECTS WERE KNOWN. I FOUND IT FASCINATING TO LEARN HOW SUCH A DISASTROUS EVENT COULD, AFTER THE FACT, HAVE POSITIVE EFFECTS ON THE WORLD.
In 1957, in West Germany, a new over-the-counter drug called Thalidomide was introduced, marketed as a remedy for morning sickness during pregnancy. It was initially thought to be safe, and everything seemed normal until the following year, when thousands of infants were born with severe birth defects, including brain damage, deformed limbs, and problems with the eyes, urinary tract and heart. Two doctors, Widukind Lenz from Germany and William McBride from Australia, both noticed that the mothers of these babies had all taken the drug. By 1961, concerns about the birth defects had grown and the medication was withdrawn from markets across Europe. In the same year, Lenz established the connection between Thalidomide and the birth defects. But by then, about 10,000 infants had been born with these defects, and approximately 40% of them had died. Because the rules for drug testing were far less strict at the time, the delay cost many lives.
Even though scientists now knew that Thalidomide was a problem, they still didn’t know why. The answer, it was later discovered, lay in optical isomerism. Optical isomerism is exhibited by molecules that have a chiral centre: a central atom, usually carbon, to which four different groups are attached. The presence of this chiral centre allows two versions of the molecule to exist. These two versions are called optical isomers, or enantiomers, and they are mirror images of each other.
Thalidomide is an example of a molecule that displays optical isomerism. The two isomers are called the S-enantiomer and the R-enantiomer. Since the receptors and enzymes inside the body interact with molecules in very specific ways, the fact that enantiomers are mirror images means they can behave differently. In the case of Thalidomide, scientists discovered that the R-enantiomer helped with morning sickness, but the S-enantiomer caused severe birth defects. After researchers discovered that only the S version caused birth defects, they considered isolating the R-enantiomer so the drug could continue to be used to treat morning sickness. However, it turned out that Thalidomide’s R-enantiomer can switch to the S version inside the human body. This means that even if 100% pure R-enantiomer had been isolated, it still would not have been safe for pregnant women.
Scientists only recently discovered how Thalidomide causes birth defects. The drug has been difficult to study because the compound is broken down in the liver into potentially more than a hundred different compounds, any of which, or some combination of which, could be the cause of the birth defects. But researchers found a way to isolate the broken-down compounds (known as metabolites) and found that one of them, CPS49, promotes the degradation of several transcription factors, which are proteins that bind to a section of DNA to control the speed of transcription of DNA to mRNA. One of these transcription factors is called SALL4, and its loss inhibits the development of new blood vessels at a crucial stage in the pregnancy.
Women usually took the drug at about five to nine weeks into their pregnancy to combat morning sickness. This is a crucial stage, as it is when the limbs of the baby are still forming. The blood vessels involved in this process are still under-developed and are rapidly changing and expanding to make limb growth possible. This is why the most common birth defects caused by Thalidomide were abnormalities in the limbs.
The birth defects from the Thalidomide crisis led to greater drug regulation and monitoring in many countries. In the United States, FDA approval for new drugs had been required by the Food, Drug and Cosmetic Act (FDCA) since 1938. However, it was not until 1962, after the Thalidomide crisis, that the FDCA required new drug sponsors to demonstrate the safety and effectiveness of their products before receiving approval from the Food and Drug Administration (FDA). Additionally, drugs intended for human use could no longer be approved on the basis of animal testing alone.
The United States was not alone in taking action to prevent abnormalities from occurring on the same scale in future; monitoring systems were introduced in most countries. In the UK, the Medicines Act 1968, a direct result of the crisis, drew distinctions between prescription-only drugs, drugs available in pharmacies and drugs available for general sale. The Yellow Card Scheme, initiated so that doctors could report previously unknown side effects of medication they prescribed, has since been widened so that anyone can report a side effect. This ensures that any effect a patient reports, or any signs evident at examination, are recorded by practitioners together with the drugs prescribed. The data is filed immediately onto a national database, so correlations between effects and drug usage are noticed quickly and can be dealt with before thousands of people are affected. However, the Thalidomide crisis also illustrated a fundamental problem with introducing new drugs: no drug can be proven perfectly safe under every condition, and often the ultimate answers only emerge after general release. It also became mandatory after the tragedy to test new drugs on pregnant animals, strengthening the evidence from drug trials that substances marketed to pregnant women are safe for use in pregnancy.
Despite its negative effects, Thalidomide found an unexpected use. In 1964, at Jerusalem’s Hadassah University Hospital, a leprosy patient was given Thalidomide when other tranquillisers and painkillers had proved useless, and his doctor, Jacob Sheskin, noticed within three days that the leprosy symptoms had subsided and the skin lesions had healed. However, when the patient stopped taking Thalidomide, the symptoms returned: the drug seemed to suppress the disease temporarily, but was not able to cure it. As a result, in 1967 the World Health Organisation (WHO) ran a clinical trial on the use of Thalidomide for leprosy, and after further positive results, Thalidomide was introduced as a treatment for leprosy in many countries. Nevertheless, its renewed use remains controversial because of its history.
Overall, Thalidomide caused severe birth defects in thousands of infants, mostly in Europe, and many, unfortunately, did not survive. In response, many countries introduced greater precautions and regulations, such as systems for post-market drug surveillance. In 1998, almost forty years after rejecting it over the birth defect problems, the FDA approved Thalidomide. This time, however, it was approved for treating cancers such as multiple myeloma and skin conditions such as Hansen’s Disease, or leprosy, and it is also used to control some AIDS-related conditions. Recent research is leading some scientists to believe that, in certain cases, Thalidomide could help with a number of debilitating diseases, including some forms of breast cancer. They just make sure not to prescribe it to anyone who is pregnant.
Gun control in the United States: Common sense vs the Constitution
CHARLOTTE WHEELER

I CHOSE THE TOPIC OF GUN CONTROL FOR TWO REASONS. FIRSTLY, A LEGAL INTEREST: I USED TO STRUGGLE TO UNDERSTAND HOW THIS PROBLEM COULD NOT BE LEGISLATED AWAY. SECONDLY, A SOCIAL ONE: I WANTED TO UNDERSTAND THE NATURE AND LOGIC OF THE ENDURING SUPPORT FOR GUNS DESPITE THE VIOLENCE REGULARLY INFLICTED ON AMERICAN CITIZENS. THOUGH I MAY NOT HAVE FULLY ANSWERED ALL THE QUESTIONS IN MY ESSAY, THIS RESEARCH ALLOWED ME TO BETTER UNDERSTAND THE NEARLY UNSOLVABLE NATURE OF THIS ALMOST UNIQUELY AMERICAN PROBLEM.
America’s foundation for law was created on 17 September 1787. Ratified in 1788, and in operation since 1789, the United States Constitution is the world’s longest-surviving written charter of government. The Second Amendment states: ‘A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed’.
The United States has the highest homicide-by-firearm rate of the world’s most-developed nations. In the late 1990s, there was an abrupt increase in gun violence in schools, and since 1999 over 300,000 children have experienced gun violence. In 2023 alone there have already been more than 13,900 deaths from gun violence.
America does not just have a problem with gun violence in schools. The United States has seen at least 201 mass shootings so far this year, 15 of them in May alone. The Congressional Research Service defines a mass shooting as an incident in which four or more victims are shot or killed; this definition covers only public attacks and excludes domestic incidents.
In 2022, there were 20,200 malicious or accidental deaths caused by firearms, with a further 38,550 people injured. Of this total, around 6,000 children under seventeen were killed or injured. Whilst the 2022 numbers are not yet finalised, there were also around 24,000 suicides by gun. Ironically, deaths from defensive use of a gun, at around 1,178, were fewer than the 1,626 deaths from unintentional shootings. In 2022, there were 1.8 mass shootings per day, 55 people were killed by a gun every day, and there are 120 guns owned per 100 residents. Sometimes this is seen as a problem related to crime, but a landmark 1999 study by Franklin E. Zimring and Gordon Hawkins of the University of California, Berkeley, found that American crime is simply more lethal. For example, a New Yorker is just as likely to be robbed as a Londoner, but the New Yorker is 54 times more likely to be killed in the process.
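As a rough sanity check, the per-day figures quoted above can be reproduced from the annual totals with simple division (all numbers are taken from the essay itself; the rounding is mine):

```python
# Cross-checking the per-day figures against the 2022 annual totals.
firearm_deaths_2022 = 20_200      # malicious or accidental firearm deaths
mass_shootings_per_day = 1.8

deaths_per_day = firearm_deaths_2022 / 365
print(round(deaths_per_day))              # 55 deaths per day, as quoted

mass_shootings_2022 = mass_shootings_per_day * 365
print(round(mass_shootings_2022))         # roughly 657 mass shootings in the year
```

The two figures are consistent: 20,200 deaths over 365 days does indeed average about 55 per day.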
The modern interpretation of the Second Amendment is that law-abiding citizens have the right to carry and own guns with limited restrictions. Nonetheless, there have been attempts to put gun control in place.
For example, in 1993 the Brady Handgun Violence Prevention Act (the Brady law) was passed. It was named for James B. Brady, an official in the administration of President Ronald Reagan (1911–2004) who was shot during the 1981 assassination attempt on the President. The Brady law imposed a five-day waiting period on handgun purchases and a background check on buyers to determine whether they had a history of criminal behaviour, mental illness or drug use.
In April 2023, in Colorado, Governor Jared Polis signed three bills into law that tighten restrictions on gun purchases and possession, as well as a fourth that makes it easier for victims of gun violence to sue firearm companies. The new laws include a higher minimum age for buying any gun, a three-day waiting period between buying and receiving a gun, and an expansion of the state’s red flag law (a law that permits a state court to order the temporary removal of firearms from a person). The law now allows people such as doctors, mental health professionals and teachers, not just family members and law enforcement officers, to petition judges to remove a person’s firearms. A further bill, which would have banned semiautomatic firearms, failed to make it out of the House Judiciary Committee, and another bill to ban the production and sale of ghost guns (homemade guns) is still in front of legislators.
This is an incredibly important step in introducing gun control and defies the national trend in the USA towards constitutional carry. The bills were signed five months after an assailant killed five people and injured more than a dozen others in an LGBTQ+ nightclub in Colorado Springs, an attack which occurred two weeks after Governor Polis was re-elected. However, despite having been passed, the bills are already being challenged.
Other countries have struggled with gun violence, but they seem to have reacted more quickly. Canada’s modern gun laws were also prompted by gun violence. In 1989, a student armed with a semiautomatic rifle killed fourteen students and injured more than a dozen others at a Montreal engineering school. The incident is widely credited with driving major gun reforms that imposed a twenty-eight-day waiting period for purchases; mandatory safety training courses; more detailed background checks; bans on large-capacity magazines; and bans or greater restrictions on military-style firearms and ammunition. Firearms in Canada are divided into three classes: non-restricted weapons, such as ordinary rifles and shotguns; restricted, such as handguns and semiautomatic rifles or shotguns; and prohibited, such as automatic weapons. It is illegal to own a fully automatic weapon unless it was registered before 1978.
In Australia, the turning point for modern gun control was the Port Arthur massacre of 1996, when a young man killed thirty-five people and wounded nearly two dozen others. The rampage, perpetrated with a semiautomatic rifle, was the worst mass shooting in the nation’s history. Less than two weeks later, the conservative-led national government pushed through fundamental changes to the country’s gun laws in cooperation with the various states and territories, which, as in other federal nations, are the bodies that regulate firearms.
Only the United States, where the rate and severity of gun violence are unparalleled outside of conflict zones, has consistently refused to respond by tightening gun laws. This can only prompt the question: why not? Some argue that gun control would be ineffective; however, a recent analysis of 130 studies from 10 countries showed that gun control legislation tends to reduce gun murders. The main difference between English and United States safeguards is that English protections rest on statute or case law and may be changed by ordinary statute, whereas US safeguards are constitutional and cannot be relaxed unless the Supreme Court later reverses its interpretation or the Constitution is amended.
So, why has the Second Amendment not been repealed, and is repeal even possible? The first process requires that any proposed amendment to the Constitution be passed by both the House and the Senate with two-thirds majorities. It would then need to be ratified by three-fourths of the 50 states, or 38 of them. The second option for repealing an amendment is to hold a Constitutional Convention. In that case, two-thirds of state legislatures would need to call for such a convention, and the states would write amendments that would then need to be ratified by three-fourths of the states. While this is theoretically possible, it has never happened since the Constitution was ratified.
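The arithmetic behind those thresholds can be made concrete; a minimal sketch, assuming today’s chamber sizes and rounding fractional results up (you cannot cast part of a vote):

```python
import math

HOUSE_SEATS, SENATE_SEATS, STATES = 435, 100, 50

# Proposal stage: two-thirds of each chamber must approve the amendment.
house_votes_needed = math.ceil(HOUSE_SEATS * 2 / 3)    # 290 of 435
senate_votes_needed = math.ceil(SENATE_SEATS * 2 / 3)  # 67 of 100

# Ratification stage: three-fourths of the states.
states_needed = math.ceil(STATES * 3 / 4)              # 38 of 50

print(house_votes_needed, senate_votes_needed, states_needed)
```

Rounding up is what turns three-fourths of 50 (37.5) into the 38 states quoted above.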
The Constitution’s first three words – “We The People” – affirm that the government of the United States exists to serve its citizens. At what point will people collectively begin to say that the Second Amendment, as it exists now, does not serve the people?
Why is the MBTI an inaccurate way to judge a person’s character and future reactions?

PEARL (IVIE) AVWENAGHA
WHEN CHOOSING THE TOPIC FOR MY SCHOLARS’ RESEARCH ESSAY, I WANTED TO TAKE MYSELF OUT OF MY COMFORT ZONE AND WRITE ABOUT SOMETHING NOT PARTICULARLY STEM-RELATED, AS THAT IS WHAT I USUALLY GRAVITATE TOWARDS. INSTEAD, I WANTED TO LOOK AT A MORE PSYCHOLOGY-BASED TOPIC, AND PERSONALITY TESTS SEEMED LIKE A GOOD PLACE TO START. MY RESEARCH REPEATEDLY TURNED UP THE SAME FINDINGS, NONE OF WHICH I HAD ANY CLUE ABOUT BEFORE DECIDING TO EXPLORE THE SUBJECT. DELVING INTO THIS NEW POOL OF KNOWLEDGE PROVED VERY FRUITFUL, AND I HAVE SUMMARISED MY FINDINGS IN MY ESSAY.
Have you ever found yourself reading up on your personality? For example, researching the traits of your zodiac sign, or taking a personality quiz? More often than not, your results may be deeply gratifying. One such personality quiz is the Myers-Briggs Type Indicator (MBTI).
The MBTI test is a personality profiler that has been famous across the globe for decades. Like zodiac signs and compatibility tests, it is a way to learn more about yourself by answering questions based on your reactions to specific scenarios. However, psychologists have long looked down on this test, claiming it has no scientific basis. So, why is the MBTI test such an inaccurate way to judge a person’s character and future decisions?
The bulk of the MBTI idea is credited to Swiss psychologist Carl Jung. He published a book named Psychological Types in 1921, which had several interesting theories about human personalities and how they can be grouped into categories. Firstly, he stated that humans fall into two main types – ‘perceivers’ and ‘judgers’. Perceivers are then divided into groups where they are driven by ‘sensation’ or ‘intuition’, and judgers, ‘thinking’ or ‘feeling’. These four types, very similar to each other, could then be split into ‘introversion’ or ‘extraversion’.
A few decades later, an American woman named Katharine Briggs and her daughter Isabel Briggs Myers, both fascinated by the theories presented in Psychological Types, modified Jung’s theories and made them more complex. Eventually, the categories became a sort of 4-digit binary number, with two options for each digit: Introvert or Extrovert, Intuitive or Sensing, Thinking or Feeling, and Judging or Perceiving.
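That ‘4-digit binary number’ structure means exactly 2 × 2 × 2 × 2 = 16 possible types, which is easy to illustrate (a toy sketch; the one-letter codes follow the test’s own convention):

```python
from itertools import product

# Each of the four dichotomies contributes one letter to the type code,
# exactly like one digit of a 4-digit binary number.
dichotomies = [("I", "E"),   # Introvert / Extrovert
               ("N", "S"),   # Intuitive / Sensing
               ("T", "F"),   # Thinking / Feeling
               ("J", "P")]   # Judging / Perceiving

types = ["".join(letters) for letters in product(*dichotomies)]
print(len(types))   # 16 possible types
print(types[0])     # INTJ
```

Every MBTI result, including the INTJ and ISTJ mentioned later in this essay, is one of these sixteen combinations.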
Modern psychologist Adam Grant has put Carl Jung’s theories in context. In an online article, he wrote that ‘[Jung’s theories came] before psychology had become an empirical science, and Jung made [them] based on his own experiences’. Even Jung himself wrote in his book, ‘...every individual is an exception to the rules [I’ve created]’. Furthermore, Katharine and Isabel had no formal training in psychology, although Isabel held a Political Science degree. The fact that Jung, Katharine and Isabel did not apply scientific studies to develop the concepts, instead using their own observations and experiences, fuels many to say that your MBTI is no more meaningful than your zodiac sign.
Another reason the MBTI test isn’t accurate is its ‘either/or’ system, which doesn’t do justice to what it is supposed to measure about the human personality. In the MBTI test, characteristics such as ‘thinking’ and ‘feeling’ are treated as two mutually exclusive ends of a spectrum.
However, decades of evidence show that you can prefer ideas and data when making a decision and, at the same time, prefer people and emotions. Most of the time it depends on the context and situation, and most scenarios are not mentioned in the test. Because of this grouping style, many people find that their results are inconsistent. For example, I frequently fluctuate between INTJ and ISTJ, with unofficial tests telling me I’m around 52% sensing and 48% intuitive. Jung himself noted that ‘there is no such thing as a pure extravert or a pure introvert, (and that) such a man would be in a lunatic asylum’.
So, why do so many people let MBTI types have such a big hold on their future ideas and reactions? Annie Murphy Paul offers two main arguments as to why people still give MBTI results such great power over their lives. The first is that thousands of people have invested time and money into becoming certified MBTI trainers and coaches, which, yes, is a real thing. The second is the ‘aha’ moment people get when they gain insight about themselves or others. If you are satisfied with the type provided (and your type will be described in the best way possible, even when the negatives are stated), it is hard to admit to yourself that the MBTI isn’t as accurate as you originally thought.
In conclusion, there are several plausible reasons as to why the MBTI test is inaccurate. Of course, everyone can take it for fun, or for curiosity’s sake, but it is important to remember that this test does not have a scientific origin and has been proven to be inconsistent and inaccurate because of that.
How has weather affected conflict?

LIBERTY SPRY
WEATHER CONTROLS OUR LIVES, WHETHER THROUGH SHORT-TERM DECISIONS LIKE TAKING THE DOG FOR A WALK OR THE LONGER-LASTING EFFECTS OF WEATHER THAT DAMAGES PROPERTY AND HOUSING. NOBODY CAN CONTROL THE WEATHER, AND WE ARE ALL SUBJECT TO ITS UNPREDICTABILITY. THE WEATHER HAS INFLUENCED MANY PIVOTAL MOMENTS IN HISTORY, SUCH AS THE D-DAY PREPARATIONS, HITLER’S INVASION OF RUSSIA AND THE SPANISH ARMADA. HISTORICALLY, PEOPLE HAVE MIGRATED BECAUSE OF WEATHER, AND IN MORE RECENT TIMES THE EFFECTS OF GLOBAL WARMING ARE BECOMING MORE WELL KNOWN AND ACTED UPON.
A pivotal moment that changed the Christian world forever was the Spanish Armada. Disagreement between Queen Elizabeth I and King Philip II of Spain caused England and Spain to fight regularly, with the two countries holding opposing views on the Dutch Revolt, piracy and the French Civil War. Philip wished to invade England and replace Elizabeth with a Catholic monarch, thereby settling their religious feud. After two years of preparation and delays, his fleet of 130 ships set sail from A Coruña on 1 July 1588. Through their vast network of spies throughout Europe, the English knew that an invasion was coming and had time to position their navy along the English Channel. The first encounter between the two sides was on 31 July off Plymouth. Thanks to the position of the wind, the English had the upper hand and won the engagement with relative ease. The Armada then moved east to Portland, then the Solent, and eventually to Calais, France. The fleet could not find a way through England’s efficient navy and continued
moving east. At Calais the fleet was vulnerable and had no choice but to anchor, try to find shelter, and tend to the sick. When the English realised that the Armada was exposed, they sent out eight fire ships, set alight, which the wind drifted towards Calais. The Spanish were thrown into chaos and attempted to escape in whatever way they could. The fleet’s only choice was to sail up into the North Sea, around the British Isles, and circle back to Spain. Yet at this time longitude was difficult to calculate, and the fleet sailed too close to Ireland. Dire winds tore 35 ships to shreds: several sank in the squalls, while others ran aground or broke apart after being thrown against the shore. The weather caused more casualties and more damage overall than the English attack. The battle was seen as a major propaganda victory for Europe’s Protestants, demonstrating their power against supposedly ‘invincible’ odds. Queen Elizabeth I remained monarch until 1603, 15 more years. Was it, however, the naval officers’ victory, or simply the Gulf Stream and the rough waters of the Atlantic? Was the unsuccessful Spanish Armada truly a triumph of military might for England? The victory, in my opinion, is due half to the weather and half to the navy. The navy is owed some recognition for relentlessly guarding the three major ports, which stopped the Armada from entering.
More recently, in 1941, Operation Barbarossa in WWII was altered hugely by the weather, which helped the Russians to win the conflict. Hitler’s original plan was to launch the attack on the USSR in mid-May, but Germany became embroiled in campaigns against Yugoslavia and Greece in April. As a result, Hitler and his officials were compelled to alter their plans and attack the USSR in late June. Misguidedly, Hitler believed that he could overcome the entire Red Army in two to three months, before the autumn weather arrived. To begin the campaign, Hitler launched three army groups: one to the north, one towards the centre, and a final group to the south through southern Poland and Ukraine. The Germans were successful in their surprise attack, which led to the
Red Army being caught off guard and vulnerable. After months of battles the Germans were at a standstill; after going back into Ukraine to destroy pockets of Russian resistance, they were met with trenches 90 km from Moscow. This is when the weather hit. Russia’s autumn rains, the rasputitsa, turned the western Soviet Union into a muddy quagmire. German troops lacked essential equipment such as winter coats, and their vehicles had not been winterised or properly lubricated. Germany was unable to move troops at the speed Hitler desired and burned through fuel reserves in the process. Later in the year, when the battle was once again at a standstill, the winter weather came, bringing thick layers of snow and very low temperatures. During this time, more soldiers were in hospital for frostbite than for gunshot wounds. The Red Army knew the Russian weather well and was equipped for the cold. Alongside a few other variables, the weather was a major factor in the Allies winning the war. In a different scenario, where Hitler’s plans had not been pushed back, there might have been a different outcome to WWII. Furthermore, Hitler should have been aware of the danger of bad weather owing to Napoleon’s attempted invasion nearly 130 years earlier. A factor that eventually caused the downfall of Napoleon was the weather: the Grande Armée was caught in electrical storms, freezing hail and sleet. There were even stories of soldiers ripping open dead animals and reaching inside for warmth, and piling dead bodies in windows for insulation. I find it intriguing that for centuries Russian weather has been able to deter invaders and has stopped European powers from taking over.
Another major decision in WWII influenced by the weather was D-Day. The invasion was the largest combined military, naval, air and land operation ever attempted, involving the landing of 153,000 men from America, Britain and Canada. Captain James Stagg, the head of weather forecasting, relied on observations from different parts of the British Isles, from ships in the Atlantic Ocean, from reconnaissance flights over western Europe, and on German weather forecasts made available by decoding Enigma. Originally, the Allies were confident that the weather would be ideal on 5 June and that it would remain suitable for several consecutive days. On 1 June, however, the meteorologists realised that 5 June would be stormy with turbulent seas. Stagg advised that the invasion be delayed by 24 hours, which Eisenhower approved. From all the information collated, Stagg was able to spot a small window
where the troops could land on the beaches. The German meteorologists lacked data from the British Isles and the North Atlantic, which meant they believed the weather on 6 June made an invasion impossible. The Allies saw that there was a tight window in which they could invade while the weather was manageable and catch the Germans by surprise. During the fighting, layers of cloud made bombing raids difficult, the wind at the coast was roughly one point on the Beaufort scale stronger than expected, and the beaches were not as wide as thought. Despite the bad weather, the D-Day mission was successful, and many countries owe their democracy to it. However, of the 2,052,299 personnel who landed in Normandy between 6 June and 25 August, 36,976 died. Overall, the weather was advantageous for the Allies, as it provided them with the element of surprise. It could also be argued that the weather was a hindrance, as it delayed the plans by a day and sadly led to the drowning of more than a hundred soldiers.
Ultimately, weather has changed the world completely – in this essay, through the lens of WWII and the Spanish Armada. All three major events are significant in different ways. The defeat of the Spanish Armada promoted Protestantism and began the slow decline of Spain’s power and, in time, the loss of its huge empire; the victory also inspired nationalism and gave England the freedom to conquer more territory and grow its own empire. With religion in mind, Protestants believed the weather was God throwing up fronts to affirm Protestantism in England and to stop Philip II from changing their religion. The defeat of Operation Barbarossa changed the world as it stopped Hitler from gaining Russia; potentially, had he invaded earlier, he would have won the war, for before the bad weather hit he had already taken most of the land west of Moscow. Finally, D-Day changed history incomparably, and had it failed, Europe would look very different today. Failure was a very real possibility to the Allies: Eisenhower himself drafted a statement taking full responsibility should the operation fail. Had D-Day not succeeded, there would likely have been many more battles, and a future unimaginable to us would have taken place; the liberation of France, and the world as we know it, would not have been possible without that victory. Arguably, the weather has altered history massively throughout WWII and even before. Something so unpredictable yet so important has changed religion, the occupation of countries and democracy in ways that many would not have thought possible.

05.
Concepts of Russian Folklore
MARIA TARABAN
I DECIDED TO WRITE MY SCHOLAR’S RESEARCH ESSAY ON EASTERN EUROPEAN FOLKLORE AS BEING IMMERSED IN IT THROUGH MY RUSSIAN GRANDPARENTS WAS A BIG PART OF MY CHILDHOOD. I FOUND SLAVIC FAIRY TALES ENCHANTING AS I READ BOOKS, WATCHED CARTOONS, AND SAW ILLUSTRATIONS DEPICTING THEM, BUT I NEVER QUITE KNEW THEIR ORIGINS, WHICH, WHILE WRITING MY ESSAY, I DISCOVERED TO BE FASCINATING. I PARTICULARLY ENJOYED RESEARCHING THE SIMILARITIES BETWEEN SLAVIC AND BRITISH FICTIONAL STORIES, SUCH AS HOW CERTAIN CHARACTERS LIKE GIANTS SHARE TRAITS IN BOTH CULTURES.
Russian folklore stems from the ancient Slavic peoples, who lived in various tribal groups across Eastern and Central Europe during the Middle Ages and the Migration Period (around 400 AD – 1000 AD). These groups laid the foundation of what are now the Slavic countries, which include Croatia, Bosnia, Poland, Slovakia, Russia and more. Many of these Slavs practised a polytheistic Pagan religion, one that shared many features with the religions of Ancient Greece and Rome. Paganism’s main beliefs incorporate the ideas that the cycles of life such as birth, growth and death carry spiritual meaning, that nature is sacred, and that the world is a place of life and joy rather than sin and suffering. The ancient Slavic people were divided into three groups – the East, West and South Slavs. Using the key beliefs of their practised religion, each group came up with its own deities, rituals and mythologies that paved their way into the deep roots of Russia’s culture. Ultimately, Russian folklore is a collection of fairy tales, myths, poems and rituals that significantly influenced generations of people’s behaviour and way of life in Russia.
One of the most well-known characters in Russian folk tales is Baba Yaga. In Slavic folklore she is a supernatural being in the form of a disfigured and hostile old lady. Famously, she lives in an old hut in the woods that stands upright on chicken legs and can move itself from place to place. In many stories she is menacing, especially towards children, whom she scares and in some cases even eats. Yet in other stories she is wise and caring, offering magical solutions to those who find themselves lost in the woods or who seek her advice. In these instances, Baba Yaga is portrayed as a maternal figure mystically connected to nature and wildlife. Another mythological creature is the giant Balachko, also known as the ‘three-headed giant’. His two extra heads work with the elements of nature to protect him: with one head he can spit fire, and with the other he can breathe in freezing air from cold winds and turn it against his enemies. In the legend, Balachko kidnapped a princess, and the Tsar of the story went on a mission to free her. When fighting the Tsar, the giant used up his supply of elements and needed time for them to regenerate, at which point he became an easy target; the Tsar killed him and saved the princess.
Slavic tradition also includes characters in the form of spirits. An example is Domovoy, known as the good spirit of the household. He is most often depicted as a short elderly man with flashing eyes and grey hair, though in some tales he appears as an animal or as the ghost of a late family member. The Domovoy’s role is to protect the people of the house, especially the animals and children, who are seen as the most vulnerable. He does this by warning them of possible threats to their wellbeing, such as future intruders. The Domovoy can shape-shift into other spirits and forms of life, such as animals, to best suit the family in each particular situation. In some stories he is accompanied by a female house spirit called Domania, who is
considered to be the ‘goddess of the home’. Characters and fairy tales from Russian folklore have also been embedded into literature – for instance, in bylinas. A bylina, which translates into English as “something that was”, is an old Russian epic poem of a kind originally transmitted orally. Although they contain factually proven historical detail, the poems are embellished with knights, dragons and other fantastical creatures from folklore, making bylinas a genre of Slavic folk art. More recently, folklore influenced literature during the Golden Age of Russian literature in the 19th century. The world-renowned novelist Nikolai Gogol, having been taught local folklore in his upbringing, was fascinated by supernatural concepts – both divine and demonic – which he incorporated into his work. In his novella The Viy, the title character, the king of the gnomes, has the ability to possess men, just like the vampires of Russian folklore; along with other vampiric qualities, such as being sacrilegious, this suggests that Gogol took inspiration from folkloric vampires in creating ‘the Viy’. Similarly, Russian folklore had a great influence on art. Wassily Kandinsky is an example of an artist who adopted a particular style to give his paintings of folk fairy tales a mystical impression. One of his works, known as ‘Couple on Horseback’, presents the two protagonists of the fairy tale Ivan Tsarevich & the Grey Wolf. There, it is clear that Kandinsky used folklorist techniques such as bright colours, woodcut prints and abstract
prints to show fantastical images such as the grey wolf that befriended Ivan in the tale. Furthermore, Russian folklore inspired films and cartoons, a well-known example being the movie Jack Frost. First made in Russia in 1964, with more modern versions created since, the film is closely based on the folk tale Morozko. The original story involves an innocent young girl named Anastasia and a conceited young man named Ivan, cursed with the head of a bear. The film is both similar to and different from the tale; for instance, the man in the film is named Jack rather than Ivan.

During the Pagan era, sorcery and witchcraft consisted of occult practices such as fortune telling, dream interpretation, charms and the changing of weather. Witchcraft was based on the belief that magic could help people understand natural phenomena and cast away misfortunes through the mystic powers that certain people, often known as witches, were able to obtain. A major element of witchcraft was the power of remedies made from natural matter such as herbs, spices and plants; it was believed these could either treat people who were unwell or, with the help of so-called ‘black magic’, curse them. An example of a plant used by Slavic witches is the toxic belladonna, grown in Poland, which has hallucinogenic effects and can cause death. The spells that Slavic witches cast were mostly used to gain prosperity, wisdom, love or revenge. The love spell was used to attract a person’s beloved or to bring fertility.

It is said to have worked if a person addressed the charm to the three winds – the Western Waft, Eastern Waft and Northern Waft – and spoke the name of their beloved; after this, it was believed, the beloved would fall unconditionally in love with them. Although many people in Russia still practise the old religion today, Paganism was abolished as Russia’s main religion in the 10th century by Prince Vladimir I, who replaced it with Christianity. At first there was no punishment for continuing folk traditions, but they became less significant. In 1648, however, a new law created by Tsar Alexis of Russia meant that Pagan practices such as witchcraft would result in harsh punishments, including execution; the authorities feared that sorcery was being used for malicious purposes and to cause harm. Telling folk tales also became illegal, and even during the time of the Soviet Union (1922-1991) they were banned, being seen as too aristocratic. Only in 1990 was folklore in Russia officially unveiled again, thanks to a Soviet law guaranteeing the ‘full equality of all religious groups’. Despite all the controversy surrounding Russian folklore, there is no doubt that its folk tales helped improve aspects of society such as people’s character, because the foundations of what these stories taught readers are moral messages and positive ideas: being caring, trusting, courageous and intelligent.

06.
Will Social Media ever be authentic?

WHILST LEARNING MORE ABOUT UP-AND-COMING SOCIAL MEDIA PLATFORMS, I BECAME MORE AND MORE INTRIGUED BY BEREAL, AN ANTI-SOCIAL MEDIA APPLICATION TAKING THE WORLD BY STORM.
WHILST RESEARCHING MORE ABOUT THE APP, I STUMBLED UPON THE IDEA OF PERFORMATIVE AUTHENTICITY AND THE PROCESS THAT GOES WITH CREATING APPLICATIONS. I THEN DELVED FURTHER, LOOKING AT CLINICAL RESEARCH, PSYCHOLOGY AND THE IDEA OF AUTHENTICITY.
‘Today, there is a little premium placed on being authentic’, writes philosopher Gordon Marino. But as much as this statement may be true, ‘authenticity’ has become a buzzword in recent years, according to the Harvard Business Review. Everybody wants to be authentic.
So, what is the definition of authenticity? Authenticity is the quality of being genuine or unpretentious. As an ethical ideal and as a standard of what it is good to be, authenticity means a whole lot more than a lack of genuineness; it also regards the features of our personal lives that define us.
The world around us is changing in terms of what we (consumers) want to see. Carefully curated Instagram posts are losing their appeal as people switch to media sharing that is more candid and relatable. If you are present on social media, you may notice the increase in unpolished photo dumps on Instagram, Snapchat’s ‘play once’ format, and even Wordle’s scarcity tactic.
Many people believe that Instagram feeds have turned into priceless mosaics, where every part has to be perfect to fit the whole. As time goes on, more people are joining the movement to make social media casual again: rather than mosaic-like feeds, they are increasingly convinced that social media should go back to showing the everyday moments of our lives rather than just the highlights. However, even as people call for social media to be casualised, many think the movement will be a short-lived sensation.
In the last few years there has been a resurgence of film and disposable cameras, which in my view is paradoxical: a film photograph is spontaneous, yet time and time again you see people immortalising it by editing it and posting it on Instagram and other platforms. A prime example is the many teens who go out of their way to make their photos appear worse. Huji Cam, which makes images look as if they were taken with a disposable camera, has been downloaded 16 million times, according to Lorenz. Performative authenticity is the idea that your genuineness is an act. I think that if you are not speaking to one person directly, you are always performing – it does not matter whether that is online or in person – much as the prevalent phrase has it: ‘if you’re not paying for the product, you are the product’.
Although the ‘make social media casual’ movement rejects the curated Instagram culture we see now, all it does is replace it with a different aesthetic and put more value on effortlessness. An example is blurry pictures, stylised to show that you do not care how your photo looks because you are having fun. But is casual Instagram just another manufactured aesthetic? It is clear, in my view, that planned authenticity is simply not authenticity anymore. Most people agree that all we can do is stop glorifying people who fake their lives, whether before or after the era of curated social media.
Another example is typing in lowercase. I think this can be categorised as performative because, although it makes you look carefree, to type in lowercase all of the time you have to go into settings and turn off auto-capitalisation to get that effect. A great way to sum up performative authenticity is ‘calculated casualness’.
‘The way we think about authenticity poses a real threat to our capacity to grow and learn’, says Herminia Ibarra. It is important to note that even the act of attempting to be perceived as authentic immediately invalidates your authenticity. For example, taking a photo of yourself whilst crying divides opinion sharply – for some, taking a true moment of deeply personal emotion and burning it as public fuel is several steps down the road towards our emotions being sold as products.
I think it is vital to realise that everything we see online is a glimpse of part of something, never the whole thing. It is almost impossible to be fully genuine or authentic online, because the medium of social media simply does not allow for it. The knowledge that we are being observed changes our behaviour, and, as I mentioned earlier, if you are not speaking to one person directly, you are always performing. The Hawthorne effect, documented in clinical research and studies, describes this phenomenon: we adjust what we post with an audience in mind – the thought process of ‘what if my boss/spouse/dad reads or sees this?’
An article by Help Guide explains the vicious cycle of unhealthy social media use. When people feel lonely, anxious, or stressed, they tend to use social media more to relieve boredom and feel connected to other people’s lives. Using social media more often, however, increases fear of missing out and feelings of inadequacy and dissatisfaction. Consequently, these feelings negatively affect your mood and worsen symptoms of depression and stress. These symptoms cause you to use social media even more, and the cycle continues.
I could not write about performative authenticity and society’s interest in realism without mentioning the controversial W Magazine issue shot by Juergen Teller. The ‘Best Performances’ issue features images of Jacob Elordi, LaKeith Stanfield, Jonathan Majors, Steven Yeun, Michelle Pfeiffer, Sacha Baron Cohen, Tessa Thompson and more.
Teller is well known for his informal and unretouched style of shooting, so it was almost inevitable that his work for W Magazine followed the same style; Riz Ahmed, for example, mentioned that his shoot for the issue lasted less than 20 seconds. Many share the view that Teller’s work attempts to break the ‘fourth wall’ between celebrities and non-celebrities. To many the images are jarring, but more argue that this is a fresh take compared with what we are used to: the overly edited Instagram aesthetic of our time.
Ostensibly, the point of shooting a magazine cover
is to get lots of eyes on the images, so Teller has evidently fulfilled his role as a photographer – and generated controversy at the same time. I think the informal aesthetic is almost so of-the-moment that it is difficult to recognise it as warranting artistic merit. In 2020, during the COVID-19 pandemic, BeReal was released to the public. I, for one, have been intrigued by how quickly it grew: monthly active users rose rapidly in 2022, from 920,000 at the start of the year to 73.5 million in August, and over 10 million people accessed the app daily in February 2023. Thinking back to the basis of the app, it has something that other social media platforms do not: it targets our generation’s obsession with authenticity.
BeReal is an anti-social media app founded by two men based in Paris. It prompts every user through a notification to capture and share a photo during a random 2-minute period every day. You cannot see your friends’ BeReals or the Discovery public page until you post your own, so the only way to get involved on any given day is to post your own BeReal. If you miss the 2-minute timeframe, you can still post, but your friends will see how many hours late your BeReal is. Friends can also see how many retakes you have taken. When posting a BeReal, the app takes a picture using both your front and back camera, so other people can see what you look like and where you are. BeReal is changing the way that people act online and also encouraging people to strive less for the mirage of perfection. BeReal’s success suggests that this could be the beginning of a much larger online movement.
Having surveyed some people around me, I asked ‘Do you enjoy the experience of BeReal and its spontaneity?’, to which 64% (29 people) said yes, 13% (6 people) said no, and 22% (10 people) said they did not mind. I also asked the same audience whether they have, or have had, BeReal: 87% of those with Instagram also had BeReal, 12% did not, and 2% did not know what BeReal was. Over half of the people I surveyed (58%) have secondary, photo-dump or more private accounts, where they feel they can be more themselves because they are in better control of who views their images.
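As a small sanity check of the survey figures above (the counts are the essay’s own: 29 ‘yes’, 6 ‘no’, 10 ‘don’t mind’, 45 respondents in total), the quoted percentages can be recomputed; the helper function name is mine:

```python
def percentages(counts):
    """Return each count as a whole-number percentage of the total."""
    total = sum(counts)
    return [round(100 * c / total) for c in counts]

counts = [29, 6, 10]        # yes / no / don't mind
print(percentages(counts))  # [64, 13, 22]
```

The three rounded figures sum to 99, not 100 – a normal artefact of rounding, and consistent with the numbers quoted above.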
Authenticity shaming is ingrained into the app’s design. When someone misses the two-minute deadline or retakes their image, their friends are shown that they have not ‘been real’. In addition, even if the post is only 2 minutes late, all your friends receive a notification saying how late it was. This has both positive and negative effects on the app’s users. As pointed
out by Sophie Haigney in the New York Times this year, every image or video involves a very present and deliberate choice of what will and won’t be in frame; that framing is always building some kind of story, a narrative you are consciously or subconsciously constructing. Simultaneously, platforms are becoming indistinguishable: Instagram Reels vs TikTok vs YouTube Shorts, Twitter Fleets vs Instagram Stories vs TikTok Stories vs YouTube Stories, Snapchat vs Instagram filters, and on it goes. BeReal is a fresh take, and that is what makes it different from the other social media platforms we use. So the question is: how long will it take for other apps to copy this format? It has already happened – TikTok Now and Instagram Dual both use the idea of a front and back camera, and TikTok Now even goes as far as controlling when people can post their ‘Nows’.
I downloaded BeReal specifically for this research essay on 31 January 2023, and it has been intriguing both to learn how the app works and to see how performative shaming can be demonstrated in the form of an app. Although it took some getting used to, I find it is something I enjoy using daily, both to see a more authentic view of what my friends are doing and as a way to log memories for my future self.
Ultimately, I think social media cannot cease to be a performance. I believe the only authentic way to ‘be real’ is to detach from social media and live fully in the moment. However close we get to genuine authenticity online, people should always keep in the back of their minds that social media is not an accurate representation of real life. One must also question whether the very practice of taking pictures of ourselves and posting them for others, even if just for BeReal, is authentic. I do not think BeReal completely solves the problem of performative authenticity, and I do not believe you can make a completely ‘real’ or ‘authentic’ social media app; every social media app is inherently manufactured in some way. And of course, there is something ironic about seeing people post their BeReals to Instagram. Many people want social media to be fun again, but at some point it became something anxiety-ridden, and now there is no way back. Social media is performative, and it tricks your mind into constant comparison, even when that is unintentional.
I do not have a lot of hope that social media can ever return to being as authentic as it once was, but I do wonder if we will keep making new ways to ‘create’ authenticity or if we as a society will one day accept that being on any platform naturally leads to performative attitudes.
07. The Four Forms of Black Holes

FIVE YEARS AGO, UNDER THE WATCH OF MANY, SCIENTISTS TOOK THE FIRST-EVER PHOTO OF A BLACK HOLE. SINCE THEN, THIS MAJESTIC AND MYSTERIOUS ‘PROJECT’ OF THE UNIVERSE HAS BEEN GRADUALLY REVEALED TO THE PUBLIC. ALTHOUGH, FIVE YEARS AGO, MY UNDERSTANDING OF THE UNIVERSE WAS ONLY AS OLD AS “INTERSTELLAR”, THE EHT PROJECT SEEMED TO ME TO DO THE IMPOSSIBLE, TURNING ABSTRACT CONCEPTS INTO TANGIBLE SIGHTS, AND A YEARNING FOR THE VASTNESS OF THE UNIVERSE WAS PLANTED DEEP DOWN. THIS YEAR, I WAS GIVEN THE OPPORTUNITY TO SHARE WHAT I HAVE LEARNED, SEEN AND FELT. I HAVE ASSEMBLED IN MY ESSAY EVERYTHING FROM THE FORMATION OF BLACK HOLES TO THEIR TYPES, AND ULTIMATELY TO THE POSSIBILITIES THAT HAVE NOT YET ENTERED THE SIGHT OF HUMANITY.
Four years ago, before the first wave of lockdown, I sat in the car on my way to school with a photo of a black hole on my phone. It is the first photo ever taken of a black hole – a majestic piece of art. The photo was taken by the “Event Horizon Telescope”, a project uniting globally synchronised radio observatories into a single instrument with resolution fine enough to record a black hole on the scale of its event horizon. This particular black hole sits 55 million light years from Earth at the centre of the elliptical galaxy M87 and is classed as a supermassive black hole, meaning it has a mass in the range of millions to billions of times that of our Sun. That makes supermassive black holes the heaviest of all forms of black hole by mass, followed by intermediate black holes, then stellar black holes, and finally miniature/primordial black holes, some comparable to the size of a tennis ball. To understand how our universe works, it is essential to understand the structure of black holes – the fundamental basis of all.
Black holes have only three physical quantities: mass, spin (how fast they rotate), and electric charge (measured in coulombs, carried by the matter the black hole has captured). These quantities define how they are categorised. Since a black hole’s mass cannot be zero, the classification rests on the other two: a black hole with zero spin and no charge is a Schwarzschild black hole; one with zero spin but a charge is a Reissner–Nordström black hole; one that spins but is uncharged is a Kerr black hole; and one that both spins and is charged is a Kerr–Newman black hole. To avoid confusion: these are the four types of black holes, while by mass there are four forms of black holes (as far as we are aware).
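The two-way split by spin and charge amounts to a small decision table, which can be sketched in a few lines of code (the function name is my own; the four solution names are standard):

```python
def classify_black_hole(spin: float, charge: float) -> str:
    """Name the black hole solution given its spin and charge.

    Mass is always positive, so only spin and charge distinguish
    the four classic solutions of Einstein's field equations.
    """
    if spin == 0 and charge == 0:
        return "Schwarzschild"        # no spin, no charge
    if spin == 0:
        return "Reissner-Nordström"   # no spin, but charged
    if charge == 0:
        return "Kerr"                 # spinning, uncharged
    return "Kerr-Newman"              # spinning and charged

print(classify_black_hole(0, 0))      # Schwarzschild
print(classify_black_hole(0.9, 0))    # Kerr
```

Each of the four combinations of (spin, charge) maps to exactly one type, which is why physicists say these three quantities fully characterise a black hole.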
In 1971, Stephen Hawking published his paper Gravitationally Collapsed Objects of Very Low Mass, with detailed calculations showing that objects with masses from about 10^-5 g upwards could, in principle, gravitationally collapse into black holes. These are known as primordial black holes. The most compelling explanation for their formation is the Big Bang theory. In the beginning there was nothing but an extremely high temperature and tiny particles mixed with light and energy. As a result of constant quantum fluctuations in the early universe, particles were spread out unevenly: some regions were highly dense, others less so. According to Hawking’s calculations, if the picture of large initial fluctuations is correct, then in those highly dense regions the gravitational energy exceeded the kinetic energy of expansion; these regions would therefore not have expanded along with the rest of the universe but collapsed gravitationally. With Hawking’s further research on primordial black holes, a question follows: where are the primordial black holes now? In Hawking’s model, the initial mass of these black holes ranges from 10^-5 g to thousands of solar masses. However, a primordial black hole would not have survived to the present if its initial mass was lower than about 10^11 kg, because it would have evaporated away. Primordial black holes are non-baryonic, and are therefore likely candidates for dark matter. They also share common features with massive compact halo objects (MACHOs): they have great mass but emit no visible light by which to be seen. They are also plausible candidates for being the seeds of intermediate-mass and supermassive black holes (the latter located at the centres of massive galaxies).
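The 10^11 kg survival threshold can be checked as an order-of-magnitude estimate using the standard Hawking evaporation-time formula, t ≈ 5120·π·G²·M³/(ħc⁴) (a rough sketch; real evaporation rates depend on which particles the hole can emit):

```python
import math

# Physical constants (SI units)
G    = 6.674e-11   # gravitational constant
HBAR = 1.055e-34   # reduced Planck constant
C    = 2.998e8     # speed of light

AGE_OF_UNIVERSE_S = 13.8e9 * 3.156e7   # ~13.8 billion years in seconds

def evaporation_time(mass_kg: float) -> float:
    """Hawking evaporation time (seconds) of a black hole of the given mass."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)

# A primordial black hole of 1e11 kg evaporates in less time than the
# age of the universe, so it would not have survived to the present.
print(evaporation_time(1e11) < AGE_OF_UNIVERSE_S)   # True
```

Because the evaporation time scales with the cube of the mass, a hole only a few times heavier than 10^11 kg would comfortably outlive the universe so far, which is why that figure marks the rough survival boundary.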
Primordial black holes may not be a familiar term, but stellar black holes will sound more familiar. Fundamentally, a stellar black hole is formed through gravitational collapse acting on a star. When a large star reaches the end of its life, its energy sources become exhausted. If the mass of the collapsing star is lower than the Tolman–Oppenheimer–Volkoff (TOV) limit for neutron-degenerate matter, the star will leave behind a compact star: a white dwarf, a neutron star or a quark star.
However, if the mass of the collapsing star exceeds the TOV limit, the collapse continues until a black hole is produced. Before these black holes form, the star fuses all the hydrogen stored in its core and releases enormous energy; the increasingly hot core eventually pushes the star’s outer shell outward in an explosion known as a supernova – the most beauteous of funerals. Unlike primordial black holes, stellar black holes can be and have been observed. Although such a black hole is far too distant and small to be seen with the eye, it is observable through its gravitational effects on the stars and other objects around it. The energy released as matter falls into the black hole is so large that the matter heats to millions of degrees and radiates X-rays; the black hole can therefore be observed with X-ray telescopes both on Earth and in space.
Stellar black holes have masses ranging from five to several tens of solar masses (using the mass of the Sun as a unit). Their gravitational effects may seem formidable, yet supermassive black holes have masses ranging from one hundred thousand to hundreds of millions of solar masses – at least twenty thousand times greater. A supermassive black hole sits at the centre of a galaxy, and the galaxy’s stars orbit around that centre. Our own galaxy, the Milky Way, has a supermassive black hole at its galactic centre, known as Sagittarius A*. The stars nearest the black hole take as little as 16 years to complete an orbit, while Earth, out in the galaxy’s outskirts, needs an estimated 225 million years to travel once around the galaxy; the Sun is currently on roughly its 20th orbit. The formation of supermassive black holes remains an active field of research; astrophysicists broadly agree that these black holes may develop by accretion of matter and by merging with other black holes.
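The ‘20th orbit’ figure follows from one line of arithmetic, assuming the Sun is about 4.6 billion years old (that age is my added assumption; the 225-million-year orbital period is the essay’s):

```python
SUN_AGE_YEARS = 4.6e9    # approximate age of the Sun (assumed)
ORBIT_YEARS   = 2.25e8   # one trip around the galaxy, per the text

orbits_completed = SUN_AGE_YEARS / ORBIT_YEARS
print(round(orbits_completed))   # ~20, consistent with "its 20th orbit"
```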
The masses of stellar black holes normally do not grow beyond 100 solar masses, while supermassive black holes have masses over 100,000 solar masses. Between these barriers there exists another form of black hole, with masses ranging from 1,000 to 10,000 solar masses: the intermediate-mass black holes. Although astrophysicists have sought signs of intermediate-mass black holes, observations have so far failed to confirm them. In 2005, scientists discovered a black hole in the galaxy NGC 4395 with an estimated mass of 36,000 solar masses. In 2019, scientists observed the formation of a black hole of 142 solar masses from the merging of two
black holes. Although both discoveries fall outside the regular mass ranges of stellar and supermassive black holes, neither is considered a typical intermediate-mass black hole, and the field of black holes with masses from 1,000 to 10,000 solar masses remains in the dark. The question astrophysicists face is whether these black holes exist at all. The existence of intermediate-mass black holes depends greatly on the theory of black hole formation. If, hypothetically, supermassive black holes are formed by stellar black holes merging with one another under their mutual gravity, then it follows that some smaller stellar black holes would merge into larger ones, growing through the intermediate-mass range on the way. However, if supermassive black holes were formed during the Big Bang, like primordial black holes, then intermediate-mass black holes may not exist. This may seem an ideal resolution to the mystery, but the complexity behind the case is far beyond imagination; it is a question asking, “Which came first – the chicken or the egg?”
Under one hypothetical scenario, stars collapse gravitationally to form stellar black holes – the corpses of stars. These corpses travel around galaxies devouring and merging until they form a supermassive black hole, and a galaxy may then evolve around it through its gravitational pull; in that case the black hole is the ‘graveyard’ of stars. On the other hand, if supermassive black holes formed first, alongside primordial black holes in the Big Bang, then they may seed stars from the material they capture from nearby stars and gas clouds, shooting it back into space as blazing plasma travelling at close to the speed of light. By producing the stars that surround them and make up their galaxies, they might instead be recognised as the ‘pasture’ of the universe, bearing the lives of all.
I believe that in the near future the answer will be revealed to all, perhaps through the James Webb Space Telescope. When Einstein first published his theory of general relativity, it was dismissed by many as a lunatic’s vision, and a kind of fear of it crept into our species. Only now, a century later, has our society come to recognise the beauty of this majestic piece of work, and to see its true face behind the label of death - perhaps the actual founder of all.

WHEN I HEARD ABOUT THE METAVERSE - A HYPOTHETICAL ITERATION OF THE INTERNET AS A SINGLE, UNIVERSAL, AND IMMERSIVE VIRTUAL WORLD, FACILITATED BY VIRTUAL REALITY AND AUGMENTED REALITY HEADSETS - I WAS DRAWN TO THE IDEA OF LIVING IN AN ONLINE SIMULATION AND CHATTING TO PEOPLE ANONYMOUSLY. I THEN LET MY IMAGINATION RUN AND CAME UP WITH AN INTERESTING IDEA: WHAT IF WE COULD STORE OUR BRAINS IN COMPUTERS? IS THIS ACHIEVABLE? AND IS IT MORALLY ACCEPTABLE?
Mind uploading is a speculative process of whole brain emulation in which a brain scan is used to completely emulate the mental state of the individual in a digital computer. The computer would then run a simulation of the brain’s information processing, such that it would respond in essentially the same way as the original brain and experience having a sentient conscious mind. This means that a person’s personality, memories, and emotions could be completely replicated and potentially remain, immortally, in digital form.
HOW TO ACHIEVE IMMORTALITY
To be able to do this, the first course of action is extracting information from the brain. Much of what is in the brain can be reconstructed from its structure. The connectome is a complete map of the brain’s neural connections, and because the brain is so wildly complex, completing one would be extraordinarily hard. Scientists have so far mapped the complete connectome of only one creature, a nematode, whose brain has about 302 neurons. In comparison, the human brain is made of 86 billion neurons functioning simultaneously in one vast neural network. And the complexity does not stop there: there are more than 125 trillion synapses in the cerebral cortex alone. That is an enormous amount of information to record and store. The idea, then, is that once a brain has been preserved, it can be sliced very thinly and everything inside it observed - in other words, the brain must first be dissected. The technology involved must be capable of scanning and recording massive amounts of data. Though this is extremely hard and would take an unimaginable amount of time and effort, it is not impossible. Some estimate that the machines would need to be capable of scanning human brains at a quantum particle level. However, the existing technologies are not yet sufficient for this, which means that we need to create better-functioning technology.
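To make the scale of the storage problem concrete, here is a rough back-of-envelope calculation using the counts quoted above. The bytes-per-synapse figure is an assumption made up purely for illustration, not an established requirement:

```python
# Back-of-envelope estimate of the raw storage a human connectome
# might need, based on the neuron/synapse counts quoted in the text.
# BYTES_PER_SYNAPSE is an illustrative assumption (e.g. a connection
# target plus a weight/state value), not an established figure.
NEURONS = 86e9           # ~86 billion neurons in the human brain
SYNAPSES = 125e12        # ~125 trillion synapses (cerebral cortex alone)
BYTES_PER_SYNAPSE = 8    # assumed, for illustration only

raw_bytes = SYNAPSES * BYTES_PER_SYNAPSE
petabytes = raw_bytes / 1e15
print(f"~{petabytes:.0f} PB just to record cortical synapses")
print(f"~{SYNAPSES / NEURONS:.0f} synapses per neuron on average")
```

Even under this very modest assumption the raw map runs to about a petabyte, before recording anything about the state or chemistry of each connection.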
The second course of action is translating that information into commands a computer understands. This can be achieved using a brain-computer interface (BCI). Existing BCIs can already translate some types of neuronal information into commands and are capable of controlling external software or hardware, such as a robotic arm. This means that if we develop BCIs further, we could eventually not only store a brain in a computer but also allow that brain to carry out actions of its own will. The human brain functions on a computational basis, which means its processes can be mimicked by observing its neural activity; silicon chips or artificial neural networks can copy how the brain works. It is therefore possible that, in the future, selected information from the brain could be transferred into computers.
In many ways, brains and computers are similar. Both use electrical signals to send messages: the brain combines electrical impulses with chemical signals at the synapses, whilst the computer uses electricity alone. Both also transmit information in something like binary - a neuron is effectively either on or off, depending on whether or not it fires an action potential. Both have a memory that can grow: computer memory grows by adding chips, while memories in the brain grow through stronger synaptic connections. Both can adapt and learn; the development of AI shows that a computer program can develop and learn by itself. In fact, computers can do some things, like multi-tasking, that brains find difficult, and although computers are currently ‘slower learners’ than brains, this may change in the future. All of this shows how computers could potentially imitate the brain, and how the brain might one day be transferred into and stored on a computer.
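The ‘on or off’ behaviour of neurons described above is exactly what the earliest artificial neurons copied. A toy sketch of such an all-or-nothing threshold unit follows; the weights and threshold are arbitrary values chosen for illustration:

```python
def fires(inputs, weights, threshold):
    """A toy 'all-or-nothing' neuron: it either fires (1) or it
    doesn't (0), mirroring how an action potential is binary.
    Inputs are weighted, summed, and compared against a threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Weak stimulation: the summed signal (0.5) stays below threshold
print(fires([1, 0, 1], [0.3, 0.9, 0.2], threshold=1.0))  # 0
# Stronger stimulation: the summed signal (1.4) crosses it
print(fires([1, 1, 1], [0.3, 0.9, 0.2], threshold=1.0))  # 1
```

Real neurons are vastly more complicated than this, of course, but the sketch shows why the comparison between action potentials and binary signalling is more than a loose metaphor.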
BENEFITS OF MIND UPLOADING
Firstly, mind uploading could lead to immortality. Individuals who wish to live forever are already opting to preserve their brains, and sometimes their bodies, through cryopreservation. In theory, in a future where mind uploading is achievable, their consciousness could be retrieved and uploaded. As long as the files and computers are kept in good condition, they could technically live forever - a great temptation for those scared of death or who wish to be immortal.
Secondly, this technology could also store the brains of those who died a sudden death, had high intelligence, or were recently murdered. This could help them regain consciousness, preserve their talent for a long time, or even identify their murderer. People with naturally high-IQ brains could be kept, valued and working even after they have died.
In addition, this idea could also lead to further investigation of the brain. We might be able to discover the brain’s full potential - it is popularly (though scientifically disputed) believed that we only use 10% of our brains. If we put the entire brain into a computer, we might discover more functions and figure out ways to use more of it. Also, as computers and brains each have their own advantages and disadvantages, combining them could provide a faster-functioning machine for the good of human development.
As your consciousness becomes a program, you would be able to change it as you wish, which means you could prevent yourself from feeling depressed, having negative thoughts or suffering any mental illness: you could simply adjust your simulated serotonin levels. You could also edit your memory to forget any trauma.
Furthermore, the idea of mind uploading opens the door to many new possibilities, such as living in a virtual world. You could enter a world beyond our physical dimensions, living as any creature or being you prefer. If your brain were hardware-independent, you could also speed up or slow down your own perception of time. This has enormous potential for development and entertainment for humans.
DRAWBACKS OF MIND UPLOADING
However, there are many disadvantages to mind uploading as well. Firstly, there would be no privacy. Anyone could steal or take any part of your brain - the passwords, the information, the secrets that you do not want to reveal to anyone. Even in today’s world, privacy issues are increasing: a study by the University of Maryland quantified the near-constant rate of hacking attacks on computers with Internet access - on average, one every 39 seconds. It would be even more dangerous in the future, as technology becomes ever more advanced.
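To put the quoted rate in perspective, one attack attempt every 39 seconds can be converted into attempts per day:

```python
# Converting the quoted average rate - one attack attempt every
# 39 seconds - into attempts per day, to make the scale concrete.
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 seconds
attacks_per_day = SECONDS_PER_DAY / 39
print(f"~{attacks_per_day:.0f} attack attempts per day")
```

That works out to over two thousand attempts on a single Internet-connected machine every day.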
Secondly, once your brain has been turned into ‘a program’, it could easily be changed by others. You could potentially become a mindless machine with exceptional performance, more intellectually capable than others, and be used for bad deeds. You could also be emotionally and ‘physically’ tortured by anyone seeking revenge. You lose protection, as you no longer have a physical form. In addition, you would lose track of reality. Once your brain is virtual, you have no way of knowing what is or is not real, and your ability to discern truth becomes seriously impaired. You could wake up in a completely different reality created by someone else and believe you are in the true world, especially as the speed of time can be changed. There is a possibility that everything you consider ‘real’ or ‘true’ is manipulated by someone else. Furthermore, you would lose all control of yourself. As you can easily be switched on or off, others can control whether or not you are ‘alive’. You might even be ‘revived’ by others without your permission - imagine dying a peaceful death and then discovering that you have been turned into a machine by devastated family members. In a nutshell, you lose control over your own existence.
ARE YOU STILL ‘YOU’?
The concept of putting one’s brain into a computer means that the brain must be dead before it is scanned, dissected and transferred. Therefore the physical ‘you’ is dead, and whether the resulting computer program is still ‘you’ must be considered. This depends on how you define ‘you’: is it the consciousness formed by the interactions of the neurons in your brain, or your physical appearance and DNA? Different people will have different interpretations. The concept of mind uploading raises many moral questions, such as: ‘Is it acceptable to store someone’s brain without their consent under emergency circumstances?’ ‘Should this technology be available to everyone?’ ‘Should this technology even exist?’ These are all questions we should consider.
The Yin and Yang of Being

The concepts of ‘good’ and ‘bad’ are thrown around day to day without being regarded as much more than ways to express opinion and personal feeling. Events and happenings, both present and past, are judged this way too: certain things are viewed as ‘virtuous’ or ‘commendable’ whilst others as ‘evil’ or ‘corrupt’. People see the world as better off without the latter and constantly comment on how much ‘better’ life would be without such things. But if such a world existed, would it indeed bring the happiness and satisfaction people envision?
Good and bad. These two juxtaposing words are often used to describe events, experiences, emotions, anything that has the quality of being able to be on opposing ends of a spectrum. The Oxford English Dictionary defines the word ‘good’ as an adjective which describes, ‘possessing desirable or positive qualities; morally excellent or virtuous’ and the word ‘bad’ as an adjective which describes, ‘of poor quality or low standard; inferior or defective.’ It is evident that the two directly contrast one another, their meaning defined very much by the fact that the concepts of ‘good’ and ‘bad’ are fundamentally based on each other. People tend to view the two as independent, separate from one another and as completely individual concepts, however, the reality is that, without the ‘bad’, the ‘good’ would cease to exist.
Much like good and bad, positives and negatives are interconnected, each only able to exist in the presence of the other. In a world without grief, sorrow, and anguish, one would not be able to experience or appreciate happiness, contentment, and pleasure. The absence of negative emotions would render the positive ones meaningless and devoid of value, as it is only the contrast between the two that makes them significant. For instance, throughout the COVID-19 pandemic, the freedom to come and go as one pleases, the state of mind that allows one to see a stranger as something other than the possible carrier of a disease, and the ability to meet family and friends without fearing that the previous meeting was the last were all stripped from people’s lives in the blink of an eye, without any sparing sympathy. Seemingly infinite resources became dangerously scarce as people scrambled to stock up on daily necessities that used to be available just down the street, or even at the touch of a screen, delivered to your doorstep within hours. The loss of things that previously seemed so insignificant instantly magnified their worth a thousandfold, the strangeness of losing something so easily obtained in the past almost impossible to accept. The value of things we once viewed so lightly is only sincerely felt when they become unavailable, and our liberty is overlooked only until it, too, is taken away.
It is easy to take what is handed to us for granted, as the expected way of life. This is because, for those used to it, there is no ‘other option’, no alternative, not even the slightest prospect of anything else; the concept of being without it simply never crosses the mind of those who have not experienced such a lack themselves. Without an opposing possibility, there is no contrast to reveal the worth of what we have, because there is nothing to contrast it against. There is no worse alternative to be glad about not experiencing, as it does not exist. Therefore, the value of that which is always there, whose existence and state are constant without fault, goes unrecognised - it is only when you lose something that you are truly able to recognise its worth.
Death, commonly regarded as the inevitable but painfully tragic and brutal cessation of life is frequently the reason for emotions of dread and sorrow. Many view it in fear, as a destructive force that comes between relations and aspirations. But without the threat of an end, how much value would life truly hold? As previously discussed, the lack of another possibility, in this case – death – would deem the state of living endless, and therefore cause it to lack all meaning. It is the very existence of a time limit, the awareness of our finitude as physical beings which allows us to appreciate things whilst they are still alive and in existence, whilst they are still flourishing and prosperous. The knowledge that death is certain allows us to better acknowledge the present reality and the value that it holds when we recognize that it is part of a finite, temporal process that is unfolding over time. Past events and experiences are valuable precisely because they are no longer part of the present. The appreciation of life is only made possible due to the recognition that it is impermanent and constantly changing. It is death which gives purpose to life. Hence, it is the consciousness of there being an end that gives us the ability to cherish life as it is, have ambitions, goals, to cultivate gratitude and joy for the simple things.
As humans, our awareness that we will not always have the opportunity to experience certain things is what pushes us to go and experience them, what gives us the drive to step out of the confines of our ‘comfort zone’ and embrace things even in the face of adversity or uncertainty. If life were infinite, there would be no sense of urgency to seize opportunities, no longing for the past and no appreciation of those around you, as there will always be another time to do so. What we see now as precious and beautiful would have no meaning in an infinite world, as it is the finite nature of life which motivates us to make the most of life as it is.
The very ‘evils’ of life are what allow us to feel joy and contentment, for in a world without hardships we would have no basis for comparison. Many believe in the concept of a ‘perfect world’ - one without flaws, without conflict, where things are handed to you on a silver platter. They see pain and struggle as things the world is better without, and envision life without the need to endeavour, to strive for betterment. But if such a world truly existed, would it indeed bring the happiness and satisfaction people envision? Would life with nothing left to strive for or achieve truly be fulfilling? While the thought of a world devoid of struggle appears appealing in theory, it is the very act of overcoming obstacles and working to become better that brings a sense of accomplishment and fulfilment, that brings meaning to being alive. With nothing left to work towards, no drive in life, nothing to push you to keep going, to make a change, to become better, life would become stagnant, filled with nothing but lethargy. In a ‘perfect world’, the world would not be viewed as perfect by those who inhabit it, since it is simply their accepted way of life, with nothing to show that it is ‘better’. There would be no knowledge of anything other than life as they know it; our vision of ‘perfect’ would be nothing but ‘the norm’, ordinary to those who live it.
People may argue that a world without violence, grief, and all things labelled negative or harmful would be a good thing, but in such a world the concept of good would not exist, as there would be no bad to help define it. A world without hardships would not be known as ‘a world without hardships’, as the idea of hardship would simply not exist. The idea of a ‘perfect world’ would mean that life, however wonderful we, as people now, might view it, would be in a constant state of ennui for those living in it. In such a world, corruption, violence and hunger would mean nothing to those who have never experienced their effects. It is only when you have seen the consequences of war that you can appreciate peace, and it is only through enduring the depths of despair that you are able to fully appreciate the moments of joy and happiness.
It is the bad which gives meaning to the good; they define each other. In the absence of the bad, we would simply not be able to experience the concept of ‘good’ as we know it. Without negative outcomes, the significance of positive ones would be lost, because there would be no alternative to be grateful for not experiencing. Just as light cannot exist without dark, pleasure cannot exist without the experience of pain, and success cannot exist without the knowledge of failure. Events, experiences, and emotions thought of as unpleasant and bad are precisely what provide the contrast, depth and perspective that allow us to genuinely appreciate those that are good. If everything were good as we know it, we would no longer have the ability to see the good in anything. Without an opposing possibility to demonstrate its value, the world would be trapped in a neutral state of inertia, with nothing to differentiate or distinguish anything from anything else. In this way, good and bad are fundamentally intertwined and mutually dependent, forming an inseparable duality which allows us as humans to understand and appreciate the world around us to the fullest. In the absence of one, we would lack the ability to comprehend the other - much like yin and yang, which, despite being opposing qualities, exist in counterbalance, as negative and positive complement each other. To sincerely appreciate the good things, we have to fully embrace the bad as well.
Do international military interventions work?
I WAS WONDERING WHY, IF HUMANS HAVE ADVANCED SO MUCH OVER THE YEARS, THERE ARE STILL WARS IN THE WORLD, AND WHY THE REST OF THE WORLD OFTEN DOES NOTHING ABOUT THEM. THIS LED ME TO THE TOPIC OF MILITARY INTERVENTIONS. AT THAT POINT, I WASN’T SURE WHETHER TO WRITE ABOUT THEM ETHICALLY - WHETHER THEY SHOULD BE ALLOWED - OR MORE IN THE FASHION OF WHETHER THEY WORK AND WHAT THEIR FLAWS ARE. I DECIDED ON THE LATTER, AS I FOUND IT MORE INTERESTING.

A military intervention is the deployment of force by external powers, with vested interests in the outcome, into an internal conflict in a given state. I will evaluate success against three criteria - improvement in social outcomes, political stability, and economic stability - using the case studies of Kosovo, Sierra Leone, Afghanistan, and Iraq.
SOCIAL OUTCOMES
The Kosovo War lasted from 28 February 1998 until 11 June 1999. Before the war there had been violence between the Serbs and the Kosovar Albanians, and tensions between the two remain high, with Serbia refusing to recognise Kosovo as independent. During the war there was a campaign of terror, including murder, rape, arson, and severe maltreatment, and the Yugoslav and Serb forces caused the displacement of around 1.45 million Kosovo Albanians. NATO intervened with air strikes from March 1999 in order to end the military action and the violent and repressive activities of the Milosevic government. The intervention resulted in the Yugoslav forces withdrawing, ending the war, which could be considered a success, as it achieved its aims. However, it did not achieve the aim of preventing ethnic cleansing and violence, and the intervention’s bombing campaign itself caused at least 488 Yugoslav civilian deaths and left many Kosovars as refugees. There are still revenge killings in post-war Kosovo, and border clashes between the Serbian and Kosovar police. After the war, it was documented that over 13,500 people had been killed or gone missing, around 200,000 Serbs, Romani, and other non-Albanians fled Kosovo, and many of the remaining civilians were victims of abuse.
A further example is that of Sierra Leone, which descended into civil war in 1991; the 11-year war ended in January 2002. There were three interventions in total. The first, by ECOMOG (the Economic Community of West African States Monitoring Group) in 1997-1999, aimed to bring an end to the civil war; while there was some disarmament, the war did not end, and there were allegations of human rights abuses by some of the troops involved. After this came UNAMSIL (a UN peacekeeping operation, 1999-2005), whose aim was to restore peace after a decade of civil war - which it also failed to do without the overlapping third intervention, by Britain. The British aim (1999-2005) was to establish an environment for peace and stability in the country, and together with UNAMSIL they were able to end the civil war. It could therefore be said that the interventions were a success, as the aim of ending the civil war was achieved. However, this took multiple interventions over a long period, during which 50,000 people died and more became victims of atrocities committed during the war. After the war there was a gradual withdrawal of troops by UNAMSIL and the British, and all had left by December 2005, after which the mission became humanitarian. A special court was set up to try those who had committed the most grievous offences against human rights, and it began operating in the summer of 2002. Rehabilitation was also necessary for many: large numbers of children had been abducted during the war, and many of them suffered from drug withdrawal symptoms, brainwashing, physical and mental trauma, and a lack of memory of their lives before the conflict. About 2 million people had been displaced and wanted or needed to return home. Thousands of small villages had been heavily damaged in the looting and raids, and there were concerns about infrastructural instability, as many clinics and hospitals had been destroyed. Despite the end of the civil war, Sierra Leone was left ravaged and remains poor, with a high rate of unemployment, though it has enjoyed political stability since.
POLITICAL STABILITY
In autumn 2001, the US and several allies invaded Afghanistan with the aim of dismantling al-Qaeda, which had been behind the 9/11 attacks, and of removing the terrorist organisation’s base of operations in Afghanistan by removing the Taliban government. (In 2001, the Taliban controlled about 80% of the country, as a result of a previous war.) The Taliban regime was toppled and an internationally recognised Islamic Republic was established three years later. This could be seen as a success, at least in the short term; however, military forces had to remain in the country to support the Republic and to continue the war against the Taliban, who had not been eliminated. Likewise, al-Qaeda, while nowhere near as strong as when it carried out the 9/11 attacks, was not fully dismantled, and around five hundred members continued to fight alongside the Taliban. As the US and its allies were unable to eliminate the Taliban by military force, a diplomatic deal was brokered which led to the withdrawal of all US troops in 2021. The Taliban launched an offensive at the same time, overthrowing the Islamic Republic government and re-establishing Taliban rule across much of Afghanistan. After twenty years of war, the country ended up back under Taliban rule; therefore, in the long term, I believe the intervention failed.
Another example of political instability following a military intervention is the 2003 invasion of Iraq. This was a US-led intervention with a combined force of troops from the US, the UK, Australia, and Poland. According to US President George Bush and UK Prime Minister Tony Blair, the coalition aimed to disarm Iraq of weapons of mass destruction (WMD), to end Saddam Hussein’s support of terrorism, and to free the Iraqi people. Most of the Iraqi military was quickly defeated, and the coalition occupied Baghdad on 9 April 2003. Iraqi President Saddam Hussein and the central leadership went into hiding after the country was occupied. The war ended in May; however, US military forces formally occupied Iraq until 2011. The US had an interest in preventing a resurgence of the Islamic State in Iraq, for which it needed to support a stable government, yet it failed to do so. Because Saddam Hussein and the central leadership had gone into hiding, they were never conclusively defeated, which made the situation chaotic; Hussein was eventually captured in 2003 and executed in 2006. There followed a devastating insurgency, first by Saddam Hussein loyalists and al-Qaeda, then a sectarian civil war and the rise of the Islamic State (ISIS), which came to occupy a third of Iraq. Rather than being liberated and becoming a democracy, Iraq ‘returned to the dark ages’, devolving into a dangerous and corrupt country.
ECONOMIC STABILITY
In terms of the economic effects of military interventions, one example is the long-term fortunes of Sierra Leone. Despite the interventions, the decade-long civil war resulted in a severe economic decline, leaving around 75% of the population living in poverty. Sierra Leone’s main export is diamonds, and while the profits of diamond mining have increased more than tenfold, over 50% of the mining remains unlicensed, and the reliance on a single mineral resource hampers the progress of the economy. Over half the country worked in subsistence agriculture after the war, which did not help economic growth, and although GDP has been growing at between 3.5% and 7% since the war ended, the majority of the country remains in poverty, and the international community believes that Sierra Leone needs international aid to support the economy and prevent further inequality. Arguably, therefore, military interventions should be backed up by humanitarian and foreign aid in order to deliver greater economic benefits for the whole population.
Another example of a lack of economic stability following a military intervention is Afghanistan. Before the intervention, Afghanistan’s economy had already suffered from decades of war. While there was some economic growth during the war between 2002 and 2020, this reflected aid money flowing into the country rather than true economic growth. Much of it went to warlords instead of reconstruction, and there is an argument that it was spent more on military bases than on building the nation. The economy was already declining before the Taliban took over again, due to severe drought, COVID-19, little confidence in the previous government, falling international military spending as foreign troops left, human and capital flight, and Taliban advances on the battlefield. Then, after the Taliban takeover, civilian and security aid was cut off (over $8 billion per year, equal to 40% of Afghanistan’s GDP), and this was further exacerbated by sanctions, the freezing of foreign exchange reserves and the reluctance of foreign banks to do business with Afghanistan. The economy stabilised after a free-fall lasting a few months, but at a much lower level, with no prospect of resuming higher growth. Up to 70% of the population remain unable to afford food and other necessities.
In conclusion, I do not think that military interventions work. Even in Sierra Leone, generally considered one of the few successful interventions, I think the success was limited: while the intervention achieved the goal of brokering peace, no country had a plan for how to help Sierra Leone improve afterwards, and it remains one of the world’s poorest nations. In Iraq, many citizens look back on the Hussein regime with nostalgia; they feel that, terrible as it was, it was still better than what came after, as no democracy was established following the intervention. In Afghanistan, one of the main aims was to overthrow the Taliban government, which succeeded in the short term but failed when the Taliban returned in 2021 and standards of living plummeted. In Kosovo, the intervention stopped the war and the Yugoslav forces pulled out, but hostility between the Kosovars and Serbians remains high, and there is still violence, just as there was before the war. I think the fundamental issue with military interventions is that no country, before sending troops, formulates a comprehensive plan: how to withdraw its forces whether it succeeds or fails, and, if it succeeds, how to help the country stand on its own and continue developing, with stable political and economic climates leading to better standards of living and therefore better social outcomes. Looking at the criteria of long-term improvement in social outcomes and political and economic stability, I feel that military interventions do not work.