Inspire - Michaelmas 2025


Editorial

This issue of Inspire, consisting of pupil-written articles, displays the depth and range of interests of Marlborough's academic scholars. Each pupil chose their own topic to write about, allowing them to demonstrate their knowledge of and passion for areas beyond the school curriculum. We hope that you are inspired by this issue and enjoy it just as much as we do.

Thank you

We would like to thank everyone who has helped to make this publication. Firstly, we would like to thank Mrs Jordan, who has helped us format and organise this edition, ensuring that it could be accessed by all. We would also like to thank the Head of Academic Scholars, Mr Moule, for providing constant support and trust throughout the process. Finally, we thank all the contributors for giving their time to share their knowledge.

Did Marie Antoinette actually make the remark, ‘Let them eat cake!’?

Marie Antoinette is remembered as one of the most iconic women in history, known for her brilliant Rococo fashion sense and intense sweet tooth, but more importantly as the final Queen of France, a symbol of excess and a figure who contributed significantly to the French Revolution. In recent years, Marie's legacy has appeared increasingly in culture and the media, from period films such as Marie Antoinette (2006), directed by Sofia Coppola, to the latest Manolo Blahnik collection, which takes heavy inspiration from Marie's surviving shoes, now more than two centuries old, and places them on the feet of the fashion and social elite. This aesthetically appealing media coverage has all but characterised Antoinette as a coquette icon among today's youth, and at the forefront of this conception is her infamous phrase, 'Let them eat cake!'. However, historians have concluded, through in-depth research, that it is very unlikely that this line was ever actually delivered. In this article, I will explore how this myth arose and how it was turned into extremely successful propaganda by the leading revolutionaries.

Maria Antonia was born in Vienna in 1755, the fifteenth child of Empress Maria Theresa and Emperor Francis I of the Holy Roman Empire, making her an Archduchess of Austria before she had reached her first birthday. Throughout her education Maria was regarded as below par academically, despite the effort and resources that went into her schooling. At the young age of 14 she became betrothed to Louis Auguste, the next in line to the French throne, in order to strengthen the frail relations between their two homelands. In 1770 the couple were married, amid much public speculation, and Maria became Marie Antoinette and was sent to live at Versailles with her husband. In the years that closely followed, an adolescent Marie struggled to conform to the severity of the French court, but on the 10th of May 1774 she was forced to become a respectable member of the elite, upon the death of Louis XV and her husband Louis XVI's inheritance of the throne. The new Queen Consort found herself spending lavishly almost immediately. Only two weeks after Louis XV's death, Marie's husband gifted her the Petit Trianon, a small property within the grounds of Versailles, and allowed her to decorate it to her own tastes; this resulted in walls adorned with gold and diamonds. This immense outlay of money continued throughout Marie's early years as Queen and she developed a reputation for her elaborate fashion and frivolity, earning her the nickname 'Madame Deficit' among her critics. Although she frequently contributed to charities, her perceived extravagance and political ignorance marked her as an easy target for public resentment. In the years leading up to the French Revolution, bad harvests and constant inflation had driven the price of bread up to as much as 88% of a worker's wages. Meanwhile, Antoinette's family continued to hold regular banquets, at which the table would be decorated with the finest French patisserie.

It was against this political backdrop of widespread public hatred that Marie Antoinette was alleged to have said, 'Let them eat cake!' upon hearing about the starving people of her country. The statement implied a shocking disconnect from the realities of the common people's suffering, which heavily reinforced the image of a careless monarch. The phrase circulated all over France, sparking intense outrage, and it is even considered one of the tipping points that set off the revolution itself.

However, there is no credible evidence that this phrase was ever actually spoken by Marie. The first recorded appearance of these words has been traced by historians to Jean-Jacques Rousseau's Confessions, written in the mid-1760s. In this autobiographical work Rousseau placed the powerful line, 'Qu'ils mangent de la brioche!', in the mouth of 'a great princess'. At that point Antoinette would have been only around ten years old and still living at home in Austria with her family, and therefore not yet the so-called 'princess' of France. This makes it chronologically impossible for the phrase to have originated with Marie Antoinette; the confusion arose because Rousseau's book was not published until the early 1780s, by which point the line would have been an entirely believable, yet appalling, statement for Antoinette to make. Although this shows that Antoinette did not say these exact words, it is important to consider why the attribution was so believable to the public: its grotesque, oblivious tone reflected how Marie was perceived to behave.

Leading revolutionaries were therefore able to seize on this phrase and use it as extremely powerful propaganda against the French royal family and aristocracy, deploying it to underscore both the moral and the social disconnect between the two classes. During the late 18th century cake was a luxury, owing to its sugary ingredients and the time and artisan craft that went into baking one, and so it symbolised the unnecessary extravagance of the rich. Antoinette's supposed expectation that the poor could simply access such riches, offered as a flippant solution to their hunger, showed the revolutionaries how far removed the aristocracy were from the gravity of the situation, both urging them to take action and deepening their hatred for the upper class. In this way the "quote" 'Let them eat cake' functioned as a rhetorical weapon, one that ultimately contributed to the storming of the Bastille in 1789 and the eventual execution of Marie Antoinette, her family and their court.

It is clear that Marie did not actually deliver the famous line, 'Let them eat cake!', that has come to define her place in royal history as the last Queen of France. The phrase nevertheless holds real significance, as it encapsulates the vast class gap present in France at the time and the lavish arrogance of the aristocracy. It greatly contributed to the downfall of the royal family and to the rise of the republic that still lies at the centre of French politics, culture and life; yet it could be argued that Antoinette herself, rather than her false statement, holds this potent role. Her reluctance to empathise with and act for the poor was probably a result of her youth and her lack of exposure to any other way of life, but her wild spending habits, large banquets and unbothered attitude during her time at Versailles characterise her as a spoilt princess – something that inspired as much rage in the public as her supposed catchphrase.

The Gendered Medical Divide: When 50% Equals Less Than Half

'Women are dying, and the medical world is complicit. It needs to wake up.' This is a bold statement made by Caroline Criado Perez in her book Invisible Women. The gendered data gap in medical research is a significant issue, yet one which is often overlooked. The gendered rift in medical research is not merely the product of neglect; it arises from the commonly held view that man is the default and woman the afterthought – a dichotomy embedded in the human psyche, even in the modern day.

Male as Default

One reason for female underrepresentation in medical trials is our perception of gender in the world. Simone de Beauvoir made this observation most famously when, in 1949, she wrote, 'humanity is male and man defines woman not in herself, but as relative to him; she is not regarded as an autonomous being […] He is the subject, he is the Absolute – she is the Other.' Society often views male as the default. Including women in the equation can feel contrived; indeed, women often have to fight to be seen. When asked to picture a 'person', 80% of men defined this as male. Indeed, a recent paper celebrated as 'great progress' the fact that 28% of children – a figure not even close to 50% – now draw a woman when asked to depict a 'scientist'. This bias is seen even in pre-clinical trials. Studies have shown that sex differences appear even at a cellular level; yet it is often only male cells and male rodents that are tested on. Frequently, we don't question these gender inequalities.

A 2001 US General Accounting Office (GAO) review of FDA (Food and Drug Administration) records found that about a third of documents didn't sex-disaggregate their outcomes and 40% didn't even specify the sex of the participants. Medical research often fails to consider how women may react differently to drugs than men. Indeed, when tests are carried out on participants other than young adult males, the findings are often not separated by sex – something that makes it impossible to develop more specific and effective drugs for women and negates the impact of having a varied subject group altogether.

Institutional Barriers

In the late 1950s, doctors began prescribing thalidomide to pregnant women to combat morning sickness. It was considered safe as the developers 'could not find a dose high enough to kill a rat'. However, it did affect foetal development (something the developers had known about before it was given to the public). This resulted in over 10,000 children being born with thalidomide-related disabilities, and the drug was taken off the market in 1962. In the wake of the scandal, guidelines were issued by the FDA in 1977 excluding women of childbearing potential from drug trials. This exclusion went unquestioned. Inevitably, in the decades that followed many women were prevented from being involved in medical testing – something that caused a gender disparity that persists today.

A newborn human child and rabbit with thalidomide disabilities

The way in which medical professionals are taught also plays a major role in why women are often overlooked in medical research. The male-if-not-otherwise-stated perspective has been around at least since the ancient Greeks, with Aristotle seeing the female body as a 'mutilated male body'. Whilst this unpleasant image no longer endures, the male body as the human body is often what is taught to modern doctors. A 2008 analysis of a range of textbooks showed that male bodies were used three times as often as female bodies to portray 'neutral' body parts.

Similarly, funding for research into female-specific health issues, such as endometriosis and menopausal symptoms, is significantly less than for issues affecting men. Men occupy a majority of leadership roles in government and funding bodies, and it is far easier to get behind a movement that feels closer to home. One example of this major underfunding is endometriosis, which affects approximately 190 million women globally. Despite its high prevalence, diagnosis is delayed by 7–9 years on average. In 2022 the total US funding for endometriosis research was $16 million, which equates to about $2.00 per patient per year. In comparison, funding for diabetes (which 12% of US women are expected to suffer from), assuming half of the diabetes research budget goes to female patients, works out at $31.30 per woman per year.
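As a rough check on the scale of this gap (a back-of-envelope sketch: the implied US patient count below is inferred from the article's own figures rather than stated in it), the per-patient numbers can be reproduced in a few lines:

```python
# Back-of-envelope check of the funding figures quoted above.
endo_funding_usd = 16_000_000   # 2022 US endometriosis research funding (from the text)
endo_per_patient = 2.00         # dollars per patient per year (from the text)

# The two figures together imply roughly 8 million US patients.
implied_us_patients = endo_funding_usd / endo_per_patient
print(f"Implied US endometriosis patients: {implied_us_patients:,.0f}")

diabetes_per_woman = 31.30      # dollars per woman per year (from the text)
print(f"Per-patient funding ratio (diabetes : endometriosis) = "
      f"{diabetes_per_woman / endo_per_patient:.1f} to 1")
```

On the article's own numbers, a woman with diabetes attracts more than fifteen times as much research funding per year as a woman with endometriosis.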

Societal Views

There is a stigma around women’s health which leads many to disregard it and label issues as unimportant. Keen to avoid being told that they are just being ‘too emotional’, ‘pathetic’ or that they should ‘just deal with it’, women are often apprehensive about voicing their concerns. As a result, many women are diagnosed late or misdiagnosed altogether, leading them to suffer in silence.

For many, it is often easier to turn a blind eye to the issue and believe that women are sufficiently represented in medical research. In February 2018 a paper was published in the British Journal of Pharmacology entitled Gender differences in clinical registration trials: is there a real problem?, in which the all-male author team concluded that the problem was in fact not 'real'. Some researchers also argue that, given the historical data gap, it would be inadvisable to include women in medical research as there is a lack of comparable data. Others suggest that the variability of female hormones overcomplicates the research, so only men should be used in medical testing in the name of simplicity.

The Impact on Women

So, what does this all really mean? In essence, women are at risk of misdiagnosis and a lack of effective treatment, all because the people in power (often men) refuse to see and tackle this data bias in the world of medical research. A direct outcome is that the vast majority of drugs, including anaesthetics and chemotherapeutics, continue to be given in gender-neutral dosages that put women at a higher risk of overdose. Worryingly, according to the FDA the second most common adverse drug reaction in women (behind nausea) is that the drug simply doesn't work at all. In summary, it is evident that there is a gendered disparity in medical research that needs to be addressed. The stigma that is deeply entrenched in society must be challenged and the medical needs of women – who form half the population – recognised.

Biomimicry

Biomimicry literally means 'imitating life', from the ancient Greek 'βίος' meaning life and 'μίμησις' meaning imitation. It is the design of technology, materials or solutions using nature as a model. Scientists, engineers, architects and practitioners in countless other fields study how organisms have evolved over billions of years to become so harmoniously adapted to their environments. This helps them to design efficiently and effectively.

An example of biomimicry is the use of sharkskin-inspired materials. Sharks are covered in small bony denticles, which are tooth-like and act as scales across their body. The sharks themselves use these to prevent algae, barnacles and other marine organisms from attaching to them. As a result, scientists working to prevent the spread of bacterial infections through surfaces in hospitals (which cause more than 2 million infections every year in the US, making this a severe issue) were inspired to develop Sharklet, a material that mimics these microscopic bumps (denticles). It works because its shape inhibits bacteria, viruses and other microbes from attaching and creating a biofilm. Unlike other popular materials such as copper alloys, Sharklet is not toxic to the bacteria but simply prevents them from attaching. This is favourable because it means that the microorganisms cannot develop a resistant strain against it. Sharklet can be made using any material, which means that it does not need coatings.

A study by researchers from Sharklet Technologies used two types of infection-causing bacteria, MRSA and MSSA, and compared how they contaminated different kinds of surfaces. The three surfaces used were the Sharklet pattern, a copper alloy and a control surface. The researchers attempted to mimic the ways bacteria are often spread, especially in hospitals – chiefly sneezing, spills and the touching of surfaces. Patients with already weakened immune systems are so susceptible to disease that just by touching a contaminated doorknob they might develop serious infections. Overall, the Sharklet pattern was the least contaminated. It reduced the transmission of MSSA by 97% compared to the control surface, whilst the copper hardly made a difference. Additionally, the pattern carried 94% less MRSA bacteria than the control, compared to the copper's 80%.

Chloe A (L6)

A microscopic image of sharkskin

Biomimicry is not only used by scientists for the development of technology and materials but also by architects when designing buildings. An example of this is the emulation of termite mounds. The Eastgate Centre in Harare, Zimbabwe, for example, has a cooling system designed to imitate termite mounds, which are built from interweaving tunnel systems and chambers that circulate air efficiently. The buildings mimic this with low-level openings that allow cooler air to enter and ducts leading to chimneys at the top where the warm air can escape. The Startup Lions Campus in Kenya makes very visible use of this technique as well, with large gaps below the main body of the building and prominent columns leading upwards through which the warm air can escape. This method of cooling is called the stack effect. It encourages passive cooling, which uses much less energy than a conventional cooling system would. The only technology used is a set of fans that run on a cycle, allowing heat to be stored during the day and released at night. The Eastgate Centre uses about 90% less energy on ventilation than the average building of its size. It also has vegetation covering the outside, which reduces solar heating because much of the sunlight's energy is taken up by the plants (for example in photosynthesis) rather than heating the building.
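The stack effect mentioned above can be described with a standard buoyancy relation (a textbook formula, not one given in the article): the pressure difference that drives the airflow grows with the height of the chimney and with the indoor–outdoor temperature difference.

```latex
% Approximate stack-effect draught (textbook relation, stated here as an illustration):
%   \Delta P : pressure difference driving the airflow (Pa)
%   \rho_o   : outdoor air density (about 1.2 kg/m^3)
%   g        : gravitational acceleration (9.81 m/s^2)
%   h        : height between the air inlet and the chimney outlet (m)
%   T_i, T_o : indoor and outdoor absolute temperatures (K)
\Delta P \approx \rho_o \, g \, h \, \frac{T_i - T_o}{T_i}
```

Taller shafts and bigger temperature differences therefore pull air through the structure faster, which is why both the termite mounds and the buildings that copy them rely on tall vertical chimneys.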

Biomimicry is so important in design because it encourages sustainability. Nature's designs have been refined over billions of years of evolution to be long-lasting and efficient. Taking ideas from nature therefore reduces waste whilst creating systems that are adaptable (such as the Sharklet material), more energy-efficient (such as the termite mound-inspired buildings) and more environmentally friendly.

Biomimicry also helps to optimise resource use, as it points us towards resources in the natural environment, such as bamboo, that can be used to encourage green infrastructure. Solutions that emulate nature are more resilient to change, making them more durable over time and therefore more cost-efficient. As well as this, nature allows us to create solutions faster, by showing us how similar problems have already been solved.

Biomimicry teaches us that when looking for solutions in design we need look no further than the nature around us. By studying how natural systems work together so harmoniously, scientists can create technology that is sustainable and efficient. So far, solutions involving biomimicry have revolutionised how things work, from Sharklet, which is increasingly spreading across hospitals and public surfaces and helping in the battle against bacterial infections, to the termite mound-inspired buildings that are encouraging sustainable development. The biomimetic approach to design will continue to evolve as we study nature, allowing us to create a greener world.

The Eastgate Centre, Harare Zimbabwe

CRISPR – The power to edit life.

Gene editing is the altering of genetic material through the insertion, deletion or replacement of DNA sequences. It has been evolving in leaps and bounds over the last half century, starting with the discoveries of the DNA double helix in the 1950s and of messenger RNA in the 1960s, through the gene editing premise of 'cutting and pasting' DNA in the 1970s, to the revolution of CRISPR technology in the 2010s. Gene editing is a very powerful tool, with the potential to cure inherited genetic disorders, improve crops and advance medicine enormously. However, while there are huge benefits to (CRISPR) gene editing there are also ethical implications which affect how and why it is used.

CRISPR, the abbreviation for Clustered Regularly Interspaced Short Palindromic Repeats, is a component of bacterial immune systems that defends against viral infections. It can cut sections of viral DNA and store them between palindromic repeats of the bacterium's own DNA, so that the genome can be recognised if the bacterium is attacked again (immune memory). It has now been repurposed as a gene editing tool.

Generally, the process can be divided into three stages: recognition, cleavage and repair. CRISPR cuts DNA at a specific location on the genome using the 'cutting enzyme' Cas9, making it extremely accurate – like a pair of molecular scissors. First, scientists decide which gene they would like to edit, for example the faulty HBB gene which causes sickle cell disease, and locate the exact DNA sequence of the gene. They then choose a 20-nucleotide sequence (a short stretch of 20 DNA 'letters' able to identify a specific place in a gene) that matches the mutated DNA. Next, they produce the Cas9 enzyme and guide RNA (gRNA), combine them into an RNP complex in the lab and deliver them into cells. This can happen in vivo, inside the body, or ex vivo, outside the body in a lab (the approach used in sickle cell therapy). The gRNA guides the Cas9 to the target sequence, which it recognises using the nucleotide sequence and binds to – this sequence must sit next to a PAM (protospacer adjacent motif), which acts as the initial recognition site and binds the Cas9. Once bound, the Cas9 changes shape and activates its two nuclease domains: the HNH domain cuts the DNA strand that matches the gRNA, and the RuvC domain cuts the opposite strand of the double-stranded DNA. This creates a double-stranded break and triggers the cell's natural repair mechanisms, opening the window for scientists to direct the process and edit the gene.
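As a toy illustration of the recognition step just described, the sketch below scans a DNA string for 'NGG' PAM sites and pulls out the 20-nucleotide stretch immediately upstream of each one – the candidate guide sequences. The sequence is invented for the example; real guide design also checks the reverse strand and scores off-target matches, which this sketch ignores.

```python
import re

def find_guide_candidates(dna: str, guide_len: int = 20):
    """Return (guide, pam, position) for every 'NGG' PAM site that has a
    full 20-nucleotide protospacer immediately upstream (forward strand only)."""
    dna = dna.upper()
    candidates = []
    # Cas9's canonical PAM is 'NGG': any base followed by two guanines.
    for match in re.finditer("(?=[ACGT]GG)", dna):
        pam_start = match.start()
        if pam_start >= guide_len:
            guide = dna[pam_start - guide_len:pam_start]
            candidates.append((guide, dna[pam_start:pam_start + 3], pam_start))
    return candidates

# Made-up stretch of sequence, purely for illustration (not the real HBB gene).
example_dna = "ATGCGTACCGTTAGCCTAGGCTTACGGATCCGTAGGCTAAGGTTCCAGGA"
for guide, pam, pos in find_guide_candidates(example_dna):
    print(f"guide {guide}  PAM {pam}  position {pos}")
```

Each printed guide is a sequence a gRNA could be built against; choosing between them is where the real design work (specificity, position within the gene) comes in.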

There are various methods of gene editing that CRISPR offers. Firstly, it can be directed to disable a gene and stop it from working by 'knocking out' the gene: if the Cas9 creates a double-stranded break (DSB), the cell's most likely response is to repair it by NHEJ (Non-Homologous End Joining, a quick process that joins broken DNA ends). However, because NHEJ is such a quick process it joins the DNA ends without a template and causes indels (small insertions or deletions of DNA bases), meaning it is inaccurate and creates mutations that make the gene non-functional. Scientists use this process to deliberately disable genes.

Although cells often repair themselves using NHEJ, they can also use HDR (Homology-Directed Repair), another repair process which fixes DSBs. Unlike NHEJ, it uses a template to guide the repair and is very accurate. This method allows scientists to insert a new gene or a piece of DNA precisely – known as gene 'knock-in'. To do this the scientists introduce a DNA donor template into the cells (generally alongside the Cas9 and gRNA). This includes the new gene or sequence and homology arms (short stretches of DNA on either side of the donor template that are identical to the DNA around the cut site), allowing the HDR system to recognise the area it needs to insert DNA into. This is slightly harder to achieve than 'knock-outs', as HDR only happens when DNA is being copied, which is much less often. Scientists are trying to make it more common by timing the CRISPR delivery to coincide with the right stage of the cell cycle, boosting the cell's HDR machinery, or turning off the NHEJ mechanism so that HDR is the cell's only option.

While 'knock-out' and 'knock-in' are the most common methods of gene editing, the field has evolved further and there are now major advancements in the technology. These are called base editing and prime editing, which work on the same principle as CRISPR-Cas9 but are much more precise. Unlike the standard methods they do not need a DSB to work and cut at most one strand of the DNA. Base editing's goal is to change a single DNA base (there are four bases, represented by letters, that make up the genetic code; changing one letter can modify a gene) without a double-stranded break. It uses a modified Cas9 enzyme called a nickase (which makes only a small cut, or none at all) fused to a deaminase enzyme, which can change one base letter into another. Guided by the gRNA, the deaminase removes a chemical group from the target DNA base and the cell's natural repair system then replaces it with a different base. Base editing can correct SNVs (single-nucleotide variants, the most common cause of genetic diseases) while avoiding DSBs, which are dangerous and error-prone.

Prime editing's goal is to make any small DNA change (insertions, deletions or base changes) without a DSB or a donor template. It uses the Cas9 nickase and a reverse transcriptase enzyme (which can build DNA from RNA instructions) and is guided by a prime editing guide RNA, called a pegRNA, telling it where to go and what edit to make. The reverse transcriptase then writes the new DNA sequence into the genome. With this method there is no need for DSBs or donor templates, and it can make a wider range of edits. These methods have advanced what CRISPR can do dramatically.

Other CRISPR methods exist with different variations of the Cas enzyme: Cas12a, which can reach sites in genes that Cas9 can't; Cas13, which targets RNA rather than DNA; and CRISPRi and CRISPRa, two further methods that regulate genes without cutting them at all.

There have been a number of examples of CRISPR gene editing in use since it was invented. CRISPR therapy has been used to treat sickle-cell disease (a life-threatening blood disorder), mentioned earlier; both 'knock-in' and 'knock-out' methods have been used to correct a faulty gene in bone marrow cells. In China, for instance, He Jiankui used CRISPR in 2018 to 'knock out' the CCR5 gene in embryos to make them resistant to HIV. The twin girls he experimented on were born healthy, but this was the first example of human germline editing (gene edits that could pass to future generations) and the public was horrified, claiming that these 'designer babies' were ethically wrong and raising a number of concerns about the morals of CRISPR and gene editing as a whole.

Another major target for CRISPR has been Huntington's disease, which is caused by a faulty HTT gene and leads to severe brain damage – there is no cure, and it is heritable. Scientists have wanted to 'knock out' or 'silence' the gene in the embryo to prevent this happening. However, this brought up further ethical concerns about humans 'playing God'. Consent issues (with embryos being edited rather than consenting adults) and questions of equal access to such life-altering therapy became controversial topics of discussion.

In conclusion, CRISPR gene editing is an extremely powerful tool which could change the way we view medicine and genetic disorders in the future. With tools such as base and prime editing there is less chance of mistakes, and more areas in which this technology could be used. However, certain ethical concerns are raised and we are left with questions to be answered. Do the moral concerns outweigh the benefit of curing the most devastating diseases? Should this revolutionary scientific progress be stopped, or should we continue despite the ethical problems?

‘Should art be separated from its creator?’

Cancel culture permeates the modern world; many despise it while others promote it. Although it is impossible to provide an objective answer to the question ‘Should art be separated from its creator?’, I will begin to address it today.

There are seven major forms of art: painting, sculpture, architecture, literature, cinema, theatre, and music. They all have one thing in common: the creator. It goes undisputed that the artist plays a vital role in art, most obviously by having created it. The question 'Should art be separated from its creator?' asks whether we should interpret art independently of the artist's beliefs or actions.

One of the main ways in which the creator is significant when appreciating a work of art is their reason for making it. This might be self-expression, communication, obligation or, most controversially, politics. In almost all pieces of art, one can see a motivation shine through, whether subtly or as the focal point of the creation. However, numerous artists create work irrespective of any controversial opinions they may have.

Many consider art to be something solely personal, a means of expressing emotion. My viewpoint is that whilst art is something deeply individual, it simultaneously carries huge social and cultural significance, reflecting the period in which it was made. This relates to how politics plays a vital role in modern society. Nowadays, people often turn against someone upon discovering that they hold contrasting views. This has led to cancel culture, which some believe is a form of censorship. There are countless examples of this occurring in pop culture.

For example, Kanye West displayed his antisemitic views when openly supporting Hitler's Nazi regime on social media. He faced extreme consequences, with an estimated $2 billion loss of net worth, mainly due to the loss of brand deals. Despite this, his music remains extremely popular and continues to rank highly, highlighting how the majority of his fans do not wish to sacrifice their enjoyment of his music over disagreements with his problematic political opinions. Another modern-day example is how J.K. Rowling faced criticism for her transphobic tweets and was accused of being a 'TERF' (a person whose views on gender are considered hostile to transgender people). In keeping with the rise of cancel culture, former fans of her novels decided to boycott her work as a result of her political stance. However, the Harry Potter legacy lives on despite the backlash Rowling faced, due to its huge cultural impact, showing a similar pattern to fans' approach to Kanye's music.

Some may see the decision to disregard these art works as valid; by listening to Kanye’s music, and reading Rowling’s novels, you are boosting their publicity, thus showing support. However, others argue that one must not adopt this attitude when it comes to art, and should instead separate the artist from their work. For example, if Kanye’s political stance does not come through in his music, it is possible to argue that you should be able to enjoy it whilst not agreeing with him as a person.

Despite this topic being particularly relevant in modern society, censorship of art is an age-old issue, and has only recently become something so contentious. The 20th century was characterised by censorship, as seen in both the Nazi and Communist regimes. In 1930s Germany, art created by Jewish or black artists was labelled as ‘degenerate’ and thus was excluded from museums and schools. Looking much further back in history, censorship was prevalent in Ancient Egypt, shown by the destruction of Akhenaten’s religious art. During his reign, Akhenaten hoped to introduce monotheism, creating a new artistic style. As many disapproved of this, after his death his images and statues were destroyed in protest of his radical ideas.

These historical examples of censorship suggest that it is something deeply embedded in human nature. Similarly, whilst ‘cancel culture’ is a recently coined term on social media, its basic concept can be traced back to times of public shaming and ostracism, which have long been used to enforce social norms. In view of this, it is arguably not possible to separate the creator from the piece of art, as there is always an underlying awareness of the context and artistic motivation.

However, looking at this dilemma from a different perspective, taking the artist into consideration can hugely enhance a work of art. During the Holocaust, prisoners in concentration camps would create art, either on official duties or as personal commissions. Awareness of this historical context deeply enriches the experience of appreciating this art and demonstrates that, in certain situations, separating the creator from the creation would actively diminish the impact of the work.

Additionally, an alternative perspective is that it is morally wrong to separate artists from their art when their views are harmful, as it would endorse unethical behaviour. Whenever you appreciate a work of art, you are supporting the artist by giving them publicity, fame, and money. As a consequence, some see this as indirectly reinforcing problematic attitudes, and disregarding personal morals, since art can be seen as an extension of the artist. Therefore, disassociating the two could be seen as selfish in that you are condoning unethical behaviour in order to enjoy a work of art.

My conclusion to the question 'Should art be separated from its creator?' is not black and white. The word 'should' implies obligation and consistency, whereas I believe that whether art should be separated from the artist is situational. In some cases, failing to take the creator into account, whatever their political opinions may be, is foolish and diminishes the meaning of the creation. However, if society categorically refuses to separate artists from their art, and thus boycotts works of art due to personal opinions, the artistic world will become one-dimensional and monotonous.

An injection of hope: AMT-130 and Huntington’s Disease

On the 24th of September this year, the headline of a BBC News report read, 'Huntington's Disease successfully treated for first time' – a massive advancement in research into the disease. The article described Huntington's Disease as 'one of the cruellest and most devastating diseases' and excitedly reported on a new potential treatment, known as AMT-130 and based on gene therapy. In this article I will try to explain Huntington's Disease and the new AMT-130 gene therapy, and discuss the benefits and disadvantages of this treatment.

Huntington's Disease (HD) was first described by Dr George Huntington in 1872. It is an inherited brain condition in which the nerve cells (neurons) within the parts of the brain that control movement start to wither and die. This damage is progressive and affects many important areas of the brain. HD is inherited through a mutated gene; therefore, if one parent has HD, their child has a 50% chance of receiving the mutated gene. The mutation occurs on chromosome 4, and only one faulty copy of the gene is enough to cause the disease.

The mutated gene codes for huntingtin, a protein which helps neurons function properly. The instructions to make proteins are contained in DNA and are coded by patterns of chemical bases. The huntingtin gene contains a CAG repeat, meaning a repeating pattern of the bases cytosine, adenine and guanine. Most people have this repeat fewer than 27 times, whereas a patient with HD will have the repeat 36 or more times. Too many repeats appear to cause neuronal damage. Although scientists are not sure exactly why, the expanded form of the protein seems to be toxic to neurons, as it is neither used properly nor broken down.
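The repeat-counting idea above can be shown with a short sketch (the thresholds are the ones quoted in this article; the sequence is invented for illustration, and the 27–35 'intermediate' label is a standard clinical category rather than something discussed here):

```python
def longest_cag_run(dna: str) -> int:
    """Longest run of consecutive 'CAG' triplets anywhere in the sequence."""
    dna = dna.upper()
    best = 0
    for start in range(len(dna)):             # try every possible starting point
        run, i = 0, start
        while dna[i:i + 3] == "CAG":           # extend the run triplet by triplet
            run += 1
            i += 3
        best = max(best, run)
    return best

# Invented example containing 38 consecutive CAG triplets.
example = "GCC" + "CAG" * 38 + "CCGCCA"
repeats = longest_cag_run(example)
label = ("HD-associated range (36 or more)" if repeats >= 36
         else "intermediate (27-35)" if repeats >= 27
         else "typical (fewer than 27)")
print(f"{repeats} CAG repeats: {label}")
```

Real genetic testing measures the repeat length directly from a patient's DNA rather than from a text string, but the classification logic is essentially this comparison.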

The symptoms of Huntington's can vary and usually begin when a person is in their 40s, intensifying as time goes on. Early symptoms can include personality changes, confusion and memory loss, or chorea (involuntary twitches or muscle spasms). 'Chorea' comes from the Greek word for dance, as the movements can be unexpectedly large. The affected brain regions are responsible for cognitive functions, learning and motor skills, the execution of movement, as well as our memory and emotions.

A series of CT scans that display the presence of Huntington's disease in the brain

AMT-130 is the new gene therapy that has been designed to slow the progression of HD. The treatment was developed by uniQure, a company specialising in gene therapies for severe and rare genetic diseases. This therapy is injected into the putamen and caudate nucleus, deep in the centre of the brain, with the aid of magnetic resonance imaging (MRI). This operation takes up to ten hours as the surgeons carefully deliver AMT-130 three separate times.

AMT-130 is made up of two main components: a vector, which is the carrier for the treatment, and genetic material. The vector (a harmless virus) spreads throughout the brain and enters the neurons, where it releases the microRNA (a small piece of ribonucleic acid). RNA's usual job is to help create proteins; microRNA, however, is a non-coding molecule that cells use to control gene expression. When the microRNA is introduced into the cell it binds to the huntingtin messenger RNA. This instructs an enzyme to stop the production of more huntingtin, thereby preventing the accumulation of huntingtin protein in cells.

The results of uniQure's trial showed that, after around three years, there was on average a 75% slowing of disease progression. This data was collected via MRI scans and studies of protein levels in the fluid around the brain, and it suggests that AMT-130 slows the loss of brain neurons. However, these results do not yet tell us about longer-term outcomes, positive or negative. It could be that AMT-130 solves more problems than we thought, or it could create unforeseeable problems down the line.

Scientists do not have a full understanding of the huntingtin protein or its functions; therefore, nor do we fully understand what silencing it will do.

AMT-130 is a treatment, not a cure, so it will not prevent the disease from being inherited again and may only delay the onset of the symptoms rather than stop them altogether. However, could this technology be harnessed to tackle other brain diseases caused by the accumulation of other proteins? It is a really exciting and promising prospect.

Despite this, AMT-130 is certainly a breakthrough in the treatment of HD, and the results of the trial give hope to families that are suffering or will suffer from the disease. While the trial has demonstrated benefits with minimal side effects, it's important to note that it is still a trial and not yet an approved treatment. For it to make an impact on the roughly 8,000 people who live with HD in the UK, cost-effective production methods are crucial. With perseverance and dedication in the medical world, there is hope for this treatment and for a possible cure in the future.

Glossary

• Chromosome – a long strand of DNA wrapped around proteins and stored in a cell's nucleus. We have 23 pairs (one chromosome of each pair from each parent), each carrying genes for different jobs

• DNA – stands for deoxyribonucleic acid and carries the code for genetic information. It is held within the nucleus of a cell.

• Gene – a section of DNA that contains instructions to make proteins that are essential for the function of every cell. For example, haemoglobin carries oxygen within the blood

• Guanine, Cytosine, Adenine and Thymine – chemical bases that make up DNA

• RNA – stands for ribonucleic acid and is single stranded. It is made up of Guanine, Cytosine, Adenine and Uracil (instead of Thymine). There are many different types of RNA; microRNA is one of them and helps to control gene expression.

The Hittites – made up by scripture?

The Hittites were an ancient Anatolian Indo-European people who formed one of the first major civilisations of the Bronze Age in West Asia. They are mentioned throughout the Hebrew Bible (Old Testament), and for a long time were widely regarded as biblical legend or fiction. However, this theory was doubtful even at the time: the Hittites are mentioned in the Book of Genesis and the Books of Kings, and Uriah the Hittite, a captain in King David's army, is counted as one of his 'mighty men' in 1 Chronicles 11. The sheer volume of references to such a people, spanning many different books and authors throughout the Bible, made their existence too plausible to be fictional.

However, there seemed to be no trace of these people anywhere in the world until the 19th century, when a series of great archaeological breakthroughs occurred. In 1834, the first ruins of the Hittites were uncovered by the French scholar Charles Félix Texier, although he did not identify them correctly, leaving the Hittites a mystery.

Texier had in fact uncovered the ruins of a great city, which he mistakenly identified as the Median city of Pteria. The German geographer Heinrich Barth visited the site in 1858, followed by Georges Perrot in 1861; Perrot was later, in 1886, the first to suggest that the ruins could be the Hittite capital of Hattusa. In 1906, systematic excavations began at the site, led by the German archaeologist Hugo Winckler, confirming that the ruins were indeed Hattusa. Throughout the early 1900s, excavations unearthed thousands of clay cuneiform tablets, which provided critical insights into the city's identity and into the Hittite Empire's political, administrative and cultural history. Other archaeological evidence linked to the Hittites was found at the Karum of Kanesh (now Kültepe), an ancient site located near the modern city of Kayseri in central Turkey, in tablets detailing records of trade between Assyrian merchants and a certain land of Hatti. Further evidence came from script on a monument at Bogazkale left by the people of Hattusa. Documented by William Wright in 1884, this was found to match peculiar hieroglyphic scripts from Aleppo and Hama in northern Syria which, at the time, were unidentified but have since been linked to the Hittites.

Through these archaeological discoveries and the scriptural record, historians were able to construct a comprehensive history of the Hittites, who were found to be a powerful people with a real impact on the history of our world. The Hittites settled in modern-day Turkey in the early 2nd millennium BC, and grew and evolved until their empire was centred on its capital Hattusa by around 1650 BC. The empire reached its peak during the mid-14th century BC under Suppiluliuma I, when it encompassed most of Anatolia and parts of Upper Mesopotamia and the Northern Levant. The Hittites had a sophisticated social system, hierarchical with a king at the top, followed by nobles, scribes, artisans and slaves. The king was both a political and a semi-divine ruler, seen as the storm god's representative on earth. The majority of civilians were farmers, who worked land granted by the king in exchange for service, in a semi-feudal structure. One of the most captivating parts of Hittite society was its complex bureaucracy, headed by officials such as the Chief of Scribes, who managed the state's affairs and was held in high regard throughout society – the mark of a people dedicated to knowledge and learning. The Hittites were also devoted to law and justice, with complex legal rules covering areas such as property and contracts. Hittite women, whilst thought of primarily as child-bearers and household managers, also had legal rights, such as receiving partial compensation in the event of a divorce, and could participate in some economic and religious activities – demonstrating the revolutionary, even forward-looking, nature of Hittite society.

As with most civilisations of the time, the Hittites were polytheistic, and their naturalistic faith came to include deities from the cultures they conquered. Their main deity was the storm god, whom they worshipped with fervour and of whom the king was believed to be the human embodiment. Because the Hittites accumulated gods from other civilisations and added them to their own pantheon, they were nicknamed the 'Kingdom of a Thousand Gods'. The Hittite storm god, Taru/Tarhunna, was associated with the bull; he controlled the weather and was linked to war and agriculture. He is not dissimilar to the Norse and Vedic storm gods, and has been seen as a root of later gods such as the Greek god Zeus, who took on a more human form yet shared many of the same characteristics. The Hittites held a deep reverence for the environment, and natural phenomena were seen as manifestations of the gods on earth. In worship, the Hittites performed rituals and sacrifices and offered prayers to their gods.

Within scripture the Hittites are described as a people associated with the land of Canaan, and as a powerful kingdom that aided King Solomon. They appear as inhabitants of the 'promised land' that the Israelites were commanded to conquer. The Book of Joshua describes the 'land of the Hittites' as stretching from the wilderness to the great river Euphrates, helping archaeologists define the borders of the kingdom. Uriah the Hittite, a loyal soldier in David's army, was a key figure in the story of David and Bathsheba, illustrating the presence and role of the Hittites in the ancient world. Overall, the many mentions of the Hittites in scripture highlight their place as a significant nation in the ancient world.

To conclude, the Hittites were a great empire and kingdom which, at its height, ruled much of modern-day Turkey. They had a great cultural influence on our world, with a remarkably advanced society and a prominent place in the Bible. The archaeological mystery that ensued puzzled scholars for centuries, but ultimately led to the rediscovery of an intriguing and influential civilisation.

Is intelligence more reliant on nature or nurture?

Intelligence is defined as the ability to learn, understand, and apply knowledge and skills. Whether higher intelligence comes from genetic inheritance or from past experiences shaping the mind is a very controversial question, and one to which no definitively right answer has yet been found. Within psychology, the behavioural and biological approaches pose opposite views on what shapes a human. The behavioural perspective holds that a human is solely a product of their environment and that humans are born as a blank slate, thereby ignoring the notion that human biology has any impact on how we act or on our intelligence. The biological approach, on the other hand, holds that behaviour and intelligence are influenced solely by biological structures such as the brain and nervous system. In contrast to both of these positions, most people would conclude that they lie at two ends of a spectrum, and that human intelligence and behaviour are affected by both nature and nurture; this essay will therefore look at which end of the spectrum intelligence sits closer to.

Studies show that genetic factors influence all types of intelligence, for example general intelligence and specific abilities such as verbal, mathematical, and spatial reasoning. Researchers believe that intelligence is polygenic, meaning it is controlled by multiple different genes, each contributing to the overall cognitive ability of a person. Studies have shown that identical twins, who share the same genetics, have much more similar levels of intelligence than siblings who do not share identical genetic coding, such as ordinary biological siblings or adopted siblings, even when the identical twins are raised in different environments. This suggests that genes play a major role in determining a person's IQ and their intellectual strengths and weaknesses. One example is the review of twin studies on the heritability of intelligence carried out by the psychologists Bouchard and McGue in 1981. It drew on 111 twin studies and showed that correlations of IQ scores tend to be higher for identical twins than for non-identical twins, which suggests that intelligence is inherited to a considerable extent. Similarly, heritability estimates suggest that about 50-70% of the variation in intelligence between individuals is due to genetic factors. Some genes have an effect on brain structure and efficiency, such as how quickly neurons communicate or how well memory systems function, which can affect different types of intelligence. All in all, this evidence suggests that a person's genetic coding has a more dominant effect on intelligence than the environment and experiences they are brought up with.
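One standard way such twin correlations are turned into a heritability estimate is Falconer's formula, sketched below; the formula is a textbook tool rather than something spelled out in this essay, and the correlation values are illustrative rather than quoted from Bouchard and McGue.

```latex
% Falconer's formula: heritability estimated from twin IQ correlations.
%   r_{MZ} : IQ correlation for identical (monozygotic) twins
%   r_{DZ} : IQ correlation for non-identical (dizygotic) twins
H^2 \approx 2\,(r_{MZ} - r_{DZ})
% Illustrative values: r_{MZ} \approx 0.85,\ r_{DZ} \approx 0.60
% \Rightarrow H^2 \approx 2(0.85 - 0.60) = 0.50,
% i.e. roughly half of the variation in IQ attributed to genetic factors,
% broadly consistent with the 50-70% range quoted above.
```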

Whilst the genetic nature of a person is shown to shape their behaviour and intelligence to a large extent, environmental factors also play a big role in shaping intelligence, as the experiences and living conditions a person faces in their life greatly influence cognitive development. Factors such as education, parenting, social class, nutrition, and access to learning opportunities all contribute to how a person's intelligence develops over time. For example, the Flynn Effect shows that average IQ scores have gradually risen over the past century, mostly due to improvements in education, nutrition, and living standards – clear evidence that intelligence can be enhanced by environmental factors. Children who grow up in stimulating environments with books, conversation, and a high-quality education tend to develop stronger memory, language, and problem-solving skills. In contrast, a person who from a young age has received a poor education or poor nutrition, or has grown up in poverty, can have limited brain development, showing that without proper support a child's intellectual potential may not reach as far as it could have done. Therefore, while genes provide the building blocks for intelligence, nurture also shapes how close that potential comes to being reached.

The interactionist view of intelligence emphasises that nature and nurture work together, rather than independently, to shape cognitive ability. Genetic factors may provide a biological potential for high intelligence, but the environment determines the extent to which that potential is reached. For instance, a child may inherit genes linked to high cognitive ability, yet without a proper education and a stimulating environment from a young age, their intellectual potential may not develop as much as it could have done. Recent research in epigenetics, the study of how environmental factors can affect how genes work, supports this view, showing that experiences can 'switch on' or 'switch off' certain genes, meaning environmental factors can influence how genetic tendencies are expressed. For example, exposure to learning-rich environments with lots of books, or to chronic stress, can affect gene activity related to brain development and cognitive function. Furthermore, the plasticity of the brain means that intelligence is shaped physically by experience. When a person practises something, or learns something new, neural connections strengthen, and unused pathways may weaken, demonstrating that environmental stimulation can change the structure and efficiency of the brain. This research shows that intelligence is not determined by either genes or environment alone. Instead, it comes from the interaction between inherited DNA and life experiences, where supportive environments can enhance natural ability, and deprivation can limit it. This perspective recognises both nature and nurture, highlighting the complicated interaction between biology and experience in shaping intelligence.

In conclusion, there is no definitively right answer to whether intelligence relies more on nature or on nurture. Nurture can shape how intelligence develops, above all between the ages of roughly 0 and 25, while genetic factors provide the biological foundation which influences a person's intellectual ability. Both factors affect intelligence. For those who live a broadly average life, keeping a fairly nutritious diet and receiving a standard education, I believe that intelligence is shaped more by biology, since their lives contain few factors that would massively shift it. On the other hand, I believe that the intelligence of those who live a very healthy life – receiving a high-quality education and living in a highly stimulating environment from a young age – is more reliant on nurture, as their experiences can make much more of a difference to many types of their intelligence by strengthening their cognitive function and brain development. Equally, I believe that the intelligence of those who have lived a very unhealthy life, growing up deprived of education, nutrition and books, is more affected by their nurture, since severe neglect will also make much more of a difference to their overall cognitive ability.

The Rich History of the Great Saint Bernard Pass

In the Pennine Alps, Switzerland, lies the Great St Bernard Pass and on its summit, the St Bernard Hospice. The hospice is nestled among the snowy peaks, housing the St Bernard monks. It was named after its founder, Bernard de Menthon, and was used as a refuge for travellers trekking through the pass which also bears his name.

The hospice was founded in the 11th century (roughly 1050 AD). Before this, travellers used to come to Bernard de Menthon seeking refuge after hordes of bandits had stolen their possessions. De Menthon persuaded the bishop to free the pass from banditry and then founded the hospice. It was built as a haven and a place of worship for people making the dangerous journey across the treacherous pass. The hospice still provides meals, shelter and warmth for travellers to this day, and it has done so, providing refuge, for almost a millennium. It is run by a community of Augustinian canons, also referred to as canons regular, who follow the rule of St Augustine rather than that of St Benedict, which most monks follow. Canons regular are like monks in that they live a communal life of worship and prayer, but they are ordained clergy who also serve the wider community. Although they live a life of worship, they can have other roles in society – in this case rescuing travellers and breeding dogs. In 1554, the hospice burned down and the monks had to rebuild almost all of it in order to survive. It took a very long time to rebuild fully, as there were few building resources around them.

The Great St Bernard Pass is one of the highest Alpine frontier passes. It connects Martigny, in the Rhône valley of Switzerland, to Aosta in Italy and is 74.6 km long. Use of the pass can be dated back to the Romans: in 57 BC Caesar attempted to conquer the Alpine tribes to ensure safe passage through the Alps during the Gallic Wars. Improvements were then made to the pass, including the building of a road, of which remains can still be seen. The Romans worshipped Jupiter Poeninus (a protector of travellers) at the summit, and in 12 BC, under the rule of Emperor Augustus, they built a temple there in his honour. Accompanying it, they built a 'mansio' (inn) for travellers, and they renamed the mountain 'Mons Iovis'. Many Roman remnants, like the statue of Jupiter, have been excavated and can now be seen in the museum on the pass. Later, in the Middle Ages, merchants, pilgrims and soldiers travelled through the pass in horrible conditions. Although most people crossed because there was no other option, some saw the pass as a gift from the gods allowing them to conquer the mountains. Napoleon crossed the pass in May 1800 with his reserve army of 40,000 soldiers, taking them over the Alps into Italy to fight the Austrians (whom he went on to defeat at the Battle of Marengo) after they had seized what was once his territory. To mark this, a series of five oil-on-canvas paintings was produced by the French painter Jacques-Louis David between 1801 and 1805. This was commissioned by the King of Spain and shows a very idealised version of the crossing: Napoleon is depicted on a horse in a magnificent coat, when in reality he travelled on a mule in a much simpler coat. The five paintings are now held in various places across Europe.

In the 1660s, the St Bernard canons bred existing Alpine dogs with large mastiff-type dogs descended from those the Romans had used in their armies. They were originally bred to protect and guard the residents of the hospice, continuing the work done by Bernard de Menthon. The monks began taking them on their treks and realised they had an incredible ability to navigate in dense fog and snow: they had a great sense of direction and could sense and give warning when an avalanche was coming. These dogs then travelled by themselves in packs of two or three to search for travellers who were lost, or buried after big snowfalls or avalanches. The puppies and younger dogs were trained naturally, learning these skills by accompanying the older dogs on their rescue missions. They would navigate in extremely tough conditions, sensing the presence of people in the middle of nothing but mountains and snow, and would dig through metres of snow and ice. If the traveller was alive, one dog would keep them conscious, comforted and warm by lying on them to transfer body heat; the other dog (or dogs) would then go back to the hospice and alert the monks. In the early 1800s the dogs came close to extinction and were bred with Newfoundlands, which created a long-haired variety. This was not favoured by the monks, as the longer hair would freeze over. Nevertheless, the dogs' astounding ability was recognised by many, including the soldiers of Napoleon's army, who passed down incredible legends of the dogs and contributed massively to their reputation. They worked on the pass for around 200 years and saved approximately 2,000 lives. In the last recorded rescue, in 1897, two St Bernards found a 12-year-old boy almost frozen to death in a crevice and saved him by lying on him and alerting the monks. Of all the St Bernard dogs, Barry is the most famous: he is estimated to have saved 40 or more lives in his 12 years, and he is honoured at the hospice to this day, where one St Bernard is always kept bearing his name.

The Great St Bernard tunnel opened in 1964, providing an all-weather route through the Alps, and the pass, the hospice and the dogs are no longer used in the same way. The hospice still functions, welcoming travellers and tourists, who are invited to see the old baroque church, the museum and the library. The breeding of the St Bernards has moved from the hospice to the Barry Foundation, named after the legendary Barry, which was set up in 2005 and now conserves the breed; the foundation brings the St Bernards and their puppies to the pass each summer for visitors to see. The canons’ focus has shifted from rescue work and the dogs to hospitality, maintaining the historic monastery, and supporting the Barry Foundation and its St Bernards.

A Comparative Study of Authority: Trumpism, Big Brother, and Soviet Ideology

What does a 21st-century American president have in common with a fictional tyrant from a 1940s dystopian novel? More than we might like to admit. The quest to understand political authority - what it is, who wields it, and to what end - forces us to look into some of history’s darkest mirrors.

As the authority of the current U.S. president, Donald Trump, seemingly towers over his people, significant numbers of both his critics and his supporters remain unsettled. Yet while their methods of exercising authority seem to differ drastically, Trumpism, Orwell’s Big Brother, and Soviet ideology share something in common in the way they shape society. In this essay, I shall examine how Trump’s presidency and Big Brother’s regime consolidate power in a set of similar ways.

In an age of information overload, the most potent form of authority is no longer the control of armies, but the control of reality itself. In 1984, in order to obtain absolute power over its people, the Party systematically dismantles the truth and redefines its ‘enemy’. The Ministry of Truth alters all historical records, and ‘reality control’ becomes ‘doublethink’, the ability to hold two contradictory beliefs simultaneously. Without ever announcing its methods, the Party remakes society without the consent of those who live in it, the opposite of democracy. When anyone questions it, the answer is simply the altered truth: “The past was alterable. The past never had been altered. Oceania was at war with Eastasia. Oceania had always been at war with Eastasia.” The Inner Party feels no duty to take account of what the people believe about their political system, nor does it tell them how it deals with problems or potential leaks in society. All of this serves to make the Party’s will the only possible reality, eliminating any benchmark for truth outside itself.

In a similar way, breaking with two centuries of political tradition, Trump did not ask Americans to believe in God or in truth, but in Trump himself, answering the political pandemonium with a simple ‘I alone can fix it.’ He positioned himself not as the servant of a system but as its sole solution, presenting himself almost as a messiah for a United States he declared ‘broken’. This created a direct covenant with his followers, built on his own asserted truths. When faced with contradictory facts or investigative reports, the response was not to engage with the details but to dismiss them as ‘fake news’, effectively declaring his own statements the only valid currency of reality. In doing so, he did not need a Ministry of Truth; he became its walking, talking embodiment, demanding loyalty to his word.

History offers a clear precedent. The Leninist state was the prototype: it pioneered the core tactics of 20th-century authoritarianism, from the cult of leadership and the subordination of all institutions to the Party, to the treatment of objective truth as a malleable tool for consolidating power. In the early years of the USSR, the constant rewriting of history in textbooks and official media was routine. ‘Enemies of the people’ were airbrushed from photographs, and Leon Trotsky was largely erased from Soviet history after his fall from power, in order to ‘maintain the infallibility of the Party line and prove the inevitable historical progression towards communism’.

When questioned, Trump responded with ‘I have nothing to do with Russia, no deals, no loans, no nothing!’ Yet in reality, Trumpism and Leninism are built on the same engine of perpetual conflict: where Lenin had the class struggle, Trump has the culture war. Both systems require a loyal in-group forever battling a demonised enemy, and both concentrate power by making the leader the ultimate arbiter of truth, rendering messy, objective facts irrelevant. In addition, Trump’s former lawyer was said to have testified that the Trump Tower meeting with a Russian lawyer to get ‘dirt’ on Hillary Clinton had been presented as being about ‘adoptions’. This episode revealed the relentless denial of verifiable reality, a move straight from the authoritarian playbook, where the leader’s word, however implausible, is intended to supersede all evidence.

Authority is often solidified not by what it champions, but by what it opposes: a perpetual enemy is essential for unity and control. In 1984, Big Brother’s Party casts Emmanuel Goldstein and the elusive Brotherhood as its enemies, with citizens raised to hate them even though most have no idea what they actually are, alongside the foreign enemy Eurasia, which is claimed always to have been the enemy. The Two Minutes Hate binds everyone together through a shared object of hatred. In Soviet ideology, the ‘class enemy’ took the form of the bourgeoisie and the kulaks, denounced as wreckers and saboteurs, and later of Western imperialists and capitalists. In Trump’s America, the enemy is the ‘Washington swamp’, the so-called ‘fake news media’, ‘globalists’, and often immigrants or cultural ‘elites’, as when Trump referred to COVID-19 as the ‘Chinese virus’, even though it was largely travellers returning from Wuhan who carried the virus into Europe and America.

As these societies overlap, the big question emerges: what is the true purpose of society? It is often theorised as providing security, justice and welfare, and fostering the common good and individual flourishing, yet in the Trumpist, Orwellian, and Soviet worlds that purpose becomes perverted. In 1984, O’Brien explicitly states the Party’s goal as power for its own sake: ‘The object of power is power.’ Society exists solely to serve the Party’s endless quest for domination. In Soviet ideology, the stated purpose was the achievement of a communist utopia; in practice, the system devolved into preserving the power and privileges of the Party nomenklatura, something closer to a dystopia, and the ideology became a tool for preserving power. In Trump’s America, critics argue, the purpose of the movement shifts from governing for the common good to a perpetual campaign focused on ‘winning’, dismantling institutions, and maintaining the leader’s political and cultural dominance. The health of society is secondary to the triumph of the in-group.

In conclusion, the resemblance between Trump’s America, Orwell’s Oceania, and the Soviet state is significant. It indicates that the thirst for absolute authority is ever-present: a script written in the language of ‘us vs. them’ and directed through the control of reality itself. Recognising these patterns is not an academic exercise; it is our first and most powerful line of defence for our own political good. For the true purpose of society, the ultimate answer to the authoritarian’s claim, is not to bow to a single, towering will. It is to build a world where authority is decentralised, truth is communal, and power resides not in a leader but in the inviolable dignity and freedom of every single person. This comparative study serves as a sharp warning that when the purpose of society is hijacked by a quest for power, truth, community, and humanity themselves become the first casualties.

Is ayahuasca a potential alternative to antidepressants for use in treating depression?

This year marks the 50th anniversary of the discovery of selective serotonin reuptake inhibitors (SSRIs), a class of antidepressant. According to the Lancet, a medical journal, 332 million cases of depression occurred in 2021 alone. Although SSRIs are widely prescribed, their effectiveness has been the subject of controversy. Ayahuasca’s long history of use as a tool for mental and spiritual healing within Amazonian cultures sets it apart from other psychedelics and has drawn the growing attention of Western scientists investigating its promising potential as an alternative treatment for depression.

How do antidepressants work and how are they used?

Antidepressants refer to a broad category of psychotropic drugs used to treat depression and low mood. There are many distinct types, but the most commonly used are SSRIs, such as Prozac and Zoloft. Serotonin, or 5-hydroxytryptamine (5-HT), is a neurotransmitter associated with regulating mood. It acts as a chemical messenger which carries signals between neurons through synaptic transmission (see image below).

Synaptic transmission

Serotonin allows areas of the brain responsible for emotions, such as the amygdala (involved in fear and threat assessment), to connect to the prefrontal cortex (involved in decision making). The prefrontal cortex manages emotional responses, helping individuals process their emotions. During the process of synaptic transmission, the brain performs serotonin reuptake, which removes excess serotonin from the synaptic gap for storage or to be broken down. This process reduces the levels of serotonin available in the brain to regulate mood. SSRIs prevent this from happening, leading to increased serotonin levels in the brain. Increased serotonin leads to more connectivity, and studies suggest this can assist mood regulation for those who struggle with depression and anxiety. This makes SSRIs a potentially helpful tool in treating the symptoms of depression. However, while antidepressants can be effective at treating the symptoms, they do not address underlying causal factors. This is why they are often paired with therapeutic treatments such as cognitive behavioural therapy (CBT), to help people suffering from depression to develop a new way to navigate life’s challenges.

Composition and chemical effects of ayahuasca

Ayahuasca is a psychoactive brew that originated in South America, used by tribespeople in the Amazon rainforest as a method of spiritual healing as well as a means of contacting a spiritual plane. It is traditionally composed of the Banisteriopsis caapi vine combined with leaves from plants such as Psychotria viridis, which contain dimethyltryptamine (DMT). DMT is the active psychedelic compound in ayahuasca. It crosses the blood-brain barrier and behaves like a neurotransmitter, increasing brain connectivity, because it is structurally similar to serotonin.

Both serotonin and DMT are tryptamines, which is why DMT can act as a neurotransmitter. Like serotonin, DMT is susceptible to deamination (the removal of an amine group) by monoamine oxidase enzymes (MAO). MAO helps break down neurotransmitters such as serotonin and dopamine, helping to regulate mood, so if DMT is broken down by MAO before it can have its intended effect, the potency of the drug is significantly reduced. That is why DMT is paired with the Banisteriopsis caapi vine, which contains harmala alkaloids, the most influential being harmine, harmaline and tetrahydroharmine. Harmine and harmaline act as MAO inhibitors (MAOIs), preventing the deamination of DMT; it is this addition of MAOIs that allows the brew to produce visual and auditory hallucinations. The third harmala alkaloid, tetrahydroharmine, as well as being an MAOI, primarily acts as a serotonin reuptake inhibitor, similar to SSRIs. The resulting higher serotonin levels keep 5-HT receptors active for longer, which enhances DMT’s effects on the brain and makes the psychedelic experience last longer.

Could ayahuasca be an effective alternative to antidepressants?

Ayahuasca has been used by many different cultures in South America for spiritual healing. The Shipibo tribe in Peru, for example, have used ayahuasca for centuries as part of their shamanistic and spiritual traditions. Modern scientific research suggests that it may have potential for helping patients with depression and other psychological conditions. Similarly to SSRIs, ayahuasca can increase neuroplasticity, the brain’s ability to form new pathways. This enables people to change their habits and end unhealthy patterns in their lives. However, unlike SSRIs, ayahuasca can lead to psychedelic experiences which are deeply reflective and introspective. These experiences can help people change their perspective towards a prior experience which may be one of the causal factors of their depression or trauma. These unique aspects of what ayahuasca is able to do can assist people in overcoming depression and help treat the cause in a way that antidepressants cannot. While ayahuasca is not a miracle cure, it can help people with depression in the long term, as opposed to only treating the symptoms.

There are risks associated with taking ayahuasca, as studies suggest it can cause negative physical and psychological effects. Physical effects include nausea, vomiting and headaches, while negative psychological effects such as anxiety, distress, and confusion have also been observed. A 2020 study published in Nature, a leading scientific journal, involving 40 participants in ayahuasca ceremonies reported that seven participants described extremely challenging psychotic symptoms. Ayahuasca also has the potential to worsen existing mental conditions such as bipolar disorder and anxiety, and can trigger mental health conditions in those who are genetically predisposed, meaning that for anyone with a family history of mental illness it may be risky and unpredictable. This indicates that antidepressants may be safer than ayahuasca for some people.

Chemical structure of DMT; chemical structure of serotonin

In conclusion, ayahuasca is a potential alternative to SSRIs in the treatment of depression. Both ayahuasca and SSRIs increase the availability of serotonin in the brain. However, while SSRIs are often paired with other therapeutic treatments, the psychedelic experiences induced by ayahuasca can often be innately therapeutic and introspective. Although ayahuasca has potential benefits in treating depression, further research is needed to explore the effects of ayahuasca in more detail. This research would help reduce the unpredictability of ayahuasca as a treatment and lead to further understanding of how to use it safely in a medical context.

The Alternate History: How Operation Barbarossa Could Have Succeeded

In 1925, Hitler published the first volume of Mein Kampf, in which he outlined his dream of crushing Communism and the Soviet Union. The signing of the Molotov-Ribbentrop non-aggression pact in August 1939 temporarily bound the two ideological enemies together. The pact ensured peace on Germany’s eastern border while Hitler invaded Poland and subjugated much of Western Europe, but by mid-1940 it had served its purpose. Hitler then began planning in secrecy to break the pact (as he had done with others) and launch a large-scale invasion of the Soviet Union, codenamed Operation Barbarossa. The destruction and dissolution of the Soviet Union would be the final undertaking before Germany assumed total control of Europe. This invasion would turn out to be the largest military operation in history.

The most critical error committed by Adolf Hitler came in August 1941. Instead of maintaining the concentrated drive on Moscow - the strategic prize - Hitler diverted the armoured divisions of Army Group Centre to the north and south. Specifically, forces were redirected to support operations against Leningrad (now St. Petersburg), which was the former capital of Russia and a symbol of the Russian Revolution, as well as a major industrial centre producing around 10% of the country’s industrial output. Forces were also diverted to the costly encirclement battle around Kyiv (Kiev).

For Operation Barbarossa to succeed, this division of forces and the resulting four-week delay would have had to be avoided. A relentless, focused drive on Moscow would have dramatically increased the chances of seizing the capital before the devastating Russian environmental factors set in:

1. The Rasputitsa: Capturing Moscow before the autumn rains brought the notorious Rasputitsa (literally, ‘roadlessness’ or seasonal mud) would have prevented the crippling standstill that bogged down the Wehrmacht’s tanks and supply lines.

2. General Winter: Securing Moscow in October 1941 would have provided German troops with the time and vital rail infrastructure necessary to prepare for the brutal Russian winter (General Frost). Critically, it would also have denied the Soviets the capacity to relocate key industries and organise effective winter defences.

The conventional analysis argues that the entire operation’s outcome was decided by the success or failure of the drive on the Soviet capital, Moscow. The city was the Soviet Union’s integral railway and road hub, the centre of its command structure, and a powerful symbol. Its capture would have essentially crushed the Soviet logistical and communications network, leaving the Red Army’s remaining forces fragmented and unable to coordinate a sustained defence.

A successful conquest of the Soviet heartland would have immediately shifted the balance of resources. A key strategic objective was the fertile grain-producing region of Ukraine and, most crucially, the Caucasus region, home to vast oil fields.

Fuelling the War Machine

Access to the Caucasus would have provided the Third Reich with potentially 24 million tons of crude oil per year, a massive boost that would have solved the German military’s crippling dependence on synthetic fuel (which was extremely labour-intensive, requiring the hydrogenation of coal). The only natural reserves the Third Reich otherwise had access to were the vulnerable Romanian Ploiesti fields. Critically, however, Germany’s existing infrastructure was unsuitable for moving large volumes of oil from the Caucasus at the time, meaning new pipeline and rail infrastructure would have had to be built quickly to make use of those vast reserves.

With the Eastern Front essentially secured, the strategic focus and resources of the Third Reich could have been dramatically reallocated, fundamentally altering the course of the global conflict. Resources previously dedicated to Eastern coordination and troop replacement could be funnelled into naval warfare and technological advancement. This shift would allow for the accelerated development and deployment of advanced U-boat fleets. Specifically, this could hasten the production and deployment of advanced designs like the Type XXI U-boat (the “Electro-boat”), a true submarine rather than a submersible, boasting greater underwater speed and endurance. Fleets of these advanced U-boats would subject Allied shipping in the Atlantic to a new, unprecedented wave of attacks. The resulting massive strategic defeat could severely compromise the United Kingdom’s ability to sustain its war effort and potentially cripple the flow of American Lend-Lease aid, dealing a devastating blow to the Allied war at sea.

Beyond the naval threat, the remaining German ground forces (over 2 million men after securing the East and leaving a half-million for occupation duties) would be free to reinforce the Western Front. The extra divisions could be deployed to fortify defences at critical locations such as Sicily and the coastline of Normandy, making future Allied invasions (such as Operation Overlord) vastly more costly, complex, and potentially impossible to execute successfully, regardless of Allied ingenuity.

The campaign was never purely military; it was driven by an ideological agenda. Following victory, the German High Command would have enacted Generalplan Ost, its blueprint for colonisation and ethnic cleansing, subjecting the occupied territories to a Vernichtungskrieg (war of annihilation). This horrific plan envisaged the systematic murder, forced starvation, and mass expulsion of up to 50 million Slavic people, effectively clearing the land for German settlement.

In my opinion, with an alteration of certain tactics, such as a relentless, focused Blitzkrieg-style attack plan aimed directly at Moscow, Operation Barbarossa would have ultimately succeeded.

By taking advantage of Stalin’s paranoia and his immense purges of the officer corps, which crippled initial Soviet coordination, vast swathes of territory could have been annexed. In this grim alternate history, the defeat of the Soviet Union would not only have secured the vast resources and territory of the East but also allowed Germany to turn its full military might against the remaining Allied powers, making an Axis victory a terrifying possibility.

Did God create humans when he was lonely or did humans create God when we were lonely?

This question exposes something deeply human: the need to be known, loved, and not alone in the universe. For centuries theologians have said that God is complete within himself and that creation was an act of love. But philosophers and anthropologists have long suspected the opposite: that we invented the idea of God to fix our loneliness in this universe. Ludwig Feuerbach, in his book The Essence of Christianity, argued that our idea of God is not actually a divine being but a projection of our own human qualities and desires, which we idealise. Humans are finite and imperfect; we are wise, but limited; we can love, but not perfectly. To comfort ourselves, we imagine a being who possesses all these qualities perfectly, and so theology becomes anthropology: studying God is really just studying human nature. Across the world and throughout history people have imagined divine beings who hear our requests and give our lives meaning. Whether these gods were revealed or imagined, they tell a common story, one about loneliness, fear, and the fragile hope that someone, somewhere, is listening.

The Origins of Religion

To explore this question, we must first understand where religion began. Archaeological and anthropological evidence suggests that belief in the divine began as soon as humans developed symbolic thought: early hunter-gatherers carved idols and painted spirits on cave walls. As societies grew, geography shaped theology. In the harsh deserts of the Middle East, where survival depended on unity, monotheism arose; in India and Greece, where an abundance of resources allowed for diversity, polytheism flourished. Religion evolved as both explanation and comfort, a way to make sense of storms, death, and suffering. The sociologist Émile Durkheim argued that religions helped early societies maintain order by turning shared moral codes into divine commandments. Gods, in this sense, were projections of the community’s values, which is why different cultures have different religious rules: their communities had different values and morals.

If religion were truly the result of divine revelation, you might expect the truth to be more evenly distributed and accessible to all, regardless of birthplace or culture. A child born in Saudi Arabia is almost certainly raised Muslim; someone born in India is almost certainly raised Hindu. These patterns are simply the product of environmental inheritance. This raises a frightening question: if salvation from sin depends on belonging to the ‘right’ faith, then billions of people are condemned just because they were born in the wrong place. What kind of loving God would do this? The idea that a person could be rewarded or punished based on accidents of birth makes God appear less like a benevolent creator and more like a discriminating, even cruel, one. This points to the human origin of religion: our gods grew out of local soil, local fears, and local hopes. It becomes a problem when faiths begin to claim universal authority and condemn outsiders for not sharing what they never had the chance to learn.

The human need for God

Loneliness isn’t just social isolation: it is also the isolation we feel on earth from everything around us, the awareness that we ultimately live and die alone within our own consciousness. Religion answers that terror by telling us that we are known, that our lives are part of a bigger story, that death is not the end. Prayer and ritual give some structure to a chaotic world, much as meditation does. Even if God were an invention, he is a healing invention. Faith has encouraged art, music, charity and acts of compassion. To believe is to find peace in the infinite.

Psychologically, belief offers a survival advantage. People who believe in meaning beyond death often show greater resilience, lower anxiety and stronger social bonds. The communal aspect of religion may have helped early humans cooperate, share resources, and cope with grief together. In this sense, religion may be less about divine truth and more about emotional evolution, a survival mechanism built on myths and rituals.

The theological defence

In most traditions, God is not lonely, nor capable of being lonely. In Christianity, God is complete within the Trinity. In Islam, Allah is Al-Samad: self-sufficient, without need. In Hinduism, creation is an overflow of joy, not a remedy for loneliness. To claim that God created humans out of loneliness is to misunderstand the nature of God itself. From this perspective, human loneliness is not evidence that we invented God, but that we were made with the capacity for a relationship with the divine.

The harm and hope of religion

Yet the history of religion is not only one of comfort; it is also one of control. The same beliefs that create compassion have also justified crusades and oppression. Religion has divided tribes and turned sacred books into tools for power. Many religions have created hierarchies of gender, sexuality, class or race. For centuries, religious authorities suppressed science, silenced non-believers and punished curiosity. On a more personal level, the harm can be just as deep: people have grown up fearing that they are sinful, seeing themselves as broken or unworthy of love unless they change things they cannot control.

Religion in an age of science and secularism

In an age dominated by scientific discovery and secularism, religion faces challenges. The mysteries once explained by divine will are now often attributed to physics, biology and chemistry, yet people still search for meaning. Some think that science is dismantling religion; others think it simply reveals the intricacy of divine creation. Despite centuries of scientific advancement, belief has not disappeared: nearly 75% of the world’s people live in countries where their religious group makes up a majority of the population. Secularism did rise in the eighteenth and nineteenth centuries, as sceptical thinkers such as Hume and Kant challenged the literalism of sacred texts and argued that human reason should guide morality and knowledge. Charles Darwin dealt the biggest blow to the idea of religious creation by suggesting that life was not the product of special design but the result of natural selection, a mechanism that required no guiding hand.

So perhaps the story of religion did not begin with divine revelation but with the human mind. People, faced with the vast silence of the universe, shaped gods in their own image to make sense of what they could not control, and found meaning by imagining that someone or something was listening. However, to say that humans created religion does not mean that the divine does not exist; the hope that people have for God might itself be evidence of a higher being. So, did God create humanity out of divine loneliness, or did humanity create God out of its own? The answer is probably neither. What is certain is that there is still so much in the universe that is unknown, and questions that may never be answered. The human mind evolved to survive on a single planet with a single sun; it can no more understand the origin of the universe than an animal can comprehend a city it walks through. There may be something behind it all, a builder whose nature we will always be too small to imagine. And if that is true, then science and belief are just two ways of attempting to find a truth that will always be just out of reach.

Is happiness achievable without comparison?

In today’s world, comparison is inescapable. Whether through social environments, social media, or the workplace, people constantly measure themselves against others. When we think about comparison’s role in happiness, the consensus is that ‘comparison is the thief of joy.’ This captures the view that comparing yourself to others can create feelings of inadequacy, envy, and vulnerability, thereby ‘stealing your joy.’ On the other hand, some psychologists argue that comparison is an essential component of happiness, so much so that the very concept of happiness is unachievable without comparison. For this reason, it is important to evaluate the effects of comparison on happiness, as well as to consider the ways in which it may be detrimental. The exact definition of happiness, and what causes it, are questions which scholars have debated for millennia. On the whole, however, it is agreed that success and achievement in valued arenas are highly conducive to contentment in life, ranging from success in a sports game to receiving a promotion at work.

Existentialists believe happiness requires a contrast in order to be felt. In existentialism, emotions have no fixed and universal meaning, so happiness can only have meaning if it is set against its opposites. If someone had only ever felt happy and had never known hardship or suffering, happiness would become a neutral emotion and an unremarkable baseline. Thus, from the viewpoint of an existentialist, happiness can only be created through comparison. Nietzsche, though often labelled a nihilist, shared these beliefs, arguing that pain and suffering are not obstacles to happiness but a vital condition for it: the individual who endures more hardship and pain on the journey to accomplish something will feel a more meaningful and authentic happiness than someone who did not. This view is quite positive, implying that with suffering comes happiness, and that you can only feel as happy as you have felt sad, and vice versa. Schopenhauer, by contrast, believed that the natural state of emotion was suffering, and that happiness is simply the absence of suffering. His pessimistic view still holds that the contrast and comparison of emotions are fundamental: happiness is defined not by what it is, but by what it isn’t. This, although very negative, still implies that it is impossible to experience happiness without its opposing counterpart. Lastly, the Easterlin Paradox, identified by the economist Richard Easterlin in 1974, offers evidence that happiness relies on comparison and is difficult to achieve if everyone is equal. Within a society, wealthier people report higher levels of happiness than poorer people, yet Easterlin found that if the wealth of a whole country increased, reported levels of happiness hardly changed. The two findings appear to contradict each other, and the explanation is that people are constantly concerned with their peers and compare themselves against them. The Easterlin Paradox shows how comparison can make you unhappier; however, it also shows how fundamental comparison is to happiness. If an individual became wealthier, they would feel happier; if everyone became wealthier, the individual would no longer feel happier. This points to a subconscious need to be better than others, reinforced by the fact that people often dislike the idea of being ‘average.’

Sisyphus is a figure from Greek mythology, a king of Corinth who repeatedly disobeyed and deceived the gods. As punishment, he was condemned to roll a boulder up a hill, only for it to roll back down every time he neared the top. Albert Camus, in his essay The Myth of Sisyphus, describes how, despite his meaningless existence for eternity, Sisyphus is able to find happiness amid the absurdity he endures. This is partly because he has no other worker to compare himself to: no hierarchy, no better fate, and no punishment more or less merciful than his own. With no opportunity for either downward or upward comparison, Sisyphus has the perfect environment in which to cultivate internal peace and happiness, as well as his own meaning and purpose. Ancient philosophers, among them the Stoics, likewise argued that true happiness is internal, not relative. Epicurus, the Greek philosopher who founded the school known as the Garden, taught the concept of ‘ataraxia’, meaning true happiness and peace of mind. He believed that ataraxia is attainable as long as it is your main goal: if you abandon desires for wealth, fame, and competition, you can minimise unnecessary wishes and achieve an inner calm, the true recipe for happiness. Marcus Aurelius, the most renowned Stoic, agreed, arguing that happiness lies in self-mastery and inner peace; his line ‘Do not waste what remains of your life in speculating about your neighbours’ suggests that comparison is redundant and even unhelpful in the pursuit of happiness. Stoicism teaches living in harmony with our problems, controlling what we can, and accepting our inability to control external events. Stoics dismiss comparison, viewing it as a source of shallow, short-term happiness and a gateway to avoidable suffering and pain; they believe that long-term, genuine happiness comes from inner tranquillity and self-mastery. Lastly, Buddhist philosophy places emphasis on detachment from superficial concerns. According to the Buddha, pain and suffering derive from external desires, including the wish to compete with and surpass others. Happiness is taught to be self-generated, and the Dhammapada, a text in Buddhist literature, highlights the importance of detachment from desire and ego in achieving happiness. Overall, these philosophical traditions hold that lasting and genuine happiness comes from within, independent of external factors.

In conclusion, the role of comparison in happiness is far more complex than the simple saying ‘comparison is the thief of joy.’ Although the saying is certainly true in some cases, where comparison causes dissatisfaction and envy, thinkers such as Nietzsche explain how happiness gains its meaning through contrast, and the Easterlin Paradox illustrates how happiness, often subconsciously, is tied to relative rather than absolute conditions. On the other hand, Camus describes how Sisyphus’ freedom from external comparison can nurture a pure and peaceful happiness, and Epicurus explains how true happiness is internal, not relative. Despite the arguments that happiness is achievable without comparison, I believe that happiness is meaningless unless hardship and suffering are experienced first. Therefore, in my opinion, happiness is unachievable without comparison.

Marlborough College, Wiltshire SN8 1PA www.marlboroughcollege.org
