The Purvis Journal - 2023 Edition

Contents

Welcome
Moral Debates
Science & Technology
Who Inspires Me?
History & Current Affairs
Hobbies & Interests

Introduction

Dear Reader,

Although this is only the fourth Purvis Journal since its inception in 2020, our ambition as an editing team was to create an edition that drew on knowledge and insight not just from Cranleigh UK, but also from Cranleigh Abu Dhabi and a few wise Old Cranleighans. Consequently, this is the first year in which we are able to bring you articles not only from the UK but also from Abu Dhabi.

On 26th March 2020 the United Kingdom went into a full national lockdown, and it was not until 24th February 2022 that all the COVID restrictions were finally lifted. This is therefore the first year in which COVID no longer plays such a pivotal role in our everyday lives, though our lives are no less tumultuous, as the founder of the Purvis Journal details later in this year’s edition. We hope, however, to help you reminisce about simpler times with a range of fantastic articles on anything from a beginner’s guide to chess to a defence of utilitarianism. We hope that we are able to live up to the immense legacy of the Purvis Journal.

There are a lot of people to thank for the creation and production of the 2023 Purvis Journal. Firstly, and most important of all, Mr Rothwell, whose endless efforts to annoy the editors and find errors in the journal’s line spacing were, as always, invaluable. Next, thanks to Daisy Beaumont for her exceptional artwork and to Mr Ladd-Gibbon and Nicholas Mommsen for their astonishing photography. Finally, we would like to commend and congratulate the editing team at Cranleigh Abu Dhabi, who managed to collate and edit a handful of superb articles despite being a mere 5,500 km away.

A Message from the Middle East... مرحباً

The Purvis Academic Journal allows Cranleigh Abu Dhabi students to exchange and share their perspectives on a plethora of fields, ranging from linguistics to engineering. To Cranleigh Abu Dhabi, the Purvis Journal is not only an opportunity to express academic rigour and encourage debate, but also to integrate with the larger Cranleigh community. The Purvis Journal signifies a rich history of intellectual, thought-provoking articles, reflecting mesmerising penmanship coupled with concrete structuring. This is a skill instilled in Cranleigh Abu Dhabi scholars, and to us this opportunity brings a challenge, which we take on with much pride. We would like to thank Prachet Poddar, Amelia Hanspaul, Ali Naqvi, Polina Chirkova, Isabella Cartwright and Elizabeth Darroch for proofreading and organising the journal from CAD, as well as Ms. Lara Dale for being the key communicator for this opportunity.


Meet the Team

Here is a snapshot of the team that spent far too much time debating over fonts and front covers and are responsible for this year’s edition of the Journal.

Ozzy Larmer - Designer
Rafe Farrant - Editor
Lauren Beaumont - Editor
Sam Francis - Editor

In this section our articles delve into a variety of society’s most controversial questions, examining the key principles that underpin some of its biggest debates.

Moral Debates

A Defence of Utilitarianism

In today’s society the nuances and differences between moral standpoints are often largely ignored. In this article Tom presents a deeply thought-provoking argument weighing up the pros and cons of utilitarianism.

The first and most obvious argument against utilitarianism is that it disregards any concept of rights, along with the idea that certain actions are inherently wrong. No one in their right mind is going to argue that humans don’t have the right to freedom from slavery, or that rape is ever morally good. This, however, is exactly what utilitarianism states: given a certain (granted, extreme) situation, an action that violates one of the rights in the UN’s Declaration of Human Rights, including actions society considers absolutely immoral, can be not just morally excusable but morally good. This surely renders utilitarianism unusable, but before you judge it so quickly, it is important to first understand why. Utilitarianism judges an action simply on its consequences, using ‘Bentham’s hedonic calculus’, which weighs up the total pain and pleasure of the people affected. Thus, if the consequences of one of these actions are good overall, the action is also good, irrespective of what the action itself is. Therefore, given a particularly extreme situation, these actions may be justified by their consequences.
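To make the consequentialist bookkeeping concrete, the sketch below (not part of the original article, and with invented utility numbers) shows one crude way a hedonic-calculus style tally could be written in Python: sum the pleasure and pain of everyone affected and judge the action by the sign of the total. It also runs the ‘ten sadistic guards’ scenario discussed in the next paragraph.

```python
# Illustrative only: a toy, act-utilitarian "hedonic calculus".
# The utility values are invented; Bentham's actual calculus also weighs factors
# such as intensity, duration, certainty, propinquity, fecundity, purity and extent.

def net_utility(effects):
    """Sum signed utilities: positive numbers stand for pleasure, negative for pain."""
    return sum(effects.values())

def is_judged_good(effects):
    """On this crude reading, an action is judged good if its net utility is positive."""
    return net_utility(effects) > 0

# The 'ten sadistic guards' scenario: each guard gains a little pleasure,
# the single prisoner suffers greatly.
scenario = {f"guard_{i}": 3 for i in range(1, 11)}
scenario["prisoner"] = -25

print(net_utility(scenario))     # 30 - 25 = 5
print(is_judged_good(scenario))  # True: the calculus appears to 'justify' the torture
```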

The classic example of the ‘ten sadistic guards’, where the pleasure gained by the guards outweighs the suffering they cause a prisoner by torturing them, is most often employed as an argument against utilitarianism, and it perfectly illustrates this point. It attempts to highlight the dangers of consequentialism: the actions of the guards are justified because, overall, their pleasure outweighs the one prisoner’s pain. Thus we see that an action we naturally feel is intrinsically immoral (torture) is justified in what appears to be quite an unpleasant manner. This doesn’t seem to help utilitarianism’s case, as it appears to completely oppose the feeling that we might describe as our moral compass or conscience. However, one cannot deny that utilitarianism is logically coherent, and, as we ponder the issue, it becomes more and more apparent that this is more than can be said for our conscience. Aside from our groundless and inconsistent intuition there is no rational argument for absolutism (excluding a religious argument). Instead many will put forward what is actually a consequentialist argument, which only emphasises my point: that although the action appears to be beneficial, it will lead to worse consequences further down the line, in a slippery-slope style of argument. Furthermore, when we consider why the pain of the prisoner should be held in such high regard in comparison to the pleasure of the guards, without pointing to a loving deity or suchlike, it is impossible to give a compelling reason why it should be. Thus, if we are judging morality based on pain and pleasure, we should take all of it into account equally and disregard the baseless sentiments of absolutism in order to correctly determine morality.

In order to highlight why it is necessary to employ an ethic that abandons rights and intrinsic morality I will use another example: the ‘Anne Frank’ situation. An absolutist like Kant would have to maintain that the right thing to do is to tell the truth and thus hand the Jews over to the Nazis, despite the fact that this will almost certainly directly cause their deaths. This is because absolutists do not take consequences into account when deciding the morality of an action; it is judged simply on the intrinsic morality of the action itself, and after all, Kant would say you cannot be held morally responsible for the actions of others. This, however, is completely irrational. We do not live in a perfect world, and whether or not the Nazis are acting immorally is not going to change the fact that the Jews will die if we tell the truth. Therefore, it cannot be moral to tell the truth, which would cause the Jews to die, and this is where the gaping flaws in absolutist ethics become apparent. Thus, we see why it is necessary to abandon absolutism, and so the integrity of utilitarianism is maintained.

Initially it may not seem obvious why cultural relativism and utilitarianism are mutually exclusive; a utilitarian would say that morality is dependent on the situation, and a cultural relativist would say that morality exists in relation to the culture it originates from. However, when we examine this we see that they are completely opposed: cultural relativism states that the existence of morality is dependent on culture, whereas utilitarianism states that only the outcome of morality is dependent on the situation. Thus a utilitarian (or follower of any ethic) cannot subscribe to cultural relativism, because their ethic requires there to be universal truth behind morality: its central principle (for utilitarianism, the utility principle) is universally true. It is important to note that universal truth is distinct from absolute truth, as it states only that there is a correct moral answer for every action in every situation. Therefore, the arguments for cultural relativism must be refuted in order to maintain utilitarianism.

The cultural relativist movement stems from the study of evolutionary psychology: the idea that our psychology is affected by a form of natural selection and is therefore a product of our environment, in the same way as the traditional, physical evolution attributed to Darwin. According to these ideas, humans developed morality simply because it benefited humanity: if a certain tribe had some form of morality, it would have been more successful than one without, as its members would act in a way that benefited the whole tribe, not just themselves. This made them more likely to survive, and so morality spread in the same way as advantageous physical characteristics do under evolution. Therefore the morality that we have now is entirely a human construct, which is why there is no additional truth behind it, and it is irrelevant how it is decided as long as it achieves its aim of human flourishing.

Given that the traditional sources of universal morality, the most common being religion, are unconvincing and do not tend towards a utilitarian ethic, a cultural relativist approach does seem logical. According to what I have said thus far, though, this is contradictory; I have made my utilitarian position very clear and have shown why it is not compatible with cultural relativism. However, with a slightly nuanced approach to cultural relativism I believe it is possible to marry the two concepts into a thoroughly convincing argument. Previously I assumed that because morality is a human construct it cannot be truthful in any way, but according to utilitarianism and, to a certain extent, empiricism this is not the case. If consequences and empirical observation are what I base my moral philosophy on, then it is logical to infer that morality can be truthful because it has positive effects; and if morality was created simply to determine the best thing for humanity, then this is a universalist central principle for an ethic. This view essentially falls into the meta-ethical category of intuitionism, as morality is self-evidently meaningful and truthful, as shown through the argument from evolutionary psychology, but it ultimately makes utilitarianism a universalist ethic.

The final challenge that is levelled against, not just utilitarianism, but all consequentialist ethical systems is that they are reliant on consequences that have not yet occurred in order to determine morality. This is clearly problematic because we cannot predict consequences with perfect accuracy: regardless of how well we understand the past and how much we consider what will happen, we cannot predict the future. Therefore we cannot judge morality with complete accuracy, which is necessary for an effective ethical system. In addition, if, as I have argued, we are to treat utilitarianism as a universal ethic, then surely it is logical to assume that it should be possible to utilise it in every situation to perfectly assess the morality of every action (disregarding time constraints); it would not be universal truth if it were not possible for it to be fully discovered, and thus in some cases we would always have to assert moral truth based on our perspective rather than the universal truth. Therefore, the critic would say that utilitarianism is flawed because it is logically impossible for it to be used completely accurately as a guide for morality.

Some utilitarians may point to some idea of ignorance to explain this: if there was no reasonable way we could have predicted a consequence, then it should not be considered in the utility of an action. However, this is fundamentally opposed to the universalist nature of the ethic, as well as being open to further criticism around determining whether ignorance is invincible or vincible (reasonable or not). The way that Bentham and Mill deal with this issue is therefore by pointing to the fact that utilitarianism is simply the criterion for morality. In other words, utilitarianism judges the morality of an action retrospectively, based on the consequences that do in fact happen, rather than predicting the consequences beforehand. In this way the issue is avoided, because there is no uncertainty when applying utility to empirical facts. Some may argue that this ignores an ethical system’s role in determining what one ought to do, but this was never what utilitarianism was developed for and, by using the consequentialist logic upon which it was created, we see that it may not be the most helpful system to guide us in day-to-day life, and thus using it in that way would not maximise utility. This is because it takes significant time to employ correctly, as you must consider any number of consequences, and it can be very inaccurate when applied by an agent incapable of doing so correctly. Therefore, whilst maintaining the universalism of the utility principle as the criterion and truth of ethical determinations, we can use other ethics or methods of guiding morality in accordance with the principle, thus resolving the issue of predicting consequences.


The Future of Secularism

In an ever-globalising world, countries are having to adapt to accommodate cultures from across the globe. The question is, does this call for secularism, or should countries stick to their core beliefs?

Is secularism a necessary next step to modernisation or simply an invitation for religious discrimination? Secularism is the principle of separating all religious affairs from the state, with the aim of creating equality for believers and non-believers of the major religious sect in a country. While there are many different principles under secularism, the uniformly accepted ones involve three key ideals: institutional separation, freedom of belief and no discrimination on grounds of religion (Rodell, 2019). It is important to note that secularism does not mean eradicating or veiling the existence of faiths, which would be classed as irreligion.

The increasing popularity of government reformation and westernisation has led to many countries leaving behind their theocratic roots and becoming secular. While the term secularism is fairly new, coined in the 19th century by George Jacob Holyoake, it is definitely not a modern concept. Moreover, the definition and principles of a secular society have also adapted throughout the years. With the recent reformation of many religious institutions globally such as those in Saudi Arabia (Farouk & Brown, 2021), is this model the most appropriate for our future generations?

While the ideology of secularism itself is to discourage all discrimination and provide equal opportunities for everyone, removing the consideration of religion from a country’s government can lead to increased inequality and more religious conflicts (Mahmood, 2015). When considering the extent to which secularism is the future, it is not only necessary to consider the advantages and disadvantages but also the changes in the factors that affect its popularity such as the glamorisation of Western society and the importance of religion in the Middle East.

It is argued by Berlinerblau that there are six types of secularism, which revolve around their approach to the ‘two necessities’: the state’s need to maintain order and the individual’s need for freedom. This essay explores disestablishmentarianism, which is the separation of Church and state and the reduction of the former’s sphere of influence, such as its taxing rights, as in the disestablishment in Wales (Vogeler, 1980). The second type of secularism explored is Laïcité, which is widely associated with the French model and involves religion being left within the ‘private sphere’ and kept outside of the ‘public sphere’ (Peña-Ruiz, 2014). The principles within these two types of secularism show the two most distinct paths a future of secularism can lead to.

After the ratification of the First Amendment to the United States Constitution in 1791, the separation of Church and state was experimented with, providing an example of disestablishmentarianism. This faced many initial trials, such as the introduction of Sunday post and prayers in public schools (D’Antonio & Hoge, 2006). Although it can be said that Americans have now embraced secularisation, with one quarter of the population identifying as secularists, there has been little development in the government over the last 50 years. Berlinerblau argues that the government is in fact becoming less secular on issues such as abortion, the funding of Catholic and other religious schools, and vaccine exemptions (Boorstein, 2022). This demonstrates the difficulty of creating a wholly secular society when it comes to religiously controversial topics. How can a society claim to be non-discriminatory when it clearly leans towards a religious preference and deprives minorities of equal treatment?

Notably, the share of ‘nones’, a term for non-religious Americans, has risen from 16% in 2007 to 29% today, and the USA is one of 43 out of 49 high-income countries to see this decline in the religious population. Almost all major faiths, such as Christianity, Judaism and Islam, promote high fertility because it was necessary in a world of high infant mortality and low life expectancy; however, this is no longer the case. It is argued that the modernisation and secularisation of society have affected general views of women, divorce and abortion, and there is no longer the responsibility of ‘replacing the population’, especially as new generations are born into economic and health security (Inglehart, 2020). With these changing standards, religions that do not share these views are decreasing in popularity, for example, the USA’s Christian population shrinking from 90% to 64% over 50 years. While this decline is not inherently harmful, it could create a homogeneous society in the far future, and where people once relied on their practised religion for their morals, there is now fragility in what is held to be right and wrong.


As previously mentioned, France is perhaps the most renowned example of a Laïcist country, where the government attempts to limit all religious practices to the private sphere. Here it is explored as an example of where secularism can lead to religious discrimination. In 2004, a ban was introduced against religious symbols in public schools, most significantly including the hijab, with the claim of reasserting religious neutrality. To an extent this is understandable, as it prevents the possibility of proselytism; however, it is hard to believe this is the case when the ‘burkini’, a modest, full-coverage swimsuit, was also banned. How can France claim to be a truly secular society if it denies young Muslim women the right to practise key aspects of their faith? There are two possible explanations for these restrictions. The first is that this is simply the state’s policy towards religions in general, based on political survival, minimising ruling costs and economic development. This is difficult to believe, as the hijab ban has not had a significant effect on the French economy, and the same can be said for political influence, where only 2 out of 331 members of the senate had Muslim origins (Kuru, 2008). It is also unlikely that this policy was introduced to control the Muslim population, as the ban applied to fewer than 1,500 female students in public schools.

The second explanation for these restrictions is that they are the impact of extreme, combative secularism in an effort to limit religious activity in the public sphere. It can be said that this type of secularism promotes anti-pluralism and removes an important part of people’s identities. While this is an extreme and rare scenario, and other secular European countries such as Germany and England have been more accommodating to religions, as seen in the funding of churches and the number of religious buildings and schools, it does raise the question of whether secularism is just a slow invitation for bias and discrimination.

On the other hand, secularism is necessary to manage a diversifying world. It is estimated that four in ten American citizens identify with a race or ethnic group other than white. With this, and the shifting balance of religions, there must be neutral ground in order to minimise conflict. Islam is currently the world’s fastest-growing religion, increasing at twice the speed of the global population, and is due to surpass Christianity in numbers by 2075 (Lipka & Hackett, 2017). However, the current foundations of Western law are heavily influenced by biblical rulings, the Ten Commandments being one example (Berman, 1975). Nor is it only the duty of traditionally clerical countries to adjust to these changes, as can be seen in the UAE, where 89% of the population are expatriates: the decriminalisation of cohabitation and alcohol consumption in 2020 was introduced in an effort to modernise (Kerr, 2020). As diversity in our society increases, the contrasting demands and requirements of different religions mean that there must be set laws and ethics that can be applied to people regardless of their religion, which is exactly what secularism provides.

Despite this, how are these laws decided without bias or guidance of religious scripture? How is the rise of religious intolerance managed? Is it necessary to limit practice to the private sphere? Once these issues are eliminated and constitutions embrace equality under secularism, it appears to be the only solution to accommodate shifts in beliefs. Hopefully, a future filled with welcoming laws awaits humanity.

Will we, as humans, ever be truly happy?

What is happiness? How does one get happiness? Will we ever be happy? These large philosophical questions have been tackled by philosophers for millennia, and in this article Isabella attempts to answer them.

‘Qui totum vult totum perdit.’ He who wants everything loses everything. These words outline the ways of modern mankind. ‘To do more; to be more; most of all, to have more.’ We may pretend we are incandescently happy with the simplicities of life as we sit around a table hand in hand, thanking our Gods for all the simple yet fundamental parts of life – expressing gratitude – but it increasingly seems it is simply human nature to always want more. No matter how full one’s stomach for greatness may be, we always leave the table unsatisfied with worthless wealth. It is at this point that we must once again ask the age-old question – Will we, as humans, ever be truly happy? Or is the goal of complete happiness, in reality, always just too far from our reach?

When asked ‘What is the ultimate purpose of human existence?’, Aristotle – one of the most notable philosophers in history – argued it to be happiness, or as he termed it, ‘eudaimonia’. But it is important to know the difference between these two words:

Happiness refers to the simple state of being happy – the rush of pure joy through one’s blood in the seconds leading up to blowing out the birthday candles. Eudaimonia, however, is not so much a state of mind as happiness is, but a combination of happiness, well-being and flourishing. It is a sense of fulfilment and purpose, of exploring one’s potential. It is the very essence of actively living rather than simply existing to reach checkpoints of short-term happiness. However, Aristotle’s approach to living a full and pleasant life is not one that is easy to attain. In modern society, mankind is programmed to take the hedonistic approach: the short and direct lane to small sprouts of joy, before continuing onwards in frantic search of another shot of bliss. We beg and plead for others to give us the happiness we crave, without even realising we’d all be well fed with it by now if only we’d looked in the mirror; if only we’d taken that winding yet scenic road through the woods.

We promote the idea of happiness being a set goal that we must achieve – take the American Dream, for example. People are motivated to work through the promise of achieving a supposedly perfect life – a big house with two cars, a family, a good salary, and trips to Europe in the summer – when perfection is essentially unattainable, because perfection does not exist. People will spend decades working towards their idea of perfection – their idea of a flawless and well-earned life, like earning a gold medal to say, ‘Well done, you have what everyone else wants’ – only to look back and realise how much time they had wasted to attain short-term happiness. This is time they will never get back; precious moments that they could’ve taken piece by piece, a thoughtful bite at a time, but the meal is often devoured without savouring the flavours. It is all too rare that we find people who can enjoy the feast for a few hours rather than refill plates only minutes in.

The battle between an inward and an outward search for happiness is ongoing, but this war is already at risk of tipping too far in favour of one approach over the other. The approach we take increasingly too often is focused externally, and ends with a quick and fleeting hit of joy. It feeds off the nervous adrenaline of shared smiles on a first date, the rush of dopamine kicking in at a midnight party, but again, this is only a quick hit. The date ends, a week goes by, and you hopelessly search for that adrenaline again in the same person; the party stops, you’re thrown back into stretching to-do lists of work not 12 hours later, and your feet yearn to feel that pulsing of music through the floor. This is short-term happiness – static joy in the moment – but the withdrawal effects – that yearning for more – only exacerbate its quick disappearance. Ultimately, happiness is given from an external source – an impermanent supplier – and when the supplier is withdrawn, that feeling of ecstasy fades with it. Meanwhile, the approach we ought to take – the eudaimonic approach – focuses inwards with a hunger for fulfilment and purpose. Happiness – as a state of mind – is maintained, because the source cannot be withdrawn.

As long as one is breathing this planet’s air and walking on its soil, the possibility of continuous happiness remains, because the source of it is you. It begins and ends with you as a being; the fierce rush of parties is a futile catalyst to spur one’s happiness on.

It is at this point that the great minds of Aristotle and Socrates leave the solution to us, posing an even greater question – How can we, as humans, be truly happy?


We already know that we must look internally if we aspire to lead the best lives possible, and this requires a great deal of trust; we must trust in ourselves to guide us in the right direction, rely on the only person who – without a doubt – will not turn their back on us, and train ourselves in patience. We must will ourselves to keep going, no matter the difficulties; walk through lanes of stationary cars rather than waiting for the roads to move. Ultimately, the key motive is to keep moving.

Know your goals in life, and let them be fuelled by purpose rather than materialistic wants. Doctors don’t become doctors solely for the high wages; doctors become doctors because they want to help others. They want to serve. You can apply this to almost any area: all you need is the right mindset. After all, writers don’t spout poems to earn the title of bestseller; writers spout poems to let themselves and others feel – rawly, honestly, fiercely.

Know yourself, and know your purpose. Find what makes you truly appreciate life; what urges you to start a morning well. Never ignore your gut feeling; trust it to guide you to where you’re supposed to be, what you’re supposed to do. Your mark on the world matters, and you have the choice to shape it however you like.

Find your potential and never stop building it up. The best actor is not the actor who’s on the silver screen for the fame; the best actor is the actor who makes an impact, the one who changes the ins and outs of the industry for the better.

And finally – possibly most importantly – cherish time. Tempus fugit, and for all we may be aware, we only have one lifetime to make our time worthwhile. Don’t waste it on unattainable dreams of materialistic happiness, but spend it wisely, frugally. Find your passions, try and err, rather than pretending to love something to earn the pride and approval of others. Make yourself proud, and you will someday find someone who returns that pride, admires you for going against the wishes of others, for cutting the marionette strings and running the show on your own.

How do we know if what we know is correct?

We, as a society, are obsessed with what is right and what is wrong. In this article, Imogen continues this frivolous debate with her own take on society’s understanding of correctness.

As humans, from the moment we are born we acquire knowledge. From learning about colours and numbers to sophisticated scientific processes, we are constantly learning and developing our brains. However, how do we know that what we know is actually correct? In the current world, we can tell if something is correct by finding evidence to back it up - in books, online, or from other people. But how do we know that information is correct? Do we end up in a perpetual loop of not-knowing?

Up until the 1500s, individuals had no reason to believe that the information they were told was not true. Great thinkers and powerful members of society were seen as correct and the wisest of all. However, by the 16th century, people were discovering that not everything they heard was true. It was found that the Sun did not revolve around the Earth, and the human body did not function as previously thought. People were finally beginning to doubt whether their knowledge was correct.

Is it necessary to assume our knowledge is correct until such time as we have counter-knowledge which informs us otherwise? For example, if you believe that the Earth is flat, then you will believe that until such time as you have solid evidence that it is not. This makes sense until you consider the validity of that counter-knowledge. There are plenty of articles on the internet today that are factually incorrect, and when you are beginning to learn a new topic or wish to test your knowledge, this can impede your progress in ascertaining whether what you currently know is actually true.

Another interesting point is that some people think what they see or believe is correct, and anyone else with a differing viewpoint is misguided. Some feel that they always know best, and that all the world’s problems are due to the mistakes of others. This is often caused by an unwillingness to explore counter-knowledge, despite the errors in their thinking being very clear to others. These people can then propagate their false beliefs through social media channels, and so the modern phenomenon of ‘Fake News’ becomes all too real. This in turn limits the opportunities for others to ascertain whether their knowledge is correct, and the circle of not-knowing continues.

If we are living in a world where no one has a way of being sure that anything anyone tells them is true, then we have no real way of knowing that the information we have is correct, until such time as it is proved incorrect. Even then, that counter-knowledge could easily be incorrect too. The perpetual loop of not-knowing never seems to end!

Information taken from our senses seems to be good proof that something is correct. If we experience something firsthand with our own bodies then surely it must be true? We can see the shape of a tree, touch its rough bark, smell its scent or hear the wind blow through its branches. All these senses enable us to have accurate knowledge about the tree and its properties. That said, our senses can only provide us with knowledge of our own experience. Two individuals may look at the same colour chart in a DIY shop and label the colour they see as ‘blue’. But are they really ‘seeing’ the same shade? Perhaps one individual is seeing what the other individual calls ‘red’; however, they have both been conditioned since a young age to name that shade ‘blue’.

This raises the possibility that our senses could also be giving us false information. My eyes could be telling me that a colour is red when in fact it is green, as is common for those with colour blindness, and some people do not see any colour at all. For these people, is it correct that grass is grey, or is grass still green?

The human race has survived for approximately 300,000 years, and since the beginning, we have used our knowledge of our surroundings to thrive. Certain things, such as knowing what to eat and what not to eat, have been passed down through generations to keep us alive. That knowledge certainly can’t be wrong, otherwise, we wouldn’t exist today. Society has survived on the basis that our knowledge is mostly correct for many centuries. The world is the way it is for a reason and we can’t be so misguided as to see everything as not what it really is.

Overall, I believe that in order to determine whether what we know is correct, we need time: time to test our theories in every possible way and so solidify our knowledge and understanding.


Is an eye for an eye an ethical concept in justice?

In this article, Petra tackles an ethical concept that has been ever-present in today’s media, with subjects such as cancel culture at the forefront of our society.

“An eye for an eye, and a tooth for a tooth.” This idiom, codified into a set of laws written over 3,900 years ago, has become somewhat infamous in pop culture. Songs, movies and video games all utilize this as a vaguely intimidating precursor to some manner of satisfying punishment.


But this line is the first known reference to retributive justice, the earliest form of law there was. And it was meant quite literally - the original form was ‘He who has taken out a man’s eye shall have his own eye taken out.’ There is debate around whether this ancient decree has any place in the modern world of jurisprudence, and I’d like to continue that debate in this article.

The concept of retributive justice originated in the ‘Code of Hammurabi’. It is one of the earliest known legal texts, and it introduced many well-known concepts, such as legal prerogative, presumption of innocence and, of course, retributive justice. There were many laws in this code besides the famous ‘eye for an eye’ decree that were also based on this concept. These included a law decreeing that someone who falsely accuses another of a crime must serve the sentence they sought for the accused, and a law declaring that if a house a builder had constructed collapsed and someone died, the builder must be put to death too.

Within modern jurisprudence, an eye for an eye does not mean that the same kind of punishment must be inflicted upon a criminal, but that the severity of the punishment must equal that of the crime. It is difficult to inflict the same kind of punishment upon someone who has committed, for example, drug dealing, or kidnapping. However, is this form of justice ethical?

To answer this, we must define the terms in our question. Firstly, what is meant by ‘ethical’? One may state that it is ‘a morally correct action’. But what is morally correct in one society would not be viewed as morally correct in another. For example, some societies view cannibalism as a sacred ritual that helps the soul of the departed. Meanwhile, others believe it is a severe crime and a desecration of the human body. Therefore, we can only define ‘ethical’ within the context of a community. We can say that ethical means ‘an action that the society considers morally correct’.

Next, we must define ‘retributive justice’. There are two parts to that phrase: ‘retributive’ and ‘justice’. Retributive tells us that it is a system based on punishment of the criminal. Justice tells us that the punishment must follow procedural restrictions.

The procedural restrictions part is incredibly important.

Certain people who have objections to retributive justice believe it to be the same as revenge. Whilst the two share the same root, retributive justice and revenge are very different within jurisprudence. Revenge does not follow the law and can be enacted for any slight, whether legal or criminal, real or perceived. The punishment meted out upon the offender has not been exactly measured so as to be equal to the original offence. Revenge often only provides a benefit to the aggrieved party. Meanwhile, retributive justice provides a benefit to society as a whole, through society’s denunciation of the crime and the deterrence of further crime. It is also important that the law does not take pleasure in the punishment or suffering of the criminal. Finally, retributive justice must be enacted by a third party, not the aggrieved; otherwise, personal emotions and anger may distort the punishment and make it disproportionately severe. These factors combined distinguish retributive justice from revenge.


Most people are instinctively in favour of retributive justice. There is a common example given in jurisprudence to demonstrate this mentality. Say you are the sentencing judge of a rapist who has just been convicted in your courtroom. He has suffered an illness, leaving him physically incapacitated so that he is unable to reoffend. He additionally has sufficient funds to live out the rest of his life without committing any crimes. These circumstances are enough to deter him from ever committing any more crimes - you, as the judge, are certain of this. Moreover, you are able to sentence him to a private island where he can live his life in peace, and the public will believe he has served a life sentence in jail, thereby still deterring society from committing crimes. Assuming this ruse is certain to work, most people would still prefer the criminal to serve time in the penal system, for no other reason than the belief that he deserves punishment.

Despite this instinctual desire, some people are against retributive justice, mainly for the reason that ‘two wrongs do not make a right’. They believe that inflicting further suffering upon someone who has caused suffering is immoral, and that we should move on from retributive justice as a primal and vindictive form of punishment.

Now, we will take a look at the other main form of justice - restorative. Logically, if we were to decide that retributive justice was not an ethical concept in law, we would have to instead follow restorative justice.

Restorative justice is a form of justice intended to return community relations to rights. It operates under the assumption that the criminal, in committing a crime, has broken not only the legal code but also a social code, and that the latter is the more important breach. It tries to reintegrate the criminal back into the community and stop them from reoffending through victim-criminal meetings and victim reparations. Restorative justice is more about rehabilitation after the initial crime has occurred, and about giving the victim a more satisfying and suitable conclusion to the crime. It is a much less instinctive and ancient form of justice: it arose through legal study, not through the natural human desire for revenge. Therefore, the arguments for and against it relate far more to jurisprudence and legal matters than the arguments for and against retributive justice.

The main argument against restorative justice involves classical deterrence theory - that the effectiveness of punishments in reducing crime is determined by certainty, celerity and severity. People who disagree with restorative justice argue that the ‘severity’ function is not fulfilled, and therefore restorative justice is not effective in deterring crime.

Meanwhile, supporters of restorative justice argue that, when it is utilized, there is higher victim satisfaction and a decreased recidivism rate compared to cases that utilized retributive justice. However, it is worth noting that the lower recidivism rate was only observed when restorative justice was combined with incapacitation, since an initially unrehabilitated criminal cannot simply be allowed free roam of society. Also, criminals who commit violent crimes are only permitted to undergo restorative justice if they are simultaneously serving out their punishment in the penal system, making any findings about their rate of reoffending statistically flawed.

We have gone over the beliefs and views of both sides – now to answer the initial question of whether it is ethical or not. Personally, I believe that retributive justice can be an ethical form of justice, depending on the circumstances surrounding the crime and what type of crime has been committed. Non-violent offenders who suffered from extenuating circumstances such as poverty, abuse or drug addiction will almost always benefit more from restorative justice. This is because most people do not need to be deterred from these crimes - they will either commit them or they will not, depending on their circumstances, and there is very little the legal system can do to stop the initial offence besides changing those circumstances. Instead, we can try to stop reoffending through restorative justice.

However, violent criminals who do not suffer from extenuating circumstances can be deterred from crime, so, even disregarding our base instincts, retributive justice could be applied here. As these criminals commit crimes simply for the pleasure it brings them, logically the consequences must cause ‘suffering’ to deter them from further crimes.

Overall, retributive justice, or ‘an eye for an eye’, can be an ethical concept depending on who it is applied to.


What is Power?

Power is ‘the ability to control people or things’ (as defined by the Oxford English Dictionary). What is power in the 21st century? Who are the people with power, and which organisations have power? Can those with power do as they please? Or are they too under the control of something else, perhaps another influence?

Power is a force exercised over someone or something considered inferior. It is the capacity to direct a group’s behaviour to achieve objectives, whether agreed collectively or dictated by the power source. Our sense of power is subjective; it depends on the angle we look at it from. If you are a person in possession of power, you have the ability to do something in a particular way - your way. But someone underneath power sees it as the possession of control over others by a superior leader. Power has many forms and is seen differently depending on who is exercising it and who is subject to it. Government power, for example, is the most substantial, long-standing power in our society today.

How does a government obtain and use this power? The government has the ability to change our lives, and it uses a variety of tactics to push for action. In the UK we have the right (if we’re over 18) to vote for who we think should form the government, so this form of power held by our government is legitimate authority. We vote for the party we think shares the same beliefs and aspirations for our country as we do. Our government has legitimate power under the UK’s constitution - which we need in our society in order to function with an organisational structure. Without our government’s power there is likely to be chaos, as individuals act only in their own interests, so the Government has power in the UK to provide security and stability within our communities.

On the other hand, Russia is an example of a coercive and authoritarian leader. President Putin has power based on carefully controlled information and the threat of punishment of those who don’t agree with him. He is able to filter the information the Russian people receive - they therefore behave in a way that supports his objectives. So this form of power, present in an authoritarian state, is obtained by feeding followers filtered information; feeding them an opinion that matches his own. Putin has the power in Russia to provide what he believes is stability in his country.

Another example of leadership power is reward versus punishment. In school we receive credits for doing something the right way; if you achieve 5 credits then you are rewarded with a token for the tuck shop. This is an incentive to keep doing the right thing. If people give rewards then this gives others a reason to do things for them; the rewarder holds power over the rewarded. Just like giving treats to a puppy when it sits for you, you have control over their actions through the reward you give them. Reward from a person in power should be distinguished from the use of blackmail, where an extreme incentive is manufactured to get someone to do something for you. Blackmail is power over people through the threat of exposing damaging information about them; it is an extreme form of coercive power. The difference between reward and blackmail is that reward is given by someone already in power and control, whereas blackmail is a form of power in which threat is needed to achieve ultimate power. Legitimate, elected governments therefore use reward to exercise power, whereas authoritarian regimes are more likely to resort to harsher, coercive forms of power, such as blackmail, to maintain power.

Social media is a platform of power that exerts influence through the ability to reach millions of people quickly and with very few checks and balances on content. Social media’s power contrasts with the government’s power in many ways, but the biggest difference is that there is no legitimate authority on social media. This power can be used positively and negatively. Social media has the power to assist you in forming ‘your own’ opinion from a list of other opinions, for example. But even with access to opinions to help form our own, as Jia Tolentino, an American writer and editor, said, ‘The internet increases our ability to know things without increasing our ability to change things.’ She explains that we are constantly reading and speaking but are not then following it up with ‘doing’ in the real world – we might just forward the information to someone else. Social media is a place of freedom of speech, to an extent; but there is very little change in our reaction to what is said and how we can act on it. So we are not completely in control of our opinion and the actions that follow from it. Although social media erodes ‘quick-witted skill’ and the ability to have ‘real-life’ conversations, and has the power to influence believers of fake news, it also assists individuals and communities that are desperately in need of help.


Social media’s wide-reaching news has the power to have a positive impact on the world; it can change lives. This digital global village can reach 3.96 billion people, enabling democracy and mobilising activism. For example, a fundraising video to save children in Syria was viewed by 30 million people; it meant that the Ushahidi platform was able to reach out to many people to help change children’s lives for the better. This is an example of social media having the power to enable us to help those people. Although many opinions are expressed on social media, there are limits to freedom of digital speech. For example, some Twitter users have been banned for breaking the Twitter rules and posting extreme views; perhaps more seriously, in Vietnam there is a $5,000 fine for posting anything criticising the government. This is an example of the controlling power of government seeping into the power of influence on social media. Social media power is an influence and forces upon us a way of thinking and an opinion. We must continue to question this source of power and assess the legitimacy of the information.

Multinational corporations have a share in power too. They have the control element of power. For example, we choose specific brands of clothing because of the influence of what famous, inspiring celebrities wear and how products are marketed. But influencers get paid by the corporations to advertise these items; in this case, where there is money there is power. The first Coca-Cola was sold in 1886 in the US; it was designed in a small town, but now branded Coca-Cola is sold in more than 200 countries. The company now markets itself as a brand for younger generations, focusing on promoting the ideas of current affairs, pop culture and the use of all things digital. Obviously their marketing strategy works; they understand the interests of the people they want to sell to and use that power of knowledge to influence their customers to choose their product. The money markets also hold a lot of economic power. Coca-Cola is a large corporation, and the stocks and shares within the company are what make it so desirable to investors. The more people on the stock market invest in the company, the bigger a success the drink becomes, influencing many more people to buy it. The stock market holds the power to determine how successful the product the company is selling will be, and therefore has power over what we choose to buy. Multinational corporations have power over what we buy and when we buy it, and therefore have power over our lives, often without us realising it.

A synonym for power is dominion. In Christianity, for example, dominion expresses the idea that God made us in his image and that we are in charge of the world ‘on behalf of’ God. Stewardship, also a belief within modern Christianity, holds that with this power we also have a duty to care for our world and everything in it. Power comes with great responsibility: it does not only present opportunities to ‘make things happen’; it also brings a responsibility to ‘take care of things’. A leader in a position of power has to ‘take care of things’ and has responsibility either to complete the tasks themselves or to decide who to delegate them to and how. So with power, ‘the ability to control people or things’, comes responsibility for the actions of those people. In this case the power still lies at the top with the leader, but the responsibility that comes with it influences how they exert that power.

To conclude, power at the surface is held by the leaders and the people with ideas and money, but after taking a deeper look we can see that power stems from the many elements that make it up. Government power can be exercised with consent - such as in a democracy - or through fear, as in an authoritarian state. We are all subject to other forms of power through social media and large corporations. This is not necessarily bad, but we must be aware of the influence on our lives. We must also ensure that the legitimate power exercised through government ensures that other forms of power are managed and exerted fairly. As Lord Acton said, ‘Absolute power corrupts absolutely’; power should not, therefore, be retained by one singular person, but should pass through cycles of events and be legitimised through the consent of all people.

Do we have free will?

Whilst we may like to think that the actions we take on a daily basis are our own, this may not be the case, and it is a topic that Agata explores in this philosophical article.

You probably shouldn’t steal. Common sense tells us that stealing is wrong. But sometimes stealing seems less wrong, or not wrong at all, after we discover the cause of the stealing. For instance, many would argue that you are not as guilty as someone who steals out of greed or spite if you steal a loaf of bread because your family is starving. And imagine a kleptomaniac who cannot control her stealing behavior. We probably shouldn’t blame her for those actions. But why shouldn’t we blame the kleptomaniac? That is to say, how are we justified in holding the kleptomaniac morally responsible? One good reason not to blame the kleptomaniac is that she cannot help her behavior. She possesses a psychological problem that is out of her control. That’s why some defendants are acquitted on grounds of insanity. If you are not in control of your actions, you are not responsible for those actions.

But what if every one of our actions is actually out of our control? That is, what if it only seems as if we have the freedom to choose between actions?

Free will: the ability to ‘make your own decisions’. Free will is a very controversial topic, having been one of the main philosophical debates and causes of argument since the start of humanity. To start off, we first need to figure out what free will is. However, this may be hard, as free will is a very polysemous term, especially when it comes to science, law or philosophy. The textbook definition of free will is “the supposed power or capacity of humans to make decisions or perform actions independently of any prior event or state of the universe.” Using this definition, we can try to piece together whether the average human has the same amount of free will, the same ability to make decisions, or whether we even have free will at all.

To find out whether someone thinks people have free will, we need to look at some of the different definitions of free will that people hold; there are four main types of belief.

The first philosophical view that someone can have on free will is Determinism. Determinism is the belief that “all human behaviors flow from genetic or environmental factors that, once they have occurred, are very difficult or impossible to change.” Or, in short, it means that all events are completely predetermined by previous causes, and, no matter how hard someone might try to change it, certain events will always happen. If determinism is true, and everything someone does is caused by events and facts outside their control, then they cannot be the ultimate cause of their actions. Therefore, they cannot have free will.

Another belief is Compatibilism (also called Soft Determinism), which holds that free will and determinism can be mutually compatible, and that someone can believe in both without it being logically impossible. Compatibilists believe that humans can have free will in certain situations and lack it in others, and that this has nothing to do with metaphysics. Holding this view also means that, even when accepting determinism, humans still act as if ‘free’, and people’s actions are usually caused by their desires. However, people who follow a compatibilist belief still hold that humans are not ‘free’ to do whatever they want, and they believe that science, and specifically neuroscience, fully explains why a person makes a decision or action and the process behind it.

The third view is Incompatibilism, which is the belief that free will is incompatible with the truth of determinism. Incompatibilists divide into ‘libertarians’, who deny that determinism is true, and ‘hard determinists’, who deny that we have free will. This division arises because they disagree amongst themselves about what is needed, besides indeterminism, for free will. They believe that there are ‘free will worlds’, but that there are no ‘deterministic free worlds’. Incompatibilists think that free will and determinism are mutually exclusive, and that people can only have free will and act freely if determinism is false.

However, some people fall into a fourth, separate category: those who believe in Illusionism. Their view, which was also the view of many philosophers (the best known being B.F. Skinner), is that free will is a figment of our imagination, that no one has it or ever will, and that it simply doesn’t exist. In other words, free will is an illusion.


One of the best-known proponents of this point of view was B.F. Skinner, who argued it in order to demonstrate the vast number of influences and pressures that act on a person’s ‘decision-making process’. Since there are so many factors that can influence a person’s choices, Skinner said that ‘we can never say it is totally free’.

Each theory above attempts to answer the question of whether or not humans have “free will”, and which position someone takes depends on what they truly believe free will to be. No one can say for certain whether people have free will, or whether it is just a concept standing in for something more; hence, free will is something that each person decides for themselves, and there is no right or wrong answer.

Personally, I believe that humans don’t fully have free will, but that many things can still be controlled by our own actions; I think that ‘Soft Determinism’ / ‘Compatibilism’ is, overall, correct. I think this because in many situations – especially for those with mental disabilities – people’s actions cannot be controlled by them, nor should they be at fault. I still believe that humans can act out of their own desires, but there is more to free will than just one side, and it isn’t fully black and white. However, everyone has the right to their own definition of free will, and there is no way to determine whether they are correct or not.

Can stealing ever be acceptable?

In this article Samantha takes a look at the moral debate surrounding stealing, something we all naturally think of as wrong, but maybe wrongly so…

If asked whether stealing is morally acceptable, most of us would answer uncontroversially with an easy ‘no’. Dimmock explains the Kantian Categorical Imperative by addressing two formulations. Of interest, the first formulation evaluates the “motivation behind an action, then considers if it could be an accepted universal law”.

Robin Hood, a favourite childhood story, was about a man revered as a hero… for taking what wasn’t his?

We know crime is wrong: an action considered to be unlawful according to our morals. Crime is determined by the law, a set of rules meant to be followed by everyone regardless of their place in the social hierarchy; and yet we see almost every country influenced by corruption, which has left people starving and dying on the streets…

Moral values are personal values: what we believe is ‘right’ or ‘wrong’. Ethics, however, are how we conduct ourselves based on our morals and principles, reflected in how we live our lives.

Abortion is a controversial topic of taking a life, similar to someone being shot dead. People’s choice of abortion does not erase the fact that they took a life from the world. Deciding to take a life through abortion is ethically wrong but legal in some countries; although it is not our right, we feel entitled to make a decision for someone else just because we think we know what is better (for us), when in actual fact we don’t… We are deceived into believing that we have the ‘liberty’ to live based on our morals (which are subjective to each individual), resulting in varied opinions of right and wrong; in other words, controversy. The problem we face today in building a just system is the very thing we have fought for our whole lives: FREEDOM.

Robin Hood, by law, is a thief, but a hero to the poor. Robin Hood stole from the rich, which is a crime despite the circumstances. If someone robbed a shop, we would push for them to be punished; however, would we really punish everyone? Would it go against our morals to follow ethical rules? Let’s compare two instances of offences charged. In 2011, a well-known actress by the name of Lindsay Lohan walked out of a “Venice California boutique wearing a $2,500 necklace she hadn’t paid for”; you would probably have agreed to have her sentenced to prison for stealing. Now compare her with a young mother of two poverty-stricken children, ages 3 and 6, neglected by society due to her circumstances, who had ‘stolen’ milk from a supermarket to feed her starving family, with no one helping her as she begged those around her for help… Would you sentence her to a lifetime in prison? What would happen to her kids? Is this it for her?

An answer that is not so straightforward anymore. However, the same rule still applies.

The problem lies in the equality of the punishment given to the poor and the rich. Lindsay Lohan had only been sentenced to 4 months of prison (and a few weeks of house arrest), however, the mother of the two young children was sentenced to 14 months of jail for stealing less than $16 worth of basic food for her malnourished children sleeping on the cold streets of Malaysia with no shelter.

We don’t treat everyone with humanity. Let me repeat: we treat others with disrespect when they are not “equal to” or “like” us. Immanuel Kant was one of the most influential figures in modern Western philosophy and has had a large influence on the beliefs and visions the United Nations holds in this day and age! He set out universal moral principles that we still follow today and that apply to every one of us regardless of our place in the hierarchy, our gender or our age. His “Formula of the End in Itself” states that we should always respect the humanity in others, not treat them any differently due to their background or culture, and recognise the value we all hold as unique individuals, no one being of any different or lesser value than another. To put it simply, he strongly believed that WE have an inherent value and should never be treated as anything less.

Robin Hood was acknowledged as a hero because of his ‘morally right’ reasons. “It may not be the issue of consent but of property,” Dimmock notes, relating it to daily events we overlook: “If I make a joke and someone tells it again, will it be ‘stealing my property without my consent’?”

Stealing, even to feed a starving family, is a crime under the law. It could be morally justified, but it departs from the Kantian Categorical Imperative (universally accepted moral values), making it ethically wrong. Ethics are based on OUR standard of what is right or wrong. Stealing is ‘universally accepted’ as ethically wrong because of the wrongs the action produces, such as dishonesty, deception and betrayal.

“Stealing could be justified, but still remains a crime.”

Morally wrong behaviours would develop in those who steal habitually... We could accept it if someone impoverished stole food, but the same grace would not be extended to criminals who steal for their ‘wants’ rather than their ‘needs’. From the criminal’s viewpoint, everything has to be “fair and equal”: equality rather than equity, but that is the view of a democrat! What is the point of working hard if everyone gains the same reward? Without equity the world would never have sent the first man into space or discovered the technology which would change our lives forever...

The crime of stealing could arguably be justified, but that cannot change the fact that stealing is a crime. Crime is determined by law: right and wrong. We have ‘stolen’ trillions of molecules from the air with every breath we take. Technically, we are taking what is not ours, the very thing we say is morally wrong and a crime. Does the outcome of the crime justify the stealing? It may, but this still can’t change the fact that stealing is a crime.

Ethically, it is right to give to the poor; however, it is ethically wrong to steal. Robin Hood was against criminals, yet he stole: ethically a thief, but morally a hero. His actions were morally justified, but ethically wrong...

To summarise, stealing is a crime. It is ethically wrong to steal, but stealing to feed ‘a starving family’, or ‘your starving family’, could introduce a new viewpoint into society.

Are people born evil or do they end up doing evil things?

It is common knowledge that some things can be considered as evil and some can be considered as good. But, are the people who commit these evil acts condemned to do so from birth or are they products of their environments?

As a society we have tried to understand why “evil” prevails in our world. From terrorist attacks and war crimes to murders, the ultimate question boils down to this: what motive drives these people to commit these atrocities? Over the course of this article, I will be trying to discover the true reason for evil acts, and attempt to answer how people become evil – is it born or bred?

There are two concepts of evil: a broad concept and a narrow concept. The Merriam-Webster dictionary states that evil is a “morally reprehensible act” (“Definition of Evil”), a wider definition. This suggests that committing any wrong act indicates a person is evil, even something as small as a white lie. For example, “the evil that men do lives after them” (“Garrard and De Wijze”) suggests that evils can be great and small. However, there are evil acts that can be forgiven; compare, for instance, the act of lying with the act of murder. Furthermore, evil can be divided into moral evil and natural evil. Natural evil is when heinous affairs do not “result from the intentions or negligence of moral agents” (“Calder”) – for instance, tsunamis and famines. In contrast, moral evils derive from “the intentions or negligence of moral agents” (“Calder”), e.g. premeditated murder.

“Are humans born good or evil? What makes a person good? What makes a person evil?”

In contrast, a narrow concept of evil consists of morally despicable actions, characters, and events of human wrongdoing. The narrow concept of evil involves moral condemnation, which is related to moral agents (humans) and their actions. This sense of “evil” is used in moral, political, and legal contexts; yet, in recent years, philosophers have discussed a secular account of this concept and questioned how important the term “evil” is in our moral vocabulary. A secular account of evil would not conflict with any religious symbols or supernatural entities, but rather provide a clearer definition.

Are humans born evil or good? What makes a person good? What makes a person evil? The Chinese philosopher Xunzi stated that “The nature of man is evil; his goodness is only acquired training” (The Editors of Encyclopaedia Britannica). Why did he believe this, and is there any evidence to prove his theory? A study at Yale University in 2011 found that 70% of babies’ first instinct is to be good (BBC), but what about the remaining 30% of babies who chose “evil”? What makes them choose the immoral option? One very notable evil act was the murder of James Bulger in 1993, when two young boys (aged 10) abducted, tortured, and then murdered a toddler (Butterworth). Why did these boys even consider committing such a horrific crime at such a young age? Experts gave evidence at the trial to confirm that both boys knew what they had done was wrong. They knew the difference between good and evil, yet they chose evil. Why? What transpired was that both boys came from troubled backgrounds, showing how nurture can affect how people behave. One study reported in a 2000 issue of the Journal of Personality and Social Psychology found that children who play violent video games are more likely to behave violently (Anderson and Dill). This further supports the point that evil occurs due to the environment that one is in.

The Chinese philosopher Mengzi expressed that “Humans are born with benevolent urges that they can develop into systematic thoughtful benevolence” (Life Noggin). Mengzi believed that humans are born good and that, by being in a good environment and being shown examples of how to behave, society can shape a morally responsible human being. Aristotle also believed that our surroundings shape the type of person we become: human beings gain everything they know, whether that is the difference between right and wrong or good and bad, through personal experiences (Lewkowicz). Likewise, John Locke thought that we have no innate ideas and that our minds are blank slates at birth which, through knowledge and maturity, develop our moral compass (Nature vs Nurture). These theories and studies suggest that Xunzi’s idea of man being born evil is incorrect.


However, a 2002 study found that a particular variation of a gene predicted antisocial behaviour in men who were mistreated as adolescents. The gene controls whether we produce an enzyme called monoamine oxidase A (MAOA) which, at low levels, has been linked to aggression in animals. The researchers discovered that boys who were neglected and who also possessed low levels of MAOA were more likely to develop antisocial personality disorder, commit crimes, and exhibit violent behaviour, in comparison to those living in a similar environment who did not carry the variation of the gene (Lametti). Additionally, a study published in 2010 found that teenagers with low socioeconomic resources, and with a variation of a specific gene which contributes to how quickly serotonin is recycled in the brain, are more likely to demonstrate psychotic behaviour (Lametti). In 2012 two scientists each presented their own research on how humans become evil. Shoemaker’s study involved surveying people who were considered evil compared to the general public. He discovered, after comparing brain scans, that those who were thought of as “evil” had a deformed fusiform face area in the temporal lobe of the brain and were lacking mirror neurons compared to the general public. The evidence presented suggests that people are born evil; it is part of their nature (Amberg). Moreover, conduct disorder is a complex mix of behavioural and emotional problems in adolescents which is later, once they turn 18, diagnosed as antisocial personality disorder (APD), a diagnosis common among those who have committed crimes (Cosgrove). Recent studies have shown that men who have been diagnosed with APD have 11% less grey matter in the prefrontal cortex of their brains compared to men without the disorder (Cosgrove). A study at the University of Chicago found that boys who had been sent to a psychiatrist because of bad behaviour had lower levels of the stress hormone cortisol than boys who did not suffer behavioural issues. The researchers speculate that these boys are less sensitive to stress and consequently less bothered by consequences (Cosgrove). Indeed, even as far back as Plato it was argued that our sense data does not provide sufficient information to specify the abstract ideas of knowledge, and that humans possess and are endowed with such ideas at birth (Lewkowicz).

To conclude, I believe that evil stems from the wrong set of genes interacting with the wrong environment, and that nature and nurture always work together to influence our personality. If an individual carries the low-activity MAOA variant, it does not automatically mean that they will exhibit violent or antisocial behaviours, and having good role models can aid the development of a moral human being. In the words of the great Roman philosopher and dramatist Seneca, “Nature does not give a man virtue; the process of becoming a good man is an art” (“Kiante”).


CAN HUMANS BE CAPABLE OF FEELING HAPPINESS WITHOUT FEELING SADNESS?

Happiness is good and sadness is bad. This is an idea I’m sure many people are accustomed to, but in this article Abena questions whether it is even possible to be happy without being sad, putting a twist on our current conception of happiness.

The majority of people would conclude that, yes, humans could do without sadness and feel only happiness. However, in order to feel happiness one must at some point have felt unhappy. Sadness is always deemed a negative emotion that no one should feel, but is it really possible to feel zero sadness?

Sadness is associated with feeling terrible, sometimes with tears, and can eventually lead to anger. What if I told you that sadness plays a huge role in what makes a human a human? The dictionary defines happiness as “the state of being happy” and happy as “feeling or showing pleasure or contentment”, while sadness is defined as “the condition or quality of being sad” and sad as “feeling or showing sorrow; unhappy”. This gives a very narrow picture of what happiness really is.

Such a definition suggests that in order to get rid of sadness, one must simply replace unhappy thoughts with happy ones. This couldn’t be further from the truth, because suppressing feelings doesn’t make them go away; believe it or not, the sadness doesn’t go anywhere. You’re only choosing to ignore it.

Imagine you see a python coming straight for you. What do you do? Do you run? The python will only chase you until you get tired of running. Do you close your eyes and pretend it’s not there anymore? That won’t solve any of your problems; it’ll probably create more, especially in the long run. The most logical solution would be to face the python and beat it, right? Well, there may be one thing standing between you and achieving that: courage, and the ability to be brave. The python represents the unhappy feeling you are experiencing, and destroying the python symbolises the act of facing the sadness head on. However, in order to do so, you have to have the strength and courage to realise what the root of the problem is and to accept the fact that you aren’t content with the outcome. Sadness is a core emotion every person has felt over the course of their life.

The author Helen Russell said that “If we suppress our negative thoughts, we end up feeling worse” ... “And, actually, studies show that experiencing temporary sadness, and allowing ourselves to sit with those feelings when they come, can counterintuitively make us happier”. What this essentially means is that sadness has the ability to make humans feel better once they come to terms with the sadness itself, and that bottling up the negativity only makes you feel more unhappy.

If life were lived without sadness, one might think that humans would simply be forced to express themselves through other types of emotions: anger, joy, fear and so on. However, if sadness didn’t exist, then one might argue that happiness wouldn’t exist either. How could someone feel happy if there was no sadness to compare it to? Sadness and happiness coexist, so if one goes then so does the other. This can be imagined as a coin. A coin consists of two sides; if one side doesn’t exist then the whole coin doesn’t exist. Having only one side of a physical object isn’t possible: in order to have a side, there must be an opposing side.

Life without sadness would only fill the world with despair and life would end up meaningless because without sadness, there would be nothing for people to learn from. As there would be no happiness, no one would look forward to anything.


A few quotes from Bob Ross also highlight the necessity of having opposites. “Absolutely have to have dark to have light… gotta have opposites… if you have light on light you have nothing, if you have dark on dark you basically have nothing…” “You’ve gotta have a little sadness once in a while so you know when the good times comin’” When Ross said these quotes he was in a massive amount of pain and sorrow because his wife had passed away from cancer. If Bob Ross didn’t feel any sadness towards the passing of his wife, then him marrying in the first place would become redundant. This is because happiness would no longer be an emotion and based on a study, people don’t get married to feel happy, happy people are the ones most likely to get married.

If one could eliminate everything that caused them to feel unhappy, 100% happiness is still not guaranteed. The term “hedonic treadmill” has been used to describe how, even if you are unaffected by the factors that make you unhappy, you would end up limiting the number of factors that do make you happy. This limit would eventually produce a negative emotion: sadness.

As there would be a limit in the happiness factors, those factors would repeat over and over again, causing dissatisfaction in life. Because these factors would be repeating, the list would shrink due to the lack of interest in that factor, which would also make the other factors repeat, only faster.

Another example of the importance of the balance between sadness and happiness comes from the movie ‘Inside Out’. The main character, Riley, goes through her day-to-day life while most of the movie focuses on the physical “forms” of her emotions. In one scene the emotion Joy attempts to keep the emotion Sadness inside a circle Joy has drawn, to stop Sadness from roaming free inside the brain and bringing everybody else’s mood down. Later in the film Sadness accidentally touches one of the memories and makes it a sad memory, which causes Joy to tell Sadness off for messing with it. Joy and Sadness travel with Bing Bong, a childhood toy of Riley’s, to try and escape as the ‘islands’ crumble around them. Joy tries to take a tube to get back to headquarters and Sadness tries to join her, but the memories Joy is carrying start to turn sad. Joy pushes Sadness away and shuts the tube to stop Sadness from changing the memories; as she does, she says that Riley needs to be happy, and shuts out Sadness. When Joy looks at the memories carefully she realises that Riley’s happiest memory was actually caused by a sad memory. Joy finally opens her eyes to the thought of Sadness being important. Joy and Sadness work together to get back to HQ to try to save Riley and stop her from running away from home. Joy hands the happy memories to Sadness, who turns them sad and unplugs Joy’s idea of Riley running away. Riley returns home to her worried parents; Sadness has saved the day. Both Joy and Sadness press a button on the control panel and make a happy/sad memory.

The reason this plot has been explained is that it illustrates the importance of sadness and happiness coexisting. No matter what, sadness will be a feeling felt by human beings. It is inevitable. One can try to push any feelings of sadness away, but they will only come back sooner or later.

Happiness and sadness can also be thought of like a seesaw, when one goes up, the other goes down and vice versa.

The main reason feeling sadness is vital for growth as an individual is that it shapes us and makes appreciating the times of happiness and gratitude far more meaningful.

Too much happiness and a lack of sadness can create “toxic positivity”. This is when there is an exaggerated amount of positivity which can later make one feel worse when that happiness is lost. The term ‘Fake it till you make it’ can’t apply to this situation because lying to yourself about being happy doesn’t actually make you truly happy.

One must realise that life has ups and downs, and you can’t feel only one favoured emotion. Feelings, for the most part, are uncontrollable, and you must accept the fact that life won’t always go according to plan. It’s okay to feel upset about it, because that is the normal response to disappointment.

Sadness also helps with reflecting on life, because it can slow you down and let you look over the situation. If you were sad about a loved one passing away, sadness can help you flip through the memories you had with that person. Sadness lets you release your feelings, which is why crying is actually good for your brain: it makes you face grief and pain. After you cry, you may notice that “empty” feeling; that is partly due to the hormone oxytocin, which makes you feel calmer.

Humans weren’t made to be only happy, so trying to feel 100% happiness is a waste of time; instead, we should focus on all the emotions that play a part in what makes a human being a human being. Humans are irrational and make mistakes all the time; it’s normal.

In conclusion, to answer my question, “Can human beings be capable of feeling happiness without feeling sadness?”, the answer is no, they can’t.


Are our moral values simply a manifestation of the human instinct to survive?

Morals have been at the forefront of many a Purvis meeting. In this article, Naael looks back to the origins of morality to try and discern what truly underlies our fundamental perception of right and wrong.

Morality is the principle that concerns the distinction between right and wrong behaviour, what is seen as good or bad. Morality plays a critical role in ethics, societal hierarchy and the judicial system. Rules and laws are built on the back of humanity’s moral compass, and every part of our daily life is governed by our sense of morality. The way we speak, the way we act, the way we treat others: everything comes down to moral code. Of course, moral values differ from person to person, but many values are universally accepted as the norm. The ideas that we should not kill, not steal and not lie are the foundations of what we define as just and fair.

Even after technological advancement and societal progress, we still don’t know much about where this sense of morality comes from. Recently, scientists and scholarly circles have broached the question of where morality originates, and rigorous research in both neuroscience and psychology has produced several different theories. One rising theory is the idea of morality as a survival instinct, yet there are numerous aspects that count against this concept. Yes, research has shown evolution to be key in understanding morality, but that does not define morality as an instinct in itself; rather, it could be seen as a whole new dimension in the biological make-up of humans. These conflicting ideas have caused heated debate in the scientific community, yet the answer lies deep within the human genome, and we are still years away from truly understanding the details of human molecular make-up. However, with current research and data, the idea of morality and its origins raises one question:

Is morality really a product of evolution, ingrained into our biological makeup or is it the human manifestation of the instinct to survive?

Evolutionary studies have shown that morality is deeply rooted in human evolution. Evidence of this is seen in babies and infants, who already show reactions to social stimuli. Scientists have observed that babies begin to respond to faces and voices while also developing social relationships in their early years of life. Babies and infants also develop an understanding of fairness and equality, and showcase this through actions such as sharing, hugging and empathising with one another. At the age of 2, babies already understand a very basic concept of good and bad, and this supports the link between evolution and morality, as newborns and infants would have had limited time to be influenced by their environment.

Even though morality is a concept exclusive to humans, the “traits” of morality are seen throughout the animal kingdom, especially in mammals and primates. Primates are our closest counterparts in the animal kingdom with 98.8% of human DNA being similar to that of a chimpanzee’s. Both humans and primates live in large social groups with hierarchies, designated leadership and communal gatherings. Even though our social organisation is similar to that of a primate’s, human society is a much more complex form of societal organisation, distinguished by what we call “culture”. Even though our societies may be similar, studies have shown that animals still do not have the higher-level thinking that humans have and are currently incapable of morality.

Even though animals may not have morality itself, the traits of morality are visible to many scientists. A recent documentary project fronted by David Attenborough has shed some light on the workings of primate society. The research was displayed to the public in the documentary series “Dynasties” and showcased the societal hierarchy of the chimpanzee. Attenborough describes chimpanzee society as a “constant state of flux, with the shifts in the relationships between the individual members key to what will happen next.” The series showcased aspects of moral values in the chimpanzees, mainly shown through their sense of empathy. The chimpanzees hugged, shared, kissed, formed alliances, and empathised with and supported one another when distressed. This behaviour is strikingly similar to human behaviour and suggests a link between morality and evolution. Evolutionary studies, beginning with influential scientists like Charles Darwin, have shown that humans and other primates share common ancestors, with humans being a far more developed species. As humans have developed much further than chimpanzees, scientists could suggest that these acts of prosocial behaviour and mannerisms in primates are a sign of early morality development. If one linked the scientific studies of babies and primates, one would see a similarity in their behaviour: primates and human infants both show empathy and prosocial behaviour. However, as human infants grow older, their understanding of morality grows and changes, yet primates seem to keep the same traits throughout their lifespan. This evidence suggests that morality is a product of evolution and that, as humans are the most developed species, we happen to have developed the highest sense of morality.


Neuroscience is a field that has been essential in understanding morality. Several studies of the brain have worked to discover what could possibly be the driving force behind morality in humans. Chemicals produced in the brain, known as neuromodulators, have been found to influence morality, specifically the chemicals serotonin and oxytocin. These neuromodulators have been found to influence choices in social situations, adjusting the feelings of happiness, jealousy and generosity. However, these chemicals aren’t the only factors behind morality. A recent experiment on the brain and its structure was used to discover which parts of the brain controlled morality. The results showed that morality was not designated to one area of the brain and rather, was devoted to several systems within the human mind. For example, the amygdala was discovered to play a role in understanding positive or negative reactions while the ventromedial prefrontal cortex was the part of the brain that understood other social behaviours. It interprets decision-making and generosity by utilising other parts of the brain and analysing cognitive and emotional processes to instigate the correct social response. The research regarding the ventromedial prefrontal cortex also found out how damage to the region could affect a human’s morals. Studies found that if the ventromedial prefrontal cortex was damaged before 5 years of age, the person was more likely to commit crimes or inflict harm. This was accompanied by the fact that people who suffered brain damage felt less empathy, guilt and embarrassment if the region of the brain was damaged. This evidence suggests that the ventromedial prefrontal cortex could be controlling what we see as right or wrong and also further proves the point that morality is a byproduct of evolution.

Since evolutionary studies have found morality to be a byproduct of evolution, some scientists have broached the idea of morality being a survival instinct, yet a lot of scientific analysis says otherwise. An instinct is defined as “a way of behaving, thinking or feeling that is not learned.” Survival instincts are instant, and they can generally be summarised into the categories of fighting, fleeing, feeding and fornication. However, morality cannot be an instinct, as it involves a choice. Instincts are driven by fear and mainly occur as an act of self-defence. This cannot define morality as an instinct, because morality is the set of conscious choices that a person makes, knowing the consequences of their actions. We know not to kill someone as a moral code, and that is because we understand that the act of murder is harmful and has multiple consequences. Those consequences could be jail, being cast out from society, a fine, or feelings of guilt and shame. The act of killing someone is not instinctive; you are making the choice to take someone’s life. This undermines the case for naming morality a survival instinct, as we already know that morality is a higher form of thinking that has evolved over time. Morality, thus, consists of the urge or predisposition to judge human actions as either right or wrong in terms of their consequences for other human beings. This contradicts the instinct view, as instinct is self-serving whereas morality regards others as well.

No matter how deeply one broaches the topic of morality, the answer will remain unresolved; however, from the research at hand, one can come to a somewhat settled conclusion. Studies in evolutionary psychology, biology, neuroscience and ethics have concluded that morality is certainly a product of evolution, yet it is not a survival instinct. For now, studies suggest morality to be a form of higher thinking controlled by multiple structures and chemicals within the body. This conclusion shows that morality is a function of human development, not a mechanism needed for survival. The Greek philosopher Aristotle stated that “All men by nature desire knowledge,” and as humans we are known to be both selfish and curious creatures. Our thirst for knowledge has led to further exploration of the human genome, and as more studies are put in place, the understanding of morality will soon have a more definitive answer.

The Threat of Postmodernism

In a world where cancel culture and identity politics are loose terms thrown around like frisbees, Lauren tackles the question of how much of a threat postmodernism really poses to our society.

It is really easy to feel discomfort with the world; to feel as though there are pathological, corrupt systems and methods of control out there, in place with the sole purpose of negatively impacting your life. This fear is not irrational by any stretch of the imagination, and it’s a sentiment that most young people garner when they begin to conceptualise the greatness of the world and the different methods of power play simultaneously occurring at all times, be they political or social. The response to this, particularly from young, typically left-leaning people (whilst that is a stereotype, it is one based on data), is to go out into the world and try to control it. Because then, it can’t control you. This is a biologically motivated response to fear that plays on the predator/prey dynamic. If you become bigger than your predator, you take its power away. If you control it, it can’t control you. This notion is greatly justifiable, plausible and entirely moral in its origins, and yet what we are seeing is that, as this notion becomes more mainstream as a means of dealing with the inequality and unjustifiably corrupt world we live in, the result is a mob of tyrants. A mob of compassionate narcissists that weaponise virtue to shut down any form of counter-argument to their own. You see, the motivation behind trying to control the world switches remarkably quickly from a means of survival to something far more dangerous. It becomes the desire to be accredited with moral virtue in the absence of the work necessary to actually attain it. Regardless of the original intention behind their actions, it very quickly escalates into exactly that. The result is a group of people that think exactly the same, to the extent that the act of thinking in and of itself is slowly deteriorating. The question we need to be asking ourselves is, therefore: why is a generation that is acting upon biologically positive motivations morphing into, arguably, the most destructive generation yet?

The answer, from my perspective, is the widespread indoctrination of ‘political correctness’ and the implications that come with it. By implications, I’m referring to the paradoxical amalgamation of neo-Marxism and the ideas of postmodernism, specifically the ‘rejection of epistemic certainty’, that had previously characterised the political sphere as well as the sphere of discourse. Firstly, the reason the alliance of postmodern and neo-Marxist thinking is intrinsically paradoxical is that the very foundation of postmodernist thinking is derived from the belief that there exist no canonical interpretations of the world, which practically manifests itself in the rejection of any binary categorisation, i.e. the rejection of gender as a concept. On the other hand, the neo-Marxist ethos explicitly views the world in an ‘oppressor vs oppressed’ manner; therefore, by viewing the world in a canonical manner, it contradicts the ethos of the postmodernists. There is therefore no logical way to explain the union of these two schools of political and philosophical thought. And under absolutely no circumstances should compassion be presented as an argument for the union of these doctrines, as history has shown us that the practical implications of these doctrines have been murderous, tyrannical and by no means compassionate, or even remotely morally acceptable by any standards.

The reason, therefore, why this union exists, albeit paradoxically, is that both the postmodernists and the neo-Marxists have something in common. Both doctrines are driven, ultimately, by resentment. You see, both doctrines distil the nature of the world down to power. They both view humanity as infinite numbers of unjust hierarchies that are irrevocably bound together by power, and power alone. Not only does this method of thinking attempt, remarkably cynically, to reduce the nature of humanity, which is incomparably complex, down to one idea alone, but it also simultaneously perpetuates the ‘oppressor vs oppressed’ narrative that the Marxists view the world through whilst producing an overwhelming sense of resentment. Logically, you can then determine that, given both doctrines credit the composition of humanity to power alone, the practitioners of said doctrines practise in the pursuit of power, driven by resentment towards the natural order, and disguising themselves, most reprehensibly, as compassionate. What activates these doctrines is therefore the blunt will to power, ironically and remarkably hypocritically participating in exactly what they attempt to condemn in their foundational arguments.

Now that we’ve outlined the motivational composition behind ‘political correctness’, it’s very easy to understand its parasitic effect on society today, specifically on the younger generations. These doctrines are invading our society. They have infected almost the entirety of the educational system, which acts as the root of all else, by dominating the humanities and slowly moving their way into the social sciences. The majority are being sold the idea that these doctrines are positively motivated by compassionate intentions, and are blind to the moral corruption that actually activates these schools of thought. The younger generations are also obviously more susceptible to accepting these notions as truth, given that they have not been presented with anything else during the entirety of their lives. Before we continue down the catastrophic road we are currently on, the deception that covers this political agenda must be dismantled. Resentment and the demand for power… that is what is motivating the forced indoctrination of these doctrines, not compassion.

Nietzsche actually predicted this overtaking by the radical left about 150 years ago, which is quite remarkable considering that was largely before any of these doctrines were properly formulated. The outcome he predicted was one where scientific truth holds no value and individual liberty and speech are stripped from previously sovereign citizens, with the act of thinking ceasing to exist in its natural form. That prediction is manifesting into reality at this very moment.

Welcome to identity politics and political correctness… human beings? No. You’re nothing but your race, religion and gender.

What is Knowledge?

Epistemology - the theory of knowledge. In this article, Joel takes a highly analytical approach on the journey to answering the deeply philosophical question, what is knowledge?

Knowledge can essentially be gained through one of two methods: observation of empirical evidence (a posteriori) or logical reasoning (a priori). A posteriori reasoning describes scientific testing in order to find probable truths, while a priori reasoning describes using reason in order to reach logically necessary conclusions.

A.J. Ayer’s ‘Language, Truth, and Logic’ contains many arguments. His attempts to eliminate metaphysics and a priori arguments are very convincing: for example, any a priori argument is essentially a tautology (a statement that is true by virtue of its logical form), because the conclusion is logically necessary given the premises; thus all a priori knowledge exists only via a sequence of tautologies, which are therefore incapable of proving anything without the input of empirical knowledge. This leads to the conclusion that we can only ever gain new knowledge from empirical evidence.

However, if we can only gain knowledge from empirical evidence, then that begs the question of how we gained the knowledge that we can gain knowledge using a posteriori reasoning; for, if we gained that knowledge from empirical evidence, then that would mean that a posteriori reasoning is reliant on circular reasoning and thus does not prove anything. Therefore, we must have gained the knowledge that we can gain knowledge from empirical evidence through the use of a priori reasoning meaning we can in fact gain some knowledge from logical reasoning.

In addition to this, if we can only gain knowledge from empirical evidence and observations, it would be impossible for one to imagine objects that do not exist physically, such as a perfect circle, which we could never observe. This example demonstrates how, if we can only know anything through a posteriori reasoning, it is impossible to imagine abstract concepts. This leads to the question of whether we can therefore imagine numbers without attributing them to physical objects. Wittgenstein explains how numbers cannot be defined individually; he uses the example of ‘five’, which we can only truly define and understand within context. Therefore, if numbers cannot be defined without context, then pure mathematics is similarly undefined and so cannot carry meaning. In particular, this throws into question the validity of algebra, which is inherently abstract and has no context.

This conclusion, however, seems illogical, as algebra has been shown to provide empirically verifiable results. This suggests that, whilst Wittgenstein was correct in his interpretation of the nature of language and pure mathematics, the same does not apply to all knowledge.

This similarly applies to knowledge in terms of universals and particulars. For instance, the statement ‘circles are perfectly round’ must apply exclusively to universals, as there is no such thing as a perfectly round circle in the physical universe, and thus it would not make sense for particulars. In addition, it is illogical to suggest that we gained the knowledge that we can gain knowledge from empirical evidence via empirical evidence, as that would merely be circular reasoning, which fundamentally can never prove anything. This is the same issue that rationalism possessed, so neither rationalism nor empiricism can exclusively provide all our knowledge. However, this is directly contradictory to Ayer’s arguments, which seem to completely discredit a priori reasoning.

Universals and particulars are used in a priori and a posteriori reasoning respectively in order to represent knowledge. For example, the phrase ‘Edinburgh is north of London’ must refer to particulars as it refers to concepts that are grounded within physical reality such as ‘north of’ which must refer to real physical objects rather than abstract metaphysical concepts. On the other hand, the phrase ‘two plus two is four’ must refer to universals because it holds true regardless of the specific situation and does not refer to physical objects.

Therefore, whilst we do gain the majority of our knowledge via a posteriori reasoning and empirical evidence, ultimately all knowledge relies on our ability to use a priori reasoning to deduce the truths that are necessary to ascertain any knowledge at all. Thus a priori reasoning is more important than a posteriori reasoning.


Science & Technology

“The test of all knowledge is experiment. Experiment is the sole judge of scientific truth. But what is the source of all knowledge? Imagination”. These wise words from Richard Feynman, the acclaimed physicist, reflect the balance of creativity and fact throughout this section.

The End of the Universe

Whilst many physicists’ focus is pointed towards the origins of our universe and what happened before the Big Bang, Adam has shifted his attention to the death of the universe and writes about the four possible ways our universe could come to its demise.

It seems strange to think about the end of the universe. No matter how the universe ends, it has little practical relevance to us: we will be long dead by the time any of these hypothetical fates come to pass. And yet humans have wondered, and will continue to wonder for a long time to come, about how this great machine that we all live in will, if ever, grind to a halt. Whether it is truly eternal, whether it will end in fire or ice, what comes after, what came before. All these questions have answers. Whether we will ever know them is another thing entirely.

When it comes to the end, the expansion of the universe is key to understanding its fate. How it is expanding, at what rate, and whether it will ever stop all have direct implications for how, if ever, the universe will end. Currently, we know (as shown by the redshift of galaxies far away from us, due to the Doppler effect) that space-time, the fabric of the universe, is expanding due to dark energy at an increasing rate. This rate has been measured to be about 70 km/s/Mpc, or 70 kilometres per second per megaparsec. This is the rate at which things move away from us (our local group) based on how far away from us they are. A megaparsec is a unit of distance equivalent to 3.26 million light years. So the further away from our local cluster something is, the faster it is moving away from us. It should be mentioned that within our local cluster, that is, the group of our local galaxies, space is not expanding, as the expansion is weak enough to be counteracted by gravity, so objects close to us are gravitationally bound together.

This expansion has created something called the cosmic event horizon, or the edge of the observable universe. At a far enough distance, the distance between an object and us grows faster than the speed of light, making that object causally disconnected: no light from us can reach it, so we cannot communicate with or observe it. As time passes, more and more galaxies cross this cosmic event horizon due to the expansion of the universe, and so eventually we’ll be unable to observe anything outside of the things we are gravitationally bound to. Practically, this means that eventually galaxies and clusters will be so far away from each other that they can no longer interact. No new galaxies will form, no new stars will form, and eventually the current galaxies and stars will die out. This leads on to the first of 4 ways our universe could end.
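Before turning to those four scenarios, the expansion figures above can be made concrete with a small, purely illustrative Python sketch of Hubble’s law, v = H0 × d. It assumes the quoted value of 70 km/s/Mpc, and treats the distance at which the recession speed formally reaches the speed of light as a crude stand-in for the cosmic event horizon (the true horizon also depends on how dark energy behaves over time).

```python
# Rough illustration of the expansion rate quoted above (assumed H0 = 70 km/s/Mpc).
# Hubble's law: recession velocity v = H0 * d, where d is the distance in megaparsecs.

H0 = 70.0            # km/s per megaparsec (value quoted above)
C = 299_792.458      # speed of light in km/s
MPC_IN_LY = 3.26e6   # one megaparsec is about 3.26 million light years

def recession_velocity(distance_mpc: float) -> float:
    """Recession velocity (km/s) of an object at the given distance (Mpc)."""
    return H0 * distance_mpc

for d in (1, 100, 1000):
    print(f"{d:>5} Mpc -> {recession_velocity(d):>9.0f} km/s")

# Distance at which recession formally reaches the speed of light: a rough proxy
# for the cosmic event horizon described above.
d_horizon_mpc = C / H0
print(f"v = c at about {d_horizon_mpc:.0f} Mpc "
      f"(~{d_horizon_mpc * MPC_IN_LY / 1e9:.1f} billion light years)")
```

On these illustrative numbers, a galaxy 1,000 Mpc away recedes at roughly 70,000 km/s, and the recession speed reaches the speed of light at a little over 4,000 Mpc, or around 14 billion light years.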

The first of the 4 ways the universe could end will happen if the amount of dark energy is less than our current predictions, or (in other words) if the amount of mass in the universe is more than our current predictions. This will cause the expansion of the universe to slow and, eventually, gravity will begin to take over and pull space-time back in on itself. As the universe contracts, temperature and pressure will build until everything in the universe is fried. Even atoms and molecules will break down into elementary particles such as quarks, electrons and neutrinos. All that will be left are the black holes of the universe, which will eventually unite as they are brought closer to each other into one supermassive black hole that holds all the matter of the universe. Even space-time itself will be devoured, and the universe will end just as it started: as a singularity. Some speculate that this contraction into a singularity will then trigger another Big Bang, forming a new universe that follows the same laws as the one we currently live in, and that this is a cycle that has been occurring, and will continue to occur, forever. Possible evidence for this can be seen in the cosmic microwave background radiation: radio telescopes have picked up anomalies, areas of increased radiation that may represent Hawking points, or places where supermassive black holes evaporated in a past universe and left imprints on our current one.


The second possibility for the fate of the universe comes when gravity is unable to counteract the expansion caused by dark energy. If the universe does not stop expanding, the temperature of the universe will continue to decrease. Slowly the universe’s energy will diffuse throughout the universe, becoming so spread out that it is unusable and the universe’s entropy (randomness of a system) will increase. Essentially, the more random particles are, the less useful they are, just like how an organised library is more useful than a random pile of books. It seems counterintuitive that the particles in the universe become more disordered as they become more evenly spread out, however it can be thought of like a glass of water vs a glass of ice chips. The glass of ice chips seems less ordered, however speaking from a molecular perspective, the glass of ice chips is more ordered as the particles are in a more uniform, gridlike pattern, whereas the glass of water is a messy soup of particles. In the same way, as the universe cools its contents will break down into a messy useless soup. Energy becomes more spread out, and eventually, there will be no energy gradient for anything to happen. With enough time, matter itself will break down into fundamental particles, and the universe will cool to a fraction of a degree above absolute zero. Everything in the universe will be dispersed and useless, its entropy at a maximum. All that is left will be an even dead plain of boringness in which nothing will ever happen.
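The ‘ordered library versus random pile’ analogy can be made slightly more quantitative. The toy Python sketch below is purely illustrative: it counts the arrangements available to an ordered bookshelf versus a jumbled pile and converts them to an entropy with Boltzmann’s formula S = k ln W. It is a cartoon of the idea, not a real thermodynamic calculation.

```python
import math

# Toy illustration of entropy as "number of possible arrangements".
# An ordered shelf corresponds to exactly one arrangement; a random pile of the
# same books can be in any of N! arrangements. Boltzmann's formula: S = k_B * ln(W).

K_B = 1.380649e-23  # Boltzmann constant, J/K

def entropy(num_microstates: int) -> float:
    """Boltzmann entropy (J/K) for a system with the given number of microstates."""
    return K_B * math.log(num_microstates)

n_books = 50
ordered_states = 1                           # one specific ordering
disordered_states = math.factorial(n_books)  # any ordering at all

print("ordered shelf:  S =", entropy(ordered_states), "J/K")
print("random pile:    S =", entropy(disordered_states), "J/K")
```

The ordered shelf has zero entropy in this toy picture, while the pile has a small but non-zero entropy; the point is simply that more possible arrangements means higher entropy and less usable structure.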

The third way the universe could end, will come to being if the density of dark energy increases. We currently believe that the density of dark energy is constant (and very low), but it’s a possibility that this could not be the case, and the density of dark energy could in fact increase with the expansion of the universe. If this is the case, there will come a point at which the density of dark energy is greater than the density of the stars, planets and eventually everything in the universe. Once the density of dark energy surpasses that of an object, gravity can no longer hold that object together and so it will be torn apart down to its fundamental particles. Eventually, everything in the universe will be ripped apart. The expanding universe will become so big that no elementary particle will ever encounter another and so nothing will ever happen ever again. Some even speculate that dark energy could rip apart space time itself, and that the universe would cease to exist all together.

The fourth and final way the universe could end is a slightly different one that doesn’t involve dark energy, but instead quantum fields and vacuum states. Quantum fields are what give everything in the universe its properties. There is an electromagnetic field that controls electromagnetic forces, and therefore governs the bonds between particles and allows fundamental particles to form atoms. There are two nuclear fields (strong and weak) that control the nuclear forces holding protons and neutrons together inside atomic nuclei. There is a gravitational field that controls gravity, which binds matter together to form larger objects. The list goes on.

A vacuum state is a state of lowest possible potential energy. Just like how a boulder on top of a mountain wants to roll down the hill until it reaches the bottom, a stable state with the lowest possible energy, every single thing in the universe wants to reach this vacuum state. Every single thing including quantum fields. Luckily for us, all of the quantum fields we know about have already reached their vacuum state so they are stable. All but one. The Higgs Field is the quantum field that gives things mass, and we don’t think it has reached a stable vacuum state, rather it is metastable. It is in a false vacuum. If the Higgs Field is a boulder, it has rolled from the top of the mountain all the way down to a valley. Seemingly it is at the bottom, it is at its vacuum state, however this valley is in fact just a small flat area on the mountain. The boulder still has a way to go to the true bottom of this mountain, and with some external quantum disturbance, it will finish its journey to the bottom. If the Higgs Field was to fall to its true vacuum state, it would release a huge amount of potential energy destroying everything around it. If this were to happen at a point in space, the release of energy would start a chain reaction, causing a release of energy to spread throughout the universe at the speed of light destroying everything it touches. Not only would all matter be destroyed from the release of energy, once the Higgs Field has reached its true vacuum state, all the laws of physics would change. It would essentially reset the universe. Everything that is currently in existence would cease to exist, and the universe that is left would have different laws to ours, essentially making it a completely different universe. Interestingly, this could be happening right now, or have already happened, and we would never know about it, because if it had started far enough away, it would never reach us because the expansion of the universe relative to us would be faster than the speed of light at which the vacuum decay is spreading. It would be past our cosmic event horizon and would never reach us.
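The boulder-and-valley picture can be sketched numerically with a toy potential. The short Python example below is illustrative only: the function is an invented double well, not the real Higgs potential, but it shows what a metastable ‘false vacuum’ looks like, a shallow local minimum sitting above a deeper true minimum.

```python
# Toy model of a metastable ("false vacuum") state: a potential with a shallow
# local minimum (where the boulder currently sits) and a deeper global minimum
# (the true vacuum). The shape and numbers are purely illustrative.

def potential(x: float) -> float:
    return x**4 - 2.0 * x**2 + 0.4 * x   # double well with unequal depths

# Scan a grid and pick out the local minima (points lower than both neighbours).
xs = [i / 1000.0 for i in range(-2000, 2001)]
vs = [potential(x) for x in xs]
minima = [(xs[i], vs[i]) for i in range(1, len(xs) - 1)
          if vs[i] < vs[i - 1] and vs[i] < vs[i + 1]]

for x, v in minima:
    print(f"minimum at x = {x:+.3f}, V = {v:.3f}")
# The higher of the two minima plays the role of the false vacuum: stable against
# small nudges, but not the lowest-energy state available.
```

Running this finds two valleys of different depths; a field resting in the shallower one is stable for now, exactly like the boulder on its ledge, until something disturbs it enough to send it towards the deeper valley.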

At the moment, the most likely outcome based on our current observations is the Big Freeze, as, due to dark energy, the expansion doesn’t look like it will slow down. The density of that dark energy currently seems to be remaining constant, so the Big Rip seems unlikely; and as for vacuum decay, although theoretically possible, the chances seem very low, since the universe has already made it 13.8 billion years without a vacuum decay ending it all.


Can Humans ever become Invisible?

Invisibility, a superpower championed by the likes of many a superhero and something we think of as predominantly fiction. But for how much longer? Read on to see whether Phoebe believes our ever advancing world might just crack the secret to invisibility.

Invisibility is a superpower that many dream of and has been a central theme in fantasy and dystopian fiction for centuries, but why is it not achievable in real life? The development of new ‘metamaterials’ has already overturned much of our previous understanding of optics, and what we are now seeing are working prototypes of these materials being built in laboratories. However, this technology has yet to be deployed in daily life. Why is this?

To explain the phenomenon known as invisibility, I am going to refer back to some basic physics. In a solid, atoms are tightly packed, whilst in liquids and gases the particles are far more dispersed. Most solids are opaque because light rays cannot pass through the dense arrangement of atoms. In liquids and gases, by contrast, light can pass through the spaces between atoms, as these spaces are larger than the wavelength of visible light, so gases and liquids are often transparent. There are notable exceptions, but this is the general rule. Under certain conditions, however, a solid could become transparent or invisible if its atoms were arranged randomly. This can be done by applying extreme heat followed by rapid cooling, as in the process for making glass. Invisibility is a property that is determined at the atomic level and would be incredibly challenging to duplicate. To make a human invisible using these methods you would have to liquify them, boil them to create steam, crystallise them, heat them again and then finally cool them.

Whilst this may technically make a human invisible, it is somewhat incompatible with human life, and not quite what most people envision when they say they wish they could become invisible.

New metamaterials could lead to the possibility of invisible technology in our society

However, a new development has challenged the conventional model of invisibility. New “metamaterials” are deemed to have the potential to make objects truly invisible. This was once thought impossible, and yet a collaboration between Duke University and Imperial College London has already created an object invisible to microwave radiation. These metamaterials have optical properties not found in nature and are created by putting tiny implants into a substance that force electromagnetic waves to bend in unorthodox ways. The materials can bend microwaves so that they flow around them rather than striking them, making the object invisible to microwaves. If the metamaterial can eliminate all shadow and reflection, it becomes completely invisible to that type of radiation. But to work with visible light instead of microwaves, the implants in the metamaterial would have to be smaller than the wavelength of the radiation, and whilst microwaves can have a wavelength of around 3 cm, visible light’s wavelength is just a tiny fraction of this. This is impossibly small with the technology we have now.
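To put that difference in scale into numbers, here is a minimal Python sketch (not from the article): the 3 cm microwave figure is taken from the paragraph above, while the ~500 nm visible wavelength is a standard textbook value rather than something stated here.

# Rough sense of scale for the metamaterial-implant problem described above.
microwave_wavelength_m = 0.03      # ~3 cm, as quoted in the article
visible_wavelength_m = 500e-9      # ~500 nm (green light), standard figure

ratio = microwave_wavelength_m / visible_wavelength_m
print(f"visible light's wavelength is ~{ratio:,.0f} times shorter than a 3 cm microwave")
# So implants that steer visible light would need to shrink from the
# millimetre scale down to tens of nanometres - far beyond routine fabrication.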

To overcome this issue, several different methods are in the works. One is a new plasmonic technology which, most simply put, guides or ‘squeezes’ light in order to manipulate objects at the nanoscale. There are also scientists attempting to place the implants into metamaterials in sophisticated patterns so that light bends around the object, or to manipulate metamaterials at the atomic level. All of these methods are deemed to hold genuine promise of delivering invisibility in the not-so-distant future.

So now the key question: if invisibility is in fact possible, how soon could it be developed? Michio Kaku classifies invisibility as a Class I impossibility in his book ‘Physics of the Impossible’, meaning that within the next few decades, or at least within this century, a form of invisibility may become commonplace. This means that humans could very well become invisible within our lifetime.


Artificial Neural Networks and Their Applications

With the rise in popularity of AI programs such as ChatGPT, having a good understanding of how artificial intelligence works is becoming more and more critical. Luckily, Ali has put together an article that helps us with this very question.

Have you ever wondered how you are able to read this sentence? How is your brain able to interpret a stream of scattered light signals, put them together to form a word, and put the words together to form meaning, all in a fraction of a second? What electrical, chemical or biological processes enable this? And - most importantly - is it possible to teach a computer to do the same?

In 1960, psychologist Frank Rosenblatt, known as the father of deep learning, created the Mark I Perceptron, a computer that attempted to learn by simulating human thought processes. This computer is thought of as the first artificial neural network. Today, neural networks use the same basic architecture, but - coupled with vastly superior computing power - are able to learn and perform far more advanced functions.

So how exactly does a typical neural network work? In a conventional computer program, an input is provided, upon which a function is carried out, and the output is calculated. By contrast, in deep learning, the program must be ‘trained’ to perform the function first. This is achieved by providing both inputs and outputs during the training phase. The program then learns to match the inputs to the outputs, using an algorithm known as ‘backpropagation.’ The advantage of this approach is that it enables the program to learn functions that are too complex for a human programmer to lay out.

There are two main approaches to learning employed by neural networks. The more traditional method is ‘supervised learning,’ where a dataset with discrete input-output pairs is provided, and an algorithm known as ‘stochastic gradient descent’ is used to tune the network’s parameters to match inputs to outputs.

The second approach - which is more complicated but more closely resembles human learning - is called ‘reinforcement learning,’ and is used when inputs do not lead to clear outputs, but rather to an unpredictable response. For example, when training an agent to play a video game, any action taken by the agent will not necessarily result in immediate positive or negative feedback. Instead, the agent must adopt a policy and consistently perform a string of actions, weighing its overall performance, or ‘cost.’

The structure of a typical neural network - comprising the input, output, and the ‘hidden layers’.

In practice, the way a neural network achieves this is by feeding the inputs through multiple ‘layers’, which might range from as few as two to as many as tens in the largest networks. Each layer consists of ‘neurons,’ which are simply stored numbers. Neurons in one layer activate neurons in the next layer, and this process continues until the output layer is reached. The calculated outputs are then compared to the expected outputs in the training dataset, and the network is adjusted accordingly. Using this architecture, a very complex algorithm can be formed with potentially billions of parameters, meaning that a great deal of depth is achieved in the computation, and any slight variation in the input can lead to vastly different outputs.
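To make the layers, backpropagation and gradient descent above concrete, here is a minimal Python sketch (written for this journal, not taken from any library or from the author): a tiny network with one hidden layer learning the XOR function. The layer sizes, learning rate and dataset are illustrative choices only.

import numpy as np

rng = np.random.default_rng(0)

# Toy training dataset: inputs paired with the expected outputs (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters ("neurons" are the stored numbers these produce): 2 -> 4 -> 1.
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0                                  # learning rate
for step in range(5000):
    # Forward pass: each layer activates the next until the output layer.
    h = sigmoid(X @ W1 + b1)              # hidden layer activations
    out = sigmoid(h @ W2 + b2)            # predicted outputs

    # Compare predictions with expected outputs (mean squared error).
    loss = np.mean((out - y) ** 2)

    # Backpropagation: push the error backwards to get parameter gradients.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    dW2 = h.T @ d_out;  db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * h * (1 - h)
    dW1 = X.T @ d_h;    db1 = d_h.sum(axis=0)

    # Gradient descent step (full-batch here for simplicity; "stochastic"
    # gradient descent would use random subsets of the data instead).
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", round(float(loss), 4))
print("predictions:", out.round(2).ravel())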

A neural network can be trained in any field for which training data is available. This can be anything from recognizing handwriting (as in Optical Character Recognition) to identifying cancer cells (as in single-cell sequencing). They are also used in image recognition and image creation, such as in deepfakes. The largest neural network built to date is GPT-3, a language model with 175 billion parameters that is capable of producing text indistinguishable from human writing.

The Mark I Perceptron - the first artificial neural network

Preventative Antibiotics in Animal Husbandry

There is no doubt that Brexit has caused a paradigm shift in many aspects of British life; it is safe to say that our exit from the EU affected all areas of society - one being the use of preventative antibiotics in animal husbandry. This is something that Lucy believes we should all be paying very close attention to.

At a time when the UK Government is reviewing supply-side policies and is no longer bound by EU regulations, the preventative use of antibiotics in animal husbandry is once again a hot topic. Indeed, as the UK Government promises “Growth, Growth, Growth”, we have to ask whether this economic objective can ever justify prophylactic antibiotic usage by the farming industry.

The growing global population has many social and economic side effects and wide-ranging ramifications for science and medicine. One of the most pressing issues is the rise of antibiotic resistance, as bacteria mutate into so-called “superbugs” that resist, and are untreatable with, the current suite of antibiotics. The magnitude of this concern was reflected at the 71st United Nations General Assembly, when 193 countries endorsed a declaration to combat Antimicrobial Resistance (AMR), and when the UK Department for Environment, Food & Rural Affairs (DEFRA) stated that “antibiotic resistance is the biggest threat to modern medicine”.

However, this article focuses on the use of antibiotics in animal husbandry rather than in human medicine, as the rise in AMR has been linked to the ever-increasing use of non-specific and preventative (known as “prophylactic”) antibiotics in farming. Indeed, in 2020 it was estimated that 66% of global antibiotic use was in farm animals as opposed to humans. Furthermore, the UK’s Veterinary Medicines Directorate acknowledges that “because there are a number of different ways that resistant bacteria can be transferred directly and indirectly between animals, people and the environment, it is clear everyone has a part to play.”

So, we must therefore ask the question: what are antibiotics? Antibiotics are medications that destroy or slow down the growth of bacteria. Today there are a number of different classes of antibiotics, but they all stem from Dr Alexander Fleming’s famous discovery of penicillin on an unwashed petri dish in 1928. In the UK, both human and animal antibiotic use requires a medical or veterinary prescription. Whereas for humans prescriptions are mostly metaphylaxis (treatment after diagnosis), with prophylactic prescriptions only being used in limited circumstances before certain high-risk surgeries, it is a very different picture in animal husbandry.

In the ten years that followed Fleming’s discovery there was a rapid explosion in development and production of antibiotics. Indeed by the 1940s antibiotics were being used regularly not only to treat bacterial infections in humans but also to treat clinical infections such as mastitis in cattle. In the 1940s, however, farmers in the USA also started noticing that treated animals were often larger and more productive. This hunch was confirmed in 1950 when a New York laboratory published findings that adding antibiotics to animal feed had “accelerated animal’s growth and cost less than conventional feed supplements”. Unsurprisingly this resulted in a global explosion of routine prophylactic use in animal husbandry by way of adding antibiotics to animal feed and also by specific pre-emptive treatments designed to limit the likelihood of bacterial outbreaks.

Due to the increase in population over the last century, there is an ever greater demand for food. As a result, farmers, and indeed the global population, cannot afford to have large herds of animals succumbing to bacterial illness, for the simple reason that this would decrease the clean, safe food available and could severely disrupt the food supply chain and, in extreme cases, even trigger famine. On this basis it would seem that prophylactic antibiotic use in animal husbandry is a sensible course of action.

However, as far back as 1945 people warned of AMR. Indeed, in his 1945 Nobel lecture even Fleming warned that if penicillin was misused or overused, bacteria could become resistant to it. It is argued that mass prophylactic use in farming creates exactly the resistant bacteria that Fleming warned about, which can then spread to people and may cause outbreaks of unstoppable diseases. Furthermore, it is contended that antibiotics can be an aid to hide the substandard living conditions in which animals are raised to be slaughtered. The use of prophylactic antibiotics in these environments helps to stop outbreaks of disease and thus prevents infection in these unhygienic environments, masking the real problem and exacerbating the malpractice.

As a result of the conflicting needs above, and despite AMR warnings being almost contemporaneous with the discovery of antibiotics, law and rule makers have been slow to respond, and even today there is a mixed global approach. Arguably the EU currently has the most prohibitive laws. In 2006 the EU imposed a law banning the use of antibiotics for the promotion of growth. This had a huge impact and is believed to have reduced antibiotic-resistant bacteria significantly. Then, in January 2022, a new EU law proposed in 2018 came into force, meaning that the European Parliament has ‘banned the prophylactic use of antibiotics in farming’. This legislation means the use of antimicrobials is only allowed when animals are in pain or if there are zoonotic (transmittable to humans from animals) diseases - in these situations the prescription is metaphylaxis. Since Brexit the UK is no longer bound by EU regulations, and whilst DEFRA and the UK Government use rhetoric to support the reduction in antibiotic use in animal husbandry, they have stopped short of updating the UK’s own Veterinary Medicines Regulations, preferring instead to rely on “consultations” and “voluntary reduction” of use within the farming sector. As such, prophylactic use is currently permitted. In countries such as the USA the movement away from antibiotic use in animal husbandry is even slower, and commentators claim that farming antibiotic sales figures remain high.

Overall, the competing imperatives of a safe and abundant food supply and prudent veterinary practices designed to minimise AMR have to be weighed against each other. However, as Dr Tedros Adhanom Ghebreyesus, the Director-General of the WHO (World Health Organisation), states, “a lack of effective antibiotics is as serious a security threat as a sudden and deadly disease outbreak …strong, sustained action across all sectors is vital if we are to turn back the tide of antimicrobial resistance and keep the world safe”. As such, it really does seem time for the UK to formally limit the use of antibiotics to treating animals in a metaphylactic way, where the antibiotic is specific to the bacteria it is treating and so has a higher chance of working, thus reducing the rise of antibiotic-resistant bacteria.

In my opinion economic growth does not justify prophylactic antibiotics

Furthermore, by addressing the standards of living in which commercial animals are raised, and by looking at the biosecurity and hygiene of these environments, we can reduce the need for prophylactic antibiotics and also help secure food supply chains. Indeed, a country that has led the way in combating this issue is Denmark, which, while championing animal welfare, completely banned the use of avoparcin (an antibiotic commonly added to poultry, pig and cow feeds to promote growth and to prevent Enteritis, an inflammation of the small intestine) and yet still remains one of the world’s largest pork exporters.

Therefore, in answer to the title question, in my opinion economic growth does not justify prophylactic antibiotic usage by the farming industry as not only is the risk of AMR too great but there are also other steps that can be taken to promote growth. Accordingly I believe we should have laws in the UK restricting prophylactic usage in animal husbandry, thereby ensuring veterinarians and farmers have the same moral and legal obligations to ensure the health and wellbeing of animals as doctors do for humans.

Could wormholes help humans travel through space?

Wormholes, for many, are the work of science fiction, but the truth is they could be more real than we think - and even better, they might be able to assist us in our quest for intergalactic space travel.

With humanity on the verge of a breakthrough in space travel and exploration, it is important to understand the limits of travel and just how vast the universe really is. Take, for instance, Elon Musk’s Mars mission. The journey to get to Mars, our neighbouring planet, is estimated to take seven months. When imagining exploring the edges of our universe, it soon becomes apparent that with current technology, our generation cannot hope to leave the likes of our solar system. Wormholes, however, may be an answer to that.

In 1915, Albert Einstein published the theory of general relativity, which predicts the movement and behaviour of objects in spacetime. This work also predicted the existence of black holes: bodies whose entire mass is concentrated in one infinitely dense point, the singularity. Their existence was confirmed a century later, in 2015, through an instrument called LIGO, which converted disruptions in a laser caused by gravitational waves into an audio file.

In 1916, an Austrian physicist, Ludwig Flamm, suggested another theoretical body, the white hole, in his comments on Einstein’s work. A white hole is essentially what it sounds like: it spews matter and light from its core, giving it, in theory, a bright white appearance. It was thought to link to a black hole, thus connecting two distant regions of space. However, there were problems associated with white holes. For instance, their existence violates the second law of thermodynamics, which states that entropy (disorder) in the universe can increase or stay the same, but not decrease over time.

In 1970, however, British physicist Stephen Hawking put forth a convincing argument for the existence of white holes with his proposal of another phenomenon, which we now know as Hawking radiation. This theory proposes that black holes spontaneously radiate heat into their surroundings through the conversion of quantum vacuum fluctuations (variations in the energy of a certain point in space) into particle pairs, one of which escapes the black hole’s event horizon while the other remains trapped within it.

To understand exactly what quantum vacuum fluctuations are, it is first necessary to understand that a vacuum is not entirely empty. Throughout the universe exists a background energy known as the cosmological constant, also known as dark energy, which is responsible for the expansion of the universe. This energy fluctuates constantly. Variations in the vacuum energy can sometimes create a pair of particles, which tend to annihilate each other almost instantaneously and release energy again. Sometimes, however, these particles are separated from each other - most notably near black holes, where one particle gets trapped behind the event horizon and the other escapes.

This idea was widely credited also because it provided a solution to the information paradox. The information paradox, proposed by Hawking himself in his theory of Hawking radiation, looks at how the information of particles entering a black hole could be conserved. Take, for instance, a proton being absorbed by a black hole. This proton has a certain spin, its quarks are assigned the flavours up, up, and down, and it carries a charge of +1, among other properties. This information would seem to be lost when the proton enters a black hole and undergoes “spaghettification” – a term coined by Stephen Hawking – which essentially strips particles into their fundamental states and compresses them all into the singularity. However, this information cannot be destroyed according to quantum rules, rendering black holes apparent violations of fundamental physics.

Because this information must be conserved, the black hole must be conserved in some form to retain it. Hawking reasoned that a black hole would only radiate energy as per the rules of general relativity as long as it remained above the Planck size, after which it would be governed only by quantum rules. While this could be true, Italian physicist Carlo Rovelli suggests a link between black holes and white holes which could offer another solution.

An important thing to note at this point is that spacetime itself is quantized – that is, it is made of fundamental, indivisible quanta, woven into a fabric of spacetime – a concept widely agreed upon in the physics community. Carlo Rovelli noted that if spacetime is quantized, it cannot be lost, just as information cannot. A solution would be if the black hole, when radiating its last remnants away, reached a point after which it could not shrink any further and thus rebounded to form a white hole – the conclusion of a 2014 study conducted by his group.

This pairing of a black hole and a white hole is the essence of a wormhole. In 1935, Einstein and another physicist, Nathan Rosen, used general relativity to theorise the existence of a “bridge” in spacetime connecting two distant parts of the universe, which would greatly reduce travel time between them. It is also a form of time travel (as time is relative), and potentially suggests that we could go into the past and the future. The structure of a wormhole has two “mouths” (the white hole’s and black hole’s event horizons) and a “throat”. The mouths are spherical in shape, but the throat could be a straight line (and thus the shortest path) or convoluted.

However, the throat is very unstable and liable to collapse. It will tend to close in on itself unless a certain form of matter, exotic matter, were to exist. Exotic matter is believed to have negative energy density and a large negative pressure, properties which would cause it to push outwards on the tunnel and thus keep the throat of the wormhole open. Such exotic matter has not yet been discovered or procured in any way.

It is important to note that exotic matter describes a group of particles which would have properties violating our current laws of physics (such as having negative mass, density, or pressure) - at least according to general relativity. That is where quantum rules come into play. Quantum physics looks into particles’ behaviour at very small distances, which is often far more random than general relativity would suggest.

“ Another issue with wormholes is the size at which they are predicted to exist

Another issue with wormholes is the sizes at which they are predicted to exist. As aforementioned, white holes violate the rules of general relativity, and so wormholes violate general relativity too. However, at very small distances below the Planck length (1.6 × 10⁻³⁵ m), the rules of general relativity stop applying as we would expect them to. Thus wormholes could exist at such sizes, but their usefulness would be null considering we could not observe them visually, let alone use them. The answer to this, however, may lie in primordial wormholes.

Primordial wormholes describe a set of wormholes created in the very early stages of the universe. If a wormhole was created at such a time, its apparent size would have greatly increased with the expansion of the universe, making it plausible for humans to use. Such wormholes could exist, but are yet to be discovered. However, this indicates that without the use of exotic matter (itself theoretical and against the rules of general relativity), humans cannot hope to fabricate a stable wormhole - that is, until recent research conducted by physicists Ping Gao and Daniel Jafferis.

In this research paper, a potential solution for traversable wormholes, size adjusted, is discussed using the principle of quantum entanglement. Quantum entanglement can be thought of as two “connected” parts of a single body, wherein both bodies behave in a correlated manner regardless of their distance from each other. This relation can mean they behave in the same manner as each other, or in the complete opposite: the parameters, however defined, will remain the same at any given time and distance from each other when observed simultaneously. When not observed, each particle may be in a superposition, meaning it exists in either possible state with equal probability.

The parameters defined by the paper suggest such a relation between the black hole and white hole, creating a stable environment for travel between two points in spacetime. However, at present, this solution is not viable as it would take longer to travel this way than direct travel. Through application of quantum field theory, Jafferis negates the need for negative energy, which removes an inconsistency of relativity.

Currently, wormholes are an impractical method for traversing the universe, whether because of their instability, minuscule size, or time inefficiency. However, by researching wormholes we can better our understanding of quantum and general interactions and their relation, and thus look to interact with and manipulate spacetime to better journey through it. By understanding the links between regions of spacetime purported by wormholes, and by using classical physics to find ways to utilise them to our advantage, humans can look to an era of space travel and potentially time travel.


CAN WE USE CLONING TO SAVE ENDANGERED SPECIES AND IS IT ETHICAL?

Is it possible to use cloning to save species? Is cloning itself ethical? Could cloning change the fate of dying species? In this intriguing article Isaac aims to answer these big questions and deliver a judgement on the ethical implications of cloning.

The study of cloning began in 1885 when a German scientist called Hans Adolf Eduard Driesch was studying reproduction. In 1902, he created twin salamanders by splitting a salamander embryo in two. After that, cloning advanced rapidly. In 1958, biologist John Gurdon was able to clone frogs from frog skin cells, and in 1996 Ian Wilmut led the team of scientists who created Dolly the sheep, the first mammal clone. Since then, many other animals have been successfully cloned. Dolly was a major breakthrough in the science of cloning, as she was the first animal cloned from a mature cell, and her existence made the wider world start to think about the possibility of cloning humans.

Making a clone is very complicated and difficult, but the process can be boiled down to a few simple stages. Firstly, you take a cell from the animal you want to clone [Organism A] and remove the nucleus of that cell. The nucleus is the control centre of the cell and contains most of its genetic information. You also take an unfertilised egg from the same species [Organism B] (you can use eggs from similar species, but that reduces the likelihood of a healthy clone) and remove its nucleus. The donor nucleus and the enucleated egg are kept in an altered growth medium (a substance that keeps the cells alive but prevents them from growing). You then fuse the nucleus from Organism A with the egg from Organism B using electrical pulses. After that, the egg is observed for 5-6 days to see whether it develops as it should. Once there is proof the nucleus has properly fused, the egg, which by now should have formed a small embryo, is placed into a surrogate mother.

Cloning seems like an easy solution for saving species. However, it may never be as useful as we want it to be, because the process is unreliable and also too expensive to use on a large scale. In addition, for an animal species to survive and develop, you need high genetic diversity, both to prevent a single disease from wiping out the species and to reduce the likelihood of a genetic disease spreading through it. Cloning does not allow genetic diversity, as every individual is a carbon copy of the original organism.

Even though these drawbacks seem to eliminate cloning as a tool for saving endangered species, some scientists remain hopeful. Ian Harrison, who works for the Biodiversity Assessment Unit at Conservation International in Arlington, Virginia, explains: “While cloning is a tool of last resort, it may prove valuable for some species. Experimenting with it now, using species that are not at immediate risk of extinction, is important.”

However, the poor reliability of cloning means that scientists have to use hundreds of embryos to obtain only a handful of clones. This is expensive and a waste of embryos. Because of this, there are also laws protecting certain species whose embryos cannot be spared for this procedure. Unfortunately, the animals that fit the aforementioned criteria are the species that are most endangered.

In an attempt to bypass the legal problems scientists face when cloning endangered species, they combine the DNA of the species they want to clone with an embryo of a similar species. But these embryos usually fail to develop properly. This reduces the probability of success even further, meaning more wasted expense.

However, these hybrid embryos already have DNA of their own. This means that when the DNA you want to clone is inserted, it bonds with the DNA that is already in the embryo. This causes the cloned animal to be a genetic mix between the animal you wanted to clone and the animal from which you extracted the embryo. Many of the clones die as a result, and the ones that survive are not actually the species you wanted to clone. However, some scientists have found ways to overcome the challenges of dealing with hybrid embryos. For example, Pasqualino Loi (University of Teramo, Italy) believes it is possible to increase the life expectancy of clones developed in a hybrid embryo. To do this he proposed that scientists nurture the hybrid embryo until it develops into what is called a blastocyst. A blastocyst is made of an outer circle of cells (the trophoblast) and, inside it, some quickly multiplying stem cells (the inner cell mass). Scientists are then able to remove the inner cell mass and transplant it into a trophoblast from the same species as the surrogate mother. This increases the likelihood of survival because the mother is much more likely to accept a trophoblast from her own species.

However, this method further increases the expense of cloning. This expense then makes it even more difficult to scale cloning up, which makes it economically unviable considering the method still has a very low success rate. Therefore, I believe that cloning is too expensive (one cloning attempt can cost over £50,000) and too unreliable (90% to 98% of cloning attempts fail) to be used on a mass scale, and so it cannot be used to save endangered species. Another underlying argument is that cloning would lower biodiversity to the point where the only way the species would survive is if we kept cloning it.
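A quick Python sketch of the economics implied by the figures just quoted (the £50,000 per attempt and 90-98% failure rates come from the paragraph above; the calculation itself is a rough illustration, not a published estimate):

# Expected cost of one live clone = cost per attempt / success rate.
cost_per_attempt = 50_000            # pounds, figure quoted above

for failure_rate in (0.90, 0.95, 0.98):
    success_rate = 1 - failure_rate
    expected_attempts = 1 / success_rate
    expected_cost = cost_per_attempt * expected_attempts
    print(f"{failure_rate:.0%} failure -> ~{expected_attempts:.0f} attempts, "
          f"~£{expected_cost:,.0f} per live clone")

Even at the optimistic end of that range, a single live clone comes out at around half a million pounds, which is why scaling the technique up looks so difficult.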

When it comes to whether we can use cloning to bring back extinct species, we face the same problems as with cloning endangered species, and then some. The primary issue is that there is very little diversity in the DNA sources of extinct species available for cloning and, as previously stated, a large genetic diversity is required to sustain a species. In addition, it is impossible to clone species that died out over one million years ago, as not enough genetic material survives to bring them back.

Also, in the absence of any living members of that species, scientists are forced to use hybrid embryos. As we know, this creates a hybrid between the species you actually want and the species of the surrogate mother by subtly altering the DNA. This means it is impossible to clone an extinct species exactly, but within the next decade it may be possible to produce self-reliant hybrids with qualities of extinct animals.

However, it may still be possible, as we now have the technology to manipulate DNA. We can use this to fill in the sections of DNA that have degraded, rebuild the genome from all the genetic material from that species we can find, and then attempt to clone from that final, reconstructed genome.

In essence, I believe that in time it will be possible to clone animals similar to extinct animals, and I believe that in time they could become self-sustaining. However, the question remains as to why we might do this.

An example of cloning in popular culture is the film Jurassic Park, where extinct species were revived to be kept in zoos. However, this can be considered cruel, and we should not waste money on bringing extinct animals back just so they can be looked at and nothing more. If we bring them back, it should be either because we were the cause of their extinction or for an environmental reason. Either way, we should not be doing it to keep them in captivity; we should be giving them the ability to go out into the wild. For example, in the last years of its existence the woolly mammoth retreated to Siberia, where it was amazingly beneficial: it churned up the ground with its tusks and fertilised it with manure, making it a vibrant grassland. Since the mammoths’ demise, moss has taken over, killing the grass and turning it into a wasteland. If the mammoths returned, the grass could potentially grow again and it would benefit all the animals in the area. This, I think, would be money well spent, and it would be worth the billions of dollars needed to do it. However, this is a very special case and should not be used as a way to get the world to waste all its money on bringing back every other extinct species as well.

However, if we did bring back an extinct species, it would be highly endangered and would need constant conservation. Currently we cannot even conserve all the endangered species there are, so I do not think that piling even more pressure on ourselves, and pouring money into bringing back an extinct species that will probably die out again within a few years, is a particularly good idea.

I believe that if we use cloning at all, we should be using it to try to save endangered species first. Also, instead of relying on cloning alone, scientists should be using gene-editing technology to subtly change the DNA and so maintain genetic diversity.

As well as this, it is a waste of money: cloning one mammoth-like creature would cost around ten million pounds, money that could instead be used to slow global warming.

Finally, I interviewed Sam Frost (another academic scholar in the IV Form) and asked whether he thought using cloning to bring back extinct species would be good for the environment given global warming, to which he answered: “Global warming is affecting the whole planet and we need to focus on that. If and when we solve climate change, we might be able to reintroduce extinct species back into our ecosystem. If we are going to reintroduce a species then we are going to have to do it slowly. If we suddenly put a recently extinct species into its habitat it could drastically affect the ecosystem.” He also said, “I think that bringing back extinct species could take up space needed to populate the human race. If we bring back too many then we might change the food chain which could affect all of us in the long term.”

So, in conclusion, I believe that cloning extinct species is a waste of money and time, although cloning some endangered species may not be so expensive. Even then, we should be looking at solving the causes of their endangerment (global warming, plastic in the oceans), which I believe is a much better idea as it allows us to tackle all the problems endangering species at once. This is more efficient both timewise and financially, and prevents further expense in the future. I also think that cloning is ethical provided we let the cloned animals out into their native habitat rather than keeping them in zoos. However, it is still a waste of money.


The Energy of the Future

With the current population set to reach 8 billion, the future looks uncertain. How will we feed this alarmingly large mass of humanity? How will we cater and provide for them? The answer lies in innovation; after all, a person is not just a mouth and a stomach, but a pair of hands.

To a degree, we have already seen these new ideas; fusion energy and Dyson spheres are high on the list. These two in particular are arguably the most interesting of the lot, and the ones with the most upside potential. Modern science is surging forward, desperate to make them a reality - some more than others.

The first source on the list is fusion energy. This source is particularly interesting; it is the little brother of fission, a reaction we are already accustomed to. But despite being related, fusion is quite the opposite. Instead of splitting an atom in two, fusion, as the name suggests, joins two atoms together. The process involves colliding two smaller atoms into each other, forming a larger atom as the product. However, this larger atom has less mass than the two smaller atoms combined, creating a deficit. To understand how this creates energy, we must first consider the relationship between energy and mass. These are linked by the equation E = mc², Einstein’s famous t-shirt equation, where E = energy, m = mass, and c = the speed of light. The missing mass is therefore converted into energy and, according to the formula, it is an astronomical amount: even the tiniest amount of mass is multiplied by the square of the speed of light - a factor of roughly 90,000,000,000,000,000 - giving humanity a huge headstart in finding the E it so desperately needs.
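To see the numbers behind E = mc², here is a short Python sketch of the deuterium-tritium reaction discussed below. The atomic masses are standard published values, and the coal comparison figure is a commonly quoted rough number; none of this is specific to any particular reactor design.

u = 1.66053907e-27          # kilograms per atomic mass unit
c = 2.998e8                 # speed of light, m/s

m_deuterium = 2.014102 * u
m_tritium   = 3.016049 * u
m_helium4   = 4.002602 * u
m_neutron   = 1.008665 * u

# Mass "lost" when the two small nuclei fuse into helium-4 plus a neutron.
mass_deficit = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)

energy_joules = mass_deficit * c**2            # E = mc^2
energy_MeV = energy_joules / 1.602e-13

print(f"mass deficit per reaction: {mass_deficit:.3e} kg")
print(f"energy released: {energy_MeV:.1f} MeV (~17.6 MeV is the textbook figure)")

# Energy per kilogram of D-T fuel, compared with burning coal (~24 MJ/kg,
# a commonly quoted rough value).
per_kg_fuel = energy_joules / (m_deuterium + m_tritium)
print(f"energy per kg of fuel: {per_kg_fuel:.2e} J/kg, vs ~2.4e7 J/kg for coal")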

Typically the atoms we combine are very small isotopes of hydrogen, namely deuterium and tritium. These atoms have extra neutrons: one for the former and two for the latter. Deuterium is very common - so common that you could visit your local seaside and find it in abundance, with roughly 1 in every 5,000 atoms in seawater containing it - so one of the two fuels is simple enough to find. But what about tritium? This is where we encounter the first hurdle. Tritium is so rare it is valued at around $2 billion a kilogram; a single kilogram costs more than 35,000 kg of gold, which seems a far more tempting purchase than scrounging for tritium scraps.
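A quick sanity check of that gold comparison in Python. The $2 billion per kilogram tritium figure comes from the paragraph above; the gold price is an assumed rough market figure of about $57,000 per kilogram.

tritium_price_per_kg = 2_000_000_000      # US dollars, figure quoted above
gold_price_per_kg = 57_000                # assumed rough market price, USD

gold_equivalent_kg = tritium_price_per_kg / gold_price_per_kg
print(f"1 kg of tritium buys roughly {gold_equivalent_kg:,.0f} kg of gold")
# -> on the order of 35,000 kg, in line with the figure quoted above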

But not all hope is lost for fusion; a substitute may be on its way. While easier to come by (and costing much less than $2 billion a kilo), Helium-3 may solve some of the problems posed by tritium. It is more efficient, making net-positive fusion seem more likely, and it is non-radioactive, reducing risks to workers and making the substance easier to handle. However, while Helium-3 may be more common, it is not necessarily more common on Earth…

In order to find Helium-3, we must look to outer space - in particular, the Moon. Over time, solar winds have been deposited on the Moon’s surface, potentially building up large amounts of Helium-3. This not only provides motivation for a cost-effective Moon-mining project, but could also drastically reduce the price of our fusion-bound isotope. We would be combining water, the essence of all life, with Moon dust, from the extremities of outer space, in a delicate ballet.

While fusion may be taking up the modern spotlight like an annoying younger sibling, fission remains the grumpy loose end that no country seems occupied in tying. Once front and centre of humanity during the Cold War, it has slunk back, overshadowed by renewables - destined for bombs, and nothing more. Modern countries are moving away from fission, with Germany only running two of 42 plants. But perhaps fission still provides some viability, even to countries which purge its usage from their land.

Fission seems to be the middle ground between fossil fuels and renewables; it is cleaner than oil, gas, and coal, but certain stigmas persist among the public. Chernobyl is a prime example of one cause of these beliefs: a disaster which made 150,000 square kilometres unlivable, gave cancer to hundreds of thousands of people, and tarnished fission’s name. But while the sensationalistic explosion rocked avid nuclear fission fans, it was simply that - a disaster which, while deadly, had a drastically lower death count than normal fossil fuels and drastically lower impacts on health conditions.

But why the stigma? If fossil fuels are the real killer, why do we still use them in abundance? It is because of the nature of oil, gas, and coal. These fuels gently release harmful gases into the air, molecule by molecule. Slower than a double maths class, they gradually seep into the atmosphere, unnoticed. It is like raising the temperature of your house by a degree every day; you do not notice until you are sweating, and by then it is too late not to make a mess. On average, fossil fuels cause 8.7 million premature deaths a year - a number which makes Chernobyl look like a bunsen burner.

However, a bunsen burner can still burn, and that is where the waste comes into play. After a reaction, nuclear waste is sorted into three categories: low, intermediate, and high. Low-level waste is items, typically gloves and suits, which have come into contact with fission material. Intermediate-level waste is the surrounding core material, such as the concrete or the lead taking the radiation. High-level waste is the reactants themselves, the isotopes with million-year half-lives. Only around 3% of the waste falls into the last category, but this minority produces the most ionising radiation, causing the most damage to humans and the environment. Various solutions have been proposed, with varying levels of success. Repurposing waste as more fuel seems attractive but is perceived as not cost-efficient. Sending the waste to space seems like an intuitively smart solution, but the cost would be astronomical compared to other alternatives. The final and most popular approach seems to be burying it in the ground - forgetting about it and waiting for it to work itself out. While in the short term this may be successful, these fuels take millions of years to decay, so in the extreme long term it is impossible to know.

So while both fusion and fission seem to dominate the front pages (if renewables are pushed aside), a mega-project, one seen only in science fiction movies, goes unnoticed: the Dyson sphere. An idea so ludicrously enormous that it might save humanity from ever needing another energy source again.

A dyson’s-sphere is a proposed idea by Freeman Dyson, a machine which would surround our sun, effectively absorbing every possible piece of solar energy. This machine would need to cover over 6 trillion kilometres, requiring more materials than one person could ever imagine to fabricate. The plan to build such a structure is set under three main categories: the materials and where to harvest them, the design of the sphere, and the energy required to build it.

Firstly, the materials. In order to surround the sun, humanity would need to disassemble an entire planet. Out of all potential candidates, Mercury seems to be the best contender; it is close to the sun and rich in metals like titanium, which reduces the cost of transportation. Secondly, the design of the sphere. A large, solid shell would not be the ideal choice for the construction: impacts from asteroids and intense solar flares may cause damage, collapsing the entire structure into itself. A preferable structure would be a Dyson swarm. This envisions an array of panels, all orbiting the sun, like a hive of bees circling an intruder. These solar panels would be light, efficient, and able to operate without repairs for a very long time. Finally, the energy to create this megastructure. Currently, humanity has not yet developed science to the point where colonising Mercury and harnessing the sun are within budget, but fusion might solve that problem in the future.

Dyson-spheres are an interesting concept due to their sci-fi nature, but modern-science roots. Making such a machine would not violate any physics, or break any laws of thermodynamics. It is all based in the modern world, seemingly a touch of innovation away from becoming a reality.

And it is the same with any idea of the future. Fusion, Dyson spheres, and especially fission all challenge the ideas of the present. They stretch the imagination of humanity, asking: is this the limit? Only time will spill the secrets of the future, but until then, we are left to ponder.

Concept image of a Dyson Sphere

How imperative is asteroid mining to our future?

With resource insecurity on the rise and an increased interest in asteroids and their uses, the world is looking for alternative ways to deal with rapid population growth. In this article, John explores an option that is quite literally ‘out of this world’.

The world is running out of resources. That is a fact, no matter what you believe in, and although it is difficult to gauge when it will happen, Earth’s precious resources will eventually be used up, and there will be nothing left for us. Some people believe that traveling to the Moon, Mars, and beyond will allow us to expand our land and take resources from those space bodies as well. Getting to that stage is a difficult task, and could still be far in the future. Therefore, we need a way to gather more of Earth’s rare metals, named that for a reason, in order to restore our planet’s depleting resource levels. The next logical step would still be a huge one, but we’ve accomplished giant leaps in the past.

The solution? Asteroid mining. The main idea involves bringing an asteroid closer to Earth (something that has already proven relatively possible with the recent DART mission) and sending machinery and robots to extract the necessary materials. However, it was only around a decade ago that scientists decided asteroid mining could actually be a possibility outside of science fiction. It was found that a single asteroid around 10 metres across could hold around 650,000 kg of useful resources, including iron, nickel, and rarer metals such as platinum and gold. Looking at larger asteroids, some of them can be worth thousands or millions of dollars. There are many kinks to work out, of course, but based on just the basic facts, it seems like a no-brainer.

Let’s look quickly at some asteroid logistics and the dangerous business of assigning price values to asteroids. There are three different types of asteroid: C-type, S-type, and M-type. C-type asteroids mostly contain water and carbon and make up 75% of all known asteroids. Although there are not many useful mining materials on these asteroids, another idea to consider is using their abundance of hydrogen, carbon, and oxygen for another task: these materials are useful for making fuel, and the C-types could become stepping stones/fuelling stations for future space missions, though that is another matter entirely. M-type asteroids are much rarer than C-type, but are also much larger and contain huge amounts of useful resources. They will probably be harder to harness, but their high value may make up for the cost. S-type asteroids, which make up around 17% of all known asteroids, are the ones we will probably focus on in the near future. The 10-metre-across asteroid mentioned previously would most likely be an S-type.

Without going into specific values, one could say that (depending on the resources) an asteroid’s worth can range from tens of thousands to even a few trillion dollars. However, judging the value of an asteroid is a dangerous business, as the prices of the resources it provides depend on the market. This is a small con of the asteroid mining business: whoever controls the resources also controls the market for them. In an ideal world, the resources would not flood the market and crash their price, but would also not be held back from the world to inflate it. If we were to mine a lot of asteroids and gain an exponential amount of resources, the value the asteroids were originally given would no longer be valid, as there would be much more supply than demand.
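To see how sensitive such valuations are, here is an illustrative Python sketch. The 650,000 kg figure comes from the article; the composition fractions and market prices below are assumptions chosen purely for illustration, not measured properties of any real asteroid.

total_mass_kg = 650_000                  # figure quoted earlier in the article

# Assumed make-up of a small metal-rich asteroid (illustrative only):
# each entry is (fraction of total mass, assumed price in USD per kg).
composition = {
    "iron":     (0.85, 0.5),
    "nickel":   (0.14, 20),
    "platinum": (0.00003, 30_000),
}

value = sum(total_mass_kg * frac * price for frac, price in composition.values())
print(f"rough in-space value: ${value:,.0f}")

# Halve the assumed platinum content, or let its market price drop, and the
# headline number swings by hundreds of thousands of dollars - the valuation
# is only as good as the assumptions (and the market) behind it.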

Critics may disagree that this is the next step in scientific discovery. Fusion energy, moon bases, flying cars, and even solving climate change seem to be more important tasks. However, if you think about it, asteroid mining is a significant connection between these points and could provide a lot of useful advances when it comes time to pursue these other tasks. Every large project we look at requires time, money, and most importantly, resources. Money and resources can both be provided by asteroid mining, and a lot of time can be saved if it becomes a possibility. This is because scientific discoveries required for asteroid mining, which would presumably be made along the way, will help with future projects, such as automated robots helping with planetary exploration and innovative rocket designs for deeper space travel. Instead of waiting to discover these things in the natural pattern of scientific advancement, it is more or less forced when a process requires progress to be made.

The question is: how far in the future is this, and how could we possibly accomplish it with near-future technology? The process can be split into three distinct stages. The first stage will be to get the chosen asteroid to us, or find some way to get to the chosen asteroid. The easier method would be the former, as it only involves course calculations for an asteroid and a craft, not a whole fleet of robots and rockets travelling to the asteroid; not to mention that the distance from Earth makes it hard to manage the mining process. After the recent success of the NASA and SpaceX DART mission, changing the course of an asteroid has been proven possible and, better still, relatively easy. The craft would have to be slightly adapted for its new task, perhaps so as not to be destroyed in the process, and to cause a greater change in the asteroid’s course. The aim would be to bring the asteroid into a high orbit, perhaps at the approximate distance of a geostationary satellite. From there the second stage would come into play: the actual process of mining materials from the captured asteroid.

Putting NASA’s great minds to work, coming up with a mining technique shouldn’t be too hard, potentially having robots cut through the stone of the asteroid and store as many resources as possible. The hardest part of this stage would be the task of getting the mining equipment or robots up to the asteroid, requiring lots of fuel and precise calculations to rendezvous with the asteroid successfully. The short-term solution would be costly, but in the long term a space elevator or a hyperloop-style launch from the surface of Earth could provide an alternative. A small problem with the stage as a whole would be the waste that is produced when breaking apart the asteroid. The materials would eventually build up and form a deadly ring around Earth’s atmosphere, interfering with satellites and orbiting equipment. A solution could be to program the mining robots to collect every last piece of rock, as every material produced is useful in some sense.

The third stage would be to get the resources collected back to Earth. If the idea of a space elevator is implemented as mentioned previously, this would be a much easier task. If you have an elevator going up to the asteroid, you might as well have one going down too. After all, why build one when you can build two for twice the price? However, any other idea would prove slightly more problematic, requiring more inventive ways to bring the resources back. There are a few solutions, one being that you could crash the used robots into the ocean and retrieve them like rocket parts (not very practical), or spend some extra money and program the robots to land on their own (similar to SpaceX’s rocket models).

A final point to look at is a kind of paradox. The main idea is that an advancement such as asteroid mining will take much longer to be developed than it could. The reason for this is that people have different opinions about what direction science should take, and not everyone will focus on a single idea. Fusion energy is a big focus right now, as well as more efficient electric vehicles and sustainable power sources. Because scientific focus is scattered, there is much less progress in any given direction. A good example of this is the Space Race, when the US and the Soviet Union were competing in scientific discovery: because of the focus on that particular task, much more progress was made, and much quicker. This scattered focus doesn’t have a simple solution, as it requires everyone to agree on a single goal (not always the easiest task).

You’ve just had a quick overview of asteroid mining and its potential purpose in the world, and now hopefully you can see how it would be imperative to our future. There are many possibilities for how to get the process up and running, but due to the “paradox,” it might not be seen for a little while.

The Impact of Psychology on Financial Investment Decisions

At face value, psychology and finance do not appear to match well; however, in this article Marc uncovers the crucial role that psychology plays for investment bankers making high-risk decisions.

Many think that psychology plays no part in today’s financial markets; however, the truth is the exact opposite. In fact, I would argue that psychology, emotions and irrationality have played some of the largest roles in the movement of our markets over the past two decades. For a long time, everybody thought that traditional finance theory was accurate, because it states that investors think rationally and make deliberate decisions based on various estimations or economic models.

However, investors can very well be described as irrational, because they tend to hold onto a belief and then apply it as a subjective reference point for making future judgments. People often base their decisions on the first source of information to which they are exposed and have difficulty adjusting their views to new information. Many of our beliefs about the stock market are based on outdated sources of information which no longer apply to our markets.

Information, which we now get largely through technology, is a vital part of placing well-informed, low-risk investments, and there is no doubt that most investors have access to it. The growing rate of technology has had an immense effect on trading and investment. Technology has made information easily accessible to investors, but there has been little focus on the issue of interpreting that information skilfully. Often, information is not interpreted correctly, which leads investors to make the wrong decisions.

Behavioral Biases

Investment analysis is said to be psychological in various aspects. These are the main aspects in which psychology comes into play while investing:

Overconfidence is a large factor in a trader’s outcome at the end of the day; it is an exaggerated belief in one’s abilities compared to reality. In investing this can lead to overtrading. An investor might replace their actual knowledge with their sense of confidence, which can lead to errors. Research shows that most investors are overconfident in their ability to make well-informed decisions in the stock market, and many are overwhelmed when they make large profits or losses - this leads them to make uninformed decisions, and ones that generally lose them money.

Self Attribution is another form of investment bias which can lead to negative returns in the long run. People attribute positive, successful investments to themselves, and ones which yield bad results to external factors which they have no control over.

Selective memory keeps certain memories and expels others. It pushes us to remember our good trades but forget our bad ones. Selective memory skews an investor’s view of themselves and often leads to an investor being overconfident.

Personally, I find that herding is one of the worst mistakes an investor can make. Herding, otherwise described as ‘mob psychology’ or ‘herd mentality’, is when an investor follows the crowd. Often, these decisions are uninformed, which leads them to buy a stock that is most likely inflated by bloated public interest. This stock might plummet, causing the investor to lose much of their capital. In addition, these investments are generally placed in the hope of ‘quick money’, which means that uninformed investors are more likely to place large sums of money on a single trade. In the recent cryptocurrency market, there have been many cases in which ordinary people have lost large amounts of capital because of herding.

All of these behaviors are extremely common in all aspects of human psychology, and can most definitely be applied to trading. When combined, they restrict returns. Moreover, investors who exhibit several of these biases are likely to repeat them, which leads to further damage to overall returns. The global markets in the past few years have witnessed rising volatility and fluctuations. Numerous studies have shown that stock markets have done extraordinarily well in the past, presenting returns of more than 15%. But even today, the majority of investors are of the view that stock market returns are uncertain and volatile. Even well-educated and experienced investors are not able to earn above-average returns in the stock market. In fact, in 2007, Warren Buffett made a $1 million bet against Protege Partners that hedge funds wouldn’t outperform an S&P index fund, and he won. Buffett’s Vanguard S&P 500 index fund returned 7.1% compounded annually, while the hedge fund managers returned an average of only 2.2%. This has yet again brought into question the practical application of traditional financial theory to investing in the stock market, simply because of how irrational most investors are - they let profits and losses cloud their minds in a fumble of glee, greed and regret.
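A short Python illustration of the compounding gap in that bet, using the returns quoted above (7.1% versus 2.2% a year); the $1 million stake and the ten-year horizon reflect the widely reported terms of the wager.

stake = 1_000_000
years = 10

index_fund  = stake * (1 + 0.071) ** years
hedge_funds = stake * (1 + 0.022) ** years

print(f"S&P 500 index fund: ${index_fund:,.0f}")     # roughly $1.99 million
print(f"hedge fund average: ${hedge_funds:,.0f}")    # roughly $1.24 million
print(f"gap after {years} years: ${index_fund - hedge_funds:,.0f}")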

Various changes in the market, including price volatility and shifts in economic conditions, have a large impact on investors' thinking. Individuals constantly feel the fear of losing money, no matter who they are, so they react impulsively to market changes and abandon their long-term investment goals, which goes against the advice of virtually every financial expert, and they begin to have doubts about their investments. Doubt is what can ruin an investor's chances of a prosperous future in the market. It comes from many factors, not only the ones mentioned above but also overconfidence, self-attribution, selective memory and herding. These all have the potential to lead to negative returns and therefore doubt, which is the worst possible thought in an investor's mind.

Could plastic surgery ever solve body dysmorphic disorder?

Mental health has revealed itself to be one of the most important issues of the 21st Century. Therefore, it is imperative that research is conducted into how to treat these illnesses. In this article, Matilda looks at how BDD might be cured.

Body dysmorphic disorder affects around 1 in 50 people worldwide. In the USA alone, it is estimated that 7-10 million people suffer from this condition. The real figure may be even higher, however, as many people with body dysmorphic disorder are reluctant to discuss their condition with others or to seek help.

Body Dysmorphic Disorder (BDD) – or body dysmorphia – is a mental health condition where a person spends a large portion of their time focusing and obsessing over flaws in their physical appearance, supposed flaws that are often unnoticeable to others. Anyone can suffer from BDD, but it is most common in teenagers and young adults, and it affects both men and women. According to the NHS, symptoms of BDD include: worrying about a specific area of your body (particularly your face); spending too much time comparing your looks with others; constantly looking in the mirror or avoiding mirrors altogether; and going to great lengths to conceal flaws – spending a long time applying makeup, brushing your hair, or even getting plastic surgery.

BDD seriously impacts the daily lives of many, and may even alter your work, social life and relationships. This condition can also lead to depression. The NHS recommends that you should see a general practitioner if you have BDD, as they can assess your symptoms and ask a number of questions. The GP may refer you to a mental health specialist for further treatment. In the majority of BDD cases, symptoms will not go away without treatment.

Plastic surgery is a surgical way to help with both the enhancement of a person’s appearance and the reconstruction of facial and body tissue defects caused by illness or birth disorders. In other words, the restoration, reconstruction, or alteration of the human body. It is most popular among 35-50-year-olds.

Cosmetic surgery is a branch of plastic surgery which is not done for medical reasons; some choose to have invasive medical procedures purely to change their physical appearance, whilst for others, plastic surgery is a necessity and is needed to help them live a normal life. For example, cleft lips/palates are usually treated with surgery.

Plastic surgery is very common and accessible in today’s age. In the USA alone, over 15 million cosmetic surgeries are done every year. One of the most common reasons that people get cosmetic surgery is to change their appearance to make them seem more likeable.

“ In the USA alone over 15 million cosmetic surgeries are done every year

Dr Dirk Kremer is a London-based cosmetic surgeon who sees hundreds of patients per year. Some go to him for medical issues such as cleft lip and palate, severe injuries, or large scars which prevent movement, whilst others seek surgery to satisfy themselves. These are people who feel they need a confidence boost, so they turn to aesthetic surgery; most of these people suffer from Body Dysmorphic Disorder. They could be turning to cosmetic surgery for many reasons: some may feel too embarrassed to find appropriate therapy, others may not even realise they are suffering from the condition.

Body dysmorphic disorder is commonly perceived as someone thinking they are ugly, however, this could not be further from the truth. BDD is when one is constantly obsessing over their appearance to the point where it affects their daily life. Many BDD sufferers will turn to plastic surgery to ‘fix’ whatever they feel is wrong; however, this is not very effective as BDD sufferers will find another thing to worry about. There will always be an aspect of themselves that patients feel needs fixing.


In 2003, the British Journal of Plastic Surgery published a study on cosmetic rhinoplasties. It reported that, in a United Kingdom questionnaire, over 20% of patients requesting a rhinoplasty were diagnosed with body dysmorphic disorder. In the second stage of the research, the authors compared patients without BDD who had a positive outcome after a cosmetic rhinoplasty with BDD patients in a psychiatric clinic who desired rhinoplasties. The results of this study show the differences between BDD patients and patients without BDD. For example, researchers found that BDD patients are younger, more depressed and anxious, and preoccupied with checking their noses very frequently. It was also found that some of them were likely to attempt 'DIY surgery', have problems in their relationships with others, and avoid social situations altogether, just because of their nose. This study helped cosmetic surgeons identify patients who come to them with BDD. Other than rhinoplasties, common procedures BDD sufferers seek include facelifts, breast enlargement and reduction, and liposuction.

Dr Dirk Kremer works at Harley St Aesthetics, and states that he will "always undertake an initial consultation" to try to understand the reasoning behind the surgery a patient desires. If Dr Kremer suspects a patient suffers from BDD, he will always advise them that plastic surgery is not a healthy solution and recommend that they speak to a mental health service to find the right solution to the issue. He also mentions that many surgeons do not take the mental health of their patients into account before performing surgery, despite better regulations coming into force in the UK. He says, "plastic surgeons have an ethical duty to ensure their patient is making the decision for the best reasons, and should always provide patients with alternative treatment".

For the best possible treatment, a mental health professional (psychologist or psychiatrist) would be better than a GP. A common treatment for people with mild symptoms is a type of talking therapy which is called cognitive behavioural therapy, or CBT. This can be one-on-one or in a group. People with moderate symptoms are offered CBT and/or an antidepressant medicine called a selective serotonin reuptake inhibitor (SSRI), such as Fluoxetine (Prozac) or Sertraline (Zoloft). People with more severe symptoms are normally offered both together.

CBT is a therapy which can help with managing BDD symptoms by altering the way one thinks and behaves. In this form of therapy, one can learn about what triggers their symptoms and how to deal with negative habits. The type of CBT used for treating BDD is called exposure and response prevention, or ERP. ERP is a therapy type which involves gradually facing situations that would generally trigger negative thoughts about one’s appearance. When this happens, the therapist will help the person find a few ways of dealing with the obsessive thoughts and feelings so that eventually, they can help themselves by not feeling self-conscious or anxious. Depending on the person’s preference, CBT may also involve group work with other patients or close relatives.

Selective serotonin reuptake inhibitors (SSRIs) are a type of antidepressant that can be prescribed to people with a range of mental health problems, including BDD. Fluoxetine is the most common prescription for treating BDD. SSRIs do not work instantly; it may take up to 12 weeks for any noticeable effects to appear. The side effects usually last for only a few weeks, and if the medication works, the patient will often take it for several months to improve their BDD symptoms. If BDD symptoms have resolved after 6-12 months, the patient will usually no longer need antidepressants, although this timescale varies from person to person.

In conclusion, cosmetic surgery is a short-term fix for body dysmorphic disorder. It may stop someone worrying about a part of their body for a while, but eventually the sufferer will find something else to hyper-fixate on. Many medical professionals agree that cosmetic surgeons need to dive deeper into the reasons their patients are pursuing surgery, as it could do more harm than good. There have been BDD patients who undergo an unhealthy number of surgeries, causing permanent damage to both their physical and mental health.

The use of nanotechnology in cancer treatment

In this article, Georgia explains how advancements in technology (specifically nanotechnology) may be able to help us one day treat and cure cancer, an illness that has been so devastating to our society.

Can nanotechnology change the future of cancer treatment? I want to understand just how this revolutionary technology that we've heard so much about actually works, and whether it can really help. So, what is nanotechnology? Nanotechnology is technology scaled down to the level of tiny, nano-sized particles. A nanometre is one billionth of a metre, and at this size the physics and chemistry of particles change. For example, carbon nanotubes are 100 times stronger than steel and six times lighter. In application to cancer treatment, the key characteristic of nanotechnology is that its size allows particles to enter the bloodstream and be used intravascularly to approach cancerous cells. I examined the various areas of cancer treatment affected by nanotechnology, and settled on two main points of interest: drug delivery and cancer imaging.

In drug delivery, nanotechnology is applied mainly to chemotherapy. Chemotherapy involves delivering chemotherapeutic drugs into the body to kill or disable cells. However, over 99% of these drugs never reach the tumour; this is where the specificity of nanoparticles can be used. Focused delivery allows drugs to be delivered straight to the cancerous cells and to reach the tumour directly. This improves the uptake of poorly soluble chemotherapeutic drugs as well as limiting damage to non-cancerous cells. A combination of three functions can be used to target cells, one example being a system called 'Probes Encapsulated by Biologically Localised Embedding', or PEBBLE. This involves one molecule to guide the system to the cancerous cell, one to provide MRI imaging of the target, and one to deliver the drugs. Often the cancerous cells are sought out using biomarkers and receptors on the surface of the cells. This combination is known as theranostic nanomaterials, where imaging and treatment functions are integrated into the same drug-delivery platform.

Some nanoparticles with unique properties are used for targeting tumours through ultrasound-triggered or heat-sensitive release (selective tumour ablation), and some can be used to physically hold tumours to restrict cancerous cell growth. The scope for adapting these particles is enormous: in mice, for example, one such approach reduced tumour growth from 26.56 times to 8.78 times the original size.

This targeted delivery allows higher doses of drugs to be delivered for prolonged periods. Nanoparticles are also taken up more readily by cells than larger molecules, so they are effective delivery vessels, increasing the likelihood of uptake of the drugs by the tumour and minimising damage to nearby healthy tissue. All of these adaptations would allow nanoparticles to remove some of the existing barriers to entry into cancer cells, such as the cancerous cells' immune system, which can fight off the body's natural anti-cancer mechanisms.

The cancerous cells’ immune systems can also destroy chemotherapy drugs by pumping them out of the cells. However, drugs delivered by nanotechnology are not affected by these defence mechanisms, and because of this, the drugs can actually reach and impact cancerous cells.

For the patient, this would mean dramatically reduced side effects. The side effects of chemotherapy are caused by healthy cells being inadvertently affected and destroyed by drugs applied over a wide area of treatment, resulting in hair loss, anaemia, nausea, vomiting and infections, which cause severe discomfort to the patient. As nanotechnology focuses on accuracy, healthy tissues are less affected, resulting in reduced side effects. Similarly to drug delivery, cancer imaging allows nanotechnology to build upon existing principles. The purpose of imaging is to detect cancer before it becomes too severe and before the number of cancerous cells, or the size of the tumour, becomes too large and the cancer is likely to have spread around the body (metastasis). With earlier detection, there is a higher chance of successful treatment and survival.

Cancer can be detected intracellularly or extracellularly. Extracellular detection involves proteins, carbohydrates and nucleic acids found in blood or tissues. Previously this has been difficult due to low biomarker concentrations and the timing of identifying these biomarkers. Nanotechnology provides more sensitive and selective detection, using specific cell-targeting techniques similar to those used in drug delivery.

Early detection of Circulating Tumour Cells (CTCs) can indicate metastasis, which is a severe marker of cancer development. 90% of deaths linked to cancerous tumours are attributed to metastasis, so being able to detect a CTC can be integral in monitoring cancer progression. Various nanoparticles can be used, in particular magnetic nanoparticles (MNPs) and antibody-functionalised MNPs (immunomagnetic nanoparticles) that can target CTCs using specific biomarkers expressed by the cells.

Another determinant of cancer growth is the extracellular environment. Nanoparticles have been developed to image the tumour microenvironment in combination with MRI, where MRI-visible nanoparticles are collected by circulating macrophages and carried to the tumour, providing a more detailed profile of the cancer. There is also an impact on the effectiveness of treatment: radiotherapy, for example, meets resistance in a hypoxic tumour microenvironment and so is less effective. By identifying and monitoring the microenvironment we can adapt treatment to make it more efficient.

Angiogenesis is also recognised as a clear sign of relatively severe and advanced cancer, and imaging of this activity is useful in identifying the characteristics of cancerous cells and growth. Angiogenesis is the formation of new blood vessels, and happens, in cancer, when tumours are growing at such a rate that the body’s natural systems can no longer provide an adequate volume of oxygen and nutrients necessary for cellular growth.

Nanotechnology has advanced this recognition. In comparison to their predecessors, the small-molecule imaging probes used in targeted therapy, nanoparticles address the limited access to the tumour, reduce unwanted binding to the surrounding macrophages that express the same angiogenesis biomarker, and increase the specificity of the identification. This method of identification is used with MRI rather than CT or PET scans, and can provide targeted, image-seeking nanoparticles for use in vivo, something hard to achieve with other imaging modalities.

Another application of nanotechnology to angiogenesis would be in treatment. Angiogenesis is regulated through multiple sets of complex mediators and growth factors (such as VEGF), and therefore targeting these factors can be used to control the blood supply and so the tumour. For example, coating nanoparticles with peptides that bind specifically to these mediators of angiogenesis would inhibit the growth of the tumour.

There are some obstacles to the implementation of nanoparticles in clinical use. There is a lack of research into the harmful effects of nanotechnology, and there are concerns about toxicity. Studies have shown that carbon nanotubes (CNTs) lead to an increased risk of vascular thrombosis in mice, and possible damage to cells. The effects of these CNTs, even when modified to reduce toxicity, need to be explored fully in vitro to determine the danger to humans.

There are also barriers to the production of these nanoparticles. They would need to be produced on a large scale, in highly specific conditions, because if nanoparticles vary from batch to batch then the detection of cancer would also be inconsistent. They would need specific conditions for mass production, transportation, storage, and use. The increasing complexity and dual function (theranostic nanomaterials, as mentioned earlier) of the nanoparticle brings a large benefit to the user but leads to a need for more specialised production.

Lastly, the concerns about safety and toxicity send a message to shareholders that the product could be liable for safety issues. The long-term safety, future use, and impact on human health create uncertainty that is unlikely to generate investment, so most companies would rather wait to see further development. There are therefore large economic barriers to the research and technology development that must be overcome before the use of nanotechnology in the diagnosis and treatment of cancer can become common.

The politics and laws governing the use of nanotechnology have also developed, for example, EU Member States voted to develop nano-specific regulations to investigate ecotoxicity and the capacity of specific nanomaterials to be absorbed into the body.

Alongside this, in Sweden for example, the public view of nanotechnology is of a high-risk endeavour for health and safety, so a lack of support exists. If there is a combined lack of support for nanotechnology among both business owners and the public, then its development through laboratory experiments and clinical trials will fall short.

And so, while nanotechnology poses many answers to our decades-long questions about cancer, there are short-term obstacles that must be removed before we can gain the long-term benefits. Even if we overcome the production limitations, it takes an average of 12 years for something to get from the lab to widespread household use, or in this case hospital use. However, the opportunities are endless, and I believe that this has changed, and will continue to change, the future of cancer treatment.


It is natural for Cranleigh students to dream big and have high aspirations, but where does their underlying motivation come from? Well, this brief section contains a few articles from Cranleigh students informing us of who truly inspires them to strive for success and be the people they are today.

Who inspires me?

Who inspires Lauren?

Jordan Peterson has instilled within me a form of self-belief and conviction I didn't know I even had. He has taught me to ask questions others are too afraid to ask, out of fear of the consequences, and has shown me that the consequences of not asking those questions are far greater. He has taught me to challenge the general consensus and not to follow rules if I think they are pathological or corrupt in nature. More importantly, Peterson has taught me that the most essential thing to remember is that life must be governed by one's own narrative. To live a life is to live an adventure, and the only way to ensure it is your own adventure is to speak the truth. The truth is the best adventure. Firstly, because you really don't know what's going to happen when you say what you think. The outcome of speech is almost identical to the outcome of thought, and there is no way to predict what that outcome will be, so in and of itself that is an adventure. More importantly, though, if it's you and your voice, then it's your adventure. Alternatively, if it isn't your voice, let's say you're manipulating or crafting your speech in order to abide by the dictates of the majority, then that isn't your adventure. That isn't your narrative. That isn't your life. And there's really no point in living a life that isn't your own.

“ Jordan Peterson has instilled within me a form of self-belief I didn’t even know I had

Outside of that fundamental rule, Peterson continues to present me, on a regular basis, with viewpoints that challenge the predispositions I have. Whilst there isn't always unanimous agreement, I learn something from him every day. Every day I become more curious as I realise how little I know about everything. But every day I know a little more about something, thanks to him.

The final thing Peterson has taught me is that there has been this attempt to identify competence and power with tyranny. The response he proposes to this is to become competent and dangerous and to take your proper place in the world, because that is the alternative to being weak. Weakness is not good. He has shown me that the process of becoming dangerous is not done to garner the ability to inflict danger upon other people but is instead to make yourself into the strongest possible version of yourself. There’s a statement in the New Testament that Peterson has talked about on multiple occasions that states: “The meek shall inherit the earth.” Peterson explains, however, that the word “meek,” isn’t well translated at all. What that quote actually means is: “those who have swords and know how to use them but keep them sheathed will inherit the world.”

You have to be powerful and formidable and then peaceful. In that order. The alternative is being naive and weak and harmless, things many young people are being encouraged to be today. Being those things results in an inability to withstand the tragedies of life and renders one completely unable to bear their responsibilities. Then you become bitter, and when you become bitter, that’s when you become really dangerous. So Peterson has taught me to strive to become as competent and strong as I can. To utilise every day in the pursuit of that goal, and to live my life by my own narrative, even when that is the harder option.


Who inspires Lucy?

Who inspires me? There are well-known female scientists such as Marie Curie (the only woman to have won two Nobel prizes for her work on radioactivity, which spanned both physics and chemistry) and Rosalind Franklin (whose early research arguably laid the foundations for the discovery of the double helix structure of DNA). Then there are the pioneers such as Elizabeth Blackwell, who became the first female doctor in the US having been accepted into medical school as a 'practical joke' played by the male student body on the all-male professors, followed closely by Elizabeth Garrett Anderson, whose tenacity paved the way for today's British female doctors. However, a person that really stands out to me is Jane Goodall, whose work on chimpanzees has led to a much greater understanding and appreciation of animal behaviour that was once considered "wild". The behavioural traits that she discovered through years of observation and research illustrated how animals have thought processes and routines much like humans. This was the start of the animal rights movement, and Goodall used her fame to increase awareness of the malpractice and harm humans inflict on animals.

Goodall's quote resonates with me: "I want to make people aware that animals have their own needs, emotions, and feelings - they matter". This quote sums up the importance of her early work and, with my aim of becoming a veterinarian, I believe it should be a centrepiece of how we view animals, and that treatments for animals should be of the same calibre as our human ones. The vast majority of living things in the world are either plants or bacteria: animals make up only around 0.47% of global biomass, and humans a mere 0.01%. Yet it is estimated that hundreds of animal species have already been driven to extinction, and in most cases human activity was a contributory factor. As such, it is clear that despite Goodall's work there is still a long way to go in embedding the protection of animals in the modern day and making sure that every human understands that animals do indeed matter.


History & Current Affairs

The past has never been so relevant and with war in Ukraine and a cost of living crisis, it has never been so important to pay attention to what is going on in the world. In this section, past and present will intertwine as you read about Roman emperors and US court cases.

Review of ‘United States vs Vaello Madero’

We often tend to forget about Puerto Rico’s relationship with the USA as it is rarely in the package of breaking news headlines that are pushed out from the USA to the rest of the globe. However, Ben believes that this relationship is highly contentious and controversial, as highlighted by a recent Supreme Court ruling.

United States v. Vaello Madero is a Supreme Court case about the ability of American citizens residing in Puerto Rico to receive Supplemental Security Income (SSI), a welfare scheme that provides monthly payments to adults and children who have low income and resources and who are blind or disabled. In an 8-1 decision on April 21, 2022, with Justice Kavanaugh writing the majority opinion, the Court held that residents of Puerto Rico are not entitled to SSI benefits, as outlined by previous precedent and something called rational basis. Surprisingly, two liberal Justices, Justice Breyer and Justice Kagan, joined the majority without even writing a concurrence. Justice Sotomayor alone wrote the dissent.

Jose Luis Vaello Madero rightfully received SSI benefits while residing in New York. He moved to Puerto Rico in 2013 to take care of his family. Once a recipient turns 62, the status of their benefits must be clarified with a local administrator. He willingly did this at 62, a few years after moving to Puerto Rico, at which point the administrator realised that he had been claiming SSI benefits while residing in Puerto Rico rather than in New York. His SSI benefits were immediately terminated and the government sued him for the approximately $28,000 he had received from those benefits while in Puerto Rico. He appealed. A federal district judge and the United States Court of Appeals for the First Circuit found that this exclusion violated the equal protection principle of the Due Process Clause of the Fifth Amendment to the United States Constitution.

The primary argument of the majority opinion is that this case has, for the most part, already been decided by Harris v. Rosario and the Insular Cases. The Insular Cases were the 1901 decisions by the Supreme Court regarding the status of the territories acquired by the United States during the Spanish-American War. They define Puerto Rico as a quasi-state, one that is under complete control of the federal government with none of the usual rights or benefits. Specifically, Downes v. Bidwell defines the political status of Puerto Rico as:

'A territory appurtenant and belonging to the United States, but not a part of the United States within the revenue clauses of the Constitution.'

The Insular Cases are commonly acknowledged as decisions highly influenced by the racist ideas of the time, meaning the holdings of those decisions, unsurprisingly, do little to extend the protections of the average American citizen to a Puerto Rican citizen, for example.

You may think: isn't there something in the Constitution that stops the government from being discriminatory in the protections that it gives to its citizens? What a great question. Yes! There is, in fact, an equal protection guarantee that the courts have read into the Due Process Clause of the Fifth Amendment to the United States Constitution. However, as held in Harris v. Rosario, which cites Califano v. Torres:


‘Congress, pursuant to its authority under the Territory Clause of the Constitution to make all needful rules and regulations respecting Territories, may treat Puerto Rico differently from States so long as there is a rational basis for its actions.’

Unfortunately, the text of the Social Security Act, specifically Title XVI (which establishes SSI), defines the United States as the fifty states plus the District of Columbia and the Northern Mariana Islands. Interestingly, the Northern Mariana Islands were included in this definition in 1976, and the covenant establishing their status determined that they were eligible for SSI, unlike Puerto Rico, Guam and the US Virgin Islands. This is a really strange difference in territorial status, considering the Northern Mariana Islands have a population of 58,000 while Puerto Rico has a population of 3.2 million. Furthermore, there isn't a large difference between them in terms of income, with the Northern Mariana Islands having a per capita income of around $17,000. Presumably, the implementation of a federal tax system in the Northern Mariana Islands would have a similar impact on personal income and the economy as the implementation of a federal tax system in Puerto Rico.

In other words, the governance of territories by the federal government does not have to fit within the constraints of the Constitution as long as a 'rational basis' is given. The rational basis applied in this case is that Puerto Ricans do not pay federal taxes and therefore do not receive the benefits typically associated with paying such taxes. The Court holds:

'If this Court were to require identical treatment on the benefits side, residents of the States could presumably insist that federal taxes be imposed on residents of Puerto Rico and other Territories in the same way that those taxes are imposed on residents of the States. Doing that, however, would inflict significant new financial burdens on residents of Puerto Rico, with serious implications for the Puerto Rican people and the Puerto Rican economy.'

This analysis, however, ignores the fact that Puerto Rico has the third lowest per capita income (~$13,000) of any state or territory. This means that forcing Puerto Ricans to pay federal taxes would have a minimal impact on their post-tax income. Furthermore, as many citizens would be entitled to welfare benefits, as many low-income groups are, inclusion would likely have a positive effect on net incomes rather than depriving them further, as the majority opinion suggests.

Out of curiosity, I wondered how the average Puerto Rican would be affected by federal taxes. Assuming 0% ‘state tax’, with an effective tax rate of 0.35% for people earning a salary of $13k, the average Puerto Rican would pay $45 in federal income taxes. This is with the consideration that Puerto Rican residents already pay FICA taxes, including social security, Medicare and unemployment taxes. The serious implications for the Puerto Rican people and economy that Justice Kavanaugh talks about are not clear in this regard. Once welfare benefits are taken into account, Puerto Ricans would be largely unaffected or even better off if they were included in the wider tax/benefits system.
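
As a quick sanity check on that figure, here is a short sketch (using only the $13,000 salary and 0.35% effective rate quoted above; the rate any individual would actually face is an assumption, not something the article establishes):

# Rough check of the federal income tax figure quoted above.
salary = 13_000          # approximate per capita income in Puerto Rico ($)
effective_rate = 0.0035  # assumed effective federal income tax rate (0.35%)
print(f"Estimated federal income tax: ${salary * effective_rate:.2f}")  # about $45.50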

If only there was a political body that allowed representatives from states and territories to discuss their issues and write legislation to fix those issues. If only this political body allowed representation from all territories, many of which are the most impoverished areas in the entire country!

Is Consumerism Bad for People and the Environment?

In modern society, it is easy to become overly materialistic and obsessed with things, something I think we can all attest to having done at some point in our lives. But have we ever considered just how harmful our materialistic habits can be? Well, Rory believes he has found the answer to that question.

Consumerism is a social and economic order that encourages the acquisition of goods and services in ever-increasing amounts. Consumerism is a popular belief because it drives economic growth: people spend more and more money on the newest products, creating a somewhat never-ending cycle of purchases. This helps poorer countries to develop, as rising demand creates an increased demand for workers in factories and mines.

Based on this, most people would suggest that consumerism is a positive thing. It grows the economy in higher-income countries, where most of the major companies are based, and also in lower-income countries, which carry out the manufacturing and extraction of the materials needed to make a product.

Yet the ways in which some of these highly demanded new products are made are not good for workers or the environment. For example, an iPhone needs small quantities of rare metals, such as cobalt, which is extracted in the Democratic Republic of the Congo.

These metals are mined in incredibly unethical ways, with some miners being teenagers who risk their lives to find them. This also has a drastic effect on the environment. Cobalt mining and the methods used to extract the metals produce greenhouse gases such as nitrogen dioxide and carbon dioxide. These gases create a blanket around our atmosphere which allows less heat to escape, so heat becomes trapped, leading to an increase in global temperature - known to us as global warming.

This is all fuelled by the vast amount of advertising, whether you are waiting for a bus, watching a YouTube video or scrolling through a news article online. Advertisement is everywhere, and some algorithms will even serve you targeted adverts. These show you adverts based on what you are interested in (judged by what you look at online) and encourage you to buy something you don't need. This is one of the main reasons why consumerism is so good for the economy. The constant buying increases demand, pushing companies to find cheaper ways of producing a product in order to generate the greatest profit, whether those ways are ethical or not. This has led to sweatshops, where the only concern is how quickly a product is made, and in turn to horrible working conditions and long working hours. Furthermore, there is serious environmental damage as well, with large quantities of harmful gases being emitted, and rivers have had waste chemicals disposed into them, destroying natural habitats.

In conclusion, I believe that the constant use of advertisements, and companies' portrayal of consumerism as purely good with no negative outcomes, has caused significant negative changes in our world. In the future, I would like to see the government impose advertising restrictions so that adverts cannot be targeted at you based on what you search for, because I believe this is one of the main reasons for the increased demand for products which are mainly unethically made. Consequently, I do believe that consumerism is bad for people and the environment, because of its dramatic effect on the people in the places where new products such as the iPhone are made, and the permanent damage it deals to the environment.


Do we need an ethically sound leader?

In a time when cancel culture is rife, and political polarisation is greater than it has ever been before, should we, the people, prioritise a leader untouched by scandal, deemed morally ethical and liked by widespread media, or should we prioritise experience and decisiveness?

The question of an ethical leader strikes many on a daily basis, as we find ourselves in the midst of political turmoil in the UK, with the shortest-serving Prime Minister in modern UK history having just left office and Rishi Sunak having just replaced her (a decision made by Conservative MPs and not the British public). In a Western democratic country where Labour and the Conservatives are no longer two political extremes, and where each party has adopted more centrist policies over the last few decades, this question is even more prominent, as many people no longer have a specific party their views consistently coincide with, and the qualities of the party leader are what tend to influence their vote now.

On the one hand, many people would argue that without an ethical leader who has demonstrated 'good' moral actions throughout their life, prior to their time in office, we cannot be sure they would not bring in discriminatory policies, or even go to the extent of starting conflicts with smaller, more vulnerable countries to obtain natural resources (essentially reverting back to centuries ago, when our country was so steeped in nationalism that we thought the atrocities of colonialism were justified). This point was demonstrated when many of Boris Johnson's opponents used his numerous alleged scandals as a reason to suggest to the public why he should not have been in office.

“ If previous scandals are what leaders are largely judged on, should a politician's unharmful but foolish choices as a teenager impact their political career?

Although, in a time of cancel culture, how can we define what is scandalous and what can be classed as immoral and morally 'bad'? If previous scandals are what leaders are largely judged on, should a politician's unharmful but foolish choices as a teenager impact their entire political career? This view can be widely criticised, as many people make mistakes in their lives and it is nearly impossible to find a leadership candidate who has not been criticised for supposedly immoral past actions. Therefore this might not be the best way to judge leaders.

Conversely, many would argue that a PM who has good press and is seen as untouched by scandal is less valuable than a Prime Minister who is more experienced and decisive and actually makes the country a better place. However, 'making the country a better place' is highly subjective, and decisiveness is only good when the 'right decisions' are being made. It could therefore be argued that an 'unproblematic' Prime Minister should always be valued over someone who is seen as sometimes morally questionable, as the division between politicians who have and haven't been involved in scandal is less subjective.


An example of an 'unproblematic' leader, in a Westernised view, is someone similar to Barack Obama, who was revolutionary firstly in the fact that he was the first black US President, but who also had a stable marriage and a 'normal' family life. He was known to be charitable, supporting organisations like the American Civil Liberties Union as well as founding the Obama Foundation in 2014. Many Republicans, however, argued that he didn't get enough done for the USA in his presidency and, although he was a captivating speaker, Ed Rogers, a supporter of George Bush, said: "I think Obama is not a very effective leader… I think he is a thinker and a ditherer to a fault." In the same BBC article where this view was presented, another critic said that they thought Obama's experience was lacking. There is therefore a clear argument for prioritising experience and competence over morality in a leadership role, if those qualities are used for a good cause. In addition, we have to take into account the kind of leader the general public would want in a national crisis, and how our priorities change within the different periods our country goes through. Many are willing to overlook a leadership candidate's prior scandals if they believe that the candidate is experienced and decisive whilst doing what is right for the country. This is shown with our former PM, Boris Johnson: many supporters were completely unconcerned about his previous rumoured philandering as they believed he was experienced enough and made the right decisions throughout COVID-19 (something that was not so readily dismissed before the national crisis). Additionally, in the leadership race Conservative MPs were very concerned about Sunak's wife's tax arrangements in Britain, and this scandal undoubtedly aggravated the public, whereas now, in a time of economic crisis with the pound falling in value, many might value Rishi Sunak's experience in the financial markets (having worked at the highly regarded Goldman Sachs and then as a partner at multiple hedge funds), and this could be one of the traits that persuades people to vote for him. However, I believe that the need for decisiveness in a time of crisis can be dangerous, as it has clouded people's judgement in the past and contributed to the rise of Hitler, who is responsible for the mass genocide of over 6 million Jews and people from other minority groups in the Holocaust: the German public resorted to political extremism whilst in a national crisis, with obscene unemployment resulting in country-wide poverty. Therefore, although a country faces great change in a time of crisis, we should not let our judgement be clouded by our need for a decisive leader.

To conclude, I believe that morality, experience and decisiveness are all important in a leader, but their relative weight depends on the state a country is in. In a time when our country is at peace with other countries and has a stable economy, we can delve more into the scandals of each candidate for PM and ask which seems to have the strongest moral principles. But when we are in a national crisis, for example the economic crisis happening now, with worrying inflation forcing many to choose between heating their homes or paying for their next meal, we need someone who is experienced in that sector, and we shouldn't choose a less experienced and decisive leader just because the alternative has been involved in scandal. Conversely, this does have a limit: if a Prime Minister is accused of heinous acts which we would deem almost entirely irredeemable for a leadership role (such as hate crime), obviously this person shouldn't even be considered for the job. Finally, even if you do not agree with my point of view, I urge you to look within yourself, to question how much you value morality, experience and decisiveness in your own choice of leader, and to assess your thoughts on this.

Who had the greatest claim to the Roman empire in the 16th century?

It is no secret that the Roman Empire was a force to be reckoned with when it was in its Golden age. However, have you ever stopped and thought about the empire’s last embers? Well, Tom certainly has: and he is here to tell you all about who deserved to hold the rights to the empire over 1500 years after its beginning.

On the 29th of May 1453, the Ottoman Empire conquered Constantinople (modern-day Istanbul), which had been the seat of Roman power for over a thousand years. This was then followed by five powerful monarchs claiming the Imperial Title of Emperor of Rome: Charles VIII of France, Ferdinand II of Aragon and Isabella I of Castile, Ivan III of Russia, Charles V of Austria, and Mehmed II of the Ottoman Empire.

Charles VIII's claim to the Roman Empire was quite limited, as he had purchased the title from Andreas Palaiologos, the nephew of the last Byzantine Emperor, in 1494. However, this was on the condition that Charles would liberate Morea (southern Greece) in his planned Crusade and install Andreas as Despot of Morea, which was never accomplished, as Charles died in 1498. This led to Andreas reclaiming the title, but the new French King, Louis XII, also retained his claim, with all French kings claiming the Imperial Title until Charles IX, who stopped using it in 1566.

Ferdinand II and Isabella I were joint monarchs of Aragon and Castile respectively, with Ferdinand considered to be the first king of Spain. On top of this, when Andreas Palaiologos died in 1502 he gifted his Imperial Titles to both Ferdinand and Isabella in his will, but neither of them used the title, leading to a strong but unused claim.

The Russian claim to the Roman Empire consisted of three main parts: religion, culture, and inheritance. Firstly, in 1472 Ivan III married Sophia Palaiologina, a Byzantine princess and the niece of Constantine XI, the last Byzantine Emperor. This meant that Ivan III's descendants would be next in line to be Roman Emperor, if the title were passed down using the common inheritance laws of Europe. But there was no precedent for the Roman Empire automatically being inherited by the eldest child, which made this claim via inheritance weak, but still viable. The second part of Russia's claim to be the successor of Rome is that of culture. Much of the Byzantine Empire before the fall of Constantinople was ethnically Slavic, the same as the Russian Empire, which led to the Russian monarchs proclaiming themselves protectors of all Slavs. This would mean that Russia should one day reconquer Constantinople and become the new Basileus (King of Constantinople). This was also seen in how the Russian monarchs adopted the title of Tsar, which is derived from Caesar (a title used by Roman Emperors) and once again added to their claim to Rome. Finally, the Russian Tsars claimed Rome through religion: after the fall of Constantinople, many believed that the Orthodox faith should be centred around Moscow, not Constantinople. This was fundamental to the new idea of Moscow being the "Third Rome", and a successor to Constantinople, which cemented Russia's claim to succeed the Roman Empire. This mattered because the Byzantines saw belief in the Orthodox faith as a key distinguisher between themselves and everyone else.


Despite defeating the Byzantine Empire and conquering Constantinople, the Ottoman Empire claimed to be the continuation of the Roman Empire. This is seen in how Mehmed II declared himself Kayser-i Rum (the name used by Arab, Turkish and Persian peoples to describe the Roman Emperors), just as the Russian rulers had declared themselves Tsar. Furthermore, in 1454 Mehmed II appointed Gennadius Scholarius as the Ecumenical Patriarch of Constantinople, who then supported Ottoman claims to succeed the Roman Empire. This was an attempt to have the Orthodox faith proclaim the Ottoman Empire as the true successor of Rome. However, this is questionable, as Scholarius was greatly opposed to Catholicism and hated the enemies of the Ottoman Empire, so this could be seen as Mehmed installing a puppet at the head of the Orthodox Church to influence the Church's views. Despite this, the fact that the Ottoman Empire held Constantinople was a major part of the Ottoman claim to Rome, and it was supported by contemporary scholars. This is seen as George of Trebizond (named after Trebizond, a Byzantine kingdom recently conquered by the Ottomans) said: "He who holds the seat of empire in his hand is emperor of right; and Constantinople is the centre of the Roman Empire." This shows that even those who had recently been conquered by the Ottoman Empire supported the Ottoman claim to the Roman Empire. Mehmed II also came close to being the first ruler in over seven centuries to control both Rome and Constantinople, but he died suddenly on the 3rd May 1481, less than a year after his campaign to capture Rome had started, and so the campaign, and Ottoman ambition in Italy, ended.

On Christmas Day, 800, Charlemagne was crowned Emperor of the Romans by Pope Leo III in Rome. This was because the Byzantine Empire was ruled by Empress Irene, whom the Pope did not believe could hold the position as a woman, and so the Roman throne was seen as empty. Therefore he crowned Charlemagne emperor, which created a new Roman Empire in the West. However, soon after Charlemagne's death this empire collapsed, until 962, when Otto I was crowned Holy Roman Emperor by Pope John XII, marking the rebirth of the Roman Empire in the West, now called the Holy Roman Empire. By the 16th century the Holy Roman Empire was ruled by Charles V, who was also Archduke of Austria and King of Spain, making him one of the most powerful men in all of Europe, akin in power to the Roman Emperors of antiquity. This power, and anointment by the Pope, shows the Holy Roman Empire to be a Western, Catholic version of the Byzantine Empire.

Overall, I would say that the Russian Empire had the greatest claim to the Roman Empire in the 16th century. France and Spain's claims were unrealistic, as is evident in both nations eventually choosing not to use the Roman Emperor title, and because Andreas was never in a position to be Roman Emperor, and so was not able to give the title away to anyone. Furthermore, the claim of Charles V was invalid, as the Holy Roman Empire was merely a way for the Pope to distance himself from the Byzantine Empire and the Orthodox Church. The Holy Roman Empire was also seen as mostly ceremonial, as noted by Voltaire, who said, "the Holy Roman Empire was neither Holy, nor Roman, nor an Empire". Despite the Ottoman Empire holding Constantinople, and the Sultan calling himself Kayser-i Rum, I believe that the Russian claim to the Roman Empire is stronger, as it relies on the culture and religion of the two empires, backed up by dynastic union. This is further emphasised by the fact that, once Constantinople was captured, the Orthodox faith centred itself on Moscow and left Ottoman-controlled Constantinople, showing that the capital of an empire is temporary and can move, just as Rome was moved to Constantinople in 330 AD.

Do US prisons actually cause increased recidivism?

The US prison system has been subject to intense scrutiny over the past 50 years. Zayna poses the question of whether or not they actually work.

The purposes of the criminal justice system are universal across the world: to prevent crime, punish and rehabilitate offenders, and protect the public. Yet many speculate that the USA’s criminal justice system is in reality doing the exact opposite.

Despite a notable decline in crime rate over the past twenty years, the United States incarcerates more of its citizens than any other country. For a vast nation rife with clashing ideologies and a variety of differing identities, it is not unpredictable that the United States should have high crime and incarceration rates, especially taking into account its dubious police system and widely criticised firearm policies. But aside from all of that, there seems to be some kind of fundamental flaw within its criminal justice system which enables criminals to flourish continually at the rate at which they do in the United States. More specifically, there must be a root cause for the soaring rates of recidivism in the country.

Recidivism is the term widely used to describe the act of a convicted criminal reoffending, and the rate of recidivism is considered a chief measure of the performance of a nation’s criminal justice system. It is estimated that over 50% of prisoners in the US will be back in jail within three years of their release, and according to the National Institute of Justice in 2022, just under 44% of criminals released in the US return within a year. These figures now come to symbolise a criminal justice system beset by abuse and powered by preying on minority groups. The United States is commonly used as a prime example in the debate of the punishment versus the rehabilitation of criminals. Within these debates, the US justice system, typically deemed as an excessively punitive system, is often portrayed in a slashing contrast to prison systems such as those in Scandinavia, which focus on rehabilitation of criminals and are shown to have lower recidivism rates.

This vicious cycle of reimprisonment is costly, with a study of 40 US states showing that the cumulative cost of prisons was $39 billion in 2010. It is impossible to deny that recidivism is also costly for the families of prisoners and the prisoners themselves. Loved ones of prisoners, especially those who have been incarcerated on more than one occasion, fall victim to a culmination of economic and psychological distress, as well as the callous burden of social stigma. Prisoners suffer severe isolation from the rest of society, which has been shown to further increase recidivism, as any capacity to self-reflect and feel remorse is rapidly obstructed by anger and resentment towards the prison system. This prevents criminals from working on bettering themselves while imprisoned, increasing the likelihood that they return to destructive behaviour upon release.

A key aspect of the prison system which increases the alienation of prisoners from the outside community, and thus correlates with increased rates of recidivism, is prison design. I find that this aspect tends to be overlooked, or fails to be considered at the forefront of the analysis of the USA’s prison system, despite its significance. I am not talking about the treatment of prisoners within a prison, or their access to facilities, but rather the physical structure of a prison building, and the psychological effect it has on those imprisoned inside. In most cases in the United States, the role of architects is surprisingly minimal in the design of prisons. The job of an architect is to design buildings that are safe and best suited for their purpose, but in the design of prisons in the United States, safety and functionality are often disregarded in favour of the cheapest and least demanding options available, essentially eliminating the “architecture” in prison architecture. This is also influenced by the fact that criminal justice policy is heavily marred by politics in the US, leaving little room for professional criminologists and academics to input their expert opinions.

Through a delve into the history of prison design in the United States of America, I found that although some early nineteenth century prison designs placed great focus on penitence, American prisons have always lacked an effort towards the rehabilitation of criminals. The Pennsylvania system, first applied in 1829, implemented the penitentiary philosophy, where inmates were kept in almost complete solitary confinement and silence day and night, and had their own enclosed exercise yards. Created under Quaker influence, the solitude was thought to encourage penance and contemplation. Not only did this system lack the aspect of recovery and rehabilitation, but it was also short-lived due to the severe mental problems it caused among prisoners.


The other penal system which existed at this time is known as the Auburn System, originating in New York, and has become the model for some prisons around the United States still today. The architecture that came as a result of the Auburn philosophy was, in truth, determined only by the builders, who had the responsibility to contain all inmates as efficiently as possible, not by architects who would have designed humane institutions that abided by the necessary constraints. Total silence was also maintained in Auburn system prisons, but inmates ate and worked together during the day and a greater focus was placed on strict discipline and hard labour, highlighting the imprisonment itself as a punishment. Prisoners were confined to tiny spaces and endured unhealthy living conditions, and yet again, no attention was paid to the rehabilitation of prisoners to create better people who would be able to contribute to society upon release.

Let's put this into perspective: in Scandinavian countries, recidivism rates are one-half to one-third of those in the United States, and though there are many fundamental differences between the Scandinavian prison system and the US prison system, simply the design of prisons themselves has a substantial impact on the mindsets of prisoners. Most Scandinavian prisons bear a striking resemblance to the real world, with plastic windows allowing prisoners to look outside instead of the conventional barred windows, which only serve as a harsh reminder to prisoners of their isolation. In Suomenlinna Prison in Finland, inmates' cells have more in common with dorm rooms, complete with wood furnishings and contemporary art lining the hallways. Prisons in Scandinavia are deliberately designed so that inmates cannot deflect their feelings of bitterness and shame onto their environment. They are unable to distract themselves with complaints of unreasonable rules and inhumane living conditions, so they are forced to place the blame on themselves, encouraging self-reflection and soul-searching.

Zooming out to Scandinavian prisons as a whole, the entire dynamic contrasts hugely with that of American prisons. In Norway, where recidivism rates are the lowest in the world at 20%, guards and prisoners play volleyball together, eat together, and share meaningful conversations. In fact, it is entirely wrong to refer to them as “guards”; they are instead prison “officers” who take on the roles of mentors and life coaches. At the Romerike high-security prison in Norway, inmates have access to a gym, kitchen, and library, and are usually allowed to decorate their cells with personal belongings. The prison also actively reduces recidivism rates by preparing its prisoners for their return to the real world, requiring that they have a place to live and are enrolled in school or have a job when they leave.

Perhaps I have made the solution to the USA’s recidivism problem seem simple and clear-cut, but in reality it is not. The attitude towards harsh punishment of criminals is deeply embedded – the American public tends to feel “safe” knowing that prisoners are kept enclosed by barbed wire and barred windows. Although states such as New York are now looking to implement prisons more similar to those in Norway, critics have questioned whether Norway, a welfare state with already low crime rates and a far more homogenous population, can be used as a realistic model for the diverse and densely populated New York. And of course, comfortable prisons do not come cheap, especially taking into account the expert training required for prison guards. It is important to note that the root cause of the USA’s poor prison design is the desire for cheapness and efficiency, and it is unlikely that this will change while prison policy remains so heavily politicised.

A New World Order: The Role of States, Borders and Individuals

The concept of countries and borders seems irreplaceable; however, Raghav argues this may not be the case. Could the world be transitioning to a new world order?

The current world’s political system consists of nation-states with borders. This system has its origins in the Treaty of Westphalia, signed in 1648, which ended the Thirty Years’ War, a 17th century religious conflict centered in Europe. The major European powers agreed on a system based on the principles of each state having sovereignty over its territory and domestic affairs, of non-interference in another country’s internal affairs, and of each state being equal in international law. This system spread all over the globe in the era of colonization, survived the world wars as well as decolonization and is now taken as the foundation for modern international relations.

The community of nation-states grant recognition to a new or changed nation-state through international bodies like the United Nations provided that the state has a defined territory, a permanent population and is represented by a government which is viewed as capable of exercising control over its territory and of conducting relations with other states. The number of states in the United Nations increased with the termination of the colonial era, and then again with the fall of the Iron Curtain.

In the modern era, nations define themselves in terms of characteristics like language, religion, ethnicity, civilization, ideology and legal and civic values. An interesting insight into the functioning of modern nations has been provided by Benedict Anderson, who explains that a nation is an “imagined community”, in the sense that it is a socially constructed community, imagined by the people who consider themselves to be a part of that group. Media, history writing, national anthems, flags and more help create, organically and/or by intentional design, an example of such common imagination.

The borders that define nations are often contested from within and without, making them vulnerable to instability and change. Powerful countries may expand their borders, while some borders in erstwhile colonies might come about from arbitrary decisions of the colonial powers (think of the Sykes-Picot agreement that defined the borders of the middle eastern states, or of Sir Cyril Radcliffe coming up with the lines that defined the division of the Indian subcontinent in five weeks!); autocracies as well as democracies trample over the political, cultural and religious aspirations of minorities. The Kurdish people are spread out over several countries as are the people of the Tibetan-Buddhist culture.

The collective western experience of the two world wars led to the creation of institutions like the United Nations, aiming to prevent future wars. After the Second World War, the Western European nations took steps towards integration and the avoidance of extreme nationalism, an initiative that would eventually result in the European Union. Even during the Cold War, the two superpowers signed the Helsinki Accords, which essentially respected the status quo and protected the inviolability of borders in Europe, thus avoiding war (though the superpowers fought proxy wars in the third world). The European model of regional integration was adopted, with varying degrees of success, in other continents – SAARC in South Asia, ASEAN in South-East Asia, MERCOSUR in South America, NAFTA in North America, the African Union in Africa. The end of the Cold War brought down the Iron Curtain, and though there was a war in the Balkans, by the 1990s and early 2000s the overall movement of the new states appeared to be towards integration. The reunification of the two Germanys, the Good Friday Agreement, further integration in Europe with the expansion of the EU, increased use of the common currency, the growth of the internet and the globalization of trade - all these factors indicated that the world was moving inexorably towards integration, with softer borders and increased cooperation. Celebrated books of this period, including “The End of History and the Last Man” by Francis Fukuyama and “The World Is Flat: A Brief History of the Twenty-first Century” by Thomas L. Friedman, suggested that the world had converged on a common set of political and economic practices that would help all nation-states and their citizens achieve freedom, peace and prosperity.

However, these trends stalled from the mid-2000s onwards. The predominant world superpower, the United States, undermined the United Nations and its own standing in many parts of the world by launching the Iraq war of 2003. Its long entanglement in the Middle East and Afghanistan depleted its resources and soft power, and the financial crisis of 2008 called into question the strength of the western economic model. The politics of the second decade of this century revealed that the benefits of globalization were not evenly distributed. Though it helped China - and to a lesser extent countries like India and Vietnam - lift hundreds of millions of people out of poverty, within developed countries it led to income disparity and concentration of wealth. Many political scientists attribute the rise of nationalism and isolationism, the changed nature of politics in Britain (Brexit) and the United States (Trumpism), and the increase in support for extreme right-wing movements in France, Italy, Hungary, Brazil and elsewhere to the fallout of globalization.


The rise of China as the world’s second largest economy, a nation arguably possessing the potential to overtake the United States in economic and military power in the future, has brought a new type of challenge to the established order and to the economic and political trends of the 1990s and early 2000s. China’s approach to international relations gives primacy to states over individuals, and asserts that some of the values and norms said to be universal are in fact western. Europe and the US are belatedly re-examining their long-held view that trade with China would encourage a market economy, and that the rise of a middle class within China would bring the Chinese political system closer to democracy. The administrations of Presidents Trump and Biden have passed laws, and President Xi has announced policy initiatives, that could lead to the world economy splitting into two trading and financial systems. In addition, the ongoing war in Ukraine has broken the compact about war and borders in Europe.

And so, while the old order has not collapsed and the trends towards integration have not reversed, it is evident that our world is witnessing the development of powerful political and economic forces that could potentially fashion a new world order with increasing nationalism, stronger borders and less global cooperation.

To end on a positive note, the fight against Covid-19 saw increased cooperation amongst nation-states and global efforts in medicine and science. In a similar fashion, growing concerns over global warming bring all countries together in common forums. This shows that it is still possible to have a world order in which nation-states recognise that the scale of today’s dire issues, such as poverty and war, requires coordinated and resolute effort from all. And so, it is still more than possible to imagine a world whose nations are able to unite in the name of all of humanity.

Could the Axis Powers have won the Second World War?

The Second World War was a conflict of many possibilities. Here, Ted takes us through an academic investigation into whether the Axis powers could ever have had a chance against the Allies.

The Second World War is undoubtedly the biggest and most brutal conflict the world has ever seen, spanning 2,193 days and involving over fifty countries from almost every continent. It led to some of the greatest advancements in science and technology, and yet it also holds the horrifying record for the highest number of casualties ever seen in a six-year period. The conflict began in Europe in 1939, when Germany, later joined by fascist Italy and Japan (together known as the Axis Powers), went to war with the Allies, composed of Great Britain, France and eventually the US and the Soviet Union (amongst some other smaller nations). In only a year, Germany had taken control of most of Europe, invading Scandinavia and most of the eastern and western parts of the continent, only halting at the island of Great Britain. Fronts could be seen opening up in North Africa, the Eastern Front was created between Germany and the Soviet Union, and then the war in the Pacific started after the US joined the Allies in 1941. By the closing stages of the war, the Axis powers were beginning to crumble, and the Allies built up a momentum that carried them to the end of the war, so that they eventually came out victorious when Japan surrendered on 2nd September 1945. After the deaths of an estimated fifty million people, we can see why some argue that the Second World War was a decisive victory for the Allies. But was this really the case? Did the Axis ever have a chance of winning, or were they in fact doomed from the outset?

It all began on 1st September 1939, when Nazi Germany invaded Poland. Since coming to power in 1933, Hitler had made leaps and bounds in turning Germany from a broken, war-stricken nation into a global military and industrial superpower. With an army of 1.5 million soldiers and a booming economy, all Hitler said he needed was Lebensraum (living space) for the German people. This was achieved when Hitler annexed Austria and Czechoslovakia into Germany, and then again when German tanks invaded Poland. Resistance was desperate and chaotic, and when the Soviet Union attacked on 17th September (thanks to the Nazi-Soviet Pact), any remaining hope was lost. Fighting paused until April 1940, when Germany invaded Denmark and Norway, with a significant defeat for the British at the Battle of Narvik. Using Blitzkrieg (meaning lightning war) tactics, Germany then swept through the rest of Western Europe, bypassing the French Maginot Line through the Netherlands and Belgium, before forcing back and encircling Allied troops at the port of Dunkirk. It was here that we see the first major Axis failure, which, if corrected, could have changed the course of the Second World War entirely. The German ground forces could easily have wiped out the 430,000 British and French troops, but Hitler ordered them to hold back. No-one is entirely sure why he did this; some believe he wanted to make peace with Britain, whilst others think he wanted his air force, the Luftwaffe, to finish the job. Either way, this was a fatal error which would come back to haunt Hitler’s chances of succeeding in the war. The British government estimated that 30,000 to 40,000 troops would be evacuated from Dunkirk, but as a result of Hitler’s erroneous decision and the overwhelming bravery of the British people, 338,000 soldiers were rescued from the shores of Dunkirk by British civilian ships and ferries.

After the German failure at Dunkirk came Operation Sealion, a German attempt at the invasion of Britain. To do this, Hitler knew he had to gain air superiority, and so he began sending German bombers and fighter planes over the Channel to destroy British airfields. In response the RAF began bombing Berlin, much to Hitler’s outrage; so much so that he retaliated by bombing London. This bombing would continue throughout most of the war in what became known as ‘The Blitz’. But despite their best efforts, the Luftwaffe lost twice as many planes as the RAF, with 1,800 planes shot down compared to the Allied 900. This again poses the question: could the Axis powers have won the Second World War if they could not take Britain? Being an island, Britain was incredibly difficult for the German forces to invade, and the Luftwaffe was at a significant disadvantage due to the lack of fuel for the long flight across the English Channel, British radar technology, and the manoeuvrability of British planes compared to large German bombers such as the Heinkel 111 or the Ju 88. This situation was very different for other European countries, against which land and air offensives could be launched simultaneously using combined arms warfare. That said, Germany did not have to take control of Britain in order to win the Second World War. Instead, it could have forced Britain to surrender by other means, such as capturing a large number of troops at Dunkirk. This shows that Germany could perhaps have defeated Britain, getting the swift victory Hitler wanted, had it dealt with the Dunkirk situation differently or possessed a more developed air force.

Another problem for Germany was its ally Italy. Whilst the Blitz was raging, Italy opened up the North African Front when it invaded Egypt on 13th September 1940. Things did not go as planned for the Italians, however: the British forces launched a counter-attack, Operation Compass, which pushed the Italian army all the way back to Libya. As a result, Germany sent in reinforcements with the newly formed Afrika Korps, led by General Erwin Rommel. Rommel, also known as ‘the Desert Fox’, became a renowned general during the Second World War, respected even by the likes of Churchill; without him Germany would have stood very little chance against the equally strong Commonwealth army in North Africa. These Allied forces were led by General Bernard Montgomery, known as ‘Monty’, a legendary leader who revived and revitalised the Allied troops with his arrival in North Africa in 1942. My grandfather, who was a squadron leader in the RAF, actually knew and worked with Montgomery himself in North Africa, and assisted in planning the positions of camps as the British and Commonwealth troops moved through the desert. After much back and forth between Axis and Allied forces from 1940-42, Montgomery managed to achieve the crucial victory at the Battle of El Alamein, which marked the beginning of the end for the Afrika Korps in North Africa. Less than a year later came Operation Husky, an airborne and amphibious invasion of Sicily, which was conquered in five weeks and led to the deposition of the fascist Italian dictator Benito Mussolini. Then came Operation Avalanche, the Allied invasion of mainland Italy, which landed near the port of Salerno. After initial gains, a stalemate was reached, which was only broken in early 1944 with the bombing and eventual capture of Monte Cassino, during which my grandfather was once again present. Allied troops continued to force their way through Italy until it was conquered entirely in 1945. So, to conclude, Italy did not provide very much assistance to Germany during the Second World War, and instead had to be bailed out numerous times by German forces. It is also worth mentioning that, after the fascist movement collapsed in 1943, Italy joined the Allies in the fight against Germany, and so German troops rushed into Italy to defend ‘the soft under-belly of Europe’. Therefore, Italy could arguably not even be classified as an ally to Germany, which again displays how impressive the feats achieved by the Germans were, considering they achieved them almost single-handedly. It also begs the question: had Germany had a more forceful ally, could it have won the war in North Africa and Europe?


In summary, we have learnt several things about the Second World War. Firstly, it is a vast period of history that can be written about in enormous breadth and depth, owing to the multiple different fronts, campaigns, operations and battles that took place during the war. With this in mind, we have also learnt that it is incredibly difficult to discern the answers to very complicated questions such as ‘could the Axis powers have won the Second World War?’ Having covered the war in Europe and North Africa only in relative simplicity, and having not mentioned the Eastern Front and the millions of German and Soviet lives lost, nor the entire war in the Pacific between the US and Japan, including events such as the bombing of Pearl Harbour, the decisive sea battles of the Coral Sea and Midway, the amphibious invasions of the Japanese islands of Iwo Jima and Okinawa, and the dropping of two atomic bombs on Hiroshima and Nagasaki, there is still so much more that can be discussed regarding this topic. However, it is my belief that, had Germany executed the conquest of Europe perfectly, had the Eastern Front not been opened up in 1941 (in itself an almost guaranteed defeat), and had the war in the Pacific not occurred, then the Axis could indeed have won the Second World War. Yet given that Germany, faced against countries such as the US and the Soviet Union, could never have won a war of attrition, it was ultimately unlikely that the Axis would ever come out victorious. Hitler knew this, and so attempted to gain a swift victory over the Allies at the start of the war; when this failed, an Axis victory was as good as lost. And so, to conclude, I do not believe that the Axis powers could have won the Second World War, given Hitler’s ambitions to conquer vast swathes of Europe and Asia, as ultimately they would have been defeated one way or another. One could argue that Hitler knew this, and that he was in fact setting his ‘beloved’ nation up to fail from the outset. Maybe this was his greatest delusion of all.

With a reputation for being weird and wacky, this edition’s Hobbies & Interests section of the Journal covers everything from Shakespeare to internet slang.

Music and why it has an effect on us

From the elevator to Reading Festival, music is everywhere. Charles provides a highly interesting and unique look at what music really does to us and why we should probably be listening to it much more often.

Music is something that we all listen to every day in the modern world and quite often multiple times a day for a long period of time. It might be just relaxing on your favourite, most comfortable seat whilst listening to the music that you enjoy the most, or it might be whilst you are doing something else and you are using it as background music. Either way, it is almost impossible to escape.

Research has shown that music is processed in many different parts of the brain. The first stage is the auditory cortex, the part of the brain just above the ears; it is the first region the music reaches, and it analyses features such as pitch, rhythm, melody and amplitude. Music also interacts with the cerebrum, the part of the brain that recalls memories when it hears a particular song or lyric, which is one reason why music can make you feel happy. Music might bring back a happy memory that we had previously forgotten about; however, it can work the other way too, bringing back a sad memory and making you feel sad. Next is the cerebellum, which is heavily involved in your reflex reactions; when the music reaches this part it might make you want to dance or sing in response. When you hear a calm piece of music it tends to calm and relax your muscles, and vice versa: if the song is energetic then it might give you a burst of energy. Finally, the music interacts with the limbic system, which causes you to feel emotions such as joy, sadness, pleasure, excitement and much more.

One very big part of music is how it can manipulate your emotions while you listen to a song. The brain releases dopamine when you listen to a happy song, and studies show that people who listened to upbeat, happy music felt a lot happier in general after just two weeks, thanks to the extra dopamine. Sad music, however, can just as easily manipulate you into feeling sadness: it does not trigger the same release of dopamine, and its slow tempo and more minor tonalities make you feel sad. But why do people still choose to listen to sad music if it makes them feel sad? Doctors believe that listening to sad music improves your imagination and creativity, as well as your feelings of empathy towards other people. This means that while happy songs make you feel happy in the short term, sad songs could make you a better and happier person in the long run.


How does Shakespeare present Henry V as a character?

Most of you reading this will be familiar with Shakespeare. If it’s not from English GCSE, then it’s because he is perhaps the world’s most famous playwright. In this article Alice takes a deep dive into the character of Henry V, aiming to uncover how he is truly presented.

The prevailing idea is that Shakespeare meant to present Henry V as a warrior king and a quintessential figure of English imperialism, but there is far more to him than this. Written c.1599, Henry V was first performed within the lifetime of Queen Elizabeth I, a monarch descended from the widow of the real-life King Henry V, Catherine of Valois. Regarding the play’s motivation, there was less necessity for a heroic presentation of Henry V because of the distance in relation and influence between the dramatised and contemporary monarchs. Less rides on how Henry is portrayed, unlike, for example, the necessity of villainising Richard III and venerating Henry VII to secure the validity and popularity of the Tudor queen, following the usurpation of King Richard by Elizabeth’s grandfather at the Battle of Bosworth. However, it is undeniable that the possibility of his queen watching it would have influenced Shakespeare to write with a certain filter. Even though Henry’s legacy was ultimately nullified by his son’s loss of this war in 1453, bias still swings in King Henry’s favour because of battles won some years earlier, in 1415.

One way we can understand Henry V is in how he is presented as a black-and-white, textbook figure. This contrasts with the earlier-written Richard, who comes across as a more layered, deeply human character. Henry V is an icon of English heritage that Shakespeare’s audience would have thought of with pride. Henry does not speak in asides at any point in the play, always speaking directly to other characters or in prayer. Richard III (written several years earlier, in 1593) bridges the gap between royalty and the common man by speaking in asides to the audience, conniving with them, and thereby creating a more intimate tone. Richard III introduces himself in his first soliloquy, revealing that he is determined to “prove a villain” (Act 1 Scene 1, line 30). In this moment he is removed from a purely historical context, as the audience feels that the real man, Richard, is speaking directly to them. Alternatively, at the beginning of Henry V, the chorus consolidates the king’s historicism and reveals the result of the battle, then at the end of the play conveys the underwhelming fact that France has been lost by Henry’s successor, immediately making redundant everything that Henry fought to achieve. Also, the chorus speaks for Henry V at several points where in other plays the protagonist would contemplate the situation for themselves. In this way, we can learn about Henry V’s presentation via the presence of an intercessor.

We don’t learn Henry’s mind and motivation so much from himself, as from an omniscient narrator, who can feel like an intercessor between the character and the audience, creating a degree of separation that by contrast does not exist in Richard III. In this way, there is patriotism generated not entirely from the victor, but from the victory itself. Richard III is about the downfall of a villain, and the ascension of a hero, using a recent historical dynasty as a medium for the story trope, whereas Henry V is a celebration of the English military, using Henry V’s victory at Agincourt as a medium. We see this particularly keenly in the chorus building excitement in the audience for the battle to come. They do not rouse them on the basis of their king conquering a foe, but using the environment of battle with imagery of “horses, that you see them printing their proud hoofs i’th’receiving earth.”

Perhaps this is why most critics and commentators have argued this to be one of Shakespeare’s less remarkable plays. He was a master of dramatising the human condition in every type of person in every type of situation and perhaps his pitfall was trying to express a person in a more detached way. He did not want the audience emotionally connecting with King Henry, but admiring him from a distance, both dramatically and historically.

Furthermore, Shakespeare presents Henry V as a more distant character later in the play, with more to him than heroism, in his counter-presentation in Act III. Although Henry is a patriotic figure, Shakespeare does not shy away from revealing a more unsavoury part of his character. Before the battle, at Harfleur, King Henry makes a threat against the French civilians to the governor of the town as a way to gain passage through. He threatens to allow his English soldiers to wreak “heady murder, spoil, and villainy,” using graphic imagery of “mowing like grass your fresh fair virgins and your flowering infants” and “naked infants pitted on spikes.”


The argument that Shakespeare was writing a rousing play purely about English might falls down here, as we can see that this scene is completely unnecessary from a patriotic point of view, but necessary when presenting another side to a figure who has only ever been spoken of from one angle. The threat is meant to horrify the English audience and imbue a sense of awe for the king. Although the English won Agincourt, it came at great cost, not just in blood and suffering but also in any pretence to the moral high ground. Henry relents on his threat once allowed safe passage through Harfleur, but his soldiers, and English audiences, get a greater sense of what he is prepared to do for victory.

If he had been denied safe passage through the town and had followed through with the threat, would Henry be as revered a character as he is now? Henry’s Biblical allusion to “Herod’s bloody-hunting slaughtermen” creates a parallel between a Biblical villain and himself. And he presents himself as a tyrant by blaming the governor of Harfleur for the destruction if he does not do what he wants: “will you yield, and this avoid? Or, guilty in this defence, be thus destroyed?”

The Elizabethan audience would have thought of Herod as entirely evil. This speech of threat thereby blurs the lines between good and evil. It begs the question of whether roles are really that simple. And in real life, in history, is it as simple as good and bad rulers, or can the good rulers sometimes be evil, and vice versa? Through this negative portrayal of Henry V, Shakespeare challenges the convention of binary characterisation, urging people to consider nuance and reassess their views.

To conclude, Shakespeare looks at Henry V from a distance. He recognises the positives of his person, as a military hero, but also as a man willing to commit atrocities for his own gain who ultimately achieved nothing, as his legacy was undone by his successor. The audience cannot establish an emotional connection with the character. This allows the audience to reconsider him without the filter of sentimental bias. Richard III is a heavily character-driven play with a close connection between the evil protagonist and the audience; by contrast, Henry V is narrative-driven, urging the audience to use the heroic protagonist, and their distance from him, to consider the concepts of morality in leadership.

The English language and its influence on other languages

English. The most spoken language in the world, as well as the language you are reading this essay in. English is the official language of 59 countries with approximately 1.5 billion speakers. Naturally, English has had a huge influence on other languages, mainly in terms of vocabulary. But just how big is this influence?

The English language has itself been heavily influenced by other tongues: over 300 languages have directly shaped modern-day English vocabulary. Here are some examples of everyday words that come from a variety of languages: Anonymous (Greek), Karaoke (Japanese), Lemon (Arabic), Ketchup (Chinese), Penguin (Welsh), and many more. The English language is a melting pot of different cultures and languages.

All the languages in the world (of which there are over 7,000, counting only those with at least one native speaker) can be divided into language families, while languages that seemingly have no direct relatives are classified as ‘language isolates’. English falls under the Indo-European language family. All languages that make up a family originated from the same ancestral language; in this case English, along with languages such as Spanish, French, Italian, Hindi, Persian, Romanian, Dutch and many others, stemmed from one language. That language is called Proto-Indo-European, from which many others branched, and most of these have since developed in completely unique ways. Contrary to popular belief, English is not actually a direct descendant of Latin. Languages originating from Latin are referred to as the Romance languages; these include Spanish, French, Romanian, Portuguese and Italian, along with locally spoken variants, and they in turn form part of the Italic branch of the Indo-European language family. English, however, belongs to a different branch, the Germanic languages, whose ancestral language is Proto-Germanic and which also includes Swedish, Dutch and Afrikaans. Although English is not directly derived from Latin, it has been significantly influenced by it: more than 60% of English vocabulary comes from Latin or Greek, and the figure rises to around 90% if only science and technology nomenclature is considered. English is just one of thousands of unique and interesting languages.


English has had an immense impact on various languages. Take Japanese, for instance: heavily influenced by English, Japanese borrows many words from other languages (referred to as 外来語 gairaigo), almost 95% of which come from English. Keep in mind that foreign words make up about 10% of Japanese vocabulary in a standard dictionary. As a result, many common, everyday words and expressions come directly from English. Some examples include: spoon (スプーン supūn), living room (リビング ribingu), bus (バス basu), pink (ピンク pinku), television (テレビ terebi), skirt (スカート sukāto), and many more.

Another example of a language that has been influenced by English is Hindi. India is one of the most diverse countries in the world, boasting 21 modern languages and 122 major languages in total, along with 1,599 minority languages. Many Indian languages underwent changes and acquired additional vocabulary as a result of mediaeval Persian settlement. The dominant language in the northern region of India was Hindustani, a group of languages and dialects comprising Modern Hindi and Urdu. The Persian-influenced register of Hindustani is now known as Urdu: a style of spoken Hindustani with many borrowings from Persian. At this time, learning Persian became a scholarly fashion and more and more words were borrowed from the Persian language. Years later, however, when the British arrived, English replaced Persian as the key influence, seen as a language of prestige and a symbol of education. As a result, English created a new style of spoken Hindustani as well. Full of English derivatives, it became the manner in which the most educated people spoke. For example, in the phrase ‘after all we do not use that’, the words ‘after all’ and ‘use’ are borrowed from English, while the rest are of Hindustani origin.

On one hand, English’s influence on other languages can be seen positively. As English is widely spoken across the globe, having a few common words between diverse cultures will make communication easier between proficient speakers and learners. Additionally, it makes learning other languages slightly easier for native English speakers, as well as making learning English easier for those who don’t speak it.

On the other hand, English’s influence can be viewed negatively. Borrowing a few words from a different language is harmless. However, in languages like Japanese, where almost 10% of the everyday language is borrowed from English, it becomes a problem. It takes away some of the uniqueness and individuality of the language and reduces its cultural integrity. Languages are a way in which people can learn more about other people’s cultures, history, religion, etc. If we don’t display variation in our communication, how can we learn to understand and respect other cultures?

As we are used to English being commonly spoken in many countries around the world, we don’t always ask ourselves why. Around five hundred years ago, English was not at all a widely spoken language. In fact, only about five to seven million people spoke English, compared to the 1.5 billion people today. The spread of the English language has nothing to do with the language itself; it is because of political reasons.

When Britain created its empire, it spread its language, among other influences, to all corners of the globe. By the early 20th century, Britain had colonised about a quarter of the Earth, which brought about the popular saying, ‘the empire on which the sun never sets’. In the British colonies, speaking English meant being well-off and educated; it allowed for better career and business opportunities. These people passed English on through their families, and so it still thrives in former British colonies.

When the USA was being established, the founders were aware they needed a mother tongue to instil nationalism in their citizens. English was the most spoken language at the time, and as a result people were encouraged to speak only English. They went as far as banning the teaching of other languages at school and, in some states, even at home. The U.S. Supreme Court only struck down these laws in 1923.

Another reason for the spread of English is that in some of the most significant former British colonies (Australia, Canada and the USA), the languages and cultures of the native peoples have been pushed to near-extinction. This means that English has become the primary language; Australia’s population is about 26 million, Canada’s is about 38 million and the USA’s is about 330 million.

“ Our world is filled with beautiful, unique languages and cultures all of which we should aim to admire and respect.

However, when Britain was creating an empire, so were other European powers. So why is it English that has become the most spoken? During the 19th century, French carried a similar influence to that of modern-day English. The main reason for the rise of English was the rise of the USA during the 20th century. If not for this, French might have been more widely spoken and more important than English today.

Our world is filled with beautiful, unique languages and cultures, all of which we should aim to admire and respect. When we speak, we utter words that have changed in a thousand different ways from when they were first uttered by our ancestors. Even though it may not seem like it, we shape languages every day when we speak. Together, we are creating history and making our mark on this world.


A BEGINNER’S GUIDE TO CHESS

Whilst chess is one of the most popular sports on the planet, many people are still daunted when faced with the opportunity of sitting down in front of the black and white squares. However, you can fear no more as Ben has created a fascinating and highly informative guide on starting chess.

Introduction

The origin of chess dates back all the way to the 6th century in India, as a variation of a popular game at the time known as chaturanga. Since then, it has gained the most players of any sport, with nearly 17 million active players on chess.com in April 2022 alone. Despite this popularity, chess is a notoriously complicated and challenging game to master, with the Oxford Companion to Chess listing 1,327 named openings and variants. In this guide, I will aim to provide a basic overview of chess for beginners and offer assistance to those interested in learning the game.

Basic Rules

Unsurprisingly, chess is much easier to learn if the player understands how the pieces move and the other basic rules of play. This section covers each of the six different pieces involved, along with the other necessary information.

The board

A typical chessboard is divided into an 8-by-8 grid of squares; each square can be occupied by at most one piece and represents an area a piece can move to.

Starting the game

In chess, white always plays the first move, and so this theoretically does give the player with white a slight advantage as they begin with one more tempo (move) than black and will often have the opportunity to attack first. Moreover, both players will have a timer that runs down while it is their turn. If this timer runs out, that player has lost. The amount of time given depends on the format of the game that the participants are playing.

Pawns

Pawns are the piece of lowest value in chess because they can normally only move one square forwards, or capture a piece diagonally, one square forward on either side. However, on its first move of the game a pawn has the option to move either one or two squares forward, as the player decides.

Bishops

Bishops move diagonally along the board and have no limit on the number of squares they can move. Each of a player’s bishops starts on a different colour and therefore remains on that colour throughout the duration of the game. A bishop is worth three pawns; in other words, compared to a pawn it is worth three times as much.

Knights

Knights have the most complicated movement pattern of any piece on the board. They move in an L shape, travelling one square horizontally and then two squares vertically, or one square vertically and two squares horizontally. Knights are the only piece capable of jumping over other pieces (without taking them) in order to reach their destination. Similar to the bishop, they are worth three pawns.
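To make the L-shaped movement concrete, here is a minimal illustrative sketch in Python (the function name, board representation and example square are my own assumptions, not part of any chess library) that lists the squares a knight could reach from a given square on an otherwise empty board.

# Illustrative sketch: squares a knight can reach on an otherwise empty board.
FILES = "abcdefgh"

def knight_moves(square):
    file_idx = FILES.index(square[0])    # files a-h become 0-7
    rank = int(square[1])                # ranks run 1-8
    # The eight L-shaped offsets: one square one way, two squares the other.
    offsets = [(1, 2), (2, 1), (-1, 2), (-2, 1),
               (1, -2), (2, -1), (-1, -2), (-2, -1)]
    moves = []
    for df, dr in offsets:
        f, r = file_idx + df, rank + dr
        if 0 <= f <= 7 and 1 <= r <= 8:  # discard squares that fall off the board
            moves.append(FILES[f] + str(r))
    return moves

print(knight_moves("g1"))  # ['h3', 'f3', 'e2'] - the three squares a knight on g1 can reach

Running the same sketch for a corner square such as a1 returns only two squares, which is one way of seeing why knights are generally weaker on the edge of the board than in the centre.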


Rooks

Rooks are very similar to bishops in that they are able to move an unlimited number of squares if they are not blocked by another piece. However, rooks move in horizontal and vertical planes only instead of diagonally. This means in most scenarios they cover more squares than bishops and are generally more useful. Rooks also have a special interaction with the king which will be covered in the next section. Rooks are worth five pawns.

Queen

The queen is the most versatile piece on the board; it is a combination of the rook and bishop and is able to move an unlimited distance horizontally, vertically or diagonally. In most games losing a queen will cost the player the game. A queen is worth 9 pawns.

King

The king is the most important piece on the board. Chess is won or lost depending on who checkmates the opposing king first. If a king is threatened by another piece (placed in check), the player to whom the king belongs must address the threat by blocking it, moving their king or taking the piece causing the threat. When a player is unable to do this, they have been checkmated and have lost the game. Kings are able to move one square in any direction.
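As a rough illustration of how the point values described above (pawn 1, knight 3, bishop 3, rook 5, queen 9) are used in practice, here is a small illustrative Python sketch; the helper function and the two example positions are hypothetical, chosen only to show how material on each side can be compared.

# Illustrative sketch: compare material using the point values described above.
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def material(pieces):
    # Total value of one side's remaining pieces; the king is never counted.
    return sum(PIECE_VALUES[name] * count for name, count in pieces.items())

white = {"pawn": 6, "knight": 1, "bishop": 2, "rook": 2, "queen": 1}
black = {"pawn": 7, "knight": 2, "bishop": 1, "rook": 2, "queen": 0}

difference = material(white) - material(black)
print("White:", material(white), "Black:", material(black), "difference:", difference)
# White totals 34 and Black totals 26, so White is ahead by 8 points of material.

Counting points like this is only a rule of thumb: as the later sections on piece activity and king safety explain, a well-placed piece or an exposed king can matter far more than a small material edge.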

Other rules

Here are some more obscure yet important rules that a player must know.

Castling

Castling is one of the most important aspects of most standard chess openings. It allows the player to tuck their king away while bringing a rook into the game. Castling works by moving the king two squares horizontally towards a rook and placing that rook on the square the king crossed, so that it sits next to the king on the side facing the centre. A player can only castle if the rook and king involved have not yet moved, there is no piece between them, and the king is not currently in check and does not pass through check in the process. Castling queenside tends to be more aggressive but dangerous, whereas castling kingside is more passive and safe.

“ Castling is one of the most important aspects of most chess openings.

Pawn promotion

If a pawn reaches the far end of the board, it is promoted to another piece of the player’s choice (a queen, rook, bishop or knight). In most scenarios this will result in the pawn becoming a queen. This is a key rule in endgames, where advanced pawns become a key asset or a huge threat, often being valued above pieces they would not normally outrank.

En passant

En passant is a unique and rare move that is unlikely to be seen at beginner level, but it should be noted regardless. It can occur when a player moves a pawn two squares forward on its first move. Immediately afterwards, an opposing pawn standing directly alongside it may capture it as though it had only moved one square, landing on the square the pawn skipped over; if this option is not taken on the very next move, it is lost. Any piece other than a pawn can only be captured on the square it has actually moved to.

Draws

In chess there are a number of ways to draw a game. These include: agreement between both players, stalemate (where a player is not in check but is unable to play any legal move), repetition (where the same position occurs three times), the 50-move rule (where no pawn is moved and no piece is captured for 50 consecutive moves), and timeout with insufficient material (where a player runs out of time but it is impossible for their opponent ever to checkmate them).

Key principles

Control the centre

In most openings in chess the two players will contest over the centre of the board. This is because the centre of the board is where pieces will generally have the best access to the rest of the board and have the greatest mobility. This is why it is normally a more ambitious and advantageous plan to move your centre pawn two squares first so that it immediately contends for the centre. Additionally, when given the option of taking a piece with two of your own pieces, taking towards the centre is a good general strategy (although this does vary with context).


Piece activity

Piece activity is another vital strategy in chess. To summarise it: the more squares a piece has access to, the stronger it is. Therefore, during the game, players should be looking to improve the placement of their pieces as much as possible. This is known as positional play and becomes especially important in high-level chess. The other type of play in chess is tactical play, which is built around forcing your opponent into moves that result in an advantageous position for yourself, for example through checks or sacrifices. Piece activity is the reason why all openings prioritise, to some degree, developing pieces and getting them off their starting squares. In gambit openings the player will even sacrifice material to accelerate development, because the resulting activity is more beneficial than the material given up.

King safety

King safety involves keeping the king as protected as possible from the opponent’s threats on the board. This should be prioritised, as the king is what decides the game. This is also why in most openings the king will normally castle, or manually move to the side of the board, where it is surrounded by pawns and away from the opposition’s pieces. This gives your opponent as little chance as possible to mount a strong attack without overextending and presenting counterplay. However, in the endgame, when few pieces are left on the board, king safety becomes less important. This is because with little material on the board the king can make a huge difference in strengthening attacks, with a much lower threat of being checkmated. At this point the principle is reversed, with king activity becoming an important factor in endgames.

Queen vulnerability

As mentioned earlier, the queen is the strongest piece on the board. However, this also means it is the most vulnerable piece (excluding the king), as whenever it is threatened it becomes a priority to protect. Therefore, at beginner level it is highly recommended not to bring out the queen early in the game; beginners are more likely to blunder the queen away, immediately losing the game.

Openings

A game of chess can be divided into three clear sections: the opening, the middle game and the endgame. Of these three, the opening is the one a player can be most prepared for, so long as they have the correct theory. Played correctly, an opening can give a player a significant advantage against an opponent who is unfamiliar with it. As such, memorising a large repertoire of openings is essential at advanced level. However, in most games of beginner-level chess, players can get away with a lesser knowledge of openings due to the unpredictable nature of beginner games. Even so, for any player looking to progress in chess, learning openings is absolutely vital. For a white opening, I recommend the London System, the Queen’s Gambit, the Fried Liver Attack and the Vienna Opening. With black openings, the required knowledge becomes much more varied, as the player with the white pieces is the one who controls the game during the opening. Therefore, black must be prepared to face all sorts of openings, whether they begin with e4, d4, c4 or knight to f3, to name some of the most common opening moves. Some strong openings I recommend as black are the Caro-Kann, the King’s Indian Defence, the Dragon Sicilian and the Modern Defence.


Weird Internet Words Explained

In this, yet another of Brandon’s unique articles for this Journal, he explains what some words mean.

I love words. I love to look at words, read them, and suddenly have a realisation that this word came from that Latin word, and now it is this word: I love etymology, the study of the origins of words. I also like the internet, but, especially in recent years, many words have come up that I, and many others, don’t understand, and I, and few others, want to know where they came from.

You may be familiar with some of these weird words as they permeate our online world. Spam, (computer) bugs, and cookies. While the journey to their current use seems simple, these words have taken much time to develop.

Firstly, you may know the story that the phrase ‘computer bug’ came from Grace Hopper in 1947, when a moth with a two-inch wingspan was found to be messing up the computer’s processes. What if I told you that this is a complete lie? The phrase had actually been in use for multiple decades beforehand. The first recorded use of the term ‘bug’, with regard to a malfunction of a machine, comes from none other than Thomas Edison, who wrote in an 1878 letter: “I did find a ‘bug’ in my apparatus, but it was not in the telephone proper. It was of the genus ‘callbellum’. The insect appears to find conditions for its existence in all call apparatus of telephones.” Here, the word callbellum is not an actual genus of insect, but an obscure Latin joke: ‘call’ referring to a telephone call and ‘bellum’ being the Latin word for war, presenting his difficulties as an insect, just like the modern term ‘bug’.

You may also believe that the word ‘spam’ originates from the meat product released in 1937. While this is partly true, its true origin can be seen in a 1970 Monty Python sketch. In said sketch, two people visit a restaurant where every item on the menu contains the aforementioned Spam. They clearly do not want spam, and so this could easily be the end of the tale, as spam is supposedly something unwanted put within one’s other items repeatedly. However, at the end of the sketch, some Vikings start chanting the word ‘SPAM’. This, in the late 80s and 90s, was transferred onto online chatrooms, where people began to write SPAM repetitively in the spirit of the spam-loving Vikings, until the chatroom broke. This then moved on to telephone calls, messages, emails, et cetera, to what is now collectively known as spam.

Finally, you may visualise that when a website asks whether you accept cookies, they are handing you a chocolate-chip Maryland, or an angry gingerbread man holding your personal data hostage.

However, you may have had the wrong perception, as it is actually supposed to relate to a fortune cookie. The story is that it was coined from the term ‘magic cookies’, which derives from the fortune cookie: a cookie with an embedded message. A cookie is a small package of data containing an even smaller package of information: the writing inside the fortune cookie.


Of course, surely one of the most important things on the internet is the application of labels; every person can carry several, revealing thousands of details about them. However, some of these generalised words have truly intricate stories behind their creation. Take ‘weeb’, for example: the stereotypical anime maniac.

The story begins in the 19th century, when Westerners reached the island nation of Japan. Of course, as all things must have, there were some people who quickly found a vast interest in Japanese culture, and were self-proclaimed ‘Japanophiles’. This term was used until the 90s, when people began to use the word ‘wapanese’ to describe such people (a portmanteau of either ‘white’ or ‘wannabe’ and ‘Japanese’). However, this term got annoying pretty quickly, especially on the forum site 4chan, and so, jokingly, the creators decided to change it. For a period of time, they couldn’t find anything to change it to… until The Perry Bible Fellowship released a comic.

Of course, there must surely be something behind the word ‘weeaboo’ itself; ‘why was it chosen for this comic?’ I hear you ask. Well… no, there isn’t. It appears that this word came completely out of the blue, simply because it was a funny set of syllables to make a funny comic. Anyway, the 4chan programmers decided to make it so that any word containing the sequence ‘wapanese’ became ‘weeaboo’. This stuck, and was shortened to the common slang ‘weeb’, now used to describe any non-Japanese person with an interest in Japanese culture!

You will also have heard of the Rickroll: when one believes that they are being shown something they wish to see, one is instead presented with the music video of Rick Astley’s 1987 hit ‘Never Gonna Give You Up’. Again, this story begins on 4chan, and of course, with eggs. In 2006, as a practical joke, Christopher “Moot” Poole decided to change the site so that any time one wrote the word ‘egg’, it came out as ‘duck’. This of course led to the creation of ‘duckroll[s]’, from the original word ‘eggroll’, spurring the creation of this image:

It became a popular meme to misdirect people with a certain link and actually lead them to this image. Then, in 2007, this bait-and-switch method was transformed forever, when a user of the website claimed to have the trailer for the then-upcoming video game ‘Grand Theft Auto IV’, which was actually a link to the aforementioned music video. Since then, there has been a riot of content based on this, from the 2008 Macy’s Thanksgiving Day Parade, in which the artist performed a live Rickroll, to the White House Rickrolling multiple people through its Twitter page. In fact, there is so much history to this that I have found a short video on it on YouTube - just scan this QR Code…



There are some words which look simple, but have interesting backgrounds to them, which you may not have thought about.

First, an internet ‘troll’ is not so called because of the fantasy monster, but because of a fishing term. To ‘troll’ in fishing is to draw a fishing line across the surface of the water, baiting fish in slowly and torturously; this was transferred online to describe the action of drawing in and baiting people, only to switch out at the end (much like the aforementioned Rickrolling).

Second, a ‘stan’ is not, in fact, a portmanteau of ‘stalker fan’, though it does come from a song about one. The Eminem song ‘Stan’ is used in this context because it describes a fan who stalks Eminem, though the name Stan was probably only chosen to rhyme with ‘fan’ and ‘man’.

Finally, of course, some etymologies take longer to develop than others, especially the weirder ones. Over the last 5-6 years, on the internet, there has been a massive boom in live streaming video, which has led to some of the strangest words being created…

For instance: ‘poggers’. This story begins in Hawaii, over one hundred years ago, where children played games with milk carton tops, stacking them up and throwing them at the tower to see how many they could flip over. Later, also in Hawaii, a lady by the name of Mary Soon mixed three different fruits together in a smoothie: Passion Fruit, Orange and Guava. This name was quickly shortened to POG. In the tops of the POG cartons were little collectible circles, which were developed and rediscovered in the 90s to create the massive craze of ‘POGS’.

Then, in 2010, a streamer named Ryan sat in front of a green screen and made faces. That’s it, as far as I can tell. However, the next year, he released a video titled ‘POG Championship’, in which he played POGS. Though this is completely unrelated to the face-pulling video, the two got combined, causing one of the faces to be labelled ‘pogchamp’ (see left). From this point on, the popularity of this emote skyrocketed. With the 2016 release of the videogame ‘Overwatch’ and its presentation of the ‘play of the game’, and the massive spike in streaming content between 2016 and 2021, the emote was used 813,916,297 times. Unfortunately, due to some controversial opinions voiced by Ryan, the emote had to be removed, replaced by a komodo dragon.

I have often also been confused by the word ‘handle’ being used to mean a username on Twitter. Well, the origin of the word seems to come from the early 1800s, with ‘adding a handle to one’s name’ meaning to add a title, for example ‘Sir’ or ‘The Honourable’. This had become synonymous with ‘nickname’ by the late 1800s to early 1900s, and in the 1970s it was taken up by users of Citizens Band radio (CB radio), a medium for radio communication over very short distances, who used the term ‘handle’ to mean a unique nickname. This, of course, transferred onto early internet media as a way to identify oneself and others online.

Finally, the face of the internet: memes. As you may know, the word ‘meme’ was coined by Richard Dawkins in 1976, but what you may not have known is that he introduced it in a biology book, as a cultural counterpart to the gene, to mean ‘something which is imitated’. It originates in the Ancient Greek word ‘μίμημα’ (mímēma, ‘that which is imitated’), the root behind the derived term ‘mimeme’, which Dawkins, wanting something shorter that sounded like ‘gene’, cut down to ‘meme’. Recently, the creator has become the creation, with memes of Dawkins circling the internet (see below).


My Experience in Zambia

Sophie speaks of the lessons she learned on an eye-opening trip to Zambia and how it made her reconsider what we should be focusing on in our lives. Please note that this article contains sensitive content which some readers may find disturbing.

When I arrived in Zambia, I was frankly unaware of the world I was entering. As we drove out of the airport and into the unknown, I began to realise that Zambia was like no country I had been to before. During the four-hour bus ride to Nsobe (a game reserve in the Copperbelt region), I was taken aback by almost everything I could see. From the sofa sales on the side of the road to the ever-familiar Nando's, I could not take my eyes off my surroundings. By the time we arrived at Nsobe Game Camp I was eager to begin this new adventure.

It was on our first full day that I had a conversation which I will never forget. Having spent the morning exploring the game camp and visiting Nsobe Primary School, in the afternoon we met with Nsobe Secondary School. Admittedly it took a while for all of us to become comfortable with each other, but soon everyone was at ease and we got talking. One girl in particular was willing to share her story. After we had been talking for about half an hour, she began to open up to me. She was chirpy and ambitious and smiley; her story, however, offered something far more sombre. She openly explained to me some of the devastating things that had happened to her and her family, and did so with composure. I was simply in awe of her. Don't get me wrong, I had been told of the frightening injustices and problems in Zambia, but, hearing these things come to life in what she told me, I became more aware than ever before that life is just not fair. For some context: her sister had become pregnant at twelve; her father had been killed by her uncle out of jealousy, and her uncle had served only one month in prison due to corruption in the police; and she saw her boyfriend only once a year because the bus fare is K55 (equivalent to $2.75) each way. But in spite of all this, she was kind, genuinely bubbly, chatty, optimistic, and she had dreams. She had dealt with these horrors and fought back. She was resilience personified. Throughout the week we engaged with lots and lots of different people, all of whom demonstrated a similar level of resilience.

At Nsobe Primary School I particularly remember watching three children dive into the ground in a ploy to remain 'in' during our game of bulldog, and straight away they jumped up, unfazed. This same group of children were, at the time we visited, also mourning the death of a fellow student, which further emphasises their strength of character. At the Grace School the teachers were unpaid volunteers, yet they still worked seven till four daily in the heat. In the markets the sellers were competing with one another, facing constant bartering and rejection, and yet they persisted and did not lose hope.

This resilience is something that particularly stood out to me as the week went on, and it became harder and harder to fathom how we can possibly be anything other than chirpy and optimistic in our circumstances when they manage it in theirs. Above all, it was incredible to me just how happy they all were, and that is the real message I want to share with you today: you don't need material things, success or wealth to be happy; you just need you, and to be resilient when life gets tough.

