The Academic Journal 2023


Contents

Foreword
Society Overviews
CVD and Mental illness - Aneesa Kumar
Genetics and Schizophrenia - Anisha Tripathi
Contagious Vaccines - Harsha Pendyala
What is an Antibiotic? - Teodor Wator
The environmental impact of current nuclear reactors and whether we will ever create a perfect source of energy - Aaditya Nandwani
Titan – The “Earth-like” moon - Nuvin Wickramasinghe
Maxwell’s Theory of Electromagnetism - Samuel Rayner
Relativity and the Mathematics behind it - Aaditya Nandwani
Why Laplace’s demon doesn’t work - Bright Lan

General

The Shrouded History of Women in STEM
Imperfect Harmony - Benjamin Dakshy
Just Watch Me - Habibah Choudhry
Cats in the Courtroom - Joe Davies
Covid, Ukraine, and the ensuing food crisis - Will Lawson
How has conflict shaped medicine? - Rohan Chivali
Vengeful violence – to what extent is it justified? - Sophie Kerr
Depp v. Heard; Courtroom Case or Societal Struggle? - Nayat Menon

Foreword

It has been a great privilege of mine to put together and edit this year’s edition of The Academic Journal. Over the years, I have read many articles from various journals, so the standard has always set a high bar, and in reading the pieces included this year I was delighted by how intriguing and eye-opening they are. From mental health and medicine to the environment, mathematical theories and current global affairs, this journal really does have everything. This is a credit to the diversity of interests and strengths of the students in this prestigious school. What can also be seen is the strength of the Societies available in the school: their success, with high attendance rates and the baton of student leadership being passed on year on year, continues to produce interesting talks and debates. I would like to thank all the authors of the articles for the time and effort that went into these pieces, and also the Society Presidents for the work they have done to produce their own journals, which I would highly recommend exploring for even more outstanding articles. Please do enjoy, and I hope this may spur you on to finding your own questions to research and write articles about in the future.


Society Overview

Natural Sciences:

“The Natural Sciences Society is one of the oldest communities at St Olave’s where we explore the fascinating workings of the world we live in. The society sessions involve a diverse range of topics that explore the boundaries of Natural Sciences to incorporate ideas from a multitude of other disciplines such as space physics, medicine, psychology, and more. As a result, we host bespoke and creative sessions that capture the curious nature of our members. We also publish the jam-packed Natural Sciences Journal which collates many fantastic articles from passionate students across the school. We run every Tuesday Lunchtime, so join the NatSci Teams to receive news about our upcoming talks and opportunities (MS Teams code: dasys9e). We hope to see you there!”

Islamic Society:

Islamic Society runs every Friday at 1pm for congregational ‘Jummuah’ prayers, providing a space for Muslim students at St Olave’s to feel comfortable practising their faith, as well as getting to know people across different year groups. The prayers are led by a different person each week, who chooses an interesting topic to talk about beforehand (a talk called a Khutbah!). In the past, we have had talks on ‘Dealing with Anxiety’, ‘The Importance of Charity’ and ‘Handling Your Emotions’, among many more. However, ISoc is not just limited to praying! So far, we’ve run many charitable events, such as an Ice Lolly Sale in Summer 2022 to raise money for Palestine, the first ever Inter-Faith Iftar event, which raised over £2K for Bromley Food Bank, and a Krispy Kreme Doughnut Sale to raise money for Young Minds.


Hindu Society:

Hindu Society is a community for the Hindus at this school. You don’t have to be very religious to participate, and previous members have ranged from not very religious to very religious. It is a safe space for the sixth form to explore ideas related to Hinduism and to celebrate Hindu festivals together. All in all, we promise to be a very fun society: we run large events like the Sixth Form Diwali, with dances, food and more, but also debates on topics such as the relationship between religion and science.

Physics and Engineering Society:

Interested in physics and engineering? Physics and Engineering Society is a place where anyone can talk about or demonstrate any area of physics that interests them. We have talks on all areas of physics – astrophysics, quantum mechanics, aerospace engineering, particle physics, theoretical physics, relativity, mechanical and electrical engineering and so much more! We have also had demonstrations of various engineering projects, such as a wind tunnel model, and have external speakers talk about studying physics at and beyond university. We also have an annual society journal giving you an opportunity to write about what interests you the most in the world of physics (there are journals available in the science department if you want to have a look!). We run in S8 every Friday lunchtime (a fun way to end the week!). To join the society team, use the code g1zth51.

Medics society:

Medics Society at St Olave’s is the biggest and arguably most flourishing society there is. Our aim is to build a medical community within our school, encouraging, supporting and driving all our aspiring medics towards a fulfilling future career. We recognise this isn’t an easy road to traverse alone, so Medics Society provides a plethora of experiences to aid with the application process, from debates, discussions and student-led presentations to external speakers such as consultant doctors and current university students. Medics Society is the perfect place to gain some insight into medicine, and we look forward to seeing more and more people join our medic family!

Chemistry Society:

If you are intrigued by the world that surrounds us and what we can do with it, then the exciting talks, practicals and demonstrations (for the really dangerous stuff!) we have at Chem Soc are what you need. From the science behind luminescence to the science of poison, or perhaps how to make your own artificial diamonds: we’ve got it all. See you at 1pm every Thursday in S2.

Classics Society:

Classics Society is open to all on Tuesdays at 1.10 in room 23. Every week we gather together to learn more about the ancient world through talks, activities, videos, quizzes and debates. No prior knowledge is required, just a willingness to get excited about myths and monsters, language and literature, history and heartbreak (looking at you, Catullus). We’re a relaxed and welcoming space, so everyone is welcome to follow in Horace’s footsteps and ‘sapere aude’ (dare to think!).

Space society:

From those intrigued by the complexities of the science behind space to those drawn to its vastness and beauty, Space Society is the perfect place for all scientists, and our humanities students, to discuss everything related to space. We have talks as well as discussions and competitions, and some of our favourite talks have ranged from Fermi’s Paradox (a paradox concerning the existence of aliens) to the maths behind rocket science. Space Society runs on Monday lunchtime in S11 and is open to all year groups.

Afro-Caribbean Society:

Welcome to Afro-Caribbean Society! Come along to learn more about African and Caribbean culture, and talk about your own experiences. With heated debates, informative talks, and entertaining games and quizzes, there is always something fun going on at ACS. We welcome students of all races and backgrounds.


The Increased Risk of Cardiovascular Complications due to Mental Illnesses

Aneesa Kumar

Introduction

Mental health disorders are characterised as clinically significant disturbances in an individual’s cognition, emotional regulation, or behaviour, and are usually associated with distress or impairment in key areas of functioning. These illnesses range from the more common disorders, such as anxiety and depression, to others such as anorexia, post-traumatic stress disorder (PTSD) and schizophrenia.

For centuries, a mind-body relationship has been postulated, and findings from various epidemiological studies have shown the impact of depression, trauma, anxiety, and stress on the physical body, including the cardiovascular system. As such, the damage to the system caused by mental distress can be regarded as a contributing risk factor for heart disease. According to the World Health Organisation’s world health statistics (2022), some of the most common behavioural and metabolic risk factors for non-communicable diseases (NCDs) such as cardiovascular disease (CVD), e.g. substance abuse and hypertension, are often linked to poor mental health, particularly mentally induced stress. A recent study led by Rossom, R. (2022), a senior research investigator in behavioural health, also found the estimated 30-year risk of CVD to be significantly higher among individuals with serious mental illnesses, at 25% compared with 11% for those without a serious mental illness.

Despite abundant investigation demonstrating a clear relationship between mental health and cardiovascular disease, patients with coronary disease, myocardial infarction, heart failure, and arrhythmias are rarely assessed for psychological distress or mental illness as a contributor to, or consequence of, the cardiovascular disorder.

Correlation between depression and negative lifestyle habits

Clinical depression is a common but serious mood disorder and is estimated to affect 5% of adults globally. It is also recognised by the WHO as a major contributor to the overall global burden of disease and one of the most prevalent mental health disorders. Patients with depression have shown increased platelet reactivity, decreased heart rate variability and increased proinflammatory markers, all of which are risk factors for CVD. As with other mood or stress related disorders, depression may also result in increased cardiac reactivity as well as heightened levels of cortisol, putting unwanted stress on the cardiovascular system. Over time, these physiological effects can lead to calcium build-up in the arteries, metabolic disease, and heart disease.

However, the greatest risk of developing CVD from depression is due to physical inactivity as a result of fatigue, accompanied by the adoption of new and unhealthy behavioural habits. These changes in behaviour often include smoking and/or excessive alcohol consumption, a poor diet, lack of exercise, and failure to adhere to medical advice, all of which are CVD risk factors. In fact, smoking, poor diets high in cholesterol and/or lipids, and little regular exercise are, after high blood pressure, the next leading causes of CVD; high blood pressure can itself also be affected by such lifestyle habits.

Figure 1: Diagram illustrating the cyclical nature of heart disease and depression (Vigerust, D., 2021)

As shown in Figure 1, the relationship between the negative lifestyle habits of depression and the development of CVD is also cyclical: it creates further emotional distress, which can then increase the risk of an adverse cardiac event, such as blood clots or myocardial infarction (heart attack), in patients already diagnosed with heart disease.

Physiological distress and anxiety

Unsurprisingly, anxiety disorders are the most prevalent mental illnesses worldwide, often coming hand in hand with other mental disorders such as schizophrenia, eating disorders, and sometimes even depression. Furthermore, whilst the definition of anxiety disorders implies chronicity, anxiety and negative emotions like anger, fear, grief, and severe emotional distress, of which people suffering from mental illnesses will experience at least one, result in what is referred to as the “stress response” and can have a major impact on the cardiovascular system.

The “stress response” is triggered when mentally induced stress is detected by the limbic system and a distress signal is sent to the hypothalamus, which then sends its own signals through the autonomic nervous system to the medulla of the adrenal gland. The adrenal gland releases adrenaline, a catecholamine hormone, which activates the sympathetic “fight-or-flight” response. Consequently, both pulse and blood pressure increase temporarily, and arteries constrict, which may cause myocardial infarction or induce cardiac irregularities, including atrial fibrillation, tachycardia, and even sudden death. Repeated temporary increases in blood pressure may also lead to plaque disruption, resulting in myocardial infarction or strokes, and, in patients with a weakened aorta, such as those with an aortic aneurysm or survivors of an aortic dissection, may lead to aortic dissection or rupture.

As the initial surge of adrenaline subsides, the hypothalamus then activates the second component of the stress response system, known as the hypothalamic pituitary adrenal (HPA) axis (see Figure 2).

The HPA axis is activated by chronic stress, causing the hypothalamus to stimulate the secretion of adrenocorticotropic hormone (ACTH) from the pituitary gland. ACTH then stimulates the adrenal gland to release another stress hormone known as cortisol. These elevated cortisol levels cause an increase in platelet activation and aggregation, which can lead to atherosclerosis. Elevated cortisol may also result in high blood glucose levels, high blood pressure and inflammation, damaging the blood vessels.

Figure 2: Diagram illustrating effects of stress on hormone levels and the resultant impact on the cardiovascular system (Nemeroff, C. & Goldschmidt-Clermont, P., 2012)

Eating disorders and cardiac issues

Cardiovascular complications of eating disorders are extremely common and can be very serious. Anorexia nervosa, in particular, can be detrimental to the heart, with heart damage being the leading reason for hospitalisation in people with this eating disorder.

Cardiac deaths as a result of arrhythmias account for approximately 50% of deaths in patients with anorexia. The reason is that the disorder involves self-starvation and intense weight loss, which not only denies the body essential nutrients to function, but also forces the body to slow down to conserve energy. The heart thus becomes smaller and weaker as it loses cardiac mass, making it more difficult to circulate blood at a healthy rate, as the deteriorating heart muscle creates larger chambers and weaker walls. Consequently, bradycardia (an abnormally slow heart rate of less than 60 bpm) and hypotension (low blood pressure, under 90/60 mmHg) are extremely common in such individuals. Patients with anorexia may also experience sharp pains beneath the sternum, which could be a symptom of mitral valve prolapse occurring due to a loss of cardiac muscle mass; this can improve with weight gain. However, chest pain may also be a more serious sign, indicating congestive heart failure, which only has the potential to improve with proper treatment. In other forms of eating disorders, such as bulimia nervosa, the biggest cardiac risk is arrhythmia due to an electrolyte abnormality, such as low serum potassium or magnesium. The imbalance of electrolytes is caused by purging and is another factor that can put the individual at risk of heart failure. Binge eating disorders carry these same risks but also those associated with obesity, such as hypercholesterolemia (high blood cholesterol), hypertension (high blood pressure) and diabetes.

Antidepressant therapy and cardiovascular considerations

Interestingly, in previous years there was more concern over the cardiac complications that may arise from taking antidepressants, such as selective serotonin reuptake inhibitors (SSRIs), than over the actual cardiovascular risks of mental illnesses themselves. However, a cohort study involving 238,963 patients aged 20 to 64 years with a first diagnosis of depression between 1st January 2000 and 31st July 2011 assessed the associations between different antidepressant treatments and cardiovascular complications (Coupland, C. et al., 2016) and found no evidence that SSRIs are linked with an increased risk of arrhythmia, stroke or transient ischaemic attack in people diagnosed as having depression. Furthermore, despite the beliefs of many, there was also no evidence to suggest that citalopram is associated with arrhythmia, even at high doses. Instead, the study actually found some indication of a reduced risk of myocardial infarction with some of the SSRIs, particularly fluoxetine.

Conclusion

Mental distress and mental illnesses are real and can be associated with severe cardiovascular consequences. According to Von Korff, M.R. et al. (2016), as cited by Stein, D.J. et al. (2019), the odds ratios in the World Mental Health Surveys for the association of heart disease with mental health disorders were 2.1 for mood disorders, such as major depressive disorder or bipolar disorder, and 2.2 for anxiety disorders. These strong associations and inter-related causal mechanisms of mental health disorders and CVD, alongside many other NCDs, thus argue for a joint approach to care. Treatment of mental disorders should optimally incorporate attention to physical health and health behaviours, with this parallel focus on physical health beginning as early in the course of the mental disorder as possible, as primary prevention of NCDs such as CVD. It is also arguable that mental-physical comorbidity would be better addressed by an early focus on the physical health of those with mental disorders rather than a later focus on the mental health of those with chronic physical conditions.

References

Bremner, J.D., Campanella, C., Khan, Z., Shah, M., Hammadah, M., Wilmot, K., Al Mheid, I., Lima, B.B., Garcia, E.V., Nye, J., Ward, L., Kutner, M.H., Raggi, P., Pearce, B.D., Shah, A.J., Quyyumi, A.A., & Vaccarino, V. (2018) Brain Correlates of Mental Stress-Induced Myocardial Ischemia. Psychosom Med. 80 (6), 515–525. Available from: doi:10.1097/PSY.0000000000000597

Bucciarelli, V., Caterino, A.L., Bianco, F., Caputi, C.G., Salerni, S., Sciomer, S., Maffei, S. & Gallina, S. (2020) Depression and cardiovascular disease: The deep blue sea of women’s heart. Trends in Cardiovascular Medicine. 30 (3), 170-176. Available from: doi:10.1016/j.tcm.2019.05.001

Centers for Disease Control and Prevention. (2020) Heart Disease and Mental Health Disorders. Available from: https://www.cdc.gov/heartdisease/mentalhealth.htm [Accessed 22nd August 2022].

Chaddha, A., Robinson, E.A., Kline-Rogers, E., Alexandris-Souphis, T. & Rubenfire, M. (2016) Mental Health and Cardiovascular Disease. The American Journal of Medicine. 129 (11), 1145-1148. Available from: doi:10.1016/j.amjmed.2016.05.018

Coupland, C., Hill, T., Morriss, R., Moore, M., Arthur, A., Hippisley-Cox, J. et al. (2016) Antidepressant use and risk of cardiovascular outcomes in people aged 20 to 64: cohort study using primary care database. BMJ. 352 (8050). Available from: doi:10.1136/bmj.i1350

Mayor, S. (2017) Patients with severe mental illness have greatly increased cardiovascular risk, study finds. BMJ. 357. Available from: doi:10.1136/bmj.j2339

Nemeroff, C. & Goldschmidt-Clermont, P. (2012) Heartache and heartbreak – the link between depression and cardiovascular disease. Nature Reviews Cardiology. 9 (9), 526-539. Available from: doi:10.1038/nrcardio.2012.91

Northwestern Medicine. (n.d.) Disordered Eating and Your Heart. Available from: https://www.nm.org/healthbeat/healthy-tips/anorexia-and-your-heart [Accessed 22nd August 2022]

Pozuelo, L. (2019) Depression & Heart Disease. Available from: https://my.clevelandclinic.org/health/diseases/16917-depression--heart-disease [Accessed 22nd August 2022].

Rossom, R.C., Hooker, S.A., O’Connor, P.J., Crain, A.L. & Sperl-Hillen, J.M. (2022) Cardiovascular Risk for Patients With and Without Schizophrenia, Schizoaffective Disorder, or Bipolar Disorder. Journal of the American Heart Association. 11 (6). Available from: doi:10.1161/JAHA.121.021444

Rudnick, C. (2014) Cardiovascular Complications of Eating Disorders. Available from: https://www.mccallumplace.com/about/blog/cardiovascular-complications-eating-disorders/ [Accessed 22nd August 2022]

Stein, D.J., Benjet, C., Gureje, O., Lund, C., Scott, K.M., Poznyak, V. et al. (2019) Integrating mental health with other non-communicable diseases. BMJ. 364. Available from: doi:10.1136/bmj.l295

Vigerust, D. (2021) Can Depression Have A Negative Effect On Heart Health? Available from: https://www.imaware.health/blog/depression-and-heart-health [Accessed 23rd August 2022]

World Health Organisation. (2022) World health statistics 2022: monitoring health for the SDGs, sustainable development goals. Available from: https://www.who.int/data/gho/publications/world-health-statistics [Accessed 23rd August 2022]


Genetics and Schizophrenia

Anisha Tripathi

What is schizophrenia?

The term schizophrenia means ‘split mind’, and describes a detachment between the different functions of the mind, so that thoughts become disconnected and coordination between emotional, cognitive, and volitional (acting based on your own will) processes becomes weaker. It affects approximately 24 million people, or 1 in 300 people (0.32%), worldwide. It is most apparent during late adolescence and tends to occur earlier in men than in women.

Symptoms

• Hallucinations

• Delusions

• Losing interest in everyday activities

• Wanting to avoid people

• Disorganised thinking (speech)

• Abnormal motor behaviour

Is schizophrenia genetic?

By pooling data from many studies carried out between 1920 and 1987, the American psychologist Irving Gottesman was able to show that the risk of schizophrenia increases from approximately 1% in the general population to nearly 50% in the offspring of two schizophrenic patients and in identical twins of schizophrenics. The concordance of schizophrenia between monozygotic (identical) twins is around 31 percentage points higher than that between dizygotic (fraternal) twins, strongly suggesting that there is a hereditary factor in schizophrenia. If the condition were purely genetic, however, then the concordance between monozygotic twins would be 100%. Furthermore, in Gottesman’s review he noted that 89% of patients have parents who are not schizophrenic, 81% have no affected first-degree relatives, and 63% show no family history of the disorder whatsoever. However, the reliability of these statistics is limited, since family members may be unaware of, or unwilling to disclose, this information due to the stigma around mental illness. One reason for the roughly 50% concordance between monozygotic twins may be epigenetic influences, which alter gene expression through DNA methylation and histone modification. Evidence supporting the role of epigenetic effects is presented in a study that found increased methylation of the dopamine D2 receptor gene (a receptor linked to schizophrenia) in a male without schizophrenia, in comparison to his monozygotic twin brother and another sibling with schizophrenia.

Figure 1: Rates of schizophrenia among relatives of schizophrenic patients (1991)

Why is it so difficult to identify the genes causing schizophrenia?

RFLP (restriction fragment length polymorphism) analysis has been used to find where a specific gene for a disease lies on a chromosome. It has helped to identify the faulty gene in several hereditary disorders such as Huntington’s disease, muscular dystrophy, and cystic fibrosis. Schizophrenia, however, has a complex pattern of inheritance. Evidence from genetic studies suggests that there may be several genes responsible, each with a small effect. These genes interact with each other and with environmental factors to influence a person’s susceptibility to schizophrenia. However, none of these genes is either necessary or sufficient to cause schizophrenia. The great difficulty for genetic studies of schizophrenia is that there is no clear pattern of inheritance. A recent study undertaken by Cardiff University analysed DNA from 76,755 people with schizophrenia and 243,649 people without the condition, and identified 120 genes likely to contribute to the disorder. Although there are large numbers of genetic variants involved in schizophrenia, the study showed they are concentrated in genes expressed in neurons, pointing to these cells as the most important site of pathology. The findings also suggest that abnormal neuron function in schizophrenia affects many brain areas, which could explain its diverse symptoms, including hallucinations, delusions, and problems with thinking clearly.

Schizophrenia and the Brain

Inside the brain, there are two lateral ventricles and a third and fourth ventricle which are filled with fluid (cerebrospinal fluid). Enlarged cerebral ventricles are found in 80% of individuals with schizophrenia. The mechanisms that lead to this ventricular enlargement are unknown although it is believed to be linked to the deletion of a region on chromosome 22 which increases the risk of developing schizophrenia approximately 30-fold in humans.

Brain scans undertaken by the UK Medical Research Council discovered higher levels of activity in part of the brain’s immune system in schizophrenia patients in comparison to healthy volunteers. Microglia are the immune cells of the brain and regulate brain development, maintenance of neuronal networks and injury repair. A chemical dye which sticks to microglia was injected into 56 people to record their microglial activity. The scans showed that levels of microglial activity were highest in those with schizophrenia and elevated in those at high risk of developing the condition.

It is thought that the microglia may sever the wrong connections in the brain, leaving it wired incorrectly. This links to the symptoms of the illness, as patients often make unusual connections between what is happening around them, as well as mistaking their own thoughts for voices outside their head.

Environmental factors

Stress

When under stress, the brain releases the hormone cortisol. Cortisol has been shown to damage nerve cells in the hippocampus. Excessive cortisol production, damage to the hippocampus and impairment in memory are all common occurrences in patients with schizophrenia. Research has shown that these patients have smaller hippocampal volumes than those without the disease. These stresses can be brought about by events such as bereavement, losing your job or home, divorce, or physical, sexual and emotional abuse.

Figure 2: Ventricles of the brain
Figure 3: Brain scans show higher levels of microglia activity (orange) in people with schizophrenia.
Figure 4: The location of the hippocampus in the brain.

Substance abuse

Dopamine is a neurotransmitter which the brain releases in response to pleasurable activities, such as eating food. It boosts mood, motivation and attention and helps to regulate movement, learning and emotional responses. Most drugs work by interfering with neurotransmission in the brain, and this interference can happen in many ways. Drugs that cause receptors to be over-stimulated are called agonists. Amphetamine in large doses can cause hallucinations and delusions because it is a dopamine agonist: it stimulates the axons of dopamine-containing neurons, causing the synapse to be flooded with this neurotransmitter. This over-stimulation of the dopamine receptors results in hallucinations and delusions.

The use of cannabis among schizophrenic patients is associated with greater severity of psychotic symptoms as well as earlier and more frequent relapses. The Edinburgh High Risk Study, conducted between 1994 and 2004, was carried out to determine the features that distinguish high-risk individuals who go on to develop schizophrenia from those who do not. The study found that, in genetically predisposed individuals, high cannabis use is associated with the development of psychotic symptoms. This suggests that schizophrenia is a result of an interaction between genetic and environmental factors.

Ketamine, Phencyclidine (PCP) and ecstasy have also been reported to induce hallucinations, delusions, and paranoia.

Obstetric complications

A 2018 study published in the journal Nature Medicine shows that serious obstetric complications such as pre-eclampsia (high blood pressure during pregnancy), asphyxia (lack of oxygen during birth) and premature labour can increase the risk of developing schizophrenia five-fold in a child who is genetically predisposed to the condition. This is because these complications appear to ‘turn on’ genes in the placenta that have been associated with schizophrenia. The placenta is composed of foetal and maternal tissue and is a vital organ for the well-being of the foetus.

Nutritional factors

A lack of certain micronutrients and general nutritional deprivation are factors which increase the risk of schizophrenia. The Dutch Famine Study of 1998 found that rates of schizophrenia doubled amongst individuals who were conceived under circumstances of nutritional deprivation during early foetal development. Further studies in 2001 found evidence that low birth weight can be associated with schizophrenia.

Treatments for schizophrenia

Prior to the discovery of antipsychotic drugs in the early 1950s, a common treatment for schizophrenia was insulin coma therapy. This consisted of giving the patient increasingly large doses of the hormone insulin, which reduces the sugar content of the blood to produce a state of coma. The patient was then kept in a comatose condition for an hour, after which they were brought back to consciousness by administering a warm sugar solution or an intravenous injection of glucose. This would result in a remission of symptoms for a period which varied from several months to a couple of years. Insulin coma therapy is rarely used today due to the very high risk of a prolonged coma from which it is impossible to bring the patient out using the usual methods. Electroconvulsive therapy (ECT) was initially used for the treatment of schizophrenia, but over the years its use in schizophrenia has become limited. Before an ECT treatment, a general anaesthetic and a muscle relaxant are given to the patient to restrict movement during the procedure. The treatment consists of electrodes being placed at precise locations on the patient’s head; breathing, heart rate and blood pressure are monitored throughout. For roughly a minute, a small electric current passes from the electrodes into the brain and triggers a seizure. ECT causes changes in the patient’s brain chemistry and can thus result in a change to a patient’s catatonic symptoms.

Antipsychotic drugs

As mentioned earlier, it is the drugs that stimulate the dopamine system that produce psychotic states most like schizophrenia. Thus, drugs that block dopamine receptors in the brain can successfully treat schizophrenic symptoms. Antipsychotic drugs bind to dopamine receptors without stimulating them, preventing the receptors from being stimulated by dopamine. This reduced stimulation of the dopamine system reduces the severity of hallucinations and delusions in those with schizophrenia. Despite this, there is still no direct evidence that schizophrenic symptoms are due to an excess of brain dopamine, although post-mortem brain studies have revealed increases in the densities of dopamine D2 receptors (the main receptor targeted by most antipsychotic drugs).

Other treatments for schizophrenia include admission to a psychiatric ward in a hospital. Under the Mental Health Act (as amended in 2007), people who are at risk of harming themselves or others can be compulsorily detained in hospital. Therapies such as cognitive behavioural therapy, family therapy and arts therapy are also prescribed.

To conclude, it is undeniable that genetic factors play a significant role in the onset of schizophrenia. However, it can be deduced that the condition results from a combination of both environmental and genetic factors, with environmental effects having a larger impact when the individual is genetically predisposed to the condition. Overall, research into the genes responsible for schizophrenia, and into the mechanisms causing the neurological changes seen, remains largely inconclusive, and thus there is still a long way to go before finding a potential cure for this illness.

References

NHS (2021). Symptoms - Schizophrenia. [online] nhs.uk. Available at: https://www.nhs.uk/mental-health/conditions/schizophrenia/symptoms/. [accessed 21/07/2022]

Figure 1: Schizophrenia.com. (2019). Schizophrenia.com - Schizophrenia Genetics and Heredity. [online] Available at: http://www.schizophrenia.com/research/hereditygen.htm [accessed 21/07/2022]

Karki, G. (2017). Restriction fragment length polymorphism (RFLP): principle, procedure and application. [online] Online Biology Notes. Available at: https://www.onlinebiologynotes.com/restriction-fragment-length-polymorphism-rflp-principle-procedure-application/ [accessed 21/07/2022]

Cardiff University. (2022). Biggest study of its kind implicates specific genes in schizophrenia. [online] Available at: https://www.cardiff.ac.uk/news/view/2616522-biggest-study-of-its-kind-implicates-specific-genes-in-schizophrenia [Accessed 5 Aug. 2022].

Figure 5: The efficacy of antipsychotic drugs depends on their ability to block dopamine receptors. The smaller the concentration of drug that inhibits the release of dopamine by 50% (IC50), the smaller the effective clinical dose.

Figure 2: www.simplypsychology.org. (n.d.). Brain Ventricles: Anatomy, Function, and Conditions. [online] Available at: https://www.simplypsychology.org/brain-ventricles.html [accessed 21/07/2022]

Figure 3: Gallagher, J. (2015). Immune clue to preventing schizophrenia. BBC News. [online] 16 Oct. Available at: https://www.bbc.co.uk/news/health-34540363 [accessed 21/07/2022]

Figure 4: clipartkey.com. (n.d.). Brain Amygdala Hippocampus Prefrontal Cortex, Free Transparent Clipart - ClipartKey. [online] Available at: https://www.clipartkey.com/view/xwowRm_brain-amygdala-hippocampus-prefrontal-cortex/ [Accessed 5 Aug. 2022].

CNN, M.L. (2018). Pregnancy complications might ‘turn on’ schizophrenia genes. [online] CNN. Available at: https://edition.cnn.com/2018/05/30/health/schizophrenia-genes-pregnancy-placenta-study/index.html [accessed 21/07/2022]

Picker, J. (2005). The Role of Genetic and Environmental Factors in the Development of Schizophrenia. [online] Psychiatric Times. Available at: https://www.psychiatrictimes.com/view/role-genetic-and-environmental-factors-development-schizophrenia. [accessed 21/07/2022]

Grover, S., Sahoo, S., Rabha, A. and Koirala, R. (2018). ECT in schizophrenia: a review of the evidence. Acta Neuropsychiatrica, 31(03), pp.115–127. doi:10.1017/neu.2018.32 [accessed 21/07/2022]

Figure 5: Frith, C. & Johnstone, E. (2003). Schizophrenia: A Very Short Introduction. Oxford: Oxford University Press.


Contagious Vaccines

Harsha Pendyala

The history of vaccines

Vaccines have been humanity’s most effective weapon in suppressing the spread of viruses for over two centuries, and have been our saving grace in the midst of many devastating pandemics: the polio vaccine, introduced in 1955, reduced the incidence of polio by over 99%, to just 6 cases worldwide in 2021 (Louten, 2016).

The beginnings of vaccines date back to the smallpox epidemics, when variolation (the intentional inoculation of an individual with virulent material) was the method of choice to battle smallpox. Doctors noticed that inoculating a patient with ground-up smallpox scabs, or fluid from smallpox pustules, infected the patient with a much milder form of smallpox and provided immunity from the virus when they were exposed to it at a later date. This was a very effective method; however, it had two major drawbacks: the inoculated virus could still be transmitted to others after variolation had taken place, and the method wasn’t flawless, as 2-3% of cases died (History of Smallpox, 2022). However, a viable alternative would soon be discovered. Edward Jenner, a physician conducting research in a small village at the time, noticed that milkmaids who were infected with cowpox (a virus similar to smallpox but much milder) did not contract smallpox and, connecting his knowledge of variolation to this, he theorised that inoculating people with cowpox could give them immunity from smallpox. When tested on the young James Phipps, sure enough his method worked and protected the young boy from smallpox (Louten, 2016). This, the first vaccine, was created in 1796 and quickly caught on in the medical community as an ideal alternative to variolation that did not possess its disadvantages. Fast-forward to the present day and vaccines haven’t changed fundamentally since the first smallpox vaccine, apart from minor changes made to increase the breadth of infectious diseases they can prevent, and, despite how revolutionary they were, they have some major fundamental disadvantages. As illustrated by the COVID-19 vaccination programmes, suppressing a pandemic this way is very expensive, with the UK’s vaccination programme spending more than £8bn over the 2019-2022 period (Pfizer’s £2 billion NHS rip-off could pay for nurses’ pay rise SIX TIMES over - Global Justice Now, 2022), mostly because the vaccines were made and sold by private companies, with large profit margins in some cases; the manufacturing and planning costs were also a large part of this. With each vaccine costing just shy of £5 to produce and sold for even more (Pfizer’s £2 billion NHS rip-off could pay for nurses’ pay rise SIX TIMES over - Global Justice Now, 2022), it is clear our current vaccine technology has major shortcomings.

What are contagious vaccines and how were they developed?

Figure 1 - an artist’s interpretation of Edward Jenner’s conversation with the milkmaids (What’s the real story about the Milkmaid and the smallpox vaccine? 2018)

The possibility of a contagious vaccine promises to provide a better solution, with a more efficient rollout timeline and lower rollout costs than the traditional alternative. The first idea for a contagious vaccine came in 1999 from the veterinarian José Manuel Sánchez-Vizcaino, who was faced with the insurmountable task of trapping and vaccinating an entire population of wild rabbits, animals notorious for breeding quickly (Craig, 2022). He was simply unable to keep up with the rate of population increase, while the virus, on the other hand, ravaged through the population with relative ease; he needed a much faster method, as the rabbits were dying faster than they were being vaccinated. Hence the first contagious vaccine was born: a hybrid virus vaccine targeting both rabbit haemorrhagic disease and myxomatosis. His team sliced out a gene from the rabbit haemorrhagic disease virus and inserted it into the genome of a mild strain of the myxoma virus, which causes myxomatosis. The vaccine would then incite an immune response against both viruses, one the immune system could easily overpower since the myxoma virus was present in a very mild form. Although the virus was modified to make it weaker and to add the rabbit haemorrhagic disease viral DNA, Sánchez-Vizcaino hypothesised that, because the vaccine was still similar enough to the original disease-causing myxoma virus, it would still spread among wild rabbits (Maclachlan et al., 2017).

A proof-of-concept field test was then carried out on a sample of 147 wild rabbits, with 50% being infected with the viral vaccine. The test showed positive results, as the percentage of the chipped wild rabbits carrying antibodies for rabbit haemorrhagic disease and myxomatosis increased from 50% to 56% over a 32-day period (Maclachlan et al., 2017). While this does not initially seem like a significant difference, the test was done on an island that neither the rabbit haemorrhagic disease virus nor the myxomatosis virus had reached yet, so the only way rabbits could have gained the antibodies was through the viral vaccine’s immune response. Moreover, the population of rabbits chipped in the test was only a small fraction of those inhabiting the island, so the number of rabbits the vaccine virus had spread to was likely a lot larger than nine (an increase of six percentage points corresponds to roughly nine of the 147 chipped rabbits) over the month-long period. This initial test showed great promise for the technology; however, the EMA (European Medicines Agency) noted technical issues with the vaccine’s safety evaluation and requested that the team decode the myxoma genome, which had not been done before. As a result the concept was dropped, with the team’s funding also being cut due to concerns that the EMA would never approve such a technology (Craig, 2022). After this, research into self-spreading vaccines went largely dormant. Pharmaceutical companies weren’t interested in investing in research and development for a technology that, by design, would reduce its own profit margins and was unlikely to be approved for use.

The revival of contagious vaccine research

However, in recent years there has been renewed interest in, and funding for, this line of research, inspired by the devastation zoonotic disease epidemics have caused (like Lassa fever in West Africa). A new class of virus vectors, cytomegaloviruses or CMVs, is being used to create viral vaccines, with significant advantages over the previously used myxoma vectors (Varrelman et al., 2022). These spark hope that a contagious vaccine may one day be approved for use in reservoir populations, to combat the tight grasp zoonotic disease has on many countries. CMVs are better suited because they infect a host for life, inducing strong immune responses while not causing severe disease, and they are also uniquely species-specific; as an example, the CMV that spreads among Mastomys natalensis, the rat species that spreads Lassa fever, cannot infect any animal other than M. natalensis (Craig, 2022). This alleviates the ethical concerns over viral vaccines mutating and jumping into the human species, where, unlike with wild animal populations, informed consent is essential before vaccinating. Field tests using CMVs were also carried out with promising results, and after extensive mathematical modelling, a prediction was made of how long a vaccine of this sort would take to reduce pathogen incidence by 95% in reservoir populations. If the technology works as expected, releasing the Lassa fever vaccine could reduce disease transmission among rodents by 95% in less than a year, significantly reducing the predicted annual death toll of 5,000 people, possibly even to zero, and perhaps eradicating the disease given a long enough time period (Varrelman et al., 2022).

Figure 2 - An image of the myxoma virus (taken by a transmission electron microscope) (Myxomatosis 2023)
Figure 3 - Cytomegalovirus (image taken by a transmission electron microscope) (TEM images of our virus-like particles 2020)
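To give a flavour of how this kind of modelling works, the sketch below is a deliberately minimal toy compartmental model written for this article; it is not the model used by Varrelman et al., and every parameter value in it is an illustrative assumption rather than a fitted estimate. It simply compares cumulative pathogen infections in a rodent population with and without a small seeding of a transmissible vaccine.

```python
# Toy compartmental model of a transmissible vaccine in a rodent reservoir.
# NOT the published model; all rates below are illustrative assumptions.
# S: susceptible, V: vaccine-protected, I: pathogen-infected, R: recovered/removed.

def run(days=365, dt=0.1, beta_v=0.10, beta_p=0.15, gamma=0.05, seed_v=0.01):
    S, V, I, R = 1.0 - seed_v - 0.001, seed_v, 0.001, 0.0   # population fractions
    new_infections = 0.0
    for _ in range(int(days / dt)):
        dv = beta_v * S * V * dt      # vaccine spreads by contact and protects hosts
        di = beta_p * S * I * dt      # pathogen spreads by contact
        dr = gamma * I * dt           # infected hosts recover or are removed
        S -= dv + di
        V += dv
        I += di - dr
        R += dr
        new_infections += di          # accumulate pathogen incidence
    return new_infections

baseline = run(seed_v=0.0)            # no vaccine released
with_vax = run(seed_v=0.01)           # 1% of rodents seeded with the transmissible vaccine
print(f"Toy-model incidence reduction: {1 - with_vax / baseline:.0%}")
```

Real analyses of this kind add far more structure (rodent births and deaths, seasonality, imperfect vaccine transmission, uncertainty in every rate), which is exactly why the published predictions carry the caveats discussed below.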

How realistic is this technology?

The technology clearly shows huge potential, but nevertheless these predictions are ultimately only predictions, based on the assumption that the technology works exactly as modelled. Many believe technology should always be developed and tested from a pessimist’s point of view, ensuring that every potential problem has been addressed so that the release is not a catastrophic failure. This approach has been applied to many fields, like AI, but is especially important in immunology, as it deals with ecosystems we do not know enough about. Many experts warn that too little is known about zoonotic disease transmission and viral evolution to accurately predict what might happen if a self-spreading vaccine were released into the wild, and what the consequences could be for an ecosystem.

Historical instances show that humans using and modifying viruses for their own purposes has had devastating effects on ecosystems. For instance, a man in France intentionally released the myxoma virus in 1952 to keep rabbits out of his home garden, but in the process decimated France’s rabbit population, wiping out 90% of its rabbits within just two years (Maclachlan et al., 2017). Furthermore, the same myxoma virus started killing wild hares almost 70 years later, as it had mixed with a poxvirus, allowing it to jump species (Maclachlan et al., 2017). Many experts cite this evidence and warn that we cannot accurately predict, even using mathematical models, what problems might arise from the release of a viral vaccine. When natural ecosystems and animal populations are involved, many say the stakes are simply too high to use the vaccine anyway.

Furthermore, the ethical and social issues that surround a programme like this are crushing, as most experts in the field accept that a viral vaccine can never be used on human populations, because universal informed consent could never be achieved. Another problem that accompanies all dangerous technology research is that, even if such vaccines were banned and would never be passed by medical councils, underground, unregulated research is likely to take place, and if the technology enters the wrong hands it could wreak havoc on countries. The potential scale of devastation that could accompany viral vaccine technology is limitless, with the process of making the vaccine bearing an uncanny resemblance to the creation of a bioweapon capable of causing global pandemics. Even Bárcena, a scientist who was part of Sánchez-Vizcaino’s original research group, has shifted his view of self-spreading vaccines after seeing how previous strategies involving the intentional release of viruses had unforeseen consequences, referring to the evidence that the myxoma virus had combined with a poxvirus, enabling it to jump species (Craig, 2022). This ethical argument, however, is an age-old one that accompanies the discovery of any new technology. At its core is the question: are the potential risks a technology poses worth taking in order to reap the potential benefits it could provide mankind? Some of the riskiest technologies, like AI, are still being allowed to develop, as AI is widely considered by millions to be the next step in human evolution; the risk it poses is deemed worth taking because the benefits outweigh it. A similar logic can be applied to contagious vaccines to evaluate whether they should be developed or not, and the conclusion most come to is that the technology might never be used, due to its multitude of problems, but that it should still be developed in case we ever need it. Alec Redwood expresses this excellently: “it’s better to have something in the cupboard that can be used and is mature if we need it than let’s just not do this research because it’s too dangerous, to me, that makes no sense at all” (Craig, 2022).

Reference list

Craig, J., 2022. The controversial quest to make a ‘contagious’ vaccine. [online] National Geographic. Available at: <https://www.nationalgeographic.com/science/article/the-controversial-quest-to-make-a-contagious-vaccine> [Accessed 1 October 2022].

Varrelman, T., Remien, C., Basinski, A., Gorman, S., Redwood, A. and Nuismer, S., 2022. Quantifying the effectiveness of betaherpesvirus-vectored transmissible vaccines. Proceedings of the National Academy of Sciences, 119(4).

Louten, J., 2016. ScienceDirect. Clinical Microbiology Newsletter, 38(13), p.109.

Centers for Disease Control and Prevention, 2022. History of Smallpox. [online] Available at: <https://www.cdc.gov/smallpox/history/history.html> [Accessed 1 October 2022].

Global Justice Now, 2022. Pfizer’s £2 billion NHS rip-off could pay for nurses’ pay rise SIX TIMES over. [online] Available at: <https://www.globaljustice.org.uk/news/pfizers-2billion-nhs-rip-off-could-pay-for-nurses-pay-rise-six-times-over/> [Accessed 1 October 2022].

Maclachlan, N., Dubovi, E., Barthold, S., Swayne, D. and Winton, J., 2017. Fenner’s Veterinary Virology. 5th ed. Amsterdam: Academic Press (an imprint of Elsevier), pp.157-174.

Brink, S. (2018) What’s the real story about the Milkmaid and the smallpox vaccine?, NPR. Available at: https://www.npr.org/sections/goatsandsoda/2018/02/01/582370199/whats-the-real-story-about-the-milkmaid-and-the-smallpox-vaccine (Accessed: March 13, 2023).

Myxomatosis (2023) Wikipedia. Wikimedia Foundation. Available at: https://en.wikipedia.org/wiki/Myxomatosis (Accessed: March 13, 2023).

Shore, J. (2020) TEM images of our virus-like particles, The Native Antigen Company. Available at: https://thenativeantigencompany.com/tem-images-of-our-virus-like-particles/ (Accessed: March 13, 2023).


What is an antibiotic?

Teodor Wator

[1] An antibiotic is any substance that inhibits the growth and replication of a bacterium, or kills it outright. Antibiotics are a type of antimicrobial designed to target bacterial infections within the body. This makes antibiotics different from the other main kinds of antimicrobials widely used today:

• Antiseptics are used to sterilise surfaces of living tissue when the risk of infection is high,

• Disinfectants are non-selective antimicrobials, killing a wide range of micro-organisms including bacteria, and are used on non-living surfaces.

Of course, bacteria are not the only microbes that can be harmful to us. Fungi and viruses can also be a danger to humans, and they are targeted by antifungals and antivirals, respectively. Only substances that target bacteria are called antibiotics, while antimicrobial is a term for anything that inhibits or kills microbial cells, including antibiotics, antifungals, antivirals and chemicals such as antiseptics. Most antibiotics used today are produced in laboratories, but they are often based on compounds scientists have found in nature. Some microbes, for example, produce substances specifically to kill other nearby bacteria in order to gain an advantage when competing for food, water or other limited resources. However, some microbes only produce antibiotics in the laboratory.

WHY ARE ANTIBIOTICS IMPORTANT?

[1]The introduction of antibiotics into medicine revolutionised the way infectious diseases were treated. Between 1945 and 1972, average human life expectancy jumped by eight years, with antibiotics used to treat infections that were previously likely to kill patients. Today, antibiotics are one of the most common classes of drugs used in medicine and make possible many of the complex surgeries that have become routine around the world. If we ran out of effective antibiotics, modern medicine would be set back by decades. Relatively minor surgeries, such as appendectomies, could become life-threatening, as they were before antibiotics became widely available. Antibiotics are sometimes used in a limited number of patients before surgery to ensure that patients do not contract any infections from bacteria entering open cuts. Without this precaution, the risk of blood poisoning would become much higher, and many of the more complex surgeries doctors now perform may not be possible.

PRODUCTION

Fermentation

[3] Industrial microbiology can be used to produce antibiotics via the process of fermentation, where the source microorganism is grown in large containers (100,000–150,000 litres or more) containing a liquid growth medium. Oxygen concentration, temperature, pH and nutrients are closely controlled. As antibiotics are secondary metabolites, the population size must be controlled very carefully to ensure that maximum yield is obtained before the cells die. Once the process is complete, the antibiotic must be extracted and purified to a crystalline product. This is easier to achieve if the antibiotic is soluble in an organic solvent. Otherwise, it must first be removed by ion exchange, adsorption or chemical precipitation.

Semi-synthetic

A common form of antibiotic production in modern times is semi-synthetic. Semi-synthetic production of antibiotics combines natural fermentation with laboratory work to maximize the antibiotic. Maximization can target the efficacy of the drug itself, the amount of antibiotic produced, or the potency of the antibiotic being produced; the drug being made and its ultimate usage determine which of these one attempts to maximize. An example of semi-synthetic production involves the drug ampicillin. A beta-lactam antibiotic just like penicillin, ampicillin was developed by adding an amino group (NH2) to the R group of penicillin.[2] This additional amino group gives ampicillin a broader spectrum of use than penicillin. Methicillin is another derivative of penicillin and was discovered in the late 1950s,[3] the key difference between penicillin and methicillin being the addition of two methoxy groups to the phenyl group.[4] These methoxy groups allow methicillin to be used against penicillinase-producing bacteria that would otherwise be resistant to penicillin.

WHAT ARE THEY MADE OF?

The compounds that make up the fermentation broth are the primary raw materials required for antibiotic production. This broth is an aqueous solution made up of all of the ingredients necessary for the proliferation of microorganisms. Typically, it contains a carbon source like molasses or soy meal, both of which are made up of lactose and glucose sugars. These materials are needed as a food source for the organisms. Nitrogen is another necessary component of the organisms’ metabolic cycles; for this reason, an ammonium salt is typically used. Additionally, trace elements needed for the proper growth of the antibiotic-producing organisms are included. These are components such as phosphorus, sulfur, magnesium, zinc, iron, and copper, introduced through water-soluble salts. To prevent foaming during fermentation, anti-foaming agents such as lard oil, octadecanol, and silicones are used.

WHAT ARE DIFFERENT ANTIBIOTICS PRODUCED BY?

• [4]Some antibiotics are produced naturally by fungi. These include the cephalosporin producing Acremonium chrysogenum.

• Geldanamycin is produced by Streptomyces hygroscopicus.

• Erythromycin is produced by what was called Streptomyces erythreus and is now known as Saccharopolyspora erythraea.

• Streptomycin is produced by Streptomyces griseus.

• Tetracycline is produced by Streptomyces aureofaciens

• Vancomycin is produced by Streptomyces orientalis, now known as Amycolatopsis orientalis.


DOES SILVER MAKE ANTIBIOTICS MORE EFFECTIVE?

[2] Bacteria have a weakness: silver. It has been used to fight infection for thousands of years, and silver can disrupt bacteria in ways that could help to deal with the thoroughly modern scourge of antibiotic resistance.

Silver, in the form of dissolved ions, attacks bacterial cells in two main ways: it makes the cell membrane more permeable, and it interferes with the cell’s metabolism, leading to the overproduction of reactive, and often toxic, oxygen compounds. Both mechanisms could be exploited to make modern antibiotics more effective against resistant bacteria.

Many antibiotics are thought to kill their targets by producing reactive oxygen compounds, and when boosted with a small amount of silver the drugs could kill between 10 and 1,000 times as many bacteria. The increased membrane permeability also allows more antibiotics to enter the bacterial cells, which may overwhelm the resistance mechanisms that rely on pushing the drug back out.

References

[1] https://microbiologysociety.org/members-outreach-resources/outreach-resources/antibiotics-unearthed/antibiotics-and-antibiotic-resistance/what-are-antibiotics-and-how-do-they-work.html

[2] https://www.nature.com/articles/nature.2013.13232

[3] http://www.madehow.com/Volume-4/Antibiotic.html

[4] https://en.wikipedia.org/wiki/Production_of_antibiotics


The environmental impact of current nuclear reactors and whether we will ever create a perfect source of energy

Aaditya Nandwani

In December 1951, news broke of the first nuclear reactor capable of producing electricity. Since then, nuclear energy has been considered by many to be an excellent replacement for fossil fuel-based sources, producing far more energy per kilogram of fuel and releasing no carbon dioxide. Nevertheless, in light of recent world events, with tragic bushfires ravaging Australia in 2020, the melting of glaciers in Iceland and, most recently, a heatwave in which London temperatures reached 40.3 degrees Celsius, we must take drastic steps to slow the effects of climate change, which includes finding cleaner sources of energy. This article sheds light on both the widely known and the lesser-known consequences of nuclear fission reactors, and asks whether it is possible to find a completely green source of energy.

How do Nuclear Fission reactors work?

Comprehending the impact these reactors have requires an understanding of how they work. They operate on the principle of radioactive decay, in which an unstable nucleus releases radiation in order to become more stable. Instability in a nucleus can arise for several reasons, such as an excess or a dearth of protons or neutrons, which causes the forces within the nucleus to be unbalanced (the case we will focus on is the former). Nuclear fission is the process by which a neutron strikes a large nucleus, causing it to split into two smaller nuclei and two or three free neutrons, releasing a large amount of energy. If a neutron produced in this fission reaction strikes another nucleus, it causes that nucleus to split as well, resulting in a chain reaction. One of the most famous equations ever written, E = mc2, states that energy and mass are equivalent: energy equals mass multiplied by the speed of light squared. This means that a very small mass can be converted into a very large amount of energy. This principle is critical in fission reactions; if one were to measure the total mass of the product nuclei and neutrons after the fission reaction, one would find that it is slightly less than the mass of the original nucleus. This difference is called the mass defect, and it corresponds to the mass that has been converted into the energy which is then harnessed.
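To make the scale concrete, here is a minimal sketch of the mass defect arithmetic described above. The 0.2 atomic mass units used below is an assumed, order-of-magnitude figure for a single fission event, not a measured value for any particular reaction.

```python
# Rough illustration of converting a fission mass defect into energy via E = mc^2.
# The 0.2 u mass defect is an assumed, order-of-magnitude value for illustration.

ATOMIC_MASS_UNIT = 1.6605e-27   # kg
SPEED_OF_LIGHT = 2.998e8        # m/s
JOULES_PER_EV = 1.602e-19

mass_defect_u = 0.2                                      # assumed mass defect in u
mass_defect_kg = mass_defect_u * ATOMIC_MASS_UNIT
energy_joules = mass_defect_kg * SPEED_OF_LIGHT ** 2     # E = mc^2
energy_mev = energy_joules / JOULES_PER_EV / 1e6

print(f"Energy per fission: {energy_joules:.2e} J (about {energy_mev:.0f} MeV)")
# ~3e-11 J, i.e. roughly 200 MeV -- tens of millions of times the energy
# released by a typical chemical bond.
```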

Within a nuclear reactor:

Nuclear reactors create electricity in a similar way to fossil fuel power stations: heat is used to boil water and produce steam, which turns a turbine and in turn rotates a generator. Such reactors consist of four main components: fuel rods, control rods, a moderator, and a coolant, each of which has a distinct function in ensuring the production of this energy.

The components:

The fuel rods contain the uranium-235 nuclei and are surrounded by the moderator. Upon impact of a neutron, a uranium-235 nucleus becomes a uranium-236 nucleus, which is unstable and therefore splits.

Figure 1.1: A simple diagram illustrating the chain reaction of fission

The moderator plays an important role in this, as it slows the fast neutrons released by fission down to roughly 2,200 m/s, which increases the probability of a neutron hitting a fuel nucleus and makes a successful fission reaction more likely. The moderator, most commonly heavy water, is carefully chosen: it must not absorb the neutrons, but rather just slow them down considerably, for the reason stated above. The control rods are the most crucial parts of the reactor, as they control the rate of the fission reactions taking place. When the rods are not lowered, the number of neutron collisions with the fuel rods is greater, so the rate of nuclear fission is also greater; however, an uncontrolled rate of fission can prove dangerous. When the control rods are lowered, they absorb neutrons and therefore decrease the number of neutron collisions with the fuel rods, reducing the rate of fission; this lowers the electricity output but renders the reactor safer. There is therefore always a fine balance between keeping the rate of reaction high enough (to meet financial goals) and making sure it is not too high (to ensure the safety of the workers). Finally, the coolant carries the thermal energy produced in the core away to raise the steam that drives the turbine and generator, where the electricity is produced.
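As a toy illustration of why this balance matters, the sketch below (not a reactor model; the multiplication factors are made up purely for illustration) tracks how a neutron population grows or dies away depending on how many neutrons from each fission go on to cause another fission.

```python
# A toy illustration of the control-rod balance described above.
# k is the average number of neutrons from one fission that cause another
# fission; the values below are invented for illustration only.

def neutron_population(k: float, generations: int, start: int = 1000) -> float:
    """Neutron count after a number of fission generations, n -> k * n."""
    n = float(start)
    for _ in range(generations):
        n *= k
    return n

for k in (0.95, 1.00, 1.05):   # rods lowered / balanced / rods raised
    print(f"k = {k:.2f}: {neutron_population(k, 100):,.0f} neutrons after 100 generations")
# k < 1 dies away, k = 1 stays steady, k > 1 grows exponentially --
# which is why the rate must be held in the fine balance described above.
```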

The impact on the environment of Fission reactors:

Nuclear reactor accidents:

Nuclear accidents are catastrophic events, releasing significant amounts of radioactive material into the surrounding environment and causing tragic destruction of marine and land ecosystems. Despite the numerous safety features implemented, events such as Chernobyl still took place (mainly due to a mix of human error and unexpected equipment failure), and they have had a profound impact. This section will focus on the effects on agriculture and farming, the effects on lakes, and finally the effects on plants and animals.

After the Chernobyl accident, the amount of radioactive iodine in the soil reached an all-time high. Although this level decreased after the accident (due to factors such as the wind carrying these isotopes away and their quick decay), it still had a profound effect. For example, iodine levels in the human thyroid gland increased, resulting in thyroid disease. Furthermore, it affected the reproductive abilities of trees, especially those in the exclusion zone. The Chernobyl accident also heavily contaminated water bodies with radioactive caesium, iodine and strontium. There were large concerns about the accumulation of radioactive caesium in aquatic food webs, as the amount of Cs-137 in fish was at a peak. Although these levels eventually decreased, in certain lakes in Russia the caesium levels remain high, which is seen in the persistently high levels of radioactive caesium in fish. The Chernobyl accident had disastrous effects on human health as well as on other wildlife. Workers at the reactor received high doses of gamma radiation (2-20 Gy), resulting in acute radiation syndrome, a condition caused by exposure to large amounts of penetrating ionising radiation (which can pass through the skin to reach sensitive organs and cause cell death) and which in many cases proved fatal. Furthermore,

Figure 1.2: Diagram showing the parts of the nuclear fission reaction vessel
Figure 2.1: A graph showing the number of thyroid cancer cases in children in Belarus and Ukraine over time. The number of cases increased in both countries, but rose more steeply in Belarus.

due to increased uptake of radioactive iodine by the thyroid (because of the milk of cows which grazed on radioactive grass), the ionising radiation caused mutations, and therefore thyroid cancer. Finally, the Chernobyl accident caused visible deformations in many animals in the exclusion zone.

Mining for Uranium

Obtaining uranium involves mining and processing, both of which have a severe effect on the environment. There are several different types of mining, such as underground mining, where deep shafts and tunnels are dug and the uranium ore is broken up and brought to the surface. The energy demand of mining is very large and is often met by burning fossil fuels, which releases CO2 and other greenhouse gases. Processing the uranium ore by leaching also has a considerable impact, mainly because of the waste products created. The mill tailings, waste which contains radioactive metals such as radium, release a toxic mixture of gases, for example radioactive radon. There have been particular concerns within the scientific community about the effect of radon gas on the environment, especially on human health. As explained in an article published by Stanford University, radon gas is linked to lung cancer: radon decays into smaller radioactive particles which get stuck in the lung linings and continue to decay, releasing ionising radiation; this can eventually lead to cancer through mutation of the DNA in these cells. Furthermore, other problems, such as how easily the gas is carried away from the mill tailings by the wind and the long half-lives of the radium isotopes that produce it, mean that mill tailings have to be managed safely for long periods of time.

Environmental issues concerning nuclear reactors:

Nuclear waste:

Furthermore, nuclear waste presents a prominent problem because of its effects on the environment and the difficulty of its disposal and storage. There are numerous proposed solutions for storing nuclear waste, such as burying it underground or beneath the ocean floor. However, if a sealed container is damaged, this could result in leakages of highly radioactive elements with long half-lives. The ionising radiation released could cause genetic deformations and mutations and, furthermore, the death of plants and animals. Because waste nuclei such as strontium-90 have long half-lives, they continue to release this dangerous ionising radiation into the environment for decades. Nuclear waste, similarly, has adverse effects on human health. For example, caesium-137, a soluble radioactive isotope, may be released as a waste product. This can be absorbed by internal organs such as the reproductive organs and will decay, releasing high-energy gamma photons (gamma radiation). These can then cause damage and inhibit reproductive abilities. Furthermore, it is widely known that the ionising radiation released by such isotopes is capable of causing cancer.
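As a minimal sketch of why those half-lives matter for storage timescales, the snippet below applies the textbook decay law N = N0 x (1/2)^(t / half-life), using the commonly quoted half-lives of roughly 28.8 years for strontium-90 and 30.1 years for caesium-137.

```python
# A minimal sketch of why long half-lives make waste storage a long-term problem.
# Half-lives used: strontium-90 ~28.8 years, caesium-137 ~30.1 years.

def fraction_remaining(years: float, half_life: float) -> float:
    """Fraction of a radioactive isotope left after a given number of years."""
    return 0.5 ** (years / half_life)

for isotope, half_life in (("strontium-90", 28.8), ("caesium-137", 30.1)):
    for years in (30, 100, 300):
        print(f"{isotope}: {fraction_remaining(years, half_life):.1%} remaining after {years} years")
# Even after a century, several per cent of the original material remains,
# so sealed storage has to stay intact for far longer than a human lifetime.
```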

Figure 2.2: An image of mill tailings from uranium extraction
Figure 2.3: An image showing the storage of nuclear waste.

What is nuclear fusion and could we generate electricity from this?

Nuclear fusion is the process by which two smaller, lighter nuclei fuse together to produce a larger, more stable nucleus, releasing energy. This is the process which takes place in the centre of stars and, in fact, keeps them in a stable condition (the outward force of radiation pressure balancing the inward gravitational force). Like fission, fusion also rests on the principle of mass-energy equivalence. If you were to measure the masses of the two smaller nuclei, their sum would be greater than the total mass of the products formed; this difference is called the mass defect. By E = mc2, we know that this mass has been converted into energy, which is liberated (released) in the reaction. At present, no fusion reactor has been built that can produce electricity on a commercial scale; however, there have been big strides in the technology. There are two main designs for a nuclear fusion reactor: magnetic confinement reactors and inertial confinement reactors.
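To make the mass defect idea concrete, here is a small worked sketch for the deuterium-tritium reaction (D + T producing helium-4 and a neutron), using standard rounded atomic masses in unified atomic mass units; 1 u is equivalent to roughly 931.5 MeV of energy.

```python
# A worked sketch of the mass defect for the deuterium-tritium fusion reaction,
# using standard rounded atomic masses in unified atomic mass units (u).

MASSES_U = {
    "deuterium": 2.014102,
    "tritium": 3.016049,
    "helium-4": 4.002602,
    "neutron": 1.008665,
}
MEV_PER_U = 931.494

mass_before = MASSES_U["deuterium"] + MASSES_U["tritium"]
mass_after = MASSES_U["helium-4"] + MASSES_U["neutron"]
mass_defect = mass_before - mass_after            # mass "lost" in the reaction

print(f"Mass defect: {mass_defect:.6f} u")
print(f"Energy released: {mass_defect * MEV_PER_U:.1f} MeV per fusion")
# ~17.6 MeV per reaction, which is where fusion's enormous energy density comes from.
```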

How close are we to a viable nuclear fusion reactor?

A viable nuclear fusion reactor would release more energy from the fusion reactions than the energy put in to run it. Several promising projects have been announced, and scientists hope that by 2040 we will have a reactor that is both economically viable and operational.

A large difficulty in building a nuclear fusion reactor is holding the ingredients for fusion in place. These reactors work by heating deuterium and tritium (isotopes of hydrogen) to very high temperatures, producing a plasma, which is a soup of charged particles. However, the plasma cannot be allowed to touch the walls of the container, as it would simply vaporise them. To hold the plasma in place, this type of reactor uses strong magnetic fields. Because the plasma contains ions (which are electrically charged), the magnetic fields can guide the ions and keep the plasma in a fixed position. A magnetic confinement reactor works in the following way:

1. Inside the reactor, a stream of neutral particles is released by the accelerator. This stream heats the deuterium and tritium to very high temperatures.

2. The plasma is contained within a tokamak (which is a doughnut-shaped vessel). The transformer, which is connected to the tokamak, produces magnetic fields which squeeze the plasma so that the deuterium and tritium fuse to produce Helium.

3. The blanket modules absorb the heat produced in the fusion reaction and transfer it to the exchanger where water is converted into steam. This steam drives the turbine which in turn rotates the generator, producing electricity.

Inertial confinement is a newer, experimental approach which physicists are testing; so far, magnetic confinement is the more established method. In this process, 192 laser beams are fired at a small container called a hohlraum, inside which sits a capsule of hydrogen fuel. The heated hohlraum emits X-rays, which strike the fuel capsule and cause it to implode. This is meant to recreate the high-pressure, high-temperature conditions at the centre of stars and cause the deuterium and tritium nuclei to fuse to form helium.

Furthermore, there are other promising projects in development, such as SPARC, another magnetic confinement tokamak, which would be much smaller and could be built in a shorter timeframe (potentially allowing viable fusion to be demonstrated around 2025).

Figure 3.1: A tokamak: a torus-shaped vessel that uses magnetic fields to shape and squeeze the plasma.

How would our planet benefit from the production of viable nuclear fusion?

One reason we would benefit is that fusion releases no carbon dioxide. Around 1900, CO2 made up roughly 0.03% of the atmosphere (about 300 parts per million); by 2020 this had risen to roughly 0.04% (over 410 parts per million). Such numbers may not look dramatic, but they are deeply worrying when set against the global temperature record. An increase in greenhouse gas levels results in an enhanced greenhouse effect, where heat radiated from the Earth's surface is absorbed by these gases rather than escaping back into space. This causes an increase in the average temperature of the Earth, which has driven climate change. The drastic effects of this on our environment have been seen recently, for example in the melting of glaciers, which has resulted in the flooding of coastal areas and disruption of ecosystems. This is avoided when using energy from nuclear fusion. Fusion also avoids the long-lived radioactive fission products that emit alpha, beta or gamma radiation (all destructive forms of ionising radiation). Furthermore, nuclear fusion releases roughly four times as much energy per kilogram of fuel as nuclear fission, allowing for greater electricity production.

What limitations will we face in the development of nuclear fusion:

The main difficulty in producing a viable fusion reactor has been the input-to-output ratio, i.e. making sure the energy released is more than the energy put in (with the record for magnetic confinement still only around 65%). The challenges being faced are:

- Initiating a burning plasma (a plasma that sustains its own temperature and keeps fusion going) requires heating it to temperatures far higher than the core of the Sun, which needs technologies we have not fully developed yet.

- The superconducting magnet coils around the reaction vessel must be kept extremely cold (near absolute zero); if this is not maintained, the resulting damage to components such as the blanket modules could force the facility to be decommissioned.

In conclusion, the stark reality of the destructive nature of climate change has led us to re-evaluate the way we live, especially our energy consumption. Attempts to reduce our energy consumption, such as turning off the lights after leaving a room, help to slow the rate at which the situation worsens, but more radical approaches are needed to reverse it. It is clear that current forms of energy production such as burning fossil fuels have severe effects on the environment, and so does nuclear fission. Although nuclear fusion is not perfect (recent articles note that the radioactive waste produced when neutrons collide with the blanket requires careful disposal), it is the best option we have, and much of the scientific community is optimistic that we can reverse climate change and sustain this beautiful, unique planet for generations to come.

Figure 3.2: An image showing the melting of glaciers, a catastrophic effect of climate change.

Titan – The “Earth-like” moon

Nuvin Wickramasinghe

Titan is Saturn’s largest moon, the second largest natural satellite in the solar system, behind Jupiter’s Ganymede. Originally discovered in 1655, Titan has been revealed to be one of our solar system’s most bizarre satellites over time, especially regarding its atmosphere, surface and structure sharing similarities with Earth.

Rivers, lakes, sand dunes, canyons, a weather cycle, an atmosphere etc. are all found on Titan. Despite these features being composed of different materials to our planet, they all are Earth-like features found on this mysterious moon (Waldek, 2022).

Titan appeared in the popular film "Star Trek" (2009), where the U.S.S. Enterprise comes out of warp in the large moon's atmosphere to ambush enemy ships attacking Earth. It also appears in TV shows such as "Futurama" and "Eureka" and in the famous anime "Cowboy Bebop" (NASA, 2022a). The iconic supervillain Thanos was also born on Titan in the original Marvel comics, albeit the moon appears vastly different there (Aaron & Bianchi, 2013); in the films, he is instead from a faraway, fictional planet called Titan in a different solar system.

Timeline of major discoveries on Titan

On March 25th 1655, the astronomer Christiaan Huygens discovered the bizarre moon. Then, in 1944, Gerard Kuiper discovered that the moon had a thick atmosphere, something extraordinary; he did this by finding methane when passing sunlight reflected from Titan through a spectrometer (Kuiper, 1944). Pioneer 11 (1979) confirmed astronomers' predictions regarding temperature and mass. Following this, Voyager 1 (1980) pictured the somewhat orange body we know Titan to be today. It also revealed its atmosphere to be primarily nitrogen (like Earth's), whilst also containing other hydrocarbons (NASA, 2022b).

The Cassini-Huygens mission was a joint NASA/ESA (European Space Agency) effort. From 2004 to 2017, NASA's Cassini spacecraft orbited Saturn to collect data on the planet and its moons. On 14 January 2005, ESA's Huygens probe reached Titan, where the world obtained its first pictures from the surface, furthering our understanding of the large moon. The entire operation was a huge success, with Cassini-Huygens being one of the largest interplanetary spacecraft ever built and Huygens making the farthest landing from Earth ever achieved (ESA, n.d. a).

1 - Six images of Titan that incorporate 13 years of data collected by NASA's Cassini (Waldek, 2022).

Building on the Cassini-Huygens success, NASA plans to launch the Dragonfly spacecraft in 2027, following the discoveries of water and the Earth-like atmosphere on Titan. Scientists hope that this mission will help to study the possible start of life, since Titan shares many features with a young Earth. Dragonfly will sample and examine the structures and chemicals there to hopefully gain invaluable information about life as we know it, and as we don't (The Planetary Society, n.d.).

Size, orbit and formation

Some information about Titan: the large moon has a radius of about 2,575 km (nearly 50% wider than Earth's Moon) and a mass about 1.8 times that of Earth's Moon. Titan takes 15 days and 22 hours to complete one full orbit of Saturn (NASA, 2022c). How Titan came into existence is also quite unique. Some moons, such as Neptune's Triton, were formed elsewhere in the solar system and then pulled into orbit by the planet's gravity. Alternatively, some moons are created as residue from impacts, such as in the widely believed theory that Earth's Moon was formed from an impact between the Earth and a smaller object called Theia. Other moons are believed to have been born in circumplanetary discs, which exist around a planet while an infant star is making a planetary system. Generally, large moons formed in the dusty material of these discs spiral in towards the planet due to drag from the surrounding gas and dust. However, Yuri Fujii and Masahiro Ogihara, at the Department of Physics, Nagoya University, and the National Astronomical Observatory of Japan, created a model using updated information about circumplanetary discs. The model hinted at a "safety zone" for larger moons, where high-pressure gas coupled with a significant distance from the planet pushes the moon outwards, creating a balance of forces that prevented Titan from being pulled into Saturn. Once the disc ceased to exist, any movement of Titan towards or away from Saturn stopped. The idea that Titan formed during the creation of the solar system is supported by data collected by the Huygens probe on Titan's nitrogen isotope ratio: it resembles material found in the Oort cloud, a collection of icy objects formed around the same time as the solar system, which implies Titan was created during the formation of the planetary system (Dartnell, 2020; Leman, 2020).
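As a quick check of the quoted orbital period, here is a small sketch using Kepler's third law. Saturn's mass and Titan's orbital radius below are standard rounded textbook values, not figures taken from this article.

```python
# Checking the quoted ~15-day, 22-hour orbit with Kepler's third law,
# T = 2*pi*sqrt(a^3 / (G*M)). Values are standard rounded estimates.
import math

G = 6.674e-11                # gravitational constant, m^3 kg^-1 s^-2
SATURN_MASS = 5.68e26        # kg
TITAN_ORBIT_RADIUS = 1.22e9  # m (semi-major axis)

period_s = 2 * math.pi * math.sqrt(TITAN_ORBIT_RADIUS**3 / (G * SATURN_MASS))
days, remainder = divmod(period_s, 86400)
hours = remainder / 3600

print(f"Predicted orbital period: {days:.0f} days {hours:.0f} hours")
# ~15 days 22 hours, matching the figure quoted above.
```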

Atmosphere, surface and structure

Titan's atmosphere is what makes it such a scientific hotspot for discovery. The fact that it is a moon with an atmosphere immediately draws intrigue towards it. Unlike Earth's, Titan's atmosphere looks like a thick layer around the moon. This is because the moon is a lot smaller than Earth while its atmosphere is about 1.9 times larger than Earth's, and its weaker gravity means it pulls its atmosphere down less strongly.

2 - What Cassini-Huygens looked like (Rincon & Westcott, 2017)
3 - Diagram that displays the safety zone that Titan was born in (Leman, 2020).

Voyager 1 found that Titan's atmosphere consists of nitrogen (around 95 per cent), methane and hydrogen (about 5 per cent) and hydrocarbons (from methane broken up by the Sun's UV light), which many believe give Titan its orange colour. However, because methane is continuously broken down, the moon should have run out of it by now, meaning there must be a source that replenishes it; researchers suspect this to be cryovolcanoes (volcanoes that erupt liquids and vapours into an environment below their freezing point, analogous to magma from Earth's volcanoes). It is yet to be confirmed whether cryovolcanism really refreshes the methane, although its presence would also show that Titan is alive and still changing from its interior (Astrum, 2019).

Titan's special surface and structure also contribute to its uniqueness. Cassini discovered something remarkable: a layer of liquid water beneath the moon's surface. Saturn's strong gravity alters Titan's shape as the moon orbits it, similar to the tides on Earth. If Titan consisted only of rock, the crust would rise and fall by about 1 metre; in fact, Titan experiences tides of about 10 metres, meaning Titan is not solely made of rock. A liquid layer beneath the outer crust would allow Titan to flex to this extent. Because the surface of Titan is water ice, it is most likely that Titan's subsurface ocean is mainly liquid water. The presence of an ocean gives scientists further hope for life on Titan, although nothing can be confirmed yet. Experts believe that life is most likely to exist where liquid water meets rock, but current observations cannot tell us what the ocean floor of Titan is made of (NASA, 2012a).

Another success of Cassini-Huygens is that the mission developed our comprehension of Titan's climate. In the lower layers of the atmosphere, Titan possesses a fully functioning seasonal weather cycle, almost identical to Earth's hydrological cycle except with methane. Titan's freezing surface temperature (approximately -180 degrees Celsius) permits methane to form clouds and rain, which then falls onto the surface, filling the lakes and rivers. However, as mentioned earlier, it is still not confirmed how methane clouds continue to exist, given the photochemical reactions (reactions driven by the absorption of light energy) in which the Sun's light breaks methane down; the theorised cryovolcanoes are the possibility many researchers suspect (Mitchell, 2016).

Cassini-Huygens also found the surface to have rivers and lakes of hydrocarbons. During Huygens's descent, features that resembled rivers on Earth were seen, except that the fluid involved was methane. The probe then landed on a frozen surface, where the picture below was taken. Cassini also revealed a surface littered with these hydrocarbon lakes, the first time this had ever been seen outside of Earth. For example, Ligeia Mare and Kraken Mare are two lakes found on Titan, both of which are larger in surface area than Lake Superior in North America (ESA, n.d. b; Astrum, 2019).

4 - A concept of Titan's internal structure based on Cassini's findings, developed by Dominic Fortes of University College London (NASA, 2012b).

Life

First, I will look at the possibility of humans staying on Titan. Due to the density of the air, we could theoretically walk around Titan without a pressurised spacesuit. Despite this, the freezing temperatures and the lack of oxygen mean it is not feasible, and likely unachievable, for humans to stay on Titan (NASA, 2022c; NASA, 2022a).

However, the discovery of an underground ocean of water raises the possibility of life within it, even though any such life would likely operate very differently to life on Earth. There could even be life that we are currently incapable of comprehending present on Titan, within these oceans of water or even in the lakes of hydrocarbons. There is also still the possibility that Titan is lifeless, or has yet to develop life (NASA, 2022c).

As I see it, I am hopeful that some sort of life will eventually be discovered on Titan, something that could help us develop our understanding of the beginnings of life on a celestial object. There are so many unique features to this moon (its atmosphere, water and so on) that I am optimistic about significant discoveries regarding Titan in the future.

References

Aaron, J. & Bianchi, S., 2013. Thanos rising vol 1. s.l.:Marvel Comics.

Astrum, 2019. The Bizarre characteristics of Titan. [Online]

Available at: https://www.youtube.com/watch?v=B7497mQRn2Y

[Accessed 15 December 2022].

Dartnell, L., 2020. How did Saturn’s moon Titan form?. [Online]

Available at: https://www.skyatnightmagazine.com/space-science/how-did-saturn-moon-titan-form/ [Accessed 13 December 2022].

ESA, 2005. First colour view of Titan’s surface. [Online]

Available at: https://www.esa.int/Science_Exploration/Space_Science/Cassini-Huygens/Titan_s_surface

[Accessed 17 December 2022].

ESA, n.d. a. Cassini-Huygens factsheet. [Online]

Available at: https://www.esa.int/Science_Exploration/Space_Science/Cassini-Huygens/Cassini-Huygens_factsheet2
[Accessed 13 December 2022].

5 - The first image from Titan's surface returned by Huygens in colour form. The "pebbles" appear to show signs of erosion, hinting at the landing being on a dried up lakebed (ESA, 2005).

ESA, n.d. b. Titan’s surface. [Online]

Available at: https://www.esa.int/Science_Exploration/Space_Science/Cassini-Huygens/Titan_s_surface

[Accessed 17 December 2022].

Kuiper, G. P., 1944. Titan - A satellite with an atmosphere. Astrophysical Journal, Volume 100, pp. 378-379.

Leman, J., 2020. Titan may have a new origin story. [Online]

Available at: https://www.popularmechanics.com/space/solar-system/a31287166/titan-origin-story/ [Accessed 13 December 2022].

Mitchell, J., 2016. The Climate of Titan. [Online]

Available at: https://www.annualreviews.org/doi/abs/10.1146/annurev-earth-060115-012428 [Accessed 18 December 2022].

NASA, 2012a. Titan’s Underground Ocean. [Online]

Available at: https://science.nasa.gov/science-news/science-at-nasa/2012/28jun_titanocean [Accessed 16 December 2022].

NASA, 2012b. Layers of Titan. [Online]

Available at: https://www.nasa.gov/mission_pages/cassini/multimedia/titan20120223L.html [Accessed 15 December 2022].

NASA, 2022a. Titan-overview. [Online]

Available at: https://solarsystem.nasa.gov/moons/saturn-moons/titan/overview/#otp_pop_culture [Accessed 13 December 2022].

NASA, 2022b. Titan - Exploration. [Online]

Available at: https://solarsystem.nasa.gov/moons/saturn-moons/titan/exploration/?page=0&per_page=10&order=launch_date+desc%2Ctitle+asc&search=&tags=Saturn&category=33#dutch-astronomer-christiaan-huygens-discovers-titan

[Accessed 13 December 2022].

NASA, 2022c. Titan - In depth. [Online]

Available at: https://solarsystem.nasa.gov/moons/saturn-moons/titan/in-depth/ [Accessed 12 December 2022].

Rincon, P. & Westcott, K., 2017. Our Saturn years. [Online]

Available at: https://www.bbc.co.uk/news/resources/idt-sh/cassini_huygens_saturn [Accessed 18 December 2022].

The Planetary Society, n.d. Dragonfly, NASA's mission to Saturn's moon Titan. [Online]

Available at: https://www.planetary.org/space-missions/dragonfly [Accessed 13 December 2022].

Waldek, S., 2022. Saturn’s weird moon Titan looks a bit like Earth, and scientists might finally know why. [Online]

Available at: https://www.space.com/study-explains-why-titan-looks-like-earth [Accessed 16 December 2022].


Maxwell’s Theory of Electromagnetism

1. Introduction

Albert Einstein had a portrait of James Clerk Maxwell hung on his study wall next to pictures of Isaac Newton and Michael Faraday. Einstein once said, “I owe more to Maxwell than to anyone”, and when asked if he stood on the shoulders of Newton he replied: “No, on the shoulders of Maxwell.” The main reason that this scientist earned such praise and respect from arguably the greatest physicist of all time was due to his four very concise field equations. James Clerk Maxwell’s Equations of Electromagnetism are written in a way that would normally only be understood by a university undergraduate physics student; however I believe that they should be more accessible to a wider audience. The purpose of this article is to introduce the theory in an easy-to-understand and concise way.

2. Prerequisites

2.1. Vector Fields

In order to understand what Maxwell's equations mean, a couple of fundamental concepts must first be explained, the most basic of these being vector fields. A vector field (also called a vector function) assigns to every point in space a 3-dimensional vector with an x, y and z component. It can be written in several ways, for example:

A(x, y, z) = Ax(x, y, z) x̂ + Ay(x, y, z) ŷ + Az(x, y, z) ẑ

Vector fields are always written in bold, and they are simply functions of the 3 spatial dimensions (x, y, z), where (x̂, ŷ, ẑ) are the respective unit vectors. Vector fields can be used to model certain situations, such as the flow of a fluid, the force of gravity around bodies in space or the strength of electric and magnetic fields, which is how we will see them used throughout this theory.

2.2. Divergence

The second concept essential to understanding Maxwell's equations is divergence, which is written in the equations as the del operator (an upside-down triangle, ∇) with a dot next to it. Divergence is the measure of the vector flow out of an imaginary surface surrounding a specific point. To help visualise divergence, it is sometimes helpful to imagine electric or magnetic fields as the movement of a fluid such as water. If there is a net flow out of a point, meaning more "fluid" flowing out of it than into it, then the point is called a "source", which is shown by positive divergence. In the opposite case, if there is a net flow into a point (more "fluid" flowing into it than out of it), then the point is called a "sink", shown by negative divergence. The divergence is negative because the net flow out of the point is negative when there is a net flow into it.


An example of positive divergence (Vector Field A) and negative divergence (Vector Field B). It can be seen that it is useful to imagine the black vectors as the motion of water at the various points around P to see if it is a source or a sink. iv

The mathematical definition of divergence is the sum of the rates of change of the vector function in each direction. This is expressed algebraically as:

∇ · A = ∂Ax/∂x + ∂Ay/∂y + ∂Az/∂z

Or in words: Divergence of A = (rate of change of A in x-direction) + (rate of change of A in y-direction) + (rate of change of A in z-direction)

Some examples of calculating divergence mathematically are as follows.

Example 1.1

Consider a point P in a vector field E(x,y,z) with vectors around it as shown. To calculate the divergence at P, we calculate the rate of change in the x and y directions, as we can assume no change in the z–axis in this 2-dimensional diagram. iv

So the divergence at P is positive (+1), meaning there is more flow out of the point than into it, so it is considered a source (which can also be seen intuitively from the diagram)

Example 1.2

We can now mathematically calculate the divergence at any point in a vector field A(x,y,z) using some calculus.


This equation will now tell you the divergence at every location in space for the vector field. i.e. if we want to know the divergence at the point (x,y,z)=(3,2,1) then:
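The specific field used in Example 1.2 is not reproduced in this copy, so here is an illustrative sketch with a made-up field (not the article's), showing how the divergence formula above can be evaluated at a point such as (x, y, z) = (3, 2, 1).

```python
# Illustrative only: the field A below is assumed, not the one from Example 1.2.
import sympy as sp

x, y, z = sp.symbols("x y z")
A = (x * y, y**2 * z, x * z)   # assumed example field

# Divergence = dAx/dx + dAy/dy + dAz/dz, as defined above.
divergence = sp.diff(A[0], x) + sp.diff(A[1], y) + sp.diff(A[2], z)
print(divergence)                              # symbolic result: x + y + 2*y*z
print(divergence.subs({x: 3, y: 2, z: 1}))     # value at (3, 2, 1): 3 + 2 + 4 = 9
```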

2.3. Curl

The third key mathematical concept which we need to understand Maxwell’s Theory of Electromagnetism is curl, which is displayed by the del operator followed by a cross. Whilst divergence is the measure of the flow of a vector field, curl is the measure of rotation of a vector field about a specific point. That is - if you were to drop a twig into the “fluid” and fix its centre, would it rotate at all?

Whilst it is relatively intuitive to see whether divergence is positive or negative once you have its definition, curl is a little harder to determine. I will illustrate how positive and negative curl is chosen through a non-numerical example:

Example 2.1

We can see intuitively that there is a curl at point D just by imagining that the arrows represent a fluid flow as we did before for divergence. The rotation is clearly clockwise about D, but is that a positive or negative curl? Firstly, we need to determine the axis of rotation. As the curl is in the x-y plane, the axis of the curl is taken to be the z-axis. In addition to this, curl follows a right-hand rule, meaning if you point your right thumb in the direction of the positive axis about which the rotation is happening (the +ve z-axis in this case) then your fingers will wrap around the thumb in the direction of positive curl. For this example, the curl would be positive if it were anticlockwise, so the z component of the curl at D is therefore negative. v

The curl of a vector field A is defined mathematically as:

∇ × A = (∂Az/∂y - ∂Ay/∂z) x̂ + (∂Ax/∂z - ∂Az/∂x) ŷ + (∂Ay/∂x - ∂Ax/∂y) ẑ

Or in words: Curl of A = (How much spin in the y-z plane) x + (How much spin in the x-z plane) y + (How much spin in the x-y plane) z A note is that as curl is the measure of rotation of a 3-dimensional vector field, we get a 3-dimensional result.

An example of calculating curl mathematically of vector field H is as follows:

Example 2.2


To calculate the curl, we first need to work out all the partial derivatives needed:

Or it could be written in the form

This vector function for the curl can be evaluated at any point in space (x, y, z). Vx and Vy will always be equal to the constant -1, but it can be seen that Vz depends on the value of x at the point.
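As with Example 1.2, the article's field H is not reproduced here, so this sketch uses an assumed field purely to show how the curl formula above is applied component by component.

```python
# Illustrative only: the field H below is assumed, not the one from Example 2.2.
import sympy as sp

x, y, z = sp.symbols("x y z")
Hx, Hy, Hz = y * z, x * z, x * y * z   # assumed example field

curl = (
    sp.diff(Hz, y) - sp.diff(Hy, z),   # x-component: spin in the y-z plane
    sp.diff(Hx, z) - sp.diff(Hz, x),   # y-component: spin in the x-z plane
    sp.diff(Hy, x) - sp.diff(Hx, y),   # z-component: spin in the x-y plane
)
print(curl)   # -> (x*z - x, y - y*z, 0), up to term ordering
```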

2.4. Historical Context

It is also useful to understand the history leading up to Maxwell’s revolutionary work (published in 1873), with the most significant influence being the work of Michael Faraday, whose portrait was also hung on Einstein’s study wall.

Michael Faraday was born to a poor family and was not able to receive any sort of formal education. He managed to get a job as an assistant at the Royal Institution in London and was eventually allowed to conduct his own experiments. Through these experiments, Faraday found that moving a magnet inside a loop of wire would generate electricity in the wire, and the reverse: that a moving electric field generates a magnetic one. The relationship between electricity and magnetism was completely unknown at the time and these observations proved Faraday as one of the greatest experimentalists of all time.

Due to his lack of a formal education, Faraday did not have the mathematical fluency to describe these observations. He instead filled his notebooks with diagrams of lines of force, which looked like the patterns iron filings make when surrounding a magnet and which are now what we know as field diagrams. However, what many people will not know is that the entire concept of a field, one of the most important ideas in all of physics, was actually invented by Michael Faraday. The study of fields required a new branch of mathematics to complete calculations: vector calculus, taken up by the Cambridge-educated and mathematically gifted James Clerk Maxwell. It was Maxwell who was able to summarise the entirety of electricity and magnetism in four single equations, as he had the mathematical literacy that Faraday lacked. This was no easy feat, and the achievement is summed up perfectly in another quote from Einstein: "Any intelligent fool can make things bigger and more complex. It takes a touch of genius and a lot of courage to move in the opposite direction."

3. Maxwell’s Equations of Electromagnetism

Whilst I explained divergence and curl mathematically as well as conceptually, I will focus more on the physical meaning of the four equations themselves, as it is much more important to understand what equations actually mean instead of just being able to plug numbers into them. After explaining the conceptual meaning of all four laws, I will dive into the implications and importance of them on a wider, more general level.


3.1. The First Equation: Gauss’ Law

This equation essentially dictates how the electric field behaves around electric charges. Gauss' Law is:

∇ · E = ρ/ε0

where: E is the electric field

ρ is the electric charge density

ε0 is the permittivity of free space (8.85 x 10^-12 F m^-1)

The equation shown above is Gauss’ Law in point form, meaning for any point in space around a charge, the divergence of the electric field is directly proportional to the charge density at that point.

Gauss' Law can also be used to derive the equation for the magnitude of the electric field outside a spherical charge (note that E is not in bold, because it is a magnitude rather than a vector):

E = Q / (4πε0r^2)

This is an inverse square law, mathematically similar to Newton’s Law of Gravitation. Like gravity, the electric field drops off away from the surface of a charged surface in proportion to the square of the distance. i.e. the electric field is four times weaker if you move twice as far away from it. A note is that the electric flux through a surface is the amount of electric field that pierces that surface.
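As a small numerical check of that inverse square behaviour, here is a sketch using the point-charge formula above; the 1 microcoulomb charge is purely an illustrative value.

```python
# Numerical check of the inverse square law, E = Q / (4*pi*eps0*r^2).
# The charge Q is an illustrative value, not taken from the article.
import math

EPS0 = 8.85e-12      # permittivity of free space, F/m
Q = 1e-6             # assumed charge, coulombs

def field_strength(r: float) -> float:
    """Electric field magnitude (V/m) a distance r from a point charge Q."""
    return Q / (4 * math.pi * EPS0 * r**2)

print(field_strength(1.0))   # ~8990 V/m at 1 m
print(field_strength(2.0))   # ~2250 V/m at 2 m -- four times weaker, as stated above
```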

Gauss' Law can also be written in integral form, where it states that the net electric flux (Φ) through a closed surface S is given by:

Φ = ∮S E · dA = Qenc/ε0 (where Qenc is the total charge enclosed by S)

where the integration is carried out over the entire surface S. Here, Gauss’ Law states that the total electric flux exiting any volume is equal to the total charge inside it. So if there is no charge within a volume, the net electric flux leaving it is zero. A positive charge inside will result in a positive amount of electric flux leaving, and a negative charge will result in a negative amount leaving (essentially meaning the electric flux will enter the volume).

This form of Gauss’ Law explains that positively charged regions act like sources for the electric field, and negatively charged regions as sinks, as shown in the diagram on the left. A combination of the integral form of Gauss’ Law and the inverse square law explains the shape of the electric field around a positive charge next to a negative one, as shown. iii

3.2. The Second Equation: Gauss’ Law for Magnetism

The second of Gauss' Laws is similar to the first one for electric fields, only here it states that the divergence of the magnetic field B at every point is equal to zero:

∇ · B = 0


or in integral form:

∮S B · dA = 0

If we think of the magnetic field as a fluid flow once again, then this law essentially says that the fluid would be incompressible, acting just like water with no sources and no sinks. Another conclusion that can be drawn from this equation is that magnetic monopoles (a north or south pole of a magnet by itself) cannot exist, as they would cause a non-zero divergence of the magnetic field. If you chop a bar magnet in half, you always end up with north and south poles in both halves; there is no magnetic equivalent of a positive or negative charge sitting by itself as there is in an electric field.

Similarly to electric fields however, Gauss’ Law for Magnetism describes a very similar shape and strength for the magnetic field around a magnet: that being a closed loop from the north to the south pole as is observed when iron filings are placed around a bar magnet.

3.3. The Third Equation: Faraday’s Law of Induction

This is arguably the most famous of the four equations and is credited to Michael Faraday, as he was the first person to observe the experimental evidence for this law (as explained in section 2.4). Faraday's Law is formally written as:

∇ × E = -∂B/∂t

In this form, it is relatively easy to see what the equation means: the curl of the electric field E is equal to the negative of the rate of change of the magnetic field B. Or, even more simply, a changing magnetic field produces an electric field. Like all of Maxwell's Equations, Faraday's Law can also be written in integral form:

∮ E · dl = -dΦB/dt

This can be demonstrated through a simple diagram. iii The blue vector field represents the electric field, which clearly has a curl. Using the method described in section 2.3, we can determine that the curl is positive and about the z-axis. Therefore, using Faraday’s Law of Induction, we can see that the rate of change of the magnetic field (which is the yellow vector here) will be in the negative direction of the z-axis, as shown.

Many of the experiments conducted by Faraday involved magnetic coils and circuits. In the context of a coil, Faraday's Law can also be written as:

ε = -dΦB/dt

where ε is the total emf induced across the coil and ΦB is the magnetic flux within the coil.


So, in words, this form of the equation states that the magnitude of the total emf induced across the coil is equal to the rate of change of the magnetic flux through it, with the emf always induced so as to oppose the change in flux. The emf opposes the change because of the conservation of energy: if the induced current instead reinforced the change that created it, the flux and the induced emf would grow without any energy being supplied.
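As a minimal sketch of the coil form of the law, the snippet below computes the average emf for a flux that changes linearly with time; the numbers are purely illustrative.

```python
# A minimal sketch of emf = -d(phi)/dt for a linearly changing flux.
# The flux change and time interval below are illustrative values only.

def induced_emf(flux_change_wb: float, time_s: float) -> float:
    """Average induced emf (volts) for a given change in magnetic flux."""
    return -flux_change_wb / time_s

# Flux through the coil rises by 0.02 Wb over 0.5 s:
print(induced_emf(0.02, 0.5))   # -0.04 V; the minus sign encodes the opposing direction
```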

3.4. The Fourth Equation: Ampere’s Law

Ampere was experimenting with the forces on current-carrying wires around the same time that Faraday was working on his research in the 1820s. Neither scientist had any idea that around 50 years later their work would be unified by Maxwell in simple, elegant mathematical laws. Ampere's Law describes the opposite effect to Faraday's Law, and is therefore mathematically similar:

∇ × B = μ0J + μ0ε0 ∂E/∂t

Where: B is the magnetic field [T]

J is the electric current density [A m^-2]

E is the electric field [V m^-1]

μ0 is the magnetic permeability of free space (the magnetic constant) [1.26 x 10^-6 N A^-2]

ε0 is the electric permittivity of free space (the electric constant) [8.85 x 10^-12 F m^-1]

Visually, this is the most complex of the four equations, combining curl, the magnetic and electric fields and electric current density as well as both the electric and magnetic constants. However, the physical meaning of the fourth and final of Maxwell’s equations is no more complex than Faraday’s Law.

If we neglect all the physical constants and J, this equation is essentially just the opposite of Faraday's law. It states that the rate of change of the electric field depends on the curl of the magnetic field. Note that this time there is no negative sign before the partial derivative, which means that the direction of the rate of change of the electric field is the same as the direction of the curl of the magnetic field. For example, in the diagram on the left iii the curl of the yellow magnetic field is negative, and therefore so is the direction of the electric field.

4. Linking Concepts and Drawing Conclusions

4.1. Light

It has been shown both mathematically and conceptually that a changing magnetic field induces an electric field and vice versa. These effects had been observed experimentally, as explained in section 2.4, before Maxwell summarised them mathematically. However, his idea for the ages was: what if a changing electric field created a magnetic one, which then in turn created another electric field, and so on? The conclusion that Maxwell drew was that the result of this back-and-forth would be a moving wave, with constantly alternating electric and magnetic fields turning into one another. Using vector calculus, Maxwell calculated the speed of this new "electromagnetic" wave to be 310,740,000 m/s, which he was surprised to find was incredibly close to the speed of light (within experimental error). Maxwell was struck by this and wrote prophetically: "We can scarcely avoid the inference that light consists of the transverse undulations of the same medium which is the cause of electric and magnetic phenomena".
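The modern version of that calculation is short enough to show directly: the wave speed follows from the two constants that appear in the equations, c = 1 / sqrt(μ0 ε0). The sketch below uses the standard rounded values of those constants.

```python
# Maxwell's punchline in one line: the speed of electromagnetic waves from
# the two constants in his equations, c = 1 / sqrt(mu0 * eps0).
import math

MU0 = 1.2566e-6    # magnetic constant, N A^-2
EPS0 = 8.854e-12   # electric constant, F m^-1

c = 1 / math.sqrt(MU0 * EPS0)
print(f"{c:.3e} m/s")   # ~2.998e8 m/s, the measured speed of light
```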

4.2. Radio

This diagram iii is a way of visualising the constant alternating oscillations of the electric and magnetic fields, which Maxwell realised was a new type of wave. As a result of the electric and magnetic fields always being perpendicular to one another, electromagnetic waves were classed as “transverse” waves.

As is often the case in physics, discovery was followed by invention. Maxwell had found this new type of wave and had proved that light was one of its many forms. However, the next challenge was to artificially produce these Maxwell waves in the laboratory, which was first done in 1886 by Heinrich Hertz. He achieved this with a simple setup by having an electric spark generator in one corner of his lab and a coil of wire several feet away from it. When Hertz would turn on the spark, he was able to generate an electric current in the coil, thus proving that this new wave had travelled wirelessly from one place to the other.

The direct consequence of this experiment was released to the public less than a decade later in 1894, and forever changed human communication: radio. Today it is easy to take for granted being able to send and receive messages wirelessly over incredibly long distances at the speed of light, however when you stop to think about this there is something almost magical about it. We use the internet every single day for just about everything: working, socialising, storing data, ordering food, entertaining ourselves and so much more. This would all be impossible if not for the discovery of electromagnetic waves, the invention of radio and most importantly Maxwell’s Theory of Electromagnetism.

4.3. The Whole Spectrum

Visible light and radio are of course only two forms of electromagnetic radiation, and it did not take long for scientists to realise it; some forms, such as infra-red, had been observed long before Maxwell's time but could only be explained after his theory. In the present day, we utilise the entire electromagnetic spectrum in countless scenarios, from the most subtle of applications to entire machines which save countless lives daily. I am not going to go through every type of electromagnetic radiation and list all the ways they have been used over the years for humanity's benefit, as I am sure anybody can come up with a hundred examples just by looking around.

However arguably the greatest and most significant application of Maxwell’s equations was for the crucial purpose of powering the planet. Whilst traditional energy stores such as oil, coal and gas needed to be shipped across vast distances by various modes of transport, electrical energy can be sent over the same distances almost instantly through a network of wires and accessed at the simple flick of a switch.

This led to the legendary battle between Thomas Edison’s DC (direct current) and Nikola Tesla’s AC (alternating current) which took place in the late 1880s and early 1890s. The difference between the two options was that DC always moves in the same direction and never varies in voltage, whereas in AC power the direction of the current reverses usually fifty or sixty times per second.

Whilst I will not delve into the legendary "War of the Currents" in too much detail (there is a very good film all about it which I recommend), I think it is fair to say that the reason AC eventually won was Tesla's better grasp of Maxwell's Equations. The key to the war was in reducing the energy losses that engineers knew would occur over the many miles of wires.


Higher voltages meant lower currents and therefore smaller energy losses to heat in the wires, but the problem with Tesla's higher-voltage AC was that it was simply too dangerous to be brought directly into homes, whereas Edison's lower-voltage DC was much safer. The trick that Tesla used to solve this problem, and a genius application of Maxwell's equations, was the transformer. Because AC electricity is constantly changing, it can be converted into a changing magnetic field, which can then induce another alternating voltage, but at a lower value. DC cannot be transformed in this way because its voltage is constant and does not alternate. (As you will recall, the third and fourth equations state that electromagnetic induction only happens with a changing electric or magnetic field.) So the solution, which we still use today, was to use efficient, high-voltage cables for the long distances between power plants and cities, and then to transform the electricity down to much safer, lower voltages as it enters homes.
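To see roughly why high-voltage transmission wins, here is a small sketch: for a fixed power delivered, P = V x I, so raising the voltage lowers the current, and the heat lost in the cables scales as I squared times R. All of the numbers below are illustrative.

```python
# A rough sketch of transmission losses: current = P / V, loss = I^2 * R.
# Every number here is an illustrative assumption, not real grid data.

def cable_loss_watts(power_delivered: float, line_voltage: float, cable_resistance: float) -> float:
    """Power dissipated as heat in the transmission cable."""
    current = power_delivered / line_voltage     # I = P / V
    return current**2 * cable_resistance         # P_loss = I^2 * R

P = 10_000_000      # 10 MW delivered
R = 5.0             # assumed total cable resistance, ohms

for voltage in (10_000, 100_000, 400_000):
    loss = cable_loss_watts(P, voltage, R)
    print(f"{voltage:>7} V line: {loss/1e3:,.1f} kW lost to heat")
# Stepping the voltage up with transformers cuts the loss by orders of magnitude.
```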

4.4. Conclusion

By 1900, Maxwell’s Equations of Electromagnetism and their countless applications had brought about both a new fundamental understanding of nature as well as the huge economic prosperity that resulted from the birth of the electrical era. At the turn of the century, prominent scientists were proclaiming “the end of science” as it was thought that everything that could be discovered had been discovered. It seemed that the combination of Newton’s and Maxwell’s equations formed a “theory of everything”.

What physicists did not realise was that the work of these two giants of science was incompatible. However, one man would realise it soon enough and change the world. He was born the very same year that James Clerk Maxwell died (1879), and he would later hang the portrait of the great physicist on the wall of his study.

References

Merriam, A. (2020) The wall of Albert Einstein’s home bears the portraits of 3 scientists [online]. Last accessed: 27/07/2022. Available at:

https://www.cantorsparadise.com/the-wall-of-albert-einsteins-home-bears-the-portrait-of-three-eminent-scientists-f84d0c458dce

Maxwells-Equations.com (2012) Vector Functions [online]. Last accessed: 27/07/2022. Available at: https://www.maxwells-equations.com/vector-functions.php

3Blue1Brown (2018) Divergence and Curl [online]. Last accessed: 27/07/2022. Available at: https://www.youtube.com/watch?v=rB83DpBJQsE&t=814s&ab_channel=3Blue1Brown

Maxwells-Equations.com (2012) Divergence [online]. Last accessed: 27/07/2022. Available at: https://www.maxwells-equations.com/divergence.php

Maxwells-Equations.com (2012) The Curl [online]. Last accessed: 28/07/2022. Available at: https://www.maxwells-equations.com/curl/curl.php

Kaku, M. (2021), The God Equation, Great Britain, Allen Lane - Penguin Books

Baker, J. (2007), 50 Ideas You Really Need to Know: Physics, London, Quercus Editions Ltd

Halliday, D., Resnick, R., Walker, J. (2014), Fundamentals of Physics, 10th Edition, USA, Wiley

Maxwells-Equations.com (2012) Gauss' Law for Magnetism [online]. Last accessed: 29/07/2022. Available at: https://www.maxwells-equations.com/gauss/magnetism.php

UniversalDenker Physics (2019) The 4 Maxwell Equations [online]. Last accessed: 30/07/2022. Available at: https://www.youtube.com/watch?v=hJD8ywGrXks&ab_channel=Universaldenker%E2%9A%9BPhysics

The Current War (2017) [Netflix] Available at: https://www.netflix.com/gb/title/80192838?s=i&trkid=13747225&vlang=en&clip=81316498

General Relativity and the Mathematics behind it

Aaditya Nandwani

Introduction:

What is the most beautiful physics theorem of all time? A physicist, if asked this, would find it notoriously difficult to answer. The 19th and 20th centuries alone were among the most influential periods in scientific history: the vaults containing the most fundamental answers to the mechanisms of the universe were finally opened, releasing knowledge about light and electromagnetic waves, thermodynamics and quantum mechanics. However, if you were to ask a physicist this question, most would mention the theory of General Relativity. Just like Mozart's Requiem or Michelangelo's Sistine Chapel, many would describe GR as an undying work of art, unifying ideas about space (and its geometry) and finally giving a more precise answer to how the enigmatic "force" of gravity works. In this article, I hope to explain the development of General Relativity, how it works, and why its contributions are so important to physics.

Section 1: The development of General relativity:

An important part of the development of General Relativity was the equivalence principle, which bridged special relativity and General Relativity. This principle builds upon the original Newtonian ideas about gravity, acceleration and motion. Newton's second law states that the acceleration of an object is directly proportional to the net force acting on it, famously summed up by the equation F = mi a (where mi is the inertial mass and a is the acceleration); this equation describes how an object's motion changes when a force acts on it. However, Newton also wrote F = mg g, where mg is the gravitational mass: an object in free fall experiences a force equal to its gravitational mass multiplied by the acceleration of free fall (which is 9.8 m/s2). Einstein expanded on this and stated that the inertial mass (the mass in Newton's second law) is equal to the gravitational mass (the mass in the latter equation). This leads us on to Einstein's well-known elevator thought experiment. He imagined a person in an elevator whose cable snaps. The fate of this person is rather grim and unfortunate! However, Einstein focused only on the period in which the elevator is falling. When the elevator is in free fall, he postulated, it is impossible for the person inside to tell the difference between gravitational effects and acceleration effects.

Einstein then used this thinking to expand on the equivalence principle, stating that the effects of acceleration and gravity are "locally indistinguishable". The word "locally" is significant: it means the statement holds in a small region of space, and it shows that Einstein was starting to draw a distinction between behaviour in small regions of space and behaviour over larger regions. This work was a key milestone in his development of the field equations, the script, if you will, for the theory of General Relativity.

Figure 1.1: An image illustrating the elevator thought experiment
Figure 1.2: Newton's 2nd law of motion: a piece of the general relativity puzzle.

Section 2: General Relativity

In the introduction, it was mentioned that General Relativity is considered by many to be the most beautiful theorem of all time, partly due to its ability to answer questions left open by Newton's work. General Relativity consists of two main ideas: space-time as a 4-dimensional "fabric", and matter's interaction with this space-time.

Idea 1: Space-time:

Before Einstein's work, space was described using a different form of mathematics called Euclidean geometry, which describes the space around us using a Cartesian co-ordinate system in which, roughly speaking, the position of a point is specified by its distances along a set of perpendicular axes. Space-time, however, is a wildly contrasting idea to this original branch of mathematics! Originally, as described, space was believed to consist of only three dimensions: up and down, forwards and backwards, left and right. In the early 20th century, mathematicians and physicists (most notably Hermann Minkowski) added another dimension: time. Space is therefore no longer a static three-dimensional structure, but rather a four-dimensional flexible "fabric", consisting of the three spatial dimensions fused with one time dimension. This fascinating addition would allow Einstein to work on geodesic mathematics, which would eventually contribute to the final theory of General Relativity. (Einstein's Nobel Prize, awarded in 1922, was in fact given for his work on the photoelectric effect rather than for relativity.)
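One way of making this fusion precise is the space-time interval of special relativity, quoted here in its standard form for reference (with c the speed of light):

    ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2

Two events separated in both space and time are assigned a single "distance" ds that all observers agree on, which is what justifies treating time as a genuine fourth dimension rather than a separate bookkeeping label.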

To understand geodesics, it must first be made clear that if an object is not accelerating, it will follow the shortest possible path through space-time. An interesting thought experiment can be conducted: imagine two scenarios, in each of which a pair of people walk side by side. The first scenario is on a flat plane. If their paths start out parallel, then no matter how long they walk, their paths can never converge. Now imagine the second scenario, in which they are (hypothetically, of course!) walking on the surface of a sphere. Each is still walking in as straight a line as the surface allows, yet their paths do indeed converge. The lesson is that all non-accelerating objects move in "straight lines", but those straight lines are distorted when the space they move through is warped.

Idea 2: how does matter cause space-time to be warped?

You may be aware of Newton's gravitational theory, one tied to many myths and mysteries, most commonly regarding the apple! It states that "Every particle attracts every other particle with a force that is proportional to the product of their masses and inversely proportional to the square of the distance between their centres". This statement describes a mathematical formula which summarises the Newtonian view of gravity: that it is a non-contact attractive force between objects with mass, whose strength depends on how massive those objects are. Although this works well for large objects (such as planets orbiting stars) and for "ordinary" situations, for example why a ball falls when we drop it, the theory starts to break down in extreme situations, such as very strong gravitational fields or speeds approaching that of light. Therefore Einstein, using features of his Special Relativity and the Equivalence Principle, created a new definition of gravity. Instead of treating gravity as a force (a mechanical influence that affects the motion of an object), he made a revolutionary observation: gravity is simply the curvature of space-time. This provided a mechanism for why objects with mass are attracted to each other, and even explains why the attraction between more massive objects is stronger. Einstein described space-time as flexible (so it can be twisted, bent or warped), and therefore a large object such as the Earth deforms the space-time around it. The falling of the apple is then explained not by a force but by geometry alone. If the apple sat in a sheet of completely empty, flat space-time and were given a push, it would continue in a straight line forever. When the apple is dropped near the Earth, however, it follows a geodesic (as described earlier, the "straight line" is contorted by the warped space-time around the Earth), and so the apple is "attracted" towards the Earth.

Figures 2.1 and 2.2: a four-dimensional model of space-time versus the Euclidean 3D model of space; a diagram showing the concept of a geodesic.
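Written as a formula (a standard restatement, with G the gravitational constant and r the distance between the two centres), Newton's statement says:

    F = \frac{G m_1 m_2}{r^2}

so doubling either mass doubles the force, while doubling the separation cuts the force to a quarter.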

Mathematics of General Relativity, Section 1: Geodesic Mathematics

The first piece of mathematics which must be tackled in order to understand GR is geodesic mathematics. As described earlier, a geodesic is a curve representing the shortest path between two points on a surface. When dealing with the trajectories of objects in curved space-time, we can look at the velocity of the object at a given time and position and then "transport" this velocity along the trajectory to work out its shape (and therefore the geodesic). This essentially means that the velocity vector does not change, so the "natural" movement of bodies is non-accelerating. The velocity can be written in the form sketched below: the velocity vector is the sum of its components multiplied by the corresponding basis vectors. The next step involves manipulating derivatives using the product rule.
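In the standard notation (writing x^\mu for the co-ordinates, \tau for the time measured along the path and e_\mu for the basis vectors), the idea is:

    v = v^\mu e_\mu, \qquad v^\mu = \frac{dx^\mu}{d\tau}

and "transporting" the velocity unchanged along the trajectory amounts to demanding

    \frac{dv}{d\tau} = \frac{d}{d\tau}\left(\frac{dx^\mu}{d\tau}\, e_\mu\right) = 0.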

We can say that the derivative of a product is equal to the sum of each term multiplied by the derivative of the other (the product rule), which lets us expand the condition above. From this manipulation we obtain an equation linking the rate of change of the components of the velocity vector, the rate of change of the basis vectors and the overall rate of change of the velocity; we can now focus on the basis vectors.
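Expanded with the product rule, that condition becomes:

    \frac{d^2 x^\mu}{d\tau^2}\, e_\mu + \frac{dx^\mu}{d\tau}\, \frac{de_\mu}{d\tau} = 0

with the first term tracking how the components change and the second tracking how the basis vectors themselves change along the path.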

- The basis of a vector space can be simply described as a set of vectors from which every vector in that space can be built; the coefficients in that combination act as the co-ordinates of the vector.

We can then look at the rate of change of the basis vectors (each represented by e), exploring how they change with respect to x, the position co-ordinates (see Figure 3.1).

Exploring this introduces a new tool, the Christoffel symbol, which is crucial to understanding General Relativity: it describes how the co-ordinate grid of space-time changes from point to point.
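In symbols, using the standard definition, the Christoffel symbols \Gamma record how each basis vector changes as we move a little way along each co-ordinate direction:

    \frac{\partial e_\mu}{\partial x^\nu} = \Gamma^\lambda_{\mu\nu}\, e_\lambda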

Finally, using the Christoffel symbols in the equation we had earlier, we arrive at the geodesic equation.
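Substituting the Christoffel symbols into the transport condition gives the geodesic equation in its usual form:

    \frac{d^2 x^\lambda}{d\tau^2} + \Gamma^\lambda_{\mu\nu}\, \frac{dx^\mu}{d\tau}\, \frac{dx^\nu}{d\tau} = 0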

That is the beauty of physics. An equation no longer than a quarter of a line allows us to predict the trajectory of an object moving through space-time!

Mathematics of General Relativity, Section 2: The Einstein Field Equation

As described earlier, the second great pillar of General Relativity is the Einstein Field Equation. However, for us to understand and manipulate it, each component must first be analysed.
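In its standard form (ordered here to match the descriptions below), the equation reads:

    R_{\mu\nu} - \frac{1}{2}\, g_{\mu\nu} R = \frac{8\pi G}{c^4}\, T_{\mu\nu}

where the left-hand side is built from the geometry of space-time and the right-hand side from its matter and energy content.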

- A tensor can be described as a mathematical object which has “an arbitrary yet defined number of indices”

1. The Ricci Tensor, represented by R_{\mu\nu} (the symbol on the extreme left), is a tool which allows physicists to represent the difference between volumes in curved space and in ordinary Euclidean space (mentioned earlier in the article)

Figure 3.1: A diagram showing how the basis vector changes with time.

2. The Metric Tensor, represented by g_{\mu\nu} (the symbol to the right of the ½), is a tensor which allows physicists to calculate small distances between points on a surface

3. The Ricci Scalar (on the right of the metric tensor) characterises the average curvature in all directions

4. The last component we must analyse is the energy-momentum tensor, T_{\mu\nu} (the right-hand side of the equation), which describes the energy and momentum of matter

Interpreting the equation: the left-hand side describes how space-time is curved, while the right-hand side describes the matter and energy it contains. In the famous shorthand, matter tells space-time how to curve, and curved space-time tells matter how to move. In summary, these two equations (the geodesic equation and Einstein's field equation) allow us to model both how matter curves space-time and how objects move through that curved space-time.

Applications of General Relativity: Black Holes

Black Holes are among the most revered yet mysterious monsters in the Universe, swallowing any object unfortunate enough to stray within a critical distance. A Black Hole is a region of space-time in which the gravitational pull is so strong that not even light (which travels at 3 × 10^8 m/s in a vacuum) can escape it. A Black Hole has an Event Horizon (which can be thought of as the "mouth" of the Black Hole), whose existence was first postulated by the physicist Karl Schwarzschild. As an object gets closer to the Black Hole, the escape velocity (the velocity required to escape the gravitational pull) increases, until at the Event Horizon it reaches the speed of light, and inside it exceeds the speed of light. As we know from Einstein's theory of Special Relativity, it is impossible for any object to travel faster than 3 × 10^8 m/s, and thus anything that finds itself within the Event Horizon cannot escape. The Black Hole also contains the Singularity (which can be thought of as the "stomach" of the Black Hole, where all the matter and energy that falls in accumulates). From General Relativity we know that matter causes space-time to warp; General Relativity also predicts that at the singularity space-time is warped to an infinite degree, which would mean the density there is infinite! Although this may seem counterintuitive, and some might therefore doubt that Black Holes exist, they have been observed: in 2019 the first ever picture of a black hole was taken by the Event Horizon Telescope!
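A rough way to see where the Event Horizon sits, using the classical escape-velocity argument rather than the full relativistic calculation, is to ask at what radius the escape velocity reaches the speed of light:

    v_{esc} = \sqrt{\frac{2GM}{r}} = c \;\Rightarrow\; r_s = \frac{2GM}{c^2}

This is the Schwarzschild radius; for an object with the mass of the Sun it is only about 3 km, which gives a sense of how extreme the compression must be for a black hole to form.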

What is the Future of General Relativity?

Despite General Relativity being one of the most successful and significant theories in all of physics (and despite experiments detecting gravitational waves, first observed in 2015 and announced in 2016, providing striking confirmation of GR), there are still some strands of the story which are incomplete. General Relativity predicts the existence of a singularity at the centre of black holes (a region of infinite density); however, at the singularity many of the laws of physics we currently have start to break down. Another example is at the quantum level, where one can only know the probability of an outcome, rather than the outcome itself. This greatly contradicts General Relativity, which relies on the "continuous" and "deterministic" nature of events, in which one event clearly causes another. Black holes are a perfect case where these two frameworks seem to clash dramatically: General Relativity states that information can be destroyed in a Black Hole, whereas Quantum Mechanics states that it is impossible for information to be destroyed. So, what does the future hold for General Relativity? While some say that GR is incomplete and must be abandoned, many are seeking a theory which elegantly ties together General Relativity and Quantum Mechanics (new strides in String Theory, for example, seem particularly promising). Therefore, although it seems impossible to answer the question of this theory's future, it is possible to answer an equally important question. Is it the most beautiful theory of all time? Despite its flaws, General Relativity continues to inspire and excite physicists across the world. A theory which initially set out to revolutionise gravity has managed to establish itself as a backbone of physics, providing an entirely new and refreshing outlook on the Universe. As Hermann Bondi once said, "A theory is only scientific if it can be disproved". Will this colossal pillar ever be broken? Only time can tell.

Figure 4.1: An image of a Black Hole taken by the EHT (Event Horizon Telescope) in 2019.

Sources and references:

Bibliography: General Relativity article: Hossenfelder, Sabine. “What Is Einstein’s Equivalence Principle?” YouTube, 1 Aug. 2020, www.youtube.com/watch?v=vng2-R64rAY. Accessed 15 Nov. 2022. Helped me to understand the basics of what the Equivalence Principle is.

Possel, Markus. "The Elevator, the Rocket, and Gravity: The Equivalence Principle." Einstein-Online.info, 9 Jan. 2005, www.einstein-online.info/en/spotlight/equivalence_principle/. Accessed 17 Nov. 2022. Explained what the elevator thought experiment was in more detail and also how it was used to show the equivalence principle.

Arvin Ash. "General Relativity Explained Simply & Visually." YouTube, 20 June 2020, www.youtube.com/watch?v=tzQC3uYL67U. Accessed 18 Nov. 2022. Gave a basic summary of what space-time was and what General Relativity says about space-time.

Mann, Adam. "What Is Space-Time?" Livescience.com, Live Science, 19 Dec. 2019, www.livescience.com/space-time.html. Accessed 21 Nov. 2022. Helped me to understand what space-time is and why it is considered as 4 dimensional.

Figure 5.1: Quantum mechanics: a branch of physics concerned with the subatomic levels. On the left is an image illustrating quantum entanglement, a particularly fascinating branch of Quantum mechanics.

ScienceClic English. “The Maths of General Relativity (3/8)- Geodesics.” YouTube, 8 Dec. 2020, www.youtube.com/watch?v=3NnZzRb7L58. Accessed 22 Nov. 2022. Explained what geodesics are; also provided me with information for the section titled: Mathematics of General Relativity.

Hoang, Lê Nguyên. "Spacetime of General Relativity." Science4All, 2 June 2013, www.science4all.org/article/spacetime-of-general-relativity/. Accessed 27 Nov. 2022. Explained how space-time is warped (due to gravity) and also provided basics for the Mathematics of General Relativity.

DeCross, Matt, et al. “General Relativity | Brilliant Math & Science Wiki.” Brilliant.org, brilliant.org/wiki/general-relativity-overview/. Accessed 1 Dec. 2022. Gave an in-depth explanation of the mathematics of the Einstein Field Equation and the Geodesic equation.

ScienceClic English. “Maths of General Relativity (7/8)- the Einstein Equation.” YouTube, 5 Jan. 2021, www.youtube.com/watch?v=PCujLVSRuMk. Accessed 1 Dec. 2022. Provided an explanation of what the Einstein Field Equation is, and what the components of the equation are and their meaning.

Nola Taylor Redd. “Black Holes: Facts, Theory & Definition.” Space.com, Space.com, 11 July 2019, www.space.com/15421-black-holes-facts-formation-discovery-sdcmp.html. Accessed 3 Dec. 2022. Provided a basic description of what Black Holes are.

“Anatomy | Black Holes.” NASA Universe Exploration, universe.nasa.gov/black-holes/anatomy/. Accessed 4 Dec. 2022. Provided an explanation of the parts of a black hole and how they work.

“Unifying Quantum Mechanics with Einstein’s General Relativity.” Research Outreach, 19 Dec. 2019, researchoutreach.org/articles/unifying-quantum-mechanics-einstein-general-relativity/. Accessed 5 Dec. 2022. Provides information whether there have been attempts to combine General Relativity with Quantum Mechanics.

Powell, Corey. “Relativity v Quantum Mechanics – the Battle for the Universe.” The Guardian, 4 Nov. 2015, www.theguardian.com/news/2015/nov/04/relativity-quantum-mechanics-universe-physicists#:~:text=In%20general%20relativity%2C%20events%20are. Accessed 6 Dec. 2022. Shows how General Relativity contradicts with Quantum Mechanics.


Why Laplace’s demon doesn’t work

Bright Lan

According to Google, randomness is "the quality or state of lacking a pattern or principle of organization; unpredictability." In our society, we rely on the idea of randomness for countless things, from board games to military drafts. Randomness prevents bias, helps ensure that decisions are fair and justifiable, and keeps our lives interesting. But are things actually random? Take something such as a coin flip, for example. Coin flips are viewed by almost everyone as a fair way to decide things, such as who serves first in tennis, because of their seemingly random nature.

However, coinflips aren’t actually random if you think about it. If we knew the initial conditions, the force the coin was thrown with, and countless other precise details about the flip, then technically we could calculate which side the coin would land on before it had even landed. In fact, coin flipping machines have even been built to ensure the same result every single time.

Now imagine this concept of predictability on a completely different scale: not just a single coin, but the entire universe. Yes, it sounds bizarre, but this is what the French scholar Pierre-Simon de Laplace suggested in 1814 – that if a "demon" of some sort knew the precise location and momentum of every single atom in the universe, it would be able to calculate both the past and the future at any time by using the laws of classical physics. This theory was a major issue, because if it were indeed true, it would mean that we humans have no free will: anything that has taken place or will take place would be completely predetermined. The theory does make some sense after thinking about it, and at the time it sparked a huge controversy which led to many other developments in science made in order to disprove it. Here, I will attempt to explain three important arguments against Laplace's demon.

The first argument is that Laplace's demon is incompatible with quantum mechanics. One reason is Heisenberg's Uncertainty Principle. The principle states that we cannot measure BOTH the location and the momentum of a particle with absolute precision: the more precisely we measure the location of a particle, the less accurately we can know its momentum, and vice versa. This means that Laplace's demon cannot possibly exist, as it would never be able to know both the momentum and the location of any atom. This argument also reinforces the butterfly effect argument, which we'll cover later.
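Stated quantitatively (the standard form of the principle, included here for reference), the uncertainties in position and momentum always satisfy:

    \Delta x \, \Delta p \geq \frac{\hbar}{2}

so the more tightly we pin down a particle's position (small \Delta x), the larger the unavoidable spread \Delta p in its momentum becomes, and these are precisely the two numbers Laplace's demon would need for every particle.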

Quantum mechanics also states that there are certain things which are completely impossible to predict. For example, radioactive decay – it is impossible to predict when a specific radioactive atom will decay. The reason is that at the subatomic level, the outcome is determined by, well, nothing. In other words, it's impossible to predict something when there is no underlying information to predict it from. Everything we know at the quantum level is probabilistic – there is a chance of one thing happening and a chance of another.
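For a single radioactive atom, all quantum mechanics offers is a probability. With decay constant \lambda, the chance that the atom has decayed by time t is (a standard result, added here as an illustration):

    P(\text{decayed by time } t) = 1 - e^{-\lambda t}

which tells us nothing at all about the exact moment any particular atom will go.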

A very good demonstration of this in action is a slight variation of the famous double-slit experiment. In the experiment, light is passed through a barrier with two slits in it. In our variation, we place detectors at each slit to see which slit each photon goes through. Since we have no other information, we cannot perform any calculation, and it is impossible to predict which slit a given particle will pass through.

Another potential argument against Laplace’s demon is the butterfly effect. To explain it simply, the butterfly effect is when small changes in the initial conditions can amplify and result in a huge change over time.

I'll give you a historic example: in the 13th century, Pope Gregory IX had a strong disliking for cats and ordered them to be exterminated, claiming that they were associated with the devil. Many cats were killed as a result. But this change would have a much bigger impact over time: because there were fewer cats, fewer rats were eaten. The rat population grew, so the bubonic plague (the Black Death) could spread much further, causing countless deaths. Now, this can also be applied to Laplace's demon, and this is where the argument has two sides. If the demon's knowledge of the momentum and location of every particle were not infinitely accurate, then the slight variation between its knowledge and the actual properties of the particles would be amplified over time, and any of the demon's predictions would become completely inaccurate. We can even see this in weather forecasts – even with today's incredible technology, it is almost impossible to forecast the weather accurately more than about 10 days in advance, because the small flaws in our data amplify over time into huge inaccuracies. Admittedly, this argument fails if we assume that Laplace's demon knows everything infinitely accurately, because then there would be no variation at all to amplify – but that assumption is itself impossible, due to Heisenberg's principle, as explained in the previous paragraph.
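A quick numerical sketch makes this amplification concrete: below, two runs of the logistic map, a textbook example of a chaotic system, start from values differing by only one part in a billion, yet after a few dozen steps their paths bear no resemblance to one another.

    # Two almost identical starting points diverge under a simple chaotic rule.
    def logistic(x, r=4.0):
        # One step of the logistic map; r = 4 puts it firmly in the chaotic regime.
        return r * x * (1.0 - x)

    a, b = 0.200000000, 0.200000001   # initial values differing by only 1e-9
    for step in range(50):
        a, b = logistic(a), logistic(b)

    print(abs(a - b))   # typically of order 0.1 to 1: the tiny difference has blown up

The same behaviour in the equations governing the atmosphere is why the small errors in a forecast's starting data swamp its predictions within a couple of weeks.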

The final argument against Laplace's demon comes from the laws of thermodynamics. In thermodynamics there are reversible and irreversible processes. For a process to be reversible, it would need to return the system exactly to where it was before the change, without leaving any change in the rest of the universe. This simply isn't possible – reversible processes are merely theoretical. For example, if I leave some ice out in the sun, it will melt – but it won't return to its frozen form without some external change. The underlying reason is the 2nd law of thermodynamics – the entropy of an isolated system cannot decrease. In this case, the entropy of the water is higher than that of the ice.
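In symbols (standard statements, included for reference): the second law says that for an isolated system the entropy change satisfies \Delta S \geq 0, and for the melting ice the entropy gained is roughly

    \Delta S = \frac{Q}{T}

where Q is the heat absorbed and T the melting temperature. Reversing the melt would require that entropy to vanish again without any other change in the universe, which the second law forbids.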

Irreversible processes, as the name suggests, cannot be reversed, and they make up essentially all the processes that actually happen in our universe. This completely disproves Laplace's demon, which relies on the premise of reversibility – supposedly being able to reconstruct the past from the present.

In conclusion, although Laplace's demon is widely regarded as an incorrect theory, it is still an extremely important one, because of the very need to disprove it. It has led to countless developments in science, especially in the field of thermodynamics, and has sparked debates all around the world in both physics and philosophy. Even in recent years, people are still coming up with new arguments against it, which clearly shows its significance. But if, somehow, free will were indeed just an illusion, as in Laplace's theory, you still shouldn't lose any sleep over it – just keep on living your life and have fun.

Sources:

“Laplace’s demon”

https://en.wikipedia.org/wiki/Laplace%27s_demon Accessed on 18/06/22

Unknown published date and author

Looking Glass Universe (25/02/15) :“Quantum randomness”

https://www.youtube.com/watch?v=hGGb0nGTPLk Accessed on 19/06/22

Derek “Veritasium” Muller (06/12/19) : “Chaos: The Science of the Butterfly Effect”

https://www.youtube.com/watch?v=fDek6cYijxI Accessed on 18/06/22

Michael “Vsauce” Stevens (16/07/14) : “What is Random?”

https://www.youtube.com/watch?v=9rIy0xY99a0 Accessed on 17/06/22

Tangerine Education (12/05/18): “Reversible and Irreversible Thermodynamic Processes”

https://www.youtube.com/watch?v=hpur62rjYuw Accessed on 19/06/22

Also many thanks to Aashman Kumar and Dr Corlett for helping with some of the parts I struggled to understand


The Shrouded History of Women in STEM

The history of women's involvement in the field of engineering dates back to the advent of the Industrial Revolution. Despite the notable contributions made by women to the discipline, their efforts have historically been undervalued and overlooked. However, this has not deterred the determination and resilience of women in the field, who have continued to persevere and break down societal barriers, paving the way for future generations. While the representation of women in engineering has improved in recent years, gender discrimination and bias remain persistent challenges. The ongoing pressures of a gender pay gap and a lack of representation in leadership positions serve as a reminder that there is still much work to be done in the pursuit of gender equality in this field. Nevertheless, the tenacity and impact of women in engineering serve as a source of inspiration and motivation to strive towards a more diverse and inclusive future. Within this article, I will seek to represent early female pioneers in STEM: Ada Lovelace, Katherine Johnson and many more, highlighting the importance of their creations and the absurdity of how their skills have been shrouded.

Throughout history, there have been many instances where women’s contributions to the field of engineering have been overlooked or credited to men. Ada Lovelace’s contributions to the field of computer science went largely unnoticed until the 20th century. Despite her groundbreaking work on Charles Babbage’s Analytical Engine, her achievements were not widely recognised or acknowledged until much later. Lovelace’s collaboration with Babbage resulted in the creation of the world’s first computer program, yet her contributions were often overshadowed by those of her male collaborator. Lovelace’s ideas about the potential uses of the Analytical Engine, particularly in the realm of music and art, went far beyond the capabilities of the machine itself, and her insights into the future of computation helped lay the foundation for modern computer science. However, despite her significant impact on the field, Lovelace’s contributions were not widely recognised or appreciated during her lifetime. In fact, her work was largely ignored for over a century after her death, as the field of computer science was dominated by men and the study of her work was not a priority. It was not until the mid-20th century, when computer science as a discipline began to emerge and gain recognition, that the full extent of Lovelace’s contributions to the field became widely recognised and acknowledged. Today, Ada Lovelace is considered an important figure in the history of computer science and a symbol of women’s achievements in a field that was, and remains, dominated by men.

Furthermore, Chien-Shiung Wu was a trailblazing physicist who made significant contributions to the Manhattan Project and the study of radioactive decay. Despite her key role in these experiments, her contributions were credited to her male colleagues, who went on to win the Nobel Prize in Physics. This is a clear example of the systemic gender bias that has persisted in the field of science and technology, where women's contributions have often gone unnoticed, and their work has been credited to their male colleagues. Wu's experimental work was crucial to the discovery of the violation of parity, a fundamental principle in physics, yet her name was not mentioned when her male colleagues were awarded the Nobel Prize in Physics. This unjust situation highlights the persistent barriers that have prevented women from being recognised for their achievements in science and technology and the need for further efforts to promote gender equality in these fields. Despite these obstacles, women like Chien-Shiung Wu have persevered and made significant contributions to the field, inspiring future generations of women to pursue careers in science and technology and paving the way for more women in STEM.


Finally, Katherine Johnson was an American mathematician and aerospace engineer who made significant contributions to the United States’ aeronautics and space programs during the 20th century. Despite her key role in several important missions, including the first American human spaceflight, Johnson’s work was not widely recognised until recent decades. This is due, in part, to the systemic barriers that have prevented women, particularly women of colour, from pursuing careers in engineering more broadly. Johnson’s experience is a testament to the resilience and determination of women who have overcome systemic barriers to make significant contributions to the field of science and technology. Her legacy serves as an inspiration for future generations of women and girls who aspire to pursue careers in these fields and will encourage them to take an active stance in proving their own worth.

In conclusion, it is crucial that we continue to work towards a more equitable future for women in engineering and all STEM fields. This requires addressing the systemic barriers that have prevented women from pursuing careers in these fields and promoting policies and initiatives that support women’s advancement and recognition in STEM. By doing so, we can ensure that the contributions of women in engineering are no longer shrouded in obscurity but are instead celebrated and recognised for their vital role in shaping our technological future.


Imperfect Harmony

Jazz is perhaps the musical embodiment of progressive change. Its roots as African-American dreams of freedom improvised and intermingled with mainstream and white Folk music, the technicalities of the music itself, and its experimental pioneers such as Miles Davis, all contribute to its image of challenging established ideas of the time, both musical and social. Yet the genre’s desire for change did not extend into the realm of feminism, and surprisingly, the relationship between the two was, and to some extent still is, very problematic - one of imperfect harmony.

Jazz music is a musical genre developed in the late 19th to early 20th centuries by African-American communities in New Orleans. It has roots in blues and ragtime, embodying certain ideals of freedom and independence through which it evolved, in the form of improvisation, swung rhythms and complex extended chords, to name but a few characteristic features. In the words of immortal pianist Duke Ellington, "the music is so free that many people say it is the only unhampered, unhindered expression of complete freedom yet produced in this country". When considering this background of challenging oppressive societal ideals, and the fact that the peak of the Jazz age (the 1910s-20s) heavily intertwined with first wave feminism, which included the Suffrage Movement, the genre was the perfect candidate to embrace such movements, and actively support women in gaining autonomy stripped from them by the patriarchal society of the time. However, this was not the case, with the genre even actively excluding aspiring female musicians, resulting in little to no representation of women in the industry, the history of talented female artists being suppressed, and, even today, a very hostile environment for any who are attempting to discover the world of jazz.

A quick glance at the top 100 Jazz artists on 'Rate Your Music' (a website where the rankings are based on users' reviews) would be enough to visually describe the issue. Out of one hundred musicians, American pianist and harpist Alice Coltrane (Figure 1) is the only female musician, occupying the number 39 spot. The simple explanation for this is an extreme lack of representation. Exclusion of women from the Jazz industry whilst it was still developing has transformed it into a male-dominated one, where talented female artists are hidden from the public eye, and in turn fewer women are encouraged into the industry, resulting in the vicious cycle we now find ourselves in. There are several reasons responsible for the lack of female musicians. Primarily, jazz has always been deemed a 'wild' genre, with its free improvised forms and fast tempos commonly found in subgenres such as bebop, and any female jazz instrumentalists were therefore considered unfeminine and disregarded. Furthermore, jazz began in late-night bars and the red-light district of New Orleans, which presented these musicians with a daunting decision between passion and dignity, and as Susanne Vincenza of all-female jazz band 'Alive!' summarised, "What 'real lady' was going to be part of that scene? It was not proper." Finally, due to the complexity of the music, heavily built on complicated musical theory completely distinct from other genres, there was a common belief that jazz was too complex for women to understand and excel in, a belief constantly reinforced by influential figures such as Marvin Freedman of the Downbeat magazine, who stated, "there are two kinds of women, those who don't like jazz music and admit they don't, and those who don't like jazz music but say they do".

Figure 1

As a result of the above common archaic beliefs, women were alienated from these very much patriarchal jazz environments, and the very few involved were viewed as masculine and were significantly less popular primarily due to their gender, an example being pianist Mary Lou Williams, who only gained more recognition due to her perceived status as "one of the guys". This begs the question, why should female instrumentalists have to make a choice between their femininity and their careers? An interesting nuance is that whilst instrumentalists struggled to find recognition, female vocalists were far more successful, with vocalists such as Billie Holiday and Ella Fitzgerald becoming names intrinsic to the genre. Potentially, singing was considered more feminine, and certainly the standards in their repertoire were far less 'wild', with slower tempos and less disjunct chord progressions. Even the lyrics sung were designed to appeal to a male audience, with a notable example being the early jazz ballad 'Black Coffee', which contains the lyrics "Now man is born to go loving. A woman's born to weep and fret and stay at home and tend her oven", which speak for themselves as problematic and highly supportive of the rigid gender roles feminists were attempting to dismantle at the time.

One example of a talented female Jazz musician practically erased from the records was Vi Redd. Vi Redd was a saxophonist and vocalist, born in Los Angeles in 1928, whose popularity peaked in the 1960s. An extremely talented musician, she embarked on several tours, including a trip to London to perform at the prestigious Ronnie Scott's, and was the first instrumentalist to headline a jazz festival. However, she faced much gender-based criticism, and was viewed more as a vocalist in spite of her undeniable prowess on the alto sax. In the Los Angeles Sentinel's report of the aforementioned jazz festival, they described her as an "attractive young girl alto sax player," and barely touched on the actual music produced. Perhaps more absurd is the condescending comparison of the then 34-year-old mother-of-two to a 'young girl'. Despite her immense talent, many record companies were reluctant to invest in female artists, and she recorded only three albums, including the brilliant 'Bird Call' (see figure 2), with most recordings quickly going out of print. Certainly, Vi Redd is just one example of how female jazz instrumentalists remain largely invisible to jazz history.

Evidently, the history of Jazz is one blatantly conforming to patriarchal beliefs, but has this changed in modern times? Certainly, there are more commercially successful female jazz artists than at the peak of the jazz age, and even today this number is increasing, but the issue of underrepresentation has by no means been completely resolved, with women only making up 16% of the core personnel of jazz albums produced in 2021 (see figure 3). The opportunities for aspiring female musicians are broadening, but due to the relative absence of prominent female figures in the jazz industry, we may be finding ourselves in a vicious cycle where there are no role models to look up to, and therefore a lack of desire to explore the world of jazz. Even then, whilst they may not be as widespread, archaic beliefs about the capability of women to perform and listen to jazz are still very much present, as demonstrated by the following quotes. A 2012 copy of the 'Downbeat' magazine declared that 33-year-old saxophonist Hailey Niswanger "has the power to be one of the best female alto saxophonists in the country, if not the world". Whilst this does indicate a positive shift in the industry, one of increasing opportunity and exposure for female artists,

Figure 2 Figure 3

it also suggests that Niswanger is limited, and defined by her gender, only with the capability to be a 'female alto saxophonist,' rather than just an 'alto saxophonist.' More shocking are pianist Robert Glasper's words from a 2017 interview, claiming that "When you hit that one groove and stay there, it's like musical clitoris. You're there, you stay on that groove, and the women's eyes close and they start to sway, going into a trance." He faced extreme backlash on social media for this and other comments, but it does expose the archaic belief that women can only appreciate jazz on an erotic level, nothing deeper. As NPR's Michelle Mercer reveals, "To be a female jazz fan and critic is to live with a frustrating irreconcilability: I have an intellectual passion for creative, complex music and, sometimes, the musicians who make that music doubt my ability to appreciate its creativity and complexity."

Whilst the background of jazz is one of shocking misogyny and female exclusion, there has been some positive change, with increased female opportunity, but inevitably, mindsets which oppose female participation in the genre do still exist, and have no place in the wonderful art that is jazz.


Just Watch Me

Habibah Choudhry

Your disinterest is my protection And my defence is prevention

From all the things that you would do If I liked your attention.

My silence is seen as depression

Yet my speech is somehow aggression. Why am I fighting a battle that wouldn’t be won, Even if all of creation Built a wall from your oppression? Because I am just a possession.

My weakness is explanation A reason for desperation. My being is your inclination To act on your temptation. If I succumb to your decisions I am a coward for no reaction To your tyranny.

Yet, if I stand up for violation, To my rights, And degradation, Of my worth, I ruin your reputation. Because your honour is worth more than my predation, By you, your attackers and...

Exploitation! I cannot escape, The chains of generations in my position.

My retaliation Is an allegation

To your masculinity, A threat to your superiority. If I am the prey Why do you worry so much Of change to your authority?


It’s not like you’d ever promise me The safety of my own body. Countless opportunities

To turn me into yet another horror story Of warriors, slave to your violence: gory. Unable to run, trapped by Your sexist ideology.

“Don’t speak too loud”

“Don’t wear those clothes”

“Don’t fall in love”

Watch me


Cats in the Courtroom: the problem of a fickle judiciary

Those of us familiar with the courtroom drama may think of judges as a sort of trial referee: a stern yet sympathetic authority figure whose sole purpose it is to say ‘hmm…I’ll allow it this once’ with an eyebrow arched in curiosity as the prosecution commits some of the most heinous violations of legal protocol you’ve ever seen (they really did Get Away With Murder in that show). While to some extent this is true (a story all in itself, believe me), judges serve a much more significant, if often overlooked, purpose in the trial process.

The prosecution presses particular charges against the defendant on behalf of the crown; the jury decides whether or not the defendant is guilty of the crime; and if they’re found guilty, the judge decides the sentence the defendant has to serve. This power, however, is pretty contentious. In a system founded on checks and balances, sharing power between the people and the state, establishing a paragon of justice, it’s a little odd that once they’re found guilty, one guy has the power to decide whether they go to jail for 6 months or 6 years. Since judges have unilateral power, within the bounds of statute/precedent limitations, to decide the severity of a sentence – including, in several jurisdictions across the world, the death sentence – the question then becomes: can judges be trusted not to be subjective or arbitrary when passing judgement? And if not, what can be done to overcome this?

This idea is often raised in the context of the question, ‘does it matter what a judge had for lunch?’: the idea being, if the judge had a particularly satisfying lunch, would they be more inclined to give a more lenient sentence and vice versa? Where sentences aren’t mandatory and are instead left to judicial discretion, there is a natural risk that the personal biases of a judge, however slight, may affect their sentencing decisions. But is this a valid concern?

There is a strong argument to suggest that, even with the power a judge has over sentencing, the feelings of the individual ultimately have very little bearing on the eventual sentence due to the system of legal safeguards that constrain this ‘unilateral’ power. The first of these is the sentencing limits: most crimes will carry a minimum and/or maximum sentence, which provides a range in which the judge can operate at their discretion. This, in theory, prevents two convicts from receiving wildly different sentences for the same crime, with judges considering circumstantial factors to make adjustments where necessary. Therefore, even if a judge’s impartiality may be compromised by personal factors, the consequences of this should not be excessively significant because the outcome is essentially the same. Furthermore, judges have the option to remove themselves from the trial if they recognise their impartiality may be compromised. This process, known as judicial recusal, exists to check the legal power of a judge’s opinion by giving them the option to withdraw


from a case they may compromise, and, since judges are renowned for their legal wherewithal, they will more often than not opt to recuse if they suspect they may unfairly influence sentencing. Although some may argue that, since recusal is a personal choice, there is no guarantee that a judge will go through with it, lawyers have the ability to request recusal if they believe a judge is not in a position to make an unbiased ruling. If this is found to be correct, the judge is now expected to recuse themselves, and failing that, the lawyer can file a higher court appeal for judicial disqualification which takes the matter out of the judge’s hands. On the subject of appeal, a judge’s ruling is not necessarily final. If the defense believes a conviction or sentence to be unjust, they have the option to appeal the case through to the higher courts. Ultimately, unless the defendant has some astonishingly bad luck, they will at some point receive an uncompromised second opinion that does its due diligence in trial, maintaining the objectivity required for our system to function. The legal system recognizes the authority of the judge in sentencing, but it is also aware that judges are only human too. It is not oblivious to the ramifications of this level of unilateral control if left unchecked, and so the aforementioned safeguards hypothetically prevent this authority from contravening the justice it sets out to protect.

However, regardless of what the legal system 'sets out' to achieve, does this system actually work in practice? Can something as simple as a dodgy lunch really compromise the broader integrity of justice? Despite the steps taken to minimize the possibility of unfair trial through non-legal factors, the system is far from perfect. Firstly, the range of minimum to maximum sentences can vary wildly and depend largely on non-empirical value judgements. Looking at shoplifting as an example, where stealing over £200 worth of goods can carry a maximum sentence of seven years in custody, whether or not the defendant experiences jail time depends on whether the judge believes them to be 'capable of rehabilitation'. The consequences of imprisonment relative to a non-custodial penalty are unjustly disproportionate, with severe and long-term impacts on social, physical and psychological wellbeing, and the difference between one or the other is solely how charitable a judge is feeling on a certain day. This, too, is where the problem lies with existing safeguards: they underestimate how powerful the most minor of influences can be. Judicial recusal most commonly applies to severe, and most importantly obvious, conflicts of interest, largely pertaining to personal involvement or ideological biases. These are blatant, quantifiable and easily resolved IN THEORY. In reality, the prejudices of a judge may be much harder to prove, harder still to force recusal, and that still doesn't account for the possibility of a judge, on average a sixty-year-old upper middle-class man, forming a dislike of someone who wears a crop top or septum piercing to court. As mentioned previously, the 'essentially negligible' differences between sentences are not so negligible to the lives they impact – an extra year in jail means little on paper, but everything in practice – so both the random and systemic biases of a judge in determining where they fall on the sentencing spectrum expose defendants to massive, undue risk. This also fails to address the strain on the defendant throughout this process: appeals are not a guarantee, especially when the influencing factor is so insignificant that it is entirely overlooked, and even if one is granted, it takes up valuable time and resources to achieve a ruling that should have been given in the first place. There's a domino effect at work here, where the smallest factors can snowball into cataclysmic disasters, and the legal system fails to account for such possibilities.

However, the most dangerous possible consequence of judicial indiscretion is embedded into the very fabric of British law, and could be disastrous if left unaddressed. The UK, like many anglophone countries, operates under a common law system. Unlike civil law, where case rulings are largely determined by legislature, common law depends on case precedent: that is, a judge's ruling has the potential to influence how future cases of similar circumstances will be settled. Herein lies the problem: a questionable decision made by a judge on a certain day in a certain situation goes on to shape how all future cases in that vein will be settled. The power of judges to shape the fabric of the law through their decisions is not a responsibility taken lightly, but this unilateral authority is still dangerously susceptible to human subjectivity, and the more ingrained these rulings become in our legal system, the more difficult they are to extract. When a soggy egg-and-cress sandwich has the capacity to determine that every shoplifter from a low-income community is sentenced to seven years in prison, no matter how unlikely it may seem, it is a sign that serious initiative needs to be taken to eliminate even the possibility of such an outcome.

The question then becomes, when confronted with the problems of an arbitrary judiciary, what can be done to fix it? The first potential solution, to have a fixed sentence for a crime substantiated by quantifiable factors, e.g. the value of goods stolen, has its merits in that it eliminates the possibility of injustice on a case by case basis. However, such a system leaves little to no room for human intervention: circumstances differ, and an impartial judge can determine whether or not a convict is entitled to a more lenient sentence based on their situation. Although this solution promises fairness between cases, it seriously undermines fairness within cases. The second potential solution, to have a panel of multiple judges collaborate on a ruling, carries greater promise: the immediate introduction of a second opinion separate from the appeals process would save resources and guarantee a greater element of impartiality. However, this system would only occupy more immediate resources where they may not be required, and while a collection of different opinions could nullify potential bias, the equal likelihood that two similar opinions collide could only exacerbate the initial problem. A ‘good judge, bad judge’ setup, though seemingly taken from a lost Gilbert and Sullivan musical skit, increases the odds of an impartial ruling, but the defendant facing trial after a canteen catastrophe stands little chance. The significance of the judge in our justice system can be our greatest strength, but it has the potential to be our greatest downfall, and with the system as it is, that human element is impossible to compensate for.

Judges are the foundation of British law. Their expertise, their fair judgement and their uncompromising commitment to the law make them the indispensable glue that binds the system together, ensuring that, no matter the ruling, trials remain free, fair and factual. However, judges are also only human, with human problems and biases. These human problems colliding with the required objectivity of law is a recipe for disaster, with potentially severe and lasting consequences. And yet, without the human element, if we were all exposed to the unfeeling monolith of Law, would we be any better off? Ultimately, judges are arbitrary by nature of their humanity, but how we seek to resolve this without compromising the fundamental humanism of our legal system is a challenge that will take a great deal of examination, understanding and patience to resolve.


Covid, Ukraine, and the ensuing food crisis

“Wars begin when you will, but they do not end when you please”

Putin’s invasion of Ukraine has proven a fitting demonstration of Machiavelli’s aphorism, with the Russian army bogged down in the Donbas, accepting piecemeal progress for increasingly severe casualty-figures. Yet the consequences are not limited to Ukraine, and the destruction and disruption caused by the war has spread worldwide.

For impoverished and starving children in Somalia, Egypt and Sudan – countries which are completely dependent on Russian and Ukrainian wheat imports – the war has generated an unprecedented threat of hunger and famine. As a result, some 143 million people now face severe food insecurity, according to the UN secretary general, after food prices rose by 55% globally. Meanwhile, in the UK, food price rises are forecast to reach 15% [i].

The immediate cause of the accelerating price pressures was the Russian invasion of Ukraine, with the two countries together accounting for roughly 30% of the world's wheat exports as a result of the fertile Eurasian soil they both share. The eruption of brutal conflict between the two nations has thus precipitated a severe contraction in global grain supply, reducing the quantity available on the market and raising the price.

The Russian naval blockade has left 20 million tonnes of grain stuck inside Ukraine, unable to access the traditional sea route through Odessa. Whilst Ukraine could theoretically export grain through its Western border, poor infrastructure and the threat of Russian bombardment make this


infeasible. Irreconcilable differences between Russian negotiators and the West suggest that this grain will never depart successfully.

However, whilst it is easy to focus on the impacts of the war in Ukraine, threats to food security significantly predate the war, with price volatility stemming back to early 2021. Rising transport costs have seen the price of shipping go up by some 400% in the last two years, hampering the ability of food exporting countries to rapidly meet the demand from dependent nations [ii]. At the same time, the rising price of crude oil has strained the agricultural process, given that harvesting and refining procedures require intense use of oil-powered machinery. Global food infrastructure sits in a precarious position in the aftermath of Covid, leaving it woefully underprepared for a severe supply-side shock after the war in Ukraine.

And yet, despite the inability of the global food system to even meet standard levels of demand, rapid economic recovery post-Covid has seen commodity demand expand at an unparalleled rate. Expansionist monetary and fiscal policies by Western governments have seen the Eurozone money supply expand and the American money supply nearly double, contributing to rampant inflationary pressures which have seen consumer prices rise quickly. The reopening of restaurants, hotels and the end of social distancing has also increased the demand for food just as global supply came under the greatest strain.

What, then, might be done?

Given the seeming impossibility of rescuing any large part of the Ukrainian grain, we have been forced to look for alternatives. A first step might be to cease the hoarding of food by wealthy nations, or to end the abuse of food power as a political weapon by food-exporting countries. Recent bans on wheat exports by Vietnam, Kazakhstan, and most recently India have prevented the global market from allocating food resources adequately. Countries like Sri Lanka and Nepal, which import significant portions of their wheat from India, now face food crises while African countries are reeling from the loss of a promised 10 million tonnes of wheat from India [iii]. Ending this policy of food hoarding would allow food supplies to spread throughout the global economy and reach those who need it most.


In its latest report, the Consultative Group on International Agricultural Research has called for new investment into agricultural research and development so as to improve efficiency and productive capacity. In doing so, we might be able to achieve a longstanding state of global food security which would make crises like these a thing of the past. The body also called for an end to those sanctions which "obstruct food and fertilizer trade," by allowing Russia to put its food and fertilizer (of which it is a key exporter) onto the market [iv]. Whilst the geopolitical ramifications of cutting back sanctions might seem severe, in the context of a global food crisis, we may have no choice but to seek compromise and allow agricultural trade to renew after months of instability.

In the long-term, the need for a more secure and environmentally friendly food supply has become increasingly apparent. The wasteful nature of intensive livestock farming, in which animals receive far more calories than they return as meat and other produce, is no longer tenable in a world of increasingly scarce food resources. A study by the Boston Consulting Group found that investment in plant-based meat alternatives was the best form of climate investment, with beef producing up to 30 times more emissions than tofu [v]. Studies by the LSE, in collaboration with the Grantham Research Institute and the Global Green Growth Institute, have lent further support to the need for a strong and stable eco-friendly food infrastructure to tackle current shortages [vi]. Food security is a solvable issue, with the world already producing enough food for 10 billion people. A host of issues have put unprecedented pressure on a system which is unnecessarily vulnerable to shock and disruption, showing the need for urgent reform. International cooperation, combined with the development of a coherent and practical long-term global food strategy, is immediately necessary to avoid food shortage becoming a consistent issue.


How has conflict shaped medicine?

War and conflict have existed for as long as we can remember, as old as history itself, bringing with them both injuries and fatalities, but also innovation. While forms of war and weaponry have evolved and changed over time, one cannot ignore the development of military medicine alongside them, as the two go hand in hand. This article will give an overview of how healthcare for soldiers, both during and after war, has changed over time, and how it has influenced other spheres of medicine.

Military medicine as we know it only started in the 18th century, but evidence in old civilisations can be observed. As many of us have learned from basic history, many of these age-old civilisations were not estranged from war, with tales of empires and conquests being extremely popular. However, the healthcare in these civilisations was far more advanced than you would expect, despite how crude their conflict may seem in comparison to today. A papyrus dating back to 1600 BCE describes techniques used by ancient Egyptians such as cauterisation (burning wounded areas to seal them and prevent excessive bleeding or infection) which were used for many following centuries through various eras.

But the most notable of ancient civilisations is of course the Romans, who made massive strides in military healthcare which influenced the centuries to come and most closely resemble modern military medicine. One of the fundamental threats to health in war is poor conditions and a lack of hygiene. Epidemics would sweep regularly across armies and their camps, effectively becoming as deadly a threat as the physical battle itself. However, the Romans' understanding of sanitation reduced the number of epidemics faced by their troops, in contrast to other armies. The appreciation of sanitation in military medicine was unfortunately lost, but Roman advances never truly died and were rediscovered a millennium later.

Following the downfall of the Roman Empire, medicine regressed while conflict slowly grew. Through the Middle Ages war advanced rapidly, due to the introduction of gunpowder weapons, and medicine struggled to keep pace. Greco-Roman medicinal theories were rediscovered and adapted from the 15th century onwards as warfare advanced, with injuries becoming increasingly fatal with the use of firearms. But it was only the battlefield surgeon Ambroise Paré (1510-1590) who made a real breakthrough: he rediscovered the Roman treatment of using turpentine, an effective wound antiseptic. He also rediscovered the use of ligatures to tie off bleeding vessels, stepping beyond cauterisation. Paré was such an influence on medicine that he published "The Method of Curing Wounds Caused by Arquebus and Firearms" in 1545, a publication which was referenced and used for centuries afterwards. Although limited by old theories, conflict initiated both the development and adaptation of practices in response to the constantly evolving severity of threats, outlining the key point: conflict creates a demand for medical progression.

Healthcare progression does not only entail new methods of treatment, but also the introduction of new tools and devices. We can see the development of AI in medicine as the modern equivalent, but glancing back in history we can see the origin of basic tools we take for granted. As previously mentioned, the introduction of firearms changed warfare and complicated medicine by creating new challenges for surgeons. Although early guns and rifles could not fire numerous rounds in quick succession, nor were they perfectly accurate, they were still able to pierce metal armour and had the capability of being very deadly. The entry of the musket ball was responsible for most of the damage, reported as organ damage, heavy bleeding, shattered bones or immediate death. In the case of injury, the bullet had to be removed first, before any ligatures or cauterisation could be applied to treat the physical damage. As a result, several tools, such as the terebellum (a bullet-extractor screw) or forceps, were invented to remove objects like bullets from the body. Tools like the forceps are now commonplace in contemporary surgery.

A rather more curious invention was designed by Napoleon's chief surgeon. Until the 18th century, combat officers were not easily persuaded that it was worth risking healthy soldiers to retrieve wounded ones on the battlefield. Often the wounded were left to lie on the field and were only treated once the fighting had stopped. Dominique Larrey introduced his "brancardiers", or stretcher bearers, who were able to transport the wounded from the battlefield to be treated. But his more revolutionary idea was the "flying ambulance." The flying ambulance was a horse-drawn vehicle which functioned as a mobile treatment centre with medical staff, equipment, and a padded area on which to lay the wounded. The invention is a giant stepping stone towards our modern ambulances, with the ambulance wagon acting as a field station in the war zone, treating the wounded on site and then transporting the more gravely injured to field hospitals. While conflict may not always directly create the tools or services we see today, throughout history it has created challenges which open up the space for creative solutions. These solutions then form the basis of what is slowly adapted into modern public healthcare.

In everything discussed so far, solutions to the problems caused by conflict never really considered the fundamental science behind treating infections. During World War One, the various diseases plaguing soldiers were recognised as a problem and linked to ongoing research projects; this research was recognised as aiding the war effort and was therefore invested in. Laboratories in the army medical school were essential for tackling tetanus and typhoid during WW1. Sir Almroth Wright was a professor of pathology who carried out research into typhoid. Typhoid was a significant cause of fatality among soldiers – in the South African War around 6,000 troops died from weapon injuries while roughly 16,000 died from disease – so lessons from past practices needed to be learnt and solutions devised. Armies invested in research into microbiology, allowing Wright to develop a typhoid vaccine. The British army launched a public health campaign, leading to 90% of troops being vaccinated by 1916 and making WW1 the first war in which fewer soldiers died from disease than from wounds caused by the enemy. The mistakes of conflict offer a lot to learn from and lead to investment in existing science, as we can see here. The development of the typhoid vaccine would never have come so quickly if not for conflict, and it allowed several fields of research and biomedical science to grow, eventually informing and continually creating the modern treatments we rely on today.

An often-overlooked effect of conflict on medicine is the increased employment of various medical professionals and the creation of new roles. War has in particular created previously unfathomable opportunities for women in healthcare. During the First World War, most men were sent off to fight and, with the guaranteed tidal wave of casualties, there was a high demand for healthcare. This resulted in the War Office calling on women to drive ambulances and on female surgeons to perform surgery both in the war zone and at home. Many women had never before been given the chance to prove their medical competency, but the shift in opportunities caused by WW1 was instrumental in changing opinions and is arguably the start of the monumental introduction of women into senior roles in medicine.

Finally, conflict has not only impacted physical maladies, but has also raised enormous awareness of mental health and therapeutic support. War is horrific, and throughout a soldier's life they will have had many traumatic experiences which have led to psychological and psychiatric issues such as shell shock, battle fatigue and, most well known, PTSD. All these disorders were poorly understood at first and highly stigmatised, but their prominence in conflict led to their investigation. Post-traumatic stress disorder was a term coined by the American Psychiatric Association in 1980 to explain the mental effects of war and, eventually, trauma in general. Now PTSD covers mental trauma of all kinds, including accidents or even natural disasters. Conflict drew awareness to these disorders, and indirectly to other mental health issues too, enabling health professionals and the public to understand the need for therapeutic support and to be more accepting of mental health.

It is an understatement to say that conflict and war are symbolic of suffering. Yet it is almost ironic how innovation in death leads to the discovery of how to save lives. Innovations in health are often responses to conflict, as its evolving challenges require a stream of resourceful and creative thinking, which creates the perfect culture for rapid progression. These creations and discoveries are often adapted to fit previously unrealised purposes, leading only to further advancement. Conflict has also created opportunities and progress in the diversity within medicine, whether that be the roles of women or awareness of mental health. It is undeniable that conflict has carved medicine into what it is today.


Vengeful violence – to what extent is it justified?

In fifth-century Athens, anger was viewed as an innate response to being wronged or disrespected. Revenge relieved the wronged party of their grievance and was essential for the preservation of their reputation and honour. The Athenians took pride in extending their agendas beyond "private revenge"; however, they left the perpetrator of such violence in the hands of the legal system and viewed this punishment as a form of vengeance in itself for the victim. However, this moral standard they set for their society was not sustained when it came to entertainment. Instead, they indulged in the fantasy world of violence, just as a modern audience might enjoy the 'Saw' and 'The Purge' franchises whilst not openly advocating the justification of violence.

In his ‘Poetics’, Aristotle states that in order to create a great tragedy, fear and pity must be evoked in the audience, leading to a catharsis of such emotions, often achieved by the suffering of the characters on stage. One of the greatest ways ancient playwrights create pity in tragedy is in the subversion of the harmony of the household through an act of intra-familial violence.

In the words of Dover, 'an Athenian felt that his first duty was to his parents, his second to his kinsmen, and his third to his friends and benefactors; after that, in descending order, to his fellow citizens, to citizens of other Greek states, to barbaroi and to slaves'. This conveys that the worst crime one can commit is an act of violence or murder against one's immediate family because, as William Allan nicely puts it, 'it violates the closest bonds of allegiance', and the inclusion of such kin-killing elicits a powerful emotional reaction in the audience. Usually, such a response is only provoked after a family member has been wronged by another, so the need to achieve revenge is a means of personal justice and, as aforementioned, a means of protecting one's reputation and honour. However, where do we draw the line between sheer brutality and the delivery of justice?

Euripides' Bacchae stages Dionysus' revenge for King Pentheus of Thebes' hubris and refusal to accept his worship, through the mistaking of a friend for a foe. Failing to recognise her son Pentheus, Agave tears his limbs apart with her bare hands, believing him to be prey. The sound of Pentheus' harrowing cries in an attempt to provoke Agave's recognition of him is full of pathos and suffering, arousing a great sense of sympathy in the audience. Here, a personal form of revenge by Dionysus is at work, as he carries out the most satisfying punishment: rejection by a family member, just as Dionysus himself had both his mortal descent (through Agave, Ino and Autonoe) and his divine descent (from Zeus) denied. Due to his status as a god, we are invited to permit any punishment Dionysus exacts; gods are typically regarded as having the moral high ground, yet they are not held to the same moral standards as mortals, allowing them freedom to act as they please. However, this only creates more sympathy for Agave in particular, as she is completely innocent of the destruction of her family. Therefore, from Bacchae we can observe that in Greek tragedy there are many injustices which we are left only to challenge.

Kin-killing is deployed elsewhere in many of Euripides' extant plays, and similarly to Bacchae, his Medea involves a tragic filicide; here, however, the perpetrator of such a heinous crime is not a god (Dionysus), but a foreign woman. Allan observes that 'women's vengeance is always related to their status within the family unit as when their position within the household is being undermined'. This is true for Medea, as her abandonment by the shameful, oath-breaking, adulterous liar Jason drives her revenge plot, acting as a consequence of this injustice. However, this is not to say that Medea is completely free of blame for her situation, nor was it the adultery alone that sparked her revenge. Another huge contributing factor to her vengeance is being laughed at by her enemies; a Greek audience would understand the significance of this, as they would recognise that Medea's honour and reputation have been subverted (although her status as a foreign woman makes us question whether the Greek audience would hold her to their own standards). As for her share of the blame: firstly, she murders her own brother, betraying her father, in order to escape with Jason, and so cuts herself off from her birth family. She is also aware that in killing her sons she will suffer alongside Jason, evident in her hesitation as she holds a contest with herself to convince herself to perform the act, and yet after all of this she still follows through. From Dover's earlier comment that protecting family is placed above all else, the audience perhaps would not traditionally take the side of Medea, as she turns this concept of household harmony on its head. Nonetheless, because of the unrealism and unrelatability of the tragedy, the audience are invited to enjoy watching the sufferings of Medea and of the wider society as a result of her decision to commit filicide, without necessarily justifying her actions completely. The main conflict arising in Euripides' Medea is the constant struggle to decide whose side to take, as there is sympathy for both parties: innocent children and an abandoned foreign woman.

Vengeful acts are not limited to ancient Greek tragedy; in our earliest sources of Greek literature, the Homeric epics the Iliad and the Odyssey, revenge is a key premise. Achilles breaks his self-imposed vow to abstain from fighting in order to avenge his friend Patroclus' death by not only killing but also mutilating Hector. In the Odyssey, the slaughter of the suitors reasserts Odysseus' position as the aner, the man, of his household upon his return to Ithaca, and delivers justice for the suitors' subversion of hospitality: drinking all of his wine, eating all of his food, overstaying their welcome, and sleeping with his maidservants. In both instances, it could be considered that love and protectiveness over one's friends and loved ones, rather than the victims themselves, is the motivator for such acts of violence. Moreover, because the suitors and Hector deserved to die, or in other words because their deaths were morally acceptable as there was sufficient reason for such punishment, violence here is justifiable.

The topic of vengeance and violence in ancient literature, particularly ancient Greek tragedy, has been widely discussed by scholars with regard to its reception by both modern and ancient audiences, as well as the extent to which one can justify acts of murder driven by revenge. However, deserved or not, the act of murder itself will always remain tragic.


Depp v. Heard; Courtroom Case or Societal Struggle?

Nayat Menon

The Johnny Depp and Amber Heard defamation case began on April 11th 2022 and concluded on June 1st 2022 with every moment scrutinised and dissected by people across the world. As the case progressed, online articles and social media posts spoke volumes about the support Johnny Depp was receiving. However, the case has brought to light two very important existing problems that I’m going to elaborate on in this article.

The first of these issues is toxic masculinity. Toxic masculinity is defined as “a set of attitudes and ways of behaving stereotypically associated with or expected of men, regarded as having a negative impact on men and on society as a whole.”

According to Healthline, traits of toxic masculinity include themes of:

· Mental and physical toughness

· Aggression

· Stoicism

· Heterosexism

· Self-sufficiency

· Emotional insensitivity

To date, men have often been forced to repress their emotions in order to act in a specific way that coincides with societal expectations and conforms to traditional gender roles. We come across this in our everyday lives, but it is most commonly dismissed because of how frequently it occurs; "man up" has been a phrase used for decades to the point at which it's often overlooked, whereas in reality it implies that one can be more or less of a man based on their behaviour, which is fundamentally wrong. Similarly, "no homo" has more recently become a means of justifying any affection from one male to a male friend, since the societal standard for masculinity requires attraction to a cisgender, straight woman. Anything that could be interpreted differently comes at the risk of threatening one's masculinity. We as a society have become desensitised to these phrases just because they're commonplace in our everyday lives. As a result, men often do not seek the mental help that they require so that they don't appear 'unmanly'. For decades now, 75% of suicides in the UK have been men, and suicide is the biggest killer of men under the age of 50. However, only 36% of referrals to NHS talking therapies are for men. It is an ongoing issue that receives too little focus, and it is one that must see immediate change.

During the defamation case, an audio recording was played as evidence against Amber Heard. In the recording, she clearly said "Tell people it was a fair fight and see what the jury and judge think. Tell the world, Johnny. Tell them, 'I, Johnny Depp, I'm a victim, too, of domestic violence, and it was a fair fight,' and see if people believe or side with you." The words speak for themselves. It could be argued that toxic masculinity has made it easier for female perpetrators of domestic abuse to present themselves as victims, placing their trust in our social archetype which automatically links violence and aggression with typically masculine characteristics. However, by openly expressing his experiences of domestic abuse in a high-profile case such as this one, regardless of their validity, Johnny Depp has broken apart the generalised expectation of what a "real victim" looks like. He has demolished the wall that previously prevented male victims from sharing their experiences of domestic abuse, providing them with the opportunity to come forward and express themselves freely, without the pressure of societal standards looming over them. The surge of online support for Depp has undoubtedly further boosted the confidence of male domestic abuse victims across the world and reinforced the idea that they will no longer be frowned upon or discriminated against based on gender norms. But how can we further support this change? Clare's Law, also known as the Domestic Violence Disclosure Scheme, allows anyone to ask the police for information about a partner in order to protect themselves from the risk of potential abuse. This enables people to feel more secure in their relationship and is extremely beneficial, but could the name "Clare's Law" be hindering male victims from utilising the service, since it can be seen as targeted towards women? Should policymakers adjust this to ensure it's more gender-neutral?

Although this case benefits male victims who are suffering as a result of toxic masculinity, it is also, counter-intuitively, being used as a means of worsening misogyny. This is the second prominent issue I'd like to highlight. On the one hand, there has been overwhelming media support for Johnny Depp which, as previously stated, can have a considerable positive impact on society; on the other hand, the excessive negative remarks made against Amber Heard online have caused misogynists to become even more misogynistic, exposing their apparent hatred and distrust towards women in general. It seems that the significant outburst of support for Johnny Depp has provided a means of blanketing blatant misogyny, since there is currently a blurred line between unfeigned support for Depp and support which stems solely from the urge to side against a woman. Because the majority of domestic abuse cases originate from women's claims against men, misogynists have seized the opportunity, especially after Depp's win, to make unsubstantiated claims about women. There has been an increase in online posts and articles which accuse women of constantly making false allegations in order to gain financially and to lower the man's status or reputation in the hope that society will believe them. According to the Office for National Statistics (ONS), 73% of domestic abuse victims in 2021 were female. Of course, this figure only accounts for those who opened up about their experience, but since the outcome of this case, there is cause for concern that we might regress to a time when matters involving female victims of domestic violence are trivialised and almost disregarded by a large proportion of the population. As a result of Johnny Depp winning this case, there is a chance the 'Me Too' movement will also ultimately be undermined in a similar manner and for similar reasons. All in all, the mountain of hate Amber Heard has received from misogynists will deter women who desperately require help from seeking it.

In the long term, the ideal outcome we hope to see from the Johnny Depp and Amber Heard case is that men are better supported and encouraged to open up about their experiences while, simultaneously, women are not discouraged from doing the same. The huge amount of media scrutiny during this case has brought to light the impact of social media on public opinion and societal views in general. It has also shown the massive influence social media can have over how issues such as domestic violence are perceived, creating the opportunity for both positive and negative change.
