Habs Extended Research Book 2024


2024
Habs Diploma Extended Research Project Book

Welcome to the 2024 collection of prize-winning Extended Research Projects.

This independent study programme gives students in their Lower Sixth year the opportunity to study a question of their choosing with guidance from a supervisor in their field. There are few restrictions on where their interest and passion can take them, and the process culminates in the submission of either a 3,000 to 4,000 word written piece or a production project, such as a film or computer programme, with supporting materials. All will have engaged in their own research and presented their views, positions, intentions and/or judgements in an academic manner. The project seeks to encourage them to be intellectually ambitious alongside developing the creativity, independent study skills and broader scholastic attributes which will allow them to thrive at the finest institutions of higher learning.

After a rigorous process of external marking and viva voce interviews, the prize-winning projects presented here represent the very best of the over 250 unique submissions made by students across the Haberdashers’ Elstree Schools. They showcase the remarkable application, courage and ambition of all of the students in completing such exceptional pieces of independent work alongside their A Level subjects and many co-curricular commitments.

We are immensely proud of the achievements of our students; the depth and range of the projects they have completed is inspiring and we are excited to share them with you.


Contents

Creative Faculty
1ST PLACE FRASER HAUSER (CREATIVE WRITING) PAGE 9
2ND PLACE BENJY EZRA (ART) PAGE 39
3RD PLACE JOSHUA JONAS (MUSIC) PAGE 57

STEM Faculty
1ST PLACE ARAN ASOKAN (CHEMISTRY) PAGE 61
2ND PLACE ANNABEL ROOM (MEDICINE) PAGE 77
3RD PLACE AARYAN DOSHI (PHYSICS) PAGE 87
HIGHLY COMMENDED APARNA SHANKAR (BIOLOGY) PAGE 101
HIGHLY COMMENDED AYZA AFFAN (COMPUTER SCIENCE) PAGE 109
HIGHLY COMMENDED DANIEL-SAMUEL BAYVEL ZAYATS (ENGINEERING) PAGE 121

Humanities and Social Sciences Faculty
1ST PLACE ABIGAIL SLEEP (CLASSICS) PAGE 141
2ND PLACE JANA LAI (PSYCHOLOGY) PAGE 151
3RD PLACE RAYAAN AHMED (ECONOMICS) PAGE 161
HIGHLY COMMENDED SOPHIE GRAHAM (ENGLISH) PAGE 171
HIGHLY COMMENDED ZACK FECHER (POLITICS) PAGE 195
HIGHLY COMMENDED ARYAN JANJALE (PHILOSOPHY) PAGE 209


Creative Faculty


Fraser Hauser

CREATIVE WRITING

It is hard to say what The Artist’s Parable is truly ‘about,’ and indeed I fear it would defeat the point of the work to do so. Rather, I believe the work was an attempt – failed, mind you – to put to rest some questions about the nature of art that had been lingering in my mind for a long time. These are questions such as: Can art be truly genuine? Does this matter? And what should art be? It is for this reason that the work is multi-disciplinary, for I wanted to target these questions from different angles; personally, I believe the musical component of this work to be inseparable from the written element. Either way, it is a work I’m very glad to have produced and one that I feel greatly supplemented my study of English, History and Music at A level. Plus, I believe The Artist’s Parable set my application to study English literature at university firmly on course, and I’m grateful to have had the opportunity to submit this work as my ERP.

The Artist’s Parable

‘All art is at once surface and symbol.

Those who go beneath the surface do so at their own peril.’

- Oscar Wilde, Preface to ‘The Picture of Dorian Gray’.

PROSE, he thought, is like a glass of wine; sipped at intervals by men in suits from half carafes at au Babylone in Paris; guzzled from cartons by the accepting middle-aged at children’s parties; or necked, red, thick and cheap from bottles by teenagers on car seats at night. It can be formal and scientific, or acrid and intoxicating; he pictured bottles stacked neatly on the sommelier’s rack, and then smashed or vomit-spewed at the Plaza de Toros de Las Ventas, where he had felt so very alienated. Why then, he thought, must books be picked to pieces? Why must we enjoy them too much or not at all? They, like wine, have lost their regal quality, and are ‘available at half price!’ – he read from a bookshop window. But no, he thought, they haven’t lost it entirely; he remembered how a teacher had told him to shy away from pretty covers and how that had stopped him reading. He remembered how a friend had mocked him for choosing the bottle with a pretty label and how he hadn’t allowed himself to enjoy it. This was, he concluded, no world to birth his work into. Oh! how he would hate to have his wine left broken at a bullring or to have it logged and documented like an artifact at a museum only to be spat out into a plastic cup. He cringed to picture his novel wrung out on the academic’s table until devoid of all meaning, its spine weary with pencil annotations. Placed unopened on a bedroom shelf to bolster its owner’s ego or perhaps their chances in bed. ‘Useless!’ he said aloud, and vowed never to be a writer.

The artist left the library at dusk, and London was cold and dry-blue with streetlights as students pushed past him to get to the entrance. He held a thin slip of paper - a receipt from Trattoria da Romano near Venice. Cradled it, damp and creased with April rain, but that was not important. All he needed was the address. Scribbled frantically on the back, it read: ‘Chalk bridge, Towpath Rd, river lee navigation, The Wyrd, 11ish?’

That was where he needed to be.

MUSIC, he continued, is much like tobacco. He readjusted his clarinet case until it sat flush with his back, and recalled how he had always played, and how difficult it would be to stop. For him, music had become not so much a hobby or career; it was an inveterate propensity that – though he would never admit it – somewhat defined his character. He loved music. He loved it very much. “But I’m no musician!” he said (out loud, though no one heard). He, unlike everyone else, wore no headphones; he had grown to despise how visual, how aesthetic music seemed to be. Unlike Prose, he thought, one’s eyes are free to wander unhindered with music, and even if one shuts them their mind drifts to personal associations – perversions even – that they might have with it. And then we dramatise! We make our lives like movies, lives that are not ours, not anyone’s, the lives of those one aspires to be. Smoking is cool, he thought, but the chic brings wrinkles, the buzz, consequences. Unlike Prose, so much of music must be built upon falsifications; music is, like smoking, too good for this world. Those he knew who wrote music the best – ‘music heard so deeply that it is not heard at all, but you are the music while the music lasts 1;’ music that only poetry can describe – were those who, stuffing pipes at piano stools or disappearing behind a stage door for an origami-grade rolled cigarette, could falsify with integrity, and make movies of us all. Indeed, they seemed so sure of their music that one would struggle to question their authenticity; it hung around their person, exuded their character with an unfaultable accuracy. You could smell and see - sense - music on them, wherever they were and were not; on their clothes, their face, and through their half glasses left idle on a pub table. Whether music was doing them good was to them neither here nor there; they could not do with the half-glass alone, and would not be convinced otherwise. How easy for them it must be! he thought. Their work, that was so true to themselves and yet fit so seamlessly into others’ lives, was to him a mystery of creation; one must simply be good at lying, or at convincing themselves they are not, he concluded. He plugged what he could of the address into his phone then proceeded as required, pausing only briefly to light a cigarette in the dull warmth of Smithfield, long after hours. Sounds of people moving hung in the air: the shutting of pub doors; the wavering hum of trains beneath; an argument over money or perhaps the request for a lighter. It might have seemed quite harmonious to a quiet mind.


The artist’s, however, was full of fragments. He heard distinctly a small moment from a night in Dublin months ago, and there too was a looping scrap from a score 2 he had glanced at before leaving the library that he would surely return to. But beyond these were hints at things more profound, perhaps the ineffable power he had felt (and still felt), holding, hearing that half-silence - both in and out of time - before the applause of the night before; a fleeting power which faded and died unnoticed at Exmouth market.

POETRY then, he asserted, must be like drugs; or perhaps a strong spirit sent for when the wine was done, and we simply couldn’t wait. It is like a psychedelic: intoxicating to the point such that it cannot be explained; powerful such that it can make or break, at least define the mind; and alien, such as to change someone forever. It is an attempt to explain the unexplainable; to remember a forgotten melody from a mere fragment 3. To pick up the pieces. That is the poet’s job. To assemble a long-smashed vase, from fragments swept away, or to reconstruct a skeleton from bones buried long ago. No, he thought, rather to construct from those old components a new vase, a new skeleton, and then to make it breathe; to find new meaning in their intricacies. Intricacies, that scared him. That seemed out of reach. Truthfully, he had not the mind to see them, nor to express them effectively; to seem free within a sonnet or grounded in free verse; to interpret perceptively the most innocent of things.

2 ‘Score,’ here refers to a sheet music score.

3 This is not his idea – it was written in a letter from Philip Larkin to J.B. Sutton, which the artist had read and forgotten.

The psychedelic had chewed him up and spat him out, and now he could not - would not - trust his eyes. What appeared, then disappeared, before him often had no bearing on others; his mind was scarred. Perverted. Then again, it always had been.

The artist arrived at King’s Cross in the final, lambent light of day, glad he had walked there. Minutes before, his mind had briefly fallen silent, and he had seen a picture in a woman, sitting on a curb across the Pentonville Road in the dry-blue lamplight, her face reflected in the freshly fallen tarn water of a puddle before her. For a moment, it had been clear in his head, though he would have to wait to see if it had been made clear in execution.

Deep below the city, the artist considered his chances. Perhaps he could in any case simply hang it on his wall at home or put it on the internet. That would be enough, but only if it was good. Briefly, it seemed as if the whole world rested upon this single photograph, and not the score in his bag nor the address on the receipt or any coherent amalgamation of his past beyond the woman at the curb.

Upon emerging back into the world however, the photograph was quite forgotten, lost in an invasive, periphrastic line of thought provoked by some poem the man opposite him on the tube had been reading. He stepped outside the station, unsure of where he was going.

ART, or at least what he saw in galleries, was more difficult to place. He thought it much like lust or longing; emotions often brought on quickly, without a clear cause; emotions that, until experienced, are a mystery one feels they understand. As a child, he had been bored by the Beatles in his father’s car; how could they sing song after song about something seemingly so simple and dull? so superficial - gross even. It was not until he thought himself assailed by love that he saw the songs for what they were or seemed to be, and pushed the songs on others, blithe in their ignorance! With love in his system, the songs made sense. Perhaps then, Art is like a cocktail, deceptive in strength, that to its drinker is a risk they are used to and yet unaware of. Can I hold it together this evening after a couple Manhattans? he remarked to himself rhetorically and laughed. Often, he would skip the odd meal to make drunkenness cheaper; as, when he was younger, he would when supposedly love-sick. But Hemingway had been right; art is more beautiful when hungry, 4 and drunkenness more intense when empty. Indeed, it was not until he walked, nauseated, 5 empty-bellied and adult through the Giardini della Biennale - at an hour far earlier than one would expect – that he, upon sitting down with his family at a restaurant in Burano later that day, felt he could (and should) explain the art he had seen with great clarity and scholarship, whether or not they wished to hear it. So much for them; he ate and drank heartily, and paid when it was over.

By now, the sun had long set, and save for the odd drunk or curious walker the artist was alone. It was a Tuesday, and the city had gone to sleep. “Perhaps

4 Ernest Hemingway – A Moveable Feast.

5 ‘Emotional,’ perhaps.

‘Perhaps I could be an artist,’ he thought. He found the nearest bench and sat down; closed his eyes briefly, then forced them open. Beyond him lay a narrow, effulgent path of white toward the crescent moon in the sky or perhaps the aircraft light of an idle crane that parted the treacle-black water of the Walthamstow wetlands. A plane passed slowly, as if hovering overhead, and a man approached the artist slowly.

‘Surely, it must be him! And why else would he be out here?’ the man muttered beneath his breath. He had hidden his face from the artist behind a book of poetry on the tube - had left the station through a different exit - but was now convinced that he and the artist were heading to the same place, which (to say the least) puzzled him slightly. But he had to say something.

On the bench, the artist tensed up slightly as the man approached, his face concealed by darkness; it had been the aircraft light, which had turned off without warning moments before. So much for that. In between the man’s footsteps, the artist heard clearly the dissonant hum, the din of the London beyond that he had long drowned out. Patiently, he sat, waiting for the man to pass; but that did not happen.

‘Sorry, is it Adam?’ The man asked assertively.

Film, he confessed, was something he knew little about. It was to him rather like beer; cheap, and often comforting provided one enjoys it at home, perhaps in packs of four, six or twelve; enough to draw one back, and to balance the books at the end of the month. Easy, and inconsequential; background noise with a partner, for when music was too obtrusive. A comfort. ‘There is no need to venture out;’ to invite extra cost atop the fixed monthly fare for something hoppier that requires more attention. £7 two-thirds pints, hazy in twelve-percent opaqueness, eyes squinting in an indie cinema.

Deceptive prices. Drinking noise. The preference of the bearded IPA nuts, Amber Leaf in hand. And it was either this, or Haribo-vomit seats to view the domestic blockbuster of tomorrow; Budweiser-float with a paper straw. Crying children. Johnny Depp’s redemption story. He simply could not get excited. Film could never be a destination; he would simply wait, and pay for it to come to him.

‘Um, Yes?’ The artist replied. He pretended not to recognise his inquirer, but in fact recalled his face even in the dark. The man’s figure, now rotund and plump, had changed (grown) since they had last met, and a leather portmanteau hung from his left shoulder. The artist placed him in his early fifties.

‘I’m not sure if you’ll remember me, but I believe we have worked together; Einstein on the Beach? 6 2016? You conducted, I’m sure!’

‘Oh… Yes, I did. Sorry, what’s the name?’

‘It’s David, don’t you remember? I played the judge and doubled as a dancer, believe it or not!’

6 An Opera written in 1975 by Philip Glass and Robert Wilson that has no single coherent narrative.

‘I um-’

‘Are you heading to the Wyrd?’

David, the opera singer, spoke with a certain pomp which the artist found deeply irritating. That - unlike his figure - hadn’t changed.

‘Um, err, yes although I’m not really sure where it is… Katherine? gave me this address but it got smudged in the rain haha… and lea valley navigation (he showed David the receipt) doesn’t quite cut it…’ he laughed again.

‘Yes, no that isn’t ideal, but I’ll show you where it is! It isn’t far… but what brings you here, may I ask?’

‘I err, just liked Katherine’s work and happened to bump into her on a train; she then invited me here.’

‘Huh; I’d have thought you were too high brow for the Wyrd myself. It’s not much of a destination, more of a hang-out, and I definitely wouldn’t call it a party. But I should not be too negative; we do have fun! But come, we’re late already.’

The opera singer set off down the narrow path at quite a speed and slipped, heavy footed, into the near-pitch black.

2.

It did not surprise the artist that the Wyrd was in fact a boat, although other aspects of it most certainly did. Docked to the canal’s edge by a loose cleat hitch, its long, thin hull stretched out diagonally across the water like a forgotten Ever Given,7 so low the artist suspected she had grounded out.

“That’s well parked,” The artist remarked. David the opera singer chuckled briefly.

As they drew closer, the Wyrd’s colour changed distinctly from an orange-yellow to a darkish blue that had been obscured by lights; lights from the industrial park beyond the metal fence of the far bank, that mixed with the dull, stagnant green of the water below. The artist noticed too a strange obelisk-like structure on the boat’s roof that was revealed to be a small, minaret-style chimney puffing out a thin line of smoke.

‘I’ll just send them a message to let them know we’re here; one moment.’

The artist nodded and turned toward the metal fence; ‘TRESPASSING STRICTLY FORBIDDEN,’ a sign read. He took a few steps back. Looked down to the stern of the boat. Beyond the central windows – which, clouded by condensation, offered no view inside – a thin border of white light seeped through what he presumed to be a blackout curtain, and illuminated briefly a small wooden coracle that sat angularly in the water and was tied to a loop on the boat’s roof.

‘Strange,’ he said aloud.

7 One of the largest container ships in the world, which famously blocked the Suez Canal for a six-day period in March 2021.


‘OK they know now, should be -,’

The Wyrd’s front hatch flew open and smashed against its hinges with an idle thud.

‘Oop, shit! Sorry David, is Adam here?’ Katherine popped out from the hatch much like a jack-in-the-box and scanned the canal path.

‘Yep I’m -,’ The artist began.

‘Oh thERE he is! You two had best come in; I think the weather’s about to turn.’

Adam stood, frozen for a moment.

‘After you,’ said David, gesturing toward the boat.

Katherine ducked back into the small, red-tinted room she had emerged from and began fiddling with a series of locks on the inner door while the artist climbed in behind her. She wore a pristine set of navy coveralls and a joint rested against her ear.

‘I can’t believe I’m doing this,’ the artist thought, standing huddled in a darkroom, the opera singer’s belly protruding uncomfortably into his back. The air was thick and he felt his suit cling to sweat.

‘Sorry, this is quite the operation; it’s a soundproof door you see. Just one more. Oh, and I will warn you Adam, it won’t be what you expected in there.’ Katherine placed her set of keys in an old, empty can of film and then placed that on a small shelf. ‘See for yourself.’ Slowly, she pushed the door open with her right shoulder, and slid off to one side.

She was right.

It sounded like a rainforest. Like the start of Nunu 8. Various butterflies and moths scrapped over the glow-worm hanging lightbulbs then gave up and came for one’s sweat. They had escaped from the terrariums, of which there were at least six; each between four and five feet tall, convex in shape and placed atop shallow plinths. At the far wall, a man sat reading on a large wing chair placed upon a dais, and two further terrariums stood in alcoves on either side of him, as if marking a throne. He did not look up and kept his face concealed.

Have I found Kurtz?! The artist thought. Willard had to look a bit harder.

He made to speak, but his throat and eyes began to burn for the stench on board was inescapable. It oozed from the very walls; from the innumerable nosegays and bouquets of freesia, daphnes, amaryllises, plumerias – proteas even – that lined the outer wainscot of the cabin, and much of the border between walls and ceiling; from the two Mabkharas burning Bakhoor by the entrance; from a cannabis pipe atop a spinet piano. If one were to cough, they would never stop - but Adam couldn’t help it. He stumbled toward the port wall, dropped his bag and felt a glass press against his lips.

‘Fucking hell! So violent! Drink this, it’ll help; see, that door is a sort of airlock too.’ Katherine’s voice was clear and comforting, and he felt the water soothe his throat. She gestured to the man that the guest had arrived.

8 A track by Mira Calix, released 2003, composed largely of insect-noise samples.


‘Fuck me!’ the artist spluttered between sips. ‘That chimney should let a bit more out!’

He poured some of the water onto a handkerchief and cleared his eyes of onion-tears. Katherine and David, both smirking slightly, had sat down on a small sofa opposite where he was standing, and were pouring glasses of wine. He smirked back, though mainly at their contrary appearances; whilst she was slight, elegant and quite beautiful, the sepia-tinged, frowsty lamplight aboard the Wyrd did David no favours in appearance; he seemed a chubby infant, with sweaty stubble and a cigar.

‘Ah, I see my wife has helped you,’ Kurtz’s voice bellowed from across the room. ‘She’s awfully good at it these days. Most respond like you did, sometimes worse! But I like it that way.’ The man rose whilst speaking, crossed the tessellated slate floor and thrust his hand out toward Adam, who took it. His face was furrowed, his hair grey, and his eyes were deeply set and jaundiced. However, he still retained an element of youth. ‘It’s Aitzaz,’ he said, ‘and I know who you are; it is a great honour.’

‘Aitzaz,’ the artist replied. ‘Good to meet you.’ He needed a cigarette but had left it too long to ask if he could smoke one; the man continued: ‘I am most pleased that you came here, though due to my wife’s work not mine; I hear you are a fan of it.’

‘Yes, I think Katherine’s novels are brilliant.’ She sat lighting the joint she had prepared and did not hear him. ‘Very creative. Eh, um; sorry can I smoke a cigarette in here?’

‘I’d rather you didn’t,’ Katherine butted in, almost in unison with David, whose Brick House Churchill gave off thick, gooey smoke and whose shirt was stained red with Chateau Giscours. ‘No, not in here!’

‘Ridiculous!’ Aitzaz asserted, directly toward his wife; he was the shorter of the pair. ‘Be my guest.’ He produced a pack of Dunhill silvers from his pink, collared shirt pocket and held them right up to the artist’s mouth; all the text on the carton was in Arabic, and Adam took one with his hand and lit it tentatively. ‘Do not flatter her too much!’ Aitzaz continued. He laughed, but Katherine did not. ‘The novels are successful, but such high praise is unhealthy.’

‘Oh, I think you should give people praise every once in a while. Especially when they deserve it. Otherwise it can all get too much, and you get too hard on yourself.’ Adam glanced back at Katherine. She had covered her mouth and nose with her coveralls and was hiding behind them, poking her head out occasionally for brief pulls on the joint or sips of her wine. She said nothing and put on an N95 mask.

‘You know, Adam, I have met you once before; you may not remember.’ He strolled back toward his throne, but paused and stood on the centrally positioned, damasked rug that stank of mould. ‘Please, sit down.’ He gestured toward a second sofa and lit a cigarette for himself.

‘Thanks.’ Adam did not recall having ever met this man but was relieved to finally be offered a seat.


‘It was two, three years ago I believe. I had been commissioned as the artist for the Lebanese pavilion at the Venice Biennale and was in my prime regarding both the reception of my work as well as my prolificacy. I had created a multi-disciplinary work of great scale that was re-constructed in the Giardini by a highly paid team of foreign labourers with assistance from some of my trusted artist friends who had European or American passports. And this was mostly funded by my family for the money provided to me by arts funds was not enough. But that is not important. I was a very different man then, and I remember feeling as if no one could possibly top the work I had created when I arrived on opening day, but then I walked around! And I recall you distinctly directing a group of musicians in the British pavilion and that I found it rather bewildering but excellent. Improvised, I believe. We talked briefly afterwards, but you seemed slightly subdued and claimed the performance had been no good. Either way, you were not too keen on praise then my friend. But enough! What is it you are doing these days? For I am most interested. And will you take anything? A drink perhaps, or something stronger?’

Adam sat bolt upright, and was only slightly short of mortified, for he did not remember this meeting in the slightest, and it had not been his artwork. He had merely coordinated a small musical element - blagged it, drunk - and had left before lunchtime.

‘It’s Mahler nine, isn’t it? LSO?’ David shouted over the noise, cutting off his conversation with Katherine.

‘No that -,’

‘Ah, yes, for I was there last night. Magical, just magical! And you can hold silence well, my friend!’ Aitzaz spoke over him.

‘Tha – thanks, I appreciate it, but Mahler nine is done as of yesterday.’ The fumes were starting to get to him. ‘I – I, shouldn’t say this really, but I have two new commissions. Royal Opera House. Opera called Lucia, about James Joyce’s daughter. Oh yeah, and Turangalîla. But I’m, But I’m not sure if I’m up to it really. I want to try something else.’

‘Lucia? Why I think I’m in that one chap! And don’t worry I’m sure it will be splendid.’ David did not seem to notice Adam’s sombreness.

‘Don’t want to do it! But why? You have such a skill. A way with things. And you are young my friend! Under 30 and conducting at this calibre?’

David and Aitzaz spoke simultaneously, amidst the lingering bug sound, and watched as Adam’s head drooped toward his knees in sadness and the rain began to fall outside.

‘Please can we talk about something else?’ Adam muttered, much like a child.

‘Don’t worry Adam, I expected Aitzaz to suck up to you this way. It must be uncomfortable. He even told me to keep quiet, I might cramp his style, he said.’ Katherine put a hand on Adam’s shoulder, then removed it, and the room fell silent. Aitzaz retreated toward the door opposite the entrance which opened, as expected, onto a small room in which he was growing drugs. He returned shortly afterwards holding another pipe which he gave readily to Adam.


‘Please do not embarrass me, Katherine. And yes, we may talk about something else. I was going to ask what you thought of the boat, for it is only a temporary project of mine that I will exhibit as a series of photographs. You however are lucky enough to see the whole thing.’

‘Fucking hell Aitzaz, just ask him out already.’

‘SHUT UP KATHERINE!’ He snapped and lunged toward her, but she didn’t flinch even slightly, for he looked quite pathetic. ‘GIVE ME THIS ONE FUCKING THING BETWEEN ALL OF YOUR ATTENTION! I HAVE MY SUCCESSES TOO!’

Adam and David quite simultaneously reached for their phones and scuttled towards the door, making up then amending their excuses as they moved. As they wrestled with its various locks and escutcheons, their hosts were still at each other, careering round the terrariums as if they were not there at all and the room was perfectly normal. In his occasional glances back, they seemed as if they were dancing to a simple motif Adam thought he heard from behind the spinet piano but resolved he must simply have made up.

‘Hurry up David! You’ve been here before! You must know how the door works.’

‘I DO, IT’S JUST QUITE COMPLICATED.’

In one swift movement, David finished unlocking the door, though not in a conventional sense, as he and the artist fell atop one another into the adjacent room, having torn the large metal door clean off its hinges. ‘Oh my good lord, I’m ever so sorry.’

Adam could take no more and began laughing uncontrollably as he pulled himself back up, leaving David face down in the darkroom. He could still hear the piano, but the two had ceased their performance, Katherine soon joining him in unrestrained laughter and Aitzaz’s face scrunching inward, from shock to apoplexy. Stubbing out his cigarette, Aitzaz let out an enigmatic grunt and strolled back to his chair, at which point Adam noticed a knocking sound from the outer hatch.

‘Answer that, Katherine. And it better not be the police,’ Aitzaz grumbled between breaths.

3.

Katherine was eager to move the boat that night, despite Aitzaz’s bid to remain for the purpose of a photograph he had not yet taken; however, upon receiving an ultimatum from a man at the hatch to stop blocking the canal or be reported, he promptly folded, and Katherine donned a black poncho and resolved to move the boat out onto the wetlands for the night, and search for a spot in the morning. Adam and David had no say in the matter and were by then too drunk or drugged and confused to object.

‘Don’t worry about the door,’ said Aitzaz, having calmed down slightly. ‘I’ll make something of it.’

‘I do have to ask man, what’s with this place?’ Adam replied, giggling slightly through sips of David’s wine. ‘Like, you don’t live here do you?’

‘No, no I do not. But I do at the moment. It is a new, risky project of mine but I do not regret it. You see Adam, despite what Katherine might think, I am not a failure; I have worked on film sets, produced concept albums, successful exhibitions and have published a collection of poetry and a handful of novels. But, I am not happy with them, and I found having to keep these things separate tiring and unfulfilling. And it is not because they did not do well - again, I was at the Biennale a few years ago - but rather because I didn’t feel that they were genuine. If that means anything. But I feel that in this boat I have created a space that is truly mine regardless of how strange it is and I had to do something new, even if it reduced me to madness.’ He paused, and leaned over to one of the small airplane windows, noting that they were now moving. ‘I feel as if to confine one’s self to a single medium is so awfully restrictive and gives way to a condition of split self; and that is why I use this boat as a shebeen of sorts, because I get so many opinions on it from people of so many disciplines. They think, “Aitzaz Haidar has lost his mind!” but usually come round to understand it once I explain. It is an essay on pathways, routes out, entrapment and freedom within limits. I shall exhibit it as photographs purely because of its illegality; otherwise I should want everyone on here. It is my safe space, an outward manifestation of myself, and that is what I want my art to be beyond anything else.’

‘He gives the speech to everyone Adam, maybe you can explain it to me! I just come here because it is lawless and I’m rather fond of Ka – never mind.’ David stage-whispered crudely toward him, in clear earshot of his host.

‘All that said, I still respect a good craft, and that is why I encouraged you to take those commissions. But I feel you have a similar condition to myself in all this, and I am sorry if I came across as pressuring.’

Adam sat back in his seat such that his head met its backrest and picked up the pipe once more. What did he make of this man, Aitzaz? It was a gloomy mix of fascination, pity, confusion, and deep respect; that he should, after all his success, have the courage to make something as different, as bizarre, as this, that which Adam didn’t grasp as art in and of itself; that was quite something. And yes, he did feel much the same way himself. He did not want to be a musician, a filmmaker, a poet, or a novelist; it was a matter of definition, of confinement, for if you want to be those things, ‘you will invariably become them.’9 He thought of all the praise he had received, for being something he did not want to be; how his fiancée had been ‘so glad to be marrying a conductor;’ how his parents only put up with him when he told them of the next great project, the next pay-check. He thought of hanging around, outside stage doors, with people so set and happy with being musicians; drugs wearing off, and having nothing to write about; and wishing, longing to be like others. Katherine, definitely; perhaps not David. But now, there would be no more of that; if there was space for this man, there would be space for him, and he would make what he wanted, when he wanted, and only take up that commission if he felt like it; he had enough money already, and his fiancée worked in finance.

Adam could sense the wetlands were close, and had given up on going home that night. Conversation ground to a halt on board, and the Wyrd inched slowly along the canal, almost crawling along its bed. The rain that fell outside had grown heavier such that from the windows it looked as if a thick sheet of glass were falling out of the sky, glinting with light then falling to the ground on perpetual loop. Inside, the sound of insects, idle conversation and the background piano merged into a strange kind of silence that one got used to, much like a city soundscape; now, he could imagine staying, and fell into a strange, comforting reverie in his new home, as if it had always been there and nothing could disturb him.

9 Quote by Oscar Wilde.


Aitzaz was the only one standing up, and was taking innumerable photos then fiddling with the broken door which still brought laughter to Adam’s mouth.

‘I’m slightly worried that the wetlands might be too choppy for my third guest,’ he said.

Adam thought nothing of this comment for a few minutes, presuming Aitzaz had made a mistake, but then winced at the thought of the space being unknown to him in any way.

‘Thir – third guest?’ He mumbled, his head spinning.

‘Ah, yes I probably should have told you, though I presumed you heard the piano playing, no? It’s not a pianola!’

As if on demand, a man of small beard, trimmed hair and average height stumbled up off the piano, gripping it for balance. Adam recognised the face; it was one he had seen on television, at Glastonbury, at the Hammersmith Apollo; it was a musician he adored, manifested upon this boat. But no, for he was real; his eyes bloodshot and his pupils the size of beachballs.

‘Careful, Careful.’ Aitzaz mumbled, backing off. ‘He’s been on some stronger stuff.’

David, Adam and Aitzaz froze as the man’s face darted about the room, and small, frightened yelps fell out of his mouth. Without the piano, the room seemed achingly loud, and beneath their feet, the boat began to rock with unnerving speed; they had arrived at the wetlands.

‘No-one move,’ Aitzaz asserted, as if he hadn’t created the situation and was now the hero within it. They sat, or stood, completely still for what felt like hours, until a crackle of lightning and a huge gust of wind to the side of the boat set the man wild, clawing at the windows for a means of escape.

‘Fuck! Keep him away from the side hatch and Jesus! Guard the terrariums!’ Aitzaz screamed, running round the room aimlessly once more as Adam and David shielded themselves in their respective seats, fully aware their host was out of his depth. The man, quite unknown to Aitzaz, had found the side hatch, opened it without challenge, and proceeded to launch himself out into the water, pushing the boat further down in the process. Before anyone heard the splash, water began flowing freely into the boat on the starboard side, tilting it downward and pushing the port side terrariums off their plinths until they smashed, ear-splitting, onto the slate below.

It was chaos. All manner of insects that Adam had never seen – never knew existed – clambered freely about the cabin or took flight within it. There were titan beetles, bizarre moths, mantises, various hand-sized cockroaches, and soon after the lights were broken by water, the scene was lit purely by a host of luminescent fireflies. How Aitzaz had got hold of these bugs was a mystery; how he had kept them alive was a miracle. Not that that mattered. It was every man for himself, and Adam forced himself to sobriety upon seeing David collapse, paralysed by some strange, foreign hornet Aitzaz had collected.


Unsure whether to scream or laugh, Adam swiftly manoeuvred himself out on to the boat’s exterior, his hands on the roof and his feet underwater, lodged on the gunwale. He was heading for the coracle, and looking for Katherine, though she had long since jumped ship for a reason he did not understand. The water was cold as he swam for the small wooden vessel, and looked as though it stretched for miles out into a roaring sea rather than for a mere thirty or so metres. He could not believe he was in London, and had no idea what had happened to Aitzaz. But there was no time for heroics; he clambered into the tiny boat, cut the connecting rope, and began rowing furiously with a small pink plastic oar he found in the footwell towards the bench he had sat on earlier.

‘This is fucking ridiculous!’ He cursed himself as he moved slowly across the water. Glancing down, he saw what he thought was a piece of bamboo lodged on his tie, but it was in fact a huge stick insect that spanned the entire length of his torso, which he batted away into the water out of fear and anger. On the verge of tears, he reached the bank, relieved to be safe. There are some dangerous, dangerous weirdos out there! He thought. I thought you knew to always steer clear of them. You fucker! You’re about to be married for fuck’s sake! Stop hanging around on illegal bug smuggling boats with idiots! Fuck you! This will make the press! And David will talk about it if he makes it out alive. And God! Don’t say that he will die. And Aitzaz too! You’re a conductor! Go home!

He glanced at his watch; it was far too late to take a train. Out on the water, he saw the Wyrd go down slowly until it disappeared under, and the silhouette vanished; merged into the treacle black.

He would start work on the commission tomorrow, but it was at this point that he realised - with an awful, awful sinking feeling - that he had left his scores on board. The end.


Benjy Ezra

ART

Benjamin Ezra chose ‘Narrative Storytelling through Sculpture’ as his ERP title, intertwining an interest in Classical Mythology and a desire to explore sculpture. His project dissects the conventional portrayal of classical stories through the atemporal medium of sculpture. Using the stories of Prometheus and Pandora as a nucleus, he crafted twin sculptures of his own, illustrating themes of ascension and retribution in a tangible form. Benjamin is presently studying English Literature, Art, and Mathematics, with aspirations to enrol in an Art Foundation programme next year.

Storytelling through Sculpture

Accompanying walkthrough text

Narrative in Ancient Greek Art

Introduction

Art is prehistoric, developed by some of the first humans as a means of communicating – telling stories drawn or engraved on cave walls. Pictures, symbols, and art as a whole were used as a narrative device before text or speech existed. How has this been developed? And in ancient Greece, how was art used to retell the stories we all know today of myth and monster? This civilization was known for great sculptures and architecture – the Parthenon’s pediments crowded with sculptured retellings of myth. How is a written story told through sculpture, and how or what can I create to develop and respond to this?

Beginning Research into this Concept

Most Greek stories were written in the form of poetry: The Iliad and The Odyssey being key examples, and many we learn through ancient artifacts and pottery. The book ‘Image and Myth’ examines a key difference between these two different narratives. Luca Giuliani argues that prose exists in succession, and therefore can be crafted by the author to be read and experienced in a specific, measured way and speed. The reader cannot choose any way of understanding the story other than that provided, and hence the story remains sequential for every reader.i However, art exists all within the same instance – it is temporally unified. Every part of the story told exists in the same frozen moment of depicted time. The artist is far less capable of structuring the process of reception as a temporal sequence, meaning every person experiences the story in their own order and can take more varied impressions – how does an artist ensure the intended storyline is being told? How can a single image or sculpture depict multiple series of events in the correct order?

In the article ‘Greek Art’ by John Griffiths Pedley, he examines the methods artists within this period used to create such narratives. When comparing the representation of mythological figures within Greek art to other periods of time or cultures, such as Hindu or Egyptian gods, there is a noticeable difference in content. Often the art becomes symbolic and inhuman – animal heads, obvious superhuman imagery such as wings, multiple arms, or distortion of body shape and size. Yet in Greek representations of stories and gods, rather than attempting to create a new form, superior or different in nature to the ordinary man, attempts are made to depict superiority via perfection of the already used human form and shape.ii This involves, Pedley argues, the use of idealism and aesthetically pleasing curves, angles, or proportions, depicting speed, strength, or power within a given mythological creature. It is not a different form to the human, but simply a heightened one.iii Knowledge of anatomy is taken advantage of to create what should be, as opposed to what is – creating a godly sculpture or image in the sense that it becomes an amalgamation of perfect forms, muscles, curves, and sizes. Pedley’s research into this field can prove extremely useful when needing references and methods to respond to this theme, in order to appropriately engage with this style of sculpture and image. Yet while it provides support in narration of character and form, it does not explore much in terms of narration of events.


In the book ‘Sculpture and Vase Painting in the Archaic and Classical Periods’, Susan Woodford analyses the different stories depicted through Greek pottery, mostly from the perspective that the viewer is already familiar with any of the narratives. For example, a vase in the Eleusis Museum (Figure 1) portrays the story of Odysseus and the cyclops Polyphemus on its neck. To make the story clear, the painter chose to illustrate the easily recognisable scene of blinding, having three men pointing their spears into the single eye of a cyclops (indicated by his great size in comparison) – which a viewer would hence assess is Polyphemus. Their leader, leaping and the only form in white, is therefore featured in an obvious way, which in turn also portrays the story through the lens of this ‘hero’ – his dynamic pose, and unique colour depicting him as a saviour. The cyclops raises one ineffectual hand in an attempt to push the stakes away, and with the other it holds a wine cup. This is crucial, because while the painter depicts this storyline all in one, it is not intended to be received as the three men attacking Polyphemus while he is drinking; the wine cup is present to clarify that the cyclops had been made drunk prior to the institution of the attack plan.iv The artist was not interested in depicting a single moment, but the story in its entirety. Contrastingly, we are provided with another example in which a painter depicts the story of Troy on a vase (Figure 2), but rather than depicting the story through one image, they use the cyclical shape of the vase to progress the story left to right. These are two strikingly different narrative approaches – a story in one image, and a broken-down story in multiple images.

Figure 1. Funerary Proto-Attic amphora with a depiction of the blinding of Polyphemos by Odysseus and his companions, Archaeological Museum of Eleusis, Greece, probably c. 650 BC

This leaves me with a series of decisions to make, as well as more research to investigate. In responding to this, how do I wish to depict a story? Shall it be done in succession, or with no temporal sequence? The methods by which I will replicate Greek styles and traditional art can be learned, and more inspiration can be found. My next steps will be to find a specific story to explore and to develop these themes of sculptural narrative.

Initial Exploration of form and narrative

Selecting a Story

After reading more and researching different classical stories, initially the story of Icarus intrigued me the most, and seemed to have lots of possibilities for exploration. In this particular myth, Daedalus, a mythical inventor, created a set of wax wings for himself and his son, Icarus, to escape captivity under King Minos. Yet Icarus ignored his father’s warnings, flying too close to the sun, meaning the wax melted, and he fell to his tragic death in the ocean. I think it was honestly the opportunity of depicting wings, along with the concepts present in the story – perseverance, tragedy, and ambition – that enticed me.

However, after looking at different ways the story has been depicted before, I realised that the story would not allow me to truly develop and explore narrative sculpture. Firstly, the order of events relies too much on outside influence: the sun, the ocean, the tower in which they were held captive, all of which hold important influences on the stages of the story (such as the melting of the wings, or the cause of death). Therefore, as I am intending to create a figure-based sculpture, there would not be much opportunity to include landscape and scenery. Furthermore, the myth is too simple – too little happens; there are only really three different stages of the story. I would much rather use a story with many more details and context to allow me to really stretch my development and experiment with what I can create as a response to a narrative text.

Following this decision, I managed to decide on a story that really interested me and abided by the rules I had set to allow my experimentation to be as wide as possible - the story of Prometheus, which I soon realised tied heavily into that of Pandora, being a connection I wanted to visually explore. I read widely to understand the key aspects of the story, using the extracts below as key inspiration.

Prometheus, however, who was accustomed to scheming, planned by his own efforts to bring back the fire that had been taken from men. So, when the others were away, he approached the fire of Jove, and with a small bit of this shut in a fennel-stalk he came joyfully, seeming to fly, not to run, tossing the stalk so that the air shut in with its vapours should not put out the flame in so narrow a space. Up to this time, then, men who bring good news usually come with speed. In the rivalry of the games they also make it a practice for the runners to run, shaking torches after the manner of Prometheus.

In return for this deed, Jupiter, to confer a like favour on men, gave a woman to them, fashioned by Vulcanus [Hephaistos (Hephaestus)], and endowed with all kinds of gifts by the will of the gods. For this reason she was called Pandora. But Prometheus he bound with an iron chain to a mountain in Scythia named Caucasus for thirty thousand years, as Aeschylus, writer of tragedies, says. Then, too, he sent an eagle to him to eat out his liver which was constantly renewed at night.v

Pseudo-Hyginus, Astronomica 2. 15 (trans. Grant) (Roman mythographer C2nd A.D.)

He hid fire; but that the noble son of Iapetos stole again for men from Zeus the counsellor in a hollow fennel-stalk, so that Zeus who delights in thunder did not see it. But afterwards Zeus who gathers the clouds said to him in anger: ‘Son of Iapetos (Iapetus), surpassing all in cunning, you are glad that you have outwitted me and stolen fire – a great plague to you yourself and to men that shall be. But I will give men as the price for fire an evil thing in which they may all be glad of heart while they embrace their own destruction.’

So said the father of men and gods, and laughed aloud. And he bade famous Hephaistos make haste and mix earth with water and to put in it the voice and strength of human kind, and fashion a sweet, lovely maiden-shape, like to the immortal goddesses in face [Pandora] . . . But when he had finished the sheer, hopeless snare [Pandora the first woman created by the gods], the Father sent [Hermes] . . . to take it to Epimetheus as a gift. And Epimetheus did not think on what Prometheus had said to him, bidding him never take a gift of Olympian Zeus, but to send it back for fear it might prove to be something harmful to men. But he took the gift, and afterwards, when the evil thing was already his, he understood.vi

Hesiod, Works and Days 42 ff (trans. Evelyn-White) (Greek epic C8th or C7th B.C.)

After creating men Prometheus is said to have stolen fire and revealed it to men. The gods were angered by this and sent two evils on the earth, women and disease; such is the account given by Sappho and Hesiod.vii

Sappho, Fragment 207 (from Servius on Virgil's Aeneid) (trans. Campbell, Vol. Greek Lyric II) (Greek lyric C6th B.C.)

There is so much occurring in these myths, and all of the external influences – a jar, chains, fire, an eagle – are much more easily depicted three-dimensionally as opposed to landscapes. More importantly, there are many deep narratives, strong concepts of punishment, kindness, curiosity, and power, which have so much opportunity to be displayed through sculpture. In an article of hers, Helen Huckel highlights these different narratives. To Hesiod, Prometheus was a trickster, and to Aeschylos, he was a martyr who sacrificed himself for mankind’s benefit. In some versions of the story, he is the creator of man from clay, and in others it was he who gave mankind a conscious existence.viii Yet in all aspects, it was he who willingly defied the chief god, and allowed man to develop, grow, and exist in safety – fire being a substance still extremely crucial today. What caused this complex titan to have pity on humanity? Was this really his crime? And did this empathetic, pitiful crime justify the creation of Pandora, who is blamed for the opening of her jar when she was made to do just that? The contrast between Prometheus’ total control over his actions, and Pandora’s calculated position on the gods’ chess board is unique, and through these questions these characters are tied together – being exactly what I want to depict through my sculpture.

Stage 1 – Sketching and Photoshoot

Beginning the development of a physical sculpture, I wanted to explore different ways I could depict Prometheus, and at the same time develop my sculptural skills, as I had never touched the subject before. Hence, I first researched different ways this story has been sculpted in the past, and then created a series of sketches to photoshoot based on.

Some of the images I found most interesting are shown above, but the main themes of the sculptures, as seen in Figures 3 and 4, are a bold, courageously positioned Prometheus, amongst stone, and chained with an eagle on him. Despite Figure 5 being an old painting, the way the artist has chosen to depict him is still worth looking at; here Prometheus is more cautious, careful, in the moment of stealing fire. Often it was illustrations, as opposed to sculptures, that focused more on the suffering aspect of his story. The rest of the reference images I used can be found in the folder ‘1. Small Prometheus’ in my folder collection.

This sparked interest and ideas for me which in turn became the following sketches. The main ideas I wanted to portray were the heroic, brave side of Prometheus; the more secretive, trickster side; and by contrast the human connection he has. These manifested in the forms of pose, inclusion of things like fires, rods, or campfires (representing his theft), and the inclusion of a smaller, human form.

Figure 3. MGA Sculpture Studio, llc.
Figure 4. Prometheus, captivated (1872-1879) by Eduard Müller (1828-1895); Museumsinsel Berlin-Mitte, Berlin
Figure 5. Prometheus Carrying Fire - Jan Cossiers (1600-1671) - Prado Museum

Following the planning, I set up a photoshoot to allow me to see these ideas in reality and aid me in creating the form using reference images. I recognised the lack of development some of the ideas had – such as Prometheus wrapped around a small mountain – as well as the level of difficulty of some, considering this piece is to help me get to grips with sculpture.

Figure 6. Model (Oliver) posing as Prometheus, climbing a mountain with a flame on top
Figure 7. Prometheus after stealing fire
Figure 8. The upper view of Figure 7
Figure 6. Initial sketches for photoshoot to plan development of small Prometheus

Overall, I preferred the pose shown in Figure 7, as it seemed a good starting point to build skill from, and began to portray different aspects of Prometheus that I wanted to depict in the same instant – the flame, showing his theft, and kindness to humanity, but his expression and stance showing his carefulness and worry about his fate; the dilemma of his decision pictured well.

Stage 2 – Creating the Figure

After deciding which pose to use, I used these images to develop my skills and learn how to create a figure three-dimensionally, as seen in the images of the sculpture below. I found various YouTube tutorials helpful in finding where to start, and they propelled me into the piece.ix, x, xi

Stage 3 – Review

Looking back at the creation of this, it was an extremely useful process to enable me to springboard onto the final piece I would eventually create. When approaching this small form, I went about creating the limbs separately and connecting them – something that negatively affected the cohesion of the body’s form, meaning my next approach would be to create the form all in one, perfecting the limbs on the same body of clay, which also makes the structure much stronger. Further, ensuring a body is standing, as I had planned for this, was much harder than I thought, and something I did not consider too much in the beginning, meaning the pose was neither dynamic nor structurally intelligent enough to hold itself up, so it had to rely on the late addition of a stone-like form on the ground, which in turn took away from the power that the form was supposed to have, standing alone.

The nudity of the sculpture was reflective of the fact he is a titan: often depicted nude, but I think the addition of some level of clothing would add more to the piece and be interesting to create. Further, the inclusion of the torch represents the theft aspect of the story, but his pain, his captivity, and his punishment are irrelevant here. Overall, this process was very helpful to understand the art of sculpture, but the creation does not represent the entire story well, only really a specific moment – how can I develop more of a sense of time through this fixed piece?

Figure 9. Final Small Prometheus Outcome
Figure 10. Final Small Prometheus Outcome

Manifestation of narrative in final form

Planning Final Forms

Moving on to planning a proper piece as a final response, I knew I needed to include both Pandora and Prometheus but needed to determine how to link them together. Initially, I created the sketches below (in Figure 11), contrasting Prometheus’ bold brandishing of his stolen flames – showing his resourcefulness, but also his kindness to humans – with Pandora’s weakness and curiosity, reflected by his tall stature, and her lying down on the opposite side.

In addition, to tie together the temporal aspect of the story, I knew all parts of the story needed to exist in the same time, and I portrayed these through symbolic objects. For Prometheus, that meant including the bird, his arm chained, and perhaps some small humans at his feet. For Pandora, that meant the gifts of the gods, such as clothing or jewellery, and the pile of clay from which she was moulded.

However, I reformed these ideas (Figure 12) and reshaped the general structure of the arrangement. This was initially caused more by practicality – having Prometheus stand would be logistically hard considering he is directly in front of Pandora, meaning he cannot lean on much, and further, the size of the kiln I have access to inhibits such a height. However, once laying him down, I found that rather than showing the contrasts between the characters in this story, the similarities between this titan and a human, in their moments of punishment, were much more interesting to experiment with and develop, prompting those final sketches.

This set in motion my final series of photoshoots for both Prometheus and Pandora (all images available in the folder collection).

Figure 11. Initial plans for final outcome
Figure 12. Later, refined sketches of final outcome arrangement

During the photoshoots, I experimented with other poses for Pandora, such as sitting or crouching as opposed to leaning, but I ultimately felt that the position shown in Figures 13 and 14 well reflected her grace and innocence, in contrast to the dramatic opening of the jar, resembling the different narratives well. Further, the Prometheus pose I used ended up being quite different, with the position mirroring Pandora's more, better reflecting the pain he is in, but also providing a more dynamic contrast with the high raise of the torch.

Creating the Final Sculptures

Furthering my research, I had the opportunity to visit an exclusive Rodin exhibition – a sculptor working in the 19th century who created large, incredible figures and forms in the classical style. The images I took were extremely helpful both to inspire me and to show me more accurately how to achieve a real, tangible feeling and shape of a form, and I referred to them heavily during my sculpting of the final outcome. These images can be found in my folder collection.

The creation of Pandora was lengthy and difficult. It was still really only the second sculpture I had made, and the first of that size, but using my smaller practice I approached her form much more cohesively; arriving at the clothes, however, was a challenge. Rather than relying on my images, which no longer aligned so closely with the stage my sculpture was at, I used a cloth, arranging it around my sculpture to inspire how I approached creating and arranging Pandora's clothes (as seen in Figures 17 and 18, as well as in the folder collection).

Figure 13. Pandora Photoshoot Reference Image
Figure 14. Pandora Photoshoot Reference Image
Figure 15. Prometheus Photoshoot Reference Image
Figure 16. Prometheus Photoshoot Reference Image

After finishing Pandora, I made Prometheus, and wanted him still to remain mostly nude, due to his being a titan, but added a small section of clothing, further creating some contrast between the two.

I also decided not to include the pre-planned eagle in the piece for three reasons: logistically, it would have to be quite small and would be brittle and highly likely to break; it could potentially distract from the main forms of the two central characters; and I could still allude to, and therefore symbolise, this aspect of the story without the eagle itself.

Final Photos

Along with the full photoshoot in the collection, below are some of the best images of the final, completed arrangement, with both sculptures complete.

Figure 17. Arranging rags to aid clothing creation
Figure 18. Arranging cellophane to create more creases

Conclusion

In this piece, Pandora and Prometheus are connected by their punishments. They are both depicted in their worst moments, but their story and their strengths are highlighted and resembled through different lenses of the sculpture, all in this fixed narrative. Pandora is shown at her worst: giving in to temptation and releasing, unbeknownst to her, the worst evils into the world. Yet here she is seen lying down, comfortable. Her nature of innocence and grace is clear, but the controlling and punishing actions of Zeus overarch this, through the opening of the jar. Her robes resemble the beauty and craft gifted by Aphrodite and Athena, and her floral wreath is a marriage wreath, common in Greek times for brides to wear, symbolising her being given away to Epimetheus, with no choice. Moreover, the depiction of the seven deadly sins as snakes is a key image used often in classical stories, taken from Medusa, who was raped by the Gods, and punished as a twisted, nonsensical result. Similarly, Pandora was created by Zeus simply to serve as a punishment for mankind. The contents of the jar in this way resemble the control Zeus has over her – using her through this jar, and punishing her through it when she is his very creation, accentuating the cruel hypocrisy of the Gods and the lack of agency she has in this fate. This complicated moment of punishment she lies in is arranged cyclically with that of Prometheus, whom I wanted to be on her other side, creating a totally outward, three-dimensional narrative – she begins where he ends, and vice versa. Her leaning down, opening the jar, still contrasts with his high raise of the flame torch he stole, yet he, a titan, is similarly being punished, this time for his own actions, despite the fact they come from a positive, empathetic, fatherly place. He lies there, not in grace, but in pain, in a lower position, weak, and on the ground, but bringing focus to his brave, strong action of defying the ruling Gods, all for the sake of humankind. His punishment of chains is included, and a slit at his liver to show its infinite cycle of destruction and healing.

Here, these two very different people – a god and a titan; a young, used woman and an incredibly powerful, age-old man; a jar-opener and a fire-giver – are united by their pain and punishment. The focus is brought to the justice of their actions, and despite their suffering one is brought to acknowledge the worth of the actions which led them there – advancing mankind. The cruelty of the gods, who are not present, is also brought to attention, and overall, for me, this piece sparks the concept of rebellion against an unjust system.

Reflecting on my initial concept, can a story be well told through sculpture? Without a doubt, yes. The inability to develop a visual temporal narrative might hinder the ability to create an entirely accurate narrative, but at the same time it allows the artist to completely transform the process of understanding a story. It does not matter so much that the original narrative is kept, because the existence of a complex story, shown in one beautiful moment – whichever the sculptor chooses – allows a story to be told through different lenses, and requires the viewer to think, to visually explore and rediscover a story in whichever frame it is chosen to be displayed. The artist has the freedom to create whichever links, show whichever concepts, and challenge whichever ideas they find interesting. This is in no way an improvement nor a degradation of the written form, but a completely different way of experiencing a story, and it has allowed me to explore and create a piece I find intriguing, thought-provoking, and one I take pride in.


i Image and Myth: A History of Pictorial Narration in Greek Art, Luca Guiliani, Translated by Joseph O'Donnell, (2013), pp. 24-172

ii Idealism in Greek Art, Percy Gardner, The Art World, Vol.1, No.6, (1917), pp. 419-421

iii Greek Art, John Griffiths Pedley, Art Institute of Chicago Museum Studies, Vol.20, No.1, Ancient Art at The Art Institute of Chicago (1994), pp. 32-53

iv An Introduction to Greek Art: Sculpture and Vase Painting in the Archaic and Classical Periods, Susan Woodford, 2nd Edition, (2015), pp. 62-96

v Pseudo-Hyginus, Astronomica 2. 15 (trans. Grant) (Roman mythographer C2nd A.D.)

vi Hesiod, Works and Days 42 ff (trans. Evelyn-White) (Greek epic C8th or C7th B.C.)

vii Sappho, Fragment 207 (from Servius on Virgil's Aeneid) (trans. Campbell, Vol. Greek Lyric II) (Greek lyric C6th B.C.)

viii The Tragic Guilt of Prometheus, Helen Huckel, American Imago, Vol.12, No.4, (1955), pp. 325-336

ix How to Sculpt in 4 steps: https://youtu.be/jCIGcAz0Snk

x Sculpting Timelapse – HEAD MODELLING (Tutorial): https://youtu.be/64bpcvDM4Ug

xi Speed Sculpting – What You NEED to know: https://youtu.be/H4WtpO8vfTU


Joshua Jonas

MUSIC

Joshua Jonas chose to produce an album of music as it was a chance for him to exercise his creative talents, whilst pursuing other academic interests. The EP consists of 3 songs across metal, Lo-Fi, and pop genres, and Josh handled the entire production process, including writing, mixing, and mastering. Joshua Jonas is studying Biology, Chemistry and Maths and wants to take Biochemistry at university.

Standoff Sequence

Initially, I am drawing an influence from MF DOOM’s album “Mm..Food”, in which he has multiple interludes like this that seamlessly transition into the next track, which is evident in this piece transitioning into the metal track. I am sampling an episode of Spiderman 1967 called “To Catch a Spider”. With this track, my main influences were Iron Maiden’s album “Powerslave” and Rage Against the Machine’s self-titled album, and several elements of these albums are incorporated into this track. With my overall EP, I am aiming for the over-arching theme to be the focus on guitar parts, and this track almost solely focuses on the guitar. Initially, I have a slow rhythmic build-up which is similar to some of the tracks off the Rage Against the Machine album, containing distorted power chord stabs and high-pitched licks. The chorus continues with a simplistic hook containing many dissonant intervals at a faster pace, accompanied by the drums, which fully enter at this point and drive the chorus forward. During the latter half of the chorus, a lead guitar countermelody playing in thirds enters, similar to some of the tracks on Powerslave and providing melodic variation. During the verse, there is an extremely simplistic guitar line playing, reminiscent of RATM’s verses on their album. The guitar solo follows and contains many elements from Iron Maiden’s solos on Powerslave such as tapping, doubling the guitar in thirds and frequent modulations. Following this, the chorus occurs again, helping to solidify this as the main idea within the track. Immediately at the end, there is a short transition in which the rhythm of the track shifts from straight rhythms to triplet rhythms in the outro. Driving guitars and loud drums help to set the pace of this section, varying the pace of the song as a whole to provide contrast for the listener.

Ship In A Bottle

The aim of this song is to be a catchy sequence that transitions into the next song. With this track, I am again drawing an influence from MF DOOM as he tends to include many short interludes within his songs. I am sampling an episode of Bagpuss called “Ship In A Bottle”. I have chopped up some audio sections in the song so that certain phrases end on the beat which adds to the syncopation of the song. I have added a bouncy drum loop, some jazzy guitar chords, and a syncopated bassline that all add to the groove of the song. Halfway through the song, a plucky guitar line plays that provides contrast for the listener.

Contemplating Things

With this track, my main influence was Denzel Curry’s album “Melt My Eyez See Your Future” and specifically the track “Mental”. The track contains a very mellow instrumental which I implemented in my track. While Curry’s track had a clear spoken word influence, my track aimed in the Lo-Fi hip-hop direction. The chorus of my song is very simplistic, consisting of an electric piano, some drums, a bass, and my vocals. As well as this, the harmony consists of only 2 chords throughout, ensuring that the listener’s full focus is on the vocals. To link this song with the EP as a whole, there is a guitar solo after the 2nd chorus which is also simplistic in nature, as well as a call and response section in the final prechorus between the guitar and the vocals, which adds variation towards the end of the song.

ERP Complete Review

STEM Faculty


Aran Asokan

CHEMISTRY

Aran Asokan chose “Carbon Mineralisation” as the focus of his ERP and its relevance in the context of overcoming coral bleaching. The project delved into the prospects of mineralisation as a carbon sequestering method and went into detail about how climate change is affecting coral reefs and how mineralisation can tackle the issue. Aran Asokan is studying Physics, Chemistry, Mathematics and Further Mathematics at A-level and is pursuing an engineering degree at university.

ERP: Is Carbon Mineralisation the Key to Tackling the Climate Crisis?

Introduction

Environmental consciousness has rapidly become one of the greatest concerns in society. The importance of adapting our means of energy generation towards sustainable methods has been emphasised greatly, but the reversal of damages already incurred by humanity is relatively understated.

Current environmental revitalisation agendas revolve around carbon sequestration methods characterised in one of three ways: biological, technological, and geological. This paper aims to highlight the importance of geological approaches, and to integrate them with a biological system too, in the form of coral reefs.

Despite the global relevance of CO2 removal, engineered geological methods remain a niche solution, with carbon mineralisation still in its infancy. The process encapsulates the formation of carbonate rocks, largely in volcanic geological formations, using carbon dioxide extracted from the atmosphere: an acceleration of a naturally occurring, though much slower, version of the process, which often takes centuries [1]. Although the process seems propitious conceptually, its scalability and viability have sparked debate over whether it is worth developing.

Instead of geological methods, technological advancements in Direct Air Capture (DAC) have received much more attention. Currently, over 90% of captured CO2 is taken from emissions immediately upon leaving factories [2]; however, DAC in its current state is not viable due to the number of capture sites needed to cover the entire atmosphere.

One carbon capture company, Climeworks, uses a potassium hydroxide filter and a series of reactions to produce pure carbon dioxide [3]. Air is pulled into a contactor system; the mixture passes through a filter containing potassium hydroxide solution, causing CO2 molecules to bind to the solution. Calcium hydroxide pellets pass through the solution, forming calcium carbonate. A centrifuge separates the potassium hydroxide and calcium carbonate; the KOH is fed back into the filter to be reused. The calcium carbonate is heated to high temperatures, forming calcium oxide and gaseous carbon dioxide, at which point the pure carbon dioxide gas can be sent for storage. A particular focus will be placed on carbonation/mineralisation for this storage step.
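The loop described above can be summarised as a short reaction scheme. This is a simplified sketch based on the description given here; the exact species and conditions used industrially may differ:

CO2 + 2KOH → K2CO3 + H2O          (capture: CO2 binds to the potassium hydroxide solution in the filter)
K2CO3 + Ca(OH)2 → CaCO3 + 2KOH    (pelletisation: calcium carbonate forms and the KOH is regenerated for reuse)
CaCO3 → CaO + CO2                 (calcination at high temperature, releasing pure CO2 for storage)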

As of September 2022, the International Energy Agency reported 18 operational direct air capture plants across the US, UK, and Canada [4] – evidently a long way away from the number needed to reverse emissions stemming from the industrial revolution. Nevertheless, carbon capture and storage (CCS) methods will prove to be a necessity in rectifying decades of greenhouse gas emissions: it is estimated that CCS can reduce 85-90% of CO2 emissions from large emitters and from operations with high energy consumption rates [5].

Figure 1. An illustration of a typical DAC process [3]

One limitation with DAC is that the gaseous carbon dioxide removed from the atmosphere is inevitably recycled to be used in various places, such as in carbonated drinks, fire extinguishers, and as a refrigerant. However, this way of managing CO2 means that any volume of it that is sequestered eventually re-enters the atmosphere and must be re-captured. This creates an unsustainable loop in which costly DAC units are constantly running. Instead, the aim of DAC should be to lower the total volume of CO2 in circulation by finding a more long-term method of storage. In tandem with DAC technology, carbon mineralisation can provide a much more robust overall process for carbon capture, storage, and utilisation.

Chapter 1: Conceptualisation of Mineralisation

Carbon mineralisation is a process facilitated by exposing certain rock species to aqueous/gaseous CO2 to encourage accelerated mineral formation in the pores of these rocks (displayed in Figure 2) [1]. In the context of aqueous carbonation in concrete, the reaction starts with CO2 dissolving in a water film covering the mineralisation site to form carbonic acid [7]. This acid dissociates into two H+ ions and a CO3^2- ion. Similarly, the mineral undergoing the process, in this case Ca(OH)2, provides a positive ion – here, Ca2+ – to react with the carbonate ion, forming a carbonate mineral precipitate. Though the general reactions taking place are well understood, specific kinetics and mechanisms are currently unknown, as carbonation is an under-explored area of chemistry.
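Written out as simplified equations, the aqueous carbonation pathway described above proceeds roughly as follows (intermediate bicarbonate species are omitted for clarity):

CO2 + H2O → H2CO3            (CO2 dissolves into the water film as carbonic acid)
H2CO3 → 2H+ + CO3^2-         (the acid dissociates)
Ca(OH)2 → Ca^2+ + 2OH-       (the mineral supplies the positive ion)
Ca^2+ + CO3^2- → CaCO3       (the carbonate mineral precipitates in the pores)
H+ + OH- → H2O               (the released protons are neutralised, as described below)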

Basaltic and ultramafic rocks are classifications with high pyroxene, olivine, plagioclase, Mg, Fe, and Ca contents and low Si contents; displaying the greatest affinities with carbon dioxide, these types are the most favourable for mineralisation. Silicates, too, are often used for mineralisation research purposes, both due to their abundance in the Earth's crust and their fine balance between good reactivity with carbonic acids and the ability to form stable carbonates upon reaction.

Figure 2. An image of mineralised carbon dioxide in rock pores [6]

The H+ reacts with a water molecule to form a hydronium ion, which then reacts with a hydroxide ion to form two molecules of water [7].

Ex-situ mineralisation refers to the extraction of such rock types, to be used in high temperature and pressure reactors to forcibly control the reaction; the conditions needed range from 45-185°C and 1-150 atm (depending on the sample being carbonated) [8]. Although the maintenance of such extreme conditions creates a large associated cost, ex-situ allows for iterative modification of the process to optimise reaction rates via additives and catalysts, and minerals produced can be sold for cement manufacturing, which helps combat the steep costs.

Additionally, ex-situ allows for the resolution of biohazard concerns surrounding mine tailings; asbestos fibres interlaced within mined rocks are often inhaled by workers on-site, resulting in various health effects such as asbestosis. CO2 mineralisation acts to plug up pores in the rock, preventing the asbestos fibres from escaping and entering respiratory systems. The specific use of mine tailings drives process costs down significantly: so much so that it becomes cheaper than in-situ – $8 per metric ton of CO2 mineralised compared to $30 for in-situ in basaltic formations, as estimated by USGS [9]. The main limitation of ex-situ stems from the large volumes of rock that must be extracted to mineralise enough CO2 to create a considerable impact on atmospheric CO2 levels. It is estimated that there is currently somewhere between 7.82 and 43.1 Gt of CO2 in the atmosphere, making it apparent that mineralisation techniques are currently far from the scale required, as the annual volume of CO2 mineralised is currently in the tens of thousands of tonnes.

In-situ mineralisation, in contrast, details pumping CO2 into underground rock formations. With a comparatively small maintenance cost due to shorter machine use periods, in-situ has a much greater efficacy over the large scales of mineralisation needed to help the environment. Although one would expect in-situ to be a significantly slower process, as ex-situ processes are engineered to maximise the surface area of rock carbonated, the total area of exposed rock in in-situ is so much greater, due to the total volume of basaltic/ultramafic rock underground, that the process is quite rapid (though still slower). Research conducted by CarbFix, an Icelandic carbon mineralisation company, reduced the mineralisation period to 2 years, as described by Ó. Snæbjörnsdóttir [10]. The unnatural efficiency levels make this rather captivating, as it creates the prospect of reversing the adverse effects of CO2 emissions over the last century in a very short period.

Figure 3. Showing a reaction pathway, in which gaseous CO2 dissolves in aqueous solution to form carbonic acid. The carbonic acid dissociates into a hydrogen carbonate ion and a H+. Simultaneously, calcium hydroxide in the reacting mineral dissociates into a hydroxide ion and a Ca2+. The Ca2+ reacts with the carbonate ion from the hydrogen carbonate, forming calcium carbonate.

In-situ could be viewed as better primarily because it does not require the acquisition of a storage site for the carbonate minerals, since they can simply be left underground; however, ex-situ is arguably more cost-effective since the sequestered carbonates can be sold as a commodity – this is not possible when the carbonates are kept underground (it is clear to see why there are contending views in the field). In-situ is easily scalable as there are ample areas of rock that can be sufficiently carbonated, making it quite promising; a 2013 assessment of CO2 storage potential in the US, carried out by USGS [9], calculated that around 3000 Gt of CO2 could be stored in sedimentary basins alone.

Combined with the additional area of ultramafic rocks, the CO2 storage capabilities far exceed the CO2 volume in the atmosphere – especially exciting as this makes a future with a more stable climate far more attainable, without even considering storage capacity globally.

Chapter 2: Opportunity for Application of Mineralisation

Application and optimisation are of the utmost importance in transitioning the carbonation model from a simple form of planetary homeostasis to an excelling CO2 storage method. One area that is currently in dire need of investment is coral reef vitality. Coral reefs are living organisms that make up around 0.01% of the ocean floor but are essential in supporting ecosystems for 25% of marine life [11] through habitat provision. They also offer key services to coastal societies by dissipating incoming tidal waves, and by providing an estimated $36bn in annual tourism revenue [12]. Corals thrive due to a symbiotic relationship with the microscopic algae zooxanthellae; the algae provide the primary food source for the coral (around 90% of its nutrition) and are, in return, protected by the coral's exoskeleton from preying fish species.

Despite all corals executing the same role in aquatic systems, their anatomy diverges into the soft and hard types. Soft corals are made up of fine needle-like aggregates of calcium carbonate called sclerites, as illustrated below [13].

Figure 4. An outline of the Climeworks process integrating mineralisation with DAC [5]

In contrast, hard corals are made up of thousands of polyps (tiny anemone-like organisms); the coral larva starts off by anchoring itself to a seabed rock and synthesising an exoskeleton comprised of calcite and aragonite [14]. In a space called the extracellular calcifying medium (ECM), precipitation of bicarbonate and carbonate ions occurs, forming the aragonite/calcite used for the expansion of the coral's exoskeleton [15]. The growth process is split into two aspects: linear extension and lateral thickening. A series of polyp extension, skeletal extension, and skeletal thickening takes place over the lifespan of the coral.
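In simplified terms, the precipitation occurring in the ECM can be written as the following reactions; this is a chemical summary only, and the biological pathway also involves enzymes and active ion transport not shown here:

Ca^2+ + CO3^2- → CaCO3 (aragonite/calcite)
Ca^2+ + HCO3- → CaCO3 + H+          (when bicarbonate is the substrate, a proton is released)

The second form becomes relevant later, when the effect of ocean acidification on calcification is discussed.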

Soft corals do not take up calcium and carbonates from seawater like hard corals, meaning that they will not be beneficiaries of mineralisation to the same extent as hard corals [16]. Nonetheless, the process will help regenerate coral reefs collectively, which will certainly have some effect on soft corals.

Coral reefs are currently under immense environmental pressures brought on by a myriad of human activities and their associated consequences. Overfishing has led to the diminishing of fish species that act to consume macroalgae found on coral: this causes a eutrophication-like cycle whereby the corals die due to a lack of photosynthesis, ultimately killing the entire reef and its indigenous marine life. Sediment deposition from land levelling effectively causes the same inhibition of photosynthesis. Moreover, a surplus of nutrients suspended in reef seawater from sewage and local agricultural developments promotes macroalgae growth [11].

Rising sea temperatures, too, jeopardise corals; as average sea temperatures increase, bleaching occurs in the corals [12]. The chloroplasts inside the zooxanthellae cells become overstimulated, which accelerates the rate of photosynthesis. During the intermediate steps of photosynthesis, biological agents transfer energy released via electron transport chains. Some of the travelling electrons react with local oxygen molecules to form reactive oxygen species (ROS), which includes both radicals and non-radicals [17].

Figure 5. Sclerites on a soft coral [13]

ROS formed are harmful to the coral as they cause cell damage by reacting with proteins, lipids, DNA helices, and the mitochondria, disrupting the cell. Ejection of the algae from the coral occurs as a defence mechanism to minimise damage; however, as the algae are the fundamental food source, the coral rapidly deteriorates and becomes susceptible to disease, all while losing its characteristic vibrant colours [18].

Another threat posed by rising atmospheric carbon dioxide concentrations is ocean acidification. This alters the proportions of bicarbonate and carbonate ion concentrations present in favour of the bicarbonate ions [19]. During calcification, H+ ions from the bicarbonates are dissociated and ejected, lowering the pH in the surrounding environment. This is harmful as these ions can go on to react with carbonates to form more CO2, amplifying the imbalance between bicarbonates and carbonates. The increased bicarbonates diminish the amount of calcification that can take place, as the use of bicarbonates in the precipitation reaction is more energetically demanding than with carbonates [15], ultimately reducing growth potential.

While most studies conducted thus far have considered calcification as a product of linear extension and lateral thickening holistically, a study published in PNAS looked at them as separate components. The effects of pH on total calcification varied from site to site, due to other factors such as temperature: even though high enough temperatures can induce bleaching, slightly lower ranges increase the precipitation rate. Results from the investigation showed no correlation between linear extension and the saturation state of carbonates in seawater, ΩSW, which is dependent on pH. However, a strong linear correlation was found when comparing ΩSW to lateral thickening. This means that skeletal density decreased linearly as acidity increased [15].
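For reference, the saturation state Ω used throughout the study is defined with respect to the solubility product of the carbonate mineral (here aragonite); the definition below is the standard one, though the exact values in [15] come from the authors' own seawater and ECM measurements:

Ω = [Ca^2+][CO3^2-] / Ksp

where Ksp is the solubility product of aragonite. Ω > 1 favours precipitation (calcification) and Ω < 1 favours dissolution; since [Ca^2+] is nearly constant in seawater, Ω effectively tracks the carbonate ion concentration, which falls as pH falls.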

Figure 6. An illustration of ROS formation, leading to cellular damage [

The foremost implication of a decreased skeletal density is that corals are becoming more susceptible to breaking under mechanical stress from inbound waves. Estimations have shown that acidification has resulted in a 13% drop in skeletal density from the 1950s to the 2000s, which equates to a 60% drop in compressive strength due to the exponential relationship between mechanical stress resistance and density [19].
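As a rough check on these figures (an illustrative calculation, not one taken from [19]): if compressive strength is assumed to scale as a power of density, σ ∝ ρ^n, then a 13% fall in density producing a 60% fall in strength implies n = ln(0.40) / ln(0.87) ≈ 6.6; in other words, strength is extremely sensitive to even small losses in skeletal density.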

The study also considered the disparity in carbonate chemistry between the ECM and seawater. A calculated value for ΩECM was gathered by taking measurements of seawater temperature, salinity, and inorganic carbon concentration and combining this with pH calculations of the ECM done by analysing boron isotope compositions. ΩECM was shown to vary from 11.6-17.8, around 4 times greater than ΩSW. A comparison between ΩECM and calcification was also made, giving a more complete insight into how acidification affects coral growth, with no changes from the trends observed with ΩSW. Finally, a model was made to predict how the carbonate ion concentration in seawater, and therefore coral density, would fluctuate as CO2 levels rise. The model showed a projected decrease of carbonate concentration across the tropics by around 100 µmol/kg by 2100 (half of its preindustrial level), with an associated decrease of aragonite precipitation by 48% and an 11-17% decline in coral density. Evidently this is an issue that requires swift attention.

Ongoing initiatives to combat coral degeneration are largely confined to coral nurseries as a means of providing damaged reefs with auxiliary, healthy corals. The standard procedure involves extracting small clippings from donor reefs and attaching them to nylon rope networks or plastic meshes so that they remain suspended in the water, maximising potential growth rate (propagation) [20].

Figure 7. [Left] The process of vertical extension and lateral thickening. [Right] Results comparing ΩECM and ΩSW with extension, density, and total calcification [15]

The process is often rather lengthy, generally with a 12-18-month incubation period, leaving the threatened reef exposed to possible irreversible damage as it continues to bleach and decay due to the overly acidic conditions. The developed clippings are then introduced to the reef to encourage recovery by rebuilding the reef and attracting more fish to accommodate themselves in it. This recovery phase typically lasts up to 1 year [21], though it can take up to 10 years in extreme cases.

More recently, research into 3D printed corals has been conducted to assess any changes in fish behavioural patterns when placed in an environment with the models. A study by the University of Delaware compared natural samples with ones made from a set of filaments, including polyester and corn starch [22]. Factors investigated were mainly fish activity, encompassing total distance travelled and frequency of movements.

Irrespective of the nature of the corals, there was no discernible difference in behaviours, as the artificial corals still served their principal function of providing protection to the fish. The study also investigated the tendencies of coral larvae when choosing settling grounds before rooting themselves with an exoskeleton. There was, again, no noticeable change, meaning that the 3D printed models could act as suitable placeholders for some nursery-grown corals. The successes of the biodegradable filaments are also testament to the fact that human intervention can be carried out without disrupting the reef in the future. As the corn starch models attract more fish and instigate higher growth rates, they themselves degrade, maintaining the natural aesthetic and the integrity of the reefs.

Attempts to mitigate coral bleaching have also taken the form of coral probiotics: chemicals that promote the growth of a bacterial culture. This field of study strives to build resilience in corals under thermal stress by providing free-radical scavenging bacteria with antioxidant properties, which would, in theory, minimise bleaching [24]. Several hundred species of bacteria were collected from active reefs, from which desirable genes were identified.

Figure 8. An example of a 3D printed reef made in Hong Kong [23]

Selection of appropriate bacterial strains was carried out by measuring antioxidising properties using 2,2-diphenyl-1-picrylhydrazyl (DPPH, a free radical), which is reduced in a successful test, resulting in a colour change from violet to colourless. Another important consideration is the transmissibility of the probiotic across generations of coral larvae, which is a necessity in ensuring the longevity of the probiotic [24]. Although some experimental data has provided good candidates for the probiotic, the concept is still extremely new, meaning that further testing to provide conclusive results is yet to happen. Nevertheless, the objective of regenerating and protecting corals should be to sustain them until we are able to reduce the concentration of carbon dioxide in the atmosphere, hence why the probiotic alone will not suffice.

Chapter 3: Integrating Mineralisation as a Solution

Alternatively, mineralised carbonates and bicarbonates could be used for coral proliferation, given that calcification is reliant on such species. Currently, it has not been conclusively shown that calcification is driven solely by either of the two ions [25] [26], though there is a clear positive correlation between coral growth and [DIC] (dissolved inorganic carbon: CO3^2- and HCO3-). DIC also includes CO2, but its acidity when in the aqueous phase leaves it undesirable due to the issues surrounding ocean acidification discussed before. This means that direct deposition of captured carbon dioxide would be counterproductive, as it exacerbates pre-existing problems. As such, the employment of mineralisation as a source of DIC feedstock, as opposed to purely DAC, could yield accelerated calcification. [Ca2+] is another determinant of calcification rates; however, calcium ion concentration is fairly uniform and high across the ocean, leaving it not so relevant in the context of coral regeneration [27].

By feeding solid carbonates to coral ecosystems at a controlled rate, reefs that would normally take years to regenerate through coral nurseries would take considerably less time. The combination of support reefs being brought in and optimal calcification rates streamlines the overall rejuvenation process and speeds it up. Figure 9 highlights how calcification increases with [CO3^2-] up to around 650 µmol kg-1, yet Figure 10 shows that [CO3^2-] tends to be around 205 µmol kg-1. This justifies the concept of pumping in mineralised carbonates to achieve greater calcification [28]. Although the value was calculated from a sample from the Gulf of Mexico, the variation in carbonate concentration in more coral-abundant areas (mainly south-eastern Asia) will not be large enough to create a noticeable difference.

Figure 9. Graphs illustrating how the two main carbon-based calcifying substrates' concentrations affect calcification rates, with the black triangles and white dots referring to calcification under dark and light conditions respectively [27]

Enhancement of this additive supply system stems from analysing the diffusion coefficient during the mineralisation and dissolution (into water bodies) stages. Calculations of the diffusion coefficient vary depending on the nature of the media involved, though are often oriented around Fick's laws of diffusion. For porous media, microstructures must be considered for more accurate behaviour predictions when carbon dioxide is exposed to the carbonating agent. The effective diffusion coefficient, De, which considers such structural layouts, is given by the formula [29]:

De = (ε × δ / τ) × Dw

Where τ is the tortuosity of the media, δ is its constrictivity, ε is the porosity, and Dw is the self-diffusion coefficient (the coefficient when the chemical potential gradient within a medium is 0). τ is a parameter that refers to the curvature of the flow paths fluids travel through in porous media, often measured using mercury injection capillary pressure analysis (MICP). ε is a ratio between the porous volume and the total volume of the media being analysed and is also measured using MICP methods [30]. δ describes the retarding effect experienced by fluid particles traversing porous media due to the narrowing of pores in some sections of the media.
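As an illustration of how these parameters trade off, the short sketch below computes De directly from the relation above for two hypothetical pore structures. All numerical values are illustrative assumptions chosen for demonstration, not data taken from [29] or [30]:

# Illustrative calculation of the effective diffusion coefficient De = (porosity * constrictivity / tortuosity) * Dw.
# All parameter values are hypothetical and chosen only to show the sensitivity to pore structure.

def effective_diffusivity(porosity, constrictivity, tortuosity, d_self):
    """Return De for a porous medium given its structural parameters and the self-diffusion coefficient Dw."""
    return (porosity * constrictivity / tortuosity) * d_self

D_W = 1.9e-9  # approximate self-diffusion coefficient of CO2 in water, m^2/s (order-of-magnitude literature value)

# A relatively open, basalt-like pore structure versus a tighter, more tortuous one (hypothetical parameters).
open_medium = effective_diffusivity(porosity=0.20, constrictivity=0.8, tortuosity=2.0, d_self=D_W)
tight_medium = effective_diffusivity(porosity=0.05, constrictivity=0.5, tortuosity=5.0, d_self=D_W)

print(f"Open pore structure:  De = {open_medium:.2e} m^2/s")
print(f"Tight pore structure: De = {tight_medium:.2e} m^2/s")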

Therefore, selection of a suitable carbonating medium must encompass microstructures and an overall meta-structure that are favourable in maximising the effective diffusion coefficient, but the medium must also be abundant and accessible enough for sustainable use. This is especially evident in the case of the mineral wollastonite, which mineralises far quicker than almost any other ultramafic mineral; however, because wollastonite is only available in small veins, it would not be viable long-term [31].

Figure 10. A table of various carbonate chemistry values calculated from the analysis of a sample of seawater from the Gulf of Mexico [28]

When weighing up all these factors, one of the clearest ideal minerals to utilise is basalt. Foremost, basalt's composition contains many mafic accessory minerals, such as plagioclase, olivine, pyroxene, hypersthene, amphibolite, and orthoclase [32]. Basalt's internal structure allows for tremendous mineralisation capacities, particularly its porosity, which tends to range from 0-25%, and its high permeability range of 10^-14 to 10^-9 m^2, meaning it allows a reasonable volume of CO2-bearing fluid to permeate through its surfaces, and has a high pore coverage for the fluid to mineralise on [33]. Making up over 90% of volcanic rock types, basalt is in great surplus and can be found anywhere from surface level to around 5 km deep in the Earth's crust [33], emphasising its suitability in a mineralisation system due to its accessibility. Furthermore, basalt's nontoxic nature ensures that its deployment into coral reefs will not introduce a biohazard.

By the creation of a basalt ex-situ mineralisation loop, mineralised carbon dioxide would be deposited at coral sites to boost [DIC], amplifying local calcification rates and overall carbon storage potentials. Although ex-situ mineralisation tends to be more costly due to the maintenance of high temperatures and pressures, these conditions in turn increase the effective diffusion coefficient in the minerals, making the process more time efficient [34]. On the other hand, basalt that has already mineralised carbon dioxide could be extracted and used in the same way in the ocean, providing a method that takes advantage of the lower costs of in-situ mineralisation but may be less time efficient.

Conclusion

In response to the title question initially proposed, carbon mineralisation is not the key to tackling the climate crisis: it would not suffice as a standalone solution. Instead, carbonation systems act alongside direct air capture cycles to achieve carbon capture and storage, creating a certain dependency on air capture innovation for mineralisation success.

Mineralisation presents itself as a key bridging point between how humanity deals with the climate crisis currently, and how it will be solved in the future. Further, geological storage methods are in a state of dormancy, with a great potential to accelerate CO2 management. In its state as of now, mineralisation simply has not scaled up enough to make a significant impact on atmospheric carbon dioxide concentrations.

Mineralisation has created the prospect of a relatively simple and efficient method of coral reef regeneration, mitigating and potentially reversing one of the many adverse effects imposed by the climate crisis. Of course, mineralisation purely in corals would be ineffective, as they sequester around 70-90 megatons of CO2, minute in comparison with current carbon dioxide levels in the atmosphere [35]. This is, nevertheless, one extremely useful application of carbonation, which justifies its exploration to find more beneficial use cases for the chemistry involved.


Bibliography

[1] Zevenhoven, R., & Fagerlund, J. (2010). Mineralisation of carbon dioxide. Abo Akademi.

[2] C2ES. (n.d.). Climate Solutions, Technology Solutions, Carbon Capture. Retrieved from Center For Climate and Energy Solutions: https://www.c2es.org/content/carbon-capture/ [accessed 1 April 2023]

[3] GeoEngineering.global. (n.d.). Advancing the Mitigation of Climate Change and Global Warming through Geoengineering Education and Research. Retrieved from geoengineering.global: https://geoengineering.global/direct-air-capture/ [accessed 13 February 2023]

[4] Budinis, S. (2022, September). Direct Air Capture. International Energy Agency. Retrieved from iea.org: https://www.iea.org/reports/direct-air-capture [accessed 13 February 2023]

[5] Li, J., Hitch, M., Power, I., & Pan, Y. (2018). Integrated Mineral Carbonation of Ultramafic Mine Deposits. MDPI.

[6] CarbFix. (n.d.). We turn CO2 into Stone. Retrieved from carbfix.com: https://www.carbfix.com/ [accessed 15 February 2023]

[7] Badaoui, A., Badaoui, M., & Kharchi, F. (2013). Probabilistic Analysis of Reinforced Concrete Carbonation Depth. scirp.org.

[8] Gadikota, G. (2016). Ex Situ Aqueous Mineral Carbonation. frontiersin.org.

[9] USGS. (2019, March). Making Minerals - How Growing Rocks can help Reduce Carbon Emissions. Retrieved from usgs.gov: https://www.usgs.gov/news/featured-story/making-minerals-how-growing-rocks-can-help-reduce-carbon-emissions#:~:text=Carbon%20mineralization%20is%20the%20process,escape%20back%20to%20the%20atmosphere [accessed 13 February 2023]

[10] Snæbjörnsdóttir, S., Gislason, S., Galeczka, I., & Oelkers, E. (2018). Reaction path modelling of insitu mineralisation of CO2 at the CarbFix site at Hellisheidi, SW-Iceland. Geochimica et Cosmochimica Acta Volume 220, 348-366.

[11] Wood, K., & Burke, L. (2021, December 13). Decoding Coral Reefs: Exploring Their Status, Risks and Ensuring Their Future. Retrieved from WorldResourcesInstitute.org: https://www.wri.org/insights/decoding-coral-reefs [accessed 4 April 2023]

[12] Royal Museums Greenwich. (n.d.). What Exactly is Coral? Retrieved from rmg.co.uk: https://www.rmg.co.uk/stories/topics/what-coral [accessed 20 February 2023]

[13] Scott, C. (2021, November 10). What are Sclerites? Retrieved from noaa.gov: https://oceanexplorer.noaa.gov/facts/sclerites.html [accessed 1 April 2023]

[14] NOAA. (n.d.). The Coral and the Algae. Retrieved from oceantoday.noaa.gov: https://oceantoday.noaa.gov/fullmoon-coralandalgae/welcome.html [accessed 14 February 2023]

[15] Mollica, N., Guo, W., Cohen, A., Huang, K.-F., Foster, G., Donald, H., & Solow, A. (2018). Ocean acidification affects coral growth by reducing skeletal density. PNAS.

[16] Farnsworth, R. (2022, September). What Is the Difference Between Stony & Soft Corals? Retrieved from bulkreefsupply.com: https://www.bulkreefsupply.com/content/post/what-is-the-difference-between-soft-and-stony-corals#:~:text=Stony%20corals%20uptake%20calcium%20and,and%20carbonate%20from%20the%20water [accessed 16 March 2023]

[17] Khorobrykh, S., Havurinne, V., Mattila, H., & Tyystjärvi, E. (2022). Oxygen and ROS in Photosynthesis. NCBI.

[18] Gaspar, T., Kevers, C., Franck, T., Bisbis, B., Billard, J.-P., Huault, C., . . . Greppin, H. (1995). PARADOXICAL RESULTS IN THE ANALYSIS OF HYPERHYDRIC TISSUES CONSIDERED AS BEING UNDER STRESS: QUESTIONS FOR A DEBATE. Bulgarian Journal of Plant Physiology, 80-97.

[19] Wilson, M. (2020, September 1). Disentangling influences on coral health. Retrieved from Physics Today: https://pubs.aip.org/physicstoday/Online/22381/Disentangling-influences-on-coral-health [accessed 28 February 2023]

[20] Coral Reef CPR. (2016). CORAL GARDENING APPROACH. Retrieved from coralreefcpr.org: http://www.coralreefcpr.org/coral-nurseries.html [accessed 28 February 2023]

[21] Aloysius, S. L. (2020, March 16). Artificial Corals: Improving the Resilience of Coral Reefs (part II). Retrieved from earth.org: https://earth.org/artificial-corals-improving-the-resilience-of-reefs-part-ii/ [accessed 31 March 2023]

[22] Ruhl, E., & Dixson, D. (2016). 3D printed objects do not impact the behavior of a coral-associated damselfish or survival of a settling stony coral. NCBI.

[23] Rakshit, D. (2020, August 31). Marine Scientists In Hong Kong Are Rebuilding Coral Reefs With 3D-Printed Tiles. Retrieved from the Swaddle: https://theswaddle.com/marine-scientists-in-hong-kong-are-rebuilding-coral-reefs-with-3d-printed-tiles/ [accessed 31 March 2023]

[24] Dungan, A., Bulach, D., Lin, H., Oppen, M., & Blackall, L. (2020). Development of a free radical scavenging probiotic to mitigate coral bleaching. bioRxiv.

[25] Jokiel, P. (2013). Coral reef calcification: carbonate, bicarbonate and proton flux under conditions of increasing ocean acidification. National Center for Biotechnology Information.

[26] Salleh, A. (2012, December 19). Chemistry may save some coral from acidity. Retrieved from ABC Science: https://www.abc.net.au/science/articles/2012/12/19/3657112.htm [accessed 2/6/23]

[27] Bach, L. (2015). Reconsidering the role of carbonate ion concentration in calcification by marine organisms. BioGeoSciences.

[28] Sharp, J., & Byrne, R. (2019). Carbonate ion concentrations in seawater: Spectrophotometric determination at ambient temperatures and evaluation of propagated calculation uncertainties. Marine Chemistry, 70-80.

[29] Li, C., Zheng, Z., Liu, X., Chen, T., Tian, W., Wang, L., Liu, C. L. (2013). The Diffusion of Tc-99 in Beishan Granite-Temperature Effect. World Journal of Nuclear Science and Technology.

[30] Montegrossi, G., Cantucci, B., Piochi, M., Fusi, L., Misnan, M., Rashidi, M., Hashim, N. (2023). CO2 Reaction-Diffusion Experiments in Shales and Carbonates. Minerals.


[31] Kelemen, P., McQueen, N., Wilcox, J., Renforth, P., Dipple, G., & Vankeuren, A. (2020). Engineered carbon mineralization in ultramafic rocks for CO2 removal from air: Review and new insights. Chemical Geology.

[32] Luhmann, A., Tutolo, B., Bagley, B., Mildner, D., Seyfried, W., & Saar, M. (2017). Permeability, porosity, and mineral surface area changes in basalt cores induced by reactive transport of CO2-rich brine. AGU Publications.

[33] Jasim, A., Hemmings, B., Mayer, K., & Scheu, B. (2018). Groundwater flow and volcanic unrest. Springer Link.

[34] Azin, R., Mahmoudy, M., Raad, S. M., & Osfouri, S. (2013). Measurement and modeling of CO2 diffusion coefficient in Saline Aquifer at reservoir conditions. Central European Journal of Engineering.

[35] Allemand, D. (n.d.). Coral Reefs and Climate Change. ocean-climate.org.

Buis, A. (2019, October 9). The Atmosphere: Getting a Handle on Carbon Dioxide. Retrieved from nasa.gov: https://climate.nasa.gov/news/2915/the-atmosphere-getting-a-handle-on-carbon-dioxide/ [accessed 13 February 2023]

Hills, C., Tripathi, N., & Carey, P. (2020). Mineralization Technology for Carbon Capture, Utilization, and Storage. frontiersin.

Mundy, B. (2022, October 19). Converting carbon dioxide to solid minerals underground for more stable storage. Retrieved from phys.org: https://phys.org/news/2022-10-carbon-dioxide-solid-minerals-underground.html [accessed 13 February 2023]


Annabel Room

MEDICINE

Annabel Room’s interest in the continuing shift towards personalised medicine was the inspiration for her ERP. Her project evaluates the potential range of benefits and drawbacks of using personalised medicine in the treatment of breast cancer. Topics discussed in the project include cellular based immunotherapy, pharmacogenomics and cost feasibility. Annabel Room is currently studying Biology, Chemistry and Maths at A Level and hopes to study Medicine at university.

Is Personalised Medicine the Future for the Treatment of Breast Cancer in the UK?

The premise of personalised medicine is tailoring the treatment towards the individual, rather than the overall population. Breast cancer is the most common cancer in women in the UK and leads to an estimated 11,500 deaths in women per year in the UK alone (Cancer Research UK, n.d.). The current most common treatment methods are surgery (the tumour is surgically removed); chemotherapy (drugs are used to shrink the tumour) and radiotherapy (where high doses of radiation are given to the patient to shrink and kill cancerous cells). These treatment methods are often used in conjunction with each other for the best outcome for the patient. For example, a patient may be given chemotherapy to reduce the size of the tumour, then undergo surgery to remove the tumour and then be given radiation therapy to kill the remaining cancerous cells. However, the high number of deaths from breast cancer clearly shows that current treatment methods are not succeeding and a change needs to be made. As every person’s cancer has a different genetic makeup, it only follows that the treatment should be specific to that patient too. Personalised medicine in the treatment of breast cancer shows promise in many ways, including cellular based immunotherapy, active surveillance, pharmacogenomics and genetic editing.

Personalised medicine will enable more successful treatments in patients with breast cancer by using cellular based immunotherapy. Immunotherapy has had greater success in patients with cancers that have more mutations, for example blood cancers including leukaemia, because tumours with more mutations mean that there is a higher likelihood that the immune system will see them as foreign and attack them (Ledford, 2019). However, a clinical trial that was carried out by Dr Rosenburg at the National Cancer Institute used cellular based immunotherapy to target the tumour cells in a patient with hormone receptor positive (HR-positive) metastatic breast cancer with success. In the trial, the patient’s tumour DNA was sequenced to establish the mutations (of which 62 were found) and tumour-infiltrating lymphocytes (TILs) were then used to target the mutations (Zacharakis et al. 2019). The TILs (which contain T and B cells) that have the correct antigens to attack the cancer cells were harvested from the patient’s body and then grown in culture with interleukin (IL)-2 (Zacharakis et al. 2019). IL-2 is a protein that increases the division of TILs, meaning that the TILs can divide and multiply rapidly, creating a larger number of them in a much shorter space of time. Large numbers of these lymphocytes were then given to the patient with the aim that they would attack the tumour (Wang et al. 2021), and in this clinical trial, this was the case – 6 weeks after the TILs were given to the patient, the tumour had reduced in size by 51% (Zacharakis et al. 2019). The patient was also given pembrolizumab during the trial, which is a checkpoint inhibitor (Zacharakis et al. 2019). Immune checkpoints prevent the T cells from killing normal, healthy cells by keeping them inactive, but cancerous cells also have these immune checkpoints and so the T cells remain inactive and do not destroy the cancerous cells. Checkpoint inhibitors bind to the checkpoints so that the T cells are not inactivated and are able to destroy the cancerous cells (National Cancer Institute, 2022). The results of this clinical trial were published in 2018 and the patient’s cancer has still not returned, showing the success of this treatment. The use of DNA sequencing and analysing the tumour-infiltrating lymphocytes shows how highly personalised this successful treatment was and that the current “one size fits all approach” for breast cancer treatments is not the way to move forward. In November 2022, the NHS announced that pembrolizumab in combination with chemotherapy would be available on the NHS for women with triple negative breast cancer (NHS, 2022). However, it is not going to be offered in combination with cellular based immunotherapy (extracting TILs from patients, growing them in culture and then returning them to the patient) and so, although there has been a clear move towards more cancer treatments for patients with breast cancer, there are still improvements that need to be made to make this treatment option as personalised and tailored towards the individual as possible. The Cancer Genome Atlas (TCGA) (Collins, 2010) was a project carried out by the National Human Genome Research Institute and the National Cancer Institute. Its aim was to sequence the DNA of many different cancers, including those of breast cancer. These


sequenced breast cancers can be compared to those without breast cancer to identify the mutations, and thus decide on the most appropriate course of treatment. Therefore, cellular based immunotherapy using a checkpoint inhibitor is an example of how personalised medicine can be used in the treatment of breast cancer.

Personalised medicine also shows use at the beginning stages of breast cancer treatment. Ductal carcinoma in situ (DCIS) is the earliest form of breast cancer and is when ‘the cells lining the milk ducts turn malignant (cancerous) but stay in place (in situ)’ (Ductal Carcinoma in Situ, 2022). At this point, the cancer is not invasive, but does have the potential to turn into invasive ductal carcinoma (IDC). However, it is estimated that only about 12% of cases of DCIS turn into invasive breast cancer if not treated (Wilson et al., 2022). Despite this, almost all cases of DCIS are treated, usually with either lumpectomy (surgery to remove the abnormal tissue) or mastectomy (surgery to remove a breast), and these are both invasive procedures. The surgery is normally followed by radiotherapy (National Cancer Institute, 2015). Not only are these procedures costly for the NHS and a stressful experience for the patient, but they are also unnecessary if the DCIS would never have progressed to IDC. Therefore, a personalised approach can be taken to determine those who need treatment and those who do not. Individual cases of DCIS can be grouped into high, intermediate and low risk. Low risk DCIS grows much slower than intermediate and high-risk DCIS and is much less likely to develop into IDC. Therefore, these low-risk cases of DCIS are the ones which are being overtreated. Genomic testing can be used to group cases of DCIS into low, intermediate and high risk, and an example of a genomic test which has been developed to do this is the Oncotype DX Breast DCIS Score Test, which is regularly offered to those who have been diagnosed with DCIS. Currently, the Oncotype DX Breast DCIS Score Test is used by doctors to determine the best course of treatment for the patient alongside the use of the Oncotype DX Breast Recurrence Score Test (which is used to determine the probability of the DCIS returning within 10 years after treatment) (DePolo, n.d.). The use of these tests is evidence of how personalised medicine has been incorporated into breast cancer treatment on a national scale (Carlson, 2006), but this personalised approach could be taken much further. Those who are classified as having ‘low risk’ DCIS could be offered active surveillance as an alternative treatment pathway to the highly invasive ones that are currently offered (Fan et al., 2020). Active surveillance (AS) is defined as ‘A treatment plan that involves closely watching a patient's condition but not giving any treatment unless there are changes in test results that show the condition is getting worse.’ (Cancer.gov, 2011). AS is already used as a treatment plan for those with low risk and localised prostate cancer (Magnani et al., 2021). Therefore, the NHS clearly has the infrastructure needed to offer AS as a treatment plan, and this could be replicated for low risk DCIS patients. A clinical trial called the COMET trial is currently testing this exact theory – whether those with low risk DCIS can be monitored using AS as opposed to having to undergo invasive procedures. The clinical trial is estimated to finish in July 2023 (Clinicaltrials.gov, 2020). Another significant benefit that AS would have is surrounding cost. Studies have shown that when AS is used in the treatment of prostate cancer, it is cheaper than the traditional route of surgery and radiotherapy. A study carried out by J. Magnani et al.
with a sample size of 3433, found that after two years, AS cost $2.97/d compared to surgery and radiation which combined cost $15.01/d and at five years after diagnosis, AS remained cheaper (Magnani et al., 2021). Although these results are promising, the study was carried out in the US where the healthcare system is significantly different. Therefore, similar clinical trials will have to be carried out in the UK on patients with DCIS to determine a true figure for the cost benefit ratio. Therefore, personalised medicine has the potential to reduce the number of patients unnecessarily undergoing radiotherapy treatments and surgery and reduce costs by offering an alternative treatment option of active surveillance.

Personalised medicine also shows promise in the treatment of breast cancer in terms of pharmacogenomics. Tamoxifen is a hormonal therapy that is a common treatment for oestrogen receptor positive (ER+) breast cancer patients, but mutations in the CYP2D6 gene can affect the

success of this treatment (Dean, 2014). Some patients have a mutation in the CYP2D6 gene which means that the CYP2D6 enzyme does not metabolise the tamoxifen effectively, rendering the treatment significantly less effective, and the side effects faced (commonly menopausal symptoms) would not outweigh the benefit of the treatment (Dean, 2014). A clinical trial carried out by M. Goetz et al. supports this theory. In the trial, 3901 women were split into groups with the aim of investigating the effectiveness of switching tamoxifen to anastrozole (another hormonal cancer treatment). The results showed that the switch to anastrozole was significantly more effective in those with mutations in the CYP2D6 gene, leading to the conclusion that the tamoxifen treatment is much less effective in those with mutations in the CYP2D6 gene (Goetz et al., 2012). A limitation of this trial is that all of the women were post-menopausal. Tamoxifen is effectively used in those who are both pre- and post-menopausal and so this study does not provide data for a large subset of the population that receive this treatment. However, ‘The risk of developing breast cancer increases with age. The condition is most common in women over age 50 who had been through the menopause. About 8 out of 10 cases of breast cancer happen in women over 50’ (NHS, 2019). Therefore, despite the study not providing information for those with breast cancer who are premenopausal, it does provide information which supports the majority of cases. In terms of personalised medicine, if the patient’s DNA was sequenced prior to prescribing tamoxifen, the doctor would then know if the patient had the mutation in the CYP2D6 gene and would be able to recommend an alternative course of treatment that would likely be more effective and would also save crucial time and money on wasted resources.
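A minimal sketch of the kind of decision support such pre-prescription screening could enable is shown below. The metaboliser categories are the standard pharmacogenomic ones, but the mapping to suggested treatments is a simplified illustration of the reasoning above, not clinical guidance or an existing NHS tool:

# Illustrative (hypothetical) mapping from CYP2D6 metaboliser status to an endocrine therapy suggestion.
# Simplified for demonstration; real prescribing decisions involve many more factors.

def suggest_endocrine_therapy(cyp2d6_phenotype: str) -> str:
    """Suggest a therapy based on CYP2D6 status: poor metabolisers activate tamoxifen less effectively."""
    reduced_function = {"poor metaboliser", "intermediate metaboliser"}
    if cyp2d6_phenotype.lower() in reduced_function:
        return "consider an alternative such as anastrozole (tamoxifen likely to be poorly activated)"
    return "tamoxifen remains a reasonable option"

print(suggest_endocrine_therapy("Poor metaboliser"))
print(suggest_endocrine_therapy("Normal metaboliser"))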

Personalised medicine could also be used to treat breast cancer in the form of genetic editing. CRISPR/Cas9 is a tool that can be used for genetic engineering and, in the case of breast cancer, can target mutations in oncogenes or tumour suppressor genes. Mutations in either oncogenes or tumour suppressor genes can cause breast cancer, as both these genes affect the growth and division of cells. The BRCA1 and BRCA2 genes are both tumour suppressor genes (Collins, 2010), and so CRISPR/Cas9 could be used to ‘knock out’ the mutations in these genes (Yang et al., 2019), reducing the rate of cell division and the risk of a malignant tumour forming. CRISPR stands for ‘clustered regularly interspaced short palindromic repeats’, and the CRISPR/Cas9 system edits the mutations in a gene. Firstly, the patient’s DNA is sequenced to find the mutations that need to be targeted. A guide RNA is made which is complementary to the mutated sequence being targeted (Wong et al., 2015), and this allows the Cas9 protein to be directed to the desired part of the gene that needs editing. The Cas9 protein cuts out the mutation, and this can either be deleted or a new section of DNA can be inserted (Redman et al., 2016) – which in this case would be the ‘knocking out’ of mutations in tumour suppressor genes. A study carried out by Meng Yang et al. trialled the effectiveness of using CRISPR/Cas9 in the treatment of triple negative breast cancer. The CXCR4 and CXCR7 genes play a vital part in the growth of breast cancer tumours; in the study, the CXCR4 and CXCR7 genes were ‘knocked out’ of cells in culture using CRISPR/Cas9, and this seemed to reduce the growth of the tumour (Yang et al., 2019). Using CRISPR/Cas9 is a highly personalised form of treatment because it requires sequencing the patient’s genome to find the mutations and then editing these mutations, which are likely to be different in every person.
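As a toy illustration of the guide-design step described above, the following Python sketch scans a DNA string for the ‘NGG’ PAM sites used by the commonly described Cas9 and reports the 20-base protospacer next to each one. The DNA sequence is invented, and real design tools also score off-target risk, chromatin context and other factors.

```python
# Simplified sketch of one step in guide-RNA design: find NGG PAM sites and the
# 20-base protospacer (the sequence the guide RNA would match) upstream of each.
def find_guide_candidates(dna: str, guide_length: int = 20):
    dna = dna.upper()
    candidates = []
    for i in range(guide_length, len(dna) - 2):
        if dna[i + 1 : i + 3] == "GG":            # PAM = any base followed by GG
            protospacer = dna[i - guide_length : i]
            candidates.append((protospacer, i))    # guide target and PAM position
    return candidates

example = "ATGCGTACCGTTAGCTAGGACTTCCGATCGATCGGTACCTAGGCATGCAA"  # made-up sequence
for guide, pos in find_guide_candidates(example):
    print(pos, guide)
```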

Although using CRISPR/Cas9 is promising in the treatment of breast cancer, there are some issues that cannot be overlooked. The main issue surrounding CRISPR/Cas9 usage in human gene editing is the potential for off-target effects (Karn et al., 2022). Off-targeting in terms of CRISPR/Cas9 usage refers to the ‘effects that can occur when a drug binds to targets (proteins or other molecules in the body) other than those for which the drug was meant to bind. This can lead to unexpected side effects that may be harmful’ (National Cancer Institute, n.d.). Protospacer adjacent motif (PAM) sequences occur approximately every 42 bases in human DNA (Integrated DNA Technologies, n.d.). A PAM site is a three-base sequence that ends in GG, and the strand that contains the PAM sequence is known as the non-targeting strand. These PAM sites are used to target the mutated section of DNA. The PAM site next to the mutation is identified, and a complementary strand of guide RNA (gRNA), 20 nucleotides long, is made to match the section of DNA next to the PAM sequence (Gleditzsch et al., 2019). This way, the Cas9 protein is directed to the correct part of the gene (the section of DNA that is mutated and needs replacing). However, ‘More than three mismatches between target sequences and 20 nucleotides of gRNA can result in off-target effects’ (Naeem et al., 2020), because the Cas9 protein will replace the wrong section of DNA, leading to further mutations. If this occurs in the tumour suppressor gene p53 or (in the case of breast cancer) the BRCA1 or BRCA2 genes, then there is a possible increased risk of cancer, for the reasons explained previously surrounding mutations in tumour suppressor genes. If this were to happen, the use of CRISPR/Cas9 as a form of personalised medicine in treating breast cancer would have the opposite effect to what was intended – the risk of cancer would increase as opposed to being reduced. Despite the potentially fatal effects that off-targeting can cause, there are many ways that the chance of off-targeting occurring can be reduced. For instance, by increasing the content of cytosine and guanine bases in the gRNA strand, the stability of the guide-target pairing will increase and so the potential for off-targeting will decrease (Naeem et al., 2020). Increasing the percentage of cytosine and guanine bases in the gRNA strand increases this stability because three hydrogen bonds form between cytosine and guanine bases, whereas only two hydrogen bonds form between adenine and uracil bases, and the greater the number of bonds, the more stable the pairing. However, despite these measures, off-targeting can still occur. Many of the drugs that are routinely used to treat cancer patients have potential risks, some of which can be fatal. For example, many chemotherapy drugs are cytotoxic (they kill all cells, including healthy cells). Many of these cytotoxic chemotherapy drugs can cause mutations which can lead to further malignant tumours, increasing the risk of a second tumour – arguably a very similar risk to that posed by the off-target effects of CRISPR/Cas9 (Nature Medicine, 2018). Beneficence is one of the pillars of medical ethics and so, if chemotherapy is viewed to be the best possible course of action for the patient, the judgement is made that the potential benefits it would have override the risk of malignancies (Nature Medicine, 2018). The same logic can be applied to the off-target effects of using CRISPR/Cas9 to ‘knock out’ mutations in tumour suppressor genes – if the benefit outweighs the risk of the treatment, then it is likely to be in the best interest of the patient. Therefore, despite the limitations of using CRISPR/Cas9 in the treatment of breast cancer, it can still be viewed as a viable future treatment option that revolves around personalised medicine.
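The two practical levers mentioned above – the GC content of a guide and the number of guide–target mismatches – can be expressed in a few lines of Python. This is purely illustrative, with made-up sequences; real off-target prediction relies on far richer models.

```python
# Illustrative checks related to the off-target discussion above:
# (1) GC content of a candidate guide RNA (a higher G/C fraction gives a more
#     stable guide-target duplex); (2) mismatches against a potential off-target site.
def gc_content(seq: str) -> float:
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def mismatches(guide: str, site: str) -> int:
    return sum(1 for a, b in zip(guide.upper(), site.upper()) if a != b)

guide      = "GACCGGTACGCTTAGGCCAT"   # 20-nt guide (made up)
off_target = "GACCGGTACGATTAGGACAG"   # similar genomic site (made up)

print(f"GC content: {gc_content(guide):.0%}")
print(f"Mismatches vs off-target site: {mismatches(guide, off_target)}")
```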

Personalised medicine provides a very promising pathway for the future treatment of breast cancer, but the ethical issues surrounding this cannot be overlooked. A large problem faced by personalised medicine is that of data protection and the lack of public trust surrounding this (Brothers and Rothstein, 2015). A person’s genomic sequence is a highly personal piece of information and so needs to be securely stored and protected. The first issue that would need to be addressed is the storage and processing of such highly personal data. The data would need to be protected both in motion and at rest, each of which presents challenges and vulnerabilities. However, the NHS already stores and processes vast amounts of personal data, most of which is done without any problems, and so it has the capability to do the same with a patient’s genomic sequence. For example, methods such as storing the data in an encrypted form and using authorisation to control access to it, alongside laws, could be used to solve this issue of data protection (Understanding patient data, n.d.). Therefore, there shouldn’t be a technological barrier to creating a secure system for handling this kind of data. Arguably, the bigger issue is the lack of public trust in the protection of personal data. In 2017, the WannaCry cyber-attack directly affected NHS computers, putting patients’ personal data at risk. This highlighted the vulnerability of healthcare systems such as the NHS and reinforced a lack of patient trust. This lack of trust would pose an issue for personalised medicine because, if patients are not willing to give up personal data such as their DNA sequence despite all the protection measures that are put in place, then it will be impossible to tailor the treatment to the individual’s condition and needs. Since autonomy is one of the pillars of medical ethics, all patients (who are deemed as having capacity) are, of course, entitled to their own choice, but when faced with a scenario in which a personalised approach to their treatment (such as using cellular based immunotherapy to treat breast cancer) will have a significantly higher chance of success, many are likely to prioritise this over a very small chance of a potential data breach. Therefore, the benefits of personalised medicine are likely to overcome the lack of patient trust in storing personal data.

Cost is another issue that would have to be addressed (Brothers and Rothstein, 2015). Treatments involving personalised medicine are likely to have high initial costs, and those who cannot afford this will not be able to access it as easily. However, if personalised medicine results in the most effective treatment being chosen first, this higher cost may be counterbalanced by the money saved on ineffective treatments that would normally be tried first – for instance, prescribing tamoxifen to patients who have a mutation in the CYP2D6 gene, as previously explained. In addition, the cost of genome sequencing has rapidly decreased over the past 20 years. In 2001, when the Human Genome Project was taking place, the approximate cost of sequencing a human genome was $100,000,000, whereas by 2015 the cost had fallen to $1500 (National Human Genome Research Institute, 2021). In comparison, it is estimated that the NHS spends £1.4 billion each year on chemotherapy (NHS England, 2016), which is far more than DNA sequencing would cost. Furthermore, the cost of DNA sequencing will continue to fall, especially once all the infrastructure has been set up to carry out DNA sequencing of breast cancer patients on a wide scale, making this even more cost effective. Therefore, the issue of cost is likely to become increasingly less significant as the technology develops.

Overall, personalised medicine for the treatment of breast cancer shows promise in many areas and has the potential to transform many different stages of breast cancer treatment – from using whole genome sequencing to find the mutations in the patient’s DNA and then targeting these using methods such as CRISPR/Cas9, to being able to prescribe the best possible treatment based on the patient’s genetic makeup. Like any developing area of medicine, further testing and clinical trials would need to be carried out before personalised medicine is likely to become a reality for treating breast cancer patients. Whole genome sequencing to decide on the best treatment option for the patient (pharmacogenomics) is likely to be the first step towards making breast cancer treatment more personalised, as the risks of DNA sequencing are limited and the benefits are likely to be high. However, cellular based immunotherapy and CRISPR/Cas9 are likely to save many lives, and so it is vital that clinical trials and studies continue, to speed up the process of implementing them on first a local and then hopefully a national level. Although treatments that use personalised medicine do carry risks, these are likely to be outweighed by the benefits (the main one being that more lives would be saved). In terms of feasibility, the cost of DNA sequencing is rapidly decreasing, and since the principle of personalised medicine is tailoring the treatment towards the patient, the method of treatment chosen should work, saving valuable time and money on what would otherwise be wasted resources if a less effective treatment were chosen initially due to a lack of information about the patient’s genetic makeup. In addition, going through cancer treatment can be a very distressing and worrying experience for both the patient and those around them and, hopefully, by choosing the most effective treatment for that patient the first time, this could be reduced. Although treatments that are based around personalised medicine are clearly needed for breast cancer treatment due to the current high levels of deaths, it is important to recognise that if a traditional treatment method is likely to be the most effective method for that patient, then this should undoubtedly be the chosen route. However, personalised medicine comes into this when deciding which treatment method is the most effective, because the patient’s DNA and/or tumour DNA can be sequenced and this information can be used to choose the best treatment pathway, whether this be a traditional method or a more highly personalised approach such as cellular based immunotherapy. Therefore, the potential that personalised medicine has to revolutionise breast cancer treatment is too great to be ignored.

References:

Brothers, K and Rothstein, M. (2015). Ethical, legal and social implications of incorporating personalized medicine into healthcare. [Online]. National Library of Medicine. Last Updated: 1 November 2015. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4296905/ [Accessed 26 March 2023].

Cancer Research UK. (n.d.). Breast cancer statistics. [Online]. Cancer Research UK. Available at: https://www.cancerresearchuk.org/health-professional/cancer-statistics/statistics-by-cancer-type/bre [Accessed 9 March 2023].

Carlson, B. (2006). Oncotype DX Test Offers Guidance For Women Debating Chemotherapy. [Online]. National Library of Medicine. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3571077/#:~:text=Similar%20efficacy%20and%20substantial [Accessed 25 May 2023].

Collins, F. (2010). The Language of Life: DNA and the Revolution in Personalised Medicine. 2nd ed. United States of America: HarperCollins Publishers. pp.104-108.

Collins, F. (2010). The Language of Life: DNA and the Revolution in Personalised Medicine. 2nd ed. United States of America: HarperCollins Publishers. pp.127-128.

Dean, L. (2014). Tamoxifen Therapy and CYP2D6 Genotype. [Online]. National Library of Medicine. Last Updated: 1 May 2019. Available at: https://www.ncbi.nlm.nih.gov/books/NBK247013/ [Accessed 8 February 2023].

DePolo, J. (n.d.). Oncotype DX Tests. [Online]. breastcancer.org. Available at: https://www.breastcancer.org/screening-testing/oncotype-dx [Accessed 25 May 2023].

Fan, B et al. (2020). Analysis of active surveillance as a treatment modality in ductal carcinoma in situ. [Online]. National Library of Medicine. Last Updated: 26 June 2020. Available at: https://pubmed.ncbi.nlm.nih.gov/31925857/ [Accessed 25 May 2023].

Gleditzsch et al. (2018). PAM identification by CRISPR-Cas effector complexes: diversified mechanisms and structures. [Online]. National Library of Medicine. Last Updated: 18 September 2018. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6546366/ [Accessed 29 March 2023].

Goetz, M. P. et al. (2012). CYP2D6 Metabolism and Patient Outcome in the Austrian Breast and Colorectal Cancer Study Group Trial (ABCSG) 8. [Online]. National Library of Medicine. Last Updated: 15 January 2014. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3548984/ [Accessed 8 February 2023].

Hui, E. (2019). Immune checkpoint inhibitors. [Online]. National Library of Medicine. Last Updated: 4 March 2019. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6400575/ [Accessed 8 February 2023].

Integrated DNA Technologies. (n.d.). What is the average frequency of the CRISPR-Cas9 PAM sequence in the mammalian genome? [Online]. Integrated DNA Technologies. Available at: https://eu.idtdna.com/pages/support/faqs/what-is-the-average-frequency-of-the-crispr-cas9-pamsequen [Accessed 29 March 2023].

Magnani, C. J. et al. (2021). Real-world Evidence to Estimate Prostate Cancer Costs for First-line Treatment or Active Surveillance. [Online]. National Library of Medicine. Last Updated: January 2021. Available at: https://pubmed.ncbi.nlm.nih.gov/33367287/ [Accessed 25 May 2023].

Karn, V. et al. (2022). CRISPR/Cas9 system in breast cancer therapy: advancement, limitations and future scope. [Online]. BMC. Last Updated: 25 July 2022. Available at: https://cancerci.biomedcentral.com/articles/10.1186/s12935-022-02654-3#ref-CR101 [Accessed 4 March 2023].

Ledford, H. (2019). Highly mutated cancers respond better to immune therapy. [Online]. Nature. Last Updated: 14 January 2019. Available at: https://www.nature.com/articles/d41586-019-00143-8 [Accessed 8 February 2023].

Naeem et al. (2020). Latest Developed Strategies to Minimize the Off-Target Effects in CRISPR-Cas-Mediated Genome Editing. [Online]. National Library of Medicine. Last Updated: 2 July 2020. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7407193/ [Accessed 29 March 2023].

National Human Genome Research Institute. (2021). The Cost of Sequencing a Human Genome. [Online]. National Human Genome Research Institute. Last Updated: 1 November 2021. Available at: https://www.genome.gov/about-genomics/fact-sheets/Sequencing-Human-Genome-cost [Accessed 29 March 2023].

National Cancer Institute. (n.d.). active surveillance. [Online]. National Cancer Institute. Available at: https://www.cancer.gov/publications/dictionaries/cancer-terms/def/active-surveillance [Accessed 25 May 2023].

National Cancer Institute. (n.d.). Immune Checkpoint Inhibitors. [Online]. National Cancer Institute. Available at: https://www.cancer.gov/about-cancer/treatment/types/immunotherapy/checkpoint-inhibitors [Accessed 29 March 2023].

National Cancer Institute. (n.d.). interleukin-2. [Online]. National Cancer Institute. Available at: https://www.cancer.gov/publications/dictionaries/cancer-terms/def/interleukin-2 [Accessed 8 February 2023].

National Cancer Institute. (n.d.). off-target effect. [Online]. National Cancer Institute. Available at: https://www.cancer.gov/publications/dictionaries/cancer-terms/def/off-target-effect [Accessed 29 March 2023].

National Cancer Institute. (2015). Surgery Choices for Women with DCIS or Breast Cancer. [Online]. National Cancer Institute. Available at: https://www.cancer.gov/types/breast/surgery-choices [Accessed 25 May 2023].

Nature Medicine. (2018). Keep off-target effects in focus. [Online]. Nature Medicine. Last Updated: 6 August 2018. Available at: https://www.nature.com/articles/s41591-018-0150-3 [Accessed 29 March 2023].


NHS England. (2016). Chemo drug optimisation to improve patient experience of cancer treatment [Online]. NHS England. Last Updated: 23 May 2016. Available at: https://www.england.nhs.uk/2016/05/chemo-drug-optimisation/ [Accessed 29 March 2023].

NHS (2019). Causes - Breast cancer in women. [online] NHS. Available at: https://www.nhs.uk/conditions/breast-cancer/causes/.

NHS. (2022). NHS strikes deal for potentially life-saving breast cancer drug. [Online]. NHS England. Last Updated: 8 November 2022. Available at: https://www.england.nhs.uk/2022/11/nhs-strikesdeal-for-potentially-life-saving-breast-cancer-drug/ [Accessed 8 February 2023].

Redman, M et al. (2016). What is CRISPR/Cas9? [Online]. National Library of Medicine. Last Updated: 8 April 2016. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4975809/ [Accessed 8 February 2023].

Sun, B. (n.d.). Ductal Carcinoma in Situ (DCIS). [Online]. Johns Hopkins Medicine. Available at: https://www.hopkinsmedicine.org/health/conditions-and-diseases/breast-cancer/ductal-carcinoma-in-sit [Accessed 25 May 2023].

Wang, S. et al. (2021). Perspectives of tumor-infiltrating lymphocyte treatment in solid tumors. [Online]. BMC Medicine. Last Updated: 11 June 2021. Available at: https://bmcmedicine.biomedcentral.com/articles/10.1186/s12916-021-02006-4 [Accessed 8 February 2023].

Understanding patient data. (n.d.). How is data kept safe?. [Online]. Understanding patient data. Available at: https://understandingpatientdata.org.uk/how-data-kept-safe [Accessed 29 March 2023].

U.S National Library of Medicine. (2016). Comparing an Operation to Monitoring, With or Without Endocrine Therapy (COMET) Trial For Low Risk DCIS (COMET). [Online]. ClinicalTrials.gov. Last Updated: 24 April 2023. Available at: https://clinicaltrials.gov/ct2/show/NCT02926911 [Accessed 25 May 2023].

Wilson, G. et al. (2022). Ductal Carcinoma in Situ: Molecular Changes Accompanying Disease Progression. [Online]. National Library of Medicine. Last Updated: 14 May 2022. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9135892/#:~:text=Ductal%20carcinoma%20in%20situ%20(DCIS [Accessed 25 May 2023].

Wong, N. et al. (2015). WU-CRISPR: characteristics of functional guide RNAs for the CRISPR/Cas9 system. [Online]. BioMed Central. Last Updated: 2 November 2015. Available at: https://genomebiology.biomedcentral.com/articles/10.1186/s13059-015-0784-0 [Accessed 8 February 2023].

Yang, M et al. (2019). Impact of CXCR4 and CXCR7 knockout by CRISPR/Cas9 on the function of triplenegative breast cancer cells. [Online]. National Library of Medicine. Last Updated: 17 May 2019. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6527053/ [Accessed 8 February 2023].

Zacharakis, N et al. (2018). Immune recognition of somatic mutations leading to complete durable regression in metastatic breast cancer. [Online]. National Library of Medicine. Last Updated: 4 June 2018. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6348479/ [Accessed 8 February 2023].


Aaryan Doshi

PHYSICS

Aaryan Doshi decided to pursue a research project on the possible existence of “Boltzmann Brains”. This topic was evaluated through both a scientific and philosophical lens, highlighting Aaryan’s interdisciplinary interest across both subjects. A critical analysis of Boltzmann Brains leads to the conclusion that we must heavily reconsider our current cosmological models. Aaryan Doshi is studying Further Maths, Physics, and Philosophy, and aspires to read Physics and Philosophy at university.

Are Boltzmann Brains Plausible?

An examination into the reality of our experiences

Aaryan Doshi


Introduction

For many centuries, the idea of skepticism has played a huge role in philosophy; however, in the late 19th century, Ludwig Boltzmann proposed a series of theories which aim to explain the low entropic state of the Big Bang, which is seemingly contradictory to the second law of thermodynamics. Perhaps the most successful of these explanations was the idea that the universe originated as a mere fluctuation in what would be a much larger universe in which thermal equilibrium is reached. However, the statistical nature of the second law of thermodynamics gives rise to the Boltzmann Brain hypothesis – a human brain which momentarily fluctuates into existence from a high entropic universe.

The Boltzmann Brain thought experiment posits that if all that is required for consciousness is the existence of a brain, why would we have any reason to suggest that our existence is anything more than just a brain? After all, the thought experiment is in accordance with Ockham’s razor – the philosophical principle that the simplest sufficient hypothesis is the best: entities should not be multiplied beyond necessity. In this essay, I will evaluate Boltzmann's theory and any other assumptions required for the existence of Boltzmann Brains in order to establish whether Boltzmann Brains are plausible or not, from both a scientific and philosophical standpoint.

Entropy

We first need to understand the second law of thermodynamics. Generally, the law states that the entropy of a closed system never spontaneously decreases. Entropy is usually defined as a measure of the energy in a system that is unavailable to do useful work; however, taking a statistical stance, it can be more loosely defined as a measure of disorder within a system. A “chaotic” system would have a higher entropy, and an “ordered” system, such as the original state of the universe during the Big Bang, would have a low entropy. Boltzmann therefore represented the second law through the relationship (Hyperphysics, 2001):

S = k ln W

where S is the entropy, k is the Boltzmann constant and W is the “phase-space volume of a macrostate” (Carroll, 2017). The phase space is a collection of all the microstates (arrangements of particles). These microstates can be partitioned into groups, each group containing macroscopically indistinguishable microstates – systems that are distinguishable on the microscopic scale but have little to no effect on the observable product. This concept of phase space can be more easily understood through a phase space diagram (Penrose, 1989):

Fig 1 – Phase space diagram

For a given system with N particles, the phase space will contain 6N dimensions (three position and three momentum co-ordinates for each particle), so that each microstate can be represented by one point. The volume of each partitioned group (the macroscopically indistinguishable microstates described above) can be calculated to give a value for W. We can see from the diagram that the largest phase space volume is occupied by microstates in thermal equilibrium. The reason for this is purely statistical, as demonstrated by the example below (Allday, 2000). Assuming each particle must be allocated to one of 22 available positions:

There are 22C21 = 22 microstates which give the macrostate on the left.

There are 22C11 = 705432 microstates which give the macrostate on the right.

We can clearly see that the system in equilibrium (highest entropy) on the right has a much larger phase space volume. In fact, Penrose estimated the phase space volume of our current universe to be only 1 in 10^(10^123) of the total volume of the phase space (Penrose, 1989), showing the sheer improbability of the universe that we find ourselves in.
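A short Python sketch makes this counting concrete: it evaluates the two binomial coefficients above and the corresponding Boltzmann entropies. The constants are standard; the code itself is only an illustration of the formula S = k ln W.

```python
# Count the microstates for each macrostate and compare Boltzmann entropies.
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def entropy(W: int) -> float:
    """Boltzmann entropy S = k ln W for a macrostate with W microstates."""
    return k_B * math.log(W)

W_ordered = math.comb(22, 21)      # 22 ways to pick 21 of 22 positions
W_equilibrium = math.comb(22, 11)  # 705432 ways to pick 11 of 22 positions

print(W_ordered, W_equilibrium)                     # 22 705432
print(entropy(W_equilibrium) > entropy(W_ordered))  # True: equilibrium has higher S
```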

Carroll deduces from Boltzmann’s formula an exponential relationship between phase space volume and probability. For a decrease in entropy, ΔS, the probability of the decrease occurring is proportional to e^(ΔS), due to the natural logarithm in S = k ln W, hence it can be stated that:

P(ΔS) ∝ e^(ΔS)

(Carroll, 2017). Note that ΔS here is negative: the larger its magnitude, the smaller e^(ΔS), and hence the smaller the probability. We can therefore state with relative certainty that if the universe arose as a fluctuation, it would be almost infinitely more likely for our existence to be the product of a relatively small dip in entropy, such as that of a single human brain, as opposed to a whole universe. Although this evidence may seem undefeatable at first glance, the Boltzmann Brain hypothesis relies on two crucial assumptions, which I aim to evaluate in the next section: (1) the universe expands perpetually and is close to thermodynamic equilibrium (so that Boltzmann Brains are given enough time to form), and (2) fluctuations that give rise to Boltzmann Brains must be possible (Carroll, 2017). To analyse these claims we must take a deeper look into the geometry of our universe, and what that suggests about how it will evolve in the future.
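To make the exponential suppression concrete, here is a tiny Python sketch comparing the relative likelihood of a small and a large entropy dip using the relation above. The ΔS values are invented for illustration; real brain- or universe-sized fluctuations involve vastly larger magnitudes.

```python
# Compare the relative likelihood of two entropy dips using P(dS) ∝ exp(dS),
# with dS measured in units of k and negative for a decrease. Working with
# logarithms keeps the comparison from underflowing.
import math

dS_small = -50.0    # a "small" fluctuation (brain-like), illustrative number
dS_large = -500.0   # a "large" fluctuation (universe-like), illustrative number

log10_ratio = (dS_small - dS_large) / math.log(10)
print(f"The small dip is ~10^{log10_ratio:.0f} times more likely than the large one")
```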

It’s important to note the probabilistic nature of entropy here – because the second law is statistical rather than absolute, the Poincaré recurrence theorem still holds. The theorem states that any system with a finite phase space will return “arbitrarily closely to the original state”, given enough time (Carroll, 2017). If we consider a finite phase space made up of finite phase space volumes (each volume representing a different macrostate), there is always a non-zero probability that a system will return to its initial state, as this state has a non-zero phase space volume. This recurrence time must be considered when discussing the possibility of Boltzmann Brains, as it’s possible for the recurrence time to be short enough that Boltzmann Brains aren’t given enough time to form.

The Cosmological Constant and the Expansion of the Universe

I first aim to tackle condition 1. We must understand the evolution of our universe and discuss the universe’s eventual fate. The cosmological constant, Λ, was first added by Einstein to his field equations of general relativity in order to achieve a static universe – it acts as an anti-gravity force, used to prevent the universe from collapsing in on itself. Upon Hubble’s discovery of the expanding universe, Λ was eventually scrapped until the late 20th century, when redshift measurements proved the universe's accelerated expansion. A mysterious force labeled “Dark Energy” guaranteed a positive cosmological constant.

To understand the ramifications of this so-called “Dark Energy”, we must consider the density parameter, Ω, a value used to calculate the curvature of spacetime. The density parameter can be calculated through the relationship (Carroll, 1998):

Ω = (8πG / 3H²) × ρ

where H and G are the Hubble and gravitational constants respectively. The equation above can be simplified to a ratio of the universe’s average energy density (ρ) to the critical energy density (ρ_critical), that is the mass-energy required for a flat universe, whereby (Carroll, 1998):

If Ω = 1, the universe is flat (ρ = ρ_critical)

If Ω > 1, the universe has positive curvature (ρ > ρ_critical)

If Ω < 1, the universe has negative curvature (ρ < ρ_critical)
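As a sanity check on these definitions, a short Python sketch can evaluate the critical density for a Hubble constant of roughly 70 km/s/Mpc (an approximate observed value, used here purely for illustration) and then compute Ω for a given density.

```python
# Critical density rho_c = 3H^2 / (8 pi G) and the density parameter Omega = rho / rho_c.
import math

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
H0 = 70 * 1000 / 3.086e22  # ~70 km/s/Mpc converted to s^-1

rho_critical = 3 * H0**2 / (8 * math.pi * G)
print(f"rho_critical ~ {rho_critical:.2e} kg/m^3")   # roughly 9e-27 kg/m^3

def omega(rho: float) -> float:
    """Density parameter Ω = ρ / ρ_critical (equivalently 8πGρ / 3H²)."""
    return rho / rho_critical

print(omega(rho_critical))   # 1.0 – exactly the critical density gives a flat universe
```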

Fig 3 – Graph of ΩΛ against ΩM (Carroll, 1998)

The diagonal line on the graph in Fig 3 shows the relationship ΩΛ (the density parameter from the cosmological constant) + ΩM (the density parameter from regular matter) = 1. The circle represents the values of ΩΛ and ΩM based on current observations. It centres around the point (0.3, 0.7) (Carroll, 1998), giving ΩTotal to be approximately 1, suggesting that the universe is flat. In order to see what is really going on, we must take a closer look at the true nature of dark energy through the second Friedmann equation (Carroll, 2016):

ä/a = −(4πG/3)(ρ + 3p/c²)

where ä is the acceleration of the expansion, a is the expansion scale factor, ρ is density and p is pressure. We can see that in order for the expansion to accelerate, ä/a must be positive. Just like normal matter, the energy density of dark energy must be positive, hence we can deduce that the pressure, p, of dark energy must be negative in order to cancel out the initial negative sign (Carroll, 2016). This concept of negative pressure may seem ambiguous – just as a positive pressure would require work to compress, a negative pressure would require work to expand. Therefore, from the Friedmann equations, we can infer that as the universe expands, more dark energy is created as work is done on the universe, hence it must be true that the later universe will be substantially dominated by this dark energy as the normal matter dilutes. In other words, the density parameter from the cosmological constant is constant over time. However, as normal matter dilutes, its density parameter decreases. Analysing Fig 3, we see that over time the point corresponding to our current observations (0.3, 0.7) would move to the left. The universe will have negative curvature. Such a universe would perpetually expand, and it closely resembles a De Sitter space – a maximally symmetric solution to Einstein's field equations in general relativity that describes a universe with a positive cosmological constant, driven by dark energy. We have now fulfilled the first condition – a perpetually expanding universe.

It’s now important to clarify what actually causes the cosmological constant. The most successful contender is vacuum energy, which can be described as quantum fluctuations that arise from Heisenberg’s uncertainty principle, which states (Hyperphysics, 1998):

Δx Δp ≥ ℏ/2

where Δx is the uncertainty in position, Δp is the uncertainty in momentum and ℏ is the reduced Planck constant. To understand this concept better, we must understand the basics of quantum mechanics and the wave function, ψ. Quantum mechanics posits that before measurement, a particle exhibits a wave-like form called the wave function. This wave function can be used to calculate certain properties of its corresponding particle, like its momentum and location; however, there will always be an uncertainty in either variable. Wavelength determines momentum according to De Broglie’s formula λ = h/p (Hyperphysics, 1998). Fig 4 shows how the superposition of multiple waves of different wavelengths forms a wave function with a more definite (localised) position, as the square of the wave function determines the probability distribution of the position of the particle. However, this precision in position comes at the expense of momentum, as a range of different wavelengths (hence momenta) are used to form the final wave function, demonstrating the reason behind Heisenberg’s uncertainty principle.

Fig 4 – Superposition of wave function (Hyperphysics, 1998)

Mathematically, we would say that the momentum space wave function (the probability distribution for measuring specific momenta) is the Fourier transform of the position space wave function (the probability distribution for measuring specific positions). This is represented through the equation (Schneider, 2022):

φ(k) ∝ ∫ e^(−ikx) ψ(x) dx

where φ(k) and ψ(x) are the momentum and position space wave functions respectively. The variable k is proportional to momentum (the specific relationship is p = ℏk). e^(−ikx) represents a Fourier wave and is an application of Euler’s identity (e^(ix) = cos(x) + i sin(x)) (Schneider, 2022). The integral shows that we are taking the superposition of all Fourier waves to form our final probability distribution. Examine the simplified position space wave function in Fig 5:

Fig 5 – Simplified position space wave function (Schneider, 2022)

We can see that a given particle has an equal probability of being found at any point between −a and a, and a probability of 0 of being found anywhere else. The height of ψ(x) is 1/√(2a), so that the total area under the squared wave function – the total probability – is 1. Now we can use the Fourier transform equation. ψ(x) and 1/√(2a) are constants, and so are ignored in this simplification. The limits become −a and a, as all other probabilities are 0. The integral becomes (Schneider, 2022):

∫ e^(−ikx) dx (from −a to a) = [e^(−ikx) / (−ik)] (from −a to a)

Using Euler’s identity and plugging in the limits, this can be simplified to 2 sin(ka)/k (the cosines cancel as cos(x) = cos(−x)). If a is sufficiently small (i.e. we are very certain about the position of the particle), we can use a small angle approximation (sin(ka) ≈ ka). The k cancels and we are left with a constant, 2a (multiplied by another constant, C, related to those that we ignored previously). The momentum space wave function is now simply a non-zero horizontal line, φ(k) ≈ 2aC (Schneider, 2022). In other words, k (hence momentum) can take a large range of values (there is a large uncertainty in k), as the probability distribution is even across a large range of values for k. Note that we cannot measure an infinite range of values for momentum, as the small angle approximation is not exact (the line is not perfectly horizontal – it will eventually go down to 0). This is Heisenberg’s uncertainty principle – by increasing our certainty in one variable, we have decreased our certainty in another.
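The trade-off described above can also be checked numerically. The short Python sketch below is illustrative only: it uses the unnormalised result 2 sin(ka)/k derived above and shows that shrinking the position spread a by a factor of ten widens the spread of momentum values by roughly the same factor.

```python
# Momentum-space amplitude of a box wave function of half-width a is 2*sin(k*a)/k.
# Measure a rough "width" in k (where the amplitude drops to half its peak) for
# progressively narrower boxes.
import numpy as np

def momentum_amplitude(k, a):
    """2*sin(k*a)/k; np.sinc handles the k -> 0 limit, which equals 2a."""
    return 2 * a * np.sinc(k * a / np.pi)   # np.sinc(x) = sin(pi x)/(pi x)

k = np.linspace(-200, 200, 4001)
for a in (1.0, 0.1, 0.01):
    amp = momentum_amplitude(k, a)
    half_width = k[(k > 0) & (amp < amp.max() / 2)][0]
    print(f"a = {a:5.2f}  ->  momentum half-width ~ {half_width:7.2f}")
# Narrower position spread -> proportionally wider momentum spread.
```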

The same reasoning applies to an energy-time uncertainty principle, which goes as (Hyperphysics, 1998):

ΔE Δt ≥ ℏ/2

Therefore, it’s possible for something to be created from nothing in empty space, so long as this increase in energy annihilates back into nothing in a time that is so short that it’s immeasurable. Hence, the uncertainty principle is still obeyed. These are known as vacuum fluctuations, and it’s thought that these fluctuations, which give rise to vacuum energy, are what drive the universe’s expansion – vacuum energy is dark energy. So why does this matter? Well, if space is constantly filled with these virtual particles created from nothing, it’s very much possible for a Boltzmann Brain to be the product of a quantum fluctuation, albeit a very rare one, fulfilling condition 2.

More importantly, these fluctuations give rise to Hawking radiation in De Sitter space. Just as a black hole has an event horizon, a De Sitter space too has a horizon, beyond which the expansion of the universe exceeds the speed of light. Relativistic effects near the horizon cause the creation of “real” particles from “virtual” ones (those from quantum fluctuations). Most of these particles would be massless, such as photons and gravitons. Although such particles cannot form Boltzmann Brains, this phenomenon is still crucial in the debate surrounding Boltzmann Brains. Classically, a De Sitter space would be diluted of “real” matter altogether and would cool to absolute zero, at which point statistical fluctuations would no longer be possible (Carroll, 2017). However, due to Hawking radiation, there exists a fixed, non-zero temperature in De Sitter space (Carroll, 2017), which allows for the existence of particles, albeit massless particles. However, these photons can undergo pair production, converting their energy to particles with mass. Such particles can accumulate over time and can form Boltzmann Brains (Carroll, 2017). This process is known as nucleation (Carroll, 2017). Boltzmann Brains which form due to quantum fluctuations annihilate almost instantaneously, barely enough time to process a single thought. However, Boltzmann Brains formed by nucleation are able to exist for much longer periods of time, and hence could explain how a Boltzmann Brain could perfectly simulate human experience.

Probability

Now that we have shown that in theory Boltzmann Brains are indeed possible in our universe, we must assess probabilities to see how plausible they really are. The probability that I am a Boltzmann Brain is substantially greater than the probability that I am an ordinary observer. However, it’s not infinitely greater. The Poincaré recurrence theorem means that the universe will eventually fluctuate back to a low entropic state in a time related to the universe’s maximum entropy; for our universe this recurrence time is of the order of 10^(10^122) seconds (Carroll, 2017). The probability of the decrease in entropy needed to form a Boltzmann Brain would be of the form e^(ΔS), where ΔS is a negative value for the change in entropy. This value can be thought of as the rate at which Boltzmann Brains form. Therefore, the number of Boltzmann Brains which form within the recurrence time for our universe would be e^(ΔS) × 10^(10^122). Due to the sheer size of the recurrence time, the factor of e^(ΔS) would not make a big difference to the result, hence the number of Boltzmann Brains which fluctuate into existence would be approximately 10^(10^122) (Carroll, 2017). Meanwhile, we can assume that real observers, what we assume ourselves to be, can only form at the start of the universe’s lifetime. The number of such observers would take a minimum value of just over the current population of Earth (around 10^11), while a reasonable maximum would be 10^100 (Carroll, 2017). Even if we use the upper bound of 10^100, it’s still a negligible proportion in comparison to the number of Boltzmann Brains. But not all hope is lost. We have merely calculated the prior probabilities of whether we are Boltzmann Brains or not. We have not considered the conditional probabilities given our experiences.
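Because these counts are double exponentials, the comparison is easiest to see in logarithms. A trivial Python sketch, using the figures quoted above, makes the point that subtracting the ordinary observers changes essentially nothing.

```python
# Compare the two counts above via their base-10 logarithms, since 10**(10**122)
# cannot be represented directly as an ordinary float.
log10_boltzmann_brains = 1e122    # log10 of ~10^(10^122) Boltzmann Brains
log10_ordinary_observers = 100.0  # log10 of the 10^100 upper bound on ordinary observers

ratio_exponent = log10_boltzmann_brains - log10_ordinary_observers
print(f"Boltzmann Brains outnumber ordinary observers by a factor of ~10^{ratio_exponent:.6g}")
# Subtracting 100 from 10^122 makes no visible difference to the exponent.
```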

Philosophical arguments

The proposal that I am a Boltzmann Brain may fit with our current cosmological model; however, many such thought experiments have existed in philosophy since the initial accounts of skepticism, alongside multiple objections. I will specifically be looking at Putnam’s Brain in a Vat (BIV) hypothesis, as it’s the most comparable to the structure of a Boltzmann Brain. Below is Putnam’s argument, which I have altered to fit the Boltzmann Brain hypothesis (Brueckner, 1986):

(1) If I were a Boltzmann Brain, I would not, for example, be reading this paper

(2) The hypothesis that I am a Boltzmann Brain is a counter-possibility to the idea that I am actually reading this paper

(3) If I were to know I was reading this paper and (2), then I would know that I am not a Boltzmann Brain

(4) I know that (2)

(5) I do not know that I am not a Boltzmann Brain

(6) Therefore, I do not know that I am reading this paper

Now we must assess both hypotheses: that I am actually reading this paper, i.e. I am a real observer (H1), or that a Boltzmann Brain has caused me to have the experience of reading this paper, i.e. I am a Boltzmann Brain (H2). To assign probabilities to these hypotheses, we must take a look at Bayes’ Theorem, which states that for any piece of evidence E and hypothesis H (Huemer, 2016):

P(H|E) = P(H) × P(E|H) / P(E)

Fig 6 – BIV vs Ordinary Observers (Huemer, 2016)

Traditionally, applying Bayes’ Theorem to the BIV hypothesis, we get the distribution shown in Fig 6. The total area under each graph is the total probability, 1 (Huemer, 2016). We can see that a BIV theory predicts a more spread-out distribution, whereas H1 predicts that it’s much more likely that we would only be able to experience a handful of scenarios. Such scenarios, such as the fact that “I am reading this paper”, must be highly ordered and coherent. This is a crucial factor when considering Boltzmann Brains, as it may explain the low phase space volume (and low entropy) of our current observations, assuming that we are ordinary observers. This is an application of the Anthropic Principle in cosmology, which states that life is only able to emerge where the conditions allow it. Even if the majority of phase space is taken up by thermal equilibrium and the universe does indeed spend most of its time in such a state, it would be unreasonable to think that humans could withstand such harsh and dead conditions (Carroll, 2017). Rest assured; I have now shown that Penrose’s calculation of our phase space volume being only 1 in 10^(10^123) of the total volume no longer poses as much of a threat. I have shown that there is at least some chance that we are ordinary observers; however, we must now address the key issue and key difference between the BIV and Boltzmann Brain hypotheses. That is that we have strong scientific evidence to accept Boltzmann Brains, unlike the BIV.
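A toy Bayesian update illustrates the shape of this argument. All of the numbers below are invented: even if the prior for the Boltzmann Brain hypothesis (H2) is large, a much higher likelihood of having this particular ordered, coherent experience under the ordinary-observer hypothesis (H1) pulls the posterior back towards H1.

```python
# Bayes' theorem with two exhaustive hypotheses H1 and H2 (illustrative numbers only).
def posterior_h1(prior_h1, likelihood_h1, prior_h2, likelihood_h2):
    """P(H1 | E) = P(H1)P(E|H1) / [P(H1)P(E|H1) + P(H2)P(E|H2)]."""
    evidence = prior_h1 * likelihood_h1 + prior_h2 * likelihood_h2
    return prior_h1 * likelihood_h1 / evidence

# H2 starts off a million times more probable a priori, but H1 makes the observed
# coherent experience a billion times more likely.
print(posterior_h1(prior_h1=1e-6, likelihood_h1=1e-3,
                   prior_h2=1.0 - 1e-6, likelihood_h2=1e-12))
# ≈ 0.999 – the posterior favours the ordinary-observer hypothesis.
```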

Cognitive instability

All this strong evidence for the existence of Boltzmann Brains becomes redundant when we consider the possibility that we are a Boltzmann Brain. If I were a Boltzmann Brain, I would have no reason to trust any of the empirical evidence that led me to that conclusion in the first place – neither would my observations have any reason to be coherent from one moment to the next. The thought experiment is self-undermining. Although one could still argue that it’s more likely to be a Boltzmann Brain that by chance has coherent experiences in accordance with the physical laws than to be a real observer, the problem is just shifted – it would be significantly more likely to be a Boltzmann Brain with disordered and false experiences, so why is that not the case? Carroll (Carroll, 2017) argues that a theory that creates such a contradiction should just be assigned a zero prior probability. It’s much more likely that there is a fault or something missing in our current cosmological model. In fact, it’s almost certain that there is something missing, as similar arguments to Boltzmann Brains can be proposed. For example, it’s still much more likely that our experience is the product of a randomly fluctuated “Boltzmann solar system” or “Boltzmann galaxy” (Carroll, 2017). In such examples, we would be able to trust empirical evidence, hence if they were true, we should not observe multiple galaxies, let alone multiple solar systems. Yet we do. Even though “Boltzmann galaxies” or “Boltzmann solar systems” are significantly less likely than Boltzmann Brains, they are still much more likely than the low entropic universe that we currently find ourselves in. Therefore, our current cosmological model does not line up with our experiences – there must be a flaw somewhere. Or more to discover.

The Big Rip

One potential issue with our current cosmological model is that the cosmological constant is not actually constant. Such a form of dark energy, known as “phantom energy”, would mean that the universe will not tend towards a De Sitter state, but rather at some point, the force of dark energy would be large enough such that particles themselves are torn apart into their constituent parts in a so-called big rip. Although most evidence points to the cosmological constant being constant, some observations lead some to believe that this big rip scenario is viable. In such a universe, Boltzmann Brains or any other Boltzmann entities would not be given enough time to form, hence overcoming the Boltzmann Brain problem.


We can find evidence for such a scenario by studying quasars – supermassive black holes that are “feeding” on surrounding gas (potentially the result of two galaxies colliding) held in their gravitational pull. The gas forms an accretion disk, where the gas’s gravitational potential energy is converted to heat. UV photons produced in this process undergo collisions with relativistic electrons in a layer known as the “corona” above the accretion disk (Risaliti, 2019). These collisions result in the photons gaining energy and moving to X-ray frequencies through a process known as inverse Compton scattering (Risaliti, 2019). The relationship between the UV and X-ray brightness is not linear. Therefore, by measuring the ratio of observed UV to observed X-ray brightness, it’s possible to calculate the actual UV brightness of the quasar. Comparing this actual UV brightness to the observed UV brightness, we are able to calculate the distance of the quasar from us. Plotting a graph of distance against redshift on a Hubble diagram, we get:

Fig 7 – Hubble diagram of quasar distance against redshift (Risaliti, 2019)

The pink dotted curve shows the redshifts that we expect to observe under a constant energy density of dark energy. The black curve, which is slightly lower than the expected curve, shows the observed results. Quasars seem to be more redshifted for a given distance when compared to the expected results. This suggests that it could be possible that the energy density of dark energy does increase over time. This could potentially lead to a Big Rip scenario, eliminating the Boltzmann Brain problem.

Conclusion

Upon analysing the scientific theory behind Boltzmann Brains, they do seem to be a threat. In 2002, following major cosmological breakthroughs, scientists became more concerned that Boltzmann Brains could very well exist. By merely considering the physics, I conclude that Boltzmann Brains are plausible.

However, weighing this up against philosophical arguments leads me to conclude that you are probably not a Boltzmann Brain. Despite our current cosmological model pointing towards the idea, the Boltzmann Brain thought experiment is self-undermining and hence worthy of no consideration. What we must instead consider is that our current cosmological model is incomplete or false. Therefore, although the Boltzmann Brain thought experiment provides little skeptical threat, it’s still useful. It acts as a “reductio ad absurdum” argument – any new cosmological model must eliminate the threat of Boltzmann Brains in order to be considered. My investigation into this theory has led me to expose the gaps in our current model of the universe. A lot is yet to be discovered – potentially a dark energy whose energy density increases over time, leading to a “Big Rip”. Further research has led me to discover more potential hypotheses that eliminate the Boltzmann Brain problem, such as a Bohmian model of wave function collapse in quantum mechanics – a model in which particles are guided by physical waves, their behaviour determined by an extra equation of motion on top of the Schrödinger equation, and in which a process known as “freezing” arises so that Boltzmann Brains would not be able to function. However, the mathematics and concepts in such a theory are beyond the scope of this essay.


Works Cited

Allday, A., 2000. Very Large Versus Very Small. In: Advanced Physics. 1st ed. Oxford: Oxford university press, pp. 284-285.

Brueckner, A. L., 1986. Brains in a Vat. The Journal of Philosophy, Volume 83, pp. 148-167.

Carroll, S., 1998. The Cosmological Constant. Encyclopedia of Astronomy and Astrophysics,p. 1.

Carroll, S., 2016. Why Does Dark Energy Make the Universe Accelerate?. [Online]

Available at: https://www.preposterousuniverse.com/blog/2013/11/16/why-does-dark-energy-make-the-universe-accelerate/ [Accessed 20 December 2022].

Carroll, S., 2017. Why Boltzmann Brains are bad. Current Controversies in Philosophy of Science, pp. 3-4.

Carroll, S., 2018. Cosmic equilibration: A holographic no-hair theorem from the. PHYSICAL REVIEW, p. 1.

Huemer, M., 2016. Serious theories and skeptical theories: Why you are probably not a brain in a vat. Philosophy in the Analytic Tradition, 173(4), pp. 1031-1052.

Hyperphysics, 1998. The Uncertainty Principle. [Online]

Available at: http://hyperphysics.phy-astr.gsu.edu/hbase/uncer.html [Accessed 20 December 2022].

Hyperphysics, 2001. Entropy. [Online]

Available at: http://hyperphysics.phy-astr.gsu.edu/hbase/Therm/entrop.html [Accessed 20 December 2022].

Penrose, R., 1989. Cosmology And The Arrow Of Time. In: The Emperor's New Mind. Oxford: Oxford University Press, pp. 402-445.

Risaliti, G., 2019. Cosmological constraints from the Hubble diagram of quasars at high. Nature Astronomy, Volume 3, p. 7.

Schneider, E., 2022. Discovering the Fourier Transform Through Quantum Mechanics. [Online] Available at: https://www.physicswithelliot.com/fourier-mini-notes [Accessed 03 06 2023].


Aparna Shankar

BIOLOGY

Aparna Shankar chose her project on ‘The Genomics of Inequality” after being told her literature review on plant biology was uninspired. She realised her ERP advisor was right, and that she had not chosen a project she was interested in, more one she thought would be of use to her. Her chosen project encompasses many different disciplines, ranging from the history of race science to the genetics of diversity to behavioural psychology and sociology. Aparna currently studies Biology, Chemistry and Maths and aspires to read Human Sciences at university, an interdisciplinary degree which reflects the nature of her project.

The genomics of inequality – to what extent does science play a part in race and how has scientific racism shaped societal inequalities?

‘I think I shall avoid the whole subject as so surrounded with prejudices, though I fully admit that it is the highest and most interesting problem for the naturalist’ is what Charles Darwin wrote in a letter to Alfred Russel Wallace. [Darwin, 1857] Darwin acknowledged the difficulties of prejudice in science, but he avoided the topic entirely. This project, however, will look to delve into the history, sociology, psychology and genetics behind scientific racism, looking at evolutionary pathways and systems to truly try to understand the origins of race-based societal inequality, and see how much, if at all, science plays a part in it all.

Science and race have always seemingly been intertwined, with race-based prejudice and discrimination being ‘explained’ by biological phenomena throughout history. It is worth noting that race is something exclusive to humans; we don’t use the term in the same sense to classify any other species. 18th and 19th century scientists were determined to study and prove ‘race science’, with countless theories and hypotheses being tested and seemingly ‘proven’. Of them, the most noteworthy were proposed by Johann Friedrich Blumenbach and Samuel George Morton. Blumenbach was one of the first to categorise human beings into sects, describing five human types in the third edition of On the Natural Varieties of Mankind – ‘Caucasians, Mongolians, Ethiopians, Americans and Malays’. [Saini, 2019, p3] He coined the term ‘Caucasian’ in the process, which is still harmlessly used in society today. However, the word Caucasian had an entirely different intent in 1795, as Blumenbach conveniently claimed it was the “original” race and consequently the most “beautiful”. Similarly, Samuel George Morton was largely interested in craniometry, a pseudoscience, with the theory that individuals with a larger cranial capacity were intellectually superior. He is said to have possessed a large collection of skulls, over 600 in number, of which the Caucasian skulls were the largest. [Jordan, 2023] They were noted to be “distinguished by the facility with which it attains the highest intellectual endowments”, whereas Ethiopians were described as “joyous, flexible, and indolent; while the many nations which compose this race present a singular diversity of intellectual character, of which the far extreme is the lowest grade of humanity”. [Menand, 2001] Whilst in the present day measuring intellect using anthropometrics would be ridiculed as is, the way Morton carried out his experiments was also not well thought out, as he failed to take into account many factors, such as gender and overall body size – often information he did not possess – in his calculations. Nonetheless, from Morton’s studies stemmed possibly the most poignant theory in terms of historical racism – polygenism – the notion that human races were distinct species. This brings us onto eugenics, whose origins can be traced back to Francis Galton, a British explorer and natural scientist, who coined the term in 1883. He was influenced by Darwin’s then recently hypothesised theory of evolution, surrounding the ‘survival of the fittest’, and wanted to create a system in which “the more suitable races or strains of blood [have] a better chance of prevailing speedily over the less suitable”. [Wilson, 2016] By World War I, eugenics was widely renowned and supported by politicians and scientific organisations; however, World War II, and the mass genocide we now know as the Holocaust, led to extreme criticism of eugenicists’ values and ideologies, with eugenics ultimately failing as a scientific theory.

The fact that it took a mass genocide to rid renowned scientists of their notions of eugenics and polygenism is certainly striking. And for that reason, I propose that race science then was simply wishful thinking. At the time, the notion that Caucasians were superior to the minority races was considered an obvious fact, and it was believed with or without evidence. The studies were conducted with innate bias, which unsurprisingly led to any scientist disputing them being ignored. Stephen Gould, in a paper discussing Morton’s craniometry techniques, suggested something quite drastic – “unconscious or dimly perceived finagling is probably an endemic in science, since scientists are human beings rooted in cultural contexts, not automatons directed toward external truth”. [Gould, 1978] He proposed that unconscious bias when manipulating and concluding research and data may be a norm in science. And to an extent, especially in terms of race science in the 18th and 19th centuries, he is correct, but how far does this prevail in scientific society today?

Scientific racism. The Harvard Library describes this phenomenon as ‘a history of pseudoscientific methods “proving” white biological superiority and flawed social studies used to show “inherent” racial characteristics’. [Harvard, 2020] And the double quotes are rightly placed, because in the modern day there is a great deal of evidence showing that race has no scientific basis; rather, it is entirely a social construct, which will be discussed in due course. Yet our history being intertwined with eugenics perpetuates the belief that certain races are, if not superior, then simply different to others. Even in genetics journals today, the words black and white are weighted, and still seem to hold some sort of value. But in terms of genetic diversity, how much do humans actually differ from one another?

Homo sapiens, as a species, are not as genetically diverse as one might expect given our range of phenotypic expression. We are less diverse than our close evolutionary relatives, chimpanzees, and any two humans only differ by about 0.1%, or one out of every thousand base pairs. This genetic diversity exists most commonly in the form of single nucleotide polymorphisms, but can also manifest as insertions, deletions, duplications and rearrangements, all randomly arisen. [NIH, 2007] So if we have little variation between us, what is the idea of ‘race’ based on? The Oxford English Dictionary defines race as “one of the main groups that humans can be divided into according to their physical differences, for example the colour of their skin”, but race being intertwined with behavioural characteristics – for example, black people being perceived as more aggressive, or intellectually lacking, throughout history – has tainted the meaning of race, taking it beyond the surface level to hold more significance, and deceivingly so. The solid boundaries that scientists (usually race scientists such as Blumenbach) have previously drawn between races are nowhere to be seen in science and genetics. This shows that not only does grouping people into certain races have major flaws, but distinctively categorising their behaviours or intellects based on this has little to no scientific basis to begin with.

Putting our genetic differences aside, the vast majority of our genomes we share with one another. Our genetic similarities far outweigh our genetic differences, which corresponds with the relatively high percentages of genes we share with other species with very contrasting phenotypic expressions; for example, we share around 98% of our DNA with pigs, simply because the genetic code is universal: the same triplet codes code for the same amino acids, and proteins are shared between species. In terms of human populations, the NIH states that “genetic variation around the world is distributed in a rather continuous manner” and that Homo sapiens as a species are “continuously variable and interbreeding”. [NIH, 2007] This is because most human genetic variation exists within populations, as opposed to between them, as shown by a study led by Noah Rosenburg in 2002. The researchers studied human population structure by comparing 377 loci from 1056 people belonging to 52 populations, and found that differences within populations account for 93 to 95% of genetic variation, whereas differences between major population groups only make up 3 to 5% of genetic differences. [A. Rosenburg et al., 2002] In this case, populations are defined on a cultural and geographical basis, including but not limited to Africans, Caucasians, and East Asians. This is what most mean when talking about race in a broader sense, but it comes with its limitations. Because where do you draw the lines? Where does one ‘race’ start and one end? If races were genetically distinct enough to be prominent, Homo sapiens could be divided into subspecies, but this study shows that our existence as a species is quite frankly messy, and too complex to be grouped, at least from a molecular biology perspective. And whilst many race scientists in the past have tried, their efforts all failed once the findings from the Human Genome Project were published.
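
To make the within-versus-between comparison concrete, the toy sketch below (written in Python, using entirely simulated allele counts and invented population labels, not Rosenburg's data or method) partitions variance at a set of loci into a within-population and a between-population component; with only small frequency differences between groups, the within-population share dominates, echoing the 93 to 95% figure quoted above.

```python
# A minimal sketch (not Rosenburg et al.'s actual pipeline) of partitioning
# genetic variance into within- and between-population components.
import numpy as np

rng = np.random.default_rng(0)

# Simulate allele counts (0, 1 or 2 copies) at 100 hypothetical loci for three
# made-up populations whose allele frequencies differ only slightly.
base_freq = rng.uniform(0.2, 0.8, size=100)            # shared ancestral frequencies
populations = []
for shift in (-0.02, 0.0, 0.02):                        # small between-group differences
    p = np.clip(base_freq + shift, 0, 1)
    populations.append(rng.binomial(2, p, size=(200, 100)))   # 200 individuals each

all_data = np.vstack(populations)
grand_mean = all_data.mean(axis=0)

# Within-population variance: spread of individuals around their own population
# mean; between-population variance: spread of population means around the grand mean.
within = np.mean([pop.var(axis=0).mean() for pop in populations])
between = np.mean([((pop.mean(axis=0) - grand_mean) ** 2).mean() for pop in populations])

total = within + between
print(f"within-population share:  {within / total:.1%}")
print(f"between-population share: {between / total:.1%}")
```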

Although if we look at it from a geographical perspective, there seems to be some argument for genetic ‘race’ corresponding with migration patterns. Theresa Duello, a social scientist, discusses the definition of genetic ‘race’, stating that it “has been viewed as a result of human migration with genetic isolation leading to the development of distinct populations that share DNA as the result of common descent”. [Duello, Rivedal et al. 2021] If we return to the study led by Noah Rosenburg: the data from his findings was put into a computer program called STRUCTURE. Based on similarities, this program can sort the data into clusters. When the program sorted the data into 5 clusters, the 5 most prominent geographical regions were grouped together: Africans, Europeans and the Middle East, Eastern Asians, Australians and the Americas. [Rutherford, 2016] This is understandable, due to migration patterns. But when the program sorted the data into 6 clusters, the Kalasha, a tribe originating in Northern Pakistan, arose as the sixth group. The Kalasha are a highly endogamous, isolated group of people with their own language and culture, and they only consist of around 4000 individuals. Even the most committed race scientist would not consider the Kalasha to be an entirely different race altogether; it is more likely they would just be loosely grouped with other South Asians due to their features. There is no denying that there is genetic variation between certain populations, just not enough in either size or specificity to justify grouping populations into genetic ‘races’. And grouping an incredibly diverse set of people together perfectly masks the genetic variation between them.
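
As a loose illustration of how sensitive such groupings are to the chosen number of clusters, the sketch below is only an analogy (STRUCTURE itself is a dedicated Bayesian program): it clusters synthetic genotype-like data with scikit-learn's KMeans, where a deliberately tiny sixth group stands in for an isolated, endogamous population such as the Kalasha, and increasing K from 5 to 6 simply peels that small group off.

```python
# A loose analogy, not the STRUCTURE program itself: clustering synthetic
# genotype-like data into K groups and inspecting the result for K = 5 and K = 6.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Six hypothetical source groups; the sixth is tiny and highly distinct.
centres = rng.normal(0, 1, size=(6, 50))
sizes = [300, 300, 300, 300, 300, 30]
X = np.vstack([rng.normal(c, 0.3, size=(n, 50)) for c, n in zip(centres, sizes)])

for k in (5, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(f"K={k}: cluster sizes = {np.bincount(labels)}")
```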

It is worth noting that this study is often blindly utilised by race scientists to group geographical populations together under the notion of race, and whilst it is a valid and entirely admissible study in its own right, people who cite it as evidence fail to acknowledge that it only takes into account genotypes, and not the phenotypes on which their perception of race is based. Rutherford states in his book ‘How to Argue with a Racist’ that “this type of analysis is the basis of studying human history, migration and genetic variation between populations and people”. [Rutherford, 2020, p26]

Additionally, one may assume that because of the difference in phenotypic expression between, say, East Asians and Africans, the cause would be major underlying genetic differences (major enough to be distinctive), but this isn’t the case. And whilst we continue to think of race as physically distinct populations, there is no biological basis for this assumption, as race doesn’t and has never fit a certain ‘model’. [Britannica, 2023] Features such as skin colour, facial features and hair texture – surface-level variation – whilst they may seem specific to different races, cannot distinctly identify between them due to their vast overlap, much like the genes coding for these features. For example, epicanthic folds are largely used as an identifier for a person of East Asian descent, but can be seen in other populations like Inuits, who wouldn’t be considered East Asian. This goes to show that there isn’t a conclusive list of features for each ‘race’. The term ‘black’ is generally rendered useless as scientific terminology, because research has shown that two black individuals may well have more genetic differences between each other than either has with a white individual. [Rutherford, 2016]

The concept of race does not hold enough specificity within biology to have much value at all, because the fact of the matter is, race, as we know it, does not exist within genetics. Therefore, any claims of certain races’ superiority in areas of intelligence or strength can be rendered inaccurate.

We’ve acknowledged that race has no biological significance, but to explore the effects and consequences of inequality, we must first familiarise ourselves with the core origins of segregation and discrimination. The psychological and subconscious nature of prejudice is acknowledged in society today, but the suggestion that prejudice is in fact an evolutionary trait, a protection mechanism designed for the safety of communities in the past [Neuberg, Cottrell 2005], suggests even further that, from a biological standpoint, preconceived notions are inevitable. There is no doubt that there is a clear distinction between subconscious prejudices and hateful behaviour, but to what extent are prejudices under our conscious control? And where did they come from? In 2004, Fishbein theorised that there were three reasons for the evolutionary basis of hatred and prejudice, and that they began in hunter-gatherer tribes. He began by arguing that “prejudice underlies the development of hatred toward various outgroups. Hence, in order to understand the origins of hatred, it is essential to understand the origins of prejudice.” [Fishbein, 2004] Inclusive fitness, authority-bearing systems and intergroup hostility are the three mechanisms said to be acting behind evolution-based prejudice, acts that were “appropriate and necessary for subsistence mode” in the past, according to the paper. The theory of inclusive fitness recognises that individuals will show a preference towards their family, who will often have the same or similar phenotypic expression to them, increasing the inclination towards ingroup favouritism. Authority-bearing systems are when individuals accept wholly what authorities tell them from a young age and internalise this information. This makes it increasingly difficult to rid individuals entirely of these opinions, because these systems ensure they are deeply ingrained into their thought processes from their youth. And the third reason, intergroup hostility, has been observed in most primates. Their relationships, including hunter-gatherer relationships, are said to customarily be tense and hostile, in efforts to protect the more vulnerable young and female members of the group as well as to conserve food resources and maintain group cohesion. [Fishbein, 2004] These three systems work cohesively to underpin the reasons why prejudice naturally exists, but there is a fourth mechanism which contradicts these: outgroup attractiveness. Mentioned in Fishbein’s book, Peer Prejudice and Discrimination, this is largely based on the necessity of genetic variation within a population, to “accommodate to environmental changes, and to prevent the deleterious effects of excessive inbreeding and genetic drift”. [Fishbein, 2002] This shows that it is important to consider all the factors surrounding the evolutionary basis of prejudice: genetic diversity was important to maintain for survival of the fittest and evolution by natural selection in Homo sapiens, yet prejudice has still prevailed through other mechanisms in an instinctive manner, denoting its importance in the survival and nature of our species.

The dynamic between humans has undeniably changed from when we were hunter-gatherers to recent history, so the extent to which the evolutionary basis of prejudice can be blamed for racism is questionable. As we have seen, science has frequently been used as a scapegoat for racism, as with craniometry, and intellectual racists still exist in the modern day. Saini explains it wonderfully: “It takes some mental acrobatics to be an intellectual racist in the light of scientific information we have today. Racists will find validation wherever they can, even if it means working a little harder than usual”. [Saini, 2019, p158] This goes to show that people who want to be racist will be. They will try to find distinguishing characteristics between races and use these to argue that certain races must be superior to one another or ranked in some way. Which takes us back to Stephen Gould’s words. Science is not objective; we strive to make it as objective as possible, but in social science, especially when the scientist carries an amalgamation of life experiences that can cause them to become biased, it proves extremely difficult. However, there is a fine line between bias and racism. The Nobel laureate James Watson, heavily renowned for his work discovering the structure of DNA, has made many racist statements relatively recently, even after the general consensus within the scientific community was that race was not scientifically based. These include suggesting that Africans had lower intelligence levels compared to Europeans due to their genetics. Intelligence is a complex trait, which scientists believe to be shaped by socioeconomic, cultural, environmental and possibly genetic factors whose mechanisms we do not yet truly understand. One would assume that such an educated, successful scientist would be better informed of the science around him, but at the very least he proves useful as a prime example of Saini’s logic. And he answers the question we asked previously, as to how far bias persists in scientific society today.

Race science has unfortunately had everlasting effects on society, impacting social inequalities even today. We don’t need to look to Blumenbach or Social Darwinism to find actions skewed by what race is perceived to be, because the truth is racial inequalities have multiple, multifaceted causes. With or without race pseudoscience, racism would exist; racists in the past and even now haven’t needed and don’t need scientific evidence to hold racist beliefs, as James Watson has excellently proven. Therefore, scientific racism hasn’t ‘shaped’ societal inequalities per se. Colonialism and neo-colonialism exacerbate already existing inequalities between developed countries and less developed ones. Historical systems of discrimination and oppression, and socioeconomic differences, also play a part in this. And whilst an entirely new book, or several, could be written on racism and its consequences affecting every walk of life as we know it, I’ll touch on one of the areas truly highlighting the genomics of inequality.

The 0.1% of our DNA that is subject to genetic variation is a goldmine for genomic research: it allows genetic risk factors for disorders in different populations to be identified, allowing for many tailored health benefits such as suggested lifestyle changes or precision medicine being prescribed. However, large disparities in population representation maintain historical inequalities in genomics [Bentley. A, 2017], consequently stunting research which has the undeniable potential to help everyone on the planet. To expand, the majority of sequenced genomes from Genome-Wide Association Studies (GWAS) and other genetic databases are from individuals of European descent, resulting in what the western world would consider ‘minority populations’ being severely underrepresented in genomic studies. Europeans only make up 16% of the global population but account for nearly 80% of participants in GWAS. [Nature, 2017] This lack of diversity has led, and will lead, to genomic research being advantageous to a select few; whilst the research is still beneficial to Europeans and other well-represented populations, who inevitably have a higher GDP and more funds allocated for research, it does not even begin to give the bigger picture of genomic studies. That bigger picture is undoubtedly essential to prevent inequalities in medicine, where only well-represented populations benefit from genomic advances like pharmacogenetics and the identification of polygenic risk scores for predicting certain disorders. Minority populations would be put at a disadvantage simply because of this exclusivity in GWAS, leading to severe disparities in healthcare.
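
A quick back-of-the-envelope check, using only the two percentages quoted above and taking them at face value purely for illustration, shows the scale of this over-representation:

```python
# Rough illustration of the representation gap described above; the two figures
# are the ones quoted in the text, not newly sourced statistics.
european_share_of_population = 0.16   # ~16% of the global population
european_share_of_gwas = 0.80         # ~80% of GWAS participants

over_representation = european_share_of_gwas / european_share_of_population
print(f"Europeans are over-represented in GWAS by a factor of ~{over_representation:.0f}")
# -> a factor of ~5, leaving other populations correspondingly under-sampled.
```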

All in all, race exists. It does not have a solid scientific basis, but it exists as a social construct. To deny its existence may be well-intentioned, but it fails to acknowledge the extreme hardships that people of colour have been through throughout history and still endure today. Putting so much pressure on appearance, attaching whole identities to skin colour and pigmentation, seems crude, especially when we consider it can lead to racial profiling. So perhaps it is the definition of race we need to change, because if we perceive race from a purely phenotypic perspective, there isn’t a solid biological basis behind it, although scientific racists will likely continue subtly trying to prove the superiority of certain races. But if we perceive race to be an amalgamation of not only the histories but also the migrations and cultures of humans coming together and interbreeding, creating the diverse population we have today, without segregation or discrimination, maybe there is a way forward in the right direction. Because studying genetic race isn’t always negative; it can give much insight into anthropology and our human history. But conducting this research with inherent biases skews results that have the potential to seriously harm and miseducate. The objectivity of science should always be protected.

105

Bibliography

Adam Rutherford. (2016). The end of race. In: -. (Ed). A Brief History of Everyone who ever lived. UK: Weidenfeld & Nicolson (UK). pp.357-359.

Adam Rutherford. (2020). How to argue with a racist. Great Britain: Weidenfeld & Nicholson. p.26.

Angela Saini. (2019). Superior, the Return of Race Science. London: Harper Collins. p.3. p.158

Bentley, A., Callier, S., & Rotimi, C. (2017, October 8). Diversity and inclusion in genomic research: Why the uneven progress? Retrieved March 29, 2023, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5614884/.

Britannica. (2020). Louis Agassiz [Online]. Britannica Schools. Available at: https://school.eb.co.uk/levels/advanced/article/Louis-Agassiz/3993 [Accessed 8 February 2023].

Cambridge University. (1857). Darwin Correspondence Project. [Online]. Cambridge: Darwin Project. Last Updated: 2022. Available at: https://www.darwinproject.ac.uk/letter/DCP-LETT-2192.xml [Accessed 7 February 2023].

Cottrell, C. A., & Neuberg, S. L. (2005). Different Emotional Reactions to Different Groups: A Sociofunctional Threat-Based Approach to. Journal of Personality and Social Psychology. 88(5), pp.770-789. [Online]. Available at: https://doi.org/10.1037/0022-3514.88.5.770 [Accessed 8 February 2023].

Duello TM, Rivedal S, Wickland C, Weller A. (2021). Race and genetics versus ‘race’ in genetics. Evol Med Public Health. 9(-), p.289. [Online]. Available at: 15;9(1):232-245. doi: 10.1093/emph/eoab018 [Accessed 29 March 2023].

Facing History and Ourselves. (2017). The Science of Race. [Online]. Facing History. Last Updated: November 15, 2017. Available at: https://www.facinghistory.org/resource-library/science-race [Accessed 8 February 2023].

Fishbein,H. (2002). Peer Prejudice and Discrimination. 2nd ed. Mahwah, New Jersey: Lawrence Erlbaum Associates, Publishers. p.40.

Harvard Library. (2020). Confronting Anti-Black Racism - Scientific Racism. [Online]. Harvard Library Education. Last Updated: 2020. Available at: https://library.harvard.edu/confronting-anti-blackracism/scientific-racism [Accessed 8 February 2023].

Louis Menand. (2002). Morton, Agassiz, and the Origins of Scientific Racism in the United States. The Journal of Blacks in Higher Education. No.34(Winter, 2001-2002), pp.110-113. [Online]. Available at: https://www.jstor.org/stable/3134139 [Accessed 7 February 2023].

Nature. (2019). Genetics for all. Nature Genetics. 51(-), p.579. [Online]. Available at: https://doi.org/10.1038/s41588-019-0394-y [Accessed 29 March 2023].

Nature Editorial. (14 May 2019). Whose genomics? Nature Human Behaviour. 3(-), p.409–410. [Online]. Available at: https://doi.org/10.1038/s41562-019-0619-1 [Accessed 29 March 2023].

NIH. (2007). Understanding Human Genetic Variation. [Online]. NIH Curriculum Supplement Series. Last Updated: 2007. Available at: https://www.ncbi.nlm.nih.gov/books/NBK20363/ [Accessed 29 March 2023].

Philip K Wilson. (2016). Eugenics. [Online]. Britannica. Last Updated: Jan 6, 2023. Available at: https://www.britannica.com/science/eugenics-genetics [Accessed 7 February 2023].

Rosenburg, N., Pritchard, J., Weber, J. (2002). Genetic Structure of Human Populations. Science. 298(-), p.-

Takezawa, Yasuko I., Smedley, Audrey and Wade, Peter. (2004). race human. [Online]. Britannica. Last Updated: Mar 13, 2023. Available at: https://www.britannica.com/topic/race-human [Accessed 29 March 2023].

107

Ayza Affan

COMPUTER SCIENCE

Ayza Affan chose to research the implementation of AI in genetic disease diagnosis for her ERP, specifically methods using computer vision to identify patterns of facial phenotypes. The project covered how AI is able to overcome the current methods and limitations of diagnosis, and some of the ethical challenges currently associated with such technologies. Ayza is studying Computer Science, Economics, Maths, and Further Maths for A Level and hopes to study Computer Science at university.

How can artificial intelligence be implemented in the diagnosis of genetic disease?

Overall, syndromic genetic conditions including Down’s, Angelman and Noonan syndrome have been shown to affect 8% of the population (Baird et al, 1988); their timely diagnosis is critical to minimise their severity, or to allow intervention before the onset of symptoms. Unfortunately, only a minority of patients currently receive a genetic diagnosis (Ferry et al., 2014), forcing the majority to live with symptoms greatly affecting their quality of life. Reaching an accurate genetic diagnosis can allow doctors to treat patients more suitably than by simply assessing the patient's clinical symptoms, as knowledge about the disease itself can allow a more appropriate response to the source of the patient’s condition, and therefore more effective treatment (Ferry et al., 2014). Recent developments in artificial intelligence (AI) have allowed its implementation in a variety of existing disease diagnosis techniques, creating the potential for quicker and more accessible diagnoses. However, concerns about the ethics of AI, specifically in medical applications, limit the current usage of such technologies.

Genetic syndromes (GSs) are conditions which are caused partially or fully as a result of one or more mutations in the genome, the entire genetic material of an organism. These mutations, which can also cause predisposition to the development of certain diseases, can be inherited, or developed due to environmental factors known as teratogens, which include viruses and toxins (Genetic Alliance; District of Columbia Department of Health, 2010). Varying in severity, GSs can be categorised into single gene (monogenic) disorders, including cystic fibrosis and Huntington’s disease; chromosomal disorders, the most common of which is Down’s syndrome; and polygenic disorders, the effect of many different genes and their complex interaction with the environment, including cardiovascular disease and many cancers (Genetic Alliance, 2016). While genetic disorders are rare individually, their sheer number means that they collectively affect approximately 1 in 14 people (Jackson et al., 2018).

There are a variety of approaches to genetic disease diagnosis. To reach a diagnosis, a clinical examination consisting of the following steps must be conducted: analysis of family history, genetic testing, and a physical examination (District of Columbia Department of Health, 2010).

According to the District of Columbia Department of Health, ‘family history can be a powerful screening tool and has often been referred to as the best “genetic test”’. This is because over time, both common and rare GSs have been observed to cluster in families. For example, an individual may be flagged as high risk for hereditary breast cancer, which accounts for 5-10% of breast cancers, if more than one first or second-degree relative has been diagnosed with it, particularly at a relatively young age. Other occurrences such as multiple stillbirths or miscarriages in a family can be indicative of a genetic disease. Hence, accurate records of a patient’s family history can be used to draw conclusions on the pattern of transmission of that disease, aiding clinicians in analysing patients’ risk and suggesting possible lifestyle adjustment, testing or treatment. However, many genetic disorders, including most cases of Down’s syndrome and other chromosomal disorders, are caused by sporadic mutations, which are random and spontaneous. Therefore, the presence, or lack thereof, of a certain GS in a patient’s family is not always useful in ruling the disease in or out.

Another diagnosis technique is genetic testing: this is when a patient’s DNA is analysed to discover genetic variations which could cause (susceptibility to developing) a genetic disease. Cytogenetic testing is carried out by comparing a patient’s DNA to that of non-affected individuals, examining every chromosome for abnormalities. New technologies such as next-generation sequencing have been developed which allow the entire exome, the part of DNA which codes for proteins, to be sequenced simultaneously, and have become more accessible to patients (Schon, 2021). However, currently the success of this technique is extremely limited, as ‘projects that apply next generation sequencing to patients in clinical settings fail to report genetic diagnoses for approximately 80% of cases’ (de Ligt et al., 2012). This is because, as ‘each individual carries approximately 4 million differences’ (Ferry et al., 2014), specialists face difficulties in interpreting the results; this includes distinguishing the deleterious from benign variants, and predicting their clinical significance (Sundaram et al., 2019).

A physical examination can reveal other clinical indicators of a GS; these can include developmental delays, congenital abnormalities, dysmorphologies of the heart, wide-set or droopy eyes, short fingers, and a tall stature. While these may seem insignificant in isolation, the presence of multiple of these rare features can be strongly indicative of a GS (District of Columbia Department of Health, 2010). As craniofacial alterations are present in 30-40% of genetic disorders (Ferry et al., 2014), and the observation of ‘recognizable facial features [is] highly informative to clinical geneticists’ (Gurovich et al., 2019), these specifically are often used by doctors to rule certain diseases in or out. For more frequently observed genetic conditions, it is often possible for a genetic expert to reach a diagnosis purely by observing the facial traits of a patient and comparing them with the known traits associated with each syndrome. For example, Down’s syndrome can be diagnosed quite reliably at birth by studying ‘the size and grouping of the facial features. The newly born infant with Down’s syndrome has eyes, nose and mouth which are not only individually relatively small but which are grouped more closely together towards the centre of the oval represented by the face and the forehead.’ (Strelling, 1976). However, as there is a large and complex range of phenotypes and syndromes, and genetic subtypes within syndromes, achieving the correct diagnosis can be a long, expensive process (Kole et al, 2009). Additionally, it is difficult for practitioners to know and recognise all the features associated with each disease, and to identify them quickly and correctly; hence, the diagnosis of rarer diseases by physical examination can be limited by the consultant’s prior experience.

As shown, the immense variety and complexity of genetic disorders can result in long, expensive and potentially inaccessible diagnoses, limited by the availability of trained genetic consultants and their experience. The implementation of computerised systems in diagnoses, particularly those involving computer vision, has shown great promise in mitigating these limitations and assisting clinicians in diagnoses.

Artificial intelligence (AI) is the training of computer systems to perform tasks that have previously required human intelligence. Recently, there have been developments in a subsection of AI algorithms which utilise deep learning, a technique which can ‘learn interpretable features from large and complex datasets using deep neural network architectures’ (Dias and Torkamani, 2019); these have allowed the implementation of AI in medicine and clinical diagnostics. The different classifications of tasks AI is able to solve, including ‘computer vision, time series analysis, speech recognition and natural language processing’, are ‘well suited to address specific types of clinical diagnostic tasks’ (Dias and Torkamani, 2019). These range from using natural language processing to create chatbots to enhance the existing abilities of genetic specialists to identify a disease through description of clinical symptoms, to using automatic speech recognition to diagnose potential patients of neurological conditions, through analysis of their basic elements of speech, such as tempo and pitch, and their use of language (Dias and Torkamani, 2019).

Computer vision, another subsection of AI and machine learning, aims to enable computers to analyse and interpret images, which can include photos or medical images or scans, to extract specific information relevant to a task to be solved. It generally only requires limited hardware including a sensor, typically a camera, to provide an input image, and a processor on which to run the software which is used to reduce the input image to useful information. Training a computer to effectively see and understand visual data is challenging, as although humans can process analogue data easily, for a computer to do the same the numerical basis of images, including photos and videos, must be analysed by complex algorithms and software (Rand, 2020). There are multiple methods of doing so, including pattern recognition, in which objects or sub-parts of an image are identified, and image classification, in which either the objects in an image are placed into broader categories, or the entire image is classified based on the group of patterns or objects present in it.

The process of supervised machine learning, by which a model is trained ‘by showing it examples of desired input-output behavior’ (Jordan, 2015), is popular for training algorithms made to classify images. In this process, the model is trained using a vast dataset in the form of input and output pairs, and the algorithm is left to spot patterns and make connections between these itself; this is a more efficient method of building an algorithm that is more flexible to variations in input than hard coding an output for every possible input the algorithm could face (Jordan, 2015). The goal in this context is to produce a function which is able to return the correct output, usually a classification or prediction, in response to a given input. In the context of diagnosis, the input is usually a medical image, video or scan, and the output a prediction of what condition is present in that image. These techniques can be trained to conduct binary classification, in which the output can only take one of two values; multiclass classification, where the output can take one of many values; and multilabel classification, in which the output can take a combination of multiple values. Systems can also be trained to rank multiple possible outputs based on their probabilities.
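
A minimal sketch of this supervised set-up is shown below, using scikit-learn on purely synthetic feature vectors and made-up class labels (no medical data or clinically meaningful features are involved); the same fit / predict / predict_proba pattern applies whether the task is binary or multiclass.

```python
# A toy supervised-learning example: the model is shown input-output pairs
# and learns a function mapping feature vectors to one of three classes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each row is a feature vector extracted from an image, and each label
# is one of three hypothetical syndrome classes (a multiclass setting).
X = rng.normal(size=(600, 20))
y = rng.integers(0, 3, size=600)
X[y == 1, :5] += 1.5        # give each class a slightly different signature
X[y == 2, 5:10] -= 1.5

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                     # learn from input-output pairs

print("test accuracy:", model.score(X_test, y_test))
print("ranked class probabilities for one sample:",
      model.predict_proba(X_test[:1]).round(2))
```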

One application of computer vision in genetic disease diagnosis is in assisting variant scientists in analysing genetic data obtained from genome or exome sequencing. In order to diagnose genetic diseases, ‘Geneticists must locate the causative (or pathogenic) variant’ (Lange, 2023) in the patient’s genome; however, due to the vast number of genes and variants, and uncertainties in their clinical significance, ‘Analysing data is becoming a bottleneck for labs’ (Lange, 2023). Computer vision techniques, by identifying ‘recurrent motifs in DNA sequences in a manner analogous to that in which pixel patterns are detected in images by convolutional neural networks’ (Dias and Torkamani, 2019), can aid scientists in locating variants which could be associated with certain diseases, potentially contributing towards quicker, more accurate diagnoses. While new patients can be compared to large datasets containing previous patients’ data, it has been demonstrated that ‘common missense variants [where one amino acid is substituted for another] in other primate species are largely clinically benign in human, enabling pathogenic mutations to be systematically identified by process of elimination’ (Sundaram et al., 2018). In this study, a deep neural network was trained to identify pathogenic variants in the genomes of rare disease patients, using a training dataset consisting of the genetic sequences of six non-human primate species, including chimpanzees, thus encompassing hundreds of thousands of regular variants. It was able to achieve an impressive 88% accuracy.

Another example of how computerised systems can be implemented in genetic disease diagnosis is in aiding clinicians with facial analysis. Computer vision algorithms can be trained to ‘extract phenotypic features from medical images in order to provide recommendations for molecular testing in a manner similar to that performed by a skilled pathologist or dysmorphologist’ and have even been shown to ‘exceed the capabilities of human experts’ (Dias and Torkamani, 2019) at doing so in some cases. This is done through analysis of patients’ facies and comparison with previous images of patients known to have a given disease, allowing input images of a patient’s face to be mapped to possible disease hypotheses. As previously mentioned, ‘alterations in the face or skull are present in 30-40% of genetic disorders’ (Ferry et al., 2014), so the training of algorithms to recognise these craniofacial dysmorphisms has the potential to positively impact a large proportion of those suffering with undiagnosed genetic syndromes.

The studies approaching this task have shown that the systems can be adjusted to produce different outputs. This can be to distinguish between syndromic and unaffected subjects (binary classification), to classify one syndrome type against other subjects with or without a different genetic disease (binary classification), or to identify the correct syndrome from a range of possible syndromes (multiclass classification). ‘The most common methods consist of three stages: face and landmarks detection, feature extraction and classification’ (Gurovich et al., 2017). First, the face is located in the input image and specific facial landmarks are marked; this allows the image to be accurately aligned and localised. Then, feature extraction is conducted. This is a process in which the initial large set of raw data, with many variables that require a lot of computing resources to process, is reduced to more manageable groups by selecting and combining variables. The data is then easier to process, without having lost any of the original data. Next, a method for classification is selected, and the model is trained using training data. Finally, the model is evaluated using measurements such as sensitivity and specificity and can be tested against clinical physicians. Sensitivity measures the algorithm’s ability to correctly identify patients who have a certain disease, whereas specificity measures its ability to correctly identify those who do not have the disease; these are both measures of the model’s accuracy.
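
Both evaluation measures can be computed directly from a confusion matrix; the short sketch below uses invented binary labels purely to show the calculation, not any real clinical data.

```python
# Sensitivity and specificity from a confusion matrix (illustrative labels only).
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])   # 1 = has the syndrome
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])   # model's predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # proportion of affected patients correctly flagged
specificity = tn / (tn + fp)   # proportion of unaffected subjects correctly cleared
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```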

One example of this is an algorithm developed by Quentin Ferry et al. at the University of Oxford in 2014. Their algorithm ‘extracts phenotypic information from ordinary non-clinical photographs’ and, using recent developments in computer vision technology, ‘models human facial dysmorphisms in a multidimensional 'Clinical Face Phenotype Space'’. There are three key benefits to their algorithm. By situating patients in this ‘space’ in the ‘context of known syndromes’ and comparing their phenotypes to recognised patterns associated with a particular disease, possible diagnoses can be generated. This grouping of patients by phenotype is also possible when there is no exact syndrome diagnosis yet, which can further aid the identification of the disease in the future. Finally, it is estimated that this will significantly reduce the range of potential diagnoses for patients with a suspected genetic syndrome by 27.6-fold; the developers ‘envisage Clinical Phenotype Space becoming a standard tool to support clinical genetic counseling’. The algorithm was constructed using three main steps or modules:

The first of these was facial detection, for which a variety of pre-existing open-source algorithms were used. Facial detection algorithms are used to locate the face of interest in a given 2D image, often in the form of a box which bounds the centred face.

Next, an automatic annotation algorithm was trained and tested. The function of this algorithm is to ‘identify 9 central facial feature points… then used to initialize the placement of an additional 27 feature points’; this allowed the algorithm to effectively understand where certain features are on the face, and hence to make comparisons between the placements, shapes, and sizes of these features and those of other faces. This has greatly improved the clinical utility of such 2D imaging studies, as previous studies relied on manual annotation of features in controlled settings. To construct this automatic annotation algorithm, an image database of 2878 images comprised of ‘1515 healthy controls and 1363 pictures for eight known developmental disorders’ was constructed, using public datasets from the internet, and the facial features in each image were manually annotated. The images in the training dataset were varied, as there were ‘minimal restrictions’ on image selection: these were that both eyes were visible, and that an expert clinician was able to verify the diagnosis. This allowed the building of an algorithm which is able to adapt to clinically insignificant variations in images, including ‘lighting, pose, and image quality which would otherwise bias analyses’. To implement more ‘robust, accurate and reliable annotation approaches’, the feature location algorithm returned a confidence index alongside the 9 main located features; transformations such as rotations and reflections could then be applied to each image with a low confidence index to produce 100 variations of the original, each of which could be input into the feature location algorithm again. These steps allowed further refinement of the model, reducing inaccuracies in annotations.

From the ‘constellation of facial landmarks’, two feature vectors were created. One described the appearance of the patch surrounding each of the 9 central landmarks, as a ‘concatenation of pixel intensities’, while the other was a shape vector, ‘constructed as the normalised pairwise distances between all 36 facial feature points’. Feature vectors are series of numbers describing one or multiple features or aspects of an object; they are useful as they quantify qualitative data, such as properties of images, in this case the face, allowing them to be more easily analysed and compared by computers.
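
As a small illustration of the second of these, the sketch below builds a shape-style feature vector from normalised pairwise distances between 36 landmark coordinates; the coordinates are random stand-ins rather than real annotations, and dividing by the sum is only one plausible normalisation choice, not necessarily the one Ferry et al. used.

```python
# A sketch of a 'shape' feature vector: normalised pairwise distances between
# facial landmarks (random coordinates stand in for real annotated landmarks).
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
landmarks = rng.uniform(0, 1, size=(36, 2))          # 36 (x, y) landmark positions

pairwise = np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(36), 2)])
shape_vector = pairwise / pairwise.sum()             # normalise so overall scale is irrelevant

print(len(shape_vector))   # 36 * 35 / 2 = 630 numbers describing the face's geometry
```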

Next, an Active Appearance Model was used to generate an average face for a set of images of patients with the same genetic syndrome, to represent the standard, consistent phenotypes or features for each syndrome. These were constructed using the pattern of facial landmarks in individual images for each of the syndromes, to generate an ‘average shape constellation’. This was then used to create an ‘average face mesh’, for both control (non-syndromic) and syndromic groups of patients.

‘Distortion graphs representing the characteristic deformation of syndrome faces relative to the average control face. Each line reflects whether the distance is extended or contracted compared with the control face. White - the distance is similar to controls, blue - shorter relative to controls, and red - extended in patients relative to controls.’ (Ferry et al., 2014) Diagnostically relevant facial gestalt information from ordinary photos eLife 3:e02020 <https://doi.org/10.7554/eLife.02020>

113

‘Overview of the computational approach and average faces of syndromes’ (Ferry et al., 2014) Diagnostically relevant facial gestalt information from ordinary photos eLife 3:e02020 <https://doi.org/10.7554/eLife.02020>


Clinical Face Phenotype Space enhances the separation of different dysmorphic syndromes. The graph shows a two-dimensional representation of the full Clinical Face Phenotype Space, with links to the 10 nearest neighbors of each photo (circle) and photos placed with force-directed graphing. The Clustering Improvement Factor (CIF, fold better clustering than random expectation) estimate for each of the syndromes is shown along the periphery. Ferry et al., 2014 Diagnostically relevant facial gestalt information from ordinary photos eLife 3:e02020

However, there are limitations to this method which are yet to be mitigated. Some of these concern the dataset, of which ‘the average image quality… was low’, and whose size was relatively small; the training dataset was fewer than 3000 images. Limited representation of each genetic syndrome, and a narrow spectrum of phenotypes represented, could have had an impact on the accuracy of diagnosis. Additionally, while the model’s use of simply one normal, 2D photograph as an input image makes the technology more accessible ‘to any clinician worldwide with access to a camera and a computer’, the accuracy of the algorithm suffers from the fact that only 36 landmarks are currently used, which represent only the frontal phenotypes. Hence, ‘valuable information’ for diagnosis from the full cranium and other profiles was missed.

Another example of such an algorithm is DeepGestalt, a convolutional neural network (CNN)-based facial analysis framework developed by Yaron Gurovich et al., which also uses deep learning algorithms and computer vision to quantify similarities between over 215 genetic syndromes. CNNs are a powerful tool used in computer vision to identify patterns, in which images are passed through many convolutional layers in turn, each of which is able to detect increasingly sophisticated shapes. The developers aimed to create a function able to take an unconstrained input image of a face and output a list of potential genetic syndromes identified, ranked by a similarity score; the syndromes with the highest similarity scores would then be investigated further as possible diagnoses.

Before analysing an input image, it was pre-processed. First, the face was detected using cascaded DCNNs. Next, key facial landmarks were detected and annotated; these were used to ‘geometrically normalise’ and align the face. The image was then cropped into facial regions, which were individually fed into DCNNs. The algorithm analysed each facial region separately and produced a vector representing its similarity to each syndrome recognised by the model. The vectors of each region for each syndrome were then aggregated to give a score for the face as a whole and ranked by the overall similarity scores; the disorders at the top of the list with the highest similarity scores signify the most likely diagnoses.
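
The aggregation step can be pictured with the toy sketch below: each cropped region contributes a vector of per-syndrome scores, and the regional vectors are combined into a single ranked list. The syndrome names, region count and scores are all invented, and DeepGestalt's real aggregation is not claimed to be the plain mean used here.

```python
# Simplified sketch of aggregating per-region similarity scores into one ranking.
import numpy as np

syndromes = ["Syndrome A", "Syndrome B", "Syndrome C", "Syndrome D"]

# One row per cropped facial region (e.g. eyes, nose, mouth), one column per syndrome.
region_scores = np.array([
    [0.10, 0.55, 0.25, 0.10],
    [0.05, 0.60, 0.20, 0.15],
    [0.20, 0.40, 0.30, 0.10],
])

face_scores = region_scores.mean(axis=0)        # combine regions into one score per syndrome
ranking = sorted(zip(syndromes, face_scores), key=lambda kv: kv[1], reverse=True)
for name, score in ranking:
    print(f"{name}: {score:.2f}")               # highest-scoring syndromes are investigated first
```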

In three initial experiments, the algorithm outperformed experts in identifying the target syndrome, and during its final experiment, a mock clinical setting situation, DeepGestalt achieved ‘91% top-10 accuracy in identifying the correct syndrome on 502 different images’, covering 92 different syndromes. Top-10 accuracy refers to the likelihood that the actual syndrome appears among the ten highest-ranked suggestions. These figures show that DeepGestalt ‘holds the promise of making expert knowledge more accessible to healthcare professionals’, which is especially beneficial for those who are in specialties other than genetics. The algorithm is currently trained on over 26,000 patient cases, ‘consisting of tens of thousands of validated clinical cases’, possibly allowing more certain and accurate diagnoses than algorithms trained on smaller datasets, such as that of Ferry et al. The model can also be optimised or trained to conduct binary classification, to identify a single syndrome from others, or to search for specific phenotypic subsets. Notably, this model works on the assumption that all input images are of a patient with a genetic syndrome and is therefore currently unsuitable for clinical use where it is unknown whether the patient has a genetic syndrome or not.
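
The metric itself is straightforward to compute; the sketch below evaluates top-k accuracy on random scores for 92 hypothetical syndromes purely to make the definition concrete (a random ranking lands near 10/92, far below the 91% reported for the trained model).

```python
# Top-k accuracy: a prediction counts as correct if the true syndrome appears
# anywhere among the model's k highest-ranked suggestions (illustrative data only).
import numpy as np

def top_k_accuracy(true_labels, score_matrix, k=10):
    # score_matrix[i, j] = similarity of image i to syndrome j
    top_k = np.argsort(score_matrix, axis=1)[:, -k:]
    hits = [true in row for true, row in zip(true_labels, top_k)]
    return np.mean(hits)

rng = np.random.default_rng(0)
scores = rng.random((500, 92))                 # 500 images, 92 candidate syndromes
truth = rng.integers(0, 92, size=500)
print(top_k_accuracy(truth, scores, k=10))     # ~0.11 for random scores
```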

The most significant limitation to this method of diagnosis is that, due to the rarity of genetic syndromes, it can be difficult to obtain a sufficiently large and general dataset of images with which to train the model; with the growing complexity of algorithms and depth of neural networks, these factors are critical to the model’s accuracy. Additionally, as ‘human bias, such as gender and racial bias, may not only be inherited by but also amplified by AI systems’ (Larrazabal et al., 2020), a balanced training dataset is essential to avoid the creation of algorithmic bias: systematic and repeated errors in the outcomes of a computer system. The previously mentioned DeepGestalt was shown to display ‘poor accuracy for the identification of Down syndrome in individuals of African versus European ancestry (36.8% versus 80%)’; however, retraining the model with a more diverse dataset improved the figure for those of African ancestry to 94.7% (Dias and Torkamani, 2019), displaying the harmful effects of underrepresentation in training datasets and how they can be resolved. Another study showed that ‘gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis based on convolutional neural networks (CNNs), with significantly lower performance in underrepresented groups’, and so concluded that ‘diversity should be prioritized when designing databases used to train machine learning-based CAD [computer-aided diagnosis] systems.’ (Larrazabal et al., 2020).

There are also potential ethical issues, the main one being genetic discrimination: this is when an individual is prejudiced against due to their genetic condition or carrier status. In a study by E. Virginia Lapham et al., conducted using members of genetic support groups, it was found that ‘25% of the respondents or affected family members believed they were refused life insurance, 22% believed they were refused health insurance, and 13% believed they were denied or let go from a job’ due to their condition. In the United States, life insurance for adults is ‘widely available, and only 3% who apply for coverage are declined’; this is roughly an eighth of the figure observed for respondents of the survey, suggesting that their (presumed) carrier status of a genetic disorder could have had a role in their rejection. The ability of people with genetic disorders to access affordable health insurance has the potential to determine whether they are able to access treatment necessary to their survival. Additionally, while these respondents had the choice to reveal their genetic condition to their employers or insurers, it is possible that with technologies such as computer vision being implemented in diagnosis, this choice is taken away. Appropriate photographs with which to make these diagnoses may be easily available on the Internet, potentially allowing genetic discrimination without the other party’s knowledge.

There were, however, some limitations to this study. Since insurers do not need to provide reasons for the rejection of applicants, it cannot be known with certainty whether the respondents were rejected due to their genetic condition. There were also limitations to the sampling method. Firstly, the sample size was relatively small, only 332 people, meaning that the results may not have been entirely representative of all those who suffer with a genetic condition. Secondly, as random sampling was not used and all the respondents were volunteers, it is possible that those who volunteered were motivated by the fact that they had experienced genetic discrimination, which could have caused biased results.

Other potential ethical issues include the obtaining and privacy of training data, the transparency of the function of these algorithms to their users, and liability for possible prediction errors. Some of these issues, such as transparency as to how the diagnoses are made, can be addressed through legislation requiring algorithm developers to provide information such as the source code or model weights behind the algorithms, or through other ‘common’ methods such as the creation of a ‘visual overlay of the portions of an image that contribute most strongly to an output prediction’ (E. Virginia Lapham et al., 1996), an effective solution for image-based prediction or diagnosis systems. This technique was implemented in the DeepGestalt system, which uses a ‘heat-map visualisation’ to simply display the similarity of areas of the input image to each suggested GS (Gurovich et al., 2019). This study also suggests requiring digital footprints of the use of such technologies to prevent their abuse. Solutions to other issues, such as questions about responsibility and liability for errors, are still highly debated.

Overall, it is clear that there is great potential in implementing AI in genetic disease diagnosis, especially to increase the accessibility and accuracy of diagnoses on a larger scale while requiring limited hardware. This improvement can allow patients to receive more suitable treatments, improving their quality or length of life. However, careful legislation and monitoring of these systems’ creation, use and access are necessary to ensure equity and privacy as they are implemented.

References:

1. Jackson, M., Marks, L., May, G. H. W., & Wilson, J. B. (2018). The genetic basis of disease. Essays in biochemistry, 62(5), 643–723.

Available at: <https://doi.org/10.1042/EBC20170053> (Accessed: 25/01/2023)

2. Genetic Alliance UK, 2016. Genetic Disorders [online]

Available at: <https://geneticalliance.org.uk/information/learn-about-genetics/genetic-disorders/> (Accessed 25/01/23)

3. Genetic Alliance; District of Columbia Department of Health. Understanding Genetics: A District of Columbia Guide for Patients and Health Professionals. Washington (DC): Genetic Alliance; 2010 Feb 17. Chapter 2, Diagnosis of a Genetic Disease. Available from: https://www.ncbi.nlm.nih.gov/books/NBK132142/ (Accessed 30/01/23)

4. Strelling, M. K. (1976). Diagnosis Of Down’s Syndrome At Birth. The British Medical Journal, 2(6048), 1386–1386. http://www.jstor.org/stable/20412466 (Accessed 30/01/23)

5. Baird, P. A., Anderson, T., Newcombe, H. & Lowry, R. Genetic disorders in children and young adults: a population study. Am. J. Hum. Genet. 42, 677–693 (1988) (Accessed 30/01/23)

6. Yaron Gurovich; Yair Hanani; Omri Bar; Guy Nadav; Nicole Fleischer; Dekel Gelbman; Lina Basel-Salmon; Peter M. Krawitz; Susanne B. Kamphausen; Martin Zenker et al, 2019. Identifying facial phenotypes of genetic disorders using deep learning. Nature Medicine, [ejournal] DOI: 10.1038/s41591-018-0279-0 (Accessed 10/11/22)

7. Kole, A. et al. The Voice of 12,000 Patients: experiences and expectations of rare disease patients on diagnosis and care in Europe. Eurordis http://www.eurordis.org/IMG/pdf/voice_12000_patients/EURORDISCARE_FULLBOOKr.Pdf (2009)

8. Rand, L., Boyce, T., & Viski, A., 2020. Computer Vision In Emerging Technologies and Trade Controls: A Sectoral Composition Approach (pp. 68–86). Center for International & Security Studies, U. Maryland.

Available at: http://www.jstor.org/stable/resrep26934.9 (Accessed: 08/02/2023)

9. Jordan, M. I., & Mitchell, T. M., 2015. Machine learning: Trends, perspectives, and prospects. Science, 349(6245), 255–260.

Available at: http://www.jstor.org/stable/24748571 (Accessed 05/02/23)

10. Quentin Ferry, Julia Steinberg, Caleb Webber, David R FitzPatrick, Chris P Ponting, Andrew Zisserman, Christoffer Nellåker (2014) Diagnostically relevant facial gestalt information from ordinary photos eLife 3:e02020

Available at : https://doi.org/10.7554/eLife.02020 (Accessed: 06/02/23)

11. Katherine Schon, 2021. BMJ Whole genome sequencing helps pinpoint a genetic diagnosis for patients [online]

doi: https://doi.org/10.1136/bmj.n.2680 (Accessed: 07/02/23)

12. De Ligt J, Willemsen MH, Van Bon BW, Kleefstra T, Yntema HG, Kroes T, Vulto-Van Silfhout AT, Koolen DA, De Vries P, Gilissen C, Del Rosario M, Hoischen A, Scheffer H, De Vries BB, Brunner HG, Veltman JA, Vissers LE, 2012. Diagnostic exome sequencing in persons with severe intellectual disability. The New England Journal of Medicine 367:1921-1929.

Available at: https://doi.org/10.1056/NEJMoa1206524 (Accessed: 28/03/2023)

13. Lapham, E. V., Kozma, C., & Weiss, J. O. (1996). Genetic Discrimination: Perspectives of Consumers. Science, 274(5287), 621–624.

Available at: http://www.jstor.org/stable/2899643 (Accessed 27/05/2023)

117

14. Larrazabal, A. J., Nieto, N., Peterson, V., Milone, D. H., & Ferrante, E. (2020). Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis. Proceedings of the National Academy of Sciences of the United States of America, 117(23), 12592–12594.

Available at: https://www.jstor.org/stable/26968297 (Accessed 28/05/2023).

15. Dias, R., Torkamani, A. Artificial intelligence in clinical and genomic diagnostics. Genome Med 11, 70 (2019).

Available at: https://doi.org/10.1186/s13073-019-0689-8 (Accessed 01/06/2023)

16. Sundaram, L., Gao, H., Padigepati, S. R., McRae, J. F., Li, Y., Kosmicki, J. A., Fritzilas, N., Hakenberg, J., Dutta, A., Shon, J., Xu, J., Batzoglou, S., Li, X., & Farh, K. K. (2018). Predicting the clinical impact of human mutation with deep neural networks. Nature genetics, 50(8), 1161–1170.

Available at: https://doi.org/10.1038/s41588-018-0167-z (Accessed 01/06/2023)

17. Lange, A. (2023) The future of AI in genetic testing, The AI Journal. Available at: https://aijourn.com/the-future-of-ai-in-genetic-testing/ (Accessed: 2023).

119

Daniel-Samuel Bayvel Zayats

ENGINEERING

“Influencing Randomness” delves into the controllability of dielectric breakdown within acrylic mediums. His project questions the inherent randomness of this phenomenon while exploring avenues for neural network-based simulation, modelling, and identification of this effect. Through the additional creation of the Weizmann Institute Competition safe, Daniel-Samuel was able to demonstrate the physics principles involved. He is currently studying Maths, Further Maths, Physics and Russian, intending to pursue Engineering at university.

Influencing Randomness

Can the dielectric breakdown of an acrylic (PMMA) medium be controlled and constrained? Is this dielectric breakdown really random? And how can it be simulated, modelled and identified using neural networks?

Weizmann Institute Safecracking Competition - Physics Practical Project

1. Introduction

A Lichtenberg figure (LF) is the random formation of ‘branches’ in an insulating material, formed through a process called electrical treeing or dielectric breakdown. When a high-voltage electrical breakdown occurs, it creates a path of ionized plasma through the insulating material (Wood, M. 2015) (Takahashi, Y. 1979). This discharge can be achieved with a high voltage source (10-20 MeV) or naturally, for example from lightning strikes or static discharge in electrical equipment (Takahashi, Y. 1979).

LFs, named after the German physicist Georg Christoph Lichtenberg, who first studied this phenomenon in the 18th century, form a complex network of channels or branches within the breakdown medium (Wood, M. 2015). This network can take on many different shapes, including tree or fern branches or fractal-like formations. In this paper, the factors affecting the formation of Lichtenberg figures are described, as well as how they can be influenced, and whether their formation is really random.

The formation of Lichtenberg figures, and more generally dielectric breakdown, has applications in modelling lightning strikes, river deltas, material stress fracturing, moulds to support the growth of artificial vascular tissue (Antonov, V. 2020) (Hsu. J. 2009), optimisation algorithms (Pereira. J. 2021), and even art! (Museum Trade 2018)

This project explores methods of generating LFs, and methods of investigating the effect that standing waves and electric or magnetic fields have on the formation of LFs, as well as mathematical techniques to investigate their randomness (Morris, A. 1951). The similarity of this dielectric breakdown to the diffusion limited aggregation (DLA) algorithm is examined, and the simulation of this supposedly random electrical treeing is also explored. DLA is the process in which particles move along random paths, due to Brownian motion, and diffuse and aggregate to form complex patterns and networks. DLA is already widely used to model dendritic growth, which has major applications in systems that freeze, and it is possible that DLA can also be used to describe the generation of networks and fractals formed when Lichtenberg figures are created (Bourke. P. 1991).
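
A minimal DLA sketch is given below, in Python, purely to illustrate the aggregation mechanism described above; the grid size, number of walkers, step limit and sticking rule are arbitrary illustrative choices rather than parameters from this project.

```python
# A minimal diffusion-limited aggregation (DLA) sketch: walkers perform random
# walks on a grid and stick when they touch the growing cluster, producing
# branched, Lichtenberg-like patterns.
import numpy as np

rng = np.random.default_rng(0)
N = 61
grid = np.zeros((N, N), dtype=bool)
grid[N // 2, N // 2] = True                      # seed, analogous to the discharge point

def touches_cluster(x, y):
    # True if the walker's cell or any neighbouring cell is already occupied.
    return grid[max(x - 1, 0):x + 2, max(y - 1, 0):y + 2].any()

steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
for _ in range(400):                             # release 400 walkers
    x, y = rng.integers(0, N, size=2)
    for _ in range(5000):                        # walk until sticking or giving up
        if touches_cluster(x, y):
            grid[x, y] = True
            break
        dx, dy = steps[rng.integers(0, 4)]
        x = min(max(x + dx, 0), N - 1)
        y = min(max(y + dy, 0), N - 1)

print("occupied cells in the aggregate:", int(grid.sum()))
```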

121
Fig 1: A Lichtenberg figure contained in a block of acrylic (Captured Lightning. 2022). Fig 2: A sequence of zooms showing the self-similar properties of Lichtenberg figures (Captured Lightning. 2022)
Fig 1: A Lichtenberg Figure contained in a block of acrylic (Captured Lightning. 2022). Fig 2: A sequence of zooms showing the self-similar proper8es of Lichtenberg figures (Captured Lightning. 2022)

2. Method of Investigation

To accurately and fairly investigate the formation of Lichtenberg figures, polymethyl methacrylate (PMMA) blocks of uniform size (50×50×50 mm) and shape could be used as the dielectric breakdown medium, because they are transparent and are excellent electrical insulators. Incident electrons (conventionally 10-20 MeV) would create an area of charge space within the solid. Some of the electrons would collide with molecules inside, ionising them. These positive ions are attracted to the negative space and form a layer of positive charge on the surface. Therefore, there is only a very small net charge and electric field created (Morris, A. 1951).

The tip of a conductive object would be used to penetrate the surface; a simple nail can be used. The tip has an external PD applied across it. Once the potential difference across the PMMA exceeds the impulse breakdown strength, the excess negative charge from the negative internal space creates, and flows through, branches which originate from the tip of the conductive object (Wood, M. 2015) (Morris, A. 1951).

It would be interesting to investigate the effect of static and variable magnetic and electric fields on the path of the electrons (Morris, A. 1951). In addition, standing electromagnetic waves have been shown to exert a ponderomotive force on electrons (Freimund. D. 2001) (Cary, J. 1980). The ponderomotive force is experienced by charged particles as they pass through an oscillating electromagnetic field (G. Khazanov. 2013). This results in some electrons remaining stationary, in nodes of the standing waves, and some having their path affected as they pass through the medium. The ‘general’ force exerted on the particle by the ponderomotive force (F_P) is calculated by (Cary, J. 1980):

F_P = e² / (4mω²) · ∇(E²)

Where e is the charge on the particle, m is the mass of the particle, ω is the angular frequency of the field oscillation and E is the amplitude of the field.
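As a rough sense-check of the magnitudes involved, the sketch below evaluates this expression numerically for an electron, approximating the gradient term ∇(E²) by a finite difference ΔE²/Δx. The field amplitude, frequency and length scale are illustrative assumptions chosen only to show how the formula is used, not figures from the cited sources.

import math

# Physical constants (SI units)
e = 1.602e-19      # electron charge, C
m = 9.109e-31      # electron mass, kg

# Illustrative field parameters (assumptions, not from the cited sources)
f = 2.45e9                 # field oscillation frequency, Hz
omega = 2 * math.pi * f    # angular frequency, rad/s
E_high = 1.0e5             # field amplitude at one point, V/m
E_low = 0.0                # field amplitude a short distance away, V/m
dx = 1.0e-3                # separation over which the amplitude changes, m

# Finite-difference estimate of the gradient of E^2
grad_E_squared = (E_high**2 - E_low**2) / dx

# Ponderomotive force F_P = e^2 / (4 m omega^2) * grad(E^2)
F_p = e**2 / (4 * m * omega**2) * grad_E_squared
print(f"Ponderomotive force is roughly {F_p:.3e} N")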

By covering areas of the breakdown medium with lead plates, some areas can be left unaffected, allowing for even further influence over the formation of LFs (Takahashi, Y. 1979).

Hopefully, these effects can be used to constrain the path which the electrons take, to see the effect on the formation of LFs, and therefore the effect on the randomness of the LFs generated. A detector such as the MicroMegas detector can also be used to detect electrons passing through the acrylic, which is useful to see what portion of the electrons are actually contributing to the dielectric breakdown (Andriamonje, S. 2010). A possible method to generate Lichtenberg figures would be:

Fig 3: A Lichtenberg figure being formed by hitting a nail into the 'charge space' of the acrylic block (Gray, T. 2008)

1. The blocks will be thoroughly cleaned and sterilised, and then placed within the controlled environment; an electric or magnetic field will be applied, followed by the process of creating the figures (detailed above).

2. The amplitude and frequency of the electric and magnetic fields, as well as the standing waves that will be applied, will be controlled and varied throughout the experiment: the magnetic field by varying the distance, position and strength of the magnet; the standing waves by varying the wavelength (through the use of different transducers) and the separation distance between the sources; and the electric field by varying its intensity.

3. The Lichtenberg figures formed under each condition will be observed and photographed using a high-resolution camera, microscope, or a more advanced method such as transmission electron microscopy (TEM).

A thin slice of the acrylic breakdown medium containing the LF would be imaged using a TEM. TEM uses a beam of electrons instead of light to visualise the sample's structure; as the wavelength of an electron is much smaller than the wavelength of light, this results in an image with much higher resolution (up to 0.2 nm) (Encyclopaedia Britannica. 2023). The resultant images can be captured using direct electron detectors, which consist of an array of pixels which, when hit by an electron, generate a signal proportional to the energy of the incident electron – these signals are then converted into digital images (AZO Nano. 2006). These recorded images could then be processed to virtually construct the three-dimensional structure of the sample (many 2D 'slices' of the sample combined into 3D) (Meador, M. 2009), which can then be closely analysed on a computer, or by a neural network.

Fig 5: An example of what a fractal-like formation would look like with TEM imaging. This is an image of a carbon nanotip (Meador, M. 2009)

4. A MicroMegas detector, which is a gaseous particle detector (it amplifies the charge produced by ionising particles passing through the noble-gas-filled detector), will determine whether the electrons' paths are affected by the acrylic (Andriamonje, S. 2010).

5. The data collected will be analysed. The branches can be analysed using 2D or 3D fractal analysis (see the fractal analysis section below).

6. Statistical analysis will be carried out to determine the significance of the observed effects.

7. The experiment will be repeated to ensure data reproducibility.

123
Fig 4: A diagram showing the function of a transmission electron microscopy setup (Encyclopaedia Britannica. 2023)

3. Fractal Analysis:

(a) 2D Fractal analysis

Fractal analysis can be used to test the randomness and complexity of the Lichtenberg figures produced in this experiment (L. Niemeyer. 1984). To quantify the fractal properties of the patterns, the box-counting method can be used. The fractal dimension of each Lichtenberg figure is calculated by dividing the pattern into a grid of boxes of equal size and counting the number of boxes required to cover the pattern. This is repeated for different box sizes for the same fractal. A graph can then be plotted of log N against log(1/r), where N is the number of boxes required to cover the fractal shape and r is the side length of the boxes, so the fractal dimension is log N / log(1/r).

The fractal dimension is then obtained as the slope of the best-fit line through the data points. The calculated fractal dimensions are compared to those of a known reference such as a Brownian motion model. If the Lichtenberg figures exhibit fractal dimensions that are significantly lower than those of the Brownian motion model, this would indicate a lower degree of randomness and a higher degree of regularity in the patterns. If the product moment correlation coefficient (PMCC) of the data points about the best-fit line is close to 1, there is evidence that the fractal has self-similar properties, because it shows that zooming into the fractal has no effect on how it fills space, so it looks similar at different levels of magnification.
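The following is a minimal Python/NumPy sketch of the box-counting procedure described above, applied to a binary 2D image of a figure (nonzero pixels belong to the branch). The choice of box sizes and the toy test image are illustrative assumptions; the slope of the log-log fit estimates the fractal dimension and the correlation coefficient plays the role of the PMCC.

import numpy as np

def box_count(image, box_size):
    """Count boxes of side `box_size` (pixels) that contain part of the figure."""
    h, w = image.shape
    count = 0
    for y in range(0, h, box_size):
        for x in range(0, w, box_size):
            if image[y:y + box_size, x:x + box_size].any():
                count += 1
    return count

def fractal_dimension(image, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension and the PMCC of the log-log fit."""
    ns = [box_count(image, s) for s in box_sizes]
    log_inv_r = np.log(1.0 / np.array(box_sizes, dtype=float))
    log_n = np.log(np.array(ns, dtype=float))
    slope, _ = np.polyfit(log_inv_r, log_n, 1)       # gradient = fractal dimension
    pmcc = np.corrcoef(log_inv_r, log_n)[0, 1]       # closeness to 1 suggests self-similarity
    return slope, pmcc

# Toy test: a straight diagonal line should give a dimension close to 1.
img = np.zeros((128, 128), dtype=bool)
np.fill_diagonal(img, True)
d, r = fractal_dimension(img)
print(f"Estimated dimension: {d:.3f}, PMCC: {r:.3f}")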

Graph: 2D fractal analysis of an example branch end (log(N) plotted against log(1/r)); calculated fractal dimension = 1.1296.

Fig 6: A graph to analyse the fractal properties of an example Lichtenberg figure branch end, and to calculate its fractal dimension

The next step would be to explore 3D fractals, as analysing the randomness of a 3D fractal rather than a 2D fractal will give a much more accurate representation of how the randomness has been affected, and in which plane or direction.

4. 3D fractal analysis

When looking at the relationship between 2D and 3D fractal dimensions, it has been found that the relationship varies depending on the direction in which the 2D slices are taken - thus there is no relationship for the general case.

Therefore, in order to calculate the 3D fractal dimension, a modified version of the box-counting method must be utilised. This works as follows:

• Let V = [v₁, v₂, …, vₙ] be the set of vertices that make up the figure. For this experiment, these would be the points at which the figure branches, together with the end points.

• For each of these vertices, a triple (x, y, z) will be defined as an indication of its distance from the origin (i.e. the source of the electrical current).

• The dimension is calculated by taking spheres centred at each of the vertices and expanding them to have radius r. The number of points found within or on each of the spheres is counted (denoted by V(r)). The dimension can then be calculated using the following formula:

• D = 3 - (log V(r) / log r)

125
Fig 7: An example of a Lichtenberg figure branch end, with 3 different grid sizes, which was used for the example 2D fractal analysis. These three grid sizes are shown by the red dots on the graph.

• A graph can be plotted of log V(r) against log r over a variety of r to provide a gradient m, giving the dimension D = 3 - m (a computational sketch of this procedure is given below). Analysis like that done on the 2D figure will apply here as well.
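Below is a minimal Python/NumPy sketch of the sphere-counting step described in the bullet points above: for each radius r, V(r) counts the points within distance r of each vertex, and the gradient of log V(r) against log r is then subtracted from 3, as defined above. The random vertex cloud and the set of radii are illustrative assumptions standing in for the branch and end points extracted from a real figure.

import numpy as np

def three_d_dimension(vertices, radii):
    """Estimate D = 3 - m, where m is the gradient of log V(r) vs log r."""
    vertices = np.asarray(vertices, dtype=float)
    # Pairwise distances between all vertices (branch points and end points).
    diffs = vertices[:, None, :] - vertices[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))

    v_of_r = []
    for r in radii:
        # For each vertex, count points within or on its sphere of radius r,
        # then average over all vertices.
        counts = (dists <= r).sum(axis=1)
        v_of_r.append(counts.mean())

    log_r = np.log(radii)
    log_v = np.log(v_of_r)
    m, _ = np.polyfit(log_r, log_v, 1)      # gradient of the log-log plot
    pmcc = np.corrcoef(log_r, log_v)[0, 1]
    return 3.0 - m, pmcc

# Illustrative stand-in for extracted branch/end points (units arbitrary).
rng = np.random.default_rng(0)
points = rng.uniform(0.0, 50.0, size=(200, 3))
radii = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
D, r_value = three_d_dimension(points, radii)
print(f"Estimated dimension D = 3 - m: {D:.3f} (PMCC {r_value:.3f})")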


A higher fractal dimension (D) is a direct indicator of the irregularity of the branches and of how difficult it is to predict their path (L. Niemeyer. 1984). A strong value of the PMCC indicates that the fractal is repeated down to the molecular level and therefore expresses self-similarity. Both variables can be studied through the graphs to determine whether or not the figures formed are random. The applications of studying the containment of dielectric breakdown are therefore wide-ranging, particularly regarding electrical treeing.

Graph 1: Average 3D fractal analysis for Lichtenberg figures with no external forces in 50 × 50 × 50 mm PMMA (log(V(r)) plotted against log(r)).

Fig 8: A graph showing an example 3D fractal analysis for a Lichtenberg figure branch end; the gradient gives D = 3 - 1.3655 = 1.6345.

Fig 9: A graph demonstrating the standard PMCC and fractal dimension expected for a Lichtenberg figure formed under a 10–20 MeV electron beam in a 50 mm PMMA block (D = 3 - 1.1935 = 1.8065), based on existing fractal calculations regarding the dielectric breakdown of materials (Tuncer. E. 2006).

The ponderomotive forces produced by standing waves are small compared to the force from the large voltage through which the electrons are accelerated. As such there won't be a significant impact on the path of the electrons, and therefore on the irregularity of the fractal. A fractal dimension of 1.8 ± 0.1 is to be expected (Tuncer. E. 2006).

Electric fields can accelerate electrons, resulting in the formation of more branches and a more complex structure (G. Khazanov. 2013). However, the effect of these electric fields on already fast-moving electrons should not be hugely significant (Freimund. D. 2001); D will be lower in this case than in the case of the magnetic fields. A fractal dimension of 1.8 ± 0.5 seems likely.

Magnetic fields provide a strong force which can distort the path of electrons, and it is very possible there will be a high degree of irregularity in the fractals produced. A fractal dimension of 2.7 ± 0.5 seems likely.

5. Simulation

To explore the simulation of LFs and electrical treeing I first came across the "Visions of Chaos" program. This program allows one to set parameters by which to model 2D fractals, 3D fractals, diffusion limited aggregation, cellular automata and much more. Using this program I modelled a potential Lichtenberg figure using the diffusion limited aggregation model (Visions of Chaos. 2023).

While the "Visions of Chaos" program does generate visually stunning and complex patterns, the patterns are difficult to analyse, and a large amount of computational power, and therefore time, is required to generate them.

Fig 10: The result of a 3D simulation, run on the 'Visions of Chaos' simulation software (Visions of Chaos. 2023), using the diffusion limited aggregation model. I specified the simulation parameters as precisely as possible to emulate how the pattern would form in an acrylic cube.

127

I decided to use and modify a simpler simulation program for generating 2D LFs in Python, by optimising the way in which the 'resistance array' is generated and changing the way in which the program terminates (Chromia, 2023). These patterns can be simulated much faster and are much easier to analyse due to their simpler, 2D nature.

The program emulates the real process of dielectric breakdown by generating a 2D 'breakdown medium' in the form of a large array of randomly generated numbers between 0 and 200, which correspond to the resistance of the atoms in the insulator to becoming conductors when their breakdown voltage is reached. The program then plots points, from a specified starting position, moving to whichever of the 8 surrounding points has the smallest resistance value. To prevent infinite loops, the simulation is prevented from returning to anywhere it has previously been, even if that is the smallest surrounding resistance value (Chromia, 2023).
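A minimal Python sketch of the greedy walk just described is given below; this is a simplified stand-in, not the modified Chromia program itself. A random 'resistance array' is generated, and the path repeatedly steps to the lowest-resistance unvisited neighbour of the current cell. The resistance range and grid size match the illustrative values used in the figures that follow.

import random

SIZE = 50                    # grid is 50x50, as in the figures below
R_MIN, R_MAX = 0, 200        # resistance range (arbitrary units)

# The 'breakdown medium': each cell gets a random resistance value.
resistance = [[random.randint(R_MIN, R_MAX) for _ in range(SIZE)] for _ in range(SIZE)]

def neighbours(x, y):
    """The 8 surrounding cells that lie inside the grid."""
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if (dx, dy) != (0, 0) and 0 <= x + dx < SIZE and 0 <= y + dy < SIZE:
                yield x + dx, y + dy

def trace_path(start=(SIZE // 2, 0)):
    """Greedy walk: always step to the lowest-resistance neighbour not yet visited."""
    path = [start]
    visited = {start}
    x, y = start
    while True:
        options = [(nx, ny) for nx, ny in neighbours(x, y) if (nx, ny) not in visited]
        if not options:
            return path                          # dead end: terminate
        x, y = min(options, key=lambda p: resistance[p[1]][p[0]])
        visited.add((x, y))
        path.append((x, y))

print(trace_path()[:20])    # first few steps of the simulated discharge path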

Figs 11+12: Two matrix diagrams showing potential paths of the particle (a pixel in this case) through the 'resistance array' (Chromia, 2023). Fig 13: The code used to generate the 'resistance array'.

For simulations with very low (0–20, above) or very high (180–200, below) resistance values, the electrons have much less randomness in their movement due to a reduced 'choice' of path.

A resistance range of 0–200 (arbitrary units) produces the most realistic-looking pattern of electrical 'treeing', due to the truly random path that the 'electron' (or in this case the pixel) can take.

129
Fig 14: Simulation on a 50×50 grid with a resistance range of 180–200. Fig 15: Simulation on a 50×50 grid with a resistance range of 0–20. Fig 16: Simulation on a 50×50 grid with a resistance range of 0–200.

6. Use of neural networks for analysis:

1. First, many images of the LFs are taken with the TEM. These images are then preprocessed and labelled (e.g. each branch end labelled) so they can be used as training data for the neural network. The images then all need to be standardised so that contrast and light levels are consistent across all the training data (Yamashita, R. 2018).

The most appropriate neural network for image processing and analysis is a convolutional neural network (CNN). Convolution is a term used to describe the mathematical combination of two different functions to produce a third function; it combines two sets of data. In the case of a CNN, convolution is where input images are taken and, with the use of a filter or kernel, 'bits and pieces' are taken and recombined to produce a feature map which contains the most recognisable and specific features across all the input data, which is very helpful for recognising images as they are fed into the network (Yamashita, R. 2018).
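To illustrate the convolution step concretely, the sketch below slides a small kernel over a toy grey-scale image with NumPy to produce a feature map. The 3×3 edge-detecting kernel, the image size and the function name convolve2d are illustrative choices of mine, not details from the cited source.

import numpy as np

def convolve2d(image, kernel):
    """Valid (no-padding) 2D convolution producing a feature map."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Each output value combines a patch of the image with the kernel.
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# Toy image: a bright vertical branch on a dark background.
image = np.zeros((8, 8))
image[:, 4] = 1.0

# Simple vertical-edge kernel (illustrative choice).
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

feature_map = convolve2d(image, kernel)
print(feature_map)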

CNNs commonly use pooling layers after convolutional layers. Pooling layers decrease the spatial dimensions of the feature maps, reducing the number of new 'features' added, which helps consolidate the information previously learned and keeps the most significant features undiluted. This down-sizing ensures that the most significant characteristics of the training data are kept, while the computational power necessary does not grow beyond practical limits (Yamashita, R. 2018).

2. The training data is then split into training and validation sets (approximately 80% training set, 20% validation set). The network uses the training data to learn to recognise patterns and features relevant to Lichtenberg figures. A loss function is implemented in the training algorithm, which records how well the network responds to the training data – and the model aims to minimise this loss.
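A minimal sketch of such a pipeline, assuming TensorFlow/Keras and a binary 'branch end present or absent' label per image patch, is shown below. The layer sizes, image resolution, number of epochs and the random stand-in data are illustrative assumptions rather than values from the project.

import numpy as np
import tensorflow as tf

# Stand-in data: 1000 grey-scale 64x64 patches with binary labels
# (in practice these would be the labelled, standardised TEM images).
images = np.random.rand(1000, 64, 64, 1).astype("float32")
labels = np.random.randint(0, 2, size=(1000,))

# Approximately 80% training / 20% validation split.
split = int(0.8 * len(images))
x_train, x_val = images[:split], images[split:]
y_train, y_val = labels[:split], labels[split:]

# Small CNN: convolution layers extract feature maps, pooling layers down-size them.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Binary cross-entropy acts as the loss function the training aims to minimise.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=5, batch_size=32)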

3. The network can then be evaluated using the validation set. An F1 score can be calculated to quantify the accuracy of the model.

Fig 17: Diagram showing how a feature map is formed from a sample in the initial data set (Trimble. 2019)

TP: True positive – the number of samples the network correctly identifies as positive

FP: False positive – the number of samples the network incorrectly predicts as positive

TN: True negative – the number of samples the network correctly predicts as negative. FN: False negative – the number of samples the network incorrectly predicts as negative.

Once the validation set is fully processed, precision and recall scores can be calculated (Lipton, Z. 2014), and from these an F1 score is calculated (between 0 and 1, with a score of 1 indicating perfect precision and recall).

4. After the model achieves a performance level of >90% success (or an F1 score > 0.9), its performance can be evaluated on an independent test set, and the model can start to be utilised for the recognition of LFs.

131
Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

F1 = 2 / (1/Precision + 1/Recall) = (2 × Precision × Recall) / (Precision + Recall)
Fig 18: ‘Confusion matrix’ represents the skill level of a computer model at analysing images (Kundu, R. 2022).
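The sketch below implements these three formulas as a small Python helper working directly from confusion-matrix counts; the example counts are made up purely to show the calculation.

def f1_from_counts(tp, fp, fn):
    """Compute precision, recall and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts (not real results): 90 true positives, 10 false positives,
# 5 false negatives.
precision, recall, f1 = f1_from_counts(tp=90, fp=10, fn=5)
print(f"precision={precision:.3f}, recall={recall:.3f}, F1={f1:.3f}")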

7. Weizmann Institute Safecracking Competition

During January–April 2024, I led the Habs team in the national and international finals of the Weizmann Institute physics safecracking competition, where we took 5th place overall (out of 46 teams). The aim of the competition was to build a safe based on 2 principles of physics – our safe was based on the principles of acoustic levitation and gravitational ion traps.

We placed highly in the national round, qualifying us for the international finals at the Weizmann Institute in Rehovot, Israel. Of the marks available, 60% were for explaining the physics within our safe to the judges. Since the judges were PhD students and professors, we were questioned in depth on the two concepts we used. We placed 2nd out of 46 teams in the judges' scoring, and 5th internationally, beating the two UK teams we originally lost to in the national round. This was an immense success for us as first-time entrants, especially given the very short deadlines we had to work to.

How does this relate to Lichtenberg figures?

One of the ways in which the formation of LFs is influenced is through the control of the electrons before they enter, and as they travel through, the breakdown medium. The safe I designed was almost entirely based on the principles of particle control and thus serves as a perfect demonstration of how the particles would be manipulated and controlled.

The Safe and its Workings:

The principal operation of our safe is particle control and levitation. The acoustic levitator visually demonstrates the ability to control particles without touching them, and the gravity ion trap demonstrates geometrically how a changing field, timed correctly, can result in no overall movement. From chemical drug delivery and isolating chemical reactions to containing high-energy plasma in the fusion reactors of the future, the technology and principles displayed have numerous significant practical applications.

When teams come to crack our safe, they receive a polythene wand, a duster, polystyrene balls, a table tennis ball, and a small box displaying an LCD with jump leads coming out of it.

The code to access the prize of chocolates is only displayed once the crackers have solved both our riddles.

The safe crackers know they must block the lasers so that the LDRs no longer detect laser light. Inside the safe are 2 arrays of ultrasonic transducers which together, when positioned at the correct distance from each other (a multiple of the wavelength), act as an acoustic levitator (Weber, R. 2012). The crackers are unable to fit their fingers through the holes to block the lasers. Instead, they must electrostatically charge the polythene wand by rubbing the duster on it.

Fig 19: The final safe for the international round of the competition.

The wand gains electrons since the friction between the duster and the wand causes the valence electrons in the duster to become excited and migrate to the polythene rod, making it negatively charged. This then allows the crackers to pick up the small polystyrene balls as a result of electrostatic induction. From here they put the wand between the two arrays of the levitator. Since the acoustic levitation force is greater than the electrostatic force, the ball detaches from the wand to go and 'sit' in a node.

Sound waves are made up of rarefactions and compressions of air, and the waves produced by the two transducer arrays combine to create a standing wave with nodes and anti-nodes. Nodes are points of constant pressure along a standing wave (which is formed by the two arrays, since they are an odd number of half wavelengths apart). The downward force of the weight of the ball is negligible compared to the quickly oscillating upward and downward force of the changing pressure gradient caused by the neighbouring anti-nodes. This results in the ball remaining stationary within the node as a result of the forces acting upon it due to the changing air pressure above and below it. Using the combination of buttons provided on the safe, the crackers can change where the balls are positioned in the air; there is a lot of coding and computation behind this, enabled by changing how in phase each individual transducer is compared to the others in the arrays. This allows the crackers to block the lasers. The LDRs detect this change, and the second puzzle becomes available to crack (Phys.UK. 2010).
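As a small worked example of the geometry involved, the sketch below computes the acoustic wavelength and the node-to-node spacing (half a wavelength) for a transducer frequency of 40 kHz in room-temperature air; both the frequency and the speed of sound are assumed, illustrative values rather than the safe's actual specification.

# Node spacing for an ultrasonic standing wave (illustrative values).
speed_of_sound = 343.0   # m/s in air at about 20 degrees C (assumed)
frequency = 40_000.0     # Hz, a common ultrasonic transducer frequency (assumed)

wavelength = speed_of_sound / frequency          # lambda = v / f
node_spacing = wavelength / 2                    # adjacent nodes are half a wavelength apart

print(f"wavelength   = {wavelength * 1000:.2f} mm")
print(f"node spacing = {node_spacing * 1000:.2f} mm")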

Fig 20: The levitator, levitating 2 polystyrene balls which are both in the path of a laser beam, just before the regional round.

Fig 21: The levitator, levitating a polystyrene ball. This was version 2 of the levitator, as we turned it horizontal and split it into 2 halves so the distance between the transducer arrays could be adjusted.

133


The second puzzle emulates a Paul trap/ion trap – a hyperbolic paraboloid shape is optimal for the trap. The same idea of oscillating electric fields is used in nuclear fusion reactors to contain high-energy plasma that would otherwise melt through any solid container. The oscillating field swaps the direction of acceleration of the charged plasma at a very high frequency, so that on average the plasma remains stationary over time (M, Zhang. 2018). The upward and downward slopes of the gravity ion trap are analogous to the positive and negative electric fields of the plasma containers. Hence, the crackers need to work out at what speed to spin the hyperbolic paraboloid so that the table-tennis ball stays in position for 10 seconds, by balancing the downward force as the ball is pulled down the slope by gravity against the upward force when the positive slope pushes the ball back into the centre. This can be modelled by the following equation (M, Zhang. 2018):

ω_rotational = √(2gh / r²)

Where h is the height (the distance between the lowest and highest points on the paraboloid), r is the radius of the paraboloid and g is the gravitational field strength.
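As a worked example of how the required spin speed follows from this relation, the sketch below evaluates ω = √(2gh/r²) and converts it to revolutions per minute for an illustrative paraboloid; the height and radius used are assumptions, not the dimensions of the actual safe.

import math

g = 9.81      # gravitational field strength, m/s^2
h = 0.05      # height of the paraboloid, m (assumed, illustrative)
r = 0.15      # radius of the paraboloid, m (assumed, illustrative)

omega = math.sqrt(2 * g * h / r**2)      # required angular speed, rad/s
rpm = omega * 60 / (2 * math.pi)         # convert to revolutions per minute

print(f"omega is roughly {omega:.2f} rad/s (about {rpm:.0f} rpm)")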

The crackers cannot balance the ball by trial and error, as there is a time delay after each attempt in which they choose the speed incorrectly. To find the required speed, the crackers plug the motor into the small separate box containing the LCD. They can then back-drive the motor to generate a current and, using their additional knowledge of electricity, turn on the LCD to display the speed needed to keep the table tennis ball stable. The crackers then get the table tennis ball to balance for 10 seconds. Once we detect that the ball has balanced for 10 seconds, using an IR distance sensor, the main LCD on the front of the safe displays the code for the combination lock guarding our stash of chocolates. The crackers can then enjoy a sweet reward for their hard work!

Fig 22: Diagram highlighting the nodes – areas of constant pressure in which the polystyrene balls can sit/levitate. Fig 23: A table tennis ball (a makeshift particle) balancing in the centre of the spinning paraboloid on our safe.

8. Conclusion

In conclusion, Lichtenberg figures have countless applications in modelling lightning strikes, river deltas, material stress fracturing, static discharge in electrical equipment and much more. However, in order to use this dielectric breakdown for modelling we require a better understanding of the formation and subsequent randomness of these figures. In this project I have outlined how Lichtenberg figures would be generated, what kind of factors could influence their generation and randomness, how this randomness can be quantified, and how to simulate, image and automate this process using neural networks. In addition, the safe that my team and I built for the Weizmann Institute safecracking competition demonstrates the concepts of particle control that would have a potential effect on this dielectric breakdown. The safe serves as a visual representation of these influencing effects and can help people understand the principles of standing waves, acoustic levitation, and gravitational ion traps.

(Note: The safe is the practical project and I can happily demonstrate it to the marker).

135

References:

1. Andriamonje, S. (2010). "Development and performance of Microbulk Micromegas detectors". Available at: https://iopscience.iop.org/article/10.1088/17480221/5/02/P02001/meta. Accessed: March 2023.

2. Antonov, V. (2020). "Malformations as a Violation of the Fractal Structure of the Circulatory System of an Organism". Available at: https://link.springer.com/article/10.1134/S1063784220090042. Accessed: May 2023.

3. AZO Nano. (2006). Available at: https://www.azonano.com/article.aspx?ArticleID=1723. Accessed: June 2023.

4. Bourke, P. (1991). "DLA - Diffusion Limited Aggregation". Available at: http://paulbourke.net/fractals/dla/. Accessed: April 2023.

5. Captured Lightning. (2022). "What are Lichtenberg figures, and how do we make them?". Available at: https://www.capturedlightning.com/frames/lichtenbergs.html. Accessed: March 2023.

6. Cary, J. (1980). "Ponderomotive Effects in Collisionless Plasma: A Lie Transform Approach". Available at: https://escholarship.org/uc/item/7354x7m4. Accessed: May 2023.

7. Chromia. (2023). "Lichtenberg Figures". Available at: https://github.com/chromia/lichtenberg. Accessed: May 2023.

8. Encyclopaedia Britannica. (2023). "Transmission electron microscope". Available at: https://www.britannica.com/technology/transmission-electron-microscope. Accessed: June 2023.

9. Freimund, D. (2001). "Observation of the Kapitza–Dirac effect". Available at: https://www.nature.com/articles/35093065. Accessed: May 2023.

10. Khazanov, G. (2013). "Ponderomotive force in the presence of electric fields". Available at: https://doi.org/10.1063/1.4789874. Accessed: May 2023.

11. Gray, T. (2008). "Gray Matter: Trap lightning in a block". Available at: https://www.popsci.com/diy/article/2008-02/trap-lightning-block/. Accessed: June 2023.

12. Hsu, J. (2009). "Harnessing Lightning Bolts to Build Artificial Organs". Available at: https://www.popsci.com/scitech/article/2009-08/artificial-organs-could-arise-flashelectricity/. Accessed: June 2023.

13. Zhang, H. M. (2018). "Ion traps and the memory effect for periodic gravitational waves". Available at: https://journals.aps.org/prd/abstract/10.1103/PhysRevD.98.044037. Accessed: January 2023.

14. Kundu, R. (2022). "F1 Score in Machine Learning: Intro & Calculation". Available at: https://www.v7labs.com/blog/f1-score-guide. Accessed: April 2023.

15. Lipton, Z. (2014). "Thresholding Classifiers to Maximize F1 Score". Available at: https://arxiv.org/pdf/1402.1892.pdf. Accessed: April 2023.

16. Niemeyer, L. (1984). "Fractal Dimension of Dielectric Breakdown". Available at: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.52.1033. Accessed: April 2023.

17. Meador, M. (2009). "Field Emission and Radial Distribution Function Studies of Fractal-like Amorphous Carbon Nanotips". Available at: https://www.researchgate.net/figure/TEM-image-of-a-fractal-like-a-C-nanotips-obtained-by-TEMEBID-method_fig5_44902835. Accessed: May 2023.

18. Morris, Thomas A. (1951). "'Heat Developed' and 'Powder' Lichtenberg Figures and the Ionization of Dielectric Surfaces Produced by Electrical Impulses". Available at: https://iopscience.iop.org/article/10.1088/0508-3443/2/4/303. Accessed: April 2023.

19. Museum Trade. (2018). "Embellishing with Lichtenberg Wood Burning for Natural Plant-like Formations". Available at: https://museumtrade.org/embellishing-with-lichtenberg-wood-burning-for-natural-plant-like-formations. Accessed: April 2023.

20. Pereira, J. (2021). "Lichtenberg algorithm: A novel hybrid physics-based meta-heuristic for global optimization". Available at: https://doi.org/10.1016/j.eswa.2020.114522. Accessed: May 2023.

21. Phys.UK. (2010). "Standing Waves". Available at: http://electron6.phys.utk.edu/phys250/Laboratories/standing_waves.htm. Accessed: February 2023.

22. Takahashi, Y. (1979). "Two Hundred Years of Lichtenberg Figures". Available at: https://doi.org/10.1016/0304-3886(79)90020-2. Accessed: April 2023.

23. Trimble. (2019). "Convolutional Neural Network Algorithms". Available at: https://docs.ecognition.com/v9.5.0/eCognition_documentation/Reference+Book/23+Convolutional+Neural+Network+Algorithms/Convolutional+Neural+Network+Algorithms.htm. Accessed: April 2023.

24. Tuncer, E. (2006). "On dielectric breakdown statistics". Available at: https://iopscience.iop.org/article/10.1088/0022-3727/39/19/020/meta. Accessed: May 2023.

25. Visions of Chaos. (2023). Available at: sovology.pro. Accessed: May 2023.

26. Weber, R. (2012). "Acoustic levitation: recent developments and emerging opportunities in biomaterials research". Available at: https://link.springer.com/article/10.1007/s00249-011-0767-3. Accessed: January 2023.

27. Wood, M. (2015). "Charging and Discharging of Lichtenberg Electrets". Available at: https://www.semanticscholar.org/paper/Charging-and-Discharging-of-Lichtenberg-Electrets-Wood/7e3dfd481417596e8aa0984cb89145e639f8b4ee. Accessed: May 2023.

28. Yamashita, R. (2018). "Convolutional neural networks: an overview and application in radiology". Available at: https://insightsimaging.springeropen.com/articles/10.1007/s13244-018-0639-9. Accessed: April 2023.

137

Humanities and Social Sciences Faculty

139

Abigail Sleep

CLASSICS

Abi Sleep chose the topic ‘Ancient Greek Colour Perception’ as a result of a love of ancient literature and a fascination with the difficulty of transmitting concepts across languages. The essay explores scientific and literary uses of colour terms in the ancient world, attempting to untangle the seeming inconsistencies in their definition. Abi is studying Latin, Ancient Greek, Chemistry and History at A-level and wants to study Classics at university.

Reflections on a wine-dark sea: investigating ancient colour perception

The Homeric world is steeped in colour, from the wine-dark sea to rosy-fingered dawn, but the meaning of this colour imagery has been highly contested owing to its inconsistent and perplexing nature. This variation within colour terminology is innate to the Greek language and has divided scholars from Gladstone to Whitmarsh for over a century. Gladstone (1858) and Platnauer (1921) have argued that the colour experience of the ancients was different to ours today, or that their colour descriptors were simply limited. However, more recent study (such as Sassi (2017) and Whitmarsh (2018)) shows that the Greeks did not limit their colour descriptors to hue, but combined various allusions, shades, and characteristics into every term. At the heart of this issue lies not the limitations of the understanding of the ancient Greeks, but instead our limited modern view of colour. Society today has categorised colours with increasing precision, to the extent that alphanumeric codes can be used to represent certain hues, and without setting aside this rigid view of colour we cannot hope to comprehend ancient colour perception.

Colour perception is an elusive concept in any civilisation; nevertheless, in analysis of colour language, it must be defined as clearly as possible. Nesterov and Fedorova (2017, p.1-2) distinguish a 'colour environment', which, although acting as a 'source of emotional reactions', is not changed by the perception of individuals. This is to be differentiated from 'colour culture', which is the interaction between a society and the colour environment. It could be argued that there is a third layer to these building blocks of colour perception: colour vision, that is, the ability that a group has to see their true 'colour environment'. This could be seen as a subcategory of 'environment', as that which is seen by the viewer is, to them, their 'colour environment'.

Some prominent figures, such as William Gladstone (1858), have argued that the ancient Greeks experienced a different colour vision to the modern individual, which has led to discrepancies between their colour vocabulary and ours. Gladstone argued that the inconsistencies of Homeric colour epithets suggested that they could distinguish very little aside from light and dark, and that as one looks back over the centuries, ability to discriminate between colours becomes ‘less and less mature’ (1858, p.457); however, there are many limitations to this argument.

One such limitation is the placement of English as the absolute authority on colour definition and description. This phenomenon is noted by Wierzbicka (2008, p.407), who comments on the tendency of English-speaking academia to give 'fundamental status in human cognition' to ideas that are 'lexically encoded in English'. Gladstone's (1858, p.459) comparative list of the modern rainbow and colours in Homer, and his comment that at least three of the English colour terms did not have an ancient counterpart, falls into this trap of conflating the existence of concept and terminology. It remains true that there is minimal direct correlation between English and Ancient Greek colour terminology, but simply because the Greek language did not capture certain chromatic concepts that can be found in English, it cannot be assumed that the Greeks were not able to perceive them.

Furthermore, an analysis of Aristotle's comments on the rainbow in Meteorology Book 3 (Arist. Mete. 3.372a) reveals no deficiency of colour perception, as he describes colours that we would recognise in the rainbow, naming red, green, and blue ('φοινικοῦν καὶ πράσινον καὶ ἁλουργὸν'), with yellow ('ξανθόν') between the red and green. If we allow wider criteria for colour terminology, red can be seen as the spectrum from red to yellow, green as describing green, and blue as the range between blue and indigo. This does not suggest an infancy of colour vision, but a wider range of colours represented by the terminology given.

141

Another argument, as Allen (1878) notes, is that we have evidence of vibrant shades from Egyptian artifacts, such as sarcophagi, dating from at least five hundred years before the works of Homer. We also have evidence that Greek statues were often painted and gilded, and there is little reason for cultures with an underdeveloped perception of colour to paint their statues and buildings so brightly. One example of evidence for painted statuary is a vase (Fig.1.1), as noted by Mary Beard (2019), which depicts an artist painting his work. It could be argued that ancient cultures did not perceive the same depth of colour in their vibrant decorations, but this argument too is belied by the rich colour language of ancient texts. Purple is often captured in the term πορφύρεος, a rich dark shade described by Pliny as the colour of clotted blood. Blood red is clearly reflected in the term ἐρυθρός, and χλωρός is used to describe moss greens and skin-yellows (Platnauer, 1921).[1] Gladstone's (1858) assertion that the lack of direct correlation between ancient and modern colour descriptors meant that the Greeks had an underdeveloped comprehension of the colour spectrum excludes the possibility of alternative categorisation of colour, and therefore the conclusion that we have diverged from Ancient Greek 'colour culture', rather than ancient humanity being anatomically underdeveloped, can reasonably be explored.

Ancient 'colour culture' can be accessed through two main literary sources: scientific texts (particularly by Aristotle and Plato) and other literature (such as Homeric epic). It is through these media that four main 'measures' of ancient colour (aside from hue) can be derived, these being 'saliency' (Sassi, 2017), movement, transparency, and brightness.[2]

Some of these measures are seen in other languages, for example that of the Walpiri people in Australia, who, although they have no direct words for colour, use words like 'kunjuru-kunjuru' ('smoke') to convey the likeness of one object to smoke, which may encapsulate its colour (Wierzbicka, 2008, p.410). Wierzbicka (2008, p.411-412) presents some of the measures of visual description in Walpiri, such as 'conspicuousness' in the context of surroundings (described here as saliency), and 'shine' (a subcategory of brightness). Wierzbicka argues that these descriptors, rather than merely portraying colour, convey these other important measures, whereas in this essay the measures are assessed as subcategories of colour.

The first measure, saliency, refers to how 'interesting' a colour is to the viewer, a measure that is reflected in the precision of the language used to represent the colour. Often this saliency results from the vibrancy of the colour. For example, φοινός is consistently used to describe blood red, most commonly bloody stains (Platnauer, 1921), such as in book 16 of the Iliad (Hom. Il. 16.159), where φοινός is used to describe the bloody stains on the jaws of wolves after a successful hunt. Sassi (2017) comments that red is 'the most salient colour', referring to ἐρυθρός (another term for red or blood red) as 'the first to be defined in terms of hue in any culture'. This can be clearly seen in Greek colour theory, as Plato named the four most notable colours as 'white, black, red, and "shining"' (Sassi, 2017).[3] However, shades such as green, yellow, and blue are seemingly neglected in ancient Greek (Sassi, 2017), as can be seen in the descriptor χλωρός, which is used to refer to both moss green and the pale yellow of skin (Platnauer, 1921). Perhaps this human fascination with red is not only due to its vibrancy, but to its link with blood, a substance bound up with violence and pain. Blood is seen frequently in both the ancient and modern world, in hunting, battle, childbirth, and injury.

Fig.1.1 ‘Red-figure vase depicting an artist painting a statue of Hercules, identified by his club and lion-skin cape.’ 360-350 BCE (Metropolitan Museum of Art)

Nevertheless, its colour unsettles and entrances humanity. The rich tone of red and its often violent context contribute to its saliency, and therefore it is clearly defined and recognised in the Greek language.

Movement is another vital aspect of Greek colour understanding, and can be seen in the example of ξανθός, which is most notably used to describe the 'blonde' hair of Achilles. However, this term can also be used to describe brown or red shades of hair (Whitmarsh, 2018), and Platnauer (1921) comments on Plato's description of the colour as a mixture of ἐρυθρός and λευκός ('red' and 'white') (Plat. Tim. 68b). Another aspect of this descriptor is its etymological link to ξουθός, which often refers to rapid, vibrating movement. Platnauer (1921) notes that Greek authors almost unanimously use ξουθός to describe winged animals, such as Euripides' description of bees in Iphigenia in Tauris (Eur. IT 617). Rapid movement, therefore, such as the flittering vibration of wings, can be linked also to the term ξανθός, owing to the etymological link between the two terms. Scholars such as Whitmarsh (2018) have theorised that the ξανθός of Achilles' hair may allude to his speed in battle, and to his emotional volatility. This volatility can be seen in his fury at Agamemnon's slight against him in taking Briseis, a rage that causes him to refuse to fight for the Greeks.

The importance of movement in Greek colour perception can also be seen in the term πορφύρεος, which is often used to describe the purple hue, brilliance and shifting movement of the sea (Sassi, 2017).

The complex contextual history of πορφύρεος must also be understood to comprehend the allusions and characteristics conveyed by the descriptor. The colour can be tied to Tyrian purple, a dye created from the mucus of murex snails. These snails were collected in 'early spring during the reproductive period' (Jensen, 1963, p.108) and heated until their colour had reached a deep, earthy purple, so deep that the best dyes made through this process were, according to Pliny, almost black, the colour of clotted blood (Plin. Nat. 9.62). Owing to the complexity and expense of the manufacturing process, the dye was extremely costly, with the result that only the wealthiest members of society could afford clothes dyed with it. It became a symbol of wealth and power, an idea reflected throughout ancient literature, such as in the case of Agamemnon's return from war. Upon his return, Clytemnestra dyed a set of tapestries with πορφύρεος, an act so lavish it could be compared to lining the walls with gold. She encouraged her husband to walk upon the tapestries, and he refused, suggesting that this is an act that only gods or barbarians would dare carry out (Aesch. Ag. 914). This suggests that only those with incredible arrogance, power or ignorance could walk on such riches, which reflects the incredible value of the dye. Odysseus' cloak is also dyed purple in the Odyssey (Hom. Od. 19.190), acting as a symbol of wealth, royalty, power, and hubris, but also a connection with the sea. The link between Odysseus and the sea on which he has been carried for twenty years is conveyed, perhaps, through this shifting, sea-purple of his cloak. Further to this, it could be suggested that this link to the sea also reflects his volatility of character when he returns to Ithaca. Upon his return, he orders the murder of all his wife's suitors and even the slave-girls they slept with (Hom. Od. 465-474), showing an unpredictable fury that is reminiscent of the unpredictability of the sea. To the ancient world, the sea was an unknown quantity, a bringer of nourishment and transport but also a cause of death and destruction, storms rising from flat seas and skies. Poseidon, the god of the oceans, was also the god of storms, earthquakes and destruction, the god of protection and disappearance at sea. In this way, Odysseus' πορφύρεος cloak is an image of his turbulent, powerful, sea-hewn character.

There is a suggestion of a link between πορφύρεος and movement, as the verb πορφύρω can mean 'to swirl' (Sassi, 2017), and if so, the descriptor conveys perfectly the shifting of the tides and the inconstancy of the sea's character. However, one dissenting commentator is Rutherford (1983, p.126), who argues that πορφύρεος is only derived from the Greek name for the murex dye

143

(πορφύρα). Despite the controversy surrounding this etymological debate, it can be assumed that even if there is no definitive etymological link between the two terms, the ancient audience would have been aware of the similarity between them, and therefore recognised some connection.

The ancient Greeks were fascinated by the sea, as can be seen in their attempts to rationalise and describe its shifting colour, transparency, and movement. Sorabji (2004) comments on Aristotle’s observation that the appearance of the sea would change with the perspective of the viewer; that reflection, surroundings, distance, and the angle of observation and resulting reflections could all affect the viewer’s experience of the sea. Aristotle suggests that the sea is an unstable body, rather than a fixed one, and this again reinforces the idea of the volatility of the sea in its varying character, as reflected by its constantly changing colour. This can be seen in the range of colours Homer used to describe the sea, as Griffith (2005) notes, such as γλαυκός (‘bright’, ‘white’, ‘grey’), μέλας (‘dark’, ‘black’), οἶνοψ (‘wine-dark’), but never κύανος (which we believe to be ‘blue’).

When assessing ancient descriptions of the sea, one cannot neglect arguably its most debated epithet: the οἶνοψ (wine-dark) sea. There have been a myriad of interpretations of this epithet, almost all of which hold some element of truth, and its range of interpretations reflects perfectly the richness of meaning conveyed in all Homeric language, which is rarely confined to one aspect of the noun described. Platnauer (1921) notes the occurrences of οἶνοψ in Greek literature, as it is not limited to description of the sea. Sophocles uses it to describe ivy, Euripides uses it to describe a snake and wine-reddened cheeks, and Aristotle uses it to describe the colour of grapes. Even Homer does not confine it to the sea, but uses the epithet to describe cattle (Hom. Il. 13.703). It must be recognised that later authors were aware of their references to Homeric colour language, but this repeated use of the term suggests that this was a recognised colour term, or at least a well-known descriptor.

There have been many suggestions as to the meaning of the wine-dark sea, such as Sassi (2017), who states that the epithet refers to the shine of wine at symposiums, an instance of the role of ‘shininess’ in colour. Rutherford (1983, p.125) suggests that this epithet refers to a particular meteorological phenomenon: a sunset at sea with a day of fair weather to come, as the phrase ‘red sky at night, shepherd’s delight’ references. Rutherford outlines the meteorological explanation for this saying: that particles of dust in the air at dusk lead to the red colour and are generally a good indicator of dry weather the next day. He cites many examples of these ‘sunset-red’ seas, such as at Patroclus’ funeral pyre, when Leucothea gives Odysseus a magical robe, and as Telemachus sails to meet Nestor. Rutherford (1983) argues that enough of these events explicitly take place in the evening to warrant the assumption that other events using the epithets reflect sunset or the navigation of ships by the stars.

However, there is one occasion in the Odyssey which Rutherford cites as a sunset sea, where 'δύσετό τ᾽ ἠέλιος σκιόωντό τε πᾶσαι ἀγυιαί' ('the sun went down and all the streets went dark') (Hom. Od. 2.388), and Telemachus is instructed by Athena to sail over the wine-dark sea. Here, by the time Telemachus begins to sail, the sun has already set, a point emphasised by Homer's comment that 'all the streets went dark'. While the sea is depicted at sunset here, the actual use of the epithet is far later, in line 421, long after 'all the streets went dark', and so this is not as definitive an example of the wine-dark sunset sea as Rutherford (1983) seems to suggest. This issue rotates around the axis of tense, and whether the sun had already set as they sailed, or whether the darkening continued during their journey. This is debatable, as while δύσετό ('went down') is aorist, suggesting a completed action, σκιόωντό ('went dark') is imperfect, and could be taken as an inceptive imperfect, meaning that the streets were continuing to dim as they sailed. This raises the idea that the wine-dark sea may not exclusively refer to sunset seas, but also to seas after sunset, particularly in the twilight hours after dusk when the remnants of daylight cling to the sky. This

supports Rutherford's (1983) additional theory that the wine-dark sea is sometimes tied to late-night navigation and voyages, though Rutherford does not specifically suggest any allusion to the luminosity of twilight in his argument. In this way, Rutherford's (1983) argument that the wine-dark sea is a sunset image, although insightful, is not fully applicable to every example of the epithet.

It is also important to consider the metaphorical implications that this sunset sea may have, which Rutherford (1983) does not address. Through analysis of the iterations of this epithet, the sunset sea can often be seen at times of tragic contemplation, which supports the idea of the sunset sea before a calm day, as perhaps the epithet is intended to convey the moment of quiet contemplation that a clear night brings. One example of this can be found in the Iliad (Hom. Il. 23.143), where Achilles looks out over the wine-dark sea as the pyre of Patroclus burns, an event which takes place either just before sunset or as it begins. This moment is one of quiet and thoughtfulness as he cuts off a lock of hair to place on Patroclus in the flames, before he speaks to Spercheus, the river to whom his father Peleus had promised that Achilles would keep his hair uncut and would sacrifice to the river when he returned home. Since he knows he will not return home, he cuts his hair for Patroclus, a symbolic and weighty decision. It could also be suggested that the reflection of the burning pyre in the ocean conveys the effect of the setting sun, linking the 'winey' sea to the death of Patroclus, the subject of Achilles' contemplation.

Another example can be found in the Odyssey, where Odysseus is given a magic garment by Leucothea and told to throw it into the 'wine-dark sea' (Hom. Od. 5.349). Rutherford (1983, p.127) argues that since Odysseus falls asleep the moment he can when he reaches the shore, and is then awoken by girls coming to wash (an expected morning activity), it can be assumed that his is a sunset arrival, and 'it would thus be a sunset arrival he refers to when he tells Nausicaa of escaping the οἴνοπα πόντον' (Hom. Od. 6.170). This argument seems sound; however, his assumption that Odysseus returns the veil at sunset is less certain, as although Odysseus stumbles to a place of refuge to rest after his dangerous and exhausting journey, 'he was exhausted by his struggle with the sea' (Hom. Od. 5.454), and so it cannot be assumed that he fell asleep at sunset. However, the most striking aspect of this section is its link to the supernatural, a theme that occurs multiple times in Homer, and which warrants exploration as a possible metaphorical connotation of the epithet. This can also be seen as Hera flies across the sky in the Iliad (5.770-72), as Rutherford (1983) notes, her horses galloping as far as a man staring across the wine-dark deep can see. Not only does this image elevate the metaphor describing the power of her horses, and tie the image of Hera riding across the sky to images of the majesty of sunset, it also ties together the idea of the mystical and the wine-dark sea. This connection links the uncertainty and wonder of dusk, as day shifts to night in a display of overwhelming splendour, to the unknowable majesty and wonder of the gods and the supernatural.

To return to the core ancient measures of colour, transparency also lay at the heart of the understanding and description of colour, in both a scientific and a literary context. Aristotle considered transparency the 'seat of colour' (Sorabji, 2004, p.130), a theory explained by Sorabji (2004, p.129-30), who highlights Aristotle's differentiation between the idea of 'own colour', which acts on the transparency of the medium and light between the object and viewer, and 'borrowed colour', which is used to understand the changeable colour of the sea. Where an object is predominantly transparent, as the sea is, it is obvious that the seat of its colour is transparency, but even in an opaque object, a lesser degree of transparency is the seat of its colour.

The final notable measure of colour was 'brightness', which can be defined as where a colour lay on a black-white spectrum. Sorabji (1972, p.293-4) comments on Aristotle's theory that colour is created through a mixture of black bodies with white, and assumes that these 'bodies' refer to the four elements: water, fire, earth and wind. Plato developed this theory in the suggestion that rays

145

from the eyes collided with the objects in our field of vision, and that these rays could be extended by λευκός ('white') and shortened by μέλας ('black') (Plat. Tim. 67e). This idea of 'brightness' in colour can be seen in the various translations of μέλας and λευκός, which can, of course, be interpreted as 'black' and 'white', but also as 'dark' and 'light' or 'bright'.

This idea of the importance of brightness also encapsulates the role of 'shininess', as Platnauer (1921, p.156) defines it, a role which can be seen most clearly in the term γλαυκός, which is usually translated as 'grey'. However, this term can be translated as 'flashing' or 'glinting' in many situations, such as in the epithet γλαυκῶπις, which is often used to refer to the eyes of Athena: 'τὸν δ᾽ ἠμείβετ᾽ ἔπειτα θεά, γλαυκῶπις Ἀθήνη', meaning 'then the goddess, flashing-eyed Athena answered him' (Hom. Od. 1.44), is an example of such an epithet. Here, γλαυκῶπις is translated as 'flashing-eyed', conveying not only the colour of Athena's eyes but their flashing brightness, a description that connotes her intelligence and quick mind. Wilson, a recent translator of the Odyssey, notes in a lecture (2019) that she did not translate this epithet of Athena's consistently throughout the text, varying it with terms such as 'glinted' and 'sparkled'.

Sassi (2017) draws attention to Aristotle's Meteorology (Arist. Mete. 3.375a), where in Book 3 he notes that manufacturers notice changes in colour based on the light under which they work, which shows the importance also of the brightness of the surroundings in the perception of colour.

The combination of these four measures, coupled with the hues that we recognise, creates a flexible use of colour terminology that can appear alien to the modern reader, where each colour word may not refer directly to a single hue, brightness or transparency, but to a range of subtle combinations and contexts. These meanings should also not be limited to colour, as can be illustrated by the various meanings of the word χλωρός, which can be translated, as recorded by Platnauer (1921), as moss green, honey-yellow, a shade of pale skin, or even 'fresh', and which metaphorically represents fear (Gladstone, 1858). This is testament to the flexibility of ancient colour terminology, which was not confined to a single meaning, but extended to other aspects of nature.

Another example of this flexibility lies in skin colour descriptions, as noted by Whitmarsh (2018), where μελαγχροιής, usually translated as 'tanned' or 'black-skinned', conveys an image of masculinity. Whitmarsh's example is of Odysseus when Athena magically restores his appearance, making him μελαγχροιής once more (Hom. Od. 16.175), and he suggests that this refers to Odysseus' 'rugged, outdoors life' in Ithaca. In the same essay Whitmarsh (2018) also suggests a connection between Odysseus' black skin and his characteristic cunning, as his companion Eurybates is said to have the same μελανόχροος skin, and to be favoured by Odysseus 'ὅτι οἱ φρεσὶν ἄρτια ᾔδη' ('because his mind matched his', Hom. Od. 248). This idea of character, particularly cunning, reflected in colour terminology is supported by the aforementioned γλαυκός, used to convey the intelligence of Athena.

The reverse can be seen in the effeminacy implied by white skin, which Whitmarsh (2018) notes as a term of honour in reference to women and of derision to men. This can be seen in the Homeric Hymn to Dionysus (HH 1.7), where Hera is described as λευκώλενον (white-armed), a term of respect describing the femininity of the queen of the gods. In contrast, in Xenophon's Hellenica, when the troops come across a λευκούς (white-skinned) people, they assume that the race will be weak and as easy to defeat as women (Xen. Hell. 3.4.19) (Whitmarsh (2018) cites this as an example of the alienness of white-skinned people to Xenophon). Furthermore, these people are white because they are constantly clothed, and are also seen as μαλακούς (soft) and ἀπόνους (unused to toil) as they ride in carriages, which supports Whitmarsh's (2018) comment that being 'black-skinned' was linked to a hard-working, 'outdoors' life, and whiteness a mark of effeminacy.

While this concept of flexibility of terminology may seem alien at first glance, it is less foreign to modern understanding of colour than one might think. Today, colour often carries the same complex codes, such as the connection between blue and melancholy. In fact, purple still connotes power and wealth much as it did in the Homeric world. It would not be presumptuous, therefore, to imagine civilisations in the millennia after ours wondering at the confused use of blue in our descriptions of music and emotion, hypothesising a societal synaesthesia.

In this way, the Ancient Greeks were not deficient in their experience of colour, as Gladstone (1858) suggests, nor, for that matter, was their language deficient, as Platnauer (1921) suggests. Instead, we must rework the criteria by which we analyse ancient colour terminology and expand our understanding of the aims of colour language. Only then can we fully comprehend the subtlety of Homeric metaphor, and picture the vivid world in which the ancients lived.

1- Both here and on other occasions where I have cited texts on ancient literature without the ancient reference, see the cited text for ancient references

2- These measures have been gathered from ideas discussed in the majority of my reference list, with specific emphasis on the terminology of Sassi (saliency, movement), Platnauer (brightness and shininess) and Sorabji (who discusses Aristotle’s ideas on transparency)

3- Here Sassi is referring to Plato’s Timaeus 67e-68b

Reference List:

Allen, G. (1878) Development of the Sense of Colour. Mind, vol. 3, no. 9, pp. 129–32. Available at: JSTOR, http://www.jstor.org/stable/2246625. Accessed 7 Feb. 2023.

Beard, M. (2019). Whiteness University of Edinburgh. Available at: https://www.youtube.com/watch?v=8QgP2DOkbpo Accessed 29 May 2023.

Nesterov, D.I. and Fedorova, M.Yu. (2017). IOP Conf. Ser.: Mater. Sci. Eng. 262 012139. Available at: iopscience.iop.org. Accessed 12 June 2023.

Gladstone, W.E. (1858). Studies on Homer and the Homeric age. [Online]. Oxford: University Press. Available at: https://archive.org/details/studiesonhomerho03glad/page/n477/mode/2up Accessed 18 May 2023.

Griffith, R. Drew. (2005) “Gods’ Blue Hair in Homer and in Eighteenth-Dynasty Egypt.” The Classical Quarterly, vol. 55, no. 2, pp. 329–34. Available at: JSTOR, http://www.jstor.org/stable/4493341. Accessed 9 Feb. 2023

Jensen, Lloyd B. “Royal Purple of Tyre.” Journal of Near Eastern Studies, vol. 22, no. 2, 1963, pp. 104–18. JSTOR, http://www.jstor.org/stable/543305. Accessed 16 Mar. 2023.

King, A. (2021). Greek Vase Painting of an Artist at Work. [Photograph]. New York: Metropolitan Museum of Art.

Oxford Classical Dictionary. (2016). colour, ancient perception of. [Online]. Oxford Classical Dictionary. Last Updated: 7th March 2016. Available at: https://doi.org/10.1093/acrefore/9780199381135.013.6980 Accessed 18 May 2023.

Platnauer, Maurice. “Greek Colour-Perception.” The Classical Quarterly, vol. 15, no. 3/4, 1921, pp. 153–62. Available at: JSTOR, http://www.jstor.org/stable/635862. Accessed 16 Nov. 2022


Rutherfurd-Dyer, R. “Homer’s Wine-Dark Sea.” Greece & Rome, vol. 30, no. 2, 1983, pp. 125–28. Available at: JSTOR, http://www.jstor.org/stable/642564. Accessed 16 Nov. 2022.

Sassi, M.M. (2017). The sea was never blue. [Online]. aeon.co. Last Updated: July 31, 2017. Available at: https://aeon.co/essays/can-we-hope-to-understand-how-the-greeks-saw-their-world Accessed 21 November 2022.

Sorabji, Richard. “Aristotle on Colour, Light and Imperceptibles.” Bulletin of the Institute of Classical Studies, vol. 47, 2004, pp. 129–40. Available at: JSTOR, http://www.jstor.org/stable/43646862. Accessed 19 Nov. 2022.

Sorabji, Richard. “Aristotle, Mathematics, and Colour.” The Classical Quarterly, vol. 22, no. 2, 1972, pp. 293–308. Available at: JSTOR, http://www.jstor.org/stable/638210. Accessed 27 Mar. 2023.

Whitmarsh, T. (2018). Black Achilles. [Online]. aeon.co. Last Updated: May 9, 2018. Available at: https://aeon.co/essays/when-homer-envisioned-achilles-did-he-see-a-black-man Accessed 21 November 2022.

Wierzbicka, Anna. “Why There Are No ‘Colour Universals’ in Language and Thought.” The Journal of the Royal Anthropological Institute, vol. 14, no. 2, 2008, pp. 407–25. Available at: JSTOR, http://www.jstor.org/stable/20203637. Accessed 29 May 2023

Wilson, E. (2018). The Odyssey. New York: Norton.

Wilson, E. (2019). Translating the Odyssey Again: How and Why Dartmouth. Available at: https://www.youtube.com/watch?v=YsU0jDHbRs4 Accessed 29 May 2023.

Bibliography:

MacKenzie, Donald A. “Colour Symbolism.” Folklore, vol. 33, no. 2, 1922, pp. 136–69. Available at: JSTOR, http://www.jstor.org/stable/1254892. Accessed 29 May 2023 .

St Clair, Kassia. (2016). The Secret Lives of Colour. 2nd ed. Great Britain: John Murray.


Jana Lai

PSYCHOLOGY

Jana Lai debated the pros and cons of “Deinstitutionalisation in the West” in her ERP. This is a title she decided on after (unfortunately struggling with indecision as always, and) noticing that psychiatric institutions appear to be shrouded in mystery for the general public due to the stigma surrounding this particular type of healthcare. Her project evaluates studies on the criticisms of psychiatric hospitals, along with the benefits and costs of major healthcare reforms in many Western countries which aim to transfer patients from psychiatric institutions to community care, a trend accompanied by the closure of many psychiatric in-patient facilities. Jana is studying Biology, Chemistry, English Literature and Psychology and will be studying Psychology at university, with the aim of pursuing a professional career in this field in the future.

Do the benefits of deinstitutionalisation in the West outweigh its costs?

Deinstitutionalisation is a process in which patients in psychiatric in-patient facilities are transferred to community-based care, during which comprehensive services are designed to tackle each individual’s conditions outside the environment of institutions (American Psychological Association, n.d.). Usually, this also involves closing or downsizing large asylums (Chow and Priebe, 2016). The rationale behind this process is to provide community care as a solution to the criticism many ex-patients, staff and researchers have directed towards the efficiency and quality of inpatient psychiatric treatment. Since the 1950s, many countries within Western Europe have been carrying out major mental healthcare reforms with the aim of deinstitutionalisation, resulting generally in a significant drop in the number of inpatient psychiatric beds and increased funding going to mental health services (Chow and Priebe, 2016). Studies have shown that deinstitutionalisation has been beneficial to the treatment and wellbeing of many people who require mental health support, notably being associated with “greater quality and service user ratings of care” and higher levels of autonomy for service users (Salisbury, et al., 2017). However, it has also been criticised for resulting in an increased number of homeless people, an endless cycle of discharge and readmission, and some patients who would benefit from the model of care in inpatient psychiatric facilities faring worse after being discharged (Salisbury, et al., 2017) (Tyrer & Johnson, 2011). This paper aims to evaluate whether the benefits of deinstitutionalisation outweigh its costs, specifically in Western countries, where most research on deinstitutionalisation is conducted. This will be achieved by assessing the issues of psychiatric inpatient treatment, the benefits of deinstitutionalisation and also its criticisms.

Issues of Psychiatric Inpatient Treatment

Deinstitutionalisation stems from the substantial criticism directed by different stakeholders at inpatient psychiatric facilities over the poor quality of care (Rosenhan, 1973), unsatisfactory patient experience (Arnott, et al., 2015) and lack of support for patients to re-integrate into society once discharged (Flomenhaft, et al., 1969).

According to the WHO, quality of care is “the degree to which health services for individuals and populations increase the likelihood of desired health outcomes” (WHO, n.d.). Studies have suggested that inpatient psychiatric facilities often make erroneous diagnoses that lead to inaccurate decisions on admitting and discharging patients, meaning many patients are not placed on a suitable plan of treatment (Rosenhan, 1973) (Bowers et al., 2005). Hence, many inpatient psychiatric hospitals are criticised for not achieving a satisfactory quality of care. In the Tompkins Acute Ward Study, 47 multidisciplinary staff in acute psychiatric wards were interviewed on why they think people are admitted to psychiatric inpatient facilities, their treatment and care ideology, and how they define the roles different professionals take on. Answers obtained from the staff on what constituted an inappropriate admission were very varied, with several groups of individuals that most felt were acceptable for acute inpatient psychiatry being rejected by other staff (Bowers et al., 2005). Another study provides further support for the criticism that inpatient psychiatric facilities are prone to making inaccurate judgements on the severity of a condition and the best course of treatment. Eight participants, disguised as pseudo-patients, complained of “hearing voices” and were admitted into a psychiatric hospital; seven were diagnosed with schizophrenia after one appointment. They stopped pretending to have symptoms once admitted, a change which was noticed by 35 out of 118 patients but none of the staff (Rosenhan, 1973). Although both studies have a small sample size and hence could be criticised as not being generalisable to all existing psychiatric hospitals in the West, they provide insight into how these institutions may fail to provide the most effective plan of treatment for each patient. Additionally, these studies also have implications in that large-scale psychiatric institutions are likely to be disadvantaged by long communication chains and lack of personal contact, as reflected in the poor quality of care. This is because staff would have more difficulty being aware of the unique conditions and needs of each patient and are also limited in communicating effectively within the department on a uniform approach to treatment. This assumption can be directly observed in the findings of a study of 338 inpatient psychiatric departments in the US (Hrebiniak & Alutto, 1973), where a quantitative analysis indicates a negative correlation between department size and discharge rate. Additionally, a positive correlation is seen between the size of a department in public hospitals and both cost per discharge and cost per patient day. These findings demonstrate the inefficient health outcomes and negative economic outcomes associated with large-scale psychiatric in-patient departments, which are not uncommon in the West.

Besides being criticised for hindering patients’ recovery, inpatient psychiatric facilities are also the subject of numerous reports of poor patient experience. This is likely due to how these institutions often place their focus solely on medication and constraining behaviour. In the Tompkins Acute Ward Study, interviewees generally mentioned medication first when asked what treatment was given to patients, with it also seen as the treatment to resolve patients’ mental illness and get behaviour under control (Bowers, et al., 2005). With little effort placed on creating and improving stimulating activities, patients can easily become bored and unmotivated, leading to the formation of negative mindsets and habits. This argument is supported in the article debating “Should psychiatric hospitals completely ban smoking?”, where the opposing side claims that patients have stated that smoking culture is the reason they smoked, and that there was nothing else to spend their time on (Arnott, et al., 2015). This shows that the poor quality of life within inpatient psychiatric facilities not only has a negative emotional impact on patients; it also affects their physical health, which in turn holds back their mental health from making significant progress. In the same article, the former GP Michael Fitzpatrick suggests that “Blanket smoking bans deprive patients of autonomy, preventing them from taking responsibility”. This proposes that the regulations institutions often impose are too restrictive to be beneficial for the recovery of patients, instead leading to a lack of self-motivation in recovery, and to resentment, since patients may feel degraded and patronised. However, Fitzpatrick also adds that “if their behaviour is judged to be a danger to themselves or others restrictions may be imposed on their liberties”, which seems self-contradictory, as a lot of patient behaviour, e.g. smoking, could be said to fall under the definition of danger, hence undermining his own initial argument (Arnott, et al., 2015).

Additionally, many former patients of psychiatric institutions find it difficult to reintegrate into their community and lead a “normal” life after discharge. The environment and routine established in these in-patient facilities are often very different from those of the society outside, the aim being to help patients recuperate with minimal distraction and stress from their daily lives. However, the artificial setting of the institution also inadvertently causes patients to lose touch with reality. Querido, the Dutch psychiatrist who established the Amsterdam Home Care Service, comments on the nature of hospitalisation: “The hospital tends to isolate itself from the rest of the community … is apt to filter out a most important attribute of the patient, that is his social and personal aspects” (Flomenhaft, et al., 1969). This commentary focuses on a specific part of “normal” life that patients are often deprived of: healthy relationships with those in their community. When patients are discharged into their community without any connections and supportive relationships, they will find it hard to adjust and will feel isolated, decreasing their quality of life, worsening their condition or even leading to relapse. Research has shown that personal relationships are not only vital to good mental health in terms of facilitating efficient management of stress; they also have positive effects on physical health, with a meta-analysis of 148 studies reaching the conclusion that people with strong social relationships are 50% less likely to die prematurely (Kreitzer, n.d.). Another factor hindering patients’ reintegration is the stigma associated with the permanent label of “mentally ill in remission”. Even in the US, one of the more progressive countries in terms of understanding and accepting mental illness, colleges and the military may reject an application because of a history of psychiatric admissions (Flomenhaft, et al., 1969). These practical barriers preventing ex-psychiatric patients from pursuing their desired career paths like everyone else reduce the sense of motivation that patients would benefit from in their recovery. However, the above referenced journal article (Flomenhaft, et al., 1969) was published in 1969. This implies that it may have low temporal validity, as Western society has likely progressed to be more aware and inclusive of people with a history of severe mental health issues since then. It may hence be inaccurate to say that ex-psychiatric patients nowadays receive the same social exclusion and judgement as more than 50 years ago.

Benefits of deinstitutionalisation

Besides the theoretical advantages of using deinstitutionalisation to resolve the limitations of inpatient psychiatric care, many studies have shown that in practice deinstitutionalisation can be a sustainable alternative for treating mental disorders across a range of severities, in terms of improved quality of care, effectiveness in assisting patients with reintegration, and its potential to fully replace hospitalisation.

The progress of deinstitutionalisation is correlated with better quality of care, according to patient self-reports and across numerous countries. A cross-sectional study of 193 longer-term hospital- and community-based facilities in Bulgaria, Germany, Greece, Italy, the Netherlands, Poland, Portugal, Spain and the UK was conducted to obtain 1,579 patients’ ratings of care along with country-level variables (Salisbury, et al., 2017). It was concluded that “Significant positive associations were found between deinstitutionalization and (1) five of seven quality of care domains; and (2) service user autonomy” (Salisbury, et al., 2017). This suggests that community care has an evident advantage over hospital care, as patients are provided with treatment that they regard as more effective and more respectful of their autonomy, which fosters the positive and self-motivated mindset crucial to speeding up recovery. In evaluating the validity of this study, the highly representative sample, including large numbers of patients from various Western countries, increases its generalisability to individuals seeking treatment for mental disorders in the West, and implies that deinstitutionalisation is effective cross-culturally (at least within the West). However, a cross-sectional study generates observational and correlational rather than causal data (Cherry, 2022), meaning researchers cannot ascertain whether deinstitutionalisation was the sole cause of an increase in ratings of care, since data were taken from a natural setting (different countries) rather than a controlled one. Despite issues with internal validity, this study provides a broad perspective on the positive effects deinstitutionalisation brings across different Western cultures, suggesting that deinstitutionalisation is a practical approach that could be successfully applied in many areas of the world. Additionally, patients are given more autonomy and person-based consideration in community care, as seen in the change in the language used to refer to individuals seeking help for mental health illnesses, from “patient” to “client” (MacKinnon & Coleborne, 2003), which is one indicator that the treatment of mental disorders is shifting to a more individualistic and empowering approach compared with that in psychiatric hospitals. Instead of focusing on the provision of services, community care has developed to base treatment on the individual’s needs (Zechmeister, 2005). Hence, individuals are able to recover faster and with more dignity, as their decisions and unique needs are respected in community care.

Deinstitutionalisation is also shown to be highly effective in encouraging adaptive behaviour in patients, helping them to lead “normal” lives in their communities. A study investigated a group of 104 patients with different levels of intellectual disability who were relocated into the community after the psychiatric institution they had been staying in closed down (Bredewold, et al., 2020) (Young & Ashman, 2004). Over two years, the ex-residents were shown to adapt very well to living in their community, as shown by the quantitative increase in levels of adaptive behaviour, choice-making and objective life quality, alongside stable levels of maladaptive behaviour. This suggests that discharging patients into the natural environment of their communities is more advantageous to their reintegration and to the improvement of their mental health than limiting them within the artificial surroundings of the institution. However, the increases began levelling off after two years, despite the continued presence of staff participating in daily life activities, e.g. cooking and cleaning the house, with the patients. The study’s report suggests that this plateau is due to a decrease in motivation and enthusiasm for joint participation in these activities from both staff and patients. This indicates that deinstitutionalisation may not be effective in the long term without a rigorous treatment plan, which would require more effort and time to maintain than in a more centralised environment, such as that of psychiatric in-patient facilities.

Furthermore, case studies have shown that community care, when carried out with high levels of commitment, can fully replace hospitalisation without its disadvantages (as discussed in the previous section). The Family Treatment Unit of Colorado Psychiatric Hospital applied crisis-oriented therapy to 150 patients who required immediate hospitalisation due to the severity of their mental health condition and compared the results of their treatment with those of a group of 150 patients who were inpatients at the hospital (Flomenhaft, et al., 1969). Results show that the therapy group were seen for an average of 2.5 weeks, while the inpatient group had an average hospital stay of 26.1 days, meaning the family-crisis therapy took over a week less than treatment in the hospital. Additionally, six months after treatment, both groups performed similarly on two baseline measures of functioning. This has important implications, as alternative approaches to treatment in community settings are shown to be able to match or exceed the effectiveness of hospitalisation, which is a strong argument for the establishment and continued progress of deinstitutionalisation in more Western countries.
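As a rough arithmetic check of the time saving implied by these figures (assuming a standard seven-day week, an assumption not stated in the source), the two averages quoted above can be put on the same scale:

\[
2.5 \ \text{weeks} \times 7 \ \text{days/week} = 17.5 \ \text{days}, \qquad 26.1 \ \text{days} - 17.5 \ \text{days} \approx 8.6 \ \text{days}
\]

so the community-based therapy saved a little over a week of treatment time on average.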

Criticism of deinstitutionalisation

Despite the observable effectiveness of deinstitutionalisation in improving the well-being of ex-psychiatric in-patients across Western countries, there are also limitations that may cause hesitation about establishing or continuing the process within communities.

Deinstitutionalisation has raised concerns regarding public security. This is supported partially by reports that investigate the numbers of people in psychiatric hospitals and in prison, and which then go on to suggest a cause-and-effect relationship between having been in a psychiatric in-patient facility and offending behaviour (Salisbury & Thornicroft, 2016). Some researchers are worried about an increase in crime rates if patients with unresolved mental health issues in psychiatric wards are discharged into their communities, which would endanger the general population. However, other researchers have found that these studies criticising deinstitutionalisation have been based on ecological studies or personal observations; they therefore focused instead on cohort studies of patients discharged due to deinstitutionalisation, in which data were analysed individual by individual. A majority of the 23 studies involved report that at follow-up no cases of incarceration were seen (Salisbury & Thornicroft, 2016). This stark difference in results may be due to the correlational rather than causal data that ecological studies provide, or the subjective nature of personal observations, both factors lowering the internal validity of the studies arguing against deinstitutionalisation. However, other research has also indicated that the decrease in psychiatric beds during deinstitutionalisation fails to consider homeless people with mental disorders who would benefit from care in hospitals. In surveys of homeless people living in New York public shelters, approximately 63% either have a psychiatric history or display symptoms of a serious mental disorder (Marcos, 1991). This suggests an increased risk to public security, as homeless people with untreated mental disorders may pose a danger to themselves or others in public spaces.

The increased cost of the more individualised modes of treatment in the community is another major argument against deinstitutionalisation. A literature review of the economic impacts of deinstitutionalisation in the UK, Germany and Italy shows that while current community care costs significantly less than current hospital care, new community-based care arrangements could be more expensive than long-term hospital stays (Knapp, et al., 2011). However, a counterargument the same source suggests is that the costs of community and hospital care cannot be directly compared, since the two systems provide very different forms of treatment (Knapp, et al., 2011). In particular, community care provides more individualised treatment that covers a range of varied service areas to best meet each patient’s personal needs, a crucial element of effective treatment that institutions often fail to provide. Deinstitutionalisation may cost more in the short term, but will be more cost-effective in the long term, as patients will recover more quickly with a higher quality of care. Additionally, an increased rate of recovery is not only for the benefit of the individual, but also promotes higher productivity in the economy. In 2003 it was estimated that the impact of depression on employment in the UK, e.g. depressed individuals being unable to work, when put into cost terms was 23 times larger than the costs allocated to the treatment of depression by the NHS: £8,685,409 against £369,865 (Thomas & Morris, 2003) (Knapp, 2003). Another study published in the same year concluded through self-reporting techniques that in the UK, depression/anxiety is the most important contributor to absenteeism at work (Almond & Healey, 2003) (Knapp, 2003). Hence, it is important for national economic growth that psychiatric and psychological treatments are conducted in a more efficient manner so that individuals with mental disorders can rejoin the workforce sooner, which can be done through deinstitutionalisation and a focus on community care.
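As a quick check of the “23 times” figure, the ratio of the two costs quoted above (with units as given in the source) is:

\[
\frac{8{,}685{,}409}{369{,}865} \approx 23.5
\]

which is consistent with the claim that the employment impact of depression was roughly 23 times the NHS treatment cost.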

Although deinstitutionalisation is intended to improve quality of care, there have also been many cases where downsizing or closing psychiatric institutions through inappropriate procedures has negatively impacted discharged patients. In an article debating “Has the closure of psychiatric beds gone too far?” (Tyrer & Johnson, 2011), a professor of community psychiatry argues that extreme efforts to prevent admissions to psychiatric hospitals and the rushed discharge of patients have resulted in great risks to their well-being. This view is supported by data demonstrating that after patients are discharged from institutions, the suicide rate in the first 28 days is over 200 times and over 100 times higher, for men and women respectively, than in the general population. This is despite psychiatric hospitals having been shown to be highly successful in reducing the suicide rate of patients, emphasising the issue of deinstitutionalisation executed too aggressively, without consideration for the essential role psychiatric hospitals continue to play, despite the downfalls of the system, in the recovery of individuals suffering from severe mental disorders that could threaten their own or others’ safety. Additionally, sometimes transinstitutionalisation, i.e. the transfer of patients from one therapeutic community (in this case psychiatric hospitals) to other institutions (Wikipedia contributors, 2023), is observed instead of deinstitutionalisation. For example, Zechmeister (2005) discusses a study published in 2000 on the transinstitutionalisation of ex-inpatients into nursing homes instead of into community care (deinstitutionalisation) in several countries, including Germany and Australia. This defeats the purpose of downsizing and closing psychiatric hospitals, as other institutions are likely to have similar issues with efficiency of care and patient quality of life to those hospitals suffer from. Against its intentions, deinstitutionalisation appears to have caused further obstacles to recovery, or may not even be the best approach of treatment for those with severe mental disorders (Kendell, 1989).

Conclusion

Through the research and critical evaluation of a range of studies, this paper demonstrates that deinstitutionalisation in the West has been greatly beneficial for the recovery of many ex-psychiatric in-patients by providing a higher quality of care than psychiatric hospitals. This is due to greater respect for patients’ dignity, treatment approaches personalised to best suit the individual, and recovery taking place in a natural environment that encourages reintegration into society. Despite concerns about the increase in expenditure when more patients are transferred from psychiatric hospitals to community care, it can be argued that in the long term the latter is more cost-efficient and increases the productivity of the economy.

However, it is crucial to be aware that, despite the problems apparent in in-patient psychiatric facilities and suggestions of replacing treatment in these institutions with community care, other studies have shown that this is likely an unrealistic view, at least under current circumstances, when there is no better alternative for the treatment of those with severe mental disorders who may be at higher risk of harming themselves or others if not placed in a centralised environment of care. Instead of further stigmatising psychiatric hospitalisation, action should be taken, especially by national governments and public healthcare systems in the West, to provide better care in these facilities through more funding and better training of policy-makers and other professionals working with patients. Individuals with mental disorders would benefit much more from deinstitutionalisation if the transfer to community care and the treatments following discharge involved more frequent monitoring of service efficiency and quality, along with a better recognition of the importance of psychiatric hospitalisation in the treatment of severe mental disorders.

Works Cited

Almond, S. & Healey, A., 2003. Mental Health and Absence from Work: New Evidence from the UK Quarterly Labour Force Survey. Work, Employment and Society, 17(4), pp. 731-742.

American Psychological Association, n.d. deinstitutionalization. [Online] Available at: https://dictionary.apa.org/deinstitutionalization [Accessed 9 February 2023].

Arnott, D., Wessely, S. & Fitzpatrick, M., 2015. Should psychiatric hospitals completely ban smoking? BMJ, Volume 351.

Bowers, L. et al., 2005. The nature and purpose of acute psychiatric wards: The Tompkins Acute Ward Study. Journal of Mental Health, 14(6), pp. 625-635.

Bredewold, F., Hermus, M. & Trappenburg, M., 2020. ‘Living in the community’ the pros and cons: A systematic literature review of the impact of deinstitutionalisation on people with intellectual and psychiatric disabilities. Journal of Social Work, 20(1), pp. 83-116.

Cherry, K., 2022. How Do Cross-Sectional Studies Work? Gathering Data From a Single Point in Time. [Online] Available at: https://www.verywellmind.com/what-is-a-cross-sectionalstudy-2794978#toc-challenges-of-cross-sectional-studies [Accessed 12 6 2023].

Chow, W. S. & Priebe, S., 2016. How has the extent of institutional mental healthcare changed in Western Europe? Analysis of data since 1990. BMJ Open, 6(4).

Flomenhaft, K., Kaplan, D. M. & Langsley, D. G., 1969. Avoiding Psychiatric Hospitalisation. Social Work, 14(4), pp. 38-45.

Hrebiniak, L. G. & Alutto, J. A., 1973. A Comparative Organizational Study of Performance and Size Correlates in Inpatient Psychiatric Departments. Administrative Science Quarterly, 18(3), pp. 365-382.

Kendell, R. E., 1989. The Future Of Britain's Mental Hospitals: Some Patients Will Still Need Long Term Care. BMJ: British Medical Journal, 299(6710), pp. 1237-1238.

Knapp, M., 2003. Hidden costs of mental illness. The British Journal of Psychiatry, 183(6), pp. 477-478.

Knapp, M., Beecham, J., McDaid, D. & Matosevic, T., 2011. The economic consequences of deinstitutionalisation of mental health services: lessons from a systematic review of European experience. Health and Social Care in the Community, 19(2), pp. 113-125.

MacKinnon, D. & Coleborne, C., 2003. Introduction: Deinstitutionalisation in Australia and New Zealand. Health and History, 5(2), pp. 1-16.

Marcos, L. R., 1991. Taking the Mentally Ill Off the Streets: The Case of Joyce Brown. International Journal of Mental Health, 20(2), pp. 7-16.


Kreitzer, M. J., n.d. Why Personal Relationships Are Important. [Online] Available at: https://www.takingcharge.csh.umn.edu/why-personal-relationships-are-important [Accessed 31 March 2023].

Priebe, S. & Turner, T., 2003. Reinstitutionalisation in mental health care. BMJ, Volume 326, pp. 175-176.

Rosenhan, D., 1973. Being Sane in Insane Places. Science News, 103(3), p. 38.

Salisbury, T. T. & Thornicroft, G., 2016. Deinstitutionalisation does not increase imprisonment or homelessness. Br J Psychiatry, 208(5), pp. 412-413.

Salisbury, T. T., Killaspy, H. & King, M., 2017. The relationship between deinstitutionalization and quality of care in longer-term psychiatric and social care facilities in Europe: A cross-sectional study. European psychiatry: the journal of the Association of European Psychiatrists, Volume 42, pp. 95-102.

Thomas, C. M. & Morris, S., 2003. Cost of depression among adults in England in 2000. The British Journal of Psychiatry, 183(6), pp. 514-519.

Tyrer, P. & Johnson, S., 2011. Has the closure of psychiatric beds gone too far? BMJ, Volume 343.

WHO, n.d. Quality of care. [Online] Available at: https://www.who.int/health-topics/qualityof-care#tab=tab_1 [Accessed 9 February 2023].

Wikipedia contributors, 2023. Transinstitutionalisation. [Online] Available at: https://en.wikipedia.org/wiki/Transinstitutionalisation [Accessed 12 June 2023].

Young, L. & Ashman, A. F., 2004. Deinstitutionalisation in Australia Part II: Results from a Long-Term Study. British Journal of Developmental Disabilities, 50(98), pp. 29-45.

Zechmeister, I., 2005. Paradigm Shift in Mental Health Care: An Exploration of Mental Health Care Reform Objectives and Reform Processes. In: Mental Health Care Financing in the Process of Change: Challenges and Approaches for Austria, NED-New edition. s.l.:Peter Lang AG, pp. 83-124.

Bibliography

Adrian. (2013). Large Firms. [Online]. Get Revising. Last Updated: 22 May 2013. Available at: https://getrevising.co.uk/grids/large_firms [Accessed 31 March 2023].

Wikipedia. (n.d.). Cross-sectional study. [Online]. Wikipedia. Last Updated: 2 March 2023. Available at: https://en.wikipedia.org/wiki/Cross-sectional_study [Accessed 31 March 2023].

Wayne W. LaMorte. (2020). Ecological Studies (Correlational Studies). [Online]. PH717 Module 1B - Descriptive Tools Descriptive Epidemiology & Descriptive Statistics. Last Updated: 10 September 2020. Available at: https://sphweb.bumc.bu.edu/otlt/MPHModules/PH717-QuantCore/PH717-Module1B-DescriptiveStudies_and_St [Accessed 31 March 2023].


Rayaan Ahmed

ECONOMICS

Rayaan Ahmed chose ‘How has COVID-19 affected World Economies?’ as his ERP focus to measure the diverse effects of the pandemic across different countries. The project investigated the social and economic policies implemented by countries such as Sweden, New Zealand and the UK, determining their relative successes and drawbacks in relation to the country’s economic performance and social wellbeing. Rayaan Ahmed is studying Economics, History and Mathematics, and hopes to pursue Law at University.

ASKE PROJECT ERP: How has COVID-19 affected World Economies?

The worldwide COVID-19 pandemic, declared by the World Health Organisation on 11th March 2020, had an undeniably significant impact on the global economy, and its positive and negative repercussions at both a microeconomic and a macroeconomic level will be analysed and assessed throughout this project. Though the disease is still present, the sharp decline in cases and deaths and the absence of national lockdowns indicate, from a social perspective, that the pandemic is effectively over. Following the development of the vaccines, the danger to and effects on society and economies have largely diminished, and it is therefore useful to observe the effects three years on. COVID-19 had a vastly differing impact across various households, businesses, governments and countries as a whole, and comparisons can be made between these levels of the economy to gain a wider and more accurate understanding of the true, overall effect of the disease.

Effects on Small Businesses

For the vast majority of small UK businesses, the effects of COVID-19 were detrimental, to say the least. Local high streets, consisting of small, lesser-known businesses and a small number of chain stores, suffered the most. These businesses lost face-to-face, on-the-spot customers because they had to follow the government’s lockdown protocols and close their doors, and they often had a very limited online presence, if any at all, so could not easily make up the difference through online sales. The result was a substantial drop in demand and revenue for these shops.

Since small businesses generally produce and sell many more ‘normal goods’ than ‘necessity goods’, many saw a rapid and significant decrease in demand for their items during the lockdown periods, as consumers were going out solely to purchase that which they needed, rather than that which they wanted. Normal goods, which can be defined as everyday items for which demand tends to rise as a consumer’s income rises, include things like electronics or clothing. Naturally, the demand for these goods fell during a period when consumers were instructed to leave the house only for exercise and for purchasing necessity items.

On the other hand, necessity goods saw a rise in demand. These can be defined as goods that consumers will buy regardless of changes in income, and include items like toilet paper (which saw an infamous rise in consumption during the pandemic), medications and electricity. This posed a problem for small businesses, some of which sold no necessity goods, or which found that any increase in their sales did not compensate for the fall in sales of normal goods, either face-to-face or online.
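The distinction between these two categories of goods is often formalised using income elasticity of demand (YED), a standard textbook measure that is not used in the sources cited here but which may make the definitions above more precise:

\[
YED = \frac{\%\ \Delta\ \text{quantity demanded}}{\%\ \Delta\ \text{income}}
\]

On this measure, the ‘normal goods’ described above have a clearly positive YED (demand rises as income rises), whereas ‘necessity goods’ have a YED close to zero, since they are bought largely regardless of changes in income.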

The same can be said for most businesses that depended on social interaction, including restaurants, pubs, hairdressers and so on. We can observe that overall consumer spending fell by a staggering 27.8%, whilst businesses such as restaurants saw a substantial drop in sales, estimated at 70%, with an 80% drop in accommodation sales (Strain, 2020).

Economically, the very nature of the way in which small businesses operate allows for a clear explanation as to why the pandemic was as harmful as it was. Small businesses predominantly operate with low profit margins, meaning there is a ‘low margin of safety’ and a greater risk that a decline in sales – in this case caused by the pandemic – would erase profits and lead to a net monetary loss.

Businesses, large or small, cannot afford to operate consistently with a negative margin, and thus are forced to close. This ties into the concept of cash buffers, which in essence refer to the amount of time a business could continue without earning any money, using only the cash it holds in its ‘buffer’. As Michael Strain highlights in his text (see note 1), only 40% of small businesses had more than three weeks of a cash buffer, which ultimately was a cause of the closure of millions of shops around the world.
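As a simple illustration of the cash buffer idea (the figures below are hypothetical and are not taken from Strain’s data):

\[
\text{cash buffer} = \frac{\text{cash reserves}}{\text{weekly outgoings}} = \frac{£6{,}000}{£2{,}000 \ \text{per week}} = 3 \ \text{weeks}
\]

A shop in this position would sit right at the three-week threshold mentioned above; any longer closure without revenue would exhaust its reserves.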

As a result of problems in the retail sector and elsewhere in the economy, unemployment rose, reducing income and the demand for normal goods. Non-essential workers found themselves unemployed, particularly in small businesses, once again because of the lack of an online presence: there was no capacity for staff to work remotely, nor was there enough money to pay their wages. In the USA, the Bureau of Labor Statistics (BLS) measured unemployment rates and concluded that unemployment had risen from 7% in February 2020 to 22.8% in April 2020, a rise of 15.8 percentage points in the span of just two months (BLS, 2021).

Moreover, small businesses in the service sector – hairdressers, restaurants, laundromats and the like – will have been affected to a greater extent than those in the manufacturing sector, because their revenue loss is permanent. Whilst manufacturing firms, big or small, are able slowly to make up for lost revenue by increasing production and fulfilling the backlog of orders, service firms cannot do this, as there is no backlog. For instance, people will not start getting their hair cut eight times a month instead of twice a month just because they missed four months of haircuts during the pandemic. Similarly, people will not eat twice the number of meals, or dry clean their clothes twice as often, meaning the revenue lost by these firms during the lockdown periods will never be recouped.

Ultimately, the impact of the pandemic on small businesses was overwhelmingly negative, primarily in an economic sense, as explained above. Those in the service sector were hit harder than those in the manufacturing sector, due to their lack of cash reserves, their inability to make up the permanent revenue loss, and the subsequent sharp fall in consumption. This caused a reduction in aggregate demand (AD) and in gross domestic product (GDP) and risked plunging the economy into recession, which the UK government avoided through subsidies and fiscal expansion, a response which risked causing another problem: inflation. Socially, local businesses which had been in operation for decades were forced to close, causing emotional stress and unemployment.

Effects on the Wider Economy.

Up to this point, the effects of COVID-19 on small businesses have been observed and analysed, focusing predominantly on small businesses within HICs (High Income Countries), such as the UK and USA. The focus will now shift to economies in a broader sense, including AD and GDP, recession and inflation, to gain a fuller understanding of the damage caused by the pandemic.

1 The text referred to is “Covid-19’s impact on small business: Deep, Sudden and Lingering”

The United Kingdom:

The COVID-19 pandemic had macro-economic effects similar to those seen in a typical recession.

The most obvious similarities were the closure of businesses, the consequent increases in unemployment and the decline in GDP which came as a result of a fall in economic activity and a lower output of goods and services, together with large-scale government intervention.

For instance, to counter a recession, the government will use fiscal and monetary policy measures – fiscal policy refers to the use of government spending and taxation, whilst monetary policy refers to the control of interest rates and related tools by the central bank – to support individuals and businesses and to moderate economic growth or decline.

The UK Treasury and the Bank of England used these policies during the pandemic. The furlough schemes, which provided grants to employers so they could retain and pay staff during lockdowns by furloughing employees at up to 80% of their wages, were an example of government fiscal policy. Monetary policy measures during the pandemic included the Monetary Policy Committee (MPC) cutting the base interest rate to 0.1% in Q1 of 2020, an all-time low, whilst the quantitative easing programme was expanded to a peak of £895bn, an increase of approximately £450bn between 2020 and 2021, according to the Bank of England.

On the other hand, there were many key differences between the COVID-19 outbreak and a typical recession, namely regarding the speed and depth at which they impacted the economy. The pandemic had a deeper, more rapid impact on the UK economy compared to a more typical recession, because of the sudden and significant disruptions to businesses and households. In a typical recession, economic activity contracts and recovers gradually.

The COVID-19 pandemic saw a sudden contraction in economic activity, closely followed by a rapid recovery in some specific sectors, namely technology, e-commerce and online communication services (i.e. Zoom, Microsoft Teams, Google Meet). Using examples of previous economic recessions within Britain, such as the recession of the early 1990s or that of the early 1980s, we can observe that these economic trends are specific to the pandemic, and not to other economic recessions such as those listed. The pandemic was distinctive from other recessions because it led to simultaneous disruptions to virtually all aspects of life, including work, education, health and healthcare, and leisure.

However, it is important not to exaggerate the severity of the “pandemic recession”. We can understand that in the UK, the pandemic did not look ‘especially remarkable compared to past recessions, with respect to its immediate impacts on the employment rate (fell from 61.7% in Q4 of 2019 to 60% in Q2 of 2021)’ (Blundell, 2022). This compares favourably with the most recent economic recession in Britain prior to COVID-19, that of 2008, in which the employment rate fell from a peak of 73.1% in Q1 2008 to 70.2% in Q4 2009.
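To make the comparison concrete, the two falls quoted above can be expressed in percentage points:

\[
61.7\% - 60.0\% = 1.7 \ \text{percentage points (pandemic)}
\]
\[
73.1\% - 70.2\% = 2.9 \ \text{percentage points (2008 recession)}
\]

so the immediate fall in the employment rate was indeed smaller during the pandemic than in the 2008 recession, supporting the comparison above.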


It is possible that this favourable comparison is partly the result of effective fiscal and monetary policy during and following the pandemic, but according to Richard Blundell (see note 2), though the government eased the situation in some senses, it left the public finances in an ‘unusually precarious position’, with public sector net debt reaching 98% of national income in 2021-22. The consequence of this is that future generations (such as my own) will have to finance these significant losses; in other words, short-term success has caused long-term problems, unless future inflation and/or rapid growth wipe out the debt.

The significant rise in remote working as a result of the pandemic also has longer-term economic implications. The shift to online work has created significant structural changes within the working environment, with many firms still offering remote working even after the easing of lockdown restrictions and the general social acceptance that the pandemic was over. The changes made within the workplace have also led to fears of rising wage inequality, which would pose a problem for the government because it conflicts with one of the macroeconomic objectives: evening out the distribution of income.

The reduced working hours experienced by the vast majority of the labour force led to a substantial reduction in many people’s incomes. This disproportionately affected low-paid workers as opposed to higher-paid workers, due to the increased likelihood that lower-paid workers worked in sectors where hours were reduced. For instance, people who worked in jobs that relied on face-to-face contact, such as hairdressing (where online working was impossible), were more likely to experience reduced working hours than those branded ‘essential workers’, such as doctors, who if anything experienced a rise in working hours due to increased demand as more people fell ill.

The pandemic is also likely to have caused administrative, managerial and office workers to broaden their skillsets, because of the increased reliance on technology. For instance, many in the labour force are likely to have learned skills such as touch typing or coding, or increased their familiarity with useful applications such as Microsoft Excel, to better aid them in their new working environment. Additionally, companies and individuals were forced to innovate and find new ways of doing things due to the constraints of the pandemic. As a result, the productive capacity of the economy in fact increased to an extent, but this has also caused an increased reliance on technology, and thus a greater need to educate and train people in the labour force to develop the skillsets needed to accommodate it. The increase in productive potential was only experienced in select sectors, so its effects have been uneven across different areas of the economy, and the extent to which productive capacity actually increased is therefore limited. Consequently, following the end of lockdown restrictions, workers within the labour force have emerged with ‘mismatched’ skill sets, causing disruption across several sectors of the economy.

2 The text referred to is: ‘Inequality and the COVID-19 crisis in the United Kingdom’

Sweden

It is useful to compare the economic effects of the pandemic in several countries, more specifically in countries which had differing levels of lockdown restrictions. Sweden serves as a significant example of a country that imposed extremely light restrictions, and comparing it with Britain will show how differing levels of government intervention affected the respective economies.

The Swedish government took a more lenient, albeit controversial, approach to handling the coronavirus pandemic, opting to focus on economic rather than social issues, its social regulations being limited to bans on large gatherings and travel restrictions. The consequence of such restrictions, or more accurately the lack of them, was that Swedish COVID-19-related death rates were amongst the highest in Europe during spring 2020, with overall excess mortality between 2020 and 2021 rising to 0.79 per 100 inhabitants.

However, the lack of social regulation was perhaps partly responsible for the Swedish economy performing at a far greater level of production and efficiency than its European counterparts, such as the UK. This directly contradicted the widely accepted assertion that Sweden’s risky laissez-faire approach would not only result in the country experiencing a very similar degree of economic turmoil to other nations, but would do so at a much greater social cost in lives and wellbeing, the latter of which holds some degree of truth.

The economic policies implemented by the Swedish government were successful in maintaining and preserving the economy during the pandemic. Sweden was unique in its economic approach to combatting the pandemic, with one of its major economic measures being the implementation of a work compensation programme. This programme was available for companies to use when faced with pandemic-related challenges, including those relating to finance and production, as described in Labour Market Effects of COVID-19 in Sweden and its Neighbours. Its significance was that it allowed employees to reduce their working hours by up to a maximum of 60%, whilst the government would cover the money lost by firms due to the decreased working hours by offering a short-time work allowance. This measure was employed between 16th March 2020 and 30th September 2021, with approximately 73,500 companies having applied for support by 13th July 2020, and 94,289 applications as of 15th January 2021, with the total value of government support paid out being SEK 26 billion (approx. $2.26bn) and SEK 31 billion (approx. $3bn) respectively (Juranek, 2021).

The result for businesses was a reduction in wage costs of 50%, and therefore a general fall in costs of production, whilst workers retained almost 90% of their original pay. Moreover, the macroeconomic implications of this involved a slight increase in aggregate supply - aggregate supply referring to the total amount of goods and services firms are willing to supply at a given time and price level - and a far smaller decrease in consumption relative to other nations, as consumers’ disposable incomes fell by a much smaller margin in Sweden than in its neighbours.

Overall, it can be clearly observed that Sweden’s economy experienced a far greater degree of success during the pandemic than that of the UK and of other European nations in general. This is indicated most clearly by figures from Q1 of 2020, in which there was a percentage decrease in quarterly GDP amongst many European HICs, including Germany, Italy, France, Spain and the UK. In contrast, Sweden experienced an increase in quarterly GDP of around 0.1%.


The principle that ‘correlation does not equal causation’ implies that Sweden’s economic success was not necessarily a direct consequence of limited social restrictions, but it can be argued that limited social restrictions and carefully considered economic policies combined to produce Sweden’s economic success, a clear difference from the UK and other western European countries.

New Zealand:

New Zealand’s restrictive social policies during the COVID-19 pandemic contrast with Sweden’s in almost every way, with the New Zealand government opting to impose firm social measures in an attempt to eliminate, or at least heavily reduce, infections within the country. As a result, New Zealand serves as a perfect comparison for analysing the impact of differing social restrictions on these countries’ respective economies, and in turn for gaining a more accurate understanding of the effects of the pandemic as a whole.

New Zealand’s social restrictions were severe, but highly effective, as can be observed using the following graph provided by the International Monetary Fund (IMF):

The graph indicates that New Zealand managed to consistently and effectively contain the virus, more so than European, North American and Asian countries, particularly during the latter years of the pandemic. The restrictions responsible for this include inflexible quarantine requirements, encouragement of frequent testing, contact tracing and social distancing. Perhaps the main reason for the social success of these measures was their consistent and calculated implementation, in contrast to other countries such as the UK, which experienced constantly changing social regulations, resulting in uncertainty from both a social and an economic standpoint.

How did New Zealand perform economically?

New Zealand’s economy rebounded at a very fast rate, performing comparatively better than most other countries both in terms of economic recovery as a whole, and other measures such as monetary and fiscal support, which were highly influential factors in facilitating the recovery.

The economic prosperity of New Zealand may seem surprising initially when considering the extent of its lockdown restrictions, which would lead one to assume that businesses, specifically in the service sectors, would have suffered hugely. In truth, the economic damage inflicted by the pandemic, though undoubtedly present, was largely limited as a result of the successful economic stimulus measures implemented by the government.

One such measure was the wage subsidy scheme, in which NZD 5.1 billion was granted by the government towards subsidising wages for businesses across all sectors and regions. This was used in conjunction with the Small Business Cashflow Loan scheme, in which loans of up to $10,000 could be claimed by applicants, with a further $1,800 for each full-time employee. These loans had specific criteria that needed to be met in order for applicants to be eligible, including declaring that the funds would be used for the business only, and that loans would be repaid within 5 years, excluding the first 2 years. The value of these loans to businesses was substantial, allowing many to stay afloat and push on through the turmoil of the pandemic. Further to this, there were changes made to business tax measures, including a tax loss ‘carry back’ rule, which effectively provided businesses with cash flow relief, as well as easing their recovery and offering financial stability. Clearly, businesses were heavily supported by the New Zealand government during the pandemic, with reported spending of between $8 and $10 billion towards easing the burden on them, which was a hugely significant and successful way of limiting the widespread damage.
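As a rough illustration of the loan amounts described above (the five-employee firm is hypothetical, and any overall cap or further eligibility rules in the actual scheme are not considered here):

\[
\text{loan} \approx \$10{,}000 + \$1{,}800 \times n_{\text{FTE}}, \qquad n_{\text{FTE}} = 5 \ \Rightarrow \ \$10{,}000 + \$9{,}000 = \$19{,}000
\]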

This notion is reiterated when observing the following graph, provided by IMF (produced by the WEO):


The graph indicates substantial spending on fiscal policy measures by the New Zealand government, placing it higher than both Sweden and the UK in this respect (Raman, 2021). This is significant as it highlights why the New Zealand economy was able to perform as it did during the pandemic, even with the severe social restrictions implemented by the government.

However, New Zealand did not perform unexpectedly well in all aspects of the economy by any means. One such aspect is productivity, in which New Zealand fell well below the average rate achieved by other countries. This is unsurprising, given the severity of the lockdown measures taken by the government, as workers were bound to be much less productive and efficient under heavy restrictions. Though the government was successful in supporting many businesses financially, the direct cost of its social measures was productivity.

Moreover, New Zealand ranks relatively low compared to other countries in terms of research and development, with the government, universities and businesses directing a much smaller proportion of the country’s GDP towards this area than others did during the pandemic. Expenditure on research and development is crucial for innovation and maximising efficiency, a leading factor in increasing productivity; its absence can therefore be recognised as a cause of the aforementioned low productivity growth.

Overall, though, New Zealand’s economy performed relatively well both during and following the pandemic, despite the firm, restrictive social measures implemented by the government. The economy grew by 1.6% in Q4 of 2020 (Joyce, 2020), which implies a strong recovery and a faster rebound than most others. The unemployment rate has seen a steady decline since the last major lockdown, an indication of a productivity increase, albeit one relatively slower than in other countries such as the UK and Sweden. The overall economic success can be attributed to a strong government fiscal and monetary response, with great significance placed on supporting businesses throughout the pandemic, a crucial and highly successful measure. New Zealand’s stringent lockdown measures did, however, come at the cost of productivity, with the country performing at a far lower level of productivity than Sweden, whose lockdown measures were much more lenient. Ultimately, the effect of the COVID-19 pandemic on the New Zealand economy was largely limited, thanks to a combination of strong government decision-making both economically and socially.

Overall Conclusion

To conclude this project, the answer to the original question of ‘How has COVID-19 affected world economies?’ is unsurprisingly ambiguous. The COVID-19 pandemic had a vastly differing impact on different economies, as assessed throughout the course of this project. It can be said that some economies performed better than others; for instance, both Sweden and New Zealand can be convincingly argued to have performed better economically than the United Kingdom during and following the pandemic, in part due to highly differing levels of social restrictions, but mainly as a result of the economic policies implemented by each, and how these worked in conjunction with their respective social policies. The overall effect of the pandemic on small businesses within the UK was detrimental, specifically in the service sector, despite the monetary and fiscal policy measures implemented by the UK government to combat the virus. The question of whether the level of social restrictions impacted economic performance can only be answered in part, as it was observed that both ‘extremes’ outperformed the ‘control’ that was the UK, with no real indication of whether one ‘extreme’ – i.e. lenient or strict – was particularly more favourable economically than the other. Ultimately, the effect of COVID-19 on world economies was highly damaging, but the extent of the damage differed between countries, depending on their economic and social countermeasures.

Bibliography

Strain, M. (2020). Covid-19's Impact on Small Business: Deep, Sudden, and Lingering. Available at: https://www.aei.org/research-products/testimony/covid-19s-impact-on-small-business-deep-sudden-and-lingering/

Blundell, R. et al. (2022). Inequality and the COVID-19 crisis in the United Kingdom. Available at: https://www.ucl.ac.uk/~uctp39a/annurev-economics-051520-030252.pdf

Juranek, S. et al. (2021). Labor Market Effects of COVID-19 in Sweden and its Neighbours: Evidence from Administrative Data. Available at: https://onlinelibrary.wiley.com/doi/10.1111/kykl.12282

Raman, Kido and Hussiada (2021). The Land of the Long White Cloud: Turning New Zealand's Recovery into Sustained Growth. Available at: https://www.imf.org/en/News/Articles/2021/05/25/na052521-the-land-of-the-long-white-cloud-turning-new-zealands-recovery-into-sustained-growth

Greenaway-McGrevy, R. et al. (2020). New Zealand's Economic future: COVID-19 as a catalyst for innovation. Available at: https://apo.org.au/node/309364

Joyce, T. (2020). New Zealand: Government and institution measures in response to COVID-19. Available at: https://kpmg.com/xx/en/home/insights/2020/04/new-zealand-government-and-institution-measures-in-response-to-covid.html


Sophie Graham

ENGLISH

Sophie Graham chose to write a Gothic horror short story highlighting discriminatory portrayals of autistic people within that genre. She chose this focus because, as an autistic person who is fascinated with horror, she wanted to examine how bigotry can influence the definition of a ‘monster’, and make the reader question their preconceptions in doing so. The project told the story, through a series of letters to a Priest, of a religious Christian mother abusing her autistic son as she believes he is possessed by a demon. Sophie is studying English Literature, History and Politics and wants to pursue English Literature and Creative Writing at university.

A Mother’s Love

Sophie Graham


Abstract

This short story explores the presentation of autistic people within the horror genre, and also touches on attitudes towards autistic traits within wider society. It examines how horror stories often encourage readers to think of autistic people and traits as terrifying or evil. By employing classic horror tropes, the story misleads the reader into initially sympathising with the mother and believing the son is possessed, before slowly revealing that the child is autistic and being abused by the mother.

The format of letters shows the progression of the mother’s views over time. The aim is for readers to question preconceptions around autism as a result of this.

The other main function of the story is to look at the phenomenon of religious 'autism moms' in America. Caroline Miller acts as an over-dramatised version of real Christian mothers who believe their child's autism is evil and seek to cure it. Her descent into paranoia and blatant abuse of her son within the story highlight how nonsensical these views on autism are. The tragic ending of the story is a commentary on how ableism towards children causes far more suffering than accepting them for who they are.

The story raises the question of what true horror is - is it supernatural ideas like possession or human actions, which can be just as terrible? Can the horror genre help challenge harmful stereotypes, or only perpetuate them?

Words: 231

October 14

Dear Reverend Father,

For months, I told myself I wouldn't write to you except as a last resort. I didn't want to disturb you from writing your sermons, especially given what you must think of me now. I'm perfectly aware that I haven't attended Church for months. I've given you and your congregation no reason to help me after I abandoned you without a word of explanation. All this time, I've told myself this: that if the demon took over enough of my child that I began to genuinely fear for his and my safety, as well as Jeremy's, I would reach out to you for help.

Jeremy used to say that I’m paranoid. Apparently I have a tendency to blow things out of proportion. You know, I flat-out refused to see him the day before our wedding, in case it caused bad luck. I’ve always been a bit superstitious like that, since I was a kid. It’s because I just get so anxious about everything. It’s hard not to be, though, knowing that the Devil could be tempting me to turn to sin at any opportunity.

I’m sorry. I’m getting off topic, trying to avoid making my actual point. Even thinking of what this demon has done makes me tremble, so I apologise if my handwriting is hard to read, or if the ink is smudged. A letter is an archaic method of communication, I know, but I’m too scared to use my computer. I’ve seen him operate it, and God forbid he sees this letter. If he did, would he find me in my sleep? Would he take my soul too, or would he have the mercy to kill me? Oh Lord, would he still pretend to be my son in the moments before he ended it?


No. I can’t let myself think like this, or I’ll let him win. At least I have the chance to post this letter before he - it - returns from school.

I don’t know when this began. My earliest memories of my son are of him being the sweetest child, round-cheeked and happy, always willing to play games with me. He’s now nine, and it must have been two or three years ago when the Devil took hold of him. It started small, so small that I barely noticed - occasional outbursts, strange behaviour, typical childish things. I thought it was just a difficult age, but as he grew, he showed such a sudden shift in personality that it became almost impossible to think of him as the same boy.

It must have been a year ago when his lessons were taken over by his old first grade teacher whilst his current teacher was on maternity leave. They called me after he threw a fit in class. I came into school to talk to the teacher about this and realised she’d taught him for four weeks and assumed he must be a brother of my son, or coincidentally share the same name, because he’d changed so much in two years. Although I tried to deny it for as long as I could, it is now clear to me that there is a demonic aspect to my son’s change.

I don't know if you've noticed, but I don't write his name anymore. I can't. It feels like sacrilege, somehow. It's not right, not when I'm sure, more sure than I've ever been, that this is not the same gentle boy that I gave birth to, and it makes me sick to use his name in reference to the monster that has taken his place. That's one of the reasons I can't admit this to anyone except you. They'd think I'm mentally unwell, and they'd try to lock me away and silence me because they've all been taken in by the Devil, they'd force me to keep raising it like my son!

But this is not about me. I have to make you understand.

His own body is constantly fighting against itself. He twitches his hands in an unnatural manner, moves his body in ways that it was not designed to move. The demon used to be able to disguise this as a child’s restlessness. Now, it seems to revel in making me watch my son fight. Especially in public, as if it wants as many people as possible to witness my torment, it will drop to the floor and roll around, letting out the most terrible whining sounds. I once walked into my son’s room to find it making horrible, utterly inhuman screeching noises from the back of its throat. I feel ill when I think about the fact that my son is in there, trying helplessly to escape.

Is he in pain? Is there enough of him left to feel anything? Am I too late? If only I’d written to you earlier, maybe even approached you at Church, you could have stopped this. He must be nearly lost by now. Why have I been so selfish?

I don't think he feels anything. It was the hardest thing for me to accept, as a mother, that my son does not love me. At least, the thing that is not him doesn't. He recoils from my embrace, can barely meet my eyes, doesn't react when I present him with the sort of treats that other children would beg for. He has never expressed any feelings towards me, let alone love. I threw him a birthday party and he… it… didn't eat a single slice of cake, even as the children around him stuffed their faces. I pleaded with it to eat and be a normal child for once, for me, but it just feigned distress. The demon has stolen his soul, taken his ability to feel love, the one thing that makes us human. Filled only with sin, of course he cannot stand my Christian goodwill.

The incident happened the last time I attended Church. Oh Lord. I don’t know if I can. I’m sorry, Reverend Father. My hands are shaking, you can probably tell. I did mention that I would tell you, even though it’s - I have to. My nightmares about that day will only continue if you don’t save my son, which I must. So, here it is.

The service was almost finished and the congregation had joined together in song. I had closed my eyes, lost in the beauty of the rising organ music, my community and my faith.

It was at that point I heard it scream.

My eyes opened and I saw what I had until that point believed to be my son writhing on the floor, crying out and rocking back and forth. The Lord’s name was sung, and the demon convulsed violently. It couldn’t bear to hear His name. Overtaken by this sudden and terrible knowledge, I grabbed the demon and ran. Once outside and further from the presence of the Lord, it seemed to recover itself. Other members of the congregation didn’t think anything of it - they assumed this was a child’s tantrum - but I knew better. What other than possession could make my son break down into a frenzy from hearing Christian prayer? I have not brought the demon, nor myself, to Church since, for fear of corrupting such a holy place.

You must understand why I’m writing to you so urgently. I don’t know what the demon will do next, whether it will target me or Jeremy or - God forbid - the innocent children at its school. You know that all the time I’ve been a member of your congregation, I’ve been nothing but faithful, except in the past eight months.

I come to you in desperation. Please find the kindness in your heart that the Lord grants to all of us and hear my cries. All I ask is that, having read this account of my suffering, you visit me and tear this thing from the body of my child. I’m begging you.

Respectfully yours in Christ,

November 3

Dear Reverend Father,

Your visit was unsuccessful. Although I appreciate your efforts immensely, and I prayed every night that it would work, it seems that you alone were not enough to exorcise the demon.

It reacted badly, as I'm sure you remember, convulsing and spitting and attempting to attack both of us. This behaviour has only grown worse and more common. The demon must have realised my efforts at extraction, and this has made it violent. Its teachers have informed me that it will frequently go into fits on the playground, and it makes no effort to engage with the other children except to scream at them if they attempt to come near - which is a small relief, I suppose, for the children's sakes. I don't want to think about what would happen if it engaged with them.

Last night, I woke up to find the demon standing above my bed, watching me. I screamed and it screeched in response, and even after I took it to my son’s room and locked the door from the outside, I was unable to get back to sleep because my heart was pounding so fast. I saw it standing over me every time I closed my eyes.

Today, I found it alone in my son’s bedroom repeating demonic chants, and when I tried to shut it up it left deep scratches in my arms and my neck.

I don’t feel safe in my own home. I look around every corner, jump at any slight noise, and now I’m struggling to get to sleep. I’ve started to lock the bedroom door, as well as the demon’s. Jeremy is worried, for both me and the thing that is most definitely not our son. We have no options left, but I live in terror at the thought that I will get taken too if I do nothing. What can we do? Is there any option left except submitting?

Yours in Christ,

November 18

Dear Reverend Father,

Thank you for your fast reply, but you don’t need to worry anymore. Your offer of a place to stay was incredibly kind, and I’ll keep it in mind in case things deteriorate. However, my last letter was sent when I felt a desperation that is no longer relevant, and I would like to think that I’ve made the first steps in getting the situation under control.

I’ve been experimenting. After I sent my last letter, terrified, I realised that I needed to take more drastic action. I’d like to think I acted logically, but honestly I was in such a haze of panic after handing that letter over in the post office that all I knew was I had to do something, anything. I couldn’t stop thinking about the possibility that the demon would come into my bedroom at night and corrupt me - or worse, it already had, and I didn’t even know.

I sat in the post office for so long that Sue had to make sure I was alright. She works at the front desk, taking the parcels and handing out labels and what-not. Have you ever met her when you’ve come to town? I remember when I was a child and she would give me a lollipop every time I came to drop off a letter for Dad. I thought she had one of the most important jobs in the world, guarding the post until it gets delivered. I suppose, in a way, she does, standing at her desk so that the demon wouldn’t be able to get to these letters if it tried. Still, I didn’t like the way she looked at me, with a mix of pity and disgust. As I said, I hadn’t been sleeping.


By the time the sun started to go down, I’d decided that I wouldn’t let this creature of the Devil ruin my life and take away my happiness. Its presence in my house sickened me. If it wanted my son, then it would have to fight for him.

I wandered home in a daze, unable to quite decide if I should (or if I even could) commit to my newfound motivation to fight. It took so long to convince myself to go home I almost considered running as far as possible, to anywhere except that house, until I remembered the possibility of getting my son back and kept walking in the right direction. The first thing I heard was screams, and I found the demon writhing at the kitchen table whilst Jeremy rushed around frantically, trying to calm it down and clean up the mess and calling for me to help. There was a plate of spaghetti thrown onto the floor. I was filled with such an immense rage at the sight of the thing that I grabbed it and dragged it to my son’s room. It screamed and kicked but I simply shoved it in and locked the door from the outside. I wouldn’t give it the reaction it wanted, and I couldn’t bear to look at it any longer.

Jeremy was angry. He said that I couldn’t do this to our son, even if I did really believe he’s possessed. I hadn’t felt safer in months.

That's perfectly reasonable, isn't it? I'm sure you'll agree. You were horrified when you visited us, you've dedicated your life to spreading the Lord's message, and you're the only person I can trust to know I'm not being overdramatic. After seeing my constant fear you've got to understand why this was necessary. That was why, without telling Jeremy, of course, I waited almost 24 hours to let it out.

Father, I can’t begin to express to you how dramatic the change in behaviour was. You see, it protested at first, as per usual. For a good few hours, yells and crying echoed around the house, though I invested in a pair of noise cancelling headphones months ago and was able to mostly block it out. It even pleaded with me for a short while, pretending that it was still my son and that it actually loved me, though I’m not close to stupid enough to fall for that one. But after a while, it fell silent. I spent some time enjoying what I assumed was a brief moment of bliss, before it occurred to me that I should have heard something by then. I was thrilled by the idea that I had either forced the demon out, or at least forced it to be quiet. So I went to bed. When I woke up, I almost let it out, until I saw the mess in the kitchen again and decided it could do with a few more hours in there. To teach it a lesson.

Evening came again. I unlocked the door and found the demon shaking in a corner. It didn’t even protest beyond a whimper as I grabbed it to take it to the kitchen to eat (my son’s body still needs nourishment, though I loathe helping the demon get stronger). Even better, over the last few days, it has not bothered me with a single word. It’s become almost docile, exactly as my son used to be. I saw it cry when it thought I couldn’t see - real tears, not the snotty, loud ones that the demon fakes for my sympathy. It’s clear that I’ve been able to successfully weaken the demon, and this is my son coming back through.


I feel invigorated. The Lord has finally decided to show me light in my terrible situation. It gives me strength to think I now have the power to make that vile creature leave, bringing my son back to me and holiness back to my home. Following this success, I am going to attempt to extract the demon through new methods.

Wish me luck. If I don’t write within a few months, assume that I’ve been taken too.

Yours in Christ,

Caroline Miller

December 30

Dear Reverend Father,

I couldn't believe the way you looked down on me in your last letter for the way I’d treated the demon, purely because it takes the form of my child. I do understand your scepticism, perhaps I would also take your stance if I was watching this happen to someone else, but you’ll never truly understand what this thing has done to my life. Not even Jeremy does, and he’s the one who’s seen the bags under my eyes. Every day I must contend with having a mother’s infinite and unconditional love for her son, balanced with the knowledge that that boy no longer exists. Even if you don’t trust me, I know the Lord is on my side.

My experiments have continued to work, with the demon growing quieter with every passing day. On the rare occasions when it screams, I’ve discovered that if the threat of no dinner isn’t enough to shut it up, a smack will do the trick nicely. The crying hasn’t happened again, but I can’t complain - the demon has finally been silent enough for me to focus on my reading, which I haven’t had the time or energy for in years. I’d never realised just how much of my time it was taking up until I discovered how easy it is to shove it in its room. I’ve had a few questions from the schools, but they’ve mostly been on how I managed to make my ‘son’ into such an agreeable child in such little time. The bruises are easy enough to explain away as being from soccer games.

I’m doing this all for my son, of course, who I love so much. Every time I slap the demon, or force it to stand still for an hour (it especially hates when I do that), I’m thinking of him. I’m certain that now that I’m managing to force the thing back, my son is going to reappear, the tears were proof of that. I can’t wait for the day when this evil is gone from my home.

I wish I could end my letter there, with the news that I seem to have got to the bottom of my problem. Maybe I could, if I was more naive, but I can’t ignore this. I know what this Devil is capable of.

I mentioned in my last letter that Jeremy got ridiculously angry at me for locking the demon in a room. This has only become more pronounced since. Absurdly, the night he discovered I've been refusing the demon food when I get particularly fed up with it - it pretends to care, sometimes, as if it even needs food - he was so upset he refused to sleep in the same bed as me. Came back the next night without saying a word, though. At first, I thought his reaction was only because he knows so little. Of course, a father will never understand a child in the same way their mother will. Whilst he's seen the demon's actions, perhaps there was a part of him that still believed it was his son - but I now think that something much, much worse is happening.

Jeremy has been taking the demon out of the house at times when it doesn’t have school or clubs, and thinks I haven’t noticed. He says it’s to take it to soccer, or choir, or whatever other excuse he pulls out of thin air, as if I don’t relish the time it’s out of the house enough to know the demon’s schedule by heart.

I see him sometimes, at the dinner table, looking at me and then glancing away as soon as I meet his gaze. He switches to another tab on his phone when I come close, orders packages and grabs them from the doormat before I can see what they are, and I’ve noticed him whispering with the demon through the kitchen window when I return from the shops. Last night, I put my arm around him in bed and he flinched. Is he simply angry at me? Or is it more? Is he hiding something? I love Jeremy, I didn’t think he would be the kind of man to do this, yet. What is he hiding?

I lied when I implied that I don’t know what this is. I do know. Or, at least, I think I do. The horrible sinking feeling at the bottom of my chest does. I just don’t want to admit it to myself.

My hand is shaking again, I’m sorry for the handwriting. I’ve been trying to put this off, so I’ll write it in simple terms, to spell it out for both you and myself: the demon has realised I’m forcing it from my son, and has begun to influence Jeremy as well.

Once I’ve written it, it feels undeniable. Why else would he whisper with the Devil, take it out of the house without my knowledge, or get so unnecessarily riled up over my attempts to stop the possession? Looking at these words on the page makes me deeply nauseous, and I’m now shaking harder than before. I love my husband so much, as any wife should. I can’t believe that Satan would come for my family like this when all I’ve ever done is be a faithful Christian, when I’d really thought I was successfully driving the demon out. But that’s how the Devil corrupts us, isn’t it? The influence on Jeremy is obvious, and if I don’t do anything, the demon will surely come for me too.

I feel even less safe in my own home than I did before. Every night, I have to share a bed with a man who is under the influence of something that isn’t my son, and during the day I must endure its twisted shows of affection. They’re conspiring against me, I know it. How can I even trust that my thoughts are my own, or if the demon is forcing them into my mind? How do I know anything I do is my own choice?

I can’t get taken too. I’m going to get my son and husband back from the Devil, save my family, whatever it takes. I would appreciate any help you can send, but by now I doubt there’s anything anyone can do. Don’t trust anything I send you if it seems suspicious, because it’s likely the demon has taken control of me.

If you have any more comments to make on the ‘morality’ of my actions like you did after my last letter, then don’t bother responding.


Yours in Christ (at least I can still write His name - doing so fills my heart with hope),

Caroline Miller

February 11

Dear Reverend Father,

I’ve won. Or, at least, I’m winning. I think I am. It’s hard to trust my own mind, lately. Still, I know when I see the terror in the demon’s eyes that it’s working.

The demon is tied up in my basement. It hasn’t eaten for days. I can see it suffering, and it fills me with the greatest joy I’ve experienced in years. I know how this sounds, but please don’t try to interfere, as this is between me and the Devil. If anything can kill it (and I am increasingly certain that it’s possible), I want it to be me, after everything it’s done to my family. That abomination against God and all things good came to the Earth to torment me specifically, and I will relish when I can end its existence, just as much as I do seeing it in pain.

Jeremy has been gone for five days now. Or has it been a week? It feels like an hour, or maybe a year. I don't know where he is, and honestly I can only hope that he's dead by now. If not, he must be fully in its control, beyond help. I remember him yelling at me in the kitchen, pointless words like 'cruel' and 'heartless' and 'unfair'. He seemed to truly believe that I'm somehow the evil one in this situation, as if this Devil hasn't been making my life hell for years. It's not fair that this happened to me and my family, that I was stuck with this demon child who takes joy in my pain, that I'm the one who had to clean up after it for years! How dare he suggest that my attempts to make my life just a little easier make me the bad one? After that, he left, and I haven't heard from him since. It's hard to be upset that he's gone, knowing who he was by the end and what he could have done - I can only mourn the version of Jeremy that existed before he was taken.

The demon has become more violent since I tied it up, twitching its body and letting out these horrible noises that are between a cry and a scream. It disgusts me so much I have to restrain myself. Well, I’m sure you don’t want to know the gory details of how I think about hurting it, but let’s just say that sometimes I give in. Frankly, by now, I don’t think my son can be saved, or he would surely have come back to me by now. All I want to do is make sure that the creature that did this suffers.

Now it’s taken Jeremy, it can take me too, I’m sure of it, even that it’s planning to. Jeremy’s faith may not have been as strong as mine, but he was still a good man. I’m beginning to fear that even Christ Himself cannot protect me from this thing (blasphemy, yes, but what else is there for me to believe after everything). I’ve already noticed my concentration slipping, irrational moments of anger overcoming me. I need to end the demon for good, before it’s too late for me. If the Lord has truly decided in all His wisdom not to intervene, then it’s become my duty as a Christian to rid the world of this stain.


We’ve had a baseball bat in the garage for years, from when Jeremy used to play from time to time. Although it might not be the cleanest method, I’m sure it will work. Pray for me (I’m not sure I have the strength to, anymore).

Yours in Christ,

Caroline Miller

February 18

Dear Reverend Father,

Damn you. Damn Jeremy, damn the un-Godly men and women who took my revenge from me, damn every single 'Christian' in that Church who truly believes that the demon who ruined my life deserves saving. It sickens me that you read every single one of my letters, the most detailed possible accounts of my suffering, and still decided to turn against me. Satan has taken control of you as well, hasn't he? Of course he has, he was manipulating you all along, I'm sure of it, that's why you were 'unable' to stop the demon when you visited my house, isn't it? Don't bother lying. It all makes so much sense, in retrospect. Was there ever anyone in my life who was untainted?

I've been living in constant fear for my soul. Is that what you wanted? It must be. Jeremy - the Devil in his body, more accurately - came back to live with me, to pretend it cared about me and 'wanted the best for me' and thought I needed time to 'recover' away from my 'son'. I can't begin to describe what it was like being trapped in a house with an angry demon that I knew had complete power over me. All I could do was wait for the moment when it had had enough of toying and took me over completely.

Jeremy’s been staring at me from the couch, his eyes wide and beady. Honestly, I’d never imagined a corpse would look so dead so quickly, but I suppose he didn’t have a soul in him to begin with. All my knife did was finish the job. I hope, at least, that whoever moves into the house after us doesn’t have too hard a time getting the blood out of the carpet - it seems to be spreading incredibly fast against the white wool, maybe the carpet might even need to be replaced entirely. That can be very expensive these days, you know, what with inflation and all.

I think I’m getting ahead of myself, though the blood stains on the paper probably tipped you off. I’ve tried to clean my hands, but no matter how hard I scrub, the blood stays. It’s a good reminder of what I need to do next.

You did this to me, with your and 'Jeremy's' schemes to expose me. I hope every single 'social services' worker who took away my chance at revenge, my chance to save the world from this demon, rots in hell. They're the ones who will have to deal with the consequences of letting this thing walk free, and their made-up excuses about 'autism' won't be able to save them then. I hope the Church realises you're corrupted and completely unfit to be a Priest, and takes everything from you. I don't even have to hope that things end badly for 'Jeremy', because thanks to me, his fate has been assured; if his actual soul is still out there somewhere, I don't doubt that he has made it to Heaven. I only worry for the next family the demon will be sent to, whose lives it will take over just like it has mine.

But, after everything I’ve been through, I don’t think that should have to be my problem anymore. I’ve tried to do my Christian duty, and at every opportunity I’ve been stopped. I have one option left, and I’m going to take it.

I know that it will succeed soon, or maybe it already has. I can feel it worming its way into my mind. It wasn't hard to come to the decision I have - I won't let a version of myself possessed by that thing walk free, and taking out another of its slaves along the way, that's an added bonus. I'm sure that if the true Jeremy were here, he would thank me. The Bible may consider murder and suicide to be sins, but it would be a much greater sin to allow myself and Jeremy to continue to live, knowing we'd be doing the Devil's bidding. If God really is fair and loving, He will know that I belong in Heaven regardless.

As I write my final words, all I can think of is my son. It’s harder and harder to distinguish between before the demon took him and after. One moment, I’m remembering his chubby little cheeks when he smiled, and the next I see him in my mind’s eye convulsing on the floor and flapping his hands at another child’s pizza party. This must be a sign that the demon is already twisting my memories. Now that I’ve dealt with Jeremy, I’ve got to be quick.

All I have left is the burning hatred in my chest for this Devil, with its horrific contortions and tantrums and ridiculous demands. It took everything from me.

Goodbye, ‘Father’. The Lord will understand that I am only doing what I know is righteous.

Forever yours in Christ,

Caroline Miller

Words: 5237

Commentary

I wrote a horror short story as I was fascinated by how some horror stories, such as 'The Horla', spread harmful rhetoric around marginalised groups whilst others like 'The Lottery' and 'The Tower' used the genre to question these views. I wanted to write a story which similarly used the horror genre to question bigotry, and I decided to base it on the common trope in horror media for autistic traits (especially in children) to be demonised. I hoped to use my piece to make the reader question their views and highlight that true horror comes from ableism, not autistic people.

I decided to write my story in the format of letters, which I was inspired to do by reading the novel ‘Dracula’. The epistolary format of my story creates the sense of time passing, which is especially important to show the mother’s descent into paranoia. Over the course of the story, she goes through a dramatic change in personality, so the format is important in making this change believable by presenting it as gradual.

The story being written in first person helps to ensure the reader initially sympathises with the mother. She is an unreliable narrator who nonetheless makes a convincing argument at the beginning by showing her despair and misrepresenting her child’s behaviour. Only her perspective is presented, misleading the reader. This is important so that when it becomes clear later on that her views are misguided, the reader is led to question their own perception of autism. This happens over the course of the story, as the first person perspective begins to give a clear insight into her increasingly twisted views. I used a slow build-up of uncomfortable information as the story progresses to reveal to the reader that the mother is abusing her child, with her perverted logic being shown.

I rely on the reader not knowing enough about autism to immediately realise the child is not possessed, but knowing enough to recognise that the child is autistic as the story progresses. This is addressed by the direct reference to autism close to the end of the story. However, I am not sure the story would achieve the same level of success for a reader who quickly realises the truth, though it would still be horrifying.

I created an anxious tone, which eventually becomes paranoid, through my use of language. In her first letter, the mother uses language such as 'help', 'tremble' and 'pain', which creates a sense of her desperation. In her appeal to the Priest, she describes herself as 'desperate' and 'begging'. This contributes to her coming across as sympathetic at this point, clearly being very upset and not yet using contradictory language. However, as the horror of her actions is slowly revealed, the language she uses changes. She starts to consistently refer to her son as 'it' and 'the demon' and uses incredibly disquieting phrases like 'an immense rage at the sight of the thing', 'I shoved it', 'a smack will do the trick nicely' and 'teach it a lesson'. As the language she uses becomes more powerful and aggressive, it is shown that she is the one in control, not her son.

Religion and horror are interlinked within the story. Initially, her pleas to God use overdramatic language to convey her terror. By the end of the story, however, her references to religion are horrifying in themselves by showing how she fully thinks her abuse is justified. She writes ‘The Lord will understand that I am only doing what I know is righteous’ as her final sentence before killing herself.

The ending of the story is shocking, intending to leave the reader horrified and remembering the story. It is a subversion of usual tropes around autistic characters, as the abused autistic child is saved and it is the mother who ultimately dies. I intended this to demonstrate that the true horror was always the mother’s actions, with her suffering the consequences of her ableism.


Zack Fecher

POLITICS

Zack Fecher chose to research the electoral system in Israel after learning about the unstable nature of Israeli politics and the high frequency of elections in the country. His proposed solution focused on creating a system that had worked well in other countries, would be popular with the public, and would ensure both stability and representation. Zack is studying Politics, French, and Spanish and is looking forward to reading Modern Languages (French and Spanish) at university.

What is the best electoral system for Israel?

Background

To even begin to debate how, or even whether, Israel's electoral system should be reformed, one must first understand Israel's political culture and its history dating back to the beginnings of the Zionist movement in the late 19th century. The Zionist movement centred on the Jewish return to the land of Israel and rebuilding a state there, but it was made up of several different factions that disagreed on the process of achieving this common goal and on how a Jewish state would function. The ideas of Political Zionism, the strand of Zionism supported by Theodor Herzl, the founder of Zionism, were not fully accepted by other strands of the movement. The Labour Zionists wanted a socialist state, the Religious Zionists a religious one, the Revisionists a large state on both sides of the Jordan River, the General Zionists were more centrist, and Ahad Ha'am, the founder of cultural Zionism, wanted "a Jewish State, and not merely a State of Jews" (Ha'am, 1897, n.p.) - a view that favoured the spread of Jewish culture over the land of Israel rather than focusing on exclusive political control or religion. Therefore, as Doron and Harris (1999, p.21) write, the Zionist movement and the Yishuv, the assembly set up by Jews in Mandatory Palestine, worked in a way that ensured as many groups as possible would be represented, so the Zionists could call on a large support base. The highly proportional and inclusionary system of the Yishuv's Constituent Assembly before the state of Israel's creation was therefore kept after 1948. This is why Israel was founded in 1948 with a system of proportional representation, and one that was highly centralised, with one nationwide constituency of 120 seats (a result of the centralised system of governance used by the British in Palestine from 1917-48).

We can therefore see that Israel's society is full of many different factions and its political culture is one of inclusion and representation of as many groups as possible. Israeli society is divided on many different lines: political ideology, religion, and ethnicity (which is not divided merely between Arabs and Jews but also between the different Jewish ethnic subdivisions) all play a part in deciding what parties people vote for. Proportional representation has allowed many of these distinct groups of society (from Russian immigrants represented by Avigdor Lieberman's Yisrael Beytenu to Islamist Arabs represented by Mansour Abbas's Ra'am) to have seats in the legislature.

Problems with the current electoral system

However, the drawbacks of this system have been evident throughout its use over the last seven decades. PR results in a multi-party system in which no party ever wins a majority on its own; instead, the party which gets the most seats in an election must form a coalition with smaller parties, who have played an increasingly large role in the coalition-forming process thanks to the growing fragmentation of Israeli politics. Parties that lie on the 'pivotal point' of the political spectrum - the parties that could form a coalition with parties on different sides of the political spectrum - have an inflated influence, as their demands and conditions for joining a coalition can hold larger parties hostage (Diskin and Diskin 1995, p. 33). Popular dissatisfaction with the left-wing governments of the 1990s that oversaw the failed Oslo Accords - the attempted Israeli-Palestinian peace process - has led to a rightwards shift in Israeli political thought, and in recent years divided opinions of Binyamin Netanyahu have increased fragmentation. Increasingly, extremist parties' demands have had an influence on the largest governing party's ability to operate. This puts the integrity of the whole government at risk, and has led to 25 elections in Israel's nearly 75-year history and, recently, 5 elections in less than 4 years, with the constant withdrawal of small parties' support for the government leaving Israel in a seemingly perpetual cycle of elections. Israel is now ruled by a coalition that includes small extremist right-wing and religious parties like Otzma Yehudit, whose leader, Itamar Ben-Gvir, was once convicted of incitement to racism, and Bezalel Smotrich's Religious Zionism, which ran on a joint list with Otzma Yehudit in the 2022 election. The right-wing parties also want to reduce the power of the Supreme Court, which would upset the system of checks and balances. It shows there is a clear flaw in the electoral system if a party that won 10% of the vote controls whether a government remains stable or not.

Previous attempts at electoral reform

The fragility of the multi-party system and the problem of a lack of accountability on the part of legislators (as all MKs represent a single national constituency and different regions have no local representatives) have made electoral reform a popular subject of debate for the last three quarters of a century (Diskin and Diskin 1995, p. 34). For example, David Ben-Gurion, the first Israeli Prime Minister, favoured a first-past-the-post system in which the country would be split into 120 electoral districts each returning one MK. Mixed systems have also been proposed, like those put forward by Pinchas Lavon of Mapai in 1953 and the General Zionists in 1954, which were not proportional and would have reduced the influence of small parties (Diskin and Diskin 1995, p. 35). Other proposals include one made in the 1950s to raise the electoral threshold to 10% in order to create a two-party system. However, these proposals never came to fruition, as they never received majority support in the Knesset.

Despite this, Israel has experienced two main forms of electoral reform throughout its history: one that failed miserably and has since been revoked, and another that helped reduce the number of parties in the Knesset, though the main problem of government instability remains. The electoral reform of 1992 changed the way the Prime Minister was elected. Instead of the PM being the leader of the largest party in the coalition, the Prime Minister was to be elected by a separate popular ballot. The reform, used in the elections of 1996 and 1999, was meant to strengthen the position of the Prime Minister and his party, but it had the opposite effect: it encouraged split-ticket voting, which meant the small parties ended up receiving more votes than before, giving them more influence and making the country ungovernable. The system was promptly discarded before the next election. Therefore, any system proposed for the future must consider the danger of promoting split-ticket voting, which may result in the smaller parties gaining more power - an outcome that would fail to make the Israeli political system more stable.

The Israeli electoral threshold has already been changed four times, with each change aiming to reduce the number of parties in the Knesset (Troen 2019, pp. 12-13). Israel's electoral threshold now stands at 3.25%, equivalent to roughly 4 of the Knesset's 120 seats.

While this has reduced the number of parties in the Knesset, as shown by the table below, the problem of small parties acting as kingmakers and holding disproportionate power remains. The recent crisis of four elections in five years was not prevented by the raising of the electoral threshold. Raising it again would further reduce the number of parties in the Knesset, although at the expense of voter choice and with more votes being wasted. As shown in this graph, Israel's current electoral threshold is in line with most other European democracies - it lies between 3% and 5%.

| Election Year | Electoral Threshold | Number of parties |
| --- | --- | --- |
| 1988 | 1% | 15 |
| 2003 | 1.5% | 13 |
| 2013 | 2% | 12 |
| 2015 | 3.25% | 10 |
| 2019 (April) | 3.25% | 11 |

(Troen 2019)

What differs between these European democracies is not so much the threshold but the electoral system used - most use mixed systems or electoral districts. The benefits of these systems, and the feasibility of their use in Israel, will be discussed later.

Solutions

Some systems can be dismissed as unfeasible, as the possibility of them being accepted by the Knesset is so low that they can almost be disregarded. For example, a first-past-the-post system is unlikely ever to get the support of the Knesset: such systems favour parties with a geographically concentrated support base, and since only the votes for the winning candidate in a constituency really count, the number of wasted votes is very high, so the smaller parties, knowing it would damage them, would never accept it. Israel also has a political culture that is not suited to a two-party system: the political views of Israelis fit into several main camps, unlike in the UK and USA where, despite a diversity of opinion within the left-wing and right-wing parties, they fit into a two-party system. Walter Bagehot (1867) wrote: "the principal characteristics of the English Constitution are inapplicable in countries where the materials for a monarchy or an aristocracy do not exist". One cannot simply export a system used in other countries and implant it somewhere else with a different political culture and expect it to work smoothly.

One potential reform that would improve the Israeli electoral system is the introduction of constituencies. Currently in Israel, there is a single, nationwide, 120-member constituency. This means that there is no MK-constituency link of the kind enjoyed by many European democracies. As all 9 million Israelis are represented by the same 120 MKs, they do not have the ability to write to their local MK about problems in their local area, since they have no local MK who is directly accountable to them. Due to this, MKs have distanced themselves from the needs of local people (Atmor 2008). Combined with the weakness of local government in Israel, where especially in cities the state has the final say on local management decisions (Elazar 1988), this can result in decisions being made at the local level that are not in the best interests of the inhabitants. Local governments in Israel are underfunded, as shown by this graph:

[Graph: local government revenue as a share of total government revenue (OECD 2021)]

Compared to the OECD average of 31.1%, local government revenue as a share of total government revenue in Israel is 15%. As a result, local government in Israel is weak and the executive plays a large role in decision-making. Israelis want more power to be held locally: a 2021 poll (Hermann 2021) showed that 67% of Israelis wanted more power to be transferred to the local authorities. The issue of local representation in Israel is significant.

The lack of local representation and direct accountability has led to diminishing trust in the political system and the electoral process, resulting in lower turnouts, less faith in elected representatives, and some regions feeling disenfranchised. Israel is a country that is divided mainly on socio-cultural lines but also on geographic ones. The needs of those living in Sderot, a Jewish town just a mile from the Gaza border that nearly went bankrupt in 2010 from having to repair damage from the Qassam rocket fire from Gaza that ravaged it during Operation Cast Lead in 2009 (Haaretz, 24 March 2010), will clearly be different from the needs of those in Tel Aviv awaiting a new light rail line, or those in Jerusalem having issues with planning permission or wanting protection from terror attacks. If these areas had a local representative who could communicate their inhabitants' concerns directly to the Knesset, it would be more likely that their needs could be met. This would significantly reduce the disenfranchisement of Israel's periphery: the rural localities outside of Israel's two main metropolitan areas of Tel Aviv and Jerusalem that have suffered from decades of underfunding (the poor funding of local government means it must spend extended periods of time negotiating with the national government for funding, with the state getting the final say (Elazar 1988)). By giving the periphery's inhabitants local MKs, they will finally have a seat at the table and a political system in which it is politically beneficial for MKs to care about their needs (Atmor 2008).

In addition to this, increasing the number of constituencies should reduce the number of parties while still keeping a system of proportional representation that is suited to Israel's political culture: if the country is divided into districts, it may become harder for small extremist parties to win many seats. A further advantage of constituencies is that they give the voter more choice over who represents them, especially if a proportional system like STV is used in a multi-member constituency. Voters have arguably never had less control over who their MKs are and what they do in the Knesset than now: party discipline has fallen over the years from its height in the 1950s, when MKs would often follow the party line and private members' bills were uncommon. Since the 1990s, the number of private members' bills has risen to a level higher than in any other western democracy (Friedberg, Fridman and Shoval, 2019). For example, during the 20th Knesset, which sat from March 2015 to December 2018, no fewer than 6,018 private members' bills were proposed (of which only 246 came into law). This shows how MKs have developed a predisposition to propose their own bills rather than following party policy. In addition, over the course of the last decade there have been more and more cases of MKs leaving their parties to join new ones or retire from politics altogether. For example, in June 2022, Mazen Ghanaim (Ra'am) and Ghaida Rinawie Zoabi (Meretz) both announced they would leave their parties before the November 2022 elections (TOI, June 2022). In 2014, Tzipi Livni's Hatnua party nearly fell apart after four of its six MKs at the time left the party in protest at Hatnua's new alliance with Labor. We can see from this that MKs act more and more independently of their parties, yet closed party lists are still used to elect them. With the 2015 Norwegian Law (Jpost, July 2015) allowing ministers to resign from the Knesset and be replaced by the next person down on the party list (allowing politicians to enter the Knesset even though they were not elected to a seat at the time of the election), it is only right that the Israeli public should be able to elect their MKs not just based on what party they represent, but also on who they are.

Furthermore, adopting a multi-constituency-based system would bring Israel into line with many successful western democracies, all with different political cultures and electoral systems. The table below shows a number of democracies around the world and how many districts their legislatures have. (District magnitude refers to the number of representatives per district, and compensatory seats refers to the number of representatives who are elected without being tied to an electoral district.)

Table 1: A Comparative Look at Legislative Districts

| Country | Size (seats) | Number of Electoral Districts | District Magnitude (M) | Upper Tier (Compensatory Seats) |
| --- | --- | --- | --- | --- |
| 1. Single-Member Districts (SMD) | | | | |
| Canada | 308 | 308 | 1 | - |
| USA | 435 | 435 | 1 | - |
| India | 543 | 543 | 1 | - |
| France | 577 | 577 | 1 | - |
| UK | 646 | 646 | 1 | - |
| 2. Multi-Member Districts | | | | |
| Chile | 120 | 60 | 2 | - |
| Ireland | 166 | 42 | 5-3 | - |
| Norway | 169 | 19 | 16-3 | - |
| Finland | 200 | 14 | 33-6 | - |
| Switzerland | 200 | 26 | 34-1 | - |
| Spain | 350 | 52 | 35-1 | - |
| Sweden | 349 | 1+29 | 34-2 | 39 |
| Denmark | 179 | 1+19 | 16-2 | 40 |
| Israel | 120 | 1 | 120 | - |
| Slovakia | 150 | 1 | 150 | - |
| Netherlands | 150 | 1 | 150 | - |
| 3. Mixed Systems | | | | |
| Japan | 500 | 1+300 | 1 | 200 |
| New Zealand | 122 | 1+70 | 1 | 52 |
| Germany | 598 | 1+299 | 1 | 299 |

(Atmor 2008)

The Netherlands and Slovakia are the only EU democracies that, like Israel, use PR with a single national constituency, although Slovakia has a higher electoral threshold (5% for each party, even if in a coalition) (Inter-Parliamentary Union 2023). Countries like the UK and USA use single-member constituencies elected using first-past-the-post; France uses single-member constituencies with a second-ballot majority runoff (where an absolute majority is required for a candidate to win a seat, and in the absence of one a second round of voting is held between the two most popular candidates); and Australia uses single-member constituencies with the Alternative Vote (which can have multiple runoff rounds until an absolute majority is reached) (Roper 2000).

While these are majoritarian systems that would not be best suited for use in Israel, the rest of Europe uses either mixed systems like the one used in Germany (where there are both single- and multi-member constituencies), or fully proportional systems using multiple multi-member constituencies, such as Finland, Denmark, Belgium and Luxembourg. We can see from this that the multiple-constituency model works in countries of many different political cultures. Therefore, a constituency model could kill two birds with one stone: it would solve the problem of the lack of accountability of legislators to the public, and at the same time would allay the fears of those who worry about the loss of a proportional system. It would be a real reform of the electoral system, unlike the attempted reforms of the 1990s, which ignored the electoral system itself and failed dramatically: the adoption of primaries and the direct election of the Prime Minister, which led to a Knesset more unstable in its composition than before their introduction (Rahat and Hazan 2005).

Moreover, increasing the number of constituencies is far from a new concept in Israel. During the pre-state era, some elections used one nationwide district, while others used multiple multi-member districts divided on ethnic or regional lines. The multi-district model with proportional representation was also considered by the Constitutional Committee of the Provisional State Council, the council that ran the fledgling state of Israel from its establishment in 1948 to the election of the first Knesset in 1949 (Rahat and Hazan 2005). While the current model is now enshrined in the Basic Laws of Israel, which need a majority of the Knesset (61 of 120 members) to be changed, proposals for multiple constituencies have been put forward several times over Israel's history, such as the April 2008 bill put forward by four Knesset members from parties across the political spectrum proposing that 60 members be elected in 60 single-seat constituencies, and the other 60 by proportional representation (Atmor 2008). Therefore, we can see that there has long been political will for this kind of electoral reform in Israel.

Public opinion

In order to make a final decision on what electoral reform should be introduced in Israel, one must consider the views of the Israeli public themselves. Any reform must be popular with the public or it risks illegitimacy, which would likely cause a participation crisis, where voters refuse to engage with the political process. This would severely undermine Israeli democracy and elections, which, despite the flaws of the current system and a fall in turnout in the 21st century, still enjoyed a 70.6% turnout in the November 2022 election (International IDEA, 2023). Polls conducted by the Israel Democracy Institute in 2021 as part of its annual Israeli Democracy Index (Hermann et al. 2021) show that Israelis rank their municipality or local authority as one of their most trusted institutions, as shown below. Interestingly, political parties find themselves at the bottom of this scale, perhaps reflecting the public's frustration with the political deadlock of recent years and the parties' unwillingness to work together.


[Graph: public trust in Israeli institutions (Hermann et al. 2021)]

When faced with different reform proposals, as shown by the graphic below, around one-half of Israelis thought the current electoral threshold was 'about right', two-thirds thought more power should be transferred from the government ministries to local government, and over one-half agreed with proposals to use an 'open ballot' in Knesset elections (where voters could choose specific candidates and influence their position on the party list, thereby increasing voter choice). A majority also supported incorporating regional representation into Knesset elections.

[Graphic: public attitudes towards electoral reform proposals (Hermann et al. 2021)]

One can therefore see that a system which favours local representation and gives voters more choice is likely to be popular with the public, and would address the current lack of local representation and of direct accountability of MKs to their constituents. A multi-constituency system is therefore likely to be well received by the public.

Limitations

However, the main issue with introducing such a reform is deciding how the multi-constituency model will function. Atmor (2008) explains that there are three main issues when it comes to deciding how the use of electoral districts will work:

1. Deciding the number of constituencies. There could be as many as one district for every seat (as in the UK) or as few as in the current Israeli system (just one district). Most countries lie in between and use multi-member constituencies, which would suit Israel best, yet deciding how many constituencies there should be is extremely difficult and it may be impossible to find a single right answer. Deciding where the boundaries are to be drawn will also be controversial and difficult: many parties will worry about gerrymandering, such as occurs in the drawing of American Congressional districts, that could harm their chances of winning an election.

2. Deciding how many representatives are returned by each district. This is hard to decide as there is no clear model that works best based on looking at other countries. The districts could all be of the same size, as in Chile, or of different sizes, as in Luxembourg, Spain and Switzerland.

3. Deciding whether there should be a mixed system that tops up the vote. The more districts the country is divided into, the less proportional the overall result becomes, which is why systems like the UK's, which use only single-member constituencies, do not yield proportional election results. A solution to this problem is a proportional mixed system that ‘tops up’ the vote by giving extra seats to parties that received a lower proportion of district seats than their share of the vote. This system is used in Sweden and Denmark; a simple illustration follows this list.
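To make the ‘top up’ mechanism concrete, here is a minimal sketch under a simplifying assumption: a party that wins fewer district seats than its national vote share would justify is brought up towards that share with list seats. The function name and figures are hypothetical, and Sweden and Denmark in practice use divisor formulas rather than this shortcut.

```python
# A minimal, illustrative sketch of proportional 'top up' (leveling) seats.
# Assumption: a party's final seat total should roughly match its national
# vote share; real mixed systems use divisor methods, so this is only a
# simplification for illustration.

def top_up_seats(national_share: float, district_seats: int, total_seats: int) -> int:
    """Extra list seats needed so a party's total roughly matches its vote share."""
    target = round(national_share * total_seats)
    return max(0, target - district_seats)

# Hypothetical figures: a party with 25% of the national vote that won only
# 24 of 120 district seats would receive 6 leveling seats (24 + 6 = 30, i.e. 25% of 120).
print(top_up_seats(0.25, 24, 120))  # -> 6
```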

Therefore, while it may be easy to argue in principle that introducing constituencies in Israel will benefit its political system, quite how this system is to be implemented is very difficult to decide. The process of working out how electoral districts should operate is bound to be fraught with challenges and disagreement, which is perhaps one of the reasons the Constitutional Committee decided not to implement it in 1948.

However, Atmor (2008) offers a straightforward solution to this problem: drawing district boundaries from the existing ward boundaries used by the Central Elections Committee. While these boundaries may not be perfect, and the wards are not equal in population size (meaning each ward would return a different number of seats), it would at least initially avoid the inevitable arguments over boundary drawing that would arise if it were done from scratch. If the population distribution across the wards changes, seats can be reapportioned, or boundaries could later be adjusted by an independent commission, allaying fears of gerrymandering. Using the 2006 electoral boundaries, the table below shows how seats could be distributed among the districts, including what the distribution would look like if constituency seats took up only a portion of the legislature (with the rest determined by a ‘top up’).

[Table omitted: possible distribution of Knesset seats among districts based on the 2006 electoral boundaries (Atmor 2008)]
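Atmor does not prescribe a formula for assigning seats to wards of unequal population, but a natural option is to apportion them in proportion to population, for instance by a largest-remainder rule. The sketch below is hypothetical: the ward names and populations are invented for illustration and do not reproduce the 2006 figures from the omitted table.

```python
# Hypothetical sketch: apportioning a fixed number of Knesset seats to wards
# in proportion to population using a largest-remainder rule. Ward names and
# populations are invented; this is not a reconstruction of Atmor's table.

def apportion(populations: dict[str, int], total_seats: int) -> dict[str, int]:
    total_pop = sum(populations.values())
    quotas = {w: p * total_seats / total_pop for w, p in populations.items()}
    seats = {w: int(q) for w, q in quotas.items()}  # whole-number parts first
    leftover = total_seats - sum(seats.values())
    # give any remaining seats to the wards with the largest fractional remainders
    for w in sorted(quotas, key=lambda w: quotas[w] - seats[w], reverse=True)[:leftover]:
        seats[w] += 1
    return seats

print(apportion({"Ward A": 850_000, "Ward B": 620_000, "Ward C": 330_000}, 18))
# -> {'Ward A': 9, 'Ward B': 6, 'Ward C': 3}
```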

Conclusion

With the use of existing boundaries solving issues 1 (number of constituencies) and 2 (representatives per district), what remains to be decided is whether a mixed system should be used, or a simple multi-member constituency system without a ‘top up’ vote. Using constituencies of around 10 members each, without an extra top up from party lists (as shown in the third column of the table above), seems the most effective solution, for the following reasons:

1. The large size of the constituencies, each with roughly 10 members, ensures that results can still be broadly proportional, as each constituency allows multiple parties to win seats. This strikes a balance between the need to provide regional representation and the need to allow a diverse range of parties to enter the Knesset. It also removes the need for a potentially confusing ‘top up’ system.

2. The introduction of constituencies creates regional representation for the first time, making MKs directly accountable to the constituents of their ward. The stakes for MKs are higher: those who do not listen to the needs of their constituents risk losing their seats at subsequent elections, since they depend on their constituents' support.

3. Voters have more choice, as they can decide which candidates within a party they like best. If constituents have as many votes as there are seats up for election, they can choose several candidates they support. This also ends party control over which of its candidates is most likely to win a seat, leaving that to the electorate to decide.

4. Representation of women and minorities will improve. Constituencies with large Arab populations are all but guaranteed to have Arab parties representing them, and female candidates are more likely to win seats when voters can choose their candidate.

5. The system effectively raises the electoral threshold. In a district of 10 MKs, a party will gain representation in the Knesset if it receives around 10% of the vote in the constituency (roughly the quota of one seat's worth of votes, as the short calculation after this list shows). This reduces the number of parties and makes it harder for smaller parties to become kingmakers, as the larger parties will have more seats. This should create more stable governments.
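The 10% figure is simply the district vote divided by the number of seats; the snippet below makes that arithmetic explicit. It assumes a pure quota-based reading, so in practice the effective bar can sit slightly lower, since remainder seats sometimes go to parties just short of the quota.

```python
# Approximate effective threshold in a multi-member district: the Hare quota,
# i.e. the share of the district vote that guarantees one seat. A party just
# below this can still pick up a remainder seat, so treat it as a rough figure.

def hare_quota_share(seats_in_district: int) -> float:
    return 1 / seats_in_district

for magnitude in (5, 10, 20):
    print(f"{magnitude} seats -> ~{hare_quota_share(magnitude):.0%} of the district vote")
# 5 seats -> ~20%, 10 seats -> ~10%, 20 seats -> ~5%
```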

While it must be conceded that any system will have flaws, this one would address the two main problems of the current electoral system: instability and the lack of local accountability.

References

Ha'am, A. (1897) "The Jewish State and Jewish Problem" (Ahad Ha'am). Available at: https://www.jewishvirtuallibrary.org/quot-the-jewish-state-and-jewish-problem-quot-ahadha-am (Accessed: February 9, 2023).

Harris, M. and Doron, G. (1999) “Assessing the electoral reform of 1992 and its impact on the elections of 1996 and 1999,” Israel Studies, 4(2), pp. 16–39. Available at: https://doi.org/10.2979/isr.1999.4.2.16

Diskin, H. and Diskin, A. (1995) “The politics of electoral reform in Israel,” International Political Science Review, 16(1), pp. 31–45. Available at: https://doi.org/10.1177/019251219501600103

Troen, J. (2019) The National Electoral Threshold: a comparative review across ... – Knesset. Available at: https://main.knesset.gov.il/EN/activity/mmm/The%20NationalElectoralThreshold.pdf (Accessed: February 9, 2023).

Bagehot, W. (1867) The English Constitution.

Atmor, N. (2008) District elections in Israel: Pro and con, The Israel Democracy Institute. Available at: https://en.idi.org.il/articles/3304 (Accessed: March 30, 2023).

Friedberg, C. et al. (2019) 6,644 bills, 5,756 queries: Was 20th Knesset a tale of quantity over quality?, The Times of Israel. Available at: https://www.timesofisrael.com/6644-bills5756-queries-was-20th-knesset-a-tale-of-quantity-over-quality/ (Accessed: March 30, 2023).

Elazar, D.J. (1988) State-local relations in Israel. Available at: https://www.jcpa.org/dje/articles2/statelocal.htm (Accessed: March 30, 2023).

Rahat, G. and Hazan, R.Y. (2005) Chapter 16, in The Politics of Electoral Systems: A Handbook. Oxford: Oxford University Press, pp. 334–335.


Hoffman, G. and Sharon, J. (2015) Knesset passes controversial 'Norwegian law', The Jerusalem Post | JPost.com. Available at: https://www.jpost.com/Israel-News/Politics-And-Diplomacy/Knesset-passescontroversial-Norwegian-Law-410563 (Accessed: March 30, 2023).

Roper, S.D. (2000) Electoral Systems in Europe: An overview. Available at: http://www.stevendroper.com/elect_system.html (Accessed: March 30, 2023).

TOI Staff et al. (2022) Rebel Ra'am, Meretz MKs bow out of their parties after turbulent stints, The Times of Israel. Available at: https://www.timesofisrael.com/rebel-raam-meretz-mks-bow-out-ofparties-after-turbulent-stints/ (Accessed: March 30, 2023).

Inter-Parliamentary Union (2023) IPU Parline database: Slovakia (Národná Rada), electoral system. Available at: https://data.ipu.org/content/slovakia?chamber_id=13526 (Accessed: March 30, 2023).

Yagna, Y. (2010) Rocket-battered Sderot Faces bankruptcy, Haaretz.com. Haaretz. Available at: https://www.haaretz.com/2010-03-24/ty-article/rocket-battered-sderot-facesbankruptcy/0000017f-dc5c-db22-a17f-fcfd15790000 (Accessed: March 30, 2023).

OECD (2021) A review of Local Government Finance in Israel, OECD iLibrary. Available at: https://www.oecd-ilibrary.org/urban-rural-and-regional-development/a-review-of-localgovernment-finance-in-israel_a5bc4d25-en (Accessed: 12 June 2023).

International Institute for Democracy and Electoral Assistance (2023) Israel, International IDEA. Available at: https://www.idea.int/data-tools/country-view/144/40 (Accessed: 12 June 2023).

Hermann, T. et al. (2021) The Israeli Democracy Index 2021, The Israel Democracy Institute. Available at: https://en.idi.org.il/media/18096/the-israeli-democracy-index-2021.pdf (Accessed: 12 June 2023).


Aryan Janjale

PHILOSOPHY

Aryan Janjale explored ‘Philosophical responses to Cartesian scepticism’ as his ERP focus and evaluated the success of contemporary versions of Contextualism, Externalism and Pragmatism. This project delves into the unwelcome epistemological consequences of scepticism, the importance of overcoming it and how to do so. Aryan Janjale is studying Maths, Further Maths, Economics and Philosophy and hopes to pursue PPE at university.

Is Contextualism the best philosophical response to the problem of Cartesian scepticism?

Section I: Introduction

Scepticism has been a central topic in epistemology, the branch of philosophy concerned with knowledge and belief. It is more a potent hindrance than a constructive school of thought, dating back to Ancient Greece with sceptics like Pyrrho of Elis and Sextus Empiricus. They developed the idea that knowledge is uncertain and that we should suspend judgement on all beliefs, including those that appear well-established. This model of scepticism has been adapted to the modern era, in which only beliefs that have been rigorously tested against the scientific method can be accepted. The modern tradition of sceptical inquiry was established above all by Rene Descartes, whose ‘Method of Doubt’ introduced what is now called Cartesian Scepticism.1 The core problem of this argument is that it makes us wonder whether knowledge is even possible, foiling the entire branch of epistemology. A.J. Ayer writes that scepticism is such a large problem that, in the modern era of philosophy, refuting a ‘sceptic is the aim of epistemological theorising.’2 Furthermore, although the Cartesian paradigm is treated here as the primary source of scepticism, there are multiple other varieties.

Rene Descartes, like us, once believed things that were false (Santa Claus, or the tooth fairy). This led him to question whether any of his existing beliefs, rather than containing elements of truth, might be completely false without his realising it. So the ‘logical’ step he took was to disbelieve everything, at least temporarily. In justification of this wild exercise, Descartes offered an analogy. Imagine one has a basket of apples and is concerned that some of them are rotten. Since the rot can spread and ruin the fresh apples, the only way to ensure there is no rot in the basket is to tip out all the fruit and inspect each one. From this hypothetical yet very possible scenario, we can infer that the only way to “reach certainty” about any knowledge is to examine every belief and accept only those about which there could be no doubt. Since Descartes shows that empirical knowledge (and sense experience generally) can mislead us, we cannot accept all beliefs ‘proven’ by the scientific method. Descartes realised he had cause to doubt everything, everything except the fact that he was doubting. He could be sure that he, himself, was doubting and therefore must exist as a thinking thing. In Meditations on First Philosophy, he declared the famous statement “Cogito ergo sum”, I think therefore I am. This was the first thing Descartes was certain of; certainty about God followed later.

This essay will focus only on Descartes’ take on scepticism and how (if at all) it can be successfully countered. Multiple ideologies attempt to resolve this issue, but before evaluating their approaches, I have set out certain properties and criteria that an ideology (X) must satisfy to be accepted:

(1) There must be leeway for the existence of an R level of properties.

(2) Cartesian Scepticism and X must be incompatible (they cannot both be accepted without contradiction)

(3) X accounts for the existence of an external world.

(4) X will achieve an accurate theory of knowledge.

1 Descartes, Rene, 1641, Meditations on First Philosophy

2 Ayer, AJ, 1956, The Problems of Knowledge


Section II: The Responses

I will define each clause in depth throughout this study, but first we need to introduce the most widely favoured attempts: Contextualism, Pragmatism and Externalism.

(A) Contextualism is a philosophical view that asserts that the interpretation and evaluation of knowledge claims are context-dependent. This means that the justification for a belief, and the standards of evidence required to support it, can vary depending on the context in which the belief is being formed or discussed. I see contextualism as a reconciliation between common sense and thorough philosophical inquiry. The central idea is to question the relationship between 'knowing' something and the context in which it is known. A proposition, for example, could be influenced by conversational context, such as implied intentions or presuppositions. We can determine whether beliefs count as knowledge or can be disregarded based on the context. Contextualism, in some ways, best explains our everyday epistemic practice of accepting that we have knowledge in most cases while questioning it in others. As a result, it is a proposed solution to scepticism, because the context provided by a sceptic will lead us to believe that we lack knowledge when, in fact, we do not. Many contextualists, including David Lewis, a leading proponent of the theory, have made this argument. Ernest Sosa writes that he 'accepts key elements' of the theory but has some detailed 'reservations' about its ability to refute scepticism.3 Sosa rejects Lewis' 1990s defence, questioning whether contextualism is even epistemology. He refers to the 'contextualist fallacy': the erroneous inference of an answer from implications in the formulation, vocabulary or punctuation of a question. Even if the vocabulary is 'ambiguous', the application of contextualism tends to favour a literature review of statements over epistemological scrutiny. Sosa believes that contextualism fails as a response to scepticism, but many defenders are eager to offer rebuttals.

(B) Pragmatism is a philosophical school of thought that holds that a proposition is true if the consequences of accepting it are practical and satisfying. Pragmatism, which originated in the United States, can be seen as anti-Cartesian to some extent. If we accept Descartes' position, it is hardly 'practical' to believe that the only truly existing things are oneself and God. In the eyes of pragmatists, Descartes' "quest for certainty" is quixotic. To combat scepticism, they argue that beginning epistemology from a world of doubt is impractical: scepticism is false because its fundamental idea is doubt, and pragmatists hold that an impractical consequence is a false one. This argument, however, is not without flaws. One common counter is that pragmatism is counter-intuitive: the movement that encourages practicality is itself not practical. This stems from the distinction between Ideal and Real-World pragmatism. Ideal pragmatism is complete freedom from ideological constraints, guided only by what is ‘practical.’ But Ideal pragmatism is impossible, as no decision can ever be made without some moral undertaking affiliated with it. Therefore, we are left with the Real-World process, in which the value judgements of individuals, driven by cognitive bias rather than empirical evidence, are what count. As Michael Williams states, the select few with ‘vested interests’ may purposefully skew evidence in their favour.4 The same could be done with knowledge, to create an artificial spin on ‘real-world facts.’

(C) Barry Stroud considers G.E. Moore's 'Externalism' as a response to scepticism.5 Justification, or warrant, is central to epistemic externalism. Moore seeks to demonstrate that there are indeed 'external things', thus defeating scepticism, which doubts that such 'things' even exist. In his thesis, Michael Bergmann mentions a few 'standard examples' of early process externalism,

3 Sosa, Ernest, 2000, Scepticism and Contextualism

4 Williams, Michael, 2011, Problems of Knowledge

5 Stroud, Barry, 1984, The significance of philosophical scepticism

such as 'reliabilism, certain virtue theories, tracking accounts, and proper function accounts'.6 Moore's 'rigorous proof' begins with an illustration. He claims there are at least three misprints on a specific page, a claim which can be 'conclusively settled' in the affirmative by locating three examples of misprints. From this, Moore infers that the best proof we could possibly have of something’s existence is to find it through sensory perception. The same could be said of material objects, which clarifies the relation between the epistemological problem and our ordinary procedures for, and claims to, ‘knowing’ things in everyday life. There are multiple counterarguments to Moore’s proof and to externalism as a response to Cartesian scepticism. As mentioned before, externalism centres on justification: if and only if a belief satisfies the externalist conditions is that belief justified. However, a sceptic would argue that this is ‘philosophically unsatisfying’ and misinterprets their arguments. The crux of the sceptic’s argument is whether the antecedents of such beliefs are true, not merely justified.

Section III: Evaluation

Aside from these three, there have been many other approaches to the problem of Cartesian scepticism. However, I will rule them out, as they instantly fail the criteria I set out above. An example is idealism, which argues that only minds (and God) exist. This is in complete contradiction with criterion (3), as it denies the existence of an external world. Not only is idealism rarely accepted in contemporary philosophy, but it also violates the verifiability principle. Furthermore, I find that idealism fails criterion (1). So it, and the many other futile attempts not listed, should be rejected as possible responses to the Cartesian paradigm. In clause (1), I refer to an R level of properties as a standard of measure for the existence of entities. Unlike idealism (which rejects the existence of all physical objects), some models of perception and responses to scepticism stand indifferent: some account for the existence of certain entities while others disagree and advocate for different ones. This ambiguity has no clear standard of measure, so, as a logical device, I can create an inequality: if X > R, then X meets criterion (1), where X is the response in question. Using R, I have created a definitive boundary for an open-ended, eternal question like scepticism, and can now give a true answer to the question: do the three arguments above succeed or fail in tackling Descartes’ argument?

I will first evaluate Contextualism, supposedly the most revered response to Descartes in the modern age. David Lewis, as an advocate of the theory, would claim that Contextualism fulfils each criterion. Yet the response fails to meet clauses (2) and (4), owing to its ambiguity about the impact it would have on knowledge when put into practice. First, though, contextualism notes the existence of entities greater than R. The theory suggests that truth and meaning are always contingent on context, whereas Cartesian scepticism is a universal rejection of all entities (aside from God and oneself). The context of the belief in physical, material objects is grounded in an amalgamation of common sense and a general theory of perception. Contextualism accounts for the existence of a multitude of entities, far greater than R. The same reasoning applies to clause (3): if so many entities exist, it contextually follows that an external world in which these entities lie also exists. Meeting these two clauses shows contextualism to be a less radical, and more plausible, response to scepticism; yet it must be rejected because it violates the remaining criteria. If both Contextualism and Cartesian scepticism were enforced together, no contradiction in conception would arise, which means clause (2) is not met. The error of uncertainty is a clear example of this compatibility problem. Both contextualists and sceptics also acknowledge the importance of epistemic humility and the need to remain open to the possibility of error. Contextualists argue that we should be aware of the limitations of our own knowledge and be open to the possibility of

6 Bergmann, Michael, 2008, Externalist responses to scepticism

revising our beliefs in the light of new information or changing contexts. This gap in our knowledge can be exacerbated to the point where it lies in agreement with Descartes’ extreme understanding of knowledge. If errors of uncertainty are caused by new and changing contexts, there could be a world where both contextualism and scepticism are enforced without contradiction, as the same conclusion is reached: we cannot be certain we have knowledge in specific scenarios and contexts. Clause (4) points to a major criticism of contextualism: it can lead to a more relativistic understanding of knowledge, in which what counts as true or false is always dependent on the specific context in which it is being evaluated. While some see this as a weakness of contextualism, others argue that it allows a more nuanced and sophisticated understanding of the complexities of human understanding and interpretation. If truth and meaning are always dependent on context, and there are no absolute or context-independent criteria for evaluating them, then it becomes difficult to establish shared standards or criteria for evaluating knowledge claims. This can lead to a situation in which different individuals or groups hold incompatible or even contradictory beliefs, with no way to reconcile or adjudicate between them. It can also lead to a sense of epistemic uncertainty or relativism, in which there is no way to establish objective or reliable knowledge claims about the world. Another problem with a more relativistic understanding of knowledge is that it can make it difficult to evaluate or critique beliefs or practices that may be harmful or unjust. If all beliefs and practices are viewed as equally valid or contingent on context, then it becomes difficult to challenge those that may be based on misinformation, prejudice, or oppression. Therefore, contextualism fails the criteria I set out and simply cannot be accepted as a viable route to rejecting scepticism.

Pragmatism is a philosophical approach that emphasizes the practical consequences of beliefs and actions as the ultimate criteria for evaluating their truth or value. It suggests that we should focus on what works and what leads to successful outcomes, rather than on abstract or theoretical considerations. It fails clauses (1) and (2): if we adopt a pragmatist perspective, there are beliefs or assumptions we must question, which leads to speculation about entities below R. Pragmatism challenges the notion that there is an objective, context-independent reality that we can access through our beliefs or knowledge claims; instead, it suggests that truth is always contingent on the goals or purposes we are pursuing. Pragmatism also challenges the idea that there are universal moral principles or values that apply in all contexts or situations, suggesting instead that moral judgments are always contingent on the particular social and historical contexts in which they are made. It likewise challenges the idea of fixed or essential identities: pragmatism denies that individuals or groups have fixed or essential identities that can be defined or categorized in a universal or objective manner, suggesting instead that identity is always contingent on the particular social and historical contexts in which it is constructed. One could argue that these are fundamental components of our individuality. So pragmatism not only fails to refute scepticism, it also, in a sense, violates our very humanity. Clause (2) highlights overlaps between pragmatism and scepticism. One area is their emphasis on the importance of evidence and experience in shaping our beliefs and knowledge claims. Pragmatists argue that we should base our beliefs on what works or what is supported by empirical evidence, while sceptics emphasize the need to be cautious and circumspect in accepting claims to knowledge, and to be aware of the potential for error or deception in our sources of information. Both pragmatists and sceptics also acknowledge the limitations of human knowledge and the need for epistemic humility. Pragmatists argue that our beliefs should be subject to ongoing testing and revision in the light of new evidence or changing circumstances, while sceptics emphasize the need to remain open to the possibility of error or uncertainty in our claims to knowledge. Interestingly, pragmatists acknowledge the importance of context in shaping our beliefs and knowledge claims: they argue that meaning and interpretation are always contingent on the specific context in which they are evaluated, while sceptics argue that our beliefs and knowledge claims are always subject to various contextual and

situational factors that can influence their reliability and accuracy. It is rather fitting that there is a shared flaw between two responses that fail as theories.

Adopting a Moorean externalist perspective leads to a similar failure to refute scepticism. Unlike the previous attempts, externalism violates every single clause, including clause (3). Moore's externalism examines the idea that the external world is fixed or stable in its properties and characteristics, yet faults that belief; instead, it suggests that our knowledge of the external world is always partial and contingent on the specific perceptual experiences and contexts in which it is constructed. Moore's externalism holds that there are certain basic facts about the external world that we can know with certainty through our perceptual experiences. However, this challenges the idea that knowledge can be infallible or completely certain; instead, it suggests that knowledge is always subject to some degree of uncertainty and revision based on new evidence or experiences. Having ‘partial’ knowledge of an external world is implausible enough without it also ultimately agreeing with scepticism about uncertainty, and amounting to a theory that lies below the required R level of properties. One would think a response to scepticism would eliminate all possibility that our beliefs are not concrete (rather than reinforce it), yet all three of these theories have failed to do so. This only raises more questions than needed, all of which I will attempt to answer now. For example: can there be a successful response to Cartesian scepticism and, if so, what is it?

Section IV: An Original Approach

Over the course of this study, I have discovered elements of these three revered responses that can be utilized in my pursuit of knowledge. The justification of beliefs being dependent on context, practicality or externality all leads to a single measure of success: plausibility. Instead of modelling knowledge on specific contexts that fall foul of scepticism about sensory perception, we can formulate a theory that focuses on the most plausible belief. A plausible belief is one that encompasses common sense, practical consequences and the elimination of falsehood. Using this, here is a set of premises that fulfil the four-fold criteria.

(1) The belief that is generally assented to is true

(2) If unclear, the belief that contains the fewest entities should be accepted

(3) If unclear, the belief held under common sense should be accepted

(4) If still unclear, consider the practicality of the consequences of the belief held.

(5) The belief that follows the pragmatic model is correct

My epistemological principle draws upon contextualist, pragmatist and externalist beliefs, as each (while failing in its own regard) has some components on the right track. This model also takes inspiration from Ockham’s Razor, particularly premise (2): ‘entities should not be multiplied beyond necessity’ prioritizes simplicity. Using this principle, more plausible origins of physical objects – like the Big Bang – will be accepted over an evil genius hypothesis. This is because a high-density explosion of mass is more practical, context-dependent, simple and widely assented to than a perfectly evil, omnipotent being set on deceiving us for no apparent reason other than fun.

While my epistemological model is strong and relies only on the strongest principles, some premises can still be devalued by scepticism. Take, for example, Samuel Arbesman's ‘half-life of facts’: this metaphorical concept notes that a high proportion of the beliefs we hold true today will be rejected as technology and research advance. Continuing advances in scientific methodology mean that the beliefs we currently uphold as truths may be false in 50


years. Before Copernicus, the common belief was that the Sun orbits the Earth.7 Using my premises, this belief would have been true in the 16th century, but today – as truth heavily depends on the context and general assent of the belief – the geocentric model would be considered a falsehood.

Another problematic issue with the definition is the use of Ockham’s Razor. As an English friar, William of Ockham claimed that the simplest and best explanation for the existence of everything is God. It is widely perceived that the Big Bang and the God of classical theism are incompatible, which only invites more scepticism (particularly about the involvement of God). There is no certainty to Ockham’s Razor and the use of plausibility, only justification on the grounds of context and common sense. Yet a Cartesian sceptic’s main criticism would be that certainty is impossible to achieve. The primary devotion to God rests on faith and hope rather than empirical evidence. So, while my model fulfils each clause, it also allows the existence of God and, inherently, scepticism about God. Throughout this study, certainty has proved almost impossible to achieve. Having researched responses to scepticism, it is indubitably difficult to surpass the universal nature of doubt. For now, there will always be holes for sceptics to exploit, no matter how absurd their claims may be. However, we can use Arbesman's theory in a positive way, highlighting that in 50 years' time we could have made philosophical progress as well, and finally achieve some form of certainty.

Section V: Conclusion

To conclude, my search for epistemological certainty has found it to be non-existent. Yet even without a solidified defence against sceptics, they can always be resisted by appeal to context, practicality and externality. While the wondrous works of Descartes may have created the unwelcome condition of Cartesian scepticism, it is a position no one seriously holds today. Therefore, contextualism fails to defeat the problem of Cartesian scepticism, but elements of its procedure will be noted and used in future attempts.

Bibliography:

Williams, Michael (2011) Problems of Knowledge

Stroud, Barry (1984) The significance of philosophical scepticism

Sosa, Ernest (2002) Scepticism and Contextualism

Bergmann, Michael (2008): Externalist responses to scepticism

Pryor, James (2004) What’s Wrong with Moore’s Argument?

Sosa, Ernest (1994) Philosophical Scepticism and Epistemic Circularity

Vogel, Jonathan (1990) Cartesian scepticism and Inference to the Best Explanation

Kornblith, Hilary (2004) Does Reliabilism Make Knowledge Merely Conditional?

Greco, John (1999) Externalism and scepticism

Chisholm, Richard (1982) Externalism and scepticism

7 Copernicus, Nicolaus, 1543, On the Revolutions of the Celestial Spheres
