Innovative Writers
Verity Powell
Emily Feeke
Nina Bucekova
Sarah Mackel
Sarah Kurbanov
Adam Brittain
Violet Melcher
Zara Ward
Shiksha Guru
Milan Wood
Diya Rajesh
Revathi Ramachandran
Mia Cammarota
Ulyssa Fung
Thomas Burton
Ideja Bajra
Josie Sequeira-Shuker
Editors
Laila Deen
Tom Burton
Ideja Bajra
Toby Lawson
Sushmhitah Sandanatavan
Committee
Co-founder Thomas Burton
Co-founder Ideja Bajra
Natural Science Josie Sequeira-Shuker
Technology Milan Wood
Engineering Zara Ward
Medicine Sarah Mackel
Editing & Design Laila Deen
Logistics Melissa Ieger Gaeski
Head Melissa Ieger Gaeski
Treasurer Côme Naegelen
Events Kimberly Pederson
Outreach Molly Keane
Social Media Ruby Sloan
shortly evolved into the creation of STATIC (St. Technological and Innovation Club) and our Scientific Innovation Review.
Our Review outlines recent and significant innovative technologies across the fields of Natural Science, Technology, Engineering and Medicine. Our innovative writers have researched their report titles in great depth, using various sources, scientific journals and databases. The STATIC committee has worked consistently throughout the semester to guide our teams in researching their titles and formulating their reports.
STATIC is a growing initiative and will continue to host inspiring and influential collaboration and inter-club events. We aspire to continue establishing a networking system between like-minded, accomplished and motivated individuals. STATIC’s ethos can be summarised in a quotation from a TED talk that our head of Natural Sciences recently gave on the ‘synergy between successful women’.
“The interactions between different people with different strengths combine to create new ideas. These new ideas combine to create unique and life-changing solutions to climate change, the energy crisis and future pandemics” ~ Josie Sequeira-Shuker.
This review is a product of the synergy across the committee and our entire team of innovative writers, collaborating to outline the innovations that have contributed towards the betterment of society.
Thomas Burton and Ideja Bajra.
Deep learning and web accessibility: a commentary on how deep learning could improve accessibility online, Milan Wood
The importance of understanding protein folding in relation to computational complexity, Verity Powell
Machine Learning for Causal Inference: Bridging the Gap between Prediction and Causality, Nina Bucekova
Initial Explorations: A computational approach to 3x3 magic squares of squares, Verity Powell
Reviewed and edited by S. Sandanatavan

ABSTRACT: The aim of this report is to provide insight into the potential of deep learning as an ingenious solution to the web accessibility challenge, and to illuminate its opportunities and implications for all users. By harnessing the power of deep learning, we could strive towards a digital landscape where everyone, regardless of their abilities, can reap the benefits of the vast resources available on the web. This exploration investigates the diverse avenues through which deep learning could improve the accessibility of webpages for viewers with learning difficulties, disabilities or visual impairments: enhancing image, video and text recognition, facilitating natural language understanding, and personalising the user experience through predictive modelling. The report begins by clarifying the key terminology used throughout the exploration, before diving into the avenues alluded to above. Finally, we conclude on the feasibility of using deep learning for web accessibility.
The rapid expansion of the digital age presents a myriad of opportunities for worldwide communication, education and entertainment, but it also introduces significant challenges in terms of ease of access. Deep learning is a branch of machine learning that uses complex neural networks to identify patterns within large, unstructured datasets. Recent advancements in the capabilities of artificial intelligence (AI) have given deep learning the potential to significantly improve inclusivity in web accessibility. The ever-evolving nature of web content demands innovative approaches that go beyond established standards – deep learning could be the key to a seamless online experience (Dean, 2022).
To understand deep learning, we must start at the top of the hierarchy with AI. Broadly speaking, AI is the simulation of human intelligence processes by machines with the ability to learn (i.e. acquire information and rules for using that information), reason (use those rules to reach approximate or definite conclusions), self-correct, and perceive and interpret the surrounding environment (Scharre et al., 2018).
Machine learning (ML) is a subset of AI involving the development and application of algorithms and statistical models that enable computer systems to automatically learn and improve from experience without being directly programmed. These algorithms are designed to analyse complex data and make predictions and decisions based on that data – the fundamental concept underlying machine learning is to allow systems to identify relationships between data in a progressive manner, similar to a human gathering knowledge.
Deep learning is a subset of machine learning with a design inspired by the structure and function of the neural networks in the human brain. Deep learning models are composed of multiple layers of artificial neural networks, each providing a different interpretation of the data fed to it. The layers are hierarchical, with each successive layer using the output from the previous layer as its input.
These models are trained on many pre-designed example datasets to trigger automated learning – the more training the model receives, the better it becomes at identifying patterns in both structured and unstructured data. Signals are passed from neuron to neuron, initially generated by simple yes-or-no choices, where signals with a positive value continue through the network and signals with a negative value are inhibitive (Hayes, 2014). This is why deep learning is well suited to fields such as image and speech recognition and natural language processing.
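The layered, feed-forward flow described above can be sketched in plain Java. This is a toy illustration of the idea only (not a DL4j model): each neuron takes a weighted sum of the previous layer’s outputs, and a simple gate lets positive signals continue while inhibiting negative ones. The weights and inputs are arbitrary values chosen for the example.

```java
public class TinyNetwork {
    // Gate on the signal: positive values pass through, negative values are inhibited
    static double activate(double signal) {
        return Math.max(0.0, signal);
    }

    // One layer: each output neuron is a gated weighted sum of the layer's inputs
    static double[] layer(double[] inputs, double[][] weights) {
        double[] outputs = new double[weights.length];
        for (int n = 0; n < weights.length; n++) {
            double sum = 0.0;
            for (int i = 0; i < inputs.length; i++) {
                sum += weights[n][i] * inputs[i];
            }
            outputs[n] = activate(sum);
        }
        return outputs;
    }

    public static void main(String[] args) {
        double[] input = {1.0, 2.0};
        // hidden layer: two neurons with illustrative weights
        double[][] hiddenWeights = {{0.5, 0.25}, {-1.0, 0.75}};
        // output layer: one neuron reading the hidden layer's outputs
        double[][] outputWeights = {{1.0, 1.0}};
        double[] hidden = layer(input, hiddenWeights);   // each layer feeds the next
        double[] output = layer(hidden, outputWeights);
        System.out.println(output[0]); // prints 1.5
    }
}
```

Training, which the text describes, amounts to repeatedly adjusting the weight matrices so the final output moves closer to the labelled answer.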
The key differences between ML and deep learning are: representation of data; architecture and model complexity; data requirements; training and computational resources; and interpretability (Janiesch et al., 2021).
Traditional ML requires data features to be handcrafted by domain experts before the data can be interpreted, whereas deep learning extracts information directly from raw data. This gives deep learning models the ability to automatically learn hierarchical representations of data, making them more effective at capturing complex patterns. Leading on from this, deep neural networks consist of multiple layers of interconnected artificial neurons that can contain millions, or even billions, of parameters, making deep learning more capable than comparatively simplistic machine learning architectures at understanding highly intricate and abstract representations of data.
The inherently less complex nature of ML means ML models need minimal data preparation and computational resources to work effectively. Limited labelled data and small datasets are sufficient for ML models to operate, in contrast to the data-hungry temperament of deep learning, which requires large amounts of descriptive data to achieve optimal performance.
Consequently, ML models can be trained on standard hardware as they are generally less computationally intensive than deep learning models, which are expensive, time-consuming and often require high-performance processing units.
It can be difficult to understand the predictions drawn by deep learning models due to the high complexity of deep neural networks, which makes it hard to trace the decision-making process back to specific milestones. ML models are more easily understood and often more interpretable than their deep learning counterparts.
According to Tim Berners-Lee (Berners-Lee, 2023), web accessibility is the degree to which the Internet and its tasks are made available to all types of users, whatever their requirements, locations, languages, or physical and mental aptitudes. Web accessibility is the inclusive practice of allowing all users to have equal access to information and functionality on web pages by removing barriers that hinder interaction with, or access to, sites by people with disabilities. There are several standards and guidelines to ensure diverse webpage interaction – these include the Web Content Accessibility Guidelines (WCAG), developed around the four principles of perceivable, operable, understandable and robust, and Accessible Rich Internet Applications (ARIA), a set of special attributes optional for HTML, JavaScript and similar technologies (Simeone, 2007). This report will focus on client-side applications of deep learning to assist digital environments in image and video recognition, speech synthesis, natural language processing (NLP), predictive text and personalisation.
The discussion portion of this report is divided into four topic sections: Image and Video Recognition, Speech Recognition and Synthesis, Predictive Text, and Personalisation. Each section will highlight current issues within that area of web accessibility and show how applying deep learning techniques could provide a solution.
Images and videos are required to have alternative text (alt text) for audible descriptions to help users who struggle visually. This text is a brief description of the information represented by the image or video and is crucial for individuals with visual impairments or those who rely on screen readers to interpret visual elements on a webpage. Alt text is handwritten by the website developer in text fields that add an alt attribute to the HTML tag used to display the webpage image – it can be automatically generated for video descriptions once it has been manually inputted, but it is often wildly inaccurate (Shrestha, 2022). This is a time-consuming process which is often overlooked by content creators.
Deep learning techniques offer a promising solution for automatically generating accurate and descriptive alt text by training models, on large datasets of labelled images and videos, to recognise the objects, scenes and actions depicted in visual content (Waldrop, 2019). For example, an initial training dataset of labelled images could contain common objects, such as “dog” or “car”, to introduce the model to a generalised ‘body’ of familiar concepts. Further details such as colour and size could be added to future datasets once the model is able to skilfully analyse and recognise basic image concepts.
Automatically generating such descriptions would reduce the burden on human content creators and produce a real-time narrative of visuals that will change how the viewer interprets the webpage content.
Speech recognition allows a user to interact with a web page through voice commands. The transcription and interpretation of spoken language for individuals with physical or speech impairments, or those who simply prefer voice input, makes websites easily navigable without having to use alternative hardware like a keyboard and mouse.
Deep learning models can acquire knowledge of the nuances and patterns of spoken language through iterative training on large speech datasets to enable accurate recognition of speech. These datasets could range across attributes such as accent, age and language for the broadest coverage of natural language. The models would convert spoken word into text or commands to manoeuvre the website, eliminating the need for manual typing and offering a more natural means of interaction. This benefits all users, as they can still access information on the webpage when multi-tasking or when visual attention is limited. It also promotes inclusivity by accommodating users with varying levels of literacy or language proficiency.
Speech synthesis (or Text-To-Speech (TTS)) converts written text into spoken word using deep learning-based techniques. TTS models learn intonations and linguistic nuances of human speech to produce high-quality synthetic voices that closely resemble natural speech. The natural and intelligible speech generated benefits users with visual impairments or learning disabilities.
A focus on refining deep learning-based TTS models could help overcome challenges such as maintaining prosody, inflection and personalisation, tailoring the voice output to the user’s preferences to ensure they receive a high-quality auditory experience. Moreover, deep learning techniques could be used to develop more compact and efficient TTS models, allowing speech synthesis to be integrated directly into websites, easing reliance on external services and reducing latency. These improvements will make webpage information accessible to a wider user base and are, overall, more inclusive.
Predictive text offers valuable assistance to users as they type by making suggestions or automating responses – a service that is particularly valuable to individuals with cognitive impairments or motor disabilities who face challenges with typing. The system uses algorithms to analyse the context of the input text, including factors such as the words already entered and syntax, to generate relevant and likely predictions (Makati, 2022). Probability distributions are then applied by a language model, powered by machine learning algorithms, to assign a likelihood to each potential suggestion. Some text systems can continuously adapt based on user interaction, learning from user selections to generate suggestions that better align with individual preferences.
The key areas in which applying deep learning could improve predictive text are the accuracy of predictions and personalisation for individual writing styles and preferences. Extensive training of a deep learning-based predictive text language model on diverse datasets would better capture the intricacies of language and result in more accurate predictions. The outcome of an improved predictive text system would be enhanced typing efficiency and overall user experience. The integration of multi-modal input (combining text with audio or visual cues) will further improve the accuracy and richness of predictive output; some deep learning models, such as successors to OpenAI’s GPT-3, are starting to incorporate this feature. Incorporating user-specific data, such as typing patterns, preferred phrases, or commonly used vocabulary, into a deep learning-based predictive text language model will adapt the suggestions to the individual, and the model’s ability to continuously learn ensures it remains up-to-date and adapts to new linguistic trends. These personalised models will align better with individual writing styles.
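The statistical core of predictive text can be illustrated with a toy bigram frequency model (our sketch, far simpler than the neural approaches discussed): it counts which word most often follows the previous one and suggests that. A deep learning-based system replaces this frequency table with a learned language model, but the interface is the same.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class BigramPredictor {
    // For each word, how often each other word follows it in the training text
    private final Map<String, Map<String, Integer>> counts = new HashMap<>();

    // Learn word-pair frequencies from a training corpus
    public void train(String text) {
        String[] words = text.toLowerCase().split("\\s+");
        for (int i = 0; i + 1 < words.length; i++) {
            counts.computeIfAbsent(words[i], k -> new HashMap<>())
                  .merge(words[i + 1], 1, Integer::sum);
        }
    }

    // Suggest the most frequent follower of the previous word ("" if unseen)
    public String predict(String previousWord) {
        Map<String, Integer> followers = counts.get(previousWord.toLowerCase());
        if (followers == null) return "";
        return Collections.max(followers.entrySet(),
                Map.Entry.comparingByValue()).getKey();
    }

    public static void main(String[] args) {
        BigramPredictor p = new BigramPredictor();
        p.train("the cat sat on the mat the cat ran");
        System.out.println(p.predict("the")); // prints "cat" - it follows "the" most often
    }
}
```

The personalisation described in the text corresponds to continuing to call train() on the individual user’s own input, so the counts drift towards their vocabulary.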
The defining factor of web accessibility is how easily a website can adapt and learn from the individual – using deep learning to improve user personalisation in areas such as interfaces, text size and colour schemes may be useful for those with cognitive disabilities who may find standard website layouts challenging.
Creating user profiles is one way deep learning is currently used to tailor the web experience. By understanding individual users’ characteristics, interests and accessibility needs, websites can adapt their content layout and functionality to provide an optimised experience (Abou-Zahra et al., 2018). The deep learning algorithms identify patterns, correlations and preferences by training on user data to refine recommendations. Dynamically adjusting an interface based on user feedback and behaviour is another task that deep learning models could perform. Adaptive interfaces can change aspects like font size or colour contrast based on an individual’s disability or cognitive capabilities, ensuring all users can access and interact with content more efficiently. Similarly, these models could suggest content recommendations that suit user interests and accessibility requirements by leveraging user preferences, browsing history and contextual information. The more data available to a deep learning model, the better the understanding it can develop to refine web personalisation.
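A minimal sketch of the adaptive behaviour described above (ours, and far simpler than a deep model): an interface could keep a running estimate of a user’s preferred font size from their manual adjustments, and pre-set it on their next visit. A deep learning system generalises this idea to many interacting preferences at once.

```java
public class AdaptiveFontSize {
    private double estimate;    // current best guess of the preferred size (pt)
    private final double rate;  // learning rate: how quickly we adapt to feedback

    public AdaptiveFontSize(double initialSize, double rate) {
        this.estimate = initialSize;
        this.rate = rate;
    }

    // Each time the user manually sets a size, nudge the estimate toward it
    public void observe(double chosenSize) {
        estimate += rate * (chosenSize - estimate);
    }

    // Size to pre-set on the user's next visit
    public double recommended() {
        return estimate;
    }

    public static void main(String[] args) {
        AdaptiveFontSize font = new AdaptiveFontSize(16.0, 0.5);
        font.observe(20.0);   // user enlarges the text
        font.observe(20.0);   // and does so again
        System.out.println(font.recommended()); // prints 19.0
    }
}
```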
The inclusion of multimodal forms of input (as mentioned in the commentary above) will enable more comprehensive personalisation by considering various accessibility dimensions simultaneously. For example, combining image recognition with user preferences can lead to personalised image descriptions that cater to individual accessibility requirements.
Deep learning-based models can evidently enhance user experience on the web through their capacity to discern patterns that improve personalisation, alternative media and speech recognition. A major drawback of applying deep learning in this way is user privacy (Pellegrino & Kelly, 2019). There are concerns related to data collection, storage, anonymisation, consent, security, and user control. Protecting user privacy through measures such as informed consent, data minimisation, secure storage (i.e. using encryption), and transparency is crucial. Addressing these concerns ensures responsible use of data and fosters user trust in deep learning systems for web accessibility. Overall, continued research and innovation in the areas mentioned in this report can responsibly pave the way for more effective accessibility approaches in the future.
This section contains analysis of a simplified deep learning model written with the Java library Deeplearning4j (DL4j). The model requires a suitable library that can handle complex mathematical operations and allow for scalability – a considerable amount of customisation is needed to tailor this model for web accessibility. The example is a MultiLayerNetwork model with one hidden layer.
The program begins by setting the initial configuration parameters for the model, including the seed for the random number generator (seed), the learning rate for the optimisation algorithm (learningRate), the batch size (batchSize), the number of inputs to the model (numInputs), the number of outputs from the model (numOutputs), and the number of nodes in the hidden layer (numHiddenNodes).
A NeuralNetConfiguration builder follows the design pattern of method chaining and is used to set up the network configuration. This builder is used to initiate a MultiLayerNetwork object.
The configuration starts with the seed value for replicable results and the optimisation algorithm set to Stochastic Gradient Descent (SGD). SGD is a commonly used optimisation algorithm in neural networks which iteratively adjusts the model parameters to minimise the loss function.
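The configuration walked through above might look like the following sketch. This is a hedged reconstruction using the standard DL4j builder API, not the exact code of Figure 2: the parameter values are illustrative, and batchSize would be used when constructing the training DataSetIterator rather than in the configuration itself.

```java
import org.deeplearning4j.nn.api.OptimizationAlgorithm;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.nn.weights.WeightInit;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Sgd;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class AccessibilityModelSketch {
    public static void main(String[] args) {
        int seed = 123;              // fixed seed for replicable results
        double learningRate = 0.01;  // step size for SGD
        int numInputs = 2;           // inputs to the model
        int numOutputs = 2;          // outputs from the model
        int numHiddenNodes = 20;     // nodes in the single hidden layer

        // Method-chaining NeuralNetConfiguration builder, as described in the text
        MultiLayerNetwork model = new MultiLayerNetwork(
            new NeuralNetConfiguration.Builder()
                .seed(seed)
                .optimizationAlgo(OptimizationAlgorithm.STOCHASTIC_GRADIENT_DESCENT)
                .updater(new Sgd(learningRate))
                .weightInit(WeightInit.XAVIER)
                .list()
                .layer(new DenseLayer.Builder()        // the single hidden layer
                    .nIn(numInputs).nOut(numHiddenNodes)
                    .activation(Activation.RELU).build())
                .layer(new OutputLayer.Builder(
                        LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                    .nIn(numHiddenNodes).nOut(numOutputs)
                    .activation(Activation.SOFTMAX).build())
                .build());
        model.init();
    }
}
```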
Figure 2: DL4J MultiLayerNetwork model
Dean, Jeffrey. “A Golden Decade of Deep Learning: Computing Systems & Applications.” Daedalus 151, no. 2 (2022): 58–74. https://www.jstor.org/stable/48662026.
Scharre, Paul, Michael C. Horowitz, and Robert O. Work. “What Is Artificial Intelligence?” In Artificial Intelligence: What Every Policymaker Needs to Know, 4–9. Center for a New American Security, 2018. http://www.jstor.org/stable/resrep20447.5.
Hayes, Brian. “Computing Science: Delving into Deep Learning.” American Scientist 102, no. 3 (2014): 186–89. http://www.jstor.org/stable/43707183.
Janiesch, C., Zschech, P., and Heinrich, K. “Machine Learning and Deep Learning.” Electronic Markets 31 (2021): 685–695. https://doi.org/10.1007/s12525-021-00475-2.
Berners-Lee, Tim. 2023. W3.org. https://www.w3.org/mission/accessibility/
Simeone, Jonathan. “Website Accessibility and Persons with Disabilities.” Mental and Physical Disability Law Reporter 31, no. 4 (2007): 507–11. http://www.jstor.org/stable/20787031.
Shrestha, Raju. “A Neural Network Model and Framework for an Automatic Evaluation of Image Descriptions Based on NCAM Image Accessibility Guidelines.” In Proceedings of the 2021 4th Artificial Intelligence and Cloud Computing Conference (AICCC '21), 68–73. Association for Computing Machinery, New York, NY, USA, 2022. https://doi.org/10.1145/3508259.3508269.
Waldrop, M. Mitchell. “What Are the Limits of Deep Learning?” Proceedings of the National Academy of Sciences of the United States of America 116, no. 4 (2019): 1074–77. https://www.jstor.org/stable/26580207.
Makati, Tlamelo. “Machine Learning for Accessible Web Navigation.” In Proceedings of the 19th International Web for All Conference (W4A '22), Article 23, 1–3. Association for Computing Machinery, New York, NY, USA, 2022. https://doi.org/10.1145/3493612.3520463.
Abou-Zahra, Shadi, Judy Brewer, and Michael Cooper. “Artificial Intelligence (AI) for Web Accessibility: Is Conformance Evaluation a Way Forward?” Web4All 2018, 23–25 April 2018, Lyon, France. ACM, 2018.
Pellegrino, Massimo, and Richard Kelly. “Intelligent Machines and the Growing Importance of Ethics.” Edited by Andrea Gilli. In The Brain and the Processor: Unpacking the Challenges of Human-Machine Interaction. NATO Defense College, 2019. http://www.jstor.org/stable/resrep19966.11.
Warner, Brad, and Manavendra Misra. “Understanding Neural Networks as Statistical Tools.” The American Statistician 50, no. 4 (1996): 284–93. https://doi.org/10.2307/2684922.
Reviewed and Edited by S. Sandanatavan
ABSTRACT: Since the turn of the century, advancements in computational biology have allowed us to understand biological processes and relationships through the utilisation of big data, mathematical modelling, theoretical methods, and computational simulation techniques. Therefore, the computational complexity of biological processes can be studied to better understand their limitations and performance. Some biological processes exhibit complexities that relate to the infamous P versus NP problem, as seen in protein folding. This report delves into the significance of comprehending the inherent complexity of protein folding, considering experimental evidence and the persistently unsolved protein folding problem. Levinthal’s paradox and the energy landscape theory are explored to better understand how the physicochemical properties of amino acids constrain potential solutions to the protein folding problem. Potential computational solutions to the protein folding problem are discussed with reference to heuristics and hypercomputation. We conclude that understanding the mechanics of protein folding could hold the key to resolving other important computational problems. Furthermore, we argue that considering the computational complexity of a biological process can help us to better understand how it operates.
“The central dogma of molecular biology deals with the detailed residue-by-residue transfer of sequential information. It states that such information cannot be transferred back from protein to either protein or nucleic acid.”
The dogma gives us a framework for understanding the relationship between DNA and proteins. A gene can be expressed to manufacture its corresponding protein via the processes of transcription and translation (Clancy et al, 2008).
Computational complexity refers to the resources required to solve a computational problem (Cirillo et al, 2018). This report delves into time complexity, and more specifically asymptotic time complexity, which details the behaviour of complexity as the input size of an algorithm increases. Computational complexity is important not only for creating faster algorithms but for understanding the limits of computation. For example, the asymptotic complexity of a problem can be finite yet take so long to compute that it is an impractical solution. Such problems are said to be intractable, signifying that any attempt at resolution demands too many resources to be useful (National Science Foundation, 2016).
Intractability underscores the absence of an efficient solution, where efficiency is defined as a polynomial-time algorithmic solution. This closely relates to the complexity classes of polynomial (P) and nondeterministic polynomial (NP) time. P problems can be solved in polynomial time. A problem is classed as NP if its solution can be guessed and then verified in polynomial time, where the production of the ‘guess’ is nondeterministic. Whilst these two complexity classes seem distinctly different, the P versus NP problem asks whether an NP problem can be solved in polynomial time (Cook, 2001).
Figure 1 shows the relationship between P and NP and further introduces the complexity classes of NP-hard and NP-complete for the cases P = NP and P ≠ NP. A problem is NP-hard if an algorithm to solve it can be translated into one for solving any other NP problem. A problem is NP-complete if it is not only in NP but also NP-hard. Stewart (2000) explains: “Specifically, an NP problem is said to be NP-complete if the existence of a polynomial time solution for that problem implies that all NP problems have a polynomial time solution.”
Figure 1: Euler diagram for P, NP, NP-Complete and NP-Hard problems shown under the assumptions that P ≠ NP (left) and P = NP (right).
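The asymmetry between solving and verifying can be illustrated with subset sum, a classic NP-complete problem (example ours, not from the original): checking a proposed certificate takes time linear in the number of elements, while the naive search examines all 2^n subsets.

```java
public class SubsetSum {
    // Verification: given a proposed subset (the "guess"), checking it is fast -
    // linear in the number of elements. This is the NP side.
    static boolean verify(int[] values, boolean[] chosen, int target) {
        int sum = 0;
        for (int i = 0; i < values.length; i++) {
            if (chosen[i]) sum += values[i];
        }
        return sum == target;
    }

    // Solving by exhaustive search: tries all 2^n subsets - exponential time.
    static boolean exhaustiveSearch(int[] values, int target) {
        for (long mask = 0; mask < (1L << values.length); mask++) {
            int sum = 0;
            for (int i = 0; i < values.length; i++) {
                if ((mask & (1L << i)) != 0) sum += values[i];
            }
            if (sum == target) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        int[] values = {3, 9, 8, 4, 5, 7};
        System.out.println(exhaustiveSearch(values, 15)); // prints true  (3 + 8 + 4)
        System.out.println(exhaustiveSearch(values, 2));  // prints false
    }
}
```

A polynomial-time solver for this problem would, by the NP-completeness argument above, imply P = NP; no such solver is known.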
A polypeptide chain undergoes a transformative process into a biologically functional protein upon assuming its intricate three-dimensional configuration (Cheriyedath, 2019). The comprehension of “how a protein’s amino acid sequence dictates its three-dimensional atomic structure” (Dill et al, 2008) is known as the Protein Folding Problem (PFP). The PFP lies at the intersection of biology, physics and computer science, and its potential solution could have a profound effect on each discipline. This report will explore the PFP from a computational perspective, considering the potential effects of the problem’s computational complexity against the backdrop of the P versus NP problem.
In 2021 DeepMind released AlphaFold, a ‘computational approach capable of predicting protein structures to near experimental accuracy in a majority of cases’ (Jumper et al, 2021). However, it is important to note the distinction between the PFP and protein structure prediction. Predicting how proteins fold into a three-dimensional structure from their amino acid sequence does not teach us anything about how proteins fold (Moore et al, 2022).
Within the scope of this report, we will only focus on the PFP, with only relevant reference to protein prediction.
In a broader sense, studying complexity allows us to better understand fundamental aspects of science, with profound implications for our understanding of the world.
Modern science has embraced reductionism, explaining phenomena by reducing complex data into simple terms. However, scientists may be reaching the limits of this approach (Mazzocchi, 2008). Mazzocchi (2008) insists that it is the emergent properties of biological systems, arising from both their components and external factors, that hinder a reductionist approach. For example, when proteins fold it is not only the order of their polypeptide chain but factors such as cell acidity and temperature that affect their three-dimensional structure.
Biological systems exhibit non-linear behaviour (De Haan, 2006) that is consistent with the dynamic responses of biological processes. Furthermore, the introduction of non-linearity within biological systems presents a counterpoint to deterministic explanations. Here, non-determinism disrupts the reductionist idea of reducing complex phenomena into simple terms. Computational biology can be used to study complex and non-linear biological processes and discover their underlying algorithmic complexity.
P versus NP takes its place amongst the Millennium Prize Problems (Clay Mathematics Institute, 2022) as one of the most well-known and complicated unsolved mathematical problems. Whilst no proof currently exists for a solution that proves whether P = NP, there is an expectation amongst computer scientists and mathematicians that P ≠ NP (Gasarch, 2002).
Whilst the PFP is of immense biological significance, its computational complexity should also garner interest. The complexity of the PFP is known to be NP-hard, meaning it is “conditionally intractable” (Fraenkel, 1993). However, it is the question of whether protein folding is NP-complete that could revolutionise our whole understanding of computation. If NP-complete, the PFP, unlike other NP-complete problems, appears to have an effective solution in nature. If the biological process of protein folding is NP-complete and can be solved in polynomial time, then P would equal NP.
There is debate about the NP-complete nature of the PFP. Berger and Leighton (1998) provide evidence that the PFP is NP-complete for the hydrophobic-hydrophilic model on the cubic lattice. Conversely, Guyeux et al. (2013) (in their work focusing on models used for protein prediction which model the PFP) state that ‘the SAW requirement considered when proving NP-completeness is different from the SAW requirement used in various prediction programs, and that they are different from the real biological requirement.’ Therefore, we will explore the implications of the PFP being both NP-complete and not NP-complete.
Furthermore, as no proof exists for P versus NP, no assumption about its outcome will be made. Instead, this report aims to detail the importance of definitively understanding the complexity of the PFP and to highlight that understanding the PFP would shed light on the mechanisms behind NP problems. Therefore, we will explore the biological process of protein folding and comment on the problem in relation to P versus NP.
One hypothesis for finding the native folded state of a protein is that it could be achieved by a random search along all possible configurations (Zwanzig et al. 1992). This thought experiment is known as Levinthal’s Paradox (Levinthal, 1969) and shows the intractable nature of the PFP (Martínez, 2014).
The notion that exhaustive search contributes to the intricacies of protein folding aligns with an intuitive understanding that P is not equal to NP. This concept suggests that a polynomial solution is improbable for addressing an NP-complete problem. In an example provided by Srinivas and Bagchi (2003), we can understand how an implementation of a non-polynomial solution (in this case exhaustive search) is simply not viable. Consider that for a string of 101 amino acids (where only conformations are considered) there exist 3^100 different possible states (Srinivas & Bagchi, 2003). If we extrapolate from Levinthal’s hypothesis that proteins fold via random and exhaustive search, even with a protein’s ability to explore 10^13 configurations per second, the process of folding would extend to 10^27 years (Srinivas & Bagchi, 2003). Therefore, in Levinthal’s paradox, finding the native folded state of a protein is bounded by the sheer combinatorial complexity of its components (Tompa & Rose, 2011).
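The arithmetic behind this estimate can be checked directly. The following is our back-of-the-envelope reproduction of the Srinivas and Bagchi figures: 3^100 conformations (101 residues give 100 peptide bonds with roughly three conformations each), searched at 10^13 conformations per second, still takes on the order of 10^27 years.

```java
public class LevinthalEstimate {
    // Years needed to exhaustively search every conformation of a chain with
    // `bonds` peptide bonds (~3 conformations each), sampled at `ratePerSecond`
    static double yearsToSearch(int bonds, double ratePerSecond) {
        double conformations = Math.pow(3, bonds);      // 3^100 is roughly 5.2e47
        double seconds = conformations / ratePerSecond; // roughly 5.2e34 s
        double secondsPerYear = 3.15e7;                 // approximate seconds in a year
        return seconds / secondsPerYear;
    }

    public static void main(String[] args) {
        // 101 amino acids -> 100 peptide bonds, 10^13 conformations per second
        System.out.printf("~%.1e years%n", yearsToSearch(100, 1e13)); // prints ~1.6e+27 years
    }
}
```

The estimate is insensitive to the exact constants: even sampling a thousand times faster only shaves three orders of magnitude off a 10^27-year search.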
The paradox raises questions about the likelihood of a rigid algorithmic pathway being used in the protein folding process. In response to Levinthal’s paradox, small energetic biases towards the native state of the protein can be used to reduce folding times to a realistic time frame (Martínez, 2014). Whilst this is certainly not the definitive way proteins fold, the existence of such a solution to Levinthal’s paradox shows that the PFP has a solution with better time complexity than exhaustive search. Furthermore, this shows that protein folding is not simply exhaustive search running on ‘supercomputer’-style biological hardware.
Therefore, protein folding could take place using a rigid algorithmic pathway, which we could assume would run in polynomial time. Such a solution would prove that P = NP if the PFP is NP-complete. However, these ideas are presented at a high level, and the experimentally observed behaviour of protein folding must be considered in tandem. As stated in the biological context of this review, a reductionist approach has often not reached the desired solution to complex biological problems such as protein folding.
Anfinsen showed that proteins fold into their shape because it is “thermodynamically the most stable in the intracellular environment” (Reynaud, 2010). Furthermore, Reynaud (2010) explains that “the information needed for proteins to fold in their correct minimal energy configuration is coded in the physicochemical properties of their amino acid sequence”. To represent the pathway to the free-energy minimum, a statistical approach can be applied to the energetics of protein conformation, leading to the energy landscape theory (Bryngelson et al., 1995). Figure 2 shows a 2-dimensional view of a protein folding funnel. Unfolded configurations occupy the top of the funnel-like energy landscape. Proteins reach their free-energy minimum via routes determined by the physicochemical properties of the protein's amino acid sequence (Schug & Onuchic, 2010).
The most fascinating quality of the energy landscape theory is that there is likely no single unique folding pathway; instead, Bryngelson et al. (1995) propose that protein folding is a complex self-organising process that occurs through various routes down a folding funnel. Whilst this theory does not constrain folding to a defined pathway model, it is widely interpreted that proteins fold through independent pathways (Englander & Mayne, 2014). The idea of independent pathways seems to conflict with the idea that, if NP-complete, protein folding takes place via a rigid algorithmic path.
The energy landscape theory further polarises the implications of the potential computational complexity of the PFP. Firstly, consider the implications of the landscape theory if the PFP is NP-complete. If environmental factors such as temperature influence the folding pathway, the process could still be algorithmic: one where not only the physicochemical properties of the amino acids dictate the path, but also the environmental factors. Alternatively, proteins might undertake folding through distinct routes even within the same environmental conditions. In this case, such a solution might exist well beyond our current understanding of the limits of computational models, as the algorithm would contain multiple correct paths with no apparent benefit.
In contrast, if the modelling of the PFP is not NP-complete, then there are other possible explanations for how proteins fold. For example, if protein folding has any tolerance for imprecision, then the process might not be representable by a precise analytical model. Instead, protein folding might have a likeness to the concepts of soft computing (neural networks, fuzzy logic, etc.) (Gupta & Kulkarni, 2013).
Figure 2: 2-dimensional view of a protein folding funnel.

In computer science, a heuristic methodology is one used to produce a good-enough solution in a shorter time (Datta, 2022). Heuristics are favoured for creating near-optimal solutions to NP-hard optimisation problems and stay in line with the assumption that P ≠ NP (Zerovnik, 2015). Protein folding is NP-hard, and therefore the process could be computed via a heuristic-style solution, which would accommodate an independent pathway model. This would, however, introduce the possibility that there is an inherent error rate in the way proteins fold.
NP-complete problems such as the Travelling Salesman Problem are also often approximated by heuristic methods (Gao, 2020), and researchers often look for parallels with the natural world to form new insights. For example, researchers have demonstrated that “bumblebees make a trade-off between minimising travel distance and prioritising high-reward sites when developing multi-location routes” (Lihoreau et al., 2011), leading to optimisation approaches that consider the behaviours of bumblebees, such as swarm intelligence (Sahin, 2022). Whilst studies like this provide no insight into how these routes might be optimised, it is interesting to note the parallels between the natural world and mathematics. Through this example we see that heuristic methods exist in other aspects of nature. Therefore, it is not inconceivable that, through evolution, the protein folding mechanism could be an efficient heuristic solution.
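To illustrate the kind of heuristic discussed here, the sketch below implements the classic nearest-neighbour rule for the Travelling Salesman Problem: always visit the closest unvisited city. The city coordinates are invented for illustration; the tour produced is good enough, not optimal.

```python
import math

def nearest_neighbour_tour(cities):
    """Greedy TSP heuristic: repeatedly visit the closest unvisited city.
    Runs in O(n^2) time and returns a reasonable (not optimal) tour."""
    unvisited = set(range(1, len(cities)))
    tour = [0]                      # start from the first city
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Hypothetical coordinates for five cities
cities = [(0, 0), (5, 1), (1, 4), (6, 5), (2, 1)]
print(nearest_neighbour_tour(cities))
```

Exhaustive search over tours is factorial in the number of cities; the greedy rule trades optimality for polynomial time, which is exactly the compromise a heuristic folding mechanism would represent.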
Whilst this explanation provides no insight into a solution to P versus NP, the discovery of a heuristic protein folding mechanism would have substantial consequences of its own. Such a solution could be applied to other NP problems and might provide near-perfect solutions to problems such as the travelling salesman problem, satisfiability problems and graph covering problems (Hosch, 2022).
The study of complexity theory can be viewed in relation to the study of computability theory. The latter concerns the capabilities and limitations of computation and the concept of effective procedures (Enderton, 2011). Our definitions of NP and P rely on deterministic and nondeterministic Turing Machines (TMs), and therefore fall within the scope of the Church-Turing thesis, from which the notion of computability is formalised. This thesis can be explained as “the set of functions on the natural numbers that can be defined by algorithms is precisely the set of functions definable in one of a number of equivalent models of computation” (Daintith & Wright, 2008).
We will consider the form of this thesis in which every effective computation can be done by a TM (Copeland, 2000).
NP problems are computable and can be solved in exponential time. However, if P ≠ NP, then NP-hard problems can be viewed as intractable, as any solution takes too many resources to be useful (as seen earlier in Levinthal's paradox) (National Science Foundation, 2016). In this case we reach a limit of effective computation and must once again ask how it can be possible that proteins fold if the PFP is NP-complete.
If it is proved that we cannot effectively compute NP problems, then it is possible that these problems could be computed in a way that surpasses the Turing model (Wells, 2004). The concept of computation that exceeds Turing's model is known as hypercomputation or Super-Turing computation (Wood, 2019). Therefore, it is possible that protein folding is a process completed via hypercomputation. The conjecture that hypercomputation could be responsible for biological processes is debated by both philosophers and computer scientists (Arkoudas, 2008), often within the context of theories of consciousness (Bringsjord & Arkoudas, 2004). If hypercomputation were possible, the PFP's classification as NP-hard could perhaps become trivial, as the Turing model would no longer bound what is effectively computable.
The unifying quality of both P versus NP and the PFP is the breadth of consequences of their potential solutions. Respectively, these problems divide opinion across mathematics and biology, yet their solutions seem astonishingly intertwined. Any potential solution to how proteins fold has the potential to revolutionise our understanding of P versus NP. Therefore, viewing the PFP in conjunction with computational complexity allows us to better understand the limitations and potential outcomes of the problem. Perhaps, through a lens of computational theory and not just computational assistance, we have the power to find the solutions to our most important mathematical problems in the biological processes that surround us.
1. Cobb, M. (2017). ‘60 years ago, Francis Crick changed the logic of biology’. PLoS Biology, vol. 15, no. 9, pp. 1-8. Available at: https://doi.org/10.1371/journal.pbio.2003243 (Accessed: 21 December 2022).
2. Crick, F. (1970). ‘Central dogma of molecular biology’. Nature, vol. 227, no. 5258, pp. 561–3. Available at: https://doi.org/10.1038/227561a0 (Accessed: 21 December 2022).
3. Clancy, S. & Brown, W. (2008). ‘Translation: DNA to mRNA to Protein’. Nature Education, vol. 1, no. 1, pp. 101. Available at: https://www.nature.com/scitable/topicpage/translation-dna-to-mrna-to-protein-393/ (Accessed: 21 December 2022).
4. Cirillo, D., Ponce de Leon, M., & Valencia, A. (2018). ‘Algorithmic complexity in computational biology: basics, challenges and limitations’ [Preprint]. Available at: https://doi.org/10.48550/arXiv.1811.07312 (Accessed: 22 December 2022).
5. National Science Foundation (2016). ‘Tackling intractable computing problems’, National Science Foundation [online], 29 June. Available at: https://beta.nsf.gov/news/tackling-intractable-computing-problems (Accessed: 22 December 2022).
6. Cook, S. (2001). ‘The P Versus NP Problem’, Clay Mathematics Institute. Available at: https://www.claymath.org/sites/default/files/pvsnp.pdf (Accessed: 22 December 2022).
7. Stewart, I. (2000). ‘Million-Dollar Minesweeper’, Scientific American [online], 1 October. Available at: https://www.scientificamerican.com/article/million-dollar-minesweeper/ (Accessed: 22 December 2022).
8. Cheriyedath, S. (2019). ‘Protein Folding’, News-Medical [online], 26 February. Available at: https://www.news-medical.net/life-sciences/Protein-Folding.aspx (Accessed: 21 December 2022).
9. Dill, K. A., Ozkan, S. B., Shell, M. S., & Weikl, T. R. (2008). ‘The protein folding problem’. Annual Review of Biophysics, vol. 37, pp. 289–316. Available at: https://doi.org/10.1146/annurev.biophys.37.092707 (Accessed: 21 December 2022).
10. Jumper, J., Evans, R., Pritzel, A. et al. (2021). ‘Highly accurate protein structure prediction with AlphaFold’, Nature, vol. 596, pp. 583–589. Available at: https://doi.org/10.1038/s41586-021-03819-2 (Accessed: 22 December 2022).
11. Moore, P., Hendrickson, W., Henderson, R., & Brunger, A. (2022). ‘The protein-folding problem: Not yet solved’. Science, vol. 375, no. 6580, pp. 507. doi: 10.1126/science.abn9422 (Accessed: 22 December 2022).
12. Schneider, M., & Somers, M. (2006). ‘Organizations as complex adaptive systems: Implications of Complexity Theory for leadership research’. The Leadership Quarterly, vol. 17, no. 4, pp. 351–365. Available at: https://doi.org/10.1016/j.leaqua.2006.04.006 (Accessed: 22 December 2022).
13. Mazzocchi, F. (2008). ‘Complexity in biology. Exceeding the limits of reductionism and determinism using complexity theory’. EMBO Reports, vol. 9, no. 1, pp. 10–14. Available at: https://doi.org/10.1038/sj.embor.7401147 (Accessed: 22 December 2022).
14. Haan, J. (2006). ‘How emergence arises’. Ecological Complexity, vol. 3, pp. 293–301. Available at: https://www.researchgate.net/publication/222935027_How_Emergence_Arises (Accessed: 22 December 2022).
15. Huerta, M., Downing, G., Haseltine, F., Seto, B., & Liu, Y. (2000). ‘NIH working definition of bioinformatics and computational biology’, Biomedical Information Science and Technology Initiative. Available at: http://www.binf.gmu.edu/jafri/math6390bioinformatics/workingdef.pdf (Accessed: 22 December 2022).
16. Clay Mathematics Institute (2022). ‘Millennium Problems’, Clay Mathematics Institute [online], 7 December. Available at: https://www.claymath.org/millennium-problems (Accessed: 22 December 2022).
17. Gasarch, W. (2002). ‘The P=?NP Poll’. SIGACT News, vol. 33, no. 2, pp. 34–47. Available at: https://dl.acm.org/doi/10.1145/564585.564599 (Accessed: 22 December 2022).
18. Fraenkel, S. (1993). ‘Complexity of protein folding’. Bulletin of Mathematical Biology, vol. 55, no. 6, pp. 1199-1210. Available at: https://doi.org/10.1016/S0092-8240(05)80170-3 (Accessed: 22 December 2022).
19. Guyeux, C., Côté, N., Bahi, J.M., & Bienia, W. (2014). ‘Is protein folding problem really a NP-complete one? First investigations’. J Bioinform Comput Biol. doi: 10.1142/S0219720013500170 (Accessed: 22 December 2022).
20. Berger, B., & Leighton, T. (1998). ‘Protein Folding in the Hydrophobic-Hydrophilic (HP) Model is NP-Complete’. Journal of Computational Biology. Available at: http://doi.org/10.1089/cmb.1998.5.27 (Accessed: 22 December 2022).
21. Zwanzig, R., Szabo, A., & Bagchi, B. (1992). ‘Levinthal's paradox’. Proceedings of the National Academy of Sciences of the United States of America, vol. 89, no. 1, pp. 20–22. Available at: https://doi.org/10.1073/pnas.89.1.20 (Accessed: 22 December 2022).
22. Levinthal, C. (1969). ‘How to Fold Graciously’, Mossbauer Spectroscopy in Biological Systems Proceedings, vol. 67, no. 41, pp. 22-26. Available at: https://faculty.cc.gatech.edu/~turk/bio_sim/articles/proteins_levinthal_1969.pdf (Accessed: 22 December 2022).
23. Martínez, L. (2014). ‘Introducing the Levinthal's Protein Folding Paradox and Its Solution’, Journal of Chemical Education, vol. 91, no. 11, pp. 1918-1923. doi: 10.1021/ed300302h (Accessed: 22 December 2022).
24. Srinivas, G., & Bagchi, B. (2003). ‘Study of the dynamics of protein folding through minimalistic models’, Theoretical Chemistry Accounts, vol. 109, pp. 8–21. Available at: https://doi.org/10.1007/s00214-002-0390-6 (Accessed: 22 December 2022).
25. Tompa, P., & Rose, G. (2011). ‘The Levinthal paradox of the interactome’, Protein Science, no. 12, pp. 2074-9. doi: 10.1002/pro.747 (Accessed: 22 December 2022).
26. Reynaud, E. (2010). ‘Protein Misfolding and Degenerative Diseases’. Nature Education, vol. 3, no. 9, pp. 28. Available at: https://www.nature.com/scitable/topicpage/protein-misfolding-and-degenerative-diseases-14434929/ (Accessed: 22 December 2022).
27. Bryngelson, J., Onuchic, J., Socci, N., & Wolynes, P. (1995). ‘Funnels, pathways, and the energy landscape of protein folding: A synthesis’. Proteins, vol. 22, no. 3, pp. 167-195. Available at: https://doi.org/10.1002/prot.340210302 (Accessed: 22 December 2022).
28. Schug, A., & Onuchic, J. (2010). ‘From protein folding to protein function and biomolecular binding by energy landscape theory’, Current Opinion in Pharmacology, vol. 10, no. 6, pp. 709-714. Available at: https://doi.org/10.1016/j.coph.2010.09.012 (Accessed: 22 December 2022).
29. Englander, S., & Mayne, L. (2014). ‘The nature of protein folding pathways’. Proceedings of the National Academy of Sciences, vol. 111, no. 45, pp. 15873-15880. Available at: https://doi.org/10.1073/pnas.1411798111 (Accessed: 22 December 2022).
30. Gupta, P., & Kulkarni, N. (2013). ‘An introduction of soft computing approach over hard computing’. International Journal of Latest Trends in Engineering and Technology, vol. 3, no. 1. Available at: https://www.ijltet.org/pdfviewer.php?id=894&j_id=2505 (Accessed: 22 December 2022).
31. Datta, S. (2022). ‘Greedy Vs. Heuristic Algorithm’, Baeldung [online], 8 November. Available at: https://www.baeldung.com/cs/greedy-vs-heuristic-algorithm (Accessed: 21 December 2022).
32. Zerovnik, J. (2015). ‘Heuristics for NP-hard optimization problems - simpler is better!?’, Logistics & Sustainable Transport, vol. 6, pp. 1-10. doi: 10.1515/jlst-2015-0006 (Accessed: 22 December 2022).
33. Gao, Y. (2020). ‘Heuristic Algorithms for the Traveling Salesman’. Medium [online], 14 February. Available at: https://medium.com/opexanalytics/heuristic-algorithms-for-the-traveling-salesman-problem-6a53d8143584 (Accessed: 22 December 2022).
34. Lihoreau, M., Chittka, L., & Raine, N. (2011). ‘Trade-off between travel distance and prioritization of high-reward sites in traplining bumblebees’, Functional Ecology, vol. 25, no. 6, pp. 1284-1292. Available at: https://doi.org/10.1111/j.1365-2435.2011.01881.x (Accessed: 22 December 2022).
35. Sahin, M. (2022). ‘Solving TSP by using combinatorial Bees algorithm with nearest neighbor method’. Neural Computing & Applications. Available at: https://doi.org/10.1007/s00521-022-07816-y (Accessed: 22 December 2022).
36. Hosch, W. (2022). ‘P versus NP problem’. Britannica [online], 24 November. Available at: https://www.britannica.com/science/P-versus-NP-problem (Accessed: 22 December 2022).
37. Enderton, H. (2011). ‘Computability Theory’, Academic Press, pp. 1-27. Available at: https://doi.org/10.1016/B978-0-12-384958-8.00001-6 (Accessed: 22 December 2022).
38. Daintith, J., & Wright, E. (2008). ‘A Dictionary of Computing (6 ed.)’, Oxford: Oxford University Press.
39. Copeland, J. (2000). ‘The Church-Turing Thesis’. AlanTuring.net [online], June. Available at: http://www.alanturing.net/turing_archive/pages/Reference%20Articles/The%20Turing-Church%20Thesis.html (Accessed: 22 December 2022).
40. Wells, B. (2004). ‘Hypercomputation by definition’. Theoretical Computer Science, vol. 317, no. 1–3, pp. 191-207. Available at: https://doi.org/10.1016/j.tcs.2003.12.011 (Accessed: 22 December 2022).
41. Wood, L. (2019). ‘Super Turing Computation Versus Quantum Computation’, Forbes [online], 25 February. Available at: https://www.forbes.com/sites/cognitiveworld/2019/02/25/super-turing-computation-versus-quantum-computation/?sh=53c81ea049e2 (Accessed: 22 December 2022).
42. Arkoudas, K. (2008). ‘Computation, hypercomputation, and physical science’, Journal of Applied Logic, vol. 6, no. 4, pp. 461-475. Available at: https://doi.org/10.1016/j.jal.2008.09.007 (Accessed: 22 December 2022).
43. Bringsjord, S., & Arkoudas, K. (2004). ‘The modal argument for hypercomputing minds’. Theoretical Computer Science, vol. 317, no. 1-3, pp. 167-190. doi: 10.1016/j.tcs.2003.12.010 (Accessed: 22 December 2022).
Reviewed and edited by
T. Lawson

ABSTRACT: This report provides an overview of machine learning (ML) methods for causal inference, with an emphasis on applications in the social sciences. Much of the machine learning literature centres on using and developing methods to make predictions. However, there is growing recognition that an understanding of causal relationships is of crucial importance in many disciplines. Much of the research in both the social and natural sciences revolves around cause-and-effect questions, which had remained far beyond the reach of conventional ML approaches. In this report, we first discuss the fundamental challenges inherent in causal inference and examine how diverse ML approaches can aid in addressing them. We draw our examples primarily from the realm of the social sciences, predominantly economics, a discipline renowned for its emphasis on causal questions. Finally, we conclude by discussing potential directions for future research and the intricacies of causal discovery.
Over the past few decades, we have witnessed remarkable advancements in machine intelligence, leading to enhanced performance of these systems across an expanding range of tasks. These breakthroughs are often attributed to the paradigm shift in the field of artificial intelligence (AI), moving from procedural approaches to empirical approaches based on statistical learning. As a result, contemporary machine learning algorithms primarily operate in an associational mode (Pearl, 2019). However, many questions are inherently causal in nature, and findings based solely on associations cannot be readily interpreted in terms of cause and effect (Liu, et al. 2021). Despite the considerable progress in the fields of machine learning and artificial intelligence, addressing causal questions remains beyond the purview of conventional machine learning approaches. The absence of suitable tools to address causal inquiries has also contributed to the slow adoption of ML approaches in many fields outside computer science, for example, social sciences (Leist, et al. 2022).
While (supervised) machine learning revolves around the problem of prediction, much of the research in the social sciences builds on theories describing causal relationships and seeks to test them empirically. Social scientists strive to obtain unbiased estimates of causal effects based on some theoretical relationship, rather than placing emphasis solely on minimising prediction errors, as is central in ML research. ML models exploit the associations between features and outcomes to achieve accurate predictions, but discerning whether these predictions rest on genuine or spurious relationships remains elusive. Consequently, the question arises as to whether any data mining algorithm can extract genuine causal relationships or, in other words, whether the answers to causal questions lie within the data itself (Pearl, 2018). These questions have garnered interest from Judea Pearl, a computer scientist and Turing Award recipient for his contributions to both AI and causal inference. In this report, we explore how far we have come in answering these questions. But, prior to delving into that discussion, we will introduce the fundamental problems of causal inference and present some popular ML approaches.
The purpose of this report is to offer a high-level review of the literature at the intersection between ML and causal inference, primarily with a focus on its relevance to the fields of social sciences. Notably, it was social scientists who spearheaded rigorous exploration of causal questions, and economists who played a pioneering role in both developing and adopting ML tools for causal inference (Kreif, and DiazOrdaz, 2019). To commence our exploration, we leverage the influential work of renowned econometricians Guido Imbens and Susan Athey, which serve as valuable starting points. Additionally, we draw upon a multitude of extensive studies that delve into the intersection between ML and causal inference, providing
comprehensive insights into the major challenges inherent in causal inference and the accompanying ML tools. Our examination encompasses a systematic search and review of relevant academic resources pertaining to these tools. Acknowledging the rapid advancements in causal inference frameworks and tools within the field of computer science and their potential implications for other disciplines, we also investigate the contributions of Judea Pearl, the pioneer of causal reasoning within computer science. Additionally, we briefly discuss the most recent developments in approaches for causal discovery.
The body of this report is structured as follows. First, we introduce the fundamental problem of causal inference and the practical issues related to estimating causal effects. Subsequently, we present an overview of the most commonly employed ML approaches for causal inference tasks. To conclude, we discuss the advantages and drawbacks of ML methods and outline potential avenues for future research. It is important to note that this report does not aim to provide an exhaustive review of the literature on the utilisation of ML for causal inference; we discuss only selected approaches. For example, we do not cover in depth approaches such as causal reinforcement learning. This is primarily due to both the fragmented nature and substantial growth of this literature, which would necessitate a more extensive review for a comprehensive account. Moreover, our review primarily focuses on applied work in the domain of the social sciences, as opposed to theoretical results.
For decades, under the influence of the founders of modern statistics, Francis Galton and Karl Pearson, causal questions had remained outside the realm of scientific investigation (Hernan, et al. 2019; Pearl, 2018). It was social scientists (and geneticists), particularly econometricians and epidemiologists, rather than statisticians, who were the pioneers of causal reasoning in science (Pearl and Mackenzie, 2018). After the series of influential papers by Rubin (1974, 1976, 1978, 1980) – who introduced the first formal mathematical framework for causal inference – econometricians, in particular, have made substantial contributions to the development of tools for causal inference (Kreif, and DiazOrdaz, 2019). Program evaluation and experimental economics have emerged as thriving subfields of economics, with a primary focus on estimating causal effects of interventions using randomised experiments and innovative quasi-experimental designs (Athey and Imbens, 2017). Notably, Joshua Angrist and Guido Imbens were honoured with a Nobel Prize in Economic Sciences for their instrumental role in popularising experimental techniques in economics. The allure of experimental methods lies in their ability to overcome key problems in causal inference, which we illustrate in the following example.
Consider the scenario in which our objective is to estimate the effect of attending university on future earnings. Since we cannot observe both counterfactual outcomes for each individual, we could attempt to estimate the causal effect of attending
university by obtaining a sample of individuals and dividing them into two groups: those who attended a university and those who did not. By calculating the average earnings for each group and taking their difference, we might infer the premium associated with attending university – or so it seems. It is plausible to assume that the students who decided to attend university differ in various ways from those who chose not to – perhaps they are smarter, more industrious, or have more favourable socio-economic backgrounds. These attributes also make them more likely to earn higher wages in the future, regardless of whether they attended university. Such characteristics are known as confounders, meaning they simultaneously affect both the treatment (attending university) and the outcome (earnings). In the absence of further assumptions, the presence of confounders makes it challenging to isolate causal effects (Kreif, and DiazOrdaz, 2019).
The example above highlights several notorious problems of causal inference. Firstly, for each individual, both potential outcomes cannot be simultaneously observed, which makes the identification of causal effects difficult based solely on observed data (Hill, et al. 2019; Kreif, and DiazOrdaz, 2019). In observational studies, we could simply compare outcomes between treated and untreated groups. However, as demonstrated earlier, such a simple comparison would yield a biased estimate of the causal effect due to confounding. While randomised experiments are considered the gold standard, they are often unethical or impractical. In the absence of randomisation, identifying causal effects requires strong assumptions to be satisfied, with the unconfoundedness assumption being of utmost importance. This assumption posits that treatment assignment is independent of the potential outcomes, conditional on observed covariates (Leist, et al. 2022; Kreif, and DiazOrdaz, 2019).
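The bias that confounding introduces can be made concrete with a small simulation of the university example; all numbers here are invented for illustration. Ability raises both the chance of attending university and later earnings, so the naive group comparison overstates the true premium.

```python
import math
import random

random.seed(0)
TRUE_EFFECT = 10_000   # assumed true earnings premium of attending university

treated, control = [], []
for _ in range(100_000):
    ability = random.gauss(0, 1)                              # confounder
    attends = random.random() < 1 / (1 + math.exp(-ability))  # abler people attend more often
    earnings = (30_000 + 8_000 * ability
                + (TRUE_EFFECT if attends else 0)
                + random.gauss(0, 5_000))
    (treated if attends else control).append(earnings)

# Naive comparison: attendees also have higher ability, so this is biased upward.
naive = sum(treated) / len(treated) - sum(control) / len(control)
print(f"naive premium: {naive:,.0f}  true effect: {TRUE_EFFECT:,}")
```

In this setup the naive difference in means comes out several thousand above the true effect, precisely the selection bias described in the text.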
The plausibility of this assumption cannot be tested using observational data; it necessitates careful reasoning about the relationships between variables based on subject-matter expertise (Balzer and Petersen, 2021). These relationships can be conveniently represented through so-called directed acyclic graphs (DAGs), popularised by Pearl.
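A DAG for the university example can be written down as a simple adjacency structure; this encoding and the node names are hypothetical, chosen only to make the confounding pattern visible.

```python
# Hypothetical encoding of a DAG for the university-earnings example:
# ability is a confounder, with directed edges into both the treatment
# (attends_university) and the outcome (earnings).
dag = {
    "ability": ["attends_university", "earnings"],
    "attends_university": ["earnings"],
    "earnings": [],
}

def parents(node):
    """Return the nodes with a directed edge into `node`."""
    return sorted(p for p, children in dag.items() if node in children)

print(parents("earnings"))            # treatment and confounder both point here
print(parents("attends_university"))  # the confounder also drives the treatment
```

Reading parent sets off the graph is how DAG-based reasoning identifies which covariates must be adjusted for.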
ML has proved to be highly valuable in addressing many of the challenges outlined above, including confounding adjustment and counterfactual prediction. In the following section, we provide an overview of some commonly employed ML tools for addressing causal inference tasks.
A large body of work on causal inference using ML relies on tree-based approaches. Tree-based methods, also known as classification and regression trees, can be used for the classification of binary or multicategory outcomes, or for regression with continuous outcomes (Kreif, and DiazOrdaz, 2019). In its most basic form, a regression tree considers which covariate to split on, and at which level, so that the sum of squared residuals is minimised (Athey and Imbens, 2017). A major challenge with tree methods is their sensitivity to the initial split, leading to high variance, which is also the reason why single trees are rarely used in practice (Kreif, and DiazOrdaz, 2019). Thus, they are usually implemented as ensembles of trees in various variants, some of which we present here.
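The splitting rule described above can be sketched directly. The function below, a minimal illustration on a single covariate with invented toy data, scans candidate thresholds and keeps the one minimising the total sum of squared residuals of the two resulting leaves.

```python
def best_split(x, y):
    """Find the threshold on a single covariate that minimises the
    total sum of squared residuals of the two resulting leaves."""
    def sse(vals):
        if not vals:
            return 0.0
        mean = sum(vals) / len(vals)
        return sum((v - mean) ** 2 for v in vals)

    best = (None, float("inf"))
    for threshold in sorted(set(x))[:-1]:
        left = [yi for xi, yi in zip(x, y) if xi <= threshold]
        right = [yi for xi, yi in zip(x, y) if xi > threshold]
        total = sse(left) + sse(right)
        if total < best[1]:
            best = (threshold, total)
    return best

# Toy data with an obvious break between x = 3 and x = 4
x = [1, 2, 3, 4, 5, 6]
y = [1.0, 1.1, 0.9, 5.0, 5.2, 4.9]
print(best_split(x, y))
```

A full regression tree applies this search recursively over all covariates; ensembles then average many such trees to reduce the variance noted in the text.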
Researchers and policy makers are often interested in how treatment effects vary across different population subgroups. Building on our previous example, in addition to determining the average effect of attending university on future earnings, we may want to examine which specific types of students benefit the most from university attendance. To address such questions, Wager and Athey (2018) introduced a modified version of regression trees known as causal trees, specifically designed for causal settings. Unlike traditional regression trees optimised for prediction, causal trees use a splitting rule that optimises for finding splits associated with treatment effect heterogeneity (Athey and Imbens, 2019). This method generates treatment effect estimates and a confidence interval for each subgroup. However, a disadvantage of causal trees is that the tree structure itself is somewhat arbitrary, potentially resulting in different estimated partitions when using different subsamples of the data. A further extension of this approach is causal forests, developed by Wager and Athey (2015), which provide smooth estimates of treatment effects. Causal forests involve generating multiple trees and averaging the treatment effects across a large number of these causal trees, resulting in a smooth function of treatment effects, given some covariate.
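The quantity a causal tree searches for can be illustrated with a single, fixed split. Using hypothetical records of (covariate, treated indicator, outcome), we compute the difference-in-means treatment effect on each side of a split; a causal tree would choose the split where these subgroup effects differ most. This is only a sketch of the objective, not the estimator itself.

```python
# Hypothetical records: (covariate, treated, outcome).
records = [
    (0.1, 1, 5.0), (0.2, 0, 1.0), (0.3, 1, 5.2), (0.4, 0, 0.8),
    (0.6, 1, 2.1), (0.7, 0, 1.1), (0.8, 1, 1.9), (0.9, 0, 0.9),
]

def effect(group):
    """Difference-in-means treatment effect within a subgroup."""
    treated = [y for _, d, y in group if d == 1]
    control = [y for _, d, y in group if d == 0]
    return sum(treated) / len(treated) - sum(control) / len(control)

# Evaluate a candidate split at covariate = 0.5.
left = [r for r in records if r[0] < 0.5]
right = [r for r in records if r[0] >= 0.5]
print(f"effect for covariate < 0.5: {effect(left):.1f}, >= 0.5: {effect(right):.1f}")
```

In this toy data the effect is roughly four times larger in the low-covariate subgroup, exactly the kind of heterogeneity the causal-tree splitting rule rewards.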
An interesting tree-based approach is Bayesian Additive Regression Trees (BART), which can be distinguished from other tree-based methods by its underlying probability model. BART was introduced by Chipman et al. (2010) and has since gained increasing popularity due to several known advantages: it is simple to use, yields excellent performance, provides uncertainty measures, and handles many predictors and missing data (Hill, 2011). As a Bayesian method, BART includes a set of priors for the tree structure and the leaf parameters, which aim to provide regularisation and prevent any single tree from dominating the total fit (Kreif, and DiazOrdaz, 2019). BART fits trees iteratively, in such a way that each new tree aims to capture the fit left unexplained while the other trees are held constant. BART provides a very versatile approach to causal inference, producing accurate average treatment effect estimates, naturally identifying heterogeneous treatment effects (Hill, 2011), modelling complex response surfaces, controlling for confounding, and so on.
Several noteworthy ML approaches have been developed to adjust for confounding, with propensity score methods standing out as particularly notable (Leist, et al. 2022). The propensity score (PS) is defined as the probability of treatment assignment as a function of all relevant observed covariates (Rubin, 2010; Abadie and Imbens, 2016). By comparing individuals with similar PSs, one can obtain an unbiased estimate of the treatment effect. In randomised experiments, where treatment assignment is equally likely for everyone, the PS is one half for all individuals, enabling unbiased treatment effect estimation. In non-experimental settings, the PS is not directly known, but can be estimated from observed covariates (Rubin, 2010). If it is reasonable to assume that a specific set of observed covariates eliminates confounding, adjusting solely for the propensity score is equivalent to adjusting for the entire set of covariates (Abadie and Imbens, 2016).
PS methods have been widely used to mitigate confounding in a pre-estimation step (Blakely, et al. 2020). These methods have demonstrated excellent performance, especially through two key techniques: propensity score matching, and reweighting using the PSs as inverse weights. The propensity score matching estimator constructs the missing potential outcome by pairing each individual with the closest outcome from the other group based on propensity score. The average treatment effect is then calculated as the mean difference between these predicted potential outcomes (Abadie and Imbens, 2016). Inverse weights, by contrast, balance the weighted distributions of covariates between treated and untreated groups (Li, et al. 2018). Despite their growing popularity, one limitation of PS methods is their inability to account for unobserved covariates, thus potentially not eliminating selection bias completely.
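Inverse propensity weighting can be demonstrated in a small simulation where, for simplicity, the true propensity score is known rather than estimated; all numbers are invented for illustration. The weighted estimator recovers the assumed treatment effect where the naive comparison is biased.

```python
import math
import random

random.seed(1)
TRUE_EFFECT = 2.0   # assumed treatment effect for this simulated example

rows = []
for _ in range(50_000):
    x = random.gauss(0, 1)                 # observed confounder
    ps = 1 / (1 + math.exp(-x))            # true propensity score P(treated | x)
    d = random.random() < ps               # confounded treatment assignment
    y = x + (TRUE_EFFECT if d else 0.0) + random.gauss(0, 1)
    rows.append((d, y, ps))

n = len(rows)
n_treated = sum(1 for d, _, _ in rows if d)

# Naive difference in means: biased because x drives both treatment and outcome.
naive = (sum(y for d, y, _ in rows if d) / n_treated
         - sum(y for d, y, _ in rows if not d) / (n - n_treated))

# Inverse-propensity-weighted estimator: E[DY/e(x)] - E[(1-D)Y/(1-e(x))].
ipw = (sum(y / ps for d, y, ps in rows if d) / n
       - sum(y / (1 - ps) for d, y, ps in rows if not d) / n)

print(f"naive: {naive:.2f}  IPW: {ipw:.2f}  true: {TRUE_EFFECT}")
```

In practice the propensity score must itself be estimated, often with flexible ML models, which is where estimation error and unobserved confounding re-enter the picture.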
ML tools have the potential to substantially improve empirical analysis of cause-effect problems across a diverse range of fields, offering particular benefits in certain settings. In contrast to most empirical research in other disciplines, ML typically focuses on optimising out-of-sample performance (Mullainathan & Spiess, 2017). The demonstrated capability of ML methods to outperform alternative methods holds considerable practical value, yet it is often underappreciated outside computer science. Additionally, ML approaches prove particularly valuable when handling large datasets, especially in high-dimensional settings (Mullainathan & Spiess, 2017). These approaches enable structured exploration of relationships between numerous variables and facilitate the construction of optimal predictor combinations (Balzer and Petersen, 2021).
However, ML is not without dangers and challenges. A noteworthy practical challenge lies in selecting the most suitable algorithm for a given problem and implementing it effectively, a process that still largely necessitates domain expertise.
Most research on ML approaches for causal inference reframes cause-and-effect questions as prediction tasks, emphasising confounding adjustment, potential-outcome prediction, and treatment effect heterogeneity estimation (Blakely, et al., 2020). Exploring novel methodologies for causal discovery – aiming to identify causal relationships from observational data without prior knowledge – emerges as an intriguing area for future investigation. While rarely employed in the social sciences, the predominant framework for causal discovery is so-called causal structure learning (Leist, et al., 2022). Leveraging the graphical representation of variable relationships in the form of a directed acyclic graph (DAG) (Pearl, 2009), structure learning approaches aim to acquire such graphs from data, and causal structure learning extends this framework by inferring the directionality of the graph edges. The major limitations of structure learning approaches include reliance on several underlying assumptions (Heinze-Deml, 2018) and the need for more scalable and efficient algorithms. Other methods for causal discovery are being developed, including causal reinforcement learning (e.g. Zhu, et al., 2019), but in most cases efficiency remains an issue.
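A basic ingredient of any structure-learning procedure is a machine-checkable DAG representation: candidate edge orientations must keep the graph acyclic. A minimal sketch (the class name and example graph are ours, purely illustrative) using an adjacency matrix and Kahn's algorithm:

```java
// Minimal DAG machinery for structure-learning sketches.
// adj[i][j] == true encodes a directed edge i -> j ("variable i causes j").
public class DagCheck {

    // Kahn's algorithm: a directed graph is acyclic iff its nodes can be
    // removed one by one, always taking a node with no remaining incoming edges.
    public static boolean isAcyclic(boolean[][] adj) {
        int n = adj.length;
        int[] inDegree = new int[n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (adj[i][j]) inDegree[j]++;
        boolean[] removed = new boolean[n];
        for (int step = 0; step < n; step++) {
            int pick = -1;
            for (int v = 0; v < n; v++)
                if (!removed[v] && inDegree[v] == 0) { pick = v; break; }
            if (pick == -1) return false; // every remaining node lies on a cycle
            removed[pick] = true;
            for (int j = 0; j < n; j++)
                if (adj[pick][j]) inDegree[j]--;
        }
        return true;
    }

    public static void main(String[] args) {
        boolean[][] g = new boolean[3][3];
        g[0][1] = true; g[1][2] = true;        // chain 0 -> 1 -> 2
        System.out.println(isAcyclic(g));      // true
        g[2][0] = true;                        // adding 2 -> 0 closes a cycle
        System.out.println(isAcyclic(g));      // false
    }
}
```

Real structure-learning algorithms search over such graphs while scoring them against data; the acyclicity check above is the constraint every candidate must satisfy.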
Numerous influential thinkers, including Pearl (2009, 2019), Deutsch (1997), and Popper (1934), have emphasised the importance of good explanatory models for the scientific understanding of the world. Cause-effect relationships are a fundamental aspect of such models in both the natural and social sciences. In contrast, ML and AI research have traditionally focused on prediction tasks, representing a distinct statistical mode of thinking. Only recently has there been a surge in the development and adoption of ML approaches to cause-effect problems. However, many of these approaches – such as propensity score methods and tree-based approaches – essentially reframe causal inference tasks as prediction problems. Although these approaches have proven highly valuable in many settings, they still rely
on underlying domain expertise, much like other statistical methods. The true challenge lies in developing systems that can autonomously learn causal relationships solely from data, without relying on pre-existing knowledge. Further research in this area has the potential to revolutionise research not only in social and natural sciences, but also in the field of artificial intelligence itself.
Abadie, A., Diamond, A. & Hainmueller, J. 2010. Synthetic Control Methods for Comparative Case Studies: Estimating the Effect of California's Tobacco Control Program. Journal of the American Statistical Association, 105:490, 493-505. DOI: 10.1198/jasa.2009.ap08746
Abadie, A., Diamond, A. & Hainmueller, J. 2015. Comparative politics and the synthetic control method. American Journal of Political Science, 59: 495-510.
Abadie, A. and Imbens, G. 2016. Notes and Comments: Matching on the Estimated Propensity Scores. Econometrica, Vol. 84, No. 2 (March, 2016), 781–807. DOI: 10.3982/ECTA11293
Andrews, R. M., et al. 2021. A practical guide to causal discovery with cohort data. arXiv:2108.13395 [stat.AP] (30 August 2021).
Athey, S. and Imbens, G. 2017. The State of Applied Econometrics: Causality and Policy Evaluation. Journal of Economic Perspectives, Volume 31, Number 2, Spring 2017, Pages 3-32.
Athey, S. and Imbens, G. 2019. Machine Learning Methods That Economists Should Know About. Annual Review of Economics, Vol. 11: 685-725 (August 2019). https://doi.org/10.1146/annurev-economics-080217-053433
Athey, S. and Wager, S. 2021. Policy Learning with Observational Data. Econometrica, Volume 89, Issue 1, p. 133-161. https://doi.org/10.3982/ECTA15732
Balzer, L. B., Petersen, M. L. 2021. Invited Commentary: Machine Learning in Causal Inference: How Do I Love Thee? Let Me Count the Ways. American Journal of Epidemiology, Volume 190, Issue 8, August 2021, Pages 1483-1487. https://doi.org/10.1093/aje/kwab048
Blakely, T., et al. 2020. Reflection on modern methods: when worlds collide – prediction, machine learning and causal inference. International Journal of Epidemiology, 49, 2058-2064.
Chipman, H. A., George, E. I., and McCulloch, R. E. 2010. BART: Bayesian additive regression trees. The Annals of Applied Statistics, 4(1), 266-298. https://doi.org/10.1214/09-AOAS285
Deutsch, D. 1997. The Fabric of Reality. Penguin Books, London. ISBN 978-1-101-55063-2.
Fan, J. and Lv, J. 2010. A Selective Overview of Variable Selection in High-Dimensional Feature Space. Statistica Sinica, 20, 101-148.
Hartford, J., et al. 2017. Deep IV: A Flexible Approach for Counterfactual Prediction. Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017.
Heinze-Deml, C., Maathuis, M. H., Meinshausen, N. 2018. Causal structure learning. Annual Review of Statistics and Its Application, 5, 371-391.
Hernan, M., et al. 2019. A Second Chance to Get Causal Inference Right: A Classification of Data Science Tasks. CHANCE, Volume 32, Issue 1.
Hill, J. L. 2011. Bayesian Nonparametric Modeling for Causal Inference, Journal of Computational and Graphical Statistics, 20:1, 217-240, DOI: 10.1198/jcgs.2010.08162
Hill, J., et al. 2020. Bayesian Additive Regression Trees: A Review and Look Forward. Annual Review of Statistics and Its Application, Vol. 7: 251-278 (March 2020). https://doi.org/10.1146/annurev-statistics-031219-041110
Kreif, N. and DiazOrdaz, K. 2019. Machine learning in policy evaluation: new tools for causal inference. arXiv:1903.00402v1 [stat.ML] (1 March 2019).
Leist, A. K., et al. 2022. Mapping of machine learning approaches for description, prediction, and causal inference in the social and health sciences. Science Advances, Vol. 8, Issue 42, 19 Oct 2022. DOI: 10.1126/sciadv.abk1942
Li, F., et al. 2018. Balancing Covariates via Propensity Score Weighting. Journal of the American Statistical Association, Volume 113, Issue 521. https://doi.org/10.1080/01621459.2016.1260466
Liu, T., Ungar, L. & Kording, K. 2021. Quantifying causality in data science with quasi-experiments. Nature Computational Science, 1, 24-32. https://doi.org/10.1038/s43588-020-00005-8
Mullainathan, S. & Spiess, J. 2017. Machine learning: an applied econometric approach. Journal of Economic Perspectives, 31, 87-106.
Pearl, J. 2019. The Seven Tools of Causal Inference, with Reflections on Machine Learning. Communications of the ACM, Vol. 62, No. 3, March 2019. https://doi.org/10.1145/3241036
Pearl J. 2009. Causality: Models, Reasoning, and Inference. Cambridge, UK: Cambridge Univ. Press. 2nd ed.
Pearl, J. and Mackenzie, D. 2018. The Book of Why. New York: Basic Books. ISBN 978-0-465-09760-9.
Popper, K. 1934. The Logic of Scientific Discovery (as Logik der Forschung; English translation 1959). ISBN 0415278449.
Rubin, D.B. 1974. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology 66, 688–701
Rubin, D.B. 1976. Inference and missing data. Biometrika 63, 581–92
Rubin, D.B. 1978. Bayesian inference for causal effects: the role of randomization. Annals of Statistics 6, 34–58.
Rubin, D.B. 1980. Discussion of ‘Randomization analysis of experimental data in the Fisher randomization test’ by Basu. Journal of the American Statistical Association 75, 591–3
Rubin, D.B. 2010. Propensity Score Methods. American Journal of Ophthalmology (Series on Statistics), Volume 149, Issue 1, p. 7-9, January 2010. DOI: 10.1016/j.ajo.2009.08.024
Scanagatta, M., Salmerón, A., Stella, F. 2019. A survey on Bayesian network structure learning from data. Prog. Artif. Intell. 8, 425–439 (2019)
Schölkopf, B. 2019. Causality for machine learning. Preprint at https://arxiv.org/abs/1911.10500 (2019).
Vivalt, E. 2015. Heterogeneous Treatment Effects in Impact Evaluation. American Economic Review: Papers & Proceedings 2015, 105(5): 467–470. http://dx.doi.org/10.1257/aer.p20151015
Wager, S. & Athey, S. 2018. Estimation and Inference of Heterogeneous Treatment Effects using Random Forests. Journal of the American Statistical Association, 113:523, 1228-1242. DOI: 10.1080/01621459.2017.1319839
Wasserman, L. and Roeder, K. 2009. High Dimensional Variable Selection. Annals of Statistics, 37(5A), 2178-2201. DOI: 10.1214/08-aos646
Zhu, S, et al. 2019. Causal Discovery with Reinforcement Learning. arXiv:1906.04477 [cs.LG]
Verity Powell, Computer Science
Reviewed and edited by T. Lawson and T. Burton
ABSTRACT: This report explores the construction of a 3x3 magic square of squares using 9 distinct square integers, a problem that has remained unsolved since Euler. We discuss the history and properties of magic squares, alongside the prizes offered and efforts made towards further research in the field. The report takes a computational approach to the problem, using a general formula to try to generate a complete 3x3 magic square of squares. The results of this computation are discussed and initial observations presented.
The combinatorial problem of placing 9 distinct positive integers into a 3x3 grid, such that each row, column, and diagonal sums to the same integer (the magic constant), dates back over 5,000 years to Ancient China (Eves, 2022). The Lo Shu Square is the first recorded magic square of order 3, and it shows that there is only one way of arranging the numbers 1-9 within a 3x3 grid (as shown in Figure 1) (Eves, 2022).
Magic squares have some interesting properties which will aid the reader's understanding of this report. Sallows notes that a magic square of order 3 has eight trivially distinct variants – its rotations and reflections – which together form one equivalence class (Sallows, 1997). As a result, these trivial variants are not regarded as unique magic squares. Magic squares can also undergo a linear transformation, such as the addition of a constant to each integer in the square, and remain magic (Kraitchik, 1953).
Whilst the order-3 magic square has existed for millennia, the problem of constructing a 3x3 magic square consisting of 9 distinct square integers has, as previously stated, remained unsolved since Euler (Boyer, 2020).
Posed by Martin LaBar in 1984 (LaBar, 1984), the question gained prominence when, in 1996, Gardner offered $100 for a solution (Gardner, 1996), and it has captured the attention of recreational mathematicians ever since. In 2010, Christian Boyer created multiple "enigmas" to incentivise progress on the smallest possible magic squares (Boyer, 2010). This report will focus on "Main enigma #1", outlined as follows:
“Main enigma #1 (€1000 and 1 bottle) Construct a 3x3 magic square using seven (or eight, or nine) distinct squared integers different from the only known example and of its rotations, symmetries, and k² multiples. Or prove that it is impossible.” (Boyer, 2010)
Given that no solution has been presented in spite of the large prize, it could be suggested that the required distinct square integers are incredibly large. What is true, however, is that there is no known proof of impossibility. Using a computational approach, this report will explore this intriguing problem further.
There are thousands of possible constructions of magic squares, each with a different magic constant. Therefore, when approaching the construction of magic squares, it is important to understand not only the definition of a magic square, but also the relationships between the numbers they contain. The 19th century mathematician Édouard Lucas devised a general formula for the construction of a magic square (Stewart, 1997).
Consequently, when aiming to construct a magic square of squares, we are looking for integers 'a' and 'b' satisfying (1), (2), (3) and (4) such that they form sequences of equidistant square numbers.
The 'enigma' posed by Boyer asks for any magic square that contains more than 6 square numbers and differs from the known example shown in Figure 3 in terms of rotations, symmetries, and k² multiples (Boyer, 2010). As a result, multiple approaches can be taken: generating squares containing a certain number of square numbers, or square numbers in specific positions within the square. The computational approach used in this report aims to generate a complete magic square of squares with 9 distinct integer square numbers. The only known solution involves a = 41496, b = 138600, and c = 180625; Figure 3 shows the relative positions of the square numbers within the known solution.
Figure 2. The general formula for the construction of a magic square where 0 < a < b < c – a, b ≠ 2a, magic constant = 3c.
This formula not only provides a foundation for constructing magic squares, but also allows us to view a magic square as a sequence of numbers. A point of interest in the formula is that, for a magic square of squares to exist, 'c' must be a square number. The formula can also be viewed as four arithmetic sequences of three equidistant square numbers, as shown below.
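The construction can be verified mechanically. The sketch below uses the standard Lucas layout (which may differ from the report's Figure 2 by a rotation or reflection; the class and method names are ours) and checks that every row, column, and diagonal of the square built from (a, b, c) sums to 3c.

```java
// Lucas's general formula for a 3x3 magic square: any integers with
// 0 < a < b < c - a and b != 2a yield a magic square with magic constant 3c.
public class LucasSquare {

    public static long[][] build(long a, long b, long c) {
        return new long[][] {
            { c + a,     c - a - b, c + b     },
            { c - a + b, c,         c + a - b },
            { c - b,     c + a + b, c - a     }
        };
    }

    // true iff every row, column, and both diagonals share one sum
    public static boolean isMagic(long[][] s) {
        long m = s[0][0] + s[0][1] + s[0][2];
        for (int i = 0; i < 3; i++) {
            if (s[i][0] + s[i][1] + s[i][2] != m) return false; // row i
            if (s[0][i] + s[1][i] + s[2][i] != m) return false; // column i
        }
        return s[0][0] + s[1][1] + s[2][2] == m   // main diagonal
            && s[0][2] + s[1][1] + s[2][0] == m;  // anti-diagonal
    }

    public static void main(String[] args) {
        // The values behind the known seven-square example:
        long[][] s = build(41496, 138600, 180625);
        System.out.println(isMagic(s));   // true
        System.out.println(3 * s[1][1]);  // magic constant 3c = 541875
    }
}
```

Because each line of the square sums to 3c algebraically, the check holds for any admissible (a, b, c); running it on the known values confirms the magic constant 3 × 180625 = 541875.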
For a = 41496, b = 138600, c = 180625, the four sequences are the arithmetic progressions centred on c with common differences a, b, a − b and a + b: (c − a, c, c + a), (c − b, c, c + b), (c + a − b, c, c − a + b) and (c − a − b, c, c + a + b).
In aiming to generate a magic square with 9 distinct integer square numbers, we will first assume that 'c' is itself a square number. We must then find integers 'a' and 'b' satisfying the sequences (1), (2), (3) and (4), as explained above. Further, knowing that 'a' and 'b' must fulfil the inequalities 0 < a < b < c − a and b ≠ 2a allows us to avoid unnecessary computation.
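The search strategy just described can be sketched as follows. This is an illustrative reimplementation, not the Appendix A code: c ranges over perfect squares, candidate values of a and b are distances from c to smaller perfect squares (so c, c − a, and c − b are square by construction), and each admissible square is scored by how many of its nine entries are perfect squares.

```java
// Illustrative reimplementation of the search described in the text
// (not the Appendix A code). Uses the standard Lucas layout.
public class SquareOfSquaresSearch {

    static boolean isPerfectSquare(long n) {
        if (n < 0) return false;
        long r = (long) Math.sqrt((double) n);
        while (r > 0 && r * r > n) r--;          // guard against floating-point drift
        while ((r + 1) * (r + 1) <= n) r++;
        return r * r == n;
    }

    // number of perfect-square entries in the Lucas square for (a, b, c)
    static int countSquareEntries(long a, long b, long c) {
        long[] entries = { c + a, c - a - b, c + b,
                           c - a + b, c, c + a - b,
                           c - b, c + a + b, c - a };
        int count = 0;
        for (long v : entries)
            if (isPerfectSquare(v)) count++;
        return count;
    }

    // best square-entry count over all c = r*r with r <= maxRoot, drawing
    // a = c - j*j and b = c - k*k from distances to smaller perfect squares
    public static int bestCount(int maxRoot) {
        int best = 0;
        for (long r = 3; r <= maxRoot; r++) {
            long c = r * r;
            for (long j = 1; j < r; j++) {
                long a = c - j * j;
                for (long k = 1; k < r; k++) {
                    long b = c - k * k;
                    // enforce 0 < a < b < c - a and b != 2a
                    if (a <= 0 || a >= b || b >= c - a || b == 2 * a) continue;
                    best = Math.max(best, countSquareEntries(a, b, c));
                }
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Every admissible candidate already has c, c - a, and c - b square,
        // so the best count is at least 3 once any candidate exists.
        System.out.println("best for c <= 30^2: " + bestCount(30));
    }
}
```

The nested loops make the cost grow rapidly with the bound, which is consistent with the report's later remarks on the limits of this style of enumeration.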
The computation for this report was done in Java, in a style intended to be understandable to readers with little coding experience. The computation has most likely been completed before; however, this initial exploration seeks to understand not only the complexity of the problem, but also the trends and limitations of computational approaches. The code written for this report can be seen in Appendix A.
The most significant limitation of the computation is that only distances to square numbers less than 'c' are used to compute 'a' and 'b'. The output therefore does not enumerate all possible configurations. Furthermore, in assuming that 'c' is square, we will not generate potential layouts for magic squares of squares with 7 or 8 square entries that are different to the only known solution. For this report, the largest square number considered is 5000², which is most likely not sufficiently large to find a complete magic square of squares, if one does exist.
The output lists, in increasing order, the values of 'a', 'b', and 'c' for which a magic square with more than 6 square numbers exists. Lines starting with "count" show how many solutions with 3 to 9 square numbers have been generated at a given point in the program. These points occur when a solution containing more than 6 square numbers is found, or when the first solution with a given number of square numbers is found.
The output of the computation does not find any values of 'a', 'b', and 'c' that produce a solution with more than 7 square numbers. As this data does not enumerate all possible values of 'a', 'b', and 'c', we cannot conclude whether such a solution exists where 'a', 'b' and 'c' are each less than 5001².
The graphs shown in Figure 4 are created using the count data collected from the output of the computation. Perhaps unsurprisingly, as the constraints of the problem increase, the number of solutions decreases.
In addition, as the constraints increase, more solutions at the previous highest number of squares must be found before the next level is reached. For example, 3 solutions with 5 squares are found before a solution with 6, but 58 solutions with 6 squares must be found before a solution with 7. These results are congruent with the intuition that if values of 'a', 'b' and 'c' exist such that a solution has more than 7 square numbers, then those values would be extremely large.
The values of 'a', 'b' and 'c' that produce solutions with 7 square numbers are plotted in Figure 5; the graph therefore shows the k² multiples of the only known solution.
Figure 5.

The table can be extended without running the program with larger bounds: to obtain a given row 'n' of the table, the following equations can be used:

a = 41496 · n², b = 138600 · n², c = 180625 · n²
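These equations follow from the linearity of the Lucas formula: multiplying a, b, and c by n² multiplies every entry of the square by n², and since k² · n² = (kn)², perfect-square entries remain perfect squares. A quick check of this scaling property (using the standard Lucas layout; the class and method names are ours):

```java
// Check that scaling (a, b, c) by n^2 scales every entry of the Lucas
// square by n^2, so square entries stay square and the known solution
// generates an infinite family of k^2 multiples.
public class KSquaredMultiples {

    static long[] entries(long a, long b, long c) {
        return new long[] { c + a, c - a - b, c + b,
                            c - a + b, c, c + a - b,
                            c - b, c + a + b, c - a };
    }

    public static boolean scalesByNSquared(long a, long b, long c, long n) {
        long[] base = entries(a, b, c);
        long[] scaled = entries(a * n * n, b * n * n, c * n * n);
        for (int i = 0; i < 9; i++)
            if (scaled[i] != base[i] * n * n) return false;
        return true;
    }

    public static void main(String[] args) {
        // rows n = 2 and n = 3 of the table, generated from the known solution
        System.out.println(scalesByNSquared(41496, 138600, 180625, 2)); // true
        System.out.println(scalesByNSquared(41496, 138600, 180625, 3)); // true
    }
}
```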
The computation in this report provides an interesting look at the problem of generating a magic square of squares. Whilst the approach has limitations – as outlined above – it has at the very least reached the edge of what we currently understand about configurations of magic squares, by generating the only known example together with its rotations, symmetries and k² multiples.
Overall, this report provides a good initial look at generating 3x3 magic squares of squares. In the future, the known limitations of the computation could be addressed to better enumerate solutions and to specifically target the discovery of a solution containing 8 square numbers.
It is important to note that the code within this report is written in a style that is easy to understand; if further work were undertaken, more consideration would be given to a programming language that could increase the speed of computation. For now, we still do not know whether a complete 3x3 magic square of squares exists, and it is likely that this enigma will continue to puzzle professional and recreational mathematicians alike.
Eves, A. (2022). 'The fascination of magic squares'. The Royal Institution [online]. Available at: https://www.rigb.org/explore-science/explore/blog/fascination-magicsquares. Accessed 9 May 2023.
LaBar, M. (1984). Problem 270, College Math. J. 15, p. 69.
Gardner, M. (1996). 'The magic of 3x3', Quantum, vol. 6, no. 3 (Jan-Feb 1996), pp. 24-26. Available at: https://static.nsta.org/pdfs/QuantumV6N3.pdf. Accessed 9 May 2023.
Boyer, C. (2010). 'What are the smallest possible magic squares?'. multimagie [online]. Available at: http://www.multimagie.com/English/MagicSquaresEnigmasE.pdf. Accessed 9 May 2023.
Sallows, L. (1997). 'The Lost Theorem', The Mathematical Intelligencer. Available at: https://www.leesallows.com/files/The_Lost_Theorem.pdf. Accessed 9 May 2023.
Kraitchik, M. (1953). ‘Magic Squares’. Mathematical Recreations (2nd ed.). New York: Dover Publications, Inc. pp. 142–192. ISBN 9780486201634. Accessed 9 May 2023
Boyer, C. (2020). 'Latest research on the "3x3 magic square of squares" problem'. multimagie [online]. Available at: http://www.multimagie.com/English/SquaresOfSquaresSearch.htm. Accessed 9 May 2023.
The Untold Side Effects of Common Drugs: Brain Cancer and Antiandrogens, Sarah Mackel
Are Brain Computer Interfaces a potential new treatment for tetraplegic patients? Revathi Ramachandran
Attempting to Treat Fibromyalgia, Shiksha Guru
Deep brain stimulation and Parkinson's Disease: An in-depth review into long-term patient outcomes, Diya Rajesh
To what extent does Lecanemab help treat Alzheimer's Disease? A scientific review of the effects and safety of Lecanemab, Ulyssa Fung
Reviewed and edited by L. Deen, T. Burton and S. Sandanatavan
ABSTRACT: This report is a literature review of the potential association between antiandrogen medications and the occurrence of benign tumours of the central nervous system. It aims to assess the risk of meningioma associated with the use of cyproterone acetate, a progestogen indicated for clinical hyperandrogenism, in different patient groups. Articles pertaining to this association, including case studies and epidemiological studies covering patients of all genders, were included. Data on asymptomatic or untreated meningiomas were not included due to difficulty of acquisition. Information was sourced from medical databases including PubMed, Google Scholar, the Cochrane Central Register of Controlled Trials (CENTRAL), JSTOR, and the Virtual Health Library. Risk of bias was assessed qualitatively by examining study designs, study populations, and potential author biases. Overall, a review of the existing literature examined the biological and epidemiological bases for an association between the pathology and the drug. Higher dosage (above 25mg/day), treatment length above 1 year, and/or male gender assignment at birth were correlated with a higher risk of meningioma in comparison to other groups. This review was completed independently, with no sources of funding.
First marketed in the 1970s, cyproterone acetate (CA), sold under the brand name Androcur® in France, is a synthetic steroidal progestogen and anti-gonadotropin used to suppress sex-hormone levels. It is prescribed under various names in doses of 1, 2, 10, 50, and 100mg per day, and is used to manage androgen-dependent conditions such as prostate cancer, hirsutism, acne, hyperandrogenism, paraphilias, and hypersexuality (Weil, et al., 2021). It is also used at low doses in hormonal birth control.
Alternatively, cyproterone acetate can be used as hormone replacement therapy, both in menopausal cisgender women and in transgender women of any age who wish to use feminising hormones. In hormone-replacement therapy (HRT) for transgender women, CA is either taken orally at a dose of 10-100mg/day or by intramuscular injection at a dosage of 300mg/month (Winckler-Crepaz, et al., 2017; Urdl, 2009).
Route of administration is typically not a major concern, due to the drug's near-total oral bioavailability and long elimination half-life (Kuhl, 2005). The drug is also occasionally used as a puberty blocker, although it has largely been replaced by GnRH modulators. Common side effects of high-dose CA treatment include anaemia in patients treated for prostate cancer, hormonal deregulation, and corticosteroid-like effects. However, one side-effect in particular has attracted clinical interest in the decades since the drug's release: the development of meningiomas (benign brain tumours), seemingly associated with long-term high-dose treatment with CA. Since this discovery, CA has been controversially associated with an increased risk of meningioma, especially with long-term exposure.
Meningiomas are the most common benign intracranial tumour of the central nervous system (Kalamarides and Peyre, 2017), accounting for 30% of primary brain tumours and 36.8% of tumours recorded in the Central Brain Tumor Registry of the United States (Champeaux-Depond, et al., 2021), which makes them the most frequently occurring brain tumour in patients over 35 (Bernat, et al., 2015). Meningiomas are typically benign neoplasms originating from the meningothelial or cap cells of the arachnoid, and they grow slowly. However, despite their usual non-malignancy, their location around the nervous system can cause serious issues. In 2011, meningiomas had an annual incidence of around 3-8 per 100,000 person-years (Gil, et al., 2011). They occur more often in women than in men, and their incidence increases with age. The vast majority (approximately 90%) of meningiomas are benign, or Grade I, and typically have a good outcome (Champeaux-Depond, et al., 2021). Malignant forms account for only 1-3% of cases, but are aggressive tumours with a poor prognosis; however, without definitive prognostic markers for grading,
the WHO classification is still based on histological criteria, which can be prone to bias. Over the past few decades, there has been an observed increase in the incidence of meningiomas, most pronounced among women (Claus, et al., 2013). The aetiology of meningiomas is largely unknown, although hormones have been suggested to play a role (Cea-Soriano, et al., 2012), and the only unequivocal risk factor discovered to date is ionising radiation (Champeaux-Depond, et al., 2021; Gil, et al., 2011). One study cites "increased use of postmenopausal hormone replacement therapy (HRT)" as the cause of this perceived increase in incidence (Klaeboe, et al., 2005), but also notes a lack of justifying evidence owing to past research inconsistencies and conflicting results in previous studies (Fan, et al., 2013). The theory that meningioma growth may be influenced by sex hormones rests on their known hormone-sensitivity and widespread expression of progesterone receptors, found in 88% of meningiomas (Champeaux-Depond, et al., 2021). Whether this is true of both endogenous and exogenous sex hormones remains, however, up for debate.
To date, cyproterone acetate is still used widely throughout the world, with the highest usage concentrated in Europe (where it has been approved for use since the 1970s), Canada, and Mexico, but ostensibly not in Japan or the United States, owing to those countries' concerns about liver toxicity (Kalamarides and Peyre, 2017). The regulatory pharmaceutical bodies of Asian and Anglo-Saxon countries largely disapprove of the use of high-dose CA in gynaecology or dermatology, and CA is not approved at all for use in contraception (Weil, et al., 2021). The geographical distribution of CA-related meningiomas might reflect these prescribing practices: in countries such as Belgium, Cyprus, Germany, Greece, Luxembourg, the Netherlands, and Portugal, a propensity towards lower-dose prescriptions of the drug might correlate with a lower risk of related meningiomas (Weil, et al., 2021). In France, which accounts for 60% of all sales of 50mg CA (Weil, et al., 2021), and in many Latin American countries (especially Argentina and Brazil), the burden of CA-induced meningioma may be higher. However, to assume from these statistics alone would be to infer causation from mere correlation. Existing research on the role of cyproterone acetate in meningioma growth is often contradictory: while authors such as Adams (1990) report that the medication inhibits neoplastic growth, other case series on long-term usage of CA yielded opposing results, showing increased incidence of meningioma with the drug. In the years since these studies, the latter conclusions have been strengthened by further work (Nota, et al., 2018; Raj, et al., 2018). Thus, the relationship between the incidence of meningioma and exogenous hormone use has been discussed by scientific councils for decades, with the conclusion that "the link [between the two] is therefore probable, although the risk level is difficult to determine" (Plu-Bureau, 2019).
Searches were conducted for articles published over the period 2000-2022 across the following databases: PubMed, Google Scholar, the Cochrane Central Register of Controlled Trials (CENTRAL), JSTOR, and the Virtual Health Library. The following terms were used to generate a search: Meningioma, Intracranial Meningioma, Cyproterone Acetate, Androcur,
Quality Assurance:
Due to the nature of the data available, methods to ensure a high-quality review of existing literature were limited: excluding case-by-case reports or studies without control groups would exclude many of the early reports, which are vital to a complete understanding of the interactions between exogenous hormones and cancer cells. The same roadblock applies to restricting a potential study to only peer-reviewed articles. However, data
The finer biomolecular mechanisms underlying the actions of sex hormones in oncology remain unclear, and the evidence presented in this introduction can support only correlation, not causation. Worldwide, the extent and strength of the association between the two is still heavily debated. So, what arguments might support the hypothesis that cyproterone acetate precipitates meningiomas, and how do different patient factors affect this risk?
Most meningiomas arising after prolonged exposure to high-dose CA grow on the anterior and middle portions of the skull base, with a 47-fold excess risk of growth on the anterior skull base in particular (Bernat, et al., 2015). This involves the spheno-orbital region of the skull, which is linked to the eyes, and is consistent with the theory that hormones influence meningioma growth, as several publications have confirmed a higher density of progesterone receptors in the anterior skull base (Weil, et al., 2021). Incidence in women under age 20 is relatively low, estimated at about 1.4 per 1 million CA users, and more globally at around 4-8 per 100,000 patients (Plu-Bureau, 2019). The European Medicines Agency estimates that this side-effect may occur in 1-10 out of every 10,000 CA users, depending on dosage and treatment duration (European Medicines Agency).
Biological arguments that may support this association include the predominance of meningiomas in female patients, the increased risk of tumour enlargement during pregnancy (with subsequent tumour shrinkage after delivery), and the increased risk of meningioma in patients with hormone-associated conditions such as uterine fibroids, breast cancer, and endometriosis (Bernat, et al., 2015). This contradicts an earlier finding by Cea-Soriano, et al., who report "no significant association between meningioma and prostate, breast, or genital cancers" (Cea-Soriano, et al., 2012). Admittedly, there are many potential confounding factors in cases involving pregnancy, which is a hormonally complex process with many different variables.
However, Lusis et al.'s study of 17 meningiomas in pregnant patients found evidence to support the role of progesterone in tumoral growth and to reject the idea that non-hormone-linked rapid cell division may be the cause. In this vein, CA treatment would mimic the effects of progesterone during pregnancy and cause tumoral size increase – but does this account for the growth of entirely new tumours? According to the European Medicines Agency, long-term administration of CA at doses above 25mg "could at least be related causally to the occurrence of meningiomas", and thus treatment should be stopped in women with a previous meningioma (Kalamarides and Peyre, 2017). Many agencies now explicitly state that cyproterone acetate is contraindicated in, among other conditions, patients with a history of meningiomas. To summarise so far: though the global body of research on CA and meningioma has historically conflicted in its results, several factors studied in the last decade suggest that a positive association between CA usage and meningioma growth may be supported. These factors include the increased expression of progesterone receptors in meningiomas, the growth of tumours in specific areas (the anterior base of the skull), the dose-effect relationship, and the observed tumoral reduction after cessation of CA treatment. There is currently even a health-based Temporary Specialised Specific Committee (CSST) focused on the subject in France, which conducted an epidemiological study based on
Medical Journal identifies a crude relative risk ratio of 5.2 and an adjusted hazard ratio of 6.6 for the development of meningioma following cyproterone acetate usage; this data arises from 23.8 meningiomas per 100,000 person-years in CA users, against 4.5 per 100,000 person-years in the control group (Weill, et al., 2021). Overall, their study finds a positive dose-effect relation for meningiomas in patients using CA, with lower doses (cumulative dose less than 12g) having lower hazard ratios, which rapidly increase with cumulative dosage.
According to the studies selected, there is also an association between patient gender, age, and the risk of meningioma following CA use. Firstly, it is important to note that Gil et al. report that, even without CA usage, meningioma incidence is higher in women than in men and increases with age (Gil et al., 2011). This is important to account for when calculating or comparing CA-associated meningioma risk in different patient groups, as there are different baseline risk levels for different groups. The association between CA usage, meningioma risk, and increased age is supported by Weill's work, which states that this risk is especially present in patients over the age of 65. In
drug, but note that a small study size (3 cases) complicates statistical analysis, concluding that further investigation is required. A clinically important note on CA usage is that the majority of cases so far have occurred in transgender women using the drug as part of a hormone replacement therapy (HRT) regime, making these patients a higher-risk population for CA-associated meningioma (Mancini, et al., 2018). Specifically, according to the British Medical Journal, incidence in transgender patients was reported at 20.7 per 100,000 person-years (Weill, et al., 2021). In the examined studies, there is a noted increased risk of CA-related meningioma in transgender women and cisgender men as compared to cisgender women. However, the magnitude of the risk observed with hormone replacement therapy, which is maintained at a relatively low dose of CA, is much lower than that observed with higher-dose cyproterone acetate in multiple studies.
Population-based cohort studies such as that conducted by Gil, et al. support a positive association between CA use and meningioma risk - though this study finds that the link between the two is only significant in patients who have been exposed to CA for more than a year. The risk of meningioma in CA users who have used the drug for 10-30 years is approximately four patients per 1000 person-years (Weill, et al., 2021). Conversely, Cea-Soriano et al. report that risk in male patients specifically "was only observed with high-dose, short-term (<1 year) therapy" (Cea-Soriano, et al., 2012). At the time of authorship, it is difficult to determine whether this contradiction is due to biomolecular interactions in male patients, data bias, or differing study designs.
According to Weill, et al., one existing study concluded that, though there was an excess risk of meningioma in patients assigned male at birth (AMAB) with CA prescriptions (odds ratio of 3.3, 95% confidence interval 1.0 to 10.6 (Weill, et al., 2021)), long-term low-dose CA did not present a risk of the pathology in patients assigned female at birth (AFAB). Furthermore, in another study, low-dose CA-containing medications such as oral contraceptives were concluded not to pose a significant risk of meningioma (Cea-Soriano, et al., 2012). It is important to note that of the AMAB patients, only four presented with meningioma; this small study size may contribute to bias or an incomplete understanding of the pathology.
These findings are corroborated by Gil, et al., who draw the same conclusions. The risk of CA-related meningioma appears to be greatest in AFAB patients with a cumulative dose of more than 60g (Weill, et al., 2021).
Meningiomas in patients taking CA are also documented to reduce in size after CA discontinuation (Weill, et al., 2021; Bernat, et al., 2015; Kalamarides and Peyre, 2017), making discontinuation the first-line treatment for meningiomas developed after prescription of CA, followed by neurosurgery. However, the risk of meningioma in these patients does not return to that of control patients; instead, risk in exposed groups is 180% higher (95% confidence interval 100% to 320%) than in non-exposed patients (Weill et al., 2021), and is even 420% higher (95% confidence interval 220% to 800%) in patients with a cumulative CA dose of 12g or higher before discontinuation (Weill et al., 2021).
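The "% higher" phrasing used here maps directly onto risk ratios: an X% higher risk corresponds to a ratio of 1 + X/100 versus the comparator group. A small sketch of the conversion, using the figures above:

```python
def percent_higher_to_ratio(percent_higher):
    """Express an 'X% higher risk' statement as a risk ratio."""
    return 1 + percent_higher / 100

def ratio_to_percent_higher(ratio):
    """The inverse conversion: a risk ratio as '% higher risk'."""
    return (ratio - 1) * 100

# Figures from Weill et al. (2021) for risk after CA discontinuation:
# 180% higher risk is a ratio of 2.8; 420% higher (cumulative dose
# of 12g or more) is a ratio of 5.2, both relative to non-exposed patients.
print(percent_higher_to_ratio(180), percent_higher_to_ratio(420))
```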
This review has discussed the possible factors contributing to an association between cyproterone acetate use and meningioma, including biochemical causes and observed precedents in clinical practice. Furthermore, this paper has examined the relationship between cyproterone acetate dosage, treatment length, and meningioma growth in various patient groups. The report’s findings support the hypothesis that increased dosage and treatment length of cyproterone acetate increase the incidence of meningioma in patients of all genders, with increased risk in transitioning and ageing patients.
With this information analysed, the question now remains as to how to incorporate
these findings into current clinical practice. Due to the rapid spread of colloquial clinical information across hospital settings, many clinical doctors are already aware of the purported negative effects of CA. This is especially true in oncological, sexual-health, or neurosurgical specialties, where these cases are more commonly observed. Recent legal and regulatory changes have also posited that CA usage should be more tightly controlled and that prescribers should weigh the potential positive outcomes against side effects. Recommended changes in CA dosage and use with respect to meningioma risk vary according to the population the drug is used in and its therapeutic use in those patients. CA for feminising hormone therapy is not dose-dependent: doses of 10-50mg/day CA all result in full androgen suppression (Nota, et al., 2018; Raj, et al., 2018). Thus, lower doses may help to prevent or slow the appearance of the negative effects associated with usage.
Though the dose-effect relationship in other therapeutic uses of CA is less studied, this guideline may be useful for cisgender women taking CA as well. For patients taking the drug long-term, repeated regular MRIs should be considered to monitor for meningiomas (Bernat, et al., 2015). Furthermore, care should be taken when considering a change of medication: progestogens such as nomegestrol acetate, megestrol acetate, chlormadinone acetate, and medroxyprogesterone acetate have also been observationally associated with the development of benign brain tumours (Gruber & Huber, 2003). In light of these studies, changes have also been made to the approved indications for cyproterone acetate: in 2020, the European Medicines Agency recommended that pharmaceutical regimens containing more than 10mg/day of CA should be avoided as a first-line treatment for androgen-dependent conditions and be started at lower doses. In cases where higher-dose medication is effective, the dosage should then progressively be reduced to the lowest effective dose. For these conditions, no such risk applies to medicines containing only 1-2mg/day of CA, but the European Medicines Agency states that they should still not be used in patients with a history of brain tumours. In conditions like prostate cancer, prescribing practices remain unchanged. Conversely, in cases where patients have recently been diagnosed with a meningioma after CA treatment,
rapid medication withdrawal is imperative, and surgical resection can be considered per the advice of an oncologist or neurosurgeon. After discontinuation of CA treatment, meningiomas have been known to decrease in size or stabilise, which may inform treatment provisions (Figure 1; Weill, et al., 2021; Bernat, et al., 2015; Kalamarides and Peyre, 2017) and reduce the urgency for surgical resection. Where possible, treatment for these benign tumours should remain conservative, with observation via repeated MRIs, unless neoplasm-related neurological effects (e.g. visual deficits) worsen rapidly and consistently after diagnosis.
Though wider epidemiological studies have appeared over the past decade, the basis of this scientific inquiry lies in non-analytic studies, including expert opinions and individual case notes. These types of studies fall in the lower categories of the SIGN levels of evidence, denoting that research on the subject still needs to be completed. Ideally, this would involve meta-analyses and systematic reviews with low risks of bias,
examined and accounted for; ergo, meta-analyses, case-control studies from different countries, and more conclusive epidemiological data are required. In the continuation of this investigation, international data is imperative - though this exists in some measure, with non-analytic case series spanning from Latin America (Brazil and Argentina) to France and the UK, a wider range of data would be highly beneficial, especially considering the different prescribing practices of different countries. Though the format of most reported cases so far has functioned efficiently as an early indicator
do not mention stratifying for co-prescription of other hormonal medications such as oestrogens, or their role, or do not account for the indication for prescription. In this vein, some do not account for previous medication history or family histories of illness. Each study type has its own limitations.
and consider alternatives with fewer potential side effects. In cases where extended CA use is unavoidable, patients should be warned about the potential side effects, and additional serial MRI screening for meningioma should be undertaken at the prescriber's discretion. This is especially recommended in the at-risk groups identified in the results above - namely, older patients, patients with comorbid hormonal conditions, male patients, and transgender women.
restating a question already posed by Plu-Bureau (2019): if the incidence of meningioma is less pronounced in countries with lower starting doses of CA, is the standard dose of 50mg/day in France appropriate or excessive? Should this dosage be lowered to match that of surrounding countries? Furthermore, though CA has the most extensive side-effect literature of the antiandrogen drugs, it may be worthwhile to investigate whether this relationship exists in other drugs with similar mechanisms of action, such as chlormadinone and nomegestrol, which are similarly suspected to be associated with meningioma (Champeaux-Depond, et al., 2021).
In the case of a positive association, it could also be interesting to investigate whether the strength of this association between the medication and side effects is the same in different drugs, or whether the same groups are affected to the same extent.
that they have had no sources of support that could influence or change the work reported in this paper. This work was completed entirely independently, without external aid.
Declaration of Interest: The author declares that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Adams, E.F., U.M. Schrell, R. Fahlbusch and P. Thierauf (1990) "Hormonal dependency of cerebral meningiomas. Part 2: In vitro effect of steroids, bromocriptine, and epidermal growth factor on growth of meningiomas." J Neurosurg 73(5): 750-755. doi: 10.3171/jns.1990.73.5.0750. PMID: 2213165.
Bergoglio, M.T., M. Gómez-Balaguer, E. Almonacid Folch, F. Hurtado Murillo and A. Hernández-Mijares (2013) "Symptomatic meningioma induced by cross-sex hormone treatment in a male-to-female transsexual." Endocrinología y Nutrición 60(5): 264-267.
Bernat, A.L., K. Oyama, S. Hamdi, E. Mandonnet, D. Vexiau, M. Pocard, B. George and S. Froelich (2015) "Growth stabilization and regression of meningiomas after discontinuation of cyproterone acetate: a case series of 12 patients." Acta Neurochirurgica 157(10): 1741-1746.
Cea Soriano, L., A. Asiimwe and L.A. García Rodriguez (2017) "Prescribing of cyproterone acetate/ethinylestradiol in UK general practice: a retrospective descriptive study using The Health Improvement Network." Contraception 95(3): 299-305.
Cea-Soriano, L., T. Blenk, M.-A. Wallander and L.A.G. Rodríguez (2012) "Hormonal therapies and meningioma: Is there a link?" Cancer Epidemiology 36(2): 198-205.
Cebula, H., T.Q. Pham, P. Boyer and S. Froelich (2010) "Regression of meningiomas after discontinuation of cyproterone acetate in a transsexual patient." Acta Neurochirurgica 152(11): 1955-1956.
Champeaux-Depond, C., J. Weller, S. Froelich and A. Sartor (2021) "Cyproterone acetate and meningioma: a nationwide population based study." J Neurooncol 151(2): 331-338.
Claus, E.B., L. Calvocoressi, M.L. Bondy, M. Wrensch, J.L. Wiemels and J.M. Schildkraut (2013) "Exogenous hormone use, reproductive factors, and risk of intracranial meningioma in females." J Neurosurg 118(3): 649-656.
Coordination Group for Mutual Recognition and Decentralised Procedures - Human (2020) "Cyproterone-containing medicinal products." European Medicines Agency: Science, Medicines, Health. https://www.ema.europa.eu/en/medicines/human/referrals/cyproterone-containing-medicinal-products
Fan, Z.X., J. Shen, Y.Y. Wu, H. Yu, Y. Zhu and R.Y. Zhan (2013) "Hormone replacement therapy and risk of meningioma in women: a meta-analysis." Cancer Causes & Control 24(8): 1517-1525.
Gil, M., B. Oliva, J. Timoner, M.A. Maciá, V. Bryant and F.J. de Abajo (2011) "Risk of meningioma among users of high doses of cyproterone acetate as compared with the general population: evidence from a population-based cohort study." British Journal of Clinical Pharmacology 72(6): 965-968.
Gonçalves, A.M.G., P. Page, F. Domigo, J.F. Méder and C. Oppenheim (2010) "Abrupt Regression of a Meningioma after Discontinuation of Cyproterone Treatment." American Journal of Neuroradiology 31(8): 1504-1505. doi: 10.3174/ajnr.A1978.
Gruber, C.J. and J.C. Huber (2003) "Differential effects of progestins on the brain." Maturitas 46 Suppl 1: S71-S75. doi: 10.1016/j.maturitas.2003.09.021.
Kalamarides, M. and M. Peyre (2017) "Dramatic Shrinkage with Reduced Vascularization of Large Meningiomas After Cessation of Progestin Treatment." World Neurosurgery 101: 814.e7-814.e10.
Klaeboe, L., S. Lonn, D. Scheie, A. Auvinen, H.C. Christensen, M. Feychting, C. Johansen, T. Salminen and T. Tynes (2005) "Incidence of intracranial meningiomas in Denmark, Finland, Norway and Sweden, 1968-1997." Int J Cancer 117(6): 996-1001.
Kuhl, H. (2005) "Pharmacology of estrogens and progestogens: influence of different routes of administration." Climacteric 8(sup1): 3-63. doi: 10.1080/13697130500148875.
Mancini, I., A. Rotilio, I. Coati, R. Seracchioli, V. Martelli and M.C. Meriggiola (2018) "Presentation of a meningioma in a transwoman after nine years of cyproterone acetate and estradiol intake: case report and literature review." Gynecol Endocrinol 34(6): 456-459. doi: 10.1080/09513590.2017.1395839.
Nota, N.M., C.M. Wiepjes, C. de Blok, L.J. Gooren, S.M. Peerdeman, B. Kreukels and M. den Heijer (2018) "The occurrence of benign brain tumours in transgender individuals during cross-sex hormone treatment." Brain 141(7): 2047-2054. doi: 10.1093/brain/awy108.
Plu-Bureau, G. (2019) "Faut-il rayer l'acétate de cyprotérone de nos prescriptions ?" ["Should cyproterone acetate be struck from our prescriptions?"] Gynécologie Obstétrique Fertilité & Sénologie 47(12): 823-824.
Raj, R., M. Korja, P. Koroknay-Pál and M. Niemelä (2018) "Multiple meningiomas in two male-to-female transsexual patients with hormone replacement therapy: A report of two cases and a brief literature review." Surg Neurol Int 9: 109. doi: 10.4103/sni.sni_22_18.
Ung, T.H., A. Yang, M. Aref et al. (2019) "Preservation of olfaction in anterior midline skull base meningiomas: a comprehensive approach." Acta Neurochir 161: 729-735. https://doi.org/10.1007/s00701-019-03821-8
Urdl, W. (2009) "Behandlungsgrundsätze bei Transsexualität" ["Principles of treatment for transsexuality"]. Gynäkologische Endokrinologie 7: 153-160. https://doi.org/10.1007/s10304-009-0314-9
Weill, A., P. Nguyen, M. Labidi, B. Cadier, T. Passeri, L. Duranteau, A.L. Bernat, I. Yoldjian, S. Fontanel, S. Froelich and J. Coste (2021) "Use of high dose cyproterone acetate and risk of intracranial meningioma in women: cohort study." British Medical Journal 372: n37.
Winkler-Crepaz, K., A. Müller, B. Böttcher et al. (2017) "Hormonbehandlung bei Transgenderpatienten" ["Hormone treatment in transgender patients"]. Gynäkologische Endokrinologie 15: 39-42. https://doi.org/10.1007/s10304-016-0116-9
Revathi Ramachandran, Physical Therapy Neuroprosthetics
Reviewed and edited by I. Bajra, I. Kagoo and T. Lawson
ABSTRACT: In this review article, I explore spinal cord injuries (SCIs) and brain-computer interfaces (BCIs), considering BCIs as an alternative treatment option and weighing the arguments for and against this technology.
the level of injury (2). The three main types of complete SCI are tetraplegia, paraplegia, and triplegia; in this report, I will be focusing on patients with complete tetraplegia. Tetraplegia is caused by injury to the cervical spine of the neck and results in complete or incomplete paralysis below the level of injury. This leads to the impairment or loss of motor and sensory function of the arms, legs, pelvic organs, and trunk of the body (8). For layman's reference, tetraplegia and quadriplegia are the same condition (4).
SCIs such as tetraplegia often occur in a sudden accident, e.g., in a car crash, and as a result, emergency treatment is needed to minimise the damage that has resulted from the injury.
This includes surgery to remove bone fragments or fuse broken vertebrae together, traction to stabilise the spine, and administering drugs such as methylprednisolone (Medrol). Medrol is a steroid medication that should be given within 8 hours of the injury - for some patients, this improves recovery, as it works by reducing damage to nerve cells and decreasing inflammation near the site of injury (6). However, Medrol has limited efficacy, and a study conducted in Korea found that it led to an increased length of hospitalisation and higher rates of complications such as pneumonia, GI bleeding, and UTIs (7).
Although surgery is an excellent option for immediate treatment of sudden SCIs, its effects are limited, as the damage to the spinal cord has already been done and nervous tissue is not regenerative. Surgery can improve the state the patient will ultimately end up in by attempting to minimise long-term spinal cord damage, but it does not provide a solution to their reduced quality of life after the injury. Many traditional assistive treatments, such as physical therapy, also require some level of voluntary muscle control. Physical therapy is a useful method of improving muscle function and strength in patients with other neurological conditions, such as ALS (amyotrophic lateral sclerosis), but unfortunately this is not an option for those with complete tetraplegia, and as a result we must look for additional treatments.
One such treatment is brain-computer interfaces (BCIs). These are devices that detect brain signals, then analyse and translate them to produce desired actions straight from the patient's thoughts. There are a few types of BCIs currently in use, such as the scalp electroencephalograph (EEG), intracortical BCIs, and ECoG-based (electrocorticograph) BCIs. The key component of any BCI is its neural decoder, which transforms electrical signals in the brain to create an effect on an external device - this is known as BCI mapping. There are two forms of BCI mapping, the first of which is decoder calibration, where brain activity and the corresponding external device behaviour are assessed to calculate a decoding weight.
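The calibration step described above can be illustrated with a deliberately simplified, one-dimensional sketch: paired observations of a neural feature and the intended device behaviour are used to fit a single decoding weight by least squares. All numbers and names here are hypothetical illustrations, not any specific device's algorithm:

```python
def calibrate_weight(neural_feature, intended_output):
    """Least-squares fit of one decoding weight w, chosen so that
    w * neural_feature best approximates the intended device output."""
    numerator = sum(x * y for x, y in zip(neural_feature, intended_output))
    denominator = sum(x * x for x in neural_feature)
    return numerator / denominator

def decode(weight, neural_sample):
    """Translate a new neural sample into a device command."""
    return weight * neural_sample

# Hypothetical calibration session: feature values recorded while the
# patient attempts movements with known intended velocities.
feature = [0.0, 1.0, 2.0, 3.0, 4.0]
intended = [0.0, 2.1, 3.9, 6.2, 7.8]

w = calibrate_weight(feature, intended)
print(round(w, 2))               # fitted decoding weight
print(round(decode(w, 2.5), 1))  # command produced for a new sample
```

Real decoders fit many weights over many channels and recalibrate as signals drift, but the principle is the same mapping from measured activity to intended action.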
Scalp EEGs are among the most widely researched BCI modalities. They have the advantage of being safe, non-invasive and inexpensive to access, and they expose the patient to very minimal risk, as the electrodes are placed externally and do not require a craniotomy (surgical opening of the skull) (5). Scalp EEGs have been widely used by epileptologists to evaluate the electrical signals of patients with epilepsy whilst they are experiencing seizures. During clinical studies, epileptologists have found some limitations of scalp EEGs. Because the recording device is external, scalp EEGs are unable to pick up all the electrical signals in the brain, as the signals are attenuated while passing through the many biological filters (9) between the neural tissue and the scalp. As a result, many signals vital to the evaluation of the patient's illness are not picked up - these signals are, however, captured by the internal electrodes of the alternative BCIs detailed later in this article. Another issue with scalp EEGs is that they pick up electrical signals from other biological structures on the cranium, such as the eyes, tongue, and facial muscles. This interference reduces the accuracy of the recordings and of clinicians' evaluations.
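To a first approximation, the attenuation described above behaves like low-pass filtering: higher-frequency components of the cortical signal are disproportionately weakened on their way to the scalp. A toy single-pole filter illustrates the idea (the 30 Hz cutoff below is purely illustrative, not a physiological figure):

```python
import math

def lowpass_gain(freq_hz, cutoff_hz):
    """Magnitude response of a first-order low-pass filter."""
    return 1 / math.sqrt(1 + (freq_hz / cutoff_hz) ** 2)

# Compare how much of a 10 Hz (alpha-band) component and an 80 Hz
# (high-gamma) component survive an illustrative 30 Hz cutoff.
for f_hz in (10, 80):
    print(f_hz, "Hz ->", round(lowpass_gain(f_hz, 30), 2))
```

The slow component passes almost untouched while the fast one loses most of its amplitude, which is why high-frequency activity useful for decoding is largely invisible to scalp recordings.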
As a result, intracortical BCIs are being researched further, as this type of BCI can detect signals that scalp EEGs cannot. Intracortical BCIs comprise small arrays of around 96 microelectrodes that can be implanted in the cortex of the brain to detect local field potentials produced by the surrounding neurons. They record with high spatial and temporal resolution from the individual's cortex, which allows for greater accuracy of readings and therefore improved clarity when analysing the signals from the patient. Usually, the microelectrode arrays are embedded in the motor cortex, as this is the area of the brain that controls movement and so would theoretically provide control of prosthetic limbs, though they can also be implanted in other areas.
Similarly to intracortical BCIs, there are ECoG BCIs, which use electrocorticography to record brain activity. This is the intra-operative recording of cortical potentials (11) through grid or strip electrodes on the exposed cortical brain surface, or through stereotactic depth macroelectrodes. ECoG electrode arrays also record intracranially, but have an advantage over intracortical microarrays in that they record over larger areas than the local neural field detected by the microarrays (8).
Both intracortical and ECoG BCIs have the advantage of being more receptive to electrical signals than scalp EEGs, as they are implanted directly into the brain, but this accuracy comes with the drawback of craniotomy and neurosurgical implantation. Craniotomies are very invasive procedures and require specialist neurosurgeons and expensive technology, which can lead to high healthcare costs either for the individual patient or for the organisation paying (8). We also do not know the long-term effects of permanent electrodes in the brain, or the efficacy of recording brain signals for extended periods of time - the lasting stability of some of these technologies is currently unknown. With further research and investment in BCI technology, these obstacles can be overcome, as with any newly developed treatment, whether drug or device. We already know how valuable access to unfiltered brain signals is, as it can be applied in clinical practice to enable tetraplegic patients to overcome their physical limitations.
A notable company in the field of BCIs is Synchron, an endovascular BCI company that, as of June 2022, has been granted FDA approval to enroll human patients in US clinical trials of its implanted BCI technology (13).
Endovascular procedures are ones that travel through the body's vasculature to the desired area. Because of this BCI's design - named the Stentrode - patients receiving this treatment do not have to undergo a craniotomy, one of the greatest drawbacks of implanted BCIs. By removing the need for open surgery, the process of obtaining a BCI becomes much safer and carries less risk of complications.
Synchron had previously conducted the SWITCH trial on 4 patients in Australia; by examining the long-term effects of the Stentrode over 12 months, the trial demonstrated it to be safe and reliable for permanent usage. A follow-up trial, COMMAND, was approved in New York in the US (15). The SWITCH trial is now complete, but the COMMAND trial has just begun and so is still ongoing.
The Stentrode has two elements: the node and the axon. The node consists of all the internal structures, and the axon consists of the external structures. The Stentrode is implanted into the motor cortex of the brain via the external jugular vein, and a receiver-transmitter unit placed under the skin of the chest records brain signals from the motor cortex and wirelessly transmits the raw data to the external device - the axon. The axon detects the raw brain signals from the node and digitises them so that they are recognised by Bluetooth devices. This enables Synchron's user interface, brain.io, to allow patients to conduct actions with just their thoughts (14).
This enables tetraplegic patients to text and send messages to people with their mind, which is an incredible application of BCIs that allows people to regain autonomy they had lost or never had.
Figure: (a) the pre-implant projection cerebral venography roadmap of the external jugular vein and the joining of the proximal sinuses to it; (b) blue arrows show the lumen diameter, and red arrows the cortical veins; (c) the Stentrode self-expanding upon deployment from the catheter; (d) a post-implantation plain X-ray of the Stentrode in the superior sagittal sinus, indicated by the yellow arrow; (e) a post-implant superior projection contrast study of the Stentrode (16).
The practice of medicine relies on four pillars of ethics: justice, non-maleficence, beneficence and autonomy. The most significant of these pillars, with respect to BCIs, is autonomy, as technologies such as the Stentrode by Synchron allow tetraplegic patients to regain the ability to perform actions that they no longer had the means to perform on their own. This has a vast impact on patient wellbeing, and greatly adds to the quality of life that patients have after experiencing traumatic injuries.
Since the technology is new, it will most likely be relatively expensive for patients to obtain, thus creating a socio-economic divide in the healthcare industry; this is not congruent with the pillar of justice, as the technology will likely not be readily available to the public at first. This is, unfortunately, not a new issue: it is already seen in the rising prevalence of privatised care. The main difference between privatised healthcare and the services provided by the NHS is the speed at which the amenities are accessed; similarly, the difference between the wealthy having access to BCIs earlier than those who may one day receive them through the NHS is the speed at which they receive the treatment. As this is not a new issue, a potential divide in the availability of healthcare should not be a reason to stop the development of BCIs. With time, as research and development progress, more cost-effective options will become available, and more and more people will be able to gain access to this revolutionary technology.
Some may argue that invasive procedures like this pose great risks, including surgical complications such as infections, incorrect implantation, and technological malfunctions. However, the benefits offered by technology as innovative as BCIs vastly outweigh these negatives. With time, as the technology develops further, there will be ways to mitigate these risks and make BCIs as safe and accessible as possible. Patients are also made aware of the risks before they undergo BCI implantation, whether endovascular or surgical.
In 2015, a paraplegic patient was recruited to trial the feasibility of walking using a scalp EEG (electroencephalograph) BCI. This patient was trained in how to use a BCI and underwent muscle reconditioning so that they would be able to control their legs more easily. Although this patient did not have a complete SCI, the trial showed that BCIs could nevertheless be utilised to help patients regain their autonomy: the patient demonstrated the ability to use the BCI in 30 real-time overground and suspended off-the-ground walking tests over the course of 19 weeks. The scientists conducting the study argued that this justifies the development of invasive BCIs that could help patients with complete paralysis, and that the approach could also be developed for non-invasive use
in patients with incomplete SCIs (19). A survey of patients with SCIs indicated support for the development of BCIs: respondents, whether more severely impaired or retaining higher function after their injuries, were all able to use a computer to some degree and were interested in BCI technology, with restoring arm function rated as a high priority by those who lack it. Studies such as this suggest that, whatever ethical questions may arise from this technology, BCIs can offer patients with SCIs a higher standard of living and restore their autonomy; the development of BCIs is a beneficent endeavour and should be researched further.
These graphs show what the patients in the survey would like to achieve through the usage of BCIs: Graph A represents all participants, Graph B what low-functioning patients desire, and Graph C what high-functioning patients desire.
In summary, there are many types of BCIs, and among the most recently investigated in clinical research is the endovascular Stentrode developed by Synchron. Despite the potential drawbacks of BCIs, their positive effects far outweigh the ethical and economic issues that could arise. The Stentrode, in particular, overcomes physical barriers such as craniotomy and has shown promising results in its past trials. BCIs have the ability to transform medical care, whilst allowing the healthcare industry to advance its technological capacities such that later developments could accommodate a wider range of treatments.
ABSTRACT: Fibromyalgia has the reputation of causing a vague and generalised pain across the body that is as hard to treat as it is to diagnose. This may be because, so far, no treatment has been found that cures fibromyalgia - only medications that help manage it. Diagnosis in and of itself is a controversial topic: several different methods have been developed, but they are infrequently used because fibromyalgia syndrome is not normally considered in a differential diagnosis. The inclusion criteria for the studies incorporated in this paper comprised randomised controlled trials and systematic reviews on the effectiveness of paracetamol and amitriptyline (the most common prescriptions in India and the UK) for managing fibromyalgia. The effectiveness of drug therapies was compared to that of physical therapies.
The review found high-quality evidence theoretically supporting the drug therapies. However, the use of these drugs in practice did not result in a significant improvement in patients' quality of life: the medications did not resolve all the issues that came with generalised pain, nor did they improve comorbidities of the syndrome. Similarly with physical exercises - though it could be argued they provided more relief, their effectiveness was not statistically significant, and the evidence for it was of average quality at best.
Because these methods are ineffective at improving quality of life and pain scores, a novel method is proposed on the basis of an informal, case-series-style observation in a clinical setting: using a neck brace and arm sling. This showed more promising results compared to current treatments, although no formal study has been conducted on it.
“It’s been 45 years since it was named fibromyalgia” – yet we can’t cure it.
During a neurology rotation in a hospital in India, I saw several cases of fibromyalgia, especially in working adults and stay-at-home mums. Most patients came in complaining of severe, long-lasting headaches with poor sleep and retention. There was also a common trend of the patients coming from a high work stress environment.
Although the condition was not causing severe disability, it appeared to cause a steep decline in their quality of life. The Rights of Persons with Disabilities Act (Agrawal, 2020) recognises a list of disabilities for which there are various benefits. Fibromyalgia is not recognised as one of these disabilities, despite the debilitating nature of the condition, and there is no concept of reasonable accommodation in the workplace for people with fibromyalgia in India.
The treatment strategy the doctor used was different from what is normally prescribed by most physicians, and seemed to almost cure the syndrome. Hence, this paper aims to discuss the current treatment strategies used to treat fibromyalgia in India and the UK, and to introduce a novel concept: using neck braces and arm slings.
This is a scoping review examining the effectiveness of drugs prescribed to treat fibromyalgia syndrome, such as paracetamol and amitriptyline, and proposing an alternative physical therapy (neck braces). The review will involve the following: understanding the pathophysiology of fibromyalgia, examining the current treatments available in India and the UK, and suggesting a novel intervention.
Fibromyalgia is a syndrome (Rastogi, 2018) characterised by pain all over the body with increased sensitivity to touch. A central increase in sensitivity to pain, due to malfunctioning neurocircuits, causes the overall vague manifestation of pain in the musculoskeletal system. It is associated with several symptoms, such as fatigue, insomnia, mood swings, headaches and even digestive problems. An article in the BMJ (Anisur Rahman, 2014) explains further: "Chronic widespread pain is defined in epidemiological studies as pain for at least three months, affecting both sides of the body, both above and below the waist."
The symptoms tend to be spread out over several systems in the body, even though fibromyalgia does not cause organ damage specifically. Rather, it is triggered by emotional or physical trauma. Patients may complain of pain in the following areas (Frederick Wolfe, 2011); (Rheumatology, 2015):
These make up a total of 19 defined areas (Total = 19). Alongside this, a symptom severity score takes into account the severity of cognitive factors (like incomplete sleep and fatigue); combined, the two scales give a maximum score of 31.
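The scoring arithmetic above can be sketched in a few lines. This is a hedged illustration in the spirit of the 2010 ACR preliminary criteria (a Widespread Pain Index of 0-19 plus a Symptom Severity Score of 0-12, with the published WPI/SSS thresholds); the function names and structure are our own, and this is not a diagnostic tool:

```python
# Illustrative sketch of the combined fibromyalgia score described above.
# WPI = Widespread Pain Index (number of painful areas, 0-19).
# SSS = Symptom Severity Score (cognitive factors, sleep, fatigue; 0-12).

def combined_score(wpi: int, sss: int) -> int:
    """Combined score out of a maximum of 31 (19 + 12)."""
    if not (0 <= wpi <= 19 and 0 <= sss <= 12):
        raise ValueError("WPI must be 0-19 and SSS must be 0-12")
    return wpi + sss

def meets_criteria(wpi: int, sss: int) -> bool:
    """Thresholds from the published preliminary criteria:
    WPI >= 7 with SSS >= 5, or WPI 3-6 with SSS >= 9."""
    return (wpi >= 7 and sss >= 5) or (3 <= wpi <= 6 and sss >= 9)

print(combined_score(19, 12))  # maximum possible score -> 31
print(meets_criteria(8, 6))    # -> True
print(meets_criteria(2, 12))   # -> False (too few painful areas)
```

In practice clinicians also require symptoms to have persisted for at least three months, which this sketch omits.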
In the UK, more commonly, blood tests (Liza Straub MD, 2021) like the FM/a are used, in which cytokine levels are measured in vitro in peripheral blood mononuclear cells. These levels are much higher in patients with fibromyalgia than in those without it (Frederick G Behm, 2012).
Additional tests (such as urine tests and X-rays) may also be carried out. Because the symptoms are often vague, many people do not feel the need to address them. In India, 2-4% of the population are estimated to suffer from the disease (Rheumatology, 2021).
One review has already established the key mechanism (Rosalba Siracusa, 2021):
“amplification of afferent pain signals within the spinal cord as a key mechanism underlying the development of chronic pain in rheumatic diseases including (but not restricted to) fibromyalgia”
But the actual cause remains a grey area. In trying to answer what could cause pain characterised as vague and chronic, we could consider multiple approaches, including but not limited to: psychological, genetic, and biological. It could be argued that the reason for increased pain sensitivity is either increased stimulation or decreased inhibition. Firstly, we need to understand how we recognise pain.
A flowchart recommended by the same BMJ article follows.
Figure 2: Common symptoms patients with fibromyalgia suffer from.
The basis of feeling pain is (Dudley, 2021):
Moreover, some preclinical studies have been able to explain the role of glial cells in increased pain sensitivity. They have shown that glial cells are activated by several chemicals including nitric oxide, prostaglandins and reactive oxygen species, most of which are also pro-inflammatory chemicals. These further exacerbate the problem by acting in a positive feedback loop with the release of substance P and glutamate, thereby enhancing the hyperexcitability of PTNs. These result in the visceral pain patients feel in several neuropathic pain syndromes including fibromyalgia. To summarise (Chanika Assavarittirong, 2022):
Although this pathway refers to pain coming from the peripheries, it is not too dissimilar from visceral pain. At a molecular level, chemicals called neuropeptides (namely substance P and glutamate) are normally released at the dorsal horn of the spinal cord, where they activate post-synaptic receptors on second-order neurons (which leave the spinal column for regions in the brain), ultimately allowing us to perceive 'pain' as we feel it. These neurons are called pain transmission neurons (PTNs); in response to prolonged painful stimuli, they become overly sensitive, hence the response is exaggerated. There can be several mechanisms at a molecular level explaining what causes PTNs to become oversensitive, including a representation in a paper in the American Journal of Medicine (Bradely, 2009):
Whilst exploring the neuroendocrine relationships of the syndrome, studies have also classified fibromyalgia as a stress-related disorder resulting from a dysfunctional hypothalamus-pituitary-adrenal (HPA) axis. The classic composition of chemicals usually involved in stress has been mirrored in blood tests of those clinically diagnosed with the syndrome, including:
Figure 4: Flowchart Summarising Factors precipitating Fibromyalgia Syndrome
Patients can often get stuck in an unfortunate cycle of poor sleep and fibromyalgia, where each worsens the other. Poor sleep impairs the healing of muscles and damaged tissue – pathologies that tend to present in stressed individuals. This is worsened by the fact that growth hormones and growth factors – chemicals needed to repair microtears in muscles – are reduced when quality of sleep is poor. The conglomeration of these factors sustains a person's fatigue by prolonging the transmission of painful stimuli from the damaged muscles to the brain. This once again contributes to poor sleep and the cycle continues, thereby exacerbating the problem.
There appears to be evidence suggesting that people with family members previously diagnosed with fibromyalgia syndrome have a higher likelihood of receiving the same diagnosis in their adult years. Recent studies suggest that first-degree relatives have a significantly higher (8-fold increased) chance of also having fibromyalgia (Lesley M Arnold, 2004).
A 2014 BMJ article also mentioned a genome-wide linkage study of over 100 American families which showed that siblings of diagnosed patients have over a 13-fold increased risk of also having a positive diagnosis. They all appeared to have abnormalities in one region on chromosome 17.
Specific polymorphisms have also been identified (B Bondy, 1999) that appear to increase the risk of developing fibromyalgia. Serotonin receptor polymorphisms (including the serotonin 5-HT2A receptor T/T phenotype and the serotonin transporter), along with dopamine D4 receptor and catechol-O-methyltransferase polymorphisms, are the ones identified so far as significantly increasing the risk of developing fibromyalgia when stressed.
This paper surveys the wide range of research conducted on the subject to measure the effectiveness of drugs aimed at treating fibromyalgia. The method for discussing the available treatment options involved looking at several studies conducted in India or the UK (because these are the countries being compared). Further, the year of publication and the journal in which each study was published were taken into account – a crude measure of the study's reliability. The inclusion criteria also covered studies that were randomised controlled trials and systematic reviews, thereby ensuring the included evidence is of high quality. Resources were obtained from the National Library of Medicine or PubMed.
Considering there is still no absolute proof of its exact mechanisms, and the pathophysiology discussed is theory based on blood tests and patients' vitals, treatments so far have been aimed at general pain management and at reducing certain chemicals in the body that appear to worsen it (for example, introducing drugs that would suppress plasma cortisol levels).
A 2014 review of treatments for fibromyalgia explores several different kinds of physical and drug therapies that have been tried and tested so far. These include:
Analgesics (pain relievers) including Non-Steroidal Anti-Inflammatory Drugs (NSAIDs) and paracetamol are commonly prescribed in India. Several patients come in complaining of headache and neck pain, and this usually is not reason enough to suspect fibromyalgia, hence doctors typically link it to stress; they recommend sleep and over-the-counter medicines like paracetamol. Informal anecdotal evidence suggests that this works short term, right when patients feel the pain; when they take an analgesic, the pain dampens. However, this does not get rid of the syndrome itself.
When the actual diagnosis of fibromyalgia is produced, doctors switch to stronger drugs and injections, for example, anti-depressants. These have been shown to reduce only the pain aspect of the symptoms, but not to improve patient quality of life overall. In the UK, for example (NHS GOV, 2022), most anti-depressants have not been licensed to treat fibromyalgia, though a tricyclic anti-depressant called amitriptyline is one of the most commonly prescribed drugs. Other treatments include citalopram, duloxetine, fluoxetine (Prozac), paroxetine, and sertraline. Arthritis Research & Therapy published a paper (Seoyoung C Kim, 2015) about the use of varied pharmacological interventions to aid pain management in fibromyalgia. The study compared the drugs that were normally prescribed, including amitriptyline, duloxetine, gabapentin, and pregabalin. Some differences between drugs were obvious: rates of hospitalisation and outpatient visits appeared to be lowest with amitriptyline relative to the others, and duloxetine was preferred over pregabalin because it is an anti-depressant that also manages low mood.
Whilst there was an increase in the number of patient visits after the introduction of the drugs, adherence to this medication was questionable. We could say patients had a reduced use of healthcare, but only by a small degree, after starting to take these (then novel) drugs.
Enzymes called cyclo-oxygenase 1 & 2 (COX-1 & COX-2) are involved in the synthesis of inflammatory agents such as prostaglandins, thromboxane and prostacyclin, which act as vasodilators in response to pain. So, one way to reduce the feeling of pain is to inhibit the production of prostaglandins by inhibiting COX enzymes, which paracetamol weakly does (Hawkey, 2001). This mechanism is congruent with the basis of treating fibromyalgia – getting rid of the inflammatory factors. Studies have shown paracetamol to be quite effective in managing headaches, such as a study by the University of Oxford (Guy Stephens, 2016) that explored its use in treating episodic tension-type headache, in which patients taking paracetamol were pain-free 2 hours later more often than the 49% who took placebo. This means the relative effectiveness of paracetamol is questionable: it did work, but it does not necessarily provide much benefit over a sugar pill. It is important to keep in mind that tension headaches can occur in several episodes but are still an acute condition. However, one benefit of paracetamol is that it won't cause GI distress. Another study, published in Academic Emergency Medicine (Andrew K. Chang MD, 2018), investigated the efficacy of treating acute pain in elderly patients with IV acetaminophen (the American name for paracetamol), as it was deemed a lot safer than IV opioids. Several vital signs and self-reported pain ratings on a numerical scale were taken over several months. The results showed that the “analgesic for acute, severe pain in older adults within the first hour of treatment provided neither clinically nor statistically superior pain relief”.
This evidence strongly suggests paracetamol is only slightly effective in acute pain and not recommended for chronic pain. Even though paracetamol is an over-the-counter drug, it is possible to overdose on it; the only way it could treat chronic pain would be if a patient were to take paracetamol continuously on a weekly basis. Toxicity can build up over time, and the drug could overshoot the therapeutic window even if taken in spaced doses, which is dangerous. Based on this, if this drug were used to treat fibromyalgia, patients would have to be cautious about dosage and take it only when pain is severe. This is certainly not the optimal way to treat a chronic syndrome.
Amitriptyline is a tricyclic antidepressant (TCA) that works (Amit Thour, 2022) by blocking the re-uptake of serotonin (a neurotransmitter already established to be lacking in those diagnosed with fibromyalgia) and norepinephrine. This means these chemicals are not reabsorbed into the pre-synaptic neuron, so they spend more time in the
synapse, thereby having a prolonged effect. So, serotonin's mood-lifting effect lasts longer. Amitriptyline prevents re-uptake by the norepinephrine and serotonin transporters (NET and SERT). In chronic conditions, amitriptyline causes a desensitisation effect on the pre-synaptic receptors, leaving long-lasting changes in neurotransmission. Hence, it is commonly used to manage (NHS.GOV, 2022) depression and certain types of migraines. This suggests the drug has been shown to be effective in chronic pain, though it is contraindicated in several conditions (including pregnancy, diabetes, glaucoma, epilepsy, etc.). However, fibromyalgia is not the only neuropathic pain condition amitriptyline is used to treat: it is also used for anxiety, PTSD, insomnia, bladder pain syndrome, migraine prophylaxis, etc.
Pain Research and Management published a systematic review (Atef Mohamed Sayed Mahmoud, 2021) comparing two low doses of amitriptyline for chronic neck pain – a condition that shares symptoms with fibromyalgia. The study compared the effectiveness of 10mg vs. 5mg doses of amitriptyline, as well as patients' willingness to use it over SSRIs (selective serotonin-reuptake inhibitors like fluoxetine).
Patients considered in this study were workers who were almost unable to work due to the severity of the pain. This appeared to be relieved significantly more by the higher dose of amitriptyline (10mg) than by the lower dose (5mg). Similarly, patients' mental health was found to be far better with the higher dose. Patients were also more willing to purchase amitriptyline over fluoxetine because it was cheaper.
Lastly, a comprehensive review (Kim Lawson, 2017) of amitriptyline as a TCA listed its following advantages:
For these many reasons, amitriptyline is used as a first-line treatment for fibromyalgia today, though it has not proven to be as effective as expected. Considering the multi-modal approach investigated above, one would almost expect the drug to cure the syndrome. Unfortunately, however, it still does not. It also has nasty side-effects, which have been shown to lower compliance in patients.
The Cochrane Pain, Palliative and Supportive Care Group conducted a systematic review of the efficacy of amitriptyline in treating chronic neuropathic pain in older adults. The study showed amitriptyline was effective, but only for a minority of patients (it was more beneficial than placebo in only 1 out of every 4 patients). Another study (R Andrew Moore, 2015) noted that although it has been effective for patients over decades of use, there is no unbiased proof of it having an advantage over placebo; at the same time, neither is there evidence showing it to be ineffective.
Though shown to be the lesser of two evils, amitriptyline is known for its cardiovascular side-effects (amongst others). Though the mechanisms by which it affects this system are unknown, an experimental study (Yinglu Guan, 2020) investigated how it may have negative effects on the vascular system by preventing angiogenesis (the formation of blood vessels). After conducting animal studies, cell cultures and several assays, the study concluded that amitriptyline causes toxic cardiovascular side-effects in patients, possibly because it also suppresses endothelial tube formation and reduces ASM activity, cell proliferation and angiogenic signalling related to its vasoconstrictor effect. The reduction in ASM activity is related to its desensitisation effect on the pre-synaptic receptors (ASM is believed to be the major pharmacological target of amitriptyline for therapeutic effects such as treating major depression). The consequence of triggering ASM's pathological pathway, as amitriptyline does, is that it tends to cause the aforementioned side-effects. The mechanism is still not well understood, but the study does conclusively state that amitriptyline is anti-angiogenic, and this may be the reason it leads to cardiovascular disease in otherwise healthy psychiatric patients.
Amitriptyline appears to have a beneficial effect on patients, but a high dose is needed for it to have a significant impact – this is what doctors try to avoid in clinical practice, preferring to prescribe lower doses to reduce stress on the liver. At the same time, more recent studies claim that it does not have a huge advantage over placebo. Paracetamol, on the other hand, is shown to be effective in acute pain, not chronic – and fibromyalgia is a chronic neuropathic pain. High doses of paracetamol on a regular basis would be needed to keep up with triggered episodes of pain, and not much could be done about non-triggered, idiopathic (vague) pain.
For this reason, this paper aims to suggest a different approach to treating it.
Current pharmacological treatments appear to have a minimal effect on the resources used to treat fibromyalgia, meaning they do not reduce the number of patient visits/hospitalisations/the length of stay etc. (i.e. do not result in a significant improvement in patient quality of life). Though figures in studies say new medications are showing a statistically significant improvement in pain scores in volunteers, this does not translate to success in clinics.
Moreover, this affects compliance in patients, because they are not necessarily able to gauge the treatment's effectiveness and they have to deal with several side-effects. This, in turn, often impacts the efficacy of the drug as well; considering that we are trying to treat chronic pain, it is difficult to reduce dosage without increasing the amount of time patients have to be on the medication (which was probably set as lifelong to begin with).
Pharmacological treatment alone is insufficient (this is why physical therapies and exercise are prescribed, which have been reported to be a lot more effective anyway).
Physical therapies including exercise have been the most effective intervention in patients thus far; it's almost as though pharmacological interventions only aid that impact.
Zooming out from the microbiological perspective and looking at fibromyalgia as it presents – widespread musculoskeletal pain – studies have investigated the impact of different kinds of exercise on reducing the muscle tenderness and pain that accompany the syndrome.
A systematic review in the Cochrane Library (Alice Theadom, 2015) explored the effectiveness of interventions focusing on the mind-body interaction in fibromyalgia. These interventions included:
Psychological therapy alone did not produce much evidence of significantly improving the mood of participants. However, it did show a 7.5% absolute improvement in physical functionality (pain upon movement was much lower in those who received effective mind-body treatment). Similarly, their mood was better, pain improved between 3 and 14 weeks, and fewer patients left the study (for whatever reason). This in some way demonstrates that mind-body therapies focused on mental wellbeing were quite effective, and almost free from side-effects.
Another study focused on the effect of aerobic exercise (Julia Bidonde, 2017) in patients with fibromyalgia. The study aimed to compare three conditions: aerobic vs. anaerobic exercise, aerobic exercise vs. a non-exercise-based intervention, and different degrees of aerobic exercise. Unfortunately, only low- to medium-quality evidence on the overall effects could be produced; the medium-quality evidence indicated that overall pain and stiffness were reduced and health-related quality of life improved, but not significantly.
Another study in 2019 (Julia Bidonde, 2019) investigated the efficacy of mixed exercises on alleviating the symptoms of fibromyalgia. The mixed exercises included: aerobic, resistance training for muscle strengthening, and flexibility training. They were also unable to generate high quality evidence, but lower quality evidence agreed with the previous study, thereby suggesting that exercise was likely effective in relieving some symptoms, but not to a clinically significant level.
The issue with drawing conclusions from evidence investigating physical therapies as a treatment for fibromyalgia syndrome is that the subject has not been very well investigated. There is mostly low- to moderate-quality evidence supporting the theory to some extent.
There is far more evidence for the several different kinds of drugs used to treat fibromyalgia than there are papers siding with physical therapy. This goes to show that the scientific community has focused on perfecting a drug that would help manage (or even treat) the syndrome. Few studies currently provide high-quality evidence for the effectiveness of physical therapies; it is mainly autobiographical accounts from patients in clinical settings that suggest stretching exercises relieve the symptoms better than medications. Moreover, drugs are not free of side-effects – especially the first-line treatment in the UK, amitriptyline – which can negatively affect several systems in the body, and prolonged use (as is required for a chronic condition) may make side-effects more frequent. Though treatments have improved, they are not suitable for everyone; they help a minority (although they do this well). There is management, but no cure.
From this information, the concept of using a neck brace and/or an arm sling as an alternative came about. The initial observation was made in a clinical setting, and it showed more promising results.
A neck brace (or cervical collar) is a temporary device (either soft or hard, and suited to your size) that relieves neck pain following an injury or sprain (D., 2019). It has been shown to be a lot more effective than propping up pillows or taking medication (which lets the pain come back). The brace supports the head, allowing the neck muscles (including the trapezius) to rest and recover before they have to support the head again.
Considering fibromyalgia causes headaches and neck pain, the same principle can be applied to these patients: if a neck brace is worn and a proper regime is followed, improvement in the symptoms was observed over approximately 6-8 weeks. If the pain extends to the arm or hand, an arm sling can be added. In the clinical observation, this approach appeared to have a higher success rate than the current drugs on the market and the exercise therapies available. There are more rules to this regime, including strict sleeping hours, always wearing the neck brace, and not physically exerting yourself for the duration. The suggested 'solution' is arguably more sustainable than drug therapy because, if kept up for 6-8 weeks, it cures the pain, whereas the suggested pharmacological interventions are practically permanent. For this reason, it could be argued that patients would be more willing to accept it, although this would have to be clinically trialled first.
Fibromyalgia is a pain disorder that can become a disability if left untreated. It causes generalised pain throughout the body, along with reduced cognitive functioning (affecting memory and mood, for example) and poor sleep. By comparing the commonly used drugs in the UK and India (amitriptyline and paracetamol respectively), it is shown that the current medication available for fibromyalgia targets several pain pathways in the body whilst also trying to reduce the other pain-associated issues. This is supposedly effective when considering clinical trial data (i.e. relatively high-quality evidence), compared with the poorer-quality evidence supporting treatment with exercise. Ironically, though, in practice physical therapy has been shown to be more effective among patients.
Based on an informal case series type of observation conducted while interning with a neurologist in India, physical interventions demonstrated better results for patients in practice. The premise is similar to that of aerobic exercise which does not have good quality evidence supporting it, but is more popular among patients. The proposition can be more economical and would likely result in fewer adverse effects.
Agrawal, S., 2020. Newz Hook. [Online] Available at: https://newzhook.com/story/patients-fibromyalgia-invisible-disabilityswati-agrawal-rpwd-chronic-immunological-neurological-diseases/ [Accessed 2 November 2022].
Alice Theadom, M. C. H. E. S. V. L. F. K. M., 2015. Mind and body therapy for fibromyalgia. Cochrane Musculoskeletal Group, 2015(4).
Amit Thour, R. M., 2022. Amitriptyline. StatPearls.
Andrew K. Chang MD, P. E. B. P. A. A. P. C. C. M. S. P. M. D. W. M. A. C. M. A. R. M. E. J. G. M., 2018. Randomized Clinical Trial of Intravenous Acetaminophen as an Analgesic Adjunct for Older Adults With Acute Severe Pain. Academic Emergency Medicine, 26(4), pp. 402-409.
Anisur Rahman, M. U. D. C., 2014. Fibromyalgia. British Medical Journal, Volume 348, p. 1224.
Anon., n.d. GOV.UK. [Online] Available at: https://www.gov.uk/rights-disabled-person [Accessed 10 December 2022].
Atef Mohamed Sayed Mahmoud, S. G. R. M. L. B. J. M. B., 2021. Comparison between Two Low Doses of Amitriptyline in the Management of Chronic Neck Pain: A Randomized, Double-Blind, Comparative Study. Pain Research and Management, Volume 2021.
B Bondy, M. S. M. O. K. G. T. S. M. S. S. D. J. M. K. R. R. E. L. F. D. E. P. M. A., 1999. The T102C polymorphism of the 5-HT2A-receptor gene in fibromyalgia. Neurobiology of Disease, 6(5), pp. 433-439.
Bradely, L. A., 2009. Pathophysiology of Fibromyalgia. The American Journal of Medicine, 122(12), pp. 22-30.
Chanika Assavarittirong, W. S. B. G.-G., 2022. Oxidative Stress in Fibromyalgia: From Pathology to Treatment. Oxidative Medicine and Cellular Longevity, Volume 2022, p. 11.
D., N., 2019. Why Should You Wear A Neck Brace. [Online] Available at: https://www.backbraces.org/why-should-you-wear-a-neckbrace/ [Accessed 12 November 2022].
Dudley, P., 2021. TeachMePhysiology. [Online] Available at: https://teachmephysiology.com/nervous-system/sensorysystem/pain-pathways/ [Accessed 10 November 2022].
Frederick G Behm, I. M. G. O. K. V. L. S. G. P. A. G. B. S. G., 2012. Unique immunological patterns in fibromyalgia. BMC Clinical Pathology, 12(25).
Frederick Wolfe, D. J. C. M.-A. F. D. L. G. W. H. R. S. K. P. M. A. S. R. I. J. R. J. B. W., 2011. Fibromyalgia Criteria and Severity Scales for Clinical and Epidemiological Studies: A Modification of the ACR Preliminary Diagnostic Criteria for Fibromyalgia. The Journal of Rheumatology, 38(6), pp. 1113-1122.
Guy Stephens, S. D. R. A. M., 2016. Paracetamol (acetaminophen) for acute treatment of episodic tension-type headache in adults. Cochrane Pain, Palliative and Supportive Care Group, Issue 6.
Hawkey, C. J., 2001. COX-1 and COX-2 inhibitors. Best Practice & Research in Clinical Gastroenterology, 15(5), pp. 801-820.
Julia Bidonde, A. J. B. C. L. S. S. C. W. K. E. M. T. J. O. S. M. G. V. D. B.-H. C. B., 2019. Mixed exercise training for adults with fibromyalgia. Cochrane Database of Systematic Reviews, 2019(5).
Julia Bidonde, A. J. B. C. L. S. T. J. O. S. Y. K. S. M. G. C. B. H. J. F., 2017. Aerobic exercise training for adults with fibromyalgia. Cochrane Musculoskeletal Group, 2017(6).
Kim Lawson, S. A. M., 2017. A Brief Review of the Pharmacology of Amitriptyline and Clinical Outcomes in Treating Fibromyalgia. Biomedicines, 5(2), p. 24.
Lesley M Arnold, J. I. H. E. V. H. A. E. W. D. A. F. M. B. A. L. O. S. P. E. K. J., 2004. Family study of fibromyalgia. Arthritis and Rheumatism, 50(3), pp. 944-952.
Liza Straub MD, A. M. M., 2021. FM/a Blood Test for Diagnosis of Fibromyalgia. American Family Physician, 103(9), pp. 566-567.
NHS GOV, 2022. Amitriptyline for depression. [Online] Available at: https://www.nhs.uk/medicines/amitriptyline-for-depression/ [Accessed 12 November 2022].
NHS GOV, 2022. Overview: Fibromyalgia. [Online] Available at: https://www.nhs.uk/conditions/fibromyalgia/ [Accessed 12 November 2022].
R Andrew Moore, S. D. D. A. P. C. P. J. W., 2015. Amitriptyline for neuropathic pain in adults. Cochrane Pain, Palliative and Supportive Care Group, 2015(7).
Rastogi, D. A., 2018. National Health Portal India. [Online] Available at: https://www.nhp.gov.in/disease/neurological/fibromyalgia#:~:text=2%2D4%20percent%20of%20people%20may%20be%20affected%20by%20fibromyalgia [Accessed 10 December 2022].
Rheumatology, A. C. o., 2015. 2010 Fibromyalgia Diagnostic Criteria - Excerpt, s.l.: s.n.
Rheumatology, A. C. o., 2021. Fibromyalgia. [Online] Available at: https://www.rheumatology.org/I-Am-A/PatientCaregiver/Diseases-Conditions/Fibromyalgia [Accessed 12 November 2022].
Rosalba Siracusa, R. D. P. S. C. D. I., 2021. Fibromyalgia: Pathogenesis, Mechanisms, Diagnosis and Treatment Options Update. International Journal of Molecular Sciences, 22(8).
Seoyoung C Kim, J. E. L. Y. C. L., 2015. Patterns of health care utilization related to initiation of amitriptyline, duloxetine, gabapentin or pregabalin in fibromyalgia. Arthritis Research & Therapy, 17(1).
Yinglu Guan, X. L. M. U. K. M. B. P.-L. L. Y. Z., 2020. Tricyclic antidepressant amitriptyline inhibits autophagic flux and prevents tube formation in vascular endothelial cells. Basic Clinical Pharmacology and Toxicology, 124(4), pp. 370-384.
Reviewed and edited by T. Burton, T. Lawson and S. Sandanatavan
ABSTRACT: Parkinson's disease (PD) is one of the most common neurodegenerative disorders. Data sources were Google Scholar and PubMed, searched using the keywords Parkinson's disease (PD) and deep brain stimulation (DBS), supplemented with relevant reviews and reference lists. Patient outcomes were analysed based on three factors: electrode implant site, cognitive features, and age. Subthalamic nucleus (STN) DBS appears to bring the most motor benefits, along with dramatic reductions in medication use. Globus pallidus internus (GPi) DBS is associated with the best cognitive outcomes; however, subpar alleviation of motor symptoms is a limiting factor. Increased patient age was not found to carry greater risk and was often associated with increased effectiveness, suggesting the therapeutic window for DBS should be widened. There is a lack of long-term (10+ year) studies analysing DBS effects, and this area will benefit from further clinical research.
Parkinson’s disease (PD) is a chronic degenerative disease affecting mainly the older population. Researchers believe it is caused by a combination of genetic, age and environmental factors that cause a loss of nerve cells in the brain. This loss leads to motor impairments like rigidity, tremors, compromised balance, and more. The disease has distinguishing neuropathological brain changes – it is usually associated with the formation of abnormal proteinaceous spherical bodies called Lewy bodies.
In the pre-symptomatic stages of the disease, the inclusion bodies are confined to the medulla oblongata/pontine tegmentum and olfactory bulb/anterior olfactory nucleus.
With disease progression, the substantia nigra and other nuclei of the midbrain and forebrain become altered. This is when symptoms start developing in patients. PD is a multifactorial disease and there is no treatment that will halt progression. Pharmacological treatment is symptomatic and usually utilises dopaminergic drugs aimed at correcting the motor disturbances, an example being levodopa. With disease progression and less capacity of the system to store dopamine, the majority of patients experience a shorter duration of response to their medication, alternating between good and poor responses to it (on-off symptoms) (Sveinbjornsdottir, 2016).
The use of surgery in PD dates back to the early 1950s. Patients with particularly unbearable symptoms would be referred for ablative surgery, usually of the contralateral thalamus. With the introduction of levodopa, surgical treatment became rare. The widespread recognition of levodopa-induced complications then prompted surgeons and clinicians to revisit surgical interventions. Initially, this was mainly lesion surgery, such as pallidotomy, which was successful in treating levodopa-induced dyskinesias. A change in technique came with the introduction of brain stimulators. This involved high-frequency stimulation – deep brain stimulation (DBS) – of discrete brain areas, producing efficient and reversible inhibition of the target site (Davie and Charles, 2008). So, we come to the question – what is DBS?
Deep brain stimulation (DBS) is a technique used in neurosurgery which consists of implanting an electrode into a particular area of neural tissue in the brain, to stimulate it continuously or periodically. The electrode is usually connected to an internalised neuro-pacemaker or stimulator that can be programmed in amplitude, pulse width and frequency (Benabid, 2003). The device is usually implanted into the subthalamic nucleus (STN) or the globus pallidus internus (GPi) (Volkmann, 2004).
This paper aims to identify the main predictors of beneficial long-term outcomes. Parkinson's disease (PD) is expected to cause huge economic burdens and is the second most common neurodegenerative condition (after Alzheimer's). In recent years, the interest of the scientific community in PD has grown significantly, mainly triggered by the discovery of several causative monogenetic mutations (de Lau and Breteler, 2006).
Considering its global burden, a thorough understanding of the potential outcomes for these patients is vital. Since the approval of DBS in 2002, around 300 patients undergo DBS every year in the UK (Parkinson’s UK, 2020). Despite its widespread use, many aspects of this therapy remain unknown and controversial. This paper aims to outline the impact of DBS on quality of life based on three criteria: DBS electrode implant site, patient age, and differences in cognitive outcomes.
PubMed and Google Scholar were searched from January 2000 to September 2022 with the search terms “Parkinson disease” and “deep brain stimulation” and “English”, which yielded around 1,179 papers. Data or additional articles were also drawn from other sources, such as recent reviews, reference lists of relevant publications, and a search of the authors' own reference database. From the retrieved papers, only meta-analyses and randomised controlled trials from the top 30 results were selected.
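The selection step described above can be sketched as a simple filter. This is only an illustration of the stated inclusion rule (meta-analyses and randomised controlled trials among the top 30 ranked results); the record structure and `design` labels are hypothetical, not taken from any real database API.

```python
# Illustrative sketch of the screening strategy described above.
# Record fields and study-design labels are hypothetical examples.

def screen(records, top_n=30, designs=("meta-analysis", "randomised controlled trial")):
    """Keep only the top-N ranked records whose study design qualifies."""
    return [r for r in records[:top_n] if r["design"] in designs]

retrieved = [
    {"title": "STN vs GPi DBS outcomes", "design": "meta-analysis"},
    {"title": "Case report: DBS at 80", "design": "case report"},
    {"title": "Long-term STN DBS trial", "design": "randomised controlled trial"},
]

selected = screen(retrieved)
print(len(selected))  # 2 of the 3 example records qualify
```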
Motor dysfunction in PD: DBS is usually considered only after mainstream medications have become ineffective at managing PD symptoms. The first criterion to evaluate is the site of electrode implantation. Both the globus pallidus pars interna (GPi) and the subthalamic nucleus (STN) are common and acceptable sites, yet they produce slight variations in patient outcomes (Mansouri et al., 2018). A meta-analysis of 13 randomised controlled trials comparing these two sites found that STN stimulation led to a threefold reduction in medication (Mansouri et al., 2018). On the other hand, the study found that GPi stimulation led to a drastic improvement in mood and levels of depression, a finding that remained consistent at 36-month follow-up. Another study found that STN stimulation actually led to a worsening of speech and gait with disease progression (Bronstein et al., 2011). This comes with the caveat that PD is a degenerative disease, so such worsening may be expected with disease severity and time. Several medium-term (5–6 years) and some long-term (10 years) studies have confirmed that STN DBS improves motor fluctuations, dyskinesias, and the cardinal motor signs of Parkinson's disease. It was also found that, after STN implantation, the levodopa-equivalent dose (LED), i.e., the amount of levodopa required for functionality, was reduced by 55.9%. This might be irrelevant, however, as the DBS electrode stimulation rate is increased as levodopa use is reduced, a possible compensation. By contrast, the effects of GPi DBS in medium-term studies are less consistent; some studies even report reduced benefits 5 years after surgery (Fasano, Daniele and Albanese, 2012).
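As a worked example of the levodopa-equivalent dose figure quoted above, the percentage reduction can be computed directly; the 1,000 mg/day baseline is a hypothetical illustration, not a value from the cited studies.

```python
# Worked example of the 55.9% levodopa-equivalent dose (LED) reduction
# reported after STN DBS. The baseline dose is hypothetical.

def led_reduction_pct(baseline_mg: float, post_mg: float) -> float:
    """Percentage reduction in LED from baseline to post-operative dose."""
    return (baseline_mg - post_mg) / baseline_mg * 100

baseline = 1000.0  # hypothetical pre-operative LED, mg/day
post_op = 441.0    # dose corresponding to a 55.9% reduction

print(round(led_reduction_pct(baseline, post_op), 1))  # 55.9
```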
Eight years after STN DBS, the improvement in rigidity was sustained even without additional drug treatment. Compared to a baseline of drug treatment alone, 25.1% of patients had a reduction in bradykinesia. However, 21.6% of patients worsened when a combination of STN DBS and drug treatment was used (Fasano et al., 2010). This finding was confirmed 10 years after surgery (Castrioto, 2011) and is probably due to the progressive nature of the disease and the development of resistance to combined stimulation and drug treatment. There has also been a reported reduction in the motor benefits of GPi DBS 5 years post-surgery. The large reduction in LED seen with STN DBS is not replicated with GPi. The GPi is also a large stimulation site, causing battery life to drain very quickly (Fasano, Daniele and Albanese, 2012).
Another possible site for electrode implantation is the ventral intermediate nucleus of the thalamus, which leads to a marked reduction in tremor but has no effect on other symptomology (Volkmann, 2004). This region is not well studied, and very little research is available to verify its therapeutic use.
The psychological changes seen in Parkinson's disease are usually major determinants of quality of life in patients. Common manifestations include cognitive dysfunction, sleep dysfunction, behavioural changes and dysautonomia, amongst others. These factors are also likely to be more treatment-resistant (Fasano, Daniele and Albanese, 2012).
It has been found that postoperative mood disorders such as depression or mania can occur after STN implantation. These can be acute or chronic, transient or persistent. The improvement in mood seen in postoperative patients with STN or GPi DBS may be related to the alleviation of motor dysfunction; however, the psychological symptoms soon reappear (Fasano, Daniele and Albanese, 2012). There are currently no guidelines as to which site is better for electrode implantation – STN or GPi. As previously discussed, STN stimulation is superior in terms of its motor benefits. However, research has found that, in terms of cognitive benefits, GPi stimulation seems to take first place. This needs to be weighed against the weaker motor improvements seen with GPi stimulation (Fasano et al., 2010). Another important psychological symptom associated with PD is dementia. PD dementia is related to Lewy body formation, and a reported 30% of patients are affected by it (Freund et al., 2009). STN stimulation is associated with either no change or a decrease in frontal lobe function (Witt et al., 2008). One study found that stimulation of the nucleus basalis of Meynert (NBM) in addition to STN stimulation drastically improved cognitive function (Witt et al., 2008). There was memory improvement, but memory still remained deficient compared to normal. The therapeutic potential of NBM stimulation in reducing dementia has been shown in many animal models and could potentially be the next avenue for research in the field (Buzsaki et al., 2000). Currently, there is very limited reliable human research showing the efficacy of this sort of stimulation, or of STN stimulation, in improving the psychiatric effects of PD. As previously mentioned, GPi stimulation is associated with a reduction in depression and other mood symptoms; however, no known positive effects are seen for dementia. Moreover, due to the subpar motor symptom alleviation and medication regimen, it is usually not preferred (Bronstein et al., 2011).
Age and DBS surgery: PD is a chronic disease in elderly populations, and DBS therapy is usually only considered after a disease duration of 14 years on average (DeLong et al., 2014). The proportion of patients undergoing DBS has remained the same despite the number of PD cases increasing yearly (DeLong et al., 2014). This suggests that the therapeutic window for DBS has remained more or less constant despite strong research backing its advantages. The main reason for this is the greater risk of surgical complications with increasing age. However, one study of 1,757 patients with an average age of 61.2 found no increase in hospital stay or in the rate of infection, haemorrhage, pulmonary embolism or pneumonia with increasing age (DeLong et al., 2014). There is, though, evidence suggesting that older patients have higher rates of axial muscle deterioration, which is resistant to both dopaminergic therapy and DBS (Derost et al., 2007), potentially making the surgery, with the complications naturally associated with old age, not worthwhile. Despite this, there is still overwhelming evidence that the benefits of DBS surgery to the patient's quality of life outweigh the risks associated with old age (Hely et al., 2009). Interestingly, elderly patients seemed to show greater benefits in terms of motor symptomology than younger patients. This may be because the disease is more severe at older ages, so the improvements seen are greater (Ory‐Magne et al., 2007).
In this paper, a consensus has been reached that DBS therapy markedly improves patient outcomes. Depending on the chief patient problems, different sites of electrode implantation can be selected for symptom-specific treatment: STN stimulation for reduction in motor symptomology and medication use, and GPi stimulation for mood-related conditions. STN and GPi stimulation represent two consolidated treatment options with known indications and adequate follow-up of functional variables, although high-quality data have mostly been collected in patients with STN DBS (Fasano et al., 2010).
It can be concluded that, regardless of the age of the patient, if careful consideration is given to the patient's ailments, outcomes from DBS surgery are favourable and are not associated with increased surgical risks or hospital stay. Among patients with PD who are older than 75 years of age, the overall complication risk, as well as the risk of postoperative haemorrhage, pneumonia, pulmonary embolism or infection, remains relatively stable despite increasing age. This suggests a possible expansion of the therapeutic window traditionally considered for DBS candidates, or at least the removal of age as a rigid exclusion criterion. Finally, novel sites of stimulation such as the NBM provide new therapeutic windows for patients with PD dementia. This paper has limitations that must be mentioned, the main one being a lack of strong foundational knowledge related to PD. Many of the papers reviewed were written by professionals with many years of experience in the field, and attempting to analyse them as a third-year medical student comes with its problems: the level of understanding and experience is necessarily limited. Nonetheless, an earnest attempt has been made to gain as much knowledge as possible in the available time to properly appreciate these papers.
Benabid, A.L. (2003). Deep brain stimulation for Parkinson’s disease. Current Opinion in Neurobiology, [online] 13(6), pp.696–706. doi:10.1016/j.conb.2003.11.001.
Bronstein, J.M., Tagliati, M., Alterman, R.L., Lozano, A.M., Volkmann, J., Stefani, A., Horak, F.B., Okun, M.S., Foote, K.D., et al. (2011). Deep Brain Stimulation for Parkinson Disease. Archives of Neurology, [online] 68(2). doi:10.1001/archneurol.2010.260.
Buzsaki, G., Bickford, R., Ponomareff, G., Thal, L., Mandel, R. and Gage, F. (2000). Nucleus basalis and thalamic control of neocortical activity in the freely moving rat. The Journal of Neuroscience, 8(11), pp.4007–4026. doi:10.1523/jneurosci.08-11-04007.2000.
Castrioto, A. (2011). Ten-Year Outcome of Subthalamic Stimulation in Parkinson Disease. Archives of Neurology, [online] 68(12), p.1550. doi:10.1001/archneurol.2011.182.
Davie and Charles (2008). A review of Parkinson’s disease. [online] ResearchGate. Available at: https://www.researchgate.net/publication/5454757_A_review_of_Parkinson%27s_disease [Accessed 2022].
de Lau, L.M. and Breteler, M.M. (2006). Epidemiology of Parkinson’s disease. The Lancet Neurology, [online] 5(6), pp.525–535. doi:10.1016/s1474-4422(06)70471-9.
DeLong, M.R., Huang, K.T., Gallis, J., Lokhnygina, Y., Parente, B., Hickey, P., Turner, D.A. and Lad, S.P. (2014). Effect of Advancing Age on Outcomes of Deep Brain Stimulation for Parkinson Disease. JAMA Neurology, [online] 71(10), pp.1290–1295. doi:10.1001/jamaneurol.2014.1272.
Derost, P.-P., Ouchchane, L., Morand, D., Ulla, M., Llorca, P.-M., Barget, M., Debilly, B., Lemaire, J.-J. and Durif, F. (2007). Is DBS-STN appropriate to treat severe Parkinson disease in an elderly population? Neurology, [online] 68(17), pp.1345–1355. doi:10.1212/01.wnl.0000260059.77107.c2.
Fasano, A., Daniele, A. and Albanese, A. (2012). Treatment of motor and nonmotor features of Parkinson’s disease with deep brain stimulation. The Lancet Neurology, 11(5), pp.429–442. doi:10.1016/s1474-4422(12)70049-2.
Fasano, A., Romito, L.M., Daniele, A., Piano, C., Zinno, M., Bentivoglio, A.R. and Albanese, A. (2010). Motor and cognitive outcome in patients with Parkinson’s disease 8 years after subthalamic implants. Brain, 133(9), pp.2664–2676. doi:10.1093/brain/awq221.
Freund, H.-J., Kuhn, J., Lenartz, D., Mai, J.K., Schnell, T., Klosterkoetter, J. and Sturm, V. (2009). Cognitive Functions in a Patient With Parkinson-Dementia Syndrome Undergoing Deep Brain Stimulation. Archives of Neurology, 66(6). doi:10.1001/archneurol.2009.102.
Hely, M.A., Morris, J.G.L., Reid, W.G.J., O’Sullivan, D.J., Williamson, P.M., Broe, G.A. and Adena, M.A. (2009). Age at onset: the major determinant of outcome in Parkinson’s disease. Acta Neurologica Scandinavica, 92(6), pp.455–463. doi:10.1111/j.1600-0404.1995.tb00480.x.
Mansouri, A., Taslimi, S., Badhiwala, J.H., Witiw, C.D., Nassiri, F., Odekerken, V.J.J., De Bie, R.M.A., Kalia, S.K., Hodaie, M., Munhoz, R.P., Fasano, A. and Lozano, A.M. (2018). Deep brain stimulation for Parkinson’s disease: meta-analysis of results of randomized trials at varying lengths of follow-up. Journal of Neurosurgery, [online] 128(4), pp.1199–1213. doi:10.3171/2016.11.JNS16715.
Ory‐Magne, F., Brefel‐Courbon, C., Simonetta‐Moreau, M., Fabre, N., Lotterie, J.A., Chaynes, P., Berry, I., Lazorthes, Y. and Rascol, O. (2007). Does ageing influence deep brain stimulation outcomes in Parkinson’s disease? Movement Disorders, 22(10), pp.1457–1463. doi:10.1002/mds.21547.
Parkinson’s UK (2020). Reporting on Parkinson’s: information for journalists. [online] Parkinson’s UK. Available at: https://www.parkinsons.org.uk/aboutus/reporting-parkinsons-information-journalists
Parkinson’s UK (2020). Deep brain stimulation boosts the strength of brain cell batteries. [online] Available at: https://www.parkinsons.org.uk/news/deep-brain-stimulation-boosts-strength-brain-cell-batteries#:~:text=What%20is%20deep%20brain%20stimulation [Accessed 30 Oct 2022].
Pezzoli, G. and Zini, M. (2010). Levodopa in Parkinson’s disease: from the past to the future. Expert Opinion on Pharmacotherapy, 11(4), pp.627–635. doi:10.1517/14656561003598919.
Sveinbjornsdottir, S. (2016). The clinical symptoms of Parkinson’s disease. Journal of Neurochemistry, [online] 139 Suppl 1(S1), pp.318–324. doi:10.1111/jnc.13691.
Volkmann, J. (2004). Deep Brain Stimulation for the Treatment of Parkinson’s Disease. Journal of Clinical Neurophysiology, [online] 21(1), pp.6–17. Available at: https://journals.lww.com/clinicalneurophys/Abstract/2004/01000/Deep_Brain_Stimulation_for_the_Treatment_of.3.aspx
Witt, K., Daniels, C., Reiff, J., Krack, P., Volkmann, J., Pinsker, M.O., Krause, M., Tronnier, V., Kloss, M., Schnitzler, A., Wojtecki, L., Bötzel, K., Danek, A., Hilker, R., Sturm, V., Kupsch, A., Karner, E. and Deuschl, G. (2008). Neuropsychological and psychiatric changes after deep brain stimulation for Parkinson’s disease: a randomised, multicentre study. The Lancet Neurology, [online] 7(7), pp.605–614. doi:10.1016/S1474-4422(08)70114-5.
Ulyssa Fung, Pharmacology
Reviewed and edited by
S. Sandanatavan

ABSTRACT: Alzheimer’s Disease (AD) is a neurodegenerative disease and one of the main causes of dementia. The cause of neurodegeneration is unknown, although there are several identified biomarkers of AD, including amyloid plaques and neurofibrillary tangles. Currently, there are several different types of pharmacological treatments of AD which aim to slow the cognitive decline in AD patients. Such treatments include acetylcholinesterase inhibitors (donepezil, rivastigmine and galantamine), NMDA antagonists (memantine), and Lecanemab. Lecanemab is a humanised monoclonal antibody with a high affinity for beta-amyloid protofibrils. The recently approved drug targets amyloid plaques, and findings from clinical trials have shown a decrease in amyloid concentration as well as a consistent slowing of cognitive decline. To investigate how this drug compares to the other existing treatments, data from high-quality systematic reviews on each drug were analysed and used for comparisons. Findings depicted that in terms of treating clinical symptoms, both Lecanemab and existing treatments seem to exhibit similar effects in reducing cognitive decline. Lecanemab may result in slightly longer-lasting effects, as it not only targets the cognitive symptoms, but the biomarkers of AD as well. Additionally, Lecanemab may be slightly more tolerable than existing treatments, although more rigorous and comprehensive comparisons are required.
The data and studies used in this paper were collected through scientific databases such as PubMed and Cochrane. Key words such as ‘Alzheimer’s Disease’, ‘Lecanemab’ etc. were used to collect a plethora of scientific papers on the topic at hand. Papers which discussed Alzheimer’s disease, current approved pharmacological treatments of AD, and Lecanemab were included. Papers which discussed other forms of treatment for AD which are not approved for use were excluded. To ensure the studies were relevant, papers were only used if they were published between 2000 and 2023. Abstracts of papers retrieved using these key words were skimmed to determine the relevance of each paper. The references of these papers were also skimmed to look for further papers relevant to the topic.
The main sources collected included systematic reviews of the existing treatments, including reviews from the Cochrane Database of Systematic Reviews, as well as phase II and III clinical trial data on Lecanemab. The trials used were double-blinded, with random assignment to the treatment or placebo group. This was done to ensure that the scientific evidence discussed was of high quality.
AD is a neurodegenerative disorder that causes widespread degeneration of the brain, leading to a severe decline in cognitive functions (Kumar, et al., 2022). It has a purely biological basis but has effects on cognition and behaviour. In the early stages of AD, degeneration begins in the entorhinal cortex, adjacent to the hippocampus.
As the disease progresses, however, the degeneration spreads to the cortical regions. This results in progressive impairment of cognitive and behavioural functions.
There are 3 clinical stages of AD – preclinical, mild cognitive impairment (MCI), and AD dementia. In the preclinical stage, there is no manifestation of any clinical symptoms. Cognitive performance is within the expected range relative to the individual; however, degeneration in the brain has already begun. At the MCI stage, there is a slight decline in cognitive functions relative to the individual, and there is evidence that cognitive performance is below the individual's baseline level. There may also be some evidence of neurobehavioural disturbances, such as changes in mood.
However, the individual is still capable of being fully independent, although more complex tasks may be functionally impacted. In the final stages of AD, there is substantial cognitive impairment that affects several domains, such as memory, language, attention, visuospatial and temporal orientation, reasoning etc.
There may also be some evidence of neurobehavioural symptoms, such as depression and anxiety. Individuals are no longer fully independent and rely on carers to help with daily activities (Jack, et al., 2018). Dementia can be split further into several stages – mild, moderate, and severe. There are many symptoms of AD, all of which depend on the stage. Most symptoms are cognitive, but there are neurobehavioural symptoms as well (Birks & Harvey, 2009).
Initial symptoms include:
• Episodic short-term memory loss, but episodic long-term memory is relatively intact.
• Impairment in attention.
• Impairment in problem solving and executive functions – this leads to problems with multitasking.
• Neurobehavioural symptoms in earlier stages may include lack of motivation.
Later symptoms include:
• Incoherent speech and language impairments.
• Severe memory impairment – failure to recognise close relatives and friends.
• Visuospatial impairments, and trouble with spatial and temporal orientation.
• Motor task impairment, sleep disturbances, extrapyramidal motor signs, etc.
In AD, there are two key biological markers – abnormal extracellular neuritic plaques and intracellular neurofibrillary tangles. The neuritic plaques are formed by the extracellular aggregation of beta-amyloid (Aβ). Aβ is formed from amyloid precursor protein (APP), which is cleaved by alpha-, beta-, or gamma-secretase. Cleavage of APP by alpha- or beta-secretase alone forms small amyloid deposits which are not toxic. However, APP cleavage by beta-secretase followed by gamma-secretase leads to the formation of Aβ-42. High levels of Aβ-42 lead to aggregation of amyloid, which causes neuronal toxicity. The aggregated amyloid forms plaques around meningeal and cerebral vessels and in the grey matter of the AD brain (Kumar, et al., 2022).
On the other hand, the intracellular neurofibrillary tangles are formed from the protein tau, which is a protein used to stabilise microtubules within the axon. The aggregation of amyloid leads to the hyperphosphorylation of tau. This causes the aggregation of tau, leading to formation of intracellular neurofibrillary tangles. These tangles first start in the hippocampus and can be seen all throughout the cerebral cortex (Kumar, et al., 2022).
There is also a genetic component to AD. The risk of developing AD increases by 10–30% for individuals who have first-degree relatives with AD. Additionally, individuals with siblings who have late-onset AD are 2 to 3 times more likely to develop AD. AD can also be inherited as an autosomal dominant disorder. The inherited form of the disease is linked to mutations in 3 genes – the APP gene (chromosome 21), presenilin 1 (chromosome 14) and presenilin 2 (chromosome 1). All of these mutations interrupt the normal cleaving of APP, resulting in the formation of Aβ aggregations. AD is also linked to several genetic markers, including the presence of the APOE e4 allele (the gene associated with forming apolipoprotein E). 50% of individuals carrying one copy of this allele have AD, increasing to 90% in those with two copies (Kumar, et al., 2022).
As of 2023, there is no cure for AD. However, there are several approved pharmacological treatments aimed at reducing the clinical symptoms.
One common type of treatment is the use of acetylcholinesterase inhibitors (AChEIs). This form of treatment is based on the cholinergic hypothesis, under which the cause of AD is a reduction in acetylcholine synthesis. Cholinergic neuron death and degeneration are prominent in AD, which results in decreased acetylcholine activity (Birks & Harvey, 2009). Acetylcholine (ACh) is a neurotransmitter associated with learning and memory, and the cognitive impairment in AD is attributed to the lack of acetylcholine activity. As a result, AD symptoms can be treated by increasing the activity of ACh (Sharma, 2019).
There are three approved AChEIs – donepezil, rivastigmine and galantamine. All are competitive, reversible inhibitors: they compete with ACh to bind to acetylcholinesterase (AChE), the enzyme responsible for ACh hydrolysis. The activity of these inhibitors is dependent on the concentration of ACh – the higher the concentration of ACh, the less likely the inhibitors are to bind to acetylcholinesterase. This results in a selective effect, where brain areas with low ACh are more affected by the inhibitors than areas with high ACh (Olin & Schneider, 2002). Greater activity of AChEIs in areas of low ACh enhances ACh transmission in these areas as needed.
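The concentration dependence described above is the textbook behaviour of a competitive inhibitor. As a sketch (standard Michaelis–Menten kinetics, not taken from the cited reviews), the rate $v$ of ACh hydrolysis in the presence of an inhibitor at concentration $[I]$ with inhibition constant $K_i$ is:

```latex
v = \frac{V_{\max}\,[\mathrm{ACh}]}{K_m\left(1 + \frac{[I]}{K_i}\right) + [\mathrm{ACh}]}
```

As $[\mathrm{ACh}]$ grows large, $v$ approaches $V_{\max}$ regardless of $[I]$ – the transmitter outcompetes the inhibitor – which is consistent with the selective effect described: regions with low ACh experience proportionally stronger inhibition of AChE.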
Although all three AChEIs have slightly different mechanisms, their main purpose is to inhibit the hydrolysis of ACh. Inhibiting the action of the enzyme increases the concentration of ACh in synapses, enhancing cholinergic transmission in areas with low transmission. Donepezil is one of the three approved AChEIs used to treat AD. It is commonly used to treat mild to moderate AD, although higher doses of donepezil have recently been approved to treat moderate to severe AD. In fact, it is the only AChEI used to treat AD across different stages and severities (Dou, et al., 2018). Donepezil inhibits the action of AChE by causing the simultaneous inhibition of the anionic and peripheral anionic sites on AChE, effectively inhibiting the activity of one of the two active sites in AChE (Sharma, 2019).
Rivastigmine is another AChEI used to treat AD and is only approved for mild to moderate AD. It is the only AChEI that inhibits both AChE and butyrylcholinesterase (BuChE). BuChE also hydrolyses ACh, although not selectively. Since both AChE and BuChE hydrolyse ACh, inhibition of both enzymes may result in a more potent and sustained clinical benefit. Rivastigmine competes with ACh to bind to AChE, which cleaves the rivastigmine into smaller phenolic compounds that are rapidly excreted from the body. Rivastigmine binds to AChE for longer periods of time than ACh, and thus inhibits AChE activity by rendering the enzyme inactive (Onor, et al., 2007). Galantamine is the final AChEI used to treat AD. In addition to inhibiting AChE activity, it also binds to nicotinic cholinergic receptors and enhances the effect of ACh at these receptors – as a result, it may be able to enhance cholinergic transmission (Olin & Schneider, 2002).
Although all three drugs have been shown to reduce the progression of clinical symptoms, there are some adverse effects associated with AChEIs as well. Common side effects include gastrointestinal problems such as nausea, severe vomiting, diarrhoea and loss of appetite, as well as muscle weakness (Sharma, 2019).
In addition to AChEIs, memantine is another approved pharmacological treatment for AD. Memantine is an NMDA receptor antagonist; its use assumes that AD is partly caused by overstimulation of NMDA receptors by glutamate in the brain. This overstimulation leads to an excessive influx of calcium ions into the neuron, causing neuronal damage (Robinson & Keating, 2006). Memantine protects neurons by blocking this excessive receptor activity. There is some evidence that memantine may prevent numerous downstream pathological processes; additionally, findings suggest that memantine may reduce the hyperphosphorylation of tau, as well as promote non-amyloidogenic processing of APP, preventing further amyloid aggregation (Robinson & Keating, 2006). Memantine has a different mechanism of action from the AChEIs, so the two can be used together to treat AD (Robinson & Keating, 2006). Common adverse effects of memantine include hypertension and somnolence (Robinson & Keating, 2006).
When assessing the efficacy of the drugs in improving cognitive function in mild to moderate AD, all drugs were significantly more effective than placebo in reducing cognitive decline. Galantamine emerged as the most effective drug, followed by donepezil, rivastigmine and memantine (Dou, et al., 2018). Longitudinal studies have further indicated that only galantamine reduces the risk of severe dementia. Nevertheless, all drugs displayed a consistent and moderate impact on reducing cognitive decline over longer periods of time, and all decreased the risk of mortality in AD patients (Xu, et al., 2021). However, a trade-off between drug tolerability and efficacy exists: the higher the efficacy of a drug in decreasing cognitive decline, the lower its tolerability. Among the drugs, memantine proved the most tolerable, followed by donepezil, rivastigmine and galantamine (Dou, et al., 2018).
For moderate to severe AD, treatment options are more limited: only memantine and donepezil can be used. All the potential treatments exhibited a significant effect when compared to placebo, with no statistically significant difference observed between the efficacy of the two individual drugs. Combination therapy involving memantine and donepezil was demonstrated to be the most effective, followed by donepezil, then memantine. Similarly, there was a trade-off between efficacy and tolerability: the most tolerable course of treatment was memantine, followed by donepezil, then the combination of memantine and donepezil (Dou, et al., 2018).
In terms of improving global function, rivastigmine yielded the most significant enhancement in activities of daily living, followed by combination therapy. Comparisons of all the possible treatment options showed that although the treatments are effective in treating cognitive symptoms, none exhibited improvement in neurobehavioural symptoms (Dou, et al., 2018).
However, it is important to note that despite the established hierarchy, the rankings are somewhat imprecise, due to overlapping confidence intervals in the efficacy scores. This implies that all treatments are effective and should be used based on the individual and how they respond to the drug.
Extensive research has been dedicated to treatments targeting the biomarkers of AD, specifically the extracellular plaques. In January 2023, the US FDA approved the use of a new drug, Lecanemab, to treat AD, following clinical trials that consistently demonstrated positive effects of the drug on reducing clinical decline (FDA, 2023). Lecanemab is a humanised monoclonal antibody with a high affinity for beta-amyloid protofibrils. The drug targets the aggregation of soluble and insoluble beta-amyloid in AD, as it is possible that these beta-amyloid deposits on grey matter initiate or potentiate the pathological processes in AD. Lecanemab aims to reduce beta-amyloid aggregation, in the hope of slowing the progression of AD. Lecanemab is administered in the early stages of AD, where there is evidence of beta-amyloid aggregation from cerebrospinal fluid testing or PET scans. Patients are administered 10 mg of Lecanemab per kilogram of body weight intravenously once every two weeks (van Dyck, et al., 2023).
Multiple consistent clinical trials have shown that Lecanemab effectively reduces amyloid plaques and clinical decline over an 18-month treatment period. During this timeframe, Lecanemab significantly reduced amyloid PET levels in the brain compared to placebo. Notably, a reduction in the plasma beta-amyloid 42/40 ratio as well as plasma p-tau181 was also observed (McDade, et al., 2023). Comparison of performance on several cognitive and global function scales shows a significant reduction in impairment across the different tests in the treatment group compared to placebo, implying that the drug is effective in mitigating cognitive and global function decline (van Dyck, et al., 2023). Findings also suggest that the differences in cognitive function between the treatment and placebo groups are maintained even after discontinuation of treatment, although the rate of clinical progression is the same in both groups after discontinuation (McDade, et al., 2023).
However, given that the drug has only recently gained approval, the long-term effects of the drug on clinical decline and amyloid plaques are unknown.
Phase I clinical trials of Lecanemab demonstrated that the drug was well tolerated in AD patients (Logovinsky, et al., 2016). Phase II and III trials continued to support the relatively high tolerability of Lecanemab, and common adverse effects were identified. In trials with patients receiving multiple doses of Lecanemab, the adverse effects were classified as mild to moderate, with common effects including headache, orthostatic hypotension, and respiratory tract infection (Logovinsky, et al., 2016). Trials found that these Lecanemab-related reactions occurred in around 26.4% of participants (van Dyck, et al., 2023). Additionally, there was an approximate 10% incidence of amyloid-related imaging abnormalities (ARIA) with effusions/oedema in phase II and III clinical trials (Swanson, et al., 2021; van Dyck, et al., 2023).
With a new drug on the market, it is important to evaluate its comparative merits against existing pharmacological treatments to determine the most suitable option. In this context, we delve into the comparison between Lecanemab and current treatments across various dimensions.
On a theoretical basis, Lecanemab exhibits the capacity to target the biomarkers of AD, whereas the current treatments are only symptomatic. Compared to current treatments, Lecanemab is able to reduce the number of extracellular amyloid plaques, which could aid in slowing the progression of the disease. This could imply that Lecanemab may be a better treatment for AD than current treatments. However, a limitation of this is that the exact role of the amyloid plaques in AD is unknown. Thus, whether disease progression can be slowed by targeting these plaques is also unknown.
In current clinical trials, the effects of Lecanemab are only tested on patients in the early, preclinical stages of AD. These participants receive an AD diagnosis based on biomarker testing.
Consequently, Lecanemab’s utility is confined to those in the initial phases of the disease. On the other hand, the current treatments are useful in managing mild to severe stages of AD. AD is challenging to diagnose based on clinical symptoms alone, and often requires PET scans and CSF testing to diagnose in its earlier stages. This implies that current treatments may be useful to a wider population of individuals with AD, as they can be administered to individuals in all stages of the disease. Additionally, Lecanemab is administered intravenously on a biweekly basis, whereas the other forms of treatment are typically administered orally on a daily basis (Kumar, et al., 2022). The method of administration for Lecanemab is more complex and requires professionals to administer the drug. This complexity could potentially render current treatments a more convenient option for patients.
Upon examining the variations in Alzheimer’s Disease Assessment Scale – Cognitive Subscale (ADAS-cog) scores across different reviews of the treatments, it becomes apparent that Lecanemab does not have a significantly better effect on symptoms compared to other current pharmacological treatments (see Table 1); the effects of Lecanemab and current treatments on clinical symptoms are similar. It is important to note that no statistical analysis was conducted to assess whether these differences are statistically significant. These conclusions were made simply by observing the mean difference in ADAS-cog scores between baseline and post-treatment. To further determine the differences in efficacy, a more formal statistical analysis should be conducted.
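Such a formal analysis could take the form of an independent two-sample test on change-from-baseline scores. As a minimal sketch, assuming two hypothetical samples of ADAS-cog change scores (all numbers below are invented for illustration and are not taken from the cited trials), Welch's t-statistic for unequal variances could be computed as follows:

```python
import math

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with unequal variances."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variance of a
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)  # sample variance of b
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical ADAS-cog change-from-baseline scores (negative = less impairment)
treatment = [-2.1, -1.4, -3.0, -0.8, -2.5, -1.9]
placebo   = [-0.5, 0.3, -1.1, 0.9, -0.2, 0.4]

t = welch_t(treatment, placebo)
print(round(t, 2))  # a large negative t suggests greater improvement in treatment
```

The resulting statistic would then be compared against a t-distribution with Welch–Satterthwaite degrees of freedom to obtain a p-value; in practice a statistics package would handle this step.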
Table 1. The difference in ADAS-cog scores for each treatment type between baseline and post-treatment. Data was extracted from each of the sources as listed in the table. ADAS-cog is the abbreviation for Alzheimer’s Disease Assessment Scale – Cognitive Subscale. The scale runs from 0 to 70, with 0 indicating no errors and 70 indicating severe cognitive impairment. The negative change implies that all treatment types resulted in reduced cognitive impairment.
Given the similarity in their effects on clinical symptoms, this could indicate that Lecanemab and current treatments have similar efficacy. However, the studies investigating current pharmacological treatments typically only investigate effects over 24-26 weeks, whereas the studies investigating Lecanemab range over 18 months. This disparity arises from the time it takes for Lecanemab to show a significantly higher probability of efficacy than placebo, as evidenced by findings from clinical trials (Swanson, et al., 2021). This implies that current treatments are more effective in slowing clinical decline in a shorter amount of time. Looking at the long-term effects of treatments, studies of AChEIs’ influence on cognitive decline show a moderate effect on the reduction of the decline, yet they fall short in mitigating the risk of developing severe dementia. Only galantamine showed a significant effect in reducing the risk of severe dementia. The long-term effects of Lecanemab cannot be compared to those of current treatments, as there is not yet evidence of the long-term effects. Further research should be conducted to facilitate a comprehensive comparison in this regard.
The nature of the side effects associated with different treatment types seems to differ. AChEIs are associated with gastrointestinal side effects, such as severe vomiting, nausea, diarrhoea, and muscle weakness. In studies of current treatments, discontinuation rates in treatment groups were significantly higher than those in placebo groups (Takeda, et al., 2006). In studies of Lecanemab, approximately 36% of participants in treatment groups discontinued, mainly due to ARIA-E (amyloid-related imaging abnormalities with effusion/oedema), compared to 23.7% discontinuation in placebo groups. Discontinuation due to non-ARIA-E events was similar in placebo and treatment groups. Most of the ARIA-E cases occurred in patients with the ApoE4 allele, and cases were mild to moderate in severity. Cases of ARIA-E occurred within the first few weeks of treatment, and all resolved within the expected timeframe (Swanson, et al., 2021). This could potentially imply that Lecanemab is more tolerable than current pharmacological treatments, although one should consider the difference in the nature of the adverse events.
There may be some evidence suggesting that Lecanemab is more cost-effective than the other drugs. There is little to no evidence to show that the AChEI improves the quality of life (Takeda, et al., 2006). On the other hand, economic simulations of treatment with Lecanemab indicate that using Lecanemab alongside the standard of care would improve quality of life and reduce the economic burden on carers and family (Tahami Monfared, et al., 2023). It is important to note that these are simply findings from economic simulations. There is no evidence yet of the effect of Lecanemab on the quality of life. Thus, it is difficult to draw conclusions on the societal value of Lecanemab compared to current treatments.
Through comparison across different dimensions, it can be concluded that Lecanemab may be useful in treating AD, as it provides additional benefits to AD patients. However, it is important to note that Lecanemab and the current treatments differ, and which treatment is more beneficial will depend on the patient themselves. Additionally, this is only a preliminary comparison of treatments for AD. Further research is required to compare the various treatments in a rigorous, formal way.
This could imply that tangles contribute more to the neurodegeneration in AD than plaques do. Thus, it would be worthwhile to look into the role of the tangles in AD and to develop treatments that target the hyperphosphorylation of tau.
Future studies should follow Lecanemab patients to examine the effects of the drug on clinical decline and the biomarkers of AD (beta-amyloid aggregation and hyperphosphorylated tau levels). As mentioned previously, more comprehensive and rigorous reviews of how Lecanemab compares to existing pharmacological treatments would be beneficial for patients. The interactions between existing pharmacological treatments and Lecanemab should also be studied to explore the potential for combination therapies. Some of the current pharmacological treatments, such as donepezil and memantine, are combined to treat more severe cases of AD. It would be interesting to investigate whether Lecanemab can be combined with other pharmacological treatments to become more effective.
In addition to research on Lecanemab, future treatments for AD should also be investigated. Possible treatments include gene therapy, which targets the genes that are associated with AD. Research on the use of pharmacological and non-pharmacological treatments in combination should also be conducted. Further research on the role of Aβ aggregation and neurofibrillary tangles in AD is crucial, as it could help us understand which biomarkers need to be targeted to treat AD. Research could also consider looking further into neurofibrillary tangles, as they are more strongly correlated with neurodegeneration in AD than plaques are (Kumar, et al., 2022).
In conclusion, Lecanemab is a step in the right direction for the treatment of AD. It is one of the first drugs that targets and reduces the biomarkers of AD, and it consistently reduces clinical decline in AD patients. Although existing pharmacological treatments should still be used, especially for treating patients in more severe stages of AD, the approval of Lecanemab may help provide further insight into the disease itself and the role the biomarkers play in its progression. This enables more research into AD, which can lead to the development of better treatments, and potentially a cure, for the disease.
Birks, J. & Harvey, R.J. (2009) ‘Donepezil for dementia due to Alzheimer's disease.’ Cochrane Database of Systematic Reviews, Issue 1. https://doi.org/10.1002/14651858.CD001190.pub2
Dou, K.X., Tan, M.S., Tan, C.C., Cao, X.P., Hou, X.H., Guo, Q.H., Tan, L., Mok, V., Yu, J.T. (2018) ‘Comparative safety and effectiveness of cholinesterase inhibitors and memantine for Alzheimer’s disease: a network meta-analysis of 41 randomized controlled trials.’ Alzheimer's Research and Therapy, 10(126). https://doi.org/10.1186/s13195-018-0457-9
FDA (2023) FDA Grants Accelerated Approval for Alzheimer’s Disease Treatment. [Online] Available at: https://www.fda.gov/news-events/press-announcements/fda-grants-accelerated-approval-alzheimers-disease-treatment [Accessed 6 May 2023].
Jack, C.R. Jr., Bennett, D.A., Blennow, K., Carrillo, M.C., Dunn, B., Haeberlein, S.B., Holtzman, D.M., Jagust, W., Jessen, F., Karlawish, J., Liu, E., Molinuevo, J.L., Montine, T., Phelps, C., Rankin, K.P., Rowe, C.C., Scheltens, P., Siemers, E., Snyder, H.M., Sperling, R. (2018) ‘NIA-AA Research Framework: Toward a biological definition of Alzheimer's disease.’ Alzheimer's & Dementia, 14(4), pp. 535-562. https://doi.org/10.1016/j.jalz.2018.02.018
Kumar, A., Sidhu, J., Goyal, A. & Tsao, J. W. (2022) Alzheimer's Disease. Treasure Island (FL): StatPearls Publishing
Logovinsky, V., Satlin, A., Lai, R., Swanson, C., Kaplow, J., Osswald, G., Basun, H., Lannfelt, L. (2016) ‘Safety and tolerability of BAN2401 - a clinical study in Alzheimer’s disease with a protofibril selective Aβ antibody.’ Alzheimer's Research & Therapy, 8(1). doi: 10.1186/s13195-016-0181-2.
McDade, E., Cummings, J.L., Dhadda, S., Swanson, C.J., Reyderman, L., Kanekiyo, M., Koyama, A., Irizarry, M., Kramer, L.D., Bateman, R.J. (2023) ‘Lecanemab in patients with early Alzheimer’s disease: detailed results on biomarker, cognitive, and clinical effects from the randomized and open-label extension of the phase 2 proof-of-concept study.’ Alzheimer's Research & Therapy, 14(1). doi: 10.1186/s13195-022-01124-2.
Olin, J.T. & Schneider, L. (2002) ‘Galantamine for Alzheimer's disease.’ Cochrane Database of Systematic Reviews, Issue 3. https://doi.org/10.1002/14651858.CD001747
Onor, M.L., Trevisiol, M. & Aguglia, E. (2007) ‘Rivastigmine in the treatment of Alzheimer’s disease: an update.’ Clinical Interventions in Aging, 2(1), pp. 17-32. doi: 10.2147/ciia.2007.2.1.17
Robinson, D.M. & Keating, G.M. (2006) ‘Memantine: a review of its use in Alzheimer's disease.’ Drugs, 66(11), pp. 1515-1534. https://doi.org/10.2165/00003495-200666110-00015
Sharma, K. (2019) ‘Cholinesterase inhibitors as Alzheimer's therapeutics (Review).’ Molecular Medicine Reports, 20(2), pp. 1479-1487. doi: 10.3892/mmr.2019.10374
Swanson, C.J., Zhang, Y., Dhadda, S., Wang, J., Kaplow, J., Lai, R.Y.K., Lannfelt, L., Bradley, H., Rabe, M., Koyama, A., Reyderman, L., Berry, D.A., Berry, S., Gordon, R., Kramer, L.D., Cummings, J.L. (2021) ‘A randomized, double-blind, phase 2b proof-of-concept clinical trial in early Alzheimer’s disease with lecanemab, an anti-Aβ protofibril antibody.’ Alzheimer's Research & Therapy, 13(1). doi: 10.1186/s13195-021-00813-8.
Tahami Monfared, A.A., Ye, W., Sardesai, A., Folse, H., Chavan, A., Kang, K., Zhang, Q. (2023) ‘Estimated Societal Value of Lecanemab in Patients with Early Alzheimer’s Disease Using Simulation Modeling.’ Neurology and Therapy. doi: 10.1007/s40120-023-00460-1
Takeda, A., Loveman, E., Clegg, A., Kirby, J., Picot, J., Payne, E., Green, C. (2006) ‘A systematic review of the clinical effectiveness of donepezil, rivastigmine and galantamine on cognition, quality of life and adverse events in Alzheimer's disease.’ International Journal of Geriatric Psychiatry, 21(1), pp. 17-28. https://doi.org/10.1002/gps.1402
van Dyck, C.H., Swanson, C.J., Aisen, P., Bateman, R.J., Chen, C., Gee, M., Kanekiyo, M., Li, D., Reyderman, L., Cohen, S., Froelich, L., Katayama, S., Sabbagh, M., Vellas, B., Watson, D., Dhadda, S., Irizarry, M., Kramer, L.D., Iwatsubo, T. (2023) ‘Lecanemab in Early Alzheimer's Disease.’ The New England Journal of Medicine, 388(1), pp. 9-21. doi: 10.1056/NEJMoa2212948.
Xu, H., Garcia-Ptacek, S., Jönsson, L., Wimo, A., Nordström, P., Eriksdotter, M. (2021) ‘Long-term Effects of Cholinesterase Inhibitors on Cognitive Decline and Mortality.’ Neurology, 96(17). https://doi.org/10.1212/WNL.0000000000011832
Can the world’s plastic pollution problem be improved by Bioplastics Formulations? Josie Sequeira-Shuker
Donuts and Other Brainy Shapes: Topological Data Analysis in Neuroscience, Sarah Kurbanov
The Complexities Behind Feeding Ruminant Livestock Seaweed in Order to Reduce Methane Emissions, Violet Melcher
Manufacturing a synthetic gut microbiome, Ideja Bajra
Using electrochemistry and the ocean to harness green energy and reduce the presence of atmospheric carbon dioxide, Tom Burton
The future of Hydrogen: Green or Green-Washing? Emily Feeke
To what extent can saving the world’s peatlands affect the climate crisis? Mia Cammarota
Small eruption with huge impacts, Violet Melcher
Reviewed and edited by T. Burton
ABSTRACT: There is an increasing demand for single-use plastic. Plastics have only been used widely since the 1950s, and yet the world would now seem unimaginable without them. The world has produced nearly nine billion tons of plastic since the 1950s (Geyer et al. 2017) and its use continues to expand at a prolific rate. For example, according to a high-profile report by the World Economic Forum (2016), more plastic has been produced in the last ten years than in the whole of the twentieth century. Of the nine billion tons of plastic ever produced, 6.3 billion tons is now waste and sits in landfill or the open environment (Geyer et al. 2017). This report found that despite the cultural shift towards ‘eco-friendly’ plastics, the world’s plastic pollution is not being mitigated by bioplastic formulations. In fact, the lack of transparency to consumers creates disingenuous purchasing decisions that can worsen environmental pollution.
The environmental problems caused by plastic came to world attention in 2016 with the publication of a report by the World Economic Forum (2016). The report made the disturbing claim that the world’s oceans will contain more plastic than fish by 2050. At the same time, David Attenborough’s BBC series Blue Planet (BBC Media Centre, 2017) brought the problem to public attention by showing images of wildlife ingesting, and entangled in, discarded plastic items around the world. Since then, many scientific studies have consistently found levels of environmental plastic contamination to be many times higher than previously recorded (Lebreton, 2018; Lavers and Bond, 2017). Furthermore, plastic is now present in areas of the world that were previously thought to be untouched, e.g. polar environments and the deep sea (Harrison, 2018; Bergmann et al. 2019). Other research studies have documented risks to human health through the uptake of plastic constituents in the
food chain and plastic particles in the atmosphere (Kosuth, 2018). These credible, scientific studies corroborate the conclusion that the world is at a near-crisis point concerning the effects of plastic pollution.
In response, many commercial companies are producing products made of bioplastic formulations that they argue will offer a solution to the plastic pollution problem. Such products are frequently marketed with claims of being “natural,” “green,” “eco,” or “environmentally friendly”, despite evidence of these qualities seldom being provided.
With these issues in mind, this report attempts to answer the question: are bioplastic formulations a realistic and practical solution to the world’s plastic pollution problem?
According to European Bioplastics (2019), a substance can be defined as a bioplastic if (i) it is biobased and/or (ii) it is biodegradable. A material only has to fulfil one of these criteria to be defined as a bioplastic.
Biodegradability can be defined as the destruction of organic compounds by microorganisms that break apart chemical bonds in the material (Siracusa, 2019). The argument put forward by commercial organisations producing bioplastics is
that if plastic is biodegradable, there is potentially less danger of plastic waste polluting the environment. However, perhaps surprisingly, there is no universal or legal definition of biodegradability. In other words, a plastic can currently be described as “biodegradable plastic” without any certification or adherence to official standards of biodegradability (Siracusa 2019; Napper et al. 2019). Without legally enforceable standards, the claim that products are “biodegradable” has limited credibility.
There have been some recent positive developments in biodegradable bioplastics that commercial companies claim can decompose in the UK outdoor climate without industrial facilities (so long as oxygen is readily available). Examples made from potato starch include Bioplast 300 and Polywrap, which are used as mail wrappings. Another example is Ooho capsules. These capsules, made from seaweed, can be filled with very small amounts of water and are both compostable and edible. They were used in the 2019 London Marathon to distribute water to competitors. A certification specification label known as “OK Compost Home” is used to endorse these plastics (British Plastics Federation, 2019). It is important to note, however, that “OK Compost Home” labelled items represent a tiny minority of plastics and these have minimal uses. This is because “OK Compost Home” plastic easily breaks down and is non-durable, a property
that makes it unsuitable for the majority of the more durable items that we need plastics for. Furthermore, if compostable plastics end up in landfill (without oxygen) or are mistaken for recyclable plastic by confused consumers, they create the problems (methane emissions and contamination of recyclable plastic) associated with the EN 13432 standard plastics discussed above. In response to these issues, the bioplastics industry and academic institutions such as Imperial College are investing heavily in the development of innovative materials that have both compostability and higher durability and usability.
Table 1: Classification of plastics (Kjeldsen et al. 2019).
One of the recent criticisms of most scientific studies of bioplastics is that they have been conducted in controlled laboratory conditions and not in the natural environment (landfill, oceans, open countryside). Critics point out that materials often behave in very different ways when subjected to the variables that occur in natural environments rather than in strictly controlled laboratory conditions.
Only one study (Napper and Thompson, 2019) could be found that has systematically attempted to compare (with an experimental methodology) what happens to different types of bioplastic when discarded into the natural environment. This scientific study (conducted in the Faculty of Science and Engineering at the University of Plymouth) was also widely referred to in the national media. Its startling key finding was that “biodegradable” plastic shopping bags, buried in the ground or at sea, could still hold shopping after three years. In this study, the authors examined what happened to plastic bags that were labelled by the manufacturer as ‘biodegradable’, ‘oxo-biodegradable’, ‘compostable’, and ‘high-density polyethylene’ (i.e., a conventional plastic carrier bag which did not claim any biodegradable properties). They tested each type of plastic in three environments: (1) buried in the earth, (2) exposed in the open air and sunlight outdoors, and (3) in the sea (salt water). They tested both whole (intact) bags and also 25mm strips that had been cut from the centre of the bags.
In the “whole bag” condition, the bags labelled ‘biodegradable’, ‘oxo-biodegradable’, and high-density polyethylene (conventional) remained intact in all conditions (i.e. did not biodegrade). The authors even loaded the test bags with groceries from a local supermarket and found that they could still hold the weight. The authors said this showed that these supposedly “environmentally friendly” bags were no better than conventional bags. In the “25mm strip” condition, the biodegradable, oxo-biodegradable, and conventional (non-bio plastic) materials did show some degradation into small pieces but did not biodegrade into harmless substances. The authors point out how this degradation leads to more significant potential environmental risks than the intact items because of the impossibility of removing micro-sized pieces of plastic from the environment and the increased risk of uptake of toxins into the food chain. The authors state that none of the plastics that claimed to have enhanced degradation consistently deteriorated faster than conventional polyethylene. The compostable bag was the only material that completely disappeared within the sea environment but, notably, it remained wholly intact when buried in the soil. A limitation of the findings of this study is that it has yet to be replicated by researchers in other academic institutions. If the findings discussed above hold true when the study is conducted in other settings by different academics, then greater credibility can be applied to the results.
Bioplastics is a fast-growing industry, driven by rising consumer awareness of the impact of plastic pollution and demand for products that are seen as ‘eco-friendly’. This report concludes that bioplastics are not a practical solution to the world’s plastic pollution problem. While it is accurate that some bioplastics do biodegrade and decompose, it is crucial to note that this will only occur under the right physical conditions, such as those provided in an industrial composter or those manipulated in a laboratory. These are generally not the conditions found in the natural environment or in landfill, where most bioplastic waste ends up. Even though industrial composters could be used to compost this bioplastic safely, most of the waste does not reach an industrial composter because of the lack of local authority composting infrastructure and waste collection practices. This problem highlights a real need for honest labelling of compostable plastic products.
In summary, while some bioplastic formulations are theoretically biodegradable, they are not making an impact on the world’s plastic pollution problem. Crucially, most consumers are not aware of these constraints and make their purchasing decisions based on the disingenuous message that these products are good for the natural environment. The conclusion is that they are currently no better, and in some cases worse, than conventional plastics with regard to decreasing the world’s pollution problem.
1. Geyer, R., Jambeck, J.R. and Law, K.L. (2017) ‘Production, use, and fate of all plastics ever made’, Science Advances. American Association for the Advancement of Science, 3(7). doi: 10.1126/sciadv.1700782
2. World Economic Forum (2016) The New Plastics Economy: Rethinking the future of plastics, Ellen MacArthur Foundation
3. BBC Media Centre (2017) ‘Blue Planet II Episode 7: Our Blue Planet’.
4. Lebreton, L. et al. (2018) ‘Evidence that the Great Pacific Garbage Patch is rapidly accumulating plastic’, Scientific Reports. Nature Publishing Group, 8(1). doi: 10.1038/s41598-018-22939-w
5. Harrison, J.P. et al. (2018) ‘Biodegradability standards for carrier bags and plastic films in aquatic environments: a critical review’, Royal Society Open Science, 5(5), p. 171792. doi: 10.1098/rsos.171792.
6. Kosuth, M., Mason, S.A. and Wattenberg, E.V. (2018) ‘Anthropogenic contamination of tap water, beer, and sea salt’, PLoS ONE. Public Library of Science, 13(4). doi: 10.1371/journal.pone.0194970.
7. European Bioplastics (2019). Available at: https://www.european-bioplastics.org/about-us/ (Accessed: 1 August 2019).
8. Kjeldsen A, P. M. and Lilley C, G. E. (2019)
9. Siracusa, V. (2019) ‘Microbial degradation of synthetic biopolymers waste’, Polymers. MDPI AG. doi: 10.3390/polym11061066
10. Napper, I.E. and Thompson, R.C. (2019) ‘Environmental Deterioration of Biodegradable, Oxo-biodegradable, Compostable, and Conventional Plastic Carrier Bags in the Sea, Soil, and Open-Air Over a 3-Year Period’, Environmental Science & Technology, 53(9), pp. 4775–4783. doi: 10.1021/acs.est.8b06984.
11. British Plastics Federation (2018) Packaging waste directive and standards for compostability. Available at: https://www.bpf.co.uk/topics/standards_for_compostability.aspx (Accessed: 29 September 2019).
Sarah Kurbanov, Neuroscience (Topology)
Reviewed and edited by L. Deen and S. Sandanatavan
ABSTRACT: Topological analysis has recently allowed neuroscience to go beyond the study of pairwise connections between neurons, to global network searches for persistent homologies, in order to better model and understand how neurons communicate over varying distances in cliques and cavities, through tools such as the simplicial complex. The fundamental applications of topology to neuroscience saw a heightened understanding of the neuronal code.
Topology has allowed researchers to build upon the work discovering place cells and grid cells (the cells responsible for a cognitive map of the environment and spatial navigation) to model the perceived environment through topological renderings of place fields. The search for persistent homologies, fundamental to topology, uncovered how grid cells represent an individual’s location in the environment on the surface of a torus.
Topological methods for processing large compilations of neuronal data have revealed increasingly complex structures in neuronal connectivity, impacting understanding of the connectome: not only the local properties of neuronal groups, but also their significance to the global network.
In the eyes of topology, a doughnut and a coffee cup are the same thing, or ‘topologically equivalent’, because either can be bent until it resembles the other, preserving the same number of holes in the same dimensions.
Counting holes in dimensions is what topology does best. Topology is the branch of mathematics concerned with the flexible shapes and surfaces of data, looking to simplify and find patterns in multi-dimensional structures (Chazal & Michel, 2021).
Humans can look at a circle and know it is a circle, but for a computer algorithm this is far more complex, because most data analysis techniques are based on linear mathematics, which may not pick up on the significance of the circle and the relationships it may represent, even when the circle appears hundreds of times.
This is where topology comes in, with its usefulness in modeling and finding persistent homologies in the noise of data (in other words, shapes that won’t go away). Topological data analysis works like constructing a skyscraper: first by finding and connecting points to create the beginnings of scaffolding, and then, once a structure begins to form, building it up to extend through many dimensions (sometimes hundreds). The essential materials come from the original data, but a new structure is built and observed under pressure and change (Chazal & Michel, 2021).
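To make the scaffolding analogy concrete, the sketch below (a toy example with invented point coordinates, not code from the cited work) builds the 1-skeleton of a Vietoris–Rips complex at a given scale and counts its connected components, the simplest topological feature one can track. Features that survive over a wide range of scales are the "persistent" ones:

```python
from itertools import combinations
import math

def h0_components(points, eps):
    """Betti number b0: connected components of the Vietoris-Rips 1-skeleton,
    where two points are joined whenever they lie within eps of each other."""
    parent = list(range(len(points)))
    def find(i):  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for (i, p), (j, q) in combinations(enumerate(points), 2):
        if math.dist(p, q) <= eps:
            parent[find(i)] = find(j)
    return len({find(i) for i in range(len(points))})

# Two clusters of points: separate components at small scales,
# merging into one as eps grows -- the two-cluster feature persists
# over a wide range of scales before disappearing.
pts = [(0, 0), (0.3, 0.1), (0.1, 0.4), (5, 5), (5.2, 4.9)]
print([h0_components(pts, e) for e in (0.2, 0.6, 8.0)])  # components shrink as eps grows
```

Full persistent-homology software (tracking holes in higher dimensions as well) follows the same scheme but records the birth and death scale of every feature.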
How does this all relate to neuroscience? Neuroscience primarily studies the structure and function of the brain, and how neurons interact with each other to produce the processes leading to a behavior. Recently, researchers have begun to use topological analysis to aid this. This paper will explore how topological analysis has been used to discover that place fields and neural codes are driven by topological notions (Curto, 2017). The persistent homology of the torus was discovered through observation of a singular grid cell (Guanella et al., 2007). On the large scale of the entire grid cell network, the model of the torus was also found to persist (Gardner et al., 2022). Similarly,
topological analysis of large amounts of data led Sizemore et al. to discover the persistent homologies of cliques and cavities in the neuronal connectome, and large-scale topological modelling utilising such homologies might become the key to understanding how they function (Reimann et al., 2017).
From looking at the structure of a single neuron, to the structural connectivity of the entire network, to how this network works to produce a behavior, topology has changed the game for neuroscientists, revealing itself as a Swiss army knife in a world of tools.
The hippocampus is the part of the brain responsible for encoding memories, and it contains a system of complex specialised neuronal cells which encode multiple aspects at one time. For spatial navigational memory these include position and direction, both of where the individual has been and where they are going (citation), creating a cognitive map of the environment. The topological name for a doughnut is a torus, and new research shows that the activity of the neurons which form the circuits that map the environment resides on a toroidal manifold, such that positions in the environment correspond to positions on the torus.
In a study which would go on to win the Nobel Prize years later, in 2014, O’Keefe (1976) demonstrated, while recording signals from individual neurons in the hippocampus of a freely moving rat, that certain cells were activated in response to the specific locations the rat visited. It was concluded that hippocampal place cells, which are cells that encode the environment in the brain, were generating maps of the environment based on information from the sum of their activity, and that the memory of this environment was stored within the cells (O’Keefe, 1976).
Neurons themselves do not have access to our surroundings. They receive information through action potentials, or the neuronal code. Topologists have created a model of how place cells map our environment, by covering the environment with arbitrary continuous shapes, for example circles (see Figure 1). Using a topological framework, it becomes possible to model the formation of a cognitive map of space. The information about the environment is gained from neuronal spiking activity and its organisation (Curto & Itskov, 2008).
For modelling purposes, a code can be created by mapping the co-firing of a string of neurons to a physical space. A neuron is given a 1 if it fires and a 0 if it does not, producing sets of codewords indicating where the circles overlap. For example, if the codeword produced is 0111, it means three neurons fired together while one did not (see Figure 2). Translated, it means three circles overlapped while one did not. The study by Curto & Itskov (2008) observed that place fields which correspond to locations that are nearby in physical space overlap, and the neurons in those fields will be active synchronously. They then used topology to work backwards from the neuronal firing code to create simplicial complexes (discussed further on), and showed it was possible to reconstruct the topology of the represented environment from hippocampal place cell activity. This was the original argument for a fundamentally topological hippocampal place cell code. Further studies explored the specifics of place cell encoding through topology, and one study by Dabaghian et al. (2014) found that the activity of place cells in the rat hippocampus did not change after the shape of the tracks the rats were running on was altered. This suggests place cells encode more of a space’s topological qualities, relying on the intersection of overlapping place fields, rather than creating a geometric map of distances and angles (Dabaghian et al., 2014).
Figure 1: A topological representation of place fields (shaded circles) covering three different environments for a rat: square box environment, an environment with an obstacle in the center, and two arms of a maze (Curto, 2017)
Figure 2: Spike trains recorded for a population of neurons firing at the same time, translated to binary codewords (Curto, 2017)
The connectome is a term describing the network of links between neurons in the brain, composed of projections called axons, which make up the brain’s white matter, connecting neuron cell bodies, which make up the grey matter. The grey matter is where cognition and information processing take place, and the white matter is the network of roads upon which information travels. Mapping the connectome is one of the central quests of neuroscience, and it requires the right mathematical tools. Topologists set themselves the task of looking for persistent homologies – features that do not change even as the point of view on them changes – and topological neuroscience looks for them in an attempt to understand the connectome.
Sizemore et al. (2017) compared the connections between 83 different regions of 8 different brains involved in cognitive systems to create a diagram upon which topological analysis could be performed. To reveal these pathways, they used diffusion spectrum imaging, a technique for studying fibres of white matter by imaging the pathway of water diffusing along them. The topological structural analysis revealed several persistent homologies. One was that certain groups of nodes (neuronal points of intersection) form structures called cliques: sets of nodes that are all-to-all connected (each node is connected to every other node). Through these cliques the brain is able to perform processing rapidly and locally, as brain networks otherwise tend to be sparsely connected; an analogy could be a knot in a rope bringing the points in the knot very close together. Sets of brain regions in a clique might possess the same function, need to share information quickly, or even operate in unison. Another finding was topological cavities of different dimensions, which persisted consistently across the subjects’ brains – another example of a persistent homology. These differ from cliques in that they are not dense; they serve to extend paths of information transmission, spanning different brain regions. They link nodes together in closed-loop cycles – one node to another, and that to another, and so on until the final node connects back to the first. Computations flow serially along these cavities to affect cognition in converging and diverging patterns. These cycles create a highly structured neuronal circuit that carries signals around the brain and allows for feedback loops and complex cognitive processes such as memory (Sizemore et al., 2017).
Simplicial complexes offer a way to begin modelling and understanding neuronal signalling.
If one were to imagine three neurons, or three areas of the brain, which communicate without knowing exactly how, two possible models of connectivity might come to mind: either the three regions function in some kind of loop, all activating in a temporal sequence of some sort, or they all activate at the same time. Using a method which can only look at pairwise connections between neurons, it would be difficult to understand the connectivity between all three possible pairs of regions. A different type of language, such as the topological, can encode a situation of triple connectivity using something called simplicial complexes (Giusti et al., 2016). The Blue Brain group (Reimann et al., 2017) took a massive simulation of a rodent neocortex, containing representations of the individual neurons and the synapses connecting them, and performed topological analysis on it, drawing out cliques in the form of triangles at each dimension and creating a simplicial complex (see Figure 3). Three neurons with three synapses transmitting between them form one hollow triangle. Larger cliques of neurons were filled in with higher dimensions of triangles; for example, four neurons made a tetrahedron (a three-dimensional pyramid with four faces). The maximum dimension filled in was seven, because the maximum number of neurons observed co-firing was eight. A multi-dimensional structure was formed by all of these figures overlapping. Because this was a simulation, and not a real brain, researchers had temporal control of it and could pause it at any given moment to get a freeze-frame. They worked with many freeze-frames to create simplicial complexes and analysed how they changed with time. Upon receiving a stimulus, the structures grew in complexity, through dimensions, until the activity collapsed. This reveals a high level of complexity and organisation in firing in response to a stimulus, seemingly unique to cells and therefore processes in the brain. The construction of simplicial complexes is an example of a quantitative topological method which can be used to find persistent homologies and to address questions of complex behavioural processes in neuronal systems.

Figure 3: Example of how cliques of neurons of different dimensions are mapped in the brain, with an illustration of simplexes (dimensions) above it. A vertex is a 0-simplex, an edge is a 1-simplex, and so on. Adapted from Centeno et al. (2022).
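The clique-to-simplex construction described here can be sketched with standard-library Python alone: every all-to-all connected set of k+1 nodes is recorded as a k-simplex. The toy connectivity below is invented for illustration; real analyses run on large simulated connectomes with dedicated TDA software.

```python
from itertools import combinations

# Toy connectivity among five neurons (undirected edges); illustrative only.
edges = {(1, 2), (1, 3), (2, 3), (1, 4), (2, 4), (3, 4), (4, 5)}
nodes = {n for e in edges for n in e}

def connected(a, b):
    """True if an undirected edge links neurons a and b."""
    return (a, b) in edges or (b, a) in edges

def cliques_by_dimension(nodes, connected):
    """List every all-to-all connected subset (clique). A clique of
    k+1 neurons is a k-simplex: an edge is a 1-simplex, a triangle a
    2-simplex, a tetrahedron a 3-simplex, and so on."""
    out = {}
    for size in range(2, len(nodes) + 1):
        found = [c for c in combinations(sorted(nodes), size)
                 if all(connected(a, b) for a, b in combinations(c, 2))]
        if found:
            out[size - 1] = found   # key = simplex dimension
    return out

simplices = cliques_by_dimension(nodes, connected)
# Neurons 1-4 are all-to-all connected, so they fill in a tetrahedron
# (3-simplex) on top of its four triangles and six edges.
print(simplices[3])   # → [(1, 2, 3, 4)]
```

Libraries such as GUDHI or giotto-tda implement this "clique complex" construction at scale, but the underlying idea is exactly this enumeration.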
The study that won May-Britt Moser and Edvard Moser the Nobel Prize along with O’Keefe in 2014 was Hafting et al. (2005), which discovered another type of neuron involved in spatial memory: grid cells, located in the entorhinal cortex. These neurons create a grid relative to position, allowing for spatial navigation of an environment.
Topology was used by neuroscientists to better understand how grid cells encode the rat’s location. Guanella et al. (2007) monitored a single grid cell, assigning it on and off values as in Figure 2, and drew a representation of the rat’s environment, marking where the freely moving rat was when the neuron activated. As the rat ran around a square box, the repeating pattern of a hexagonal lattice emerged. This process was repeated with several neurons, until the representation looked like a repetition of the same geometric pattern with the dots offset. The point of this was to figure out how the grid cells represented the spatial locations in the environment – in essence, working back from the trace they left.
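The hexagonal firing lattice that emerges can be generated from two basis vectors at 60° to one another. The sketch below uses an arbitrary illustrative spacing of 1.0 and checks the defining signature of such a lattice: every firing location has six nearest neighbours, all at the same distance.

```python
import math

# Two lattice basis vectors at 60 degrees generate the hexagonal grid of
# firing locations; the spacing of 1.0 is an arbitrary illustrative value.
SPACING = 1.0
b1 = (SPACING, 0.0)
b2 = (SPACING * math.cos(math.pi / 3), SPACING * math.sin(math.pi / 3))

def firing_locations(n):
    """Integer combinations i*b1 + j*b2 tile the plane with the
    triangular/hexagonal lattice a single grid cell traces out."""
    return [(i * b1[0] + j * b2[0], i * b1[1] + j * b2[1])
            for i in range(-n, n + 1) for j in range(-n, n + 1)]

pts = firing_locations(2)
# Six nearest neighbours of the origin, all at the same distance --
# the signature of a hexagonal lattice.
origin = (0.0, 0.0)
nearest = sorted(math.dist(origin, p) for p in pts if p != origin)[:6]
print([round(d, 6) for d in nearest])
```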
Imagine a parallelogram with dots at each of the four corners, representing the places in the environment where the grid cells lit up. These four dots can be brought together into two dots on either end of a cylinder by ‘gluing’ opposing long sides of the parallelogram. The cylinder can then be bent until the dots at the ends touch and become one dot, forming a torus (see Figure 4). The torus was the answer to the question of how the grid cells represent the rat’s environment (Guanella et al., 2007). The model of the torus which emerged was clear while measuring a single grid cell as a rat ran around a box; however, it needed to be tested on a larger scale to better understand the structure of the entire underlying cognitive map. This is exactly what was done recently by Gardner et al. (2022), who tested the collective activity of co-firing neurons and found that the activity of the entire network of grid cells resided on the torus, moving along it. The data from an entire network of grid cells was brought together to model global neuronal behaviour. This was the ultimate search for a persistent homology – a search for the torus at a larger scale – which was a job for topological data analysis.
Figure 4: From a grid of neuronal cells, where each red dot at the corners of the parallelogram represents the firing of a single grid cell, to bending the parallelogram into a column, and twisting into a torus. The black lines represent the cuts made on the grid in panel 1, and the white line represents the diagonal distance running between two firings of a single neuron in panel 2. Adapted from Shilnikov & Maurer (2016).
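A minimal numerical sketch of the gluing idea, assuming for simplicity a square (untwisted) unit cell rather than the twisted-torus parallelogram of Guanella et al.: identifying opposite edges means that positions differing by whole cell lengths map to the same torus coordinate, just as one grid cell fires identically at all of them.

```python
# Simplified (square, untwisted) version of the gluing construction.
# Identifying opposite edges of the unit cell means two positions that
# differ by a whole number of cell lengths land on the same point of
# the torus -- as a grid cell fires identically at all of them.
CELL = 1.0  # illustrative period of the firing pattern

def to_torus(x, y, cell=CELL):
    """Wrap a 2-D position onto torus coordinates in [0, cell)."""
    return (x % cell, y % cell)

# Positions one and three cells apart collapse to the same torus point
# (up to floating-point rounding).
print(to_torus(0.3, 0.7))
print(to_torus(1.3, 3.7))
```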
Gardner et al. (2022) utilised the code created by spiking neurons as in Figure 2, recording the state of the neuronal system at different times and accumulating data points. Topological analysis could then be performed to search for the persistent homology of the torus, again through simplicial complexes.
Experiments were done with awake rats, rats in various stages of sleep, rats in a maze, and rats running around a wheel-shaped area – and in all of these states, the activity of grid cells moved along the torus. These results demonstrate that the torus is a product of the internal wiring between neurons, and not of any outside stimuli brought in through sensory input: it is intrinsic to the cells themselves and to how they encode space. The parts of the brain which are at this time hidden from us, unrelated to anything that can be measured externally, can possibly be understood through similar topological analysis in a search for persisting patterns.
A current limitation of topological data analysis in neuroscience is its accessibility: the level of abstract mathematical knowledge needed to apply its tools and interpret their results is quite high, meaning those applying TDA to neuroscience probably need some mathematical training, which narrows the pool of people actually using it. More tutorials on how it works and how to utilise it properly, such as the one recently published by Centeno et al. (2022), will hopefully remedy this. Topological analysis has proved highly useful in global analysis of the neuronal connectome and for exploring the larger structural architecture of the brain. It has uncovered persistent homologies in the form of the torus, from single neurons to the network of grid cells, and cliques and cavities in the connectome study by Sizemore et al. (2017), through the use of simplicial complexes. Modelling using topological analysis on large amounts of neuronal data by the Blue Brain Project (Reimann et al., 2017) also demonstrates how TDA is part of the future of exploring the complexity of neuronal connections, and how these connections work to produce cognitive processes and behaviour. This has been a simple review, a window onto some examples of topology in neuroscience. The complexity of the techniques and ideas being applied is far-reaching and demonstrates how something seemingly abstract can be applied in quite a functional way. From the studies discussed, the application of topology to neuroscience seems natural – and this is because topology is fundamental to the way the brain is functionally organised, and can therefore be used to study fundamental unanswered questions.
Centeno, E.G. et al. (2022) ‘A hands-on tutorial on network and topological neuroscience’, Brain Structure and Function, 227(3), pp. 741–762. doi:10.1007/s00429-021-02435-0.
Chazal, F. and Michel, B. (2021) ‘An introduction to topological data analysis: Fundamental and practical aspects for data scientists’, Frontiers in Artificial Intelligence, 4. doi:10.3389/frai.2021.667963.
Curto, C. (2017) ‘What can topology tell us about the neural code?’, Bulletin of the American Mathematical Society, 54(1), pp. 63–78. doi:10.1090/bull/1554.
Dabaghian, Y., Brandt, V.L. and Frank, L.M. (2014) ‘Reconceiving the hippocampal map as a topological template’, eLife, 3. doi:10.7554/elife.03476.
Gardner, R.J. et al. (2022) ‘Toroidal topology of population activity in grid cells’, Nature, 602(7895), pp. 123–128. doi:10.1038/s41586-021-04268-7.
Giusti, C., Ghrist, R. and Bassett, D.S. (2016) ‘Two’s company, three (or more) is a simplex’, Journal of Computational Neuroscience, 41(1), pp. 1–14. doi:10.1007/s10827-016-0608-6.
Guanella, A., Kiper, D. and Verschure, P. (2007) ‘A model of grid cells based on a twisted torus topology’, International Journal of Neural Systems, 17(04), pp. 231–240. doi:10.1142/s0129065707001093.
Hafting, T. et al. (2005) ‘Microstructure of a spatial map in the entorhinal cortex’, Nature, 436(7052), pp. 801–806. doi:10.1038/nature03721.
O'Keefe, J. (1976) ‘Place units in the hippocampus of the freely moving rat’, Experimental Neurology, 51(1), pp. 78–109. doi:10.1016/0014-4886(76)90055-8.
Reimann, M.W. et al. (2017) ‘Cliques of neurons bound into cavities provide a missing link between structure and function’, Frontiers in Computational Neuroscience. doi:10.3389/fncom.2017.00048.
Shilnikov, A.L. and Maurer, A.P. (2016) ‘The art of grid fields: Geometry of neuronal time’, Frontiers in Neural Circuits, 10. doi:10.3389/fncir.2016.00012.
Sizemore, A.E. et al. (2017) ‘Cliques and cavities in the human connectome’, Journal of Computational Neuroscience, 44(1), pp. 115–145. doi:10.1007/s10827-017-0672-6.
Curto, C. and Itskov, V. (2008) ‘Cell groups reveal structure of stimulus space’, PLoS Computational Biology, 4(10), p. e1000205. doi:10.1371/journal.pcbi.1000205.
Reviewed and edited by L. Deen
ABSTRACT: This paper aims to highlight the benefits and challenges of the recent discovery that the addition of seaweed to livestock feed can curb methane emissions. Through the depiction of ever-increasing global emissions, a need for change is illustrated. Seaweed is fast growing, and its growth is simpler than the terrestrial alternatives. Yet the harvesting of seaweed presents a set of challenges: marine life may be threatened, coastal communities could be impacted, and invasive species have the potential to harm current habitats. Solutions must be found, but what can be done to continue revolutionising how we deal with greenhouse gas emissions while continuing to protect the current climate? What policies must be developed in order to ensure one environment is not helped while another suffers?
Human-driven methane emissions are polluting our atmosphere at an alarming rate, and some believe these emissions can be effectively curbed by feeding livestock seaweed. However, this solution could disrupt marine ecosystems, allow invasive plants to spread, and negatively impact native species.
Methane is the second most abundant Greenhouse Gas after Carbon Dioxide (United States Environmental Protection Agency, 2022). Within the atmosphere, methane not only increases global temperatures but also majorly impacts the climate system as a whole.
Recent data from the National Oceanic and Atmospheric Administration show a continuous increase in the global monthly mean atmospheric methane level since the early 1980s (Stein, 2022). In the past two decades, these emissions have nearly doubled (United States Environmental Protection Agency, 2022). The Environmental Defense Fund states that the warming power of methane is more than 80 times that of Carbon Dioxide, the most abundant Greenhouse Gas in the atmosphere, though methane stays in the atmosphere for a shorter time (Brownstein, 2022). Humans are effectively poisoning the atmosphere with non-stop emissions released for a variety of reasons. Methane emissions from the agricultural sector make up a substantial share of all global anthropogenic methane production (Moumen et al., 2016). A main source of methane emissions within agriculture is cattle specifically: ruminant livestock can produce up to 500 L of methane per day, a staggering amount that, over the span of a year, adds majorly to global Greenhouse Gas emissions (Johnson and Johnson, 1995).
Figure 1: Global Monthly Mean Methane Emissions, from Stein (2022). The plot shows the monthly mean atmospheric methane level averaged across the globe, with a general upward trend from 1983; mole fraction is the amount-of-substance average (NIST).
Figure 2: Greenhouse Gas Emissions by Economic Sector, from Gillman (2015). Agriculture as a whole accounts for 14.5% of all greenhouse gas emissions, and cows contribute roughly 64.8% of agricultural emissions.
The percentage of emissions produced by cows rivals that produced by forestry and land use as a whole. Though there are bigger producers, the proportion of emissions produced by cows has recently become a popular topic of debate, driven by the supposedly easy fix many sources claim this problem could have.
Figure 3 illustrates the methane produced by cattle as driven by microbes, feed, and the acetate–propionate–butyrate balance within the digestive system (Glasson et al., 2022).
As seen in Figure 3, 95% of the methane produced by ruminant livestock comes from eructation (Glasson et al., 2022). Livestock create this methane while digesting food in the rumen, the first of their four stomachs (Nelson, 2018). This stomach houses millions of microbes that, in a process similar to fermentation, break down the high-fibre foods the animals have consumed, such as feed, grass, and hay (Nelson, 2018). As this food is digested, gases are produced that combine to form methane, as illustrated in Figure 3 (Glasson et al., 2022).
Feed additives, as well as nutrients with methane-suppressing abilities, have been gaining popularity among experts, as they seem to be an easy way to curb emissions without harming the profits of the agricultural sector (Department for Environment, Food & Rural Affairs, 2022). Among the suggested solutions, it has been found that some seaweeds can majorly decrease the amount of methane produced by cattle if ingested (Glasson et al., 2022).
Seaweeds – red, green, and brown macroalgae – can all reduce the methane production of cattle (Vijn et al., 2020). There are two ways that feed additives made up of these types of seaweed can alter methane emissions (Roque et al., 2021): they can either directly change the environment within the rumen, or inhibit methanogenesis, the process that forms methane, in order to lower overall methane production (Roque et al., 2021).
To many, this solution seems like one of the only viable options for keeping production at its current pace while still reducing emissions. Additionally, it has been found that seaweed alternatives do not negatively impact the health or productivity of cattle (Glasson et al., 2022).
Seaweed growth is relatively straightforward, lacking the requirements of many terrestrial alternatives.
The growth of seaweed does not require fertilizers, pesticides, fresh water, or land space (World Wildlife Fund, 2020). Recent studies have found that some species of red seaweed can reduce predicted methane production by 95% when added to organic feed at a 5% inclusion rate (Roque et al., 2019). However, adding seaweed to the diet of livestock globally requires the disruption of ecosystems in order to maintain a market run solely for the benefit of human life.
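A rough back-of-envelope calculation, combining the upper-bound figure of 500 L of methane per animal per day quoted earlier (Johnson and Johnson, 1995) with the 95% reduction reported here, gives a sense of scale; the numbers are illustrative only, not a forecast.

```python
# Back-of-envelope estimate using figures quoted in the text: up to
# 500 L of methane per ruminant per day (Johnson and Johnson, 1995)
# and a 95% reduction at a 5% seaweed inclusion rate (Roque et al., 2019).
DAILY_METHANE_L = 500          # upper-bound daily output per animal, litres
REDUCTION = 0.95               # fraction of methane avoided

annual_baseline_l = DAILY_METHANE_L * 365       # litres per animal per year
annual_avoided_l = annual_baseline_l * REDUCTION

print(f"Baseline: {annual_baseline_l:,} L/year per animal")
print(f"Avoided:  {annual_avoided_l:,.0f} L/year per animal")
```

Even as an upper bound, roughly 170,000 L of methane avoided per animal per year makes clear why the idea attracts attention at herd scale.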
The harvesting of native seaweed may lead to invasive non-native species occupying this space instead, which poses a threat to the remaining native species within the environment (Grahame, 2018). The seaweed that could be used as livestock feed acts as an ecosystem for a plethora of other organisms within the ocean. Aside from the environmental impacts, humans may also face negative consequences if seaweed harvesting becomes a popular practice. Native and coastal communities depend on seaweed for a range of uses, and the degradation of such habitats may negatively impact thousands. Are humans solving one issue while simultaneously creating another? The mitigation of greenhouse gases is required for a sustainable future, but is this the most sustainable way to limit such emissions?
Figure 3: The Contributors to Methane Emissions Within Cattle, from Glasson et al. (2022)
Methane emissions must be reduced – that much is simple – though the solution is complicated. At first glance, changing cattle feed from traditional grains and vitamins to methane-restricting seaweeds seems like a straightforward, ground-breaking revelation: seaweeds would majorly reduce methane emissions while not forcing the agriculture market to sacrifice profits. This is the point often highlighted in the media. Professor of animal science and microbiology Sharon Hews said in an interview with Time magazine that “Using seaweed is a natural, sustainable way of reducing emissions and has great potential to be scaled up,” and that “There is no reason why we can’t be farming seaweed” (Baker, 2021). Yet research shows that many different systems may be impacted if farming seaweed becomes a popular practice. The environmental consulting group Enscope also released a report in early 2021 stating that seaweed harvesting is a market on the brink of discovery (Menzies et al., 2021).
According to this report, seaweed is already being successfully harvested globally, and in the North Atlantic interest is strongly increasing from national and international investors (Menzies et al., 2021). This is all to say that sources currently see seaweed harvesting, and the solutions it provides, as not only problem-free but also cost-effective and a worthwhile investment. Though this solution seems easy and straightforward, issues lie below the surface. A consultation paper published by the Scottish government states that potential issues for wide-spanning seaweed farming include the loss of shelter for both plants and animals, and the loss of food sources impacting organisms both directly and indirectly (Department for Environment, Food & Rural Affairs, 2022). This may also lead to higher trophic levels suffering (Department for Environment, Food & Rural Affairs, 2022). Additionally, this farming may cause a loss of nursery and breeding grounds for marine animals, which may impact not only higher trophic levels but
commercial fishing (Department for Environment, Food & Rural Affairs, 2022). As seaweed is harvested and removed from marine habitats, non-native invasive species are able to insert themselves into these systems. Since these invasive species are difficult to eradicate completely, this mechanism of removing native seaweed – and thus making room for new species to thrive – could alter the make-up of these habitats entirely (Department for Environment, Food & Rural Affairs, 2022). The very composition of many marine habitats could shift. Although seaweed harvesting would aid in one human-caused issue, that of extreme methane and greenhouse gas emissions, it poses many environmental challenges to other species, and humans may be impacted as well. There are many native communities around the world who depend on seaweed as a key part of their diet; if seaweed harvesting were to become a global practice, these communities may lose an essential aspect of their culture (Turner, 2001). According to Nancy Turner, an ethnobiologist from the University of Victoria, the Northwest Coast First Peoples have been using algae, seagrasses, and seaweeds for millennia (Turner, 2001). Nereocystis, Macrocystis, Porphyra spp., and Zostera marina are the four most common types utilized by the Northwest Coast First Peoples (Turner, 2001). These four types all reside in coastal areas, making them vulnerable to harvesting (Turner, 2001). When the media highlights the revolutionary discovery of a marine organism turned cattle feed, the negatives are rarely mentioned.
Though this solution has negatives – negatives that would impact more than one ecosystem – are there any equally effective alternatives? Put simply, yes. Methane emissions from livestock do not make up the majority of greenhouse gas production; other sectors, such as energy, produce more methane (Alvarez et al., 2018).
Figure 4 separates gas emissions by sector, and shows that the agricultural sector is not the biggest contributor (Ritchie and Roser, 2020).
If the main goal were solely to find the single most effective path to reducing methane emissions, feeding cows seaweed would not be it, especially when factoring in the negatives. However, the goal is not to find the one path that would make the biggest impact. Emissions have been increasing greatly for decades, and one solution will not fix all the issues that humans have created. The rate at which the climate crisis is advancing requires not just one solution but several. Though creating a seaweed-centred diet for livestock comes with impactful negatives, it cannot be wholly discounted. Any effective solution must be thoughtfully considered; though complex, the negatives must be weighed in order to determine which solution will be most effective while causing the least amount of damage.
To make the substitution of seaweed into livestock feed an effective and sustainable solution, factoring in the impacts on both the natural world and human life, additional policies must be put in place to protect current ecosystems. There are approaches that can minimize the possible negative impacts, from those on the natural world to those on coastal and native communities. Yet, as the agriculture market grows and demand increases, these practices may be overlooked in order to maximize profit. The seaweeds of interest may be rare and may depend on the seasons not only to grow, but to grow consistently – all factors that make harvesting on a large scale challenging, as well as risky for the ecosystems affected (Hafting et al., 2011). For a seaweed harvesting industry to exist, harvesting practices must be heavily scrutinised to ensure the health of at-risk environments is prioritised.
To move forward in harvesting seaweed, many state the importance of a best-practice code of conduct (Rebours et al., 2014). Recent legislation within the Scottish government states that in order to sustainably harvest seaweed, the following must be examined beforehand: “the species to be harvested, the harvesting method, the amount taken, the harvesting location and its environmental context, the time allowed for regeneration prior to harvesting again, and the timing (season) of harvest” (Department for Environment, Food & Rural Affairs, 2022). Though this is a single policy put in place by one country’s government, these restrictions are promising. Many maintain that though this process may have negatives, such tactics are still necessary. One solution cannot fix everything, but with the addition of this code of conduct, while still upholding sustainability goals, a meaningful difference can be made.
Methane emissions have increased majorly for decades, and recent years have seen an exponential rise. There are a multitude of causes for the ever-increasing amount of methane released into the atmosphere, and though the agriculture sector is not the highest producer, its yearly contribution is still astonishing. To make a meaningful change, more than one solution must be pursued. Recent findings have shown that the use of seaweed in livestock diets can reduce the amount of methane produced by cattle. This seems a revolutionary discovery; however, it comes with its own set of challenges. Not only are many habitats at risk, but as the market grows, many are participating in seaweed harvesting without taking the necessary steps to ensure a new environment is not harmed while attempting to fix another. With emissions showing no signs of slowing down, this is a project that can benefit many. The discovery has both positives and negatives, but how can change be enacted while still protecting current environments? Policies must be developed, limits must be put into place, and habitats must be protected.
Figure 4: Global Greenhouse Gas Emissions by Sector, from Ritchie and Roser (2020)

Alvarez, R.A., Zavala-Araiza, D., Lyon, D.R., Allen, D.T., Barkley, Z.R., Brandt, A.R., Davis, K.J., Herndon, S.C., Jacob, D.J., Karion, A., Kort, E.A., Lamb, B.K., Lauvaux, T., Maasakkers, J.D., Marchese, A.J., Omara, M., Pacala, S.W., Peischl, J., Robinson, A.L. and Shepson, P.B. (2018). Assessment of methane emissions from the U.S. oil and gas supply chain. Science, [online] 361(6398), p.eaar7204. doi:10.1126/science.aar7204.
Baker, A. (2021). Surf and Turf: How Seaweed Helps Cows Become Better Climate Citizens. [online] Time. Available at: https://time.com/6119791/seaweed-cows-methane-emissions/
Brownstein, M. (2022). Methane: A crucial opportunity in the climate fight. [online] Environmental Defense Fund. Available at: https://www.edf.org/climate/methane-crucial-opportunity-climate-fight#:~:text=Methane%20has%20more%20than%2080
Department for Environment, Food & Rural Affairs (2022). Government seeks views on reducing livestock methane production. [online] GOV.UK. Available at: https://www.gov.uk/government/news/government-seeks-views-on-reducing-livestock-methane-production#:~:text=The%20use%20of%20feed%20additives%20and%20other%20animal%20feed%20with [Accessed 24 Nov. 2022]
Gillman, S. (2015). Can we make cow burps climate-friendly? | Research and Innovation. [online] ec.europa.eu. Available at: https://ec.europa.eu/research-and-innovation/en/horizon-magazine/can-we-make-cow-burps-climate-friendly
Glasson, C.R.K., Kinley, R.D., de Nys, R., King, N., Adams, S.L., Packer, M.A., Svenson, J., Eason, C.T. and Magnusson, M. (2022). Benefits and risks of including the bromoform containing seaweed Asparagopsis in feed for the reduction of methane production from ruminants. Algal Research, 64, p.102673. doi:10.1016/j.algal.2022.102673.
Grahame, F. (2018). Could Commercially Harvesting Seaweed Seriously Damage the Marine Ecosystem? [online] The Orkney News. Available at: https://theorkneynews.scot/2018/08/13/could-commercially-harvesting-seaweed-seriously-damage-the-marine-ecosystem/ [Accessed 16 Nov. 2022]
Hafting, J.T., Critchley, A.T., Cornish, M.L., Hubley, S.A. and Archibald, A.F. (2011). On-land cultivation of functional seaweed products for human usage. Journal of Applied Phycology, 24(3), pp. 385–392. doi:10.1007/s10811-011-9720-1.
Johnson, K.A. and Johnson, D.E. (1995). Methane emissions from cattle. Journal of Animal Science, [online] 73(8), pp. 2483–2492. doi:10.2527/1995.7382483x.
Menzies, B., Brook, T. and Parker, A. (2021). Economic Feasibility Study on Seaweed. [online] Crown Estate Scotland; Enscope Environmental & Development Services. Available at: https://www.crownestatescotland.com/resources/documents/economic-feasibility-study-on-seaweed
Moumen, A., Azizi, G., Chekroun, K.B. and Baghour, M. (2016). The effects of livestock methane emission on the global warming: a review. International Journal of Global Warming, 9(2), pp. 229–253. doi:10.1504/ijgw.2016.074956.
Nelson, D. (2021). Feeding Cattle Seaweed Reduces Their Greenhouse Gas Emissions 82 Percent. [online] UC Davis. Available at: https://www.ucdavis.edu/climate/news/can-seaweed-cut-methane-emissions-on-dairy-farms
Rebours, C., Marinho-Soriano, E., Zertuche-González, J.A., Hayashi, L., Vásquez, J.A., Kradolfer, P., Soriano, G., Ugarte, R., Abreu, M.H., Bay-Larsen, I., Hovelsrud, G., Rødven, R. and Robledo, D. (2014). Seaweeds: an opportunity for wealth and sustainable livelihood for coastal communities. Journal of Applied Phycology, [online] 26(5), pp. 1939–1951. doi:10.1007/s10811-014-0304-8.
Ritchie, H. and Roser, M. (2020). Emissions by sector. [online] Our World in Data. Available at: https://ourworldindata.org/emissions-by-sector
Roque, B.M., Salwen, J.K., Kinley, R. and Kebreab, E. (2019). Inclusion of Asparagopsis armata in lactating dairy cows’ diet reduces enteric methane emission by over 50 percent. Journal of Cleaner Production, 234, pp. 132–138. doi:10.1016/j.jclepro.2019.06.193.
Roque, B.M., Venegas, M., Kinley, R.D., de Nys, R., Duarte, T.L., Yang, X. and Kebreab, E. (2021). Red seaweed (Asparagopsis taxiformis) supplementation reduces enteric methane by over 80 percent in beef steers. PLOS ONE, [online] 16(3), p.e0247820. doi:10.1371/journal.pone.0247820.
Scottish Government (2016). Wild seaweed harvesting: strategic environmental assessment – environmental report. [online] www.gov.scot. Available at: https://www.gov.scot/publications/wild-seaweed-harvesting-strategic-environmental-assessment-environmental-report/pages/7/
Stein, T (2022) Increase in atmospheric methane set another record during 2021 | National Oceanic and Atmospheric Administration. [online] www.noaa.gov. Available at: https://www noaa gov/news-release/increase-in-atmosphericmethane-set-another-record-during-2 021 Turner, N (2001) COASTAL PEOPLES AND MARINE PLANTS ON THE NORTHWEST COAST [Paper] pp 69–76 Available at: https://core ac uk/download/pdf/4167045 pdf
United States Environmental Protection Agency (2022) Importance of Methane [online] US EPA Available at: https://www epa gov/gmi/importancemethane#:~:text=Methane%20is%20the%20second%20mo st
Vijn S, Compart DP, Dutta N, Foukis A, Hess M, Hristoc AN, Kalscheur KF, Kebreab E, Nuzhdin SV, Price NN, Sun Y, Tricarico JM, Turzillo A, Weisbjerg MR, Yarish C and Kurt TD (2020) Key Consideration for the Use of Seaweed to Reduce Enteric Methane Emissions From Cattle, Fron Vet Sci 7:597430 Doi: 10 3389/fvets 2020 597430
World Wildlife Fund (2020) Farmed Seaweed | Industries | WWF [online] World Wildlife Fund Available at: https://www worldwildlife org/industries/farmedseaweed#: :text=Unlike%20terrestrial%20crops %2C%20seaweed%20doesn.
Ideja Bajra, Microbiology
Reviewed and edited by S. Sandanatavan.
ABSTRACT: The gut microbiome is a deep, intricate system that has puzzled scientists for decades, yet it has become a focus of research into numerous diseases, including inflammatory bowel disease.
The human gut microbiome contains a plethora of microbes whose functions profoundly affect overall human health. These microbes inhabit the human intestines and are directly affected by many host-driven factors, including lifestyle and genetics. With over 1,000 species of bacteria, archaea and fungi, they perform essential functions within their microhabitat, including neurological signalling, food digestion, the alteration of drug action and the removal of toxins. The large number of functions performed by the microorganisms inhabiting the human intestines has recently become a target for therapeutic exploration; however, this has been hindered by the complexity of the microbiome. Researchers have attempted to understand the gut microbiome via genome-sequencing platforms, yet microbial phenotypes and their population dynamics cannot be predicted from sequencing data alone. A burst of sophisticated culturomic approaches has increased the number of gut microbial species that have been cultivated, so the challenge of investigating the human gut microbiome can potentially be overcome by combining cultivation with genomic sequencing techniques. The two techniques can be combined in a system where variables are tightly controlled and manipulated intentionally. Synthetic microbial communities have therefore recently been introduced, leading to the creation of the first synthetic gut microbiome (Mabwi et al., 2021).
A synthetic gut microbiome is a system that allows various combinations of intestinal microorganisms, isolated based on their function in the gastrointestinal tract, to mimic the physiological conditions researchers are interested in (Bolsega, Bleich and Basic, 2021).
The most recent breakthrough in the field of microbiome engineering comes from a team of researchers at Stanford University in the United States, who successfully transplanted a synthetic microbiome containing more than 100 selected bacterial strains into mice. This revolutionary development in synthetic microbiome modelling has the potential to aid the creation of further novel microbiome therapies (Cheng et al., 2022).
Although faecal transplants, a procedure involving the delivery of prepared stool material from healthy donors to a patient to restore gut microbial balance (Ser et al., 2021), have long been used to study the microbiome, there are simply no tools to modify, remove or add specifically selected bacterial strains. A synthetic microbiome, however, allows individual bacteria commonly present in humans to be edited and evaluated, providing the knowledge needed to investigate which gut bacteria affect disease and development. The Stanford team curated prevalent bacteria from data provided by the Human Microbiome Project, an initiative created to understand the human microbiome through the characterisation of all microorganisms present (Turnbaugh et al., 2007), selecting bacteria present in at least 20% of individuals. The researchers cultured 104 bacterial species separately before mixing them into one culture, called human community one (hCom1). hCom1 was then transplanted into mice with no gut bacteria to identify the effect of the microbiome in vivo, leading to successful bacterial colonisation in just two months. Further tests were also conducted, including introducing a faecal sample (to identify which bacteria were taken up by the synthetic microbiome) and introducing an E. coli strain to the colonised mice (to test the resistance of mice with engineered microbiomes to infection). The bacteria added from the faecal samples, and the removal of unnecessary bacteria, allowed the team to curate a final microbiome of 119 bacterial strains, hCom2, which was more successful than hCom1 when transplanted into germ-free mice.
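As a rough illustration of the curation step described above, the 20%-prevalence filter can be sketched in a few lines of Python. The species names and presence data here are invented for illustration; they are not Human Microbiome Project data.

```python
# Illustrative prevalence filter, mimicking the hCom1 curation step:
# keep species detected in at least 20% of sampled individuals.

def prevalent_species(presence, threshold=0.20):
    """Return species present in at least `threshold` of individuals."""
    selected = []
    for species, detected in presence.items():
        prevalence = sum(detected) / len(detected)
        if prevalence >= threshold:
            selected.append(species)
    return selected

# Toy survey of five individuals (invented data).
presence = {
    "Species A": [True, True, True, False, True],      # 80% prevalence
    "Species B": [True, False, False, False, False],   # 20% prevalence
    "Species C": [False, False, False, False, False],  # 0%, excluded
}
core = prevalent_species(presence)  # ["Species A", "Species B"]
```

The real curation additionally weighed culturability and function, but the prevalence cut-off is the part that reduces thousands of candidate species to a tractable community.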
The synthetic microbiome passed all tests, and the researchers now have the ability to edit hCom2 to identify which bacteria are responsible for the protection of the microbiome, which can lead to insight into immunotherapy responses and microbiome therapy development (Cheng et al., 2022).
The engineering of microbiome properties has the potential to address an increasing number of challenges that humans face in current society, more specifically in areas such as agriculture, environmental degradation and human health (Clark et al., 2021).
The formation of the Human Microbiome Project (HMP) has greatly advanced knowledge of the body's microbiota, with a particular focus on sites of gut, skin and lung/nasal origin. The HMP has aided the identification of factors that differentiate healthy and diseased microbiota, and the information thereby gained will benefit microbiome engineering in creating strategies that restore the balance of the microbiome for therapeutic functions.
The concentrated presence of microbial communities in the gastrointestinal tract has led to the correlation of the gut microbiome with gastrointestinal diseases, including Clostridium difficile infection (CDI) and inflammatory bowel disease (IBD), alongside non-intestinal conditions such as diabetes, autism and metabolic syndrome (Foo et al., 2017). This knowledge has enabled scientists to apply microbial community characteristics to the treatment of these diseases by creating therapeutics that target particular microorganisms. However, this is only possible with the synthetic characterisation of the microbes, to identify how the sections of the gastrointestinal tract they inhabit affect the chemical pathways involved in immunity and in energy, lipid and glucose metabolism (de Vos et al., 2022).
Why would we create a "fake biome"?
The typical technique, as briefly mentioned above, consists of selecting particularly prevalent bacterial species from all major phyla of the human gut microbiome. Hypotheses on the growth dynamics of the bacteria are then formed, and the growth of each individual species is measured over a set time. A wide variety of growth dynamics is typically observed within each phylum, which can then be categorised by nutrient levels and species abundance (Clark et al., 2021).
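A minimal sketch of what a single-species growth measurement might be fitted against, assuming a standard logistic growth model; the rate r and carrying capacity K below are illustrative values, not measured ones.

```python
# Logistic growth dN/dt = r*N*(1 - N/K), integrated with simple Euler
# steps. Each species in a synthetic community would yield its own
# (r, K) pair from culture measurements.

def logistic_growth(n0, r, k, dt, steps):
    n = n0
    trajectory = [n]
    for _ in range(steps):
        n += r * n * (1 - n / k) * dt
        trajectory.append(n)
    return trajectory

# Start from a small inoculum; the curve rises, then plateaus near K.
traj = logistic_growth(n0=1e6, r=0.5, k=1e9, dt=0.1, steps=400)
```

Categorising species by the fitted parameters (fast vs slow growers, high vs low carrying capacity) is one simple way to organise the "wide variety of growth dynamics" the authors describe.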
To accurately confirm the unique characteristics of a synthetic microbial community, omics-driven analysis is required. The current standard is 16S rRNA amplicon sequencing, the most widely used method for identifying intestinal microbiota. However, such metataxonomic studies are limited by their genus-level resolution, creating a barrier to exposing the exact role of the human microbiome. Techniques such as shotgun metagenomic sequencing, on the other hand, reach strain-level resolution, at which bacterial strains can be classified as disease-causing or non-disease-causing.
One pioneering technique, developed in Daniel Figeys' lab, is iMetaLab, with which metaproteomic data can be analysed. Meta-analysis was carried out using in vitro microbial cultivation, meaning researchers can control the spatiotemporal variables so common in microbiota culture in order to help unveil functionalities.
An alternative technique, faecal metabolomics, has been developed to monitor the metabolites commonly produced by gut microbes in order to establish a link between host and diet. Metabolomics has potential applications through high-throughput analytical platforms, including nuclear magnetic resonance spectroscopy, gas chromatography and liquid chromatography–mass spectrometry. Although the human metabolome database has recently grown to more than 40,000 chemicals, the accurate identification of microbial-derived compounds remains highly limited. Even so, the metabolomic approach is a particularly valuable method for understanding the complex metabolic interactions between gut microbes and their host.
However, omics profiling alone cannot predict the bacterial interactions that are essential to reconstructing synthetic microbiomes. Statistical association between the microbial community and other omics data is required. One method is to integrate omics data to classify the main interactions, identifying which microbes have a positive or negative effect on gut ecology. Omics data-based modelling can then predict and establish the essential mechanisms for particular multi-strain formulations.
To integrate mathematical models with controlled laboratory experiments, experimental data is fed into the model assumptions, and the mathematical model parameters that minimise prediction error are estimated. Calibrated model simulations then yield predictions that are tested in the laboratory. This repeated refinement of mathematical models ensures that the systems observed are properly understood (Mabwi et al., 2021).
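The calibrate-predict-refine loop described above can be sketched as follows. The exponential model, the synthetic "measurements" and the grid-search fit are all assumptions made for illustration; they are not the authors' actual pipeline.

```python
import math

# Fit a model parameter by minimising squared prediction error
# against (stand-in) experimental data, then reuse the calibrated
# model to generate predictions for the next round of experiments.

def model(t, r):
    return math.exp(r * t)  # assumed growth model

times = [0, 1, 2, 3, 4]
observed = [math.exp(0.3 * t) for t in times]  # stand-in lab data

def sse(r):
    # sum of squared errors between predictions and observations
    return sum((model(t, r) - y) ** 2 for t, y in zip(times, observed))

# Coarse grid search over candidate rates (step 0.001).
best_r = min((i / 1000 for i in range(1001)), key=sse)
```

In practice one would use a proper optimiser and real uncertainty estimates, but the loop structure (data in, parameters out, predictions back to the bench) is the same.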
The most common application of microbial biotechnology is the capability to mass-produce pharmaceuticals and drugs. Recent achievements in this domain include the use of E. coli to improve functionality through specialisation while decreasing metabolic burden. To reduce harmful competition between microbial populations (a factor identified as leading to numerous diseases and decreased immunity), strategies including chemical symbiosis, horizontal gene transfer and spatial organisation have been employed. These techniques can then be applied to a variety of diverse gut environments (Mabwi et al., 2021). Particular probiotics, live microorganisms administered in set amounts that confer health benefits on hosts (Hill et al., 2014), have been formulated to identify and use quorum-sensing molecules as biotherapeutics to prevent infections. Quorum sensing is the regulation of gene expression in response to fluctuations in cell-population density; quorum-sensing bacteria produce chemical signal molecules whose concentration increases with cell density (Miller and Bassler, 2001). A recent array of studies has shown that E. coli engineered as a probiotic may inhibit infection by Vibrio cholerae via quorum-sensing chemicals, while B. ovatus may secrete human growth factors to fight off inflammation.
Furthermore, Bacteroides, significant clinical gram-negative pathogens found in anaerobic infections (Wexler, 2007), can be engineered to exhibit stable abundance and long-term colonisation in the gut through a variety of tools, allowing control over the expression of certain reporter genes (Mabwi et al., 2021). These genes enable the detection and measurement of gene expression (Csibra and Stan, 2022). This conveys the idea that applying gut-microbe engineering to synthetic microbiomes will assist in understanding the effects that microbial interactions have upon the development of various diseases, including diabetes, that have yet to be revealed (Mabwi et al., 2021).
Recent research has deepened understanding of the microbiome's contribution to host health, enabling this newfound knowledge to be capitalised upon to aid disease treatment. The field of synthetic microbiome engineering is a pioneering approach to utilising the microorganisms that have lived with humans for centuries in a symbiotic relationship. Techniques such as genomic sequencing and faecal metabolomics deepen this understanding while simultaneously allowing researchers to develop models of the gut environment. The application of such biotechnological advancements underscores the importance of the host-microbe relationship and of microbial targeting, areas we would not otherwise have the information to explore. Microbiome engineering has increasingly attracted pharmaceutical interest and holds an improvement of human health that is awaiting unlocking (Foo et al., 2017).
Thomas Burton, Inorganic Chemistry
Reviewed and edited by L. Deen and S. Sandanatavan
ABSTRACT: Our oceans may be the solution to mitigating the effects of global warming and excess carbon dioxide emissions. Given the abundant nature of seawater and the rising demand for eco-friendly fuels, there is a push to develop innovative solutions. This report outlines an area of research and a start-up that focus on electrolysis-based technologies, revolutionising the manufacture of hydrogen fuel and reducing the concentration of atmospheric CO2, thus acting as a carbon sink. Chinese researchers have developed a dual-purpose system which both desalinates seawater and performs electrolysis to directly yield green hydrogen. This pioneering process efficiently purifies seawater using a unique low-energy technique, presenting one of the first feasible methods of using seawater as a hydrogen resource. 'The purification aspect leverages phase transitions to eliminate contaminants, and it may have further potential in wastewater treatment and resource recovery' [1]. In tandem, chemists at the start-up SeaChange have developed a barge installed with innovative electrochemical technology in the Port of Los Angeles to remove atmospheric carbon dioxide.
Electrolysis can be defined as 'the use of electric current to stimulate a non-spontaneous reaction' [2]. It involves the separation of substances into their constituent elements or compounds for extraction. An electric current is applied to induce a flow of ions (charged atoms or molecules) to facilitate reactions that wouldn't naturally occur; applications of electrolysis include electrorefining, electro-synthesis and the chlor-alkali process.
Researchers from China have developed a dual-purpose system that both desalinates seawater and performs electrolysis to directly yield green hydrogen. This pioneering process efficiently purifies seawater using a unique low-energy technique, presenting one of the first feasible methods of using seawater as a hydrogen resource. The purification aspect leverages phase transitions to eliminate contaminants, and it may have further potential in wastewater treatment and resource recovery. The following report will first explore Heping Xie (Shenzhen University) and Zongping Shao's (Nanjing Tech University) research into using electrolysis to synthesise renewable hydrogen fuel. The start-up SeaChange, whose aim is to 'pull around 10 pounds (4.6 kilograms) of carbon dioxide from the atmosphere per metric ton of seawater processed', will then be discussed [6]. The extent to which hydrogen can be considered a sustainable fuel is explored further in this review (E. Feeke, 2023).
Fig 1 [2]
The concept of electrically splitting water has been explored for centuries. At the inert cathode, H+ ions collect electrons to form hydrogen gas, while OH- ions at the anode relinquish electrons to create oxygen. The process of splitting water is inherently energy-intensive and mandates the use of specialised catalytic electrodes comprising noble metal oxides, such as titanium or platinum oxide [2]. Thus, even with a basic chemical foundation, achieving efficient electrolysis remains intricate: minor impurities can compromise the cell, predominantly through corrosion and unwanted side reactions [3].
Chloride ions in seawater pose a notable challenge: they undergo an undesired oxidation at the anode. This reaction not only diminishes the cell's electrochemical efficacy but also produces chlorine, a highly corrosive halogen gas that swiftly damages the electrodes and nullifies the cell. Xie elaborates: "Efforts to curb corrosion using catalyst coatings have seen limited progress. The ever-changing composition of seawater means that a one-size-fits-all solution for electrolysers isn't feasible" [3].
Through leveraging the natural cleansing ability of evaporation, Chinese chemists, Xie and Shao, have arguably created the first feasible and scalable system for seawater electrolysis. ‘Their innovative purification method employs a liquid–gas–liquid phase transition to directly produce fresh water from seawater within the electrochemical cell, facilitated by the ensuing electrolysis process’ [3].
"With seawater having a typical salt content of about 3.5%, direct electrolysis becomes impractical and non-feasible," as mentioned by Shao. "Although desalinating seawater prior to electrolysis can sidestep these challenges, it demands more energy and space, making it less cost-effective and practical" [5]. Thus, the energy required to desalinate outweighs the worth of the hydrogen produced via electrolysis.
A porous PTFE-based layer keeps seawater outside the cell; this membrane's dense fluorine atomic structure makes it water-resistant yet permeable to water vapour. On its exterior side, a potent potassium hydroxide solution envelops the electrodes, driving the migration of water vapour.
Fig 2 Heping Xie et al/Springer Nature Limited 2022 [4]
Fig 3 Heping Xie et al/Springer Nature Limited 2022 [4]
When isolated, this system would naturally reach equilibrium with equal water concentrations on both sides of the membrane. However, due to the ongoing electrolysis process, purified water is constantly used up, maintaining a continuous concentration difference across the membrane. By modulating the rates of water movement or electrolysis, the system self-regulates, utilising pure water as soon as it's produced.
Xie elaborates: "If electrolysis starts faster than water migrates, the electrolyte concentration grows, causing the water vapor pressure difference to rise, and thus, water migration speeds up" [5]. The system thus behaves as a dynamic equilibrium, a state in which "the rate of the forward reaction is equal to the rate of the reverse reaction" [6]. After positive lab tests, the team tested the method's scalability with a demo unit in Shenzhen Bay, China. The device operated for 133 days and produced over a million litres of hydrogen, showing no noticeable corrosion or impurity build-up.
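The self-regulating behaviour Xie describes can be caricatured with a toy feedback model: electrolysis consumes water from the electrolyte at a fixed rate, and migration across the membrane is taken as proportional to the resulting vapour-pressure (concentration) difference. All constants below are illustrative, not values from Xie and Shao's paper.

```python
# Toy feedback model of the self-regulating membrane system.
k_mig = 0.5          # migration coefficient (arbitrary units)
electrolysis = 1.0   # constant water consumption rate
deficit = 0.0        # water deficit in the electrolyte, drives migration
dt = 0.01            # time step

for _ in range(10_000):
    migration = k_mig * deficit
    deficit += (electrolysis - migration) * dt

# At steady state, migration balances electrolysis: k_mig * deficit -> 1.
```

Whatever the rate constants, the deficit settles where inflow matches consumption, which is the "self-regulation" described in the text: faster electrolysis raises the driving force, which raises migration until the two balance.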
Xuping Sun (Shandong University), an electrocatalysis specialist, notes: "This breakthrough addresses a major technical hurdle in the sector. Yet it demands more refinement. For seawater electrolysis systems to be truly industrially relevant, they need higher current densities" [3]. Xie and Shao are committed to optimising the device for industrial applications, exploring ways to cut energy use and boost catalyst performance.
Fig 4 [3]
In line with Xie and Shao's vision of integrating this technology into sustainable start-ups, SeaChange's Los Angeles-based pilot project intends to extract and remove carbon dioxide dissolved in saltwater, in an attempt to create a concentration gradient between the air and the ocean. This will result in further dissolution and dissociation of carbon dioxide into the ocean, enhancing its carbon-sink effect. The barge is installed with electrochemical technology and will convert dissolved carbon dioxide into calcium carbonate, a useful compound with countless applications in the pharmaceutical industry. The prototype 'produces green hydrogen as a by-product, which can be used to power the process' [7].
Researchers estimate that for every metric ton of seawater SeaChange treats, it can remove roughly 10 pounds (4.6 kilograms) of CO2 from the air; to capture one metric ton of carbon dioxide, it is necessary to process '220 metric tons of seawater'. Additionally, SeaChange generates 'approximately 75 pounds (35 kilograms)' [7] of hydrogen during this treatment. The team is striving to achieve CO2 sequestration at under $100 per metric ton, a benchmark for cost-effective carbon capture. The ecological impact of SeaChange's electrolysis technology is currently being investigated; along with dissolved carbon dioxide and seawater, small organisms and plankton may be drawn into the system, which could have further implications for coastal biodiversity.
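The two figures quoted above are mutually consistent, as a one-line calculation shows:

```python
# 4.6 kg of CO2 removed per metric ton of seawater implies roughly the
# reported '220 metric tons of seawater' per metric ton of CO2 captured.
co2_per_ton_seawater = 4.6                       # kg CO2 per t seawater
seawater_per_ton_co2 = 1000.0 / co2_per_ton_seawater
# about 217 t of seawater, matching the quoted ~220 t
```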
In conclusion, this report has reviewed ongoing technological innovation within the field of Inorganic Chemistry, exploring Xie and Shao's research into using electrolysis to synthesise renewable hydrogen fuel. Whilst this technology is revolutionary, the ecological implications and the viability of hydrogen as a sustainable fuel are currently subject to further research and debate.
Fig 5 [7]
Reviewed and edited by T. Lawson.
ABSTRACT: In our current environmental and economic climate, the race to discover new and cleaner energy sources is more important than ever, specifically sources that will help us move away from fossil fuels, whose use has proven very damaging to our planet. Research into biomass fuels, which were set to be the fuels of the future, has shown that they are likely not fruitful alternatives (Transport & Environment, 2022). In fact, biodiesels have been shown to lead to 80% higher emissions than the fossil fuels they set out to replace (Transport & Environment, 2022), a clear victim of greenwashing. Focus has now shifted to hydrogen as a potential clean fuel source. The question now stands: is research into hydrogen truly progressing in such a way that it can seriously be considered a sustainable contender to fossil fuels, or is it just another victim of greenwashing?
Hydrogen has been presented as a useful alternative to oil and gas primarily due to its application in fuel cells. Hydrogen fuel cells produce electricity without running down or needing recharging, provided hydrogen and air are continuously supplied. The result is clean electricity with water as its only by-product (U.S. Department of Energy). An additional advantage of hydrogen as a fuel source lies in its high specific energy, which means it can provide three times more energy than gasoline combustion per unit mass (Yue et al., 2021). It offers the possibility of decarbonising chemicals, steel and heavy transport in areas where electrification, our current route, is not very feasible (Transport & Environment, 2022). To move hydrogen power from vision to reality, a hydrogen economy will need to be developed, converting the current energy delivery infrastructure to one based on hydrogen as a carbon-free carrier of energy (Boretti, 2021).
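The "three times more energy per unit mass" claim can be sanity-checked with typical lower-heating values. The figures below are approximate textbook values, not numbers taken from the cited sources.

```python
# Back-of-envelope check of hydrogen's specific-energy advantage.
h2_energy = 120.0       # MJ per kg, hydrogen (approx. lower heating value)
gasoline_energy = 44.0  # MJ per kg, gasoline (approx. lower heating value)
ratio = h2_energy / gasoline_energy  # roughly 2.7x per kilogram
```

Note that hydrogen's advantage is per unit mass; per unit volume it is far worse than gasoline, which is why storage and delivery dominate the discussion later in this report.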
There are three processes through which hydrogen can be produced, the first of which yields what has been termed "grey hydrogen". Here, hydrogen is produced from natural gas in a process called steam-methane reforming (1), in which methane reacts with steam at 3-25 bar pressure, typically in the presence of a nickel catalyst (Johnson Matthey, 2022), to produce hydrogen, carbon monoxide and a small amount of carbon dioxide in an endothermic reaction. This is followed by the water-gas shift reaction (2), in which the carbon monoxide and steam react over a catalyst to produce carbon dioxide and more hydrogen (Gaudernack and Lynum, 1998). The final stage of the process removes the carbon dioxide and any further impurities from the gas stream.
(1) Steam-methane reforming reaction
CH4 + H2O (+ heat) → CO + 3H2
(2) Water-gas shift reaction
CO + H2O → CO2 + H2 (+ small amount of heat)
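Adding reactions (1) and (2) gives the net stoichiometry CH4 + 2H2O → CO2 + 4H2. A quick mass balance (a sketch with standard molar masses, not a process simulation) shows the minimum CO2 burden of grey hydrogen:

```python
# Net steam reforming: one CO2 is produced for every four H2.
m_co2 = 44.0  # g/mol, CO2
m_h2 = 2.0    # g/mol, H2
co2_per_kg_h2 = m_co2 / (4 * m_h2)
# 5.5 kg of CO2 per kg of hydrogen, before counting process heat
```

Real plants emit more than this stoichiometric floor because the reforming heat is usually supplied by burning additional natural gas.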
The evident issue with this method of hydrogen production is the large volume of CO2 generated. This initial method was improved upon using carbon capture and storage (CCS) to reduce the impact of this known greenhouse gas. Hydrogen formed in this way has been coined "blue hydrogen".
Whilst these methods are efficacious, the true future of hydrogen lies in electrolytic hydrogen production, better known as “green hydrogen”. The asset of this method of hydrogen production lies in its ability to use energy from renewable sources via electrolysis.
There are four main technologies used to manufacture green hydrogen: alkaline water electrolysis (AWE), proton exchange membrane electrolysers (PEM), anion exchange membrane electrolysers (AEM) and solid oxide electrolysers (SOEC) (Johnson Matthey, 2022). AEM and SOEC are in the earlier stages of development but have many benefits with the potential to be actualised. The dominant advantage of AEM is its use of advanced nickel catalysts rather than expensive precious-metal catalysts, which are costlier due to their scarcity and thereby less desirable for scaling up. The dominant advantage of SOEC is its ability to operate at high temperatures using ceramic cells to make hydrogen efficiently. This is desirable given its potential in desert areas, where energy produced abundantly through solar power could be applied to electrolysis.
The technologies currently in widespread use are AWE and PEM, with AWE being the most common electrolyser today. With the potential to produce twelve times more hydrogen than AWE, research into PEM electrolysers is increasing. In PEM, water flows into the catalyst-coated membrane (CCM), where an iridium catalyst uses electrical energy to break water molecules into oxygen, protons and electrons. The electrons are driven through the external circuit while the protons cross the membrane. A platinum catalyst then recombines the protons and electrons to form hydrogen (Johnson Matthey, 2022). Benefits of this method, alongside carbon-free hydrogen production, include a fast start-up time, no corrosion and simple maintenance compared to AWE. However, high manufacturing costs are currently holding back the development of this vital technology (Guo et al., 2019).
The potential of hydrogen has been displayed in various projects, as seen in Germany, where a fleet of hydrogen-powered trains has been developed and is projected to keep 4,000 tonnes of CO2 out of the atmosphere annually (Smithsonian Magazine, 2022). This is an incredible feat and a wonderful representation of the possibilities of hydrogen-based systems. However, a portion of the hydrogen powering Germany's trains comes from fossil fuels, which leads to CO2 release upstream, undermining the progress projects like these are trying to showcase. This is one example of many where industry is attempting to use blue or grey hydrogen as an easy way out rather than focusing on green hydrogen. The appeal is clear, especially for blue hydrogen: it has the highest efficiency with the lowest capital expenditure and the lowest cost of hydrogen production (Johnson Matthey, 2022). The cost of green hydrogen production has been estimated by the Sustainable Gas Institute at 4-9p per kWh, compared to 2-5p per kWh for blue hydrogen. Despite this cost leverage, the reality is that there are limitations to CCS. Over its life cycle it is not very beneficial for the environment, especially considering the "fugitive" amounts of methane and CO2 that escape into the atmosphere when natural gas is extracted (Bauer et al., 2022). In this way, blue hydrogen is not the holy grail of hydrogen production, but rather an interim source that could allow us to procure vast quantities of hydrogen more cheaply, helping to develop hydrogen infrastructure while research is done to reduce the costs of green hydrogen production.
Looking to the coming years, green hydrogen looks more favourable as the hydrogen market becomes more stable than its fossil-fuel counterpart, which is prone to volatility and price shocks that make fossil fuels more expensive and less reliable (RenewableUK, 2022). Additionally, scaling up will reduce the price gap between the two methods and provide a means to harness renewable energy that is in excess and would otherwise be wasted (Hydrogen Council, 2017).
The largest barrier standing in the way of any colour of hydrogen being used at large scale is a delivery infrastructure that is currently lacking. Plans need to account for the region and market, whether urban, interstate or rural (U.S. Department of Energy). It is expected that as the demand for hydrogen grows, delivery technologies will be encouraged to develop and improve. Smaller but still present issues include the lack of a specialised workforce and high operating costs. Many jobs are expected to be created by the development of the hydrogen economy, but few people will have the necessary training and skills to carry out these newly created roles, which will hinder progress as the industry matures. Finally, there are high energy losses at every point along the hydrogen supply chain: approximately 30-35% of the energy used to produce hydrogen is lost during electrolysis, while the use of hydrogen fuel cells typically leads to a loss of 40-50% (IEA, 2019).
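Chaining the quoted loss figures gives the implied round-trip efficiency of an electricity-to-hydrogen-to-electricity pathway:

```python
# Round-trip efficiency implied by the IEA loss figures quoted above:
# 30-35% lost in electrolysis, then 40-50% lost in the fuel cell stage.
worst_case = (1 - 0.35) * (1 - 0.50)  # 0.325
best_case = (1 - 0.30) * (1 - 0.40)   # 0.42
# Only about a third to two-fifths of the input electricity returns.
```

This is why hydrogen is most compelling where direct electrification is infeasible, rather than as a general-purpose replacement for batteries or grid transmission.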
These issues are overshadowed by hydrogen's capability as an energy store, supporting energy security. When used with electricity infrastructure, specifically electricity generated by renewables, electricity can be converted to hydrogen and then converted back by fuel cells. This means final users are less dependent on any specific energy source, making energy supplies more resilient (IEA, 2019). Furthermore, hydrogen is a form of chemical energy, so it can be stored and transported in a stable way, as oil and natural gas are; in this way, hydrogen can compete with them as an energy source for electricity generation (IEA, 2019). Hence, the positives associated with a hydrogen-led future outweigh the costs and issues linked to its development. John Clipsham, hydrogen development manager at the European Marine Energy Centre in Orkney, UK, said: "What is required is nothing short of reimagining our entire energy system." The promise that hydrogen, specifically green hydrogen, presents outweighs any qualms that might be had about the costs involved in redeveloping the energy sector. It is a clear and necessary investment in our future.
Transport and Environment (2022) Biofuels, Transport & Environment. Available at: https://www.transportenvironment.org/challenges/energy/biofuels/ (Accessed: December 22, 2022).
Wee, J.-H. (2007) “Applications of proton exchange membrane fuel cell systems,” Renewable and Sustainable Energy Reviews, 11(8), pp. 1720–1738. Available at: https://doi.org/10.1016/j.rser.2006.01.005.
Yue, M. et al. (2021) “Hydrogen Energy Systems: A critical review of technologies, applications, trends and challenges,” Renewable and Sustainable Energy Reviews, 146, p. 111180. Available at: https://doi.org/10.1016/j.rser.2021.111180.
Gaudernack, B. and Lynum, S. (1998) “Hydrogen from natural gas without release of CO2 to the atmosphere,” International Journal of Hydrogen Energy, 23(12), pp. 1087–1093. Available at: https://doi.org/10.1016/s0360-3199(98)00004-4.
Johnson Matthey (no date) Hydrogen, matthey.com. Available at: https://matthey.com/products-and-markets/energy/hydrogen (Accessed: November 20, 2022).
Guo, Y. et al. (2019) “Comparison between hydrogen production by alkaline water electrolysis and hydrogen production by PEM electrolysis,” IOP Conference Series: Earth and Environmental Science, 371(4), p. 042022. Available at: https://doi.org/10.1088/1755-1315/371/4/042022.
Hydrogen Council (2022) Hydrogen, scaling up. Available at: https://hydrogencouncil.com/en/study-hydrogen-scaling-up/ (Accessed: December 10, 2022).
IEA (2019) The Future of Hydrogen. IEA, Paris. Available at: https://www.iea.org/reports/the-future-of-hydrogen (Licence: CC BY 4.0).
Bauer, C. et al. (2022) “On the climate impacts of blue hydrogen production,” Sustainable Energy & Fuels, 6(1), pp. 66–75. Available at: https://doi.org/10.1039/d1se01508g.
Hydrogen delivery (no date) Energy.gov. Available at: https://www.energy.gov/eere/fuelcells/hydrogen-delivery (Accessed: December 10, 2022).
Boretti, A. (2021) “The hydrogen economy is complementary and synergetic to the electric economy,” International Journal of Hydrogen Energy, 46(78), pp. 38959–38963. Available at: https://doi.org/10.1016/j.ijhydene.2021.09.121.
Smithsonian Magazine (2022) Hydrogen-powered passenger trains are now running in Germany, Smithsonian.com. Smithsonian Institution. Available at: https://www.smithsonianmag.com/smart-news/hydrogen-powered-passenger-trains-are-now-running-in-germany-180980706/ (Accessed: December 20, 2022).
RenewableUK (2022) Green Hydrogen: Optimising Net Zero. Available at: https://cdn.ymaws.com/www.renewableuk.com/resource/resmgr/green_hydrogen_optimising_ne.pdf (Accessed: November 10, 2022).
Reviewed and edited by T. Lawson
ABSTRACT: Peatlands exist in wetlands and are created when waterlogged conditions stop the decomposition of dead plant material. This leads to the accumulation of carbon in the wetland environment (Joosten, 2015). Whilst peatlands cover only 3% of the land, they sequester more carbon than all the Earth’s forests combined (Joosten, 2015). They are essential for storing carbon and help with water regulation, biodiversity protection, food, and fuel (Page and Baird, 2016). Peatlands are also known to entomb artifacts, preserve bodies, and record past flora and climates (Page and Baird, 2016). Despite being beneficial to the environment in many ways, peatlands are actively being damaged. In Europe, 52% of active peatlands have been lost (Chapman et al., 2003). When peatlands are damaged – often for agricultural and forestry reasons – they release their sequestered carbon back into the atmosphere. Herein lies their potential to impact climate change. As a result, international organisations such as the United Nations have put peatlands on their target lists, advising their protection by local and national governments (Page and Baird, 2016). Whilst playing many other essential environmental roles, peatlands are space-efficient carbon storers. Their role is therefore significant, and their ability to release sequestered carbon into the environment makes them dangerous. Because of this, preserving them will allow countries to work towards meeting their carbon emissions goals by stopping peatlands from releasing unwarranted gases back into the atmosphere. Consequently, it is in countries’ best interest to mitigate damage to peatlands to help with the overall climate crisis.
Three areas will be examined to analyse how the world’s peatlands affect the climate crisis, predominantly focussing on their ability to sequester and emit carbon. Peatlands exist on all seven continents, so to take a holistic approach, we will look at peatlands existing in differing climate environments. Firstly, the United Kingdom will be discussed because it has the perfect climate for boreal and subarctic peatlands, which allows them to thrive in this area. Secondly, Indonesia will be used as an example both because it holds peat swamp forests and to show how the tropics differ for peatland existence. Thirdly, Canada – specifically the western boreal forests – will be analysed to show what a drier climate means for peatlands. Using these three individual examples, we will better understand the current conditions, threats, and rehabilitation projects for peatlands around the world.
The UK’s peatlands cover 15% of its land and hold approximately 2,302 megatons of carbon (Billett et al., 2010). Since the UK is warmer and wetter than many boreal and subarctic areas, it has an excellent climate for peatlands (Billett et al., 2010). Many peatlands in the UK have been storing carbon since the Holocene; however, during the Neolithic period, forest clearing greatly affected the growth of peat (Evans, 2017).
Fig. 1: The extent of peatland destruction, demonstrating why this is such a cause for concern.
Peatlands in Asia – the greatest number of which are in Indonesia – exist in peat swamp forests. They develop at low altitudes where woody plant debris experiences high levels of rainfall and high temperatures, creating peat soil (Posa et al., 2011). It was long considered that peatlands here held low biodiversity and that the land on which they existed needed to be used more efficiently. As a result, scientists largely disregarded them (Posa et al., 2011). Rather than conserving peatlands, people converted them to land used for agriculture and industry by logging and burning (Posa et al., 2011). In Indonesia especially, many peatlands were lost and used as arable land for palm oil production (Hergoualc’h et al., 2018). In the last decade, however, Indonesia has become increasingly aware of the value of peatlands, though there is still a long way to go. Whilst burning land is common practice because it can temporarily improve soil fertility, fires can easily get out of control when peat is burned (Hergoualc’h et al., 2018). This is because dry peat is highly flammable, conditions that are only heightened by the effects of El Niño (Posa et al., 2011). Further, draining peatlands leaves dried-out peat that can bolster agricultural development (Hergoualc’h et al., 2018). Despite these hurdles, some action has been taken to mitigate damage to peatlands. In 2011, the Indonesian government set in place a two-year moratorium, preventing the conversion of 11.2 million hectares of peatlands (Hergoualc’h et al., 2018). A 2016 revision of this plan involved a ban on peatland clearing and burning (Hergoualc’h et al., 2018). Although these changes are steps in the right direction, they are difficult to enforce completely.
Further, the fact that these changes were implemented so late means that only 7% of pristine peat remains in Sumatra and Kalimantan, areas in which peatlands are most abundant (Hergoualc’h et al., 2018). A research brief by Hergoualc’h et al. (2018) showed that despite laws and regulations, burnings and clearings still occurred among both small-scale farmers and the wider agricultural industry. Posa et al. (2011) reached a similar conclusion, conveying that simply researching the issue further would be insufficient in achieving the desired environmental outcomes: there needs to be further policy action.
Fig. 2: Before-and-after pictures of a peat swamp forest in Indonesia; the land is now used as a palm oil plantation.
Peatlands cover 17% of Canada’s land; in its western region, however, the coverage is 40-50% (Kuhry, 1994). Canada’s peatlands are of interest not only because of their vast coverage, but also because they are covered with Sphagnum moss and, in many areas, underlain by permafrost (Kuhry, 1994; Robinson and Moore, 2000).
Sphagnum is a type of moss that is considered an ‘ecosystem engineer’ in arctic, temperate, and boreal peatlands (Noble et al., 2019). These mosses often grow on bare peat, but they are eroded away when the land is drained and converted for agriculture or forestry (Chapman et al., 2003). Interestingly, Sphagnum mosses on peat in Canada’s western regions are especially prone to damage by fire, as lightning is a common cause of the area being set alight (Kuhry, 1994). Whilst fires vary significantly between different sites in the region, they reduce peat height and release sequestered carbon (Kuhry, 1994). Additionally, approximately 50% of north-western Canadian peatlands are in permafrost (Robinson and Moore, 2000), a state often caused when a significant amount of Sphagnum has grown above the water table; it is certainly possible that peatlands in permafrost have decreased carbon accumulation (Robinson and Moore, 2000).
As a result, Robinson and Moore’s (2000) research shows that as global warming continues, more carbon is predicted to be emitted into the atmosphere through increased decomposition. If the predicted 2-degree-Celsius increase occurs, collapse bogs will be produced. These will first increase rates of carbon accumulation, and then decrease accumulation under warmer and drier environments (Robinson and Moore, 2000). This process will eventually lead to an increase in the susceptibility of the land to catching alight. Kuhry (1994), whose conclusions align with this, notes that as global warming continues, further drought and increased wind will only exacerbate fires among peatlands.
The aforementioned information is a product of research from six sources; each of the three specified areas has two of these sources associated with it. By analysing peatlands from these three locations, it is clear that they can express huge variability. In the UK, peatlands in their current state will continue to offset carbon emissions. However, this will not continue with worsening climate change and developing land management (Billett et al., 2010). Currently, the UK is a leading research base for peatland restoration, even though 20% of the country’s peatlands are used as a source of carbon (Evans, 2017). The other 80% – peatlands that do sequester carbon – are unable to function at their full capacity, because half of them have been impacted through drainage (Evans, 2017). It is evident, therefore, that peatlands are a unique landform, and changes to their capacity to function effectively powerfully affect their impact on climate change. Indonesian and UK peatlands are comparable in that both struggle to store carbon at the rate they once did. Indonesian swamps store 22 tonnes of carbon per hectare, making them 12 times more effective at removing atmospheric carbon than tropical rainforests in Asia. However, an estimated 3% of total global anthropogenic emissions come from burning on peatlands (Hergoualc’h et al., 2018; Posa et al., 2011). Indonesia’s submission of its greenhouse gas goals to the United Nations, a plan to reduce business-as-usual emissions by 29%, is certainly an exciting prospect, though a misleading one (Hergoualc’h et al., 2018). This aim is unfortunately undermined by the fact that only a small proportion of the country’s peat is contained within areas recognised by the Environment Programme World Conservation Monitoring Centre (Posa et al., 2011).
As is the case with the UK, advancements in research are of benefit, but given the value that the land holds, managerial action needs to be accelerated. Finally, in Canada, approximately 455 gigatons of carbon are stored in peat deposits, but permafrost and fires still hinder their efficiency at mitigating climate change (Kuhry, 1994). Permafrost still covers most peatlands, thereby decreasing carbon accumulation (Robinson and Moore, 2000). Macrofossil analyses of charcoal horizons in peat demonstrate the damaging effects of fires on peat at the surface, slowing their ability to sequester carbon (Kuhry, 1994). Overall, peatlands in the UK, Indonesia, and Canada can sequester carbon impressively, even more efficiently than more widespread storers like rainforests. However, the numbers alone do not paint the whole picture. Previously untouched peatlands are actively being damaged, and those currently under protection are still suffering the effects of past destruction.
Restoring and protecting the world’s peatlands can significantly mitigate the effects of the climate crisis. Put simply, this is because they actively sequester carbon in a space-efficient manner. However, this only holds true if they are kept in pristine condition, which, regrettably, they are not; the damage peatlands endure tends to vary depending on the region in which they exist. When industry replaces peatlands, the damage is two-fold: not only does the industrial activity on the peatlands contribute to increased carbon emissions, but the land may no longer serve to sequester carbon. If peatlands are damaged, they can release their stored carbon back into the atmosphere. This makes them just as dangerous as they are valuable to the climate crisis. For peatlands to continue to have positive effects on climate change, further research should focus on the limits of their abilities, and strong political and non-governmental action will be needed to protect and rehabilitate the land on which they have developed.
Joosten, H. (2015) Peatlands, climate change mitigation and biodiversity conservation [Online], Copenhagen, Nordic Council of Ministers. Available at https://books.google.co.uk/books?id=xyOuCAAAQBAJ (Accessed 9 April 2023).
Noble, A., Crowle, A., Glaves, D. J., Palmer, S. M. and Holden, J. (2019) ‘Fire temperatures and Sphagnum damage during prescribed burning on peatlands’, Ecological Indicators, vol. 103, pp. 471-478 [Online]. Available at https://www.sciencedirect.com/science/article/pii/S1470160X19302869 (Accessed 9 April 2023).
Page, S. E. and Baird, A. J. (2016) ‘Peatlands and Global Change: Response and Resilience’, Annual Review of Environment and Resources, vol. 41, pp. 35-57 [Online]. Available at https://doi.org/10.1146/annurev-environ-110615-085520 (Accessed 9 April 2023).
Chapman, S., Buttler, A., Francez, A., Laggoun-Défarge, F., Vasander, H., Schloter, M., Combe, J., Grosvernier, P., Harms, H., Epron, D., Gilbert, D. and Mitchell, E. (2003) ‘Exploitation of northern peatlands and biodiversity maintenance: a conflict between economy and ecology’, Frontiers in Ecology and the Environment, vol. 1, no. 10, pp. 525-532 [Online]. Available at https://doi.org/10.1890/1540-9295(2003)001[0525:EONPAB]2.0.CO;2 (Accessed 9 April 2023).
Billett, M. F., Charman, D. J., Clark, J. M., Evans, C. D., Evans, M. G., Ostle, N. J., Worrall, F., Burden, A., Dinsmore, K. J., Jones, T., McNamara, N. P., Parry, L., Rowson, J. G. and Rose, R. (2010) ‘Carbon balance of UK peatlands: current state of knowledge and future research challenges’, Climate Research, vol. 45, pp. 13-29 [Online]. Available at https://www.jstor.org/stable/24861575 (Accessed 9 April 2023).
Evans, M. (2017) ‘Erosion, restoration and carbon cycling in UK peatlands’, Teaching Geography, vol. 42, no. 1, pp. 26-29 [Online]. Available at https://www.jstor.org/stable/26383181 (Accessed 9 April 2023).
Hergoualc’h, K., Carmenta, R., Atmadja, S., Martius, C., Murdiyarso, D. and Purnomo, H. (2018) ‘Managing peatlands in Indonesia: Challenges and opportunities for local and global communities’, Center for International Forestry Research, pp. 1-8 [Online]. Available at https://www.jstor.org/stable/resrep16232 (Accessed 9 April 2023).
Posa, C. M., Wijedasa, L. S. and Corlett, R. T. (2011) ‘Biodiversity and Conservation of Tropical Peat Swamp Forests’, BioScience, vol. 61, no. 1, pp. 49-57 [Online]. Available at https://www.jstor.org/stable/10.1525/bio.2011.61.1.10 (Accessed 9 April 2023).
Kuhry, P. (1994) ‘The Role of Fire in the Development of Sphagnum-Dominated Peatlands in Western Boreal Canada’, Journal of Ecology, vol. 82, no. 4, pp. 899-910 [Online]. Available at https://www.jstor.org/stable/2261453 (Accessed 9 April 2023).
Robinson, S. D. and Moore, T. R. (2000) ‘The Influence of Permafrost and Fire upon Carbon Accumulation in High Boreal Peatlands, Northwest Territories, Canada’, Arctic, Antarctic, and Alpine Research, vol. 32, no. 2, pp. 155-166 [Online]. Available at https://www.jstor.org/stable/1552447 (Accessed 9 April 2023).
An examination of volcanic eruptions with low Volcanic Explosivity Indexes in terms of economic damage and damage to global infrastructure.
Reviewed and edited by T. Burton and L. Deen
ABSTRACT: This paper aims to examine the impacts of small eruptions. While media coverage and public attention often focus on large eruptions, those that can alter global climates and lead to human fatalities, smaller eruptions pose a more frequent threat. They are more common and can result in major economic losses that still greatly impact human life. Much can be done to mitigate the effects of these eruptions, but as this field is still new, much remains unknown. Smaller eruptions have the potential to impact the global population, and more must be done to prepare for such events.
Some eruptions cover continents with ash, some project debris the size of cars, some induce floods, and others result in fatalities (British Geological Survey, 2022). Despite the significance of these issues, little attention is paid to the eruptions that cause economic strife and destruction of local infrastructure. These setbacks are not only more common than the disruption caused by larger eruptions, but also have the potential to impact the globe on a greater scale.
Worldwide, there are approximately 1,500 active volcanoes, with roughly 50 eruptions occurring annually (Ceurstemint, 2021). Large volcanic eruptions, those with a higher Volcanic Explosivity Index (VEI), are far less frequent (SDSU, 2010). The VEI measures the severity of a volcanic eruption based on its magnitude and intensity (National Parks Service, 2022). Eruptions with a high VEI have the potential to alter global climate and cover continents with volcanic ash (Self, 2015). For example, the 1815 eruption of Mt. Tambora released so much ash and aerosol that it caused what is commonly known as The Year Without a Summer, cooling the atmosphere by more than 1 degree Celsius and initiating a famine across Europe and North America (Brönnimann and Krämer, 2016). The impacts of larger eruptions are planet-altering, yet these eruptions occur 1,000 or 10,000 years apart, reducing the threat they pose due to their infrequency (SDSU, 2010). High-VEI eruptions should nonetheless be monitored and prepared for. Eruptions with a lower VEI are much more common, occurring multiple times a year (SDSU, 2010). Below is a figure developed by Oregon State University depicting the size and frequency of eruptions with differing VEIs (SDSU, 2010). Note the shift in frequency from lower-VEI eruptions to high-VEI eruptions (SDSU, 2010).
Figure 1: Comparative Volcanic Explosivity Index From SDSU (2010).
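The VEI is a roughly logarithmic scale: each step up corresponds to about a tenfold increase in erupted volume. As an illustrative sketch, the classification can be expressed as a small function (the volume thresholds follow the commonly cited VEI volume classes; the function itself is an assumption for the example, not taken from SDSU):

```python
# Illustrative sketch: map erupted tephra volume (cubic metres) to a VEI
# class. Thresholds follow the commonly cited scheme: VEI 2 begins at
# 10^6 m^3 and each subsequent class covers ten times more volume, up to 8.

def vei_from_volume(volume_m3: float) -> int:
    """Return the Volcanic Explosivity Index class for an erupted volume."""
    if volume_m3 < 1e4:
        return 0          # non-explosive
    if volume_m3 < 1e6:
        return 1          # small
    vei = 2
    threshold = 1e7       # upper bound of the VEI 2 class
    while volume_m3 >= threshold and vei < 8:
        vei += 1
        threshold *= 10
    return vei

# Eyjafjallajokull 2010 erupted roughly 0.25 cubic kilometres of tephra,
# which falls in the VEI 4 class (0.1-1 km^3).
print(vei_from_volume(0.25e9))  # 4
```

Because each class is ten times larger than the last, a VEI 4 event like Eyjafjallajökull is tiny compared with a VEI 7 event like Tambora, yet, as the figure shows, it is vastly more frequent.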
Although the same level of threat is not posed by eruptions with a low VEI, increased globalisation and urbanisation have massively increased the impact they can have. In 2010, Iceland’s Eyjafjallajökull erupted, sending an ash plume kilometres into the atmosphere (NASA Earth Observatory, 2023). The ash cloud Eyjafjallajökull created led to an immediate closure of much of European airspace: flights were cancelled, 10 million passengers were stranded, and economic losses ran into billions of euros (Alexander, 2013). The eruption had a VEI of 4, making it a comparatively moderate event. However, due to the interconnectedness of modern air travel, much of the globe was impacted (Alexander, 2013).
In the past, eruptions affected only the area surrounding a volcano, within a few kilometres at most. As urbanisation increases and communities become more crowded, the impacts grow.
Currently, more than ever, the possibility of volcanic eruptions carries the risk of significant loss, a risk that only rises as population density increases.
Globalisation concentrates critical systems and resources in environments already exposed to volcanic hazards (Mani, Tzachor and Cole, 2021). In such an environment, even volcanic eruptions with a low VEI can disrupt essential resources (Mani, Tzachor and Cole, 2021). With this in mind, assessment of volcanic hazards must take place: impacts that were once confined to a volcano’s immediate surroundings can, in the densely populated world we live in, have massive impacts (Mani, Tzachor and Cole, 2021).
9% of the world’s population, which is over 500 million people, is exposed to volcanic hazards (Doocy et al., 2013). Currently, there are hundreds of active volcanoes across the globe, many of which are located in regions experiencing rapid population growth, such as those near or within large cities (Rymer, 2000). These environments include Naples, Mexico City, and locations throughout both Japan and the Philippines (Doocy et al., 2013). Volcanoes such as Mt. Rainier, located in the Pacific Northwest, pose a threat to Seattle, a major city, global hub, and centre for technological innovation within the United States (Abbott, 2001). Even countries such as the UK, with no active volcanoes, are at risk, as seen during the 2010 eruption of Eyjafjallajökull, which caused great economic strife for the entirety of the United Kingdom (Wilson et al., 2014).
The eruption of these centrally located volcanoes does not need to be big to have a major impact on human life. Major cities are highly globalised and interconnected: a disruption on one side of the globe can result in major financial loss thousands of miles away. With many major cities located in regions likely to face volcanic hazards in the near future, much is at risk (Mani, Tzachor and Cole, 2021). A map has been created to show clusters of resources around the globe, areas with high numbers of critical systems and infrastructures (Mani, Tzachor and Cole, 2021). Seven major pinch points are indicated, all of which are located in proximity to global hubs and major cities (Mani, Tzachor and Cole, 2021).
Figure 2: The Pinch Points of Clustered Critical Systems and Infrastructures from Mani, Tzachor and Cole (2021)
The Taiwanese cluster, specifically, sits on the edge of the metropolitan Taipei region of Taiwan, which includes the island’s main infrastructure as well as the key supplier of semiconductors to the global market (Mani, Tzachor and Cole, 2021). The technology produced within this region represents billions in market share (Mani, Tzachor and Cole, 2021). An eruption here, even on a small scale, could blanket transportation networks in thick ash (Pu et al., 2020), disrupting the production of technology and impacting not only Taiwan but every country that relies on such technology.
A similar picture emerges if an eruption occurs at the Mediterranean pinch point. This region contains shipping routes connecting Europe to Africa, North America, and Asia (Mani, Tzachor and Cole, 2021). Shipping lanes, ports, and related facilities may all be threatened; when the Suez Canal was disrupted for 6 days in 2021, billions of dollars in global trade were held up (Russon, 2021). Comparable disruption could follow if an eruption takes place, as this area is also prone to earthquakes, landslides, and tsunamis (Mani, Tzachor and Cole, 2021). In the event of an eruption, the disruption would not be brief: entire networks would be forced to reroute. The Mediterranean is home to volcanoes including Campi Flegrei, which have histories of eruptions of VEI 3-6 (Mani, Tzachor and Cole, 2021). Even at a lower VEI, global economies would suffer (Mani, Tzachor and Cole, 2021). All the major pinch points could impact global trading networks and cause major economic loss. With such a globalised society, even a small eruption in one of these regions could impact the globe.
The seven points indicated on the map include the Taiwanese pinch point, the Korean-Chinese pinch point, the Luzon pinch point, the Malay pinch point, the Mediterranean pinch point, and the Northwest pinch point (Mani, Tzachor and Cole, 2021).
With such major threats being posed across the globe, what are the possible solutions or methods of risk management that can take place to safeguard international economies?
Within the last 250 years the movement of people from rural areas to cities has majorly increased, and within recent history this migration has often concentrated in more economically successful regions (Chester and Duncan, 2000). However, the last 50 years has seen major movement occurring in less economically developed areas (Chester and Duncan, 2000). In these now more populated and less developed regions, large risks are posed by natural hazards such as volcanic eruptions, as these areas have weaker infrastructure than their more economically stable counterparts (Chester and Duncan, 2000). If these regions could effectively respond to the possible hazards posed by volcanic activity, their impacts would majorly lessen (USGS, 2021). The best ways to reduce the risks posed by such natural disasters are specific predictions by use of surveillance techniques and general predictions by using hazard maps (USGS, 2021). Compared to super-volcanic eruptions, the risks posed by smaller eruptions are not solely those posed against human life (Mani, Tzachor and Cole, 2021). The majority of smaller eruptions put local infrastructure at risk, meaning that mitigation must come in the form of restructuring local economies and cities in order to better prepare for such events (Mani, Tzachor and Cole, 2021).
Risk assessment and management allow losses to be greatly minimized. If quantitative risk assessments are done and provide numerical estimations for the risks certain eruptions may pose, then comparisons can be facilitated between the hazards and the communities at risk (Mani, Tzachor and Cole, 2021).
In order for risk assessments to be done and preventive measures to be taken, monitoring the hazards must occur first (Wilson et al., 2014).
Monitoring may include techniques such as seismographic detection of earthquakes, measurements of ground deformation, and changes in magnetic fields (Wilson et al., 2014). These techniques are then compared to normal background levels of activity and thus eruptions can be more easily predicted (Wilson et al., 2014).
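The comparison against normal background levels described above can be sketched as a simple threshold check. A minimal illustration (the daily counts, baseline window, and three-sigma threshold are all assumptions for the example, not values from Wilson et al.):

```python
from statistics import mean, stdev

def flag_anomalies(counts, baseline_days=30, sigma=3.0):
    """Flag days whose earthquake count exceeds the baseline mean
    by more than `sigma` standard deviations."""
    baseline = counts[:baseline_days]
    mu, sd = mean(baseline), stdev(baseline)
    threshold = mu + sigma * sd
    return [i for i, c in enumerate(counts[baseline_days:], start=baseline_days)
            if c > threshold]

# Hypothetical daily earthquake counts near a volcano: a quiet month of
# background seismicity, then a sudden swarm on the final two days.
quiet = [4, 5, 6, 5, 4, 5, 6, 4, 5, 5] * 3   # 30 days of background
counts = quiet + [5, 6, 48, 55]              # swarm begins on day 32
print(flag_anomalies(counts))                # [32, 33]
```

Real monitoring networks combine many such signals (seismicity, deformation, gas flux) rather than a single count, but the underlying idea is the same: establish a baseline, then flag statistically unusual departures from it.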
Despite the impact volcanoes can have on both local and global scales, little research has been done in the past to uncover the best solutions to such risks (Choumert-Nkolo, Lamour and Phélinas, 2021). Once a possible eruption is noted, the best response for a community is to set up information campaigns about the risks, allowing both individuals and businesses to aptly prepare (Choumert-Nkolo, Lamour and Phélinas, 2021). Additionally, the deeper issues and instabilities within infrastructures must be dealt with at the policy level (Choumert-Nkolo, Lamour and Phélinas, 2021). Despite significant progress in this field, more comprehensive research must be done on the mechanisms behind the frequency of eruptions as well as the globalisation leading to the clustering of resources (Mani, Tzachor and Cole, 2021). Improvement in these two areas would give policymakers vital information that may minimise future negative impacts (Choumert-Nkolo, Lamour and Phélinas, 2021).
Figure 3 is a graphic published in the Journal of Volcanology and Geothermal Research which shows the individual assessments that must take place for comprehensive mitigation, and to inform policymakers (Wilson et al., 2014).
Figure 3: Assessments following a volcanic eruption. From Wilson et al. (2014).
As illustrated above, many factors play a part in successful mitigation (Wilson et al., 2014). Therefore, in order to protect these growing cities, hazard assessments must be done to examine the risk that an eruption poses, exposure assessments must take place to assess how many people may be affected by such an event, and vulnerability assessments must take place to examine just how impacted a community would be if an event occurs (Wilson et al., 2014).
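The three assessments above can be folded into a single comparable number, which is what the quantitative risk assessments mentioned earlier aim to provide. A minimal sketch, where the multiplicative risk model and all example values are illustrative assumptions rather than figures from Wilson et al.:

```python
def risk_score(hazard: float, exposure: float, vulnerability: float) -> float:
    """Combine the three assessments into one comparable score.

    hazard: annual probability of a damaging eruption (0-1)
    exposure: people or asset value within the hazard zone
    vulnerability: expected fraction of exposed value lost (0-1)
    """
    return hazard * exposure * vulnerability

# Two hypothetical communities near the same volcano: a large city with
# robust infrastructure, and a small village with fragile infrastructure.
city = risk_score(hazard=0.01, exposure=500_000, vulnerability=0.2)
village = risk_score(hazard=0.01, exposure=2_000, vulnerability=0.6)

# The city's expected annual loss dominates despite its lower vulnerability,
# which is the kind of comparison these assessments are meant to enable.
print(city, village)
```

Expressing risk as one number is what allows planners to compare communities and prioritise mitigation spending, which is precisely the comparison between hazards and communities that the quantitative approach facilitates.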
The risks posed by volcanoes are many, and though small eruptions pose unique risks towards economies and different global networks, mitigation can still take place.
Small eruptions can have massive impacts. Though human life is often threatened to a lesser extent by small eruptions than by larger ones, the threats posed by such eruptions are still impactful. Increased levels of globalisation in cities and communities that were once isolated allow small eruptions to affect a greater number of people and resources. Past small eruptions have majorly impacted global economies, and through mitigation much can be done to reduce potential losses. Though the study of such processes is still young, if research continues and increases, much can be protected during future eruptions.
Abbott, C. (2001). Greater Portland: Urban Life and Landscape in the Pacific Northwest. [online] Google Books. University of Pennsylvania Press. Available at: https://books.google.co.uk/books?hl=en&lr=&id=CxUDCgAAQBAJ&oi=fnd&pg=PA1&dq=Volcanoes+such+as+Mt.+Rainier [Accessed 2 Apr. 2023].
Alexander, D. (2013). Volcanic ash in the atmosphere and risks for civil aviation: A study in European crisis management. International Journal of Disaster Risk Science, 4(1), pp.9–19.
British Geological Survey (2022). Volcanic hazards. [online] British Geological Survey. Available at: https://www.bgs.ac.uk/discovering-geology/earth-hazards/volcanoes/volcanic-hazards/.
Brönnimann, S. and Krämer, D. (2016). Tambora and the ‘Year Without a Summer’ of 1816: A Perspective on Earth and Human Systems Science. GEOGRAPHICA BERNENSIA. [online] Available at: https://boris.unibe.ch/81880/1/tambora_e_A4l.pdf.
Cascades Volcano Observatory (2022). Tephra Fall Is a Widespread Volcanic Hazard | U.S. Geological Survey. [online] www.usgs.gov. Available at: https://www.usgs.gov/observatories/cascades-volcano-observatory/tephra-fall-widespread-volcanic-hazard.
Ceurstemint, S. (2021). Unravelling the when, where and how of volcanic eruptions | Research and Innovation. [online] ec.europa.eu. Available at: https://ec.europa.eu/research-and-innovation/en/horizon-magazine/unravelling-when-where-and-how-volcanic-eruptions.
Chester, D. and Duncan, A. (2000). The increasing exposure of cities to the effects of volcanic eruptions: A global survey. Environmental Hazards, 2(3). doi:https://doi.org/10.1016/S1464-2867(01)00004-3.
Choumert-Nkolo, J., Lamour, A. and Phélinas, P. (2021). The Economics of Volcanoes. Economics of Disasters and Climate Change, 5(2), pp.277–299. doi:https://doi.org/10.1007/s41885-021-00087-2.
Doocy, S., Daniels, A., Dooling, S. and Gorokhovich, Y. (2013). The Human Impact of Volcanoes: a Historical Review of Events 1900-2009 and Systematic Literature Review. PLoS Currents doi:https://doi.org/10.1371/currents.dis.841859091a706efebf8a30f4ed7a1901.
Mani, L., Tzachor, A. and Cole, P. (2021). Global catastrophic risk from lower magnitude volcanic eruptions. Nature Communications, 12(1). doi:https://doi.org/10.1038/s41467-021-25021-8.
NASA Earth Observatory (2023). Eruption of Eyjafjallajokull Volcano, Iceland. [online] earthobservatory.nasa.gov. Available at: https://earthobservatory.nasa.gov/images/event/43253/eruption-of-eyjafjallajoumlkull-volcano-iceland [Accessed 2 Apr. 2023].
National Centre for Atmospheric Science (2020). Eyjafjallajökull 2010: How an Icelandic volcano eruption closed European skies. [online] NCAS. Available at: https://ncas.ac.uk/eyjafjallajokull-2010-how-an-icelandic-volcano-eruption-closed-european-skies/.
National Parks Service (2022). Volcanic Explosivity Index - Volcanoes, Craters & Lava Flows (U.S. National Park Service). [online] www.nps.gov. Available at: https://www.nps.gov/subjects/volcanoes/volcanic-explosivity-index.htm.
Pu, H. C. et al. (2020). Active volcanism revealed from a seismicity conduit in the long-resting Tatun Volcano Group of Northern Taiwan. Scientific Reports, 10, 6153.
Russon, M.-A. (2021). The cost of the Suez Canal blockage. BBC News. [online] 29 Mar. Available at: https://www.bbc.com/news/business-56559073. Rymer, H. (2000). Living with volcanoes. Geology Today, 16(1), pp.26–31. doi:https://doi.org/10.1046/j.1365-2451.2000.1601006.x.
SDSU (2010). How Big are Volcanic Eruptions? [online] Volcano World. Available at: https://volcano.oregonstate.edu/how-big-are-eruptions. Self, S. (2015). Chapter 16 - Explosive Super-Eruptions and Potential Global Impacts. [online] ScienceDirect. Available at: https://www.sciencedirect.com/science/article/pii/B9780123964533000162 [Accessed 2 Apr. 2023].
Sigurdsson, H., Houghton, B.F., Mcnutt, S.R., Rymer, H. and Stix, J. (2015). The encyclopedia of volcanoes. 2nd ed. London, Uk ; San Diego, Ca: Elsevier/Academic Press.
USGS (2021). Why is it important to monitor volcanoes? | U.S. Geological Survey. [online] www.usgs.gov. Available at: https://www.usgs.gov/faqs/whyit-important-monitor-volcanoes.
Wilson, G., Wilson, T.M., Deligne, N.I. and Cole, J.W. (2014). Volcanic hazard impacts to critical infrastructure: A review. Journal of Volcanology and Geothermal Research, 286, pp.148–182. doi:https://doi.org/10.1016/j.jvolgeores.2014.08.030.
Reviewed and edited by L. Deen and S. Sandanatavan
ABSTRACT: Stem cell research has made immense waves in the world of science, entering the scene in the 1980s. Since then, a multitude of innovative research has been applied in almost all aspects of biology. This review will outline some of the ground-breaking applications of stem cells that are being used in the current day, such as regenerating damaged tissue, creating “disease in a dish” models, utilising stem cells as “trojan horses” to deliver chemotherapy drugs to target cancer cells, generating functional human organs, discovering specific biomarkers connected to disease, and even forming a deeper understanding of crucial biological processes such as human development.
Stem cells have the ability to differentiate into 200 different cell types [3].
Stem cells can be categorised in several ways, with embryonic stem cells (ESCs) at the forefront of recent studies. ESCs are harvested from embryos that are typically 3–5 days old, when the embryo is a blastocyst of around 150 cells. The ESCs are found in the inner cell mass of the embryo, and harvesting them destroys the blastocyst. These cells can be collected through various methods. The reason this stem cell type is so highly sought after is its pluripotency, allowing it to differentiate into all types of germline and somatic tissue in vitro. This is very important, as these cells can be used to form specific cell types such as neural cells, cardiomyocytes, islets, dendritic cells, osteoblasts, chondrocytes and hepatocytes.
The potential of this power is seemingly limitless, and research on these cells has accelerated over the past two decades, allowing studies to develop therapies for a multitude of disease pathologies. Examples include spinal cord injury, Parkinson's disease, heart failure, diabetes, cancer, osteoporosis, bone fractures, arthritis and liver failure [2, 4].
This review will outline one of the recent ground-breaking discoveries within this field, which is providing scientists with a deeper understanding of early human development itself.
Using stem cells to regenerate damaged tissue
Researchers at the University of Washington have had success using stem cells to regenerate damaged heart tissue in animal models [2, 4], and there are ongoing clinical trials to test the use of stem cells for heart repair in humans. Stem cells have also been used to regenerate damaged nerve tissue in animal models, and there are ongoing clinical trials to test their use for the treatment of spinal cord injuries and other neurological conditions.
As we age, injuries become more common and can significantly impact our mobility and quality of life. Common injuries include back, shoulder, and knee damage, which can be caused by a variety of factors [5].
One promising alternative to traditional surgery is stem cell therapy, which uses the body's own cells to repair and regenerate damaged tissues and joints. This type of regenerative therapy can be effective in restoring joint health and function and relieving pain, without the risks associated with invasive surgery. For example, stem cell therapy may be a good option for those with a disc herniation, torn rotator cuff, or damaged cartilage in the knee [5].
Rotator cuff repair is one of the most commonly performed musculoskeletal surgeries. Despite the high rate of rotator cuff (RC) tears, surgical treatments provide patients with limited functional gains and result in high re-tear rates.
Tendon tissues have limited capacity for regeneration as they are hypocellular, meaning they have fewer cells than normal tissue.
Stem cells are currently being applied to increase our understanding of diseases, to regenerate healthy cells that have become diseased (often referred to as regenerative medicine), and to test the efficacy of drugs on specific cell types [2].
However, controversy arises when it comes to ESCs. As these cells are typically sourced from human embryos, the question of ethics has become a major topic within this research field. The National Institutes of Health has placed guidelines on researching these cells, imposing large usage restrictions and leading to a limited supply. It is from here that other categories of stem cells come into play: adult stem cells and induced pluripotent stem cells (iPSCs).
Adult stem cells are found in most adult tissues but have a restricted ability to differentiate due to their stage of maturation and identity [2].
These cells are limited to producing the specific cell types of the tissues they were sourced from. For instance, blood stem cells can only differentiate into cells found in the blood. This trait is defined as multipotency: the cells can differentiate into multiple types of specialised cells, but not all types. Adult stem cells are also less stable than ESCs due to their limited self-renewal ability.
Researchers utilise these cells because they do not pose the same ethical issues as ESCs; however, they are difficult to isolate from adult tissues.
A good middle ground between these two categories of stem cells is iPSCs. iPSCs are adult skin or blood cells that have been reprogrammed to resemble an embryonic stem cell state [4]. This allows iPSCs to differentiate into any human cell type [7]. The development of iPSCs was a vital step in stem cell research, as they allow for an unlimited source of the cell types needed for therapies [4]. The utilisation of stem cells has led to a plethora of life-changing research studies currently being applied to all types of disease. Abilities such as restoring eyesight, regenerating organs and even somatic cell nuclear transfer (the process underpinning cloning) became a reality as these cells became better understood and manipulated.
Stem cells now boast an entire field of research, which continues to expand its limits and possibilities. In recent years there have been massive efforts in tendon regeneration research [6].
The understanding of the tendon stem cell niche has developed drastically, allowing for cellular differentiation, improved scaffold fabrication techniques, and the identification of the phenotypic developmental process in tendon cells [7]. However, the promising results of in vitro models have not translated to in vivo experiments.
Interestingly, a study investigated structural matrices that mimicked the tendon microenvironment as a cell delivery vehicle in a rotator cuff tear model [7]. The study found that in rats with rotator cuff damage, using a matrix to deliver mesenchymal stem cells resulted in better regeneration than suture repair or repair with augmentation at 6 and 12 weeks post-surgery [7]. The local delivery of these mesenchymal stem cells improved mechanical properties and tissue morphology (see Figure 1). These results propose a new treatment model for rotator cuff tendon tears based on matrices delivering stem cells for a regenerative healing response [7].
Back pain is a common concern that often leads people to seek medical treatment. This pain can be caused by a variety of factors, including muscle strain, tendon or ligament tears, and damage to the spine.
One common approach to treating spinal damage is spinal fusion surgery, which involves immobilising the injured portion of the spine [5]. However, this surgery can have a number of complications, including infection, broken hardware, decreased mobility, and weakness in other areas of the spine [5]. Stem cell therapy offers a safer and less invasive alternative for addressing the root cause of back pain and instability. Research has shown that stem cells can help repair degenerative discs and damaged tendons and ligaments, improving function in the joints of the spine [2]. In some cases, stem cell therapy may be used in conjunction with spinal fusion surgery to increase the chances of a successful outcome. Stem cell therapy is an emerging treatment option for individuals with back pain and spinal damage.
Figure 1: Non-augmented versus augmented rat supraspinatus model. A) Non-augmented model with modified Mason-Allen stitch; purple shows the suture, * marks areas of stress. B) Integrated matrix augmentation model of supraspinatus tendon repair; green shows the side of cell seeding in the matrix/mesenchymal stem cell group. (Modified from Peach et al. [7])
According to the Virginia Spine Institute, this therapy has the potential to provide a safer and less invasive alternative to traditional treatments, such as spinal fusion surgery, for addressing the root cause of back pain and instability [8].
One example of how stem cell therapy is being used to treat back pain and spinal damage is through the repair of degenerative discs. Degenerative discs, which are a common cause of back pain, occur when the discs in the spine become damaged or lose their ability to function properly. This can lead to pain, stiffness, and difficulty with movement. Stem cell therapy has shown promise in addressing this issue by helping to regenerate damaged discs and improve their function. According to BioMed Central, stem cells can be injected into the affected discs to stimulate the production of new, healthy cells, which can help to restore their structure and function [9].
Stem cell therapy may also be used in conjunction with spinal fusion surgery to increase the chances of a successful outcome. Spinal fusion surgery involves immobilising the injured portion of the spine to promote healing and reduce pain [5]. Combining stem cell therapy with spinal fusion surgery may reduce the risk of complications and improve the final outcome of the surgery. This was concluded in a review by Schroeder, which noted that stem cell therapy may help heal spinal cord injury and promote bone growth in spinal fusion, which in turn can strengthen the fusion and reduce the risk of complications, thus reducing morbidity rates [10].
Overall, stem cell therapy shows promise as a treatment option for individuals with back pain and spinal damage. While more research is needed to fully understand the potential benefits and limitations of this approach, current evidence suggests that stem cell therapy may be effective in repairing degenerative discs, damaged tendons and ligaments, and improving the outcome of spinal fusion surgery.
The "disease in a dish" stem cell model, also known as the human disease model or organoid model, is a revolutionary tool in the field of biomedical research. It involves using stem cells, which are undifferentiated cells that have the ability to differentiate into various cell types, to create miniature versions of organs or tissues in a laboratory setting. These organoids
can then be used to study the development and function of normal tissues, as well as the underlying mechanisms of various diseases. One of the main advantages of the disease in a dish model is that it allows researchers to study human cells in a more realistic and controlled environment. Traditional in vitro cell culture systems, such as those using cell lines derived from cancer cells or primary cells isolated from tissues, often lack the complexity and diversity of the in vivo environment. In contrast, stem cells can be derived from a variety of sources, including embryonic stem cells, induced pluripotent stem cells (iPSCs), and adult stem cells, and can be induced to differentiate into a wide range of cell types, including those found in the nervous system, cardiovascular system, and gastrointestinal tract. This versatility allows researchers to create organoids that closely mimic the structure and function of the corresponding tissues in the human body [26].
One example of the use of the disease in a dish model is in the study of neurodegenerative
diseases, such as Parkinson's disease [11]. These diseases are characterised by the progressive loss of specific types of neurons, leading to cognitive decline and motor dysfunction. Using iPSCs derived from patients with these diseases, researchers have been able to generate neural organoids that mimic the brain regions affected by these conditions. These organoids have provided valuable insights into the mechanisms of neurodegeneration, as well as potential therapeutic targets. For example, a study conducted by John Dimos and his colleagues modelled amyotrophic lateral sclerosis (ALS), a neurological disease that weakens muscles and their function [12]. This research generated pluripotent stem cells from an individual patient, allowing for the extensive production of the cell types affected by the patient's disease pathology [12]. iPSCs derived from individuals with genetic syndromes can be used for researching these diseases and developing therapeutic compounds [12]. iPSCs have the ability to self-renew and differentiate into many different cell types, providing an almost limitless source of material for study [11]. iPSCs have properties similar to embryonic stem cells and can be created from human fibroblasts through a process called reprogramming. However, it was not known whether iPS cells could be produced directly from elderly patients with chronic diseases. This study generated iPS cells from an 82-year-old woman with a genetic form of ALS [12]. These patient-specific iPS cells were able to differentiate into motor neurons, a cell type that is lost in ALS. This discovery is revolutionary, as the disease-in-a-dish model will allow copious diseases to be studied in depth, and potentially allow newly generated healthy cells to be reinstated into the body as therapy [12].
Another area where the disease in a dish model has been particularly useful is in the study of cancer. Cancer is a complex and heterogeneous disease, with many different subtypes and genetic mutations driving its development and progression. Traditional cancer cell lines, which are derived from tumours and grown in culture, often lack the genetic diversity and complexity of primary tumours and may not accurately represent the cancer in the patient. In contrast, iPSCs can be generated from cancer tissues or even normal tissues and induced to differentiate into cancer cells, allowing researchers to study the effects of specific genetic mutations and therapeutic interventions in a more relevant context [27]. For example, researchers have used iPSCs to study the effects of specific genetic mutations on the development and progression of various cancer types, including breast, ovarian, and pancreatic cancer [13, 27].
In addition to their use in studying specific diseases, stem cell-derived organoids have also been used to investigate the development and function of normal tissues and organs. For example, researchers have used stem cells to generate miniature versions of the liver, kidney, and gut, which have provided valuable insights into the developmental and functional processes of these organs. These organoids have also been used to test the toxicity of drugs and chemicals, providing a more accurate and predictive model compared to traditional in vitro cell culture or animal models [14].
Figure 2: Diagram of the applications of organoid systems. Organoids can be derived from human or animal, pluripotent or adult stem cells. Normal or diseased organoids can be applied in a multitude of ways, such as organ development, drug discovery, personalised medicine, toxicology, organ transplant, organ repair or disease modelling. (Modified from Garcia et al. [14])
In conclusion, the disease in a dish stem cell model has revolutionised the field of biomedical research by providing a more realistic and controlled environment for studying human cells and tissues. Its versatility and ability to mimic the structure and function of normal and diseased tissues has allowed researchers to gain valuable insights into the underlying mechanisms of various diseases and to identify potential therapeutic targets. The use of stem cells in this model has the potential to significantly advance our understanding of human biology and disease and may ultimately lead to the development of more effective and targeted therapies for a wide range of conditions.
Researchers are exploring the use of stem cells as "trojan horses" that deliver chemotherapy drugs directly to cancer cells. The idea is to use the stem cells' ability to home in on and target cancer cells while carrying chemotherapy drugs within them. This approach has the potential to improve the efficacy and specificity of chemotherapy while minimising its side effects on healthy cells.
An example of this approach is the use of mesenchymal stem cells (MSCs) to deliver chemotherapy to prostate cancer cells [15]. MSCs are a type of stem cell found in bone marrow and other tissues, and they have the ability to migrate towards areas of inflammation and tissue damage. In this study, scientists developed a new method for delivering anticancer drugs to prostate cancer cells while minimising harm to the rest of the body [15]. They used human stem cells loaded with tiny particles containing a drug called G114. When the stem cells were exposed to prostate cancer cells, the G114 was released and killed the cancer cells. This approach was successful both in lab tests and in tests on mice with prostate cancer. The researchers at Johns Hopkins hope that this method can be developed into a treatment for humans with prostate cancer [15].
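The release behaviour described above can be illustrated with a generic first-order kinetics curve, a standard way of describing how a loaded carrier releases its payload over time. This is a minimal sketch under an assumed model: the rate constant and time points are invented for illustration and are not taken from the G114 study.

```python
# Generic first-order drug-release model: an illustrative sketch of how a
# particle-loaded carrier might release its payload over time. The rate
# constant k is an invented value, not a measured parameter from the study.
import math

def fraction_released(t_hours, k=0.1):
    """Cumulative fraction of drug released after t hours (first-order model)."""
    return 1.0 - math.exp(-k * t_hours)

# Print the release curve at a few illustrative time points.
for t in (0, 12, 24, 48):
    print(f"{t:3d} h: {fraction_released(t):.2f} of payload released")
```

Under this model release starts at zero, rises quickly at first, and asymptotically approaches complete release, which is the qualitative behaviour usually reported for particle-based delivery systems.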
Overall, the use of stem cells as a "trojan horse" for delivering chemotherapy is an exciting area of research that holds great promise for improving cancer treatment. While more research is needed to fully understand the mechanisms at play and to optimise this approach, the initial results are encouraging and suggest that stem cells could play a valuable role in the fight against cancer.
The use of stem cells to generate functional human organs, a process known as organoid technology, has gained significant attention in recent years due to its potential for improving our understanding of human development and disease, as well as for providing a renewable source of transplantable organs.
One example of the use of stem cells in organoid technology is the generation of functional human liver organoids [17]. The liver is a vital organ with many important functions, including detoxification, metabolism, and synthesis of proteins and other molecules. However, there is a shortage of donor organs available for transplantation, and current treatments for liver diseases often involve using drugs to support the remaining liver function rather than replacing the damaged tissue. To address this problem, researchers have been exploring the use of stem cells to generate functional liver organoids [17].
One study published in Nature [16] described the use of human pluripotent stem cells (hPSCs) to generate functional liver organoids in a dish. The researchers used a combination of biochemical signals and physical forces to guide the hPSCs to differentiate into liver cells, and the resulting organoids were able to perform key liver functions such as drug metabolism and synthesis of proteins [16]. A separate study took this therapy further and transplanted liver organoids into mice with acute liver failure. Once integrated into the host liver, there was an improvement in hepatic function and an increased
Overall, the use of stem cells to generate functional human organs is a promising field with great potential for improving our understanding of human development and disease, as well as for providing a renewable source of transplantable organs. While more research is needed to fully optimise this approach and to translate it to clinical use, the initial results are encouraging and suggest that stem cells could play a vital role in the future of organ transplantation.
Using stem cells to generate specific biomarkers related to disease
Biomarkers are indicators of the presence or severity of a particular disease or condition. They can be used to diagnose diseases, monitor their progression, and predict their outcomes. Identifying biomarkers for diseases can help guide treatment and improve patient outcomes [13].
One way stem cells are used to identify biomarkers of disease is through the study of iPSCs. By generating iPSCs from patients with a particular disease, researchers can study the cells in the lab and identify differences between the iPSCs of healthy individuals and those with the disease. These differences, or biomarkers, may be at the genetic, epigenetic, or transcriptional level and can provide insight into the underlying cause of the disease [18].
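The comparison described above, looking for transcriptional differences between iPSC lines from healthy donors and patients, can be illustrated with a toy differential-expression test. This is a minimal sketch, not the methodology of any cited study: the gene names, expression values, and significance threshold are all invented for demonstration.

```python
# Illustrative sketch: flag candidate transcriptional biomarkers by comparing
# gene expression between "healthy" and "disease" iPSC lines with a t-test.
# All genes and values are simulated; this is not real patient data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
genes = ["GENE_A", "GENE_B", "GENE_C"]

# Simulated expression matrices: rows = iPSC lines, columns = genes.
healthy = rng.normal(loc=10.0, scale=1.0, size=(8, 3))
disease = rng.normal(loc=10.0, scale=1.0, size=(8, 3))
disease[:, 1] += 3.0  # simulate a real expression shift in GENE_B only

# Test each gene for a difference in mean expression between the groups.
for i, gene in enumerate(genes):
    t, p = stats.ttest_ind(healthy[:, i], disease[:, i])
    flag = "candidate biomarker" if p < 0.01 else "no clear difference"
    print(f"{gene}: p = {p:.4f} ({flag})")
```

In practice such comparisons span thousands of genes and require multiple-testing correction, but the core idea of contrasting patient-derived and healthy iPSC profiles is the same.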
Another way stem cells are used to identify biomarkers of disease is through the study of tissue-specific stem cells. These are stem cells that are associated with high platelet counts, platelet dysfunction, and clotting problems. Certain chemical signalling pathways and genetic factors help regulate the production of different types of blood cells in the body; however, these factors only explain about 10% of the variation in platelet and red blood cell production. In a study conducted by the Stony Brook lab, researchers identified a specific mutation (in a gene called BLVRB) that is linked to increased production of platelets. This mutation causes an accumulation of chemicals called reactive oxygen species, which may affect the development of different types of blood cells and lead to increased platelet production. This is the first time that the function of BLVRB has been identified. It suggests that this mutation, and related chemicals called heme-regulated BV tetrapyrroles, may play a role in a special energy pathway that helps produce megakaryocytes, the cells that give rise to platelets. This finding could potentially be used to develop a new treatment based on the observed drug target for increasing platelet counts in humans [20].
Lisa Malone, a researcher in one of Stony Brook's haematopoiesis labs, has directly commented on the process used by the lab, which incorporates stem cells to find specific biomarkers. Malone states that "one model used to study thrombopoiesis is the cord blood stem cell[s]". In the lab, she routinely isolates these multipotent haematopoietic stem cells from umbilical cord samples collected from their hospital under the proper IRB-approved protocol. Once the stem cells are isolated from the whole blood,
Not only is Lisa an active member of the academic research community at Stony Brook Medicine, with a focus on specific cellular mechanisms, but she also serves as COO of a biotechnology start-up, Blood Cell Technologies (BCT), which aims to "identify, validate and commercialise novel drug targets and compounds that may be used to diagnose and/or treat blood cell disorders". BCT's main drug development programmes are currently focused on the "optimisation of redox inhibitors as novel reagents for enhancing platelet production".
A key feature of post-implantation embryogenesis is the specification of the extraembryonic mesoderm (EXM). The EXM is a tissue with a critical role in development, essential for erythropoiesis (the commitment of multipotent haematopoietic stem cells to red blood cells) and for the formation of the extracellular matrix. Notably, EXM specification mechanisms differ between mammalian species: the EXM of rodents develops post-gastrulation, while in primates the EXM develops before gastrulation occurs. In primates, the EXM forms a connecting stalk between the cytotrophoblast, the amnion, the epiblast disc and the primitive endoderm, which together form the primitive umbilical cord. The EXM cells even go on to fill the chorionic villi further along in development. Although scientists understand the importance of the EXM, knowledge of its regulation at the cellular and molecular level in humans remains very limited, to the extent that there are no in vitro models of primate EXM development. This is where Vincent Pasque et al. made a ground-breaking discovery, modelling these human EXM cells using naive pluripotent stem cells [21].
Where the extraembryonic mesoderm comes from, and how it forms, is unknown. Early on, the EXM was thought to originate from the trophoblasts due to its location and its formation before the primitive streak. It was also postulated that the EXM originated from the primitive streak itself, as it appeared in a similar region of the epiblast. In mice and other species the EXM comes from the primitive streak during gastrulation; in primates, however, the EXM is found before the formation of the primitive streak, indicating that the streak cannot be its source in primates. Theories of the EXM's origin currently range from the epiblast to the primitive endoderm, or a combination of both. Not only does the origin remain unknown, but so do the regulatory processes leading to EXM identity in humans [21].
Embryonic development is an extremely hard concept to study due to its extensive
ethical and legal limitations. For this reason, our current understanding of early development is quite sparse. One proposed solution to this problem is the application of stem cells to model different stages of human embryogenesis.
The application of naive human pluripotent stem cells has allowed for the modelling of pre-implantation embryonic lineages and of the extraembryonic primitive endoderm and trophoblast lineages, which include the amnion. It is not yet clear whether naive hPSCs are able to form further extraembryonic lineages such as the EXM. These models become stronger based on their ability to develop cells that resemble those of the blastocyst stage.
These blastoids develop varying extents of off-target cells depending on their initial cell state and on the molecules that stimulated their formation; however, in this research, the lineage identity and developmental stage of the generated cells were not concrete. These were postulated to correspond to post-implantation epiblast, primitive streak, amnion, mesoderm-like cells and EXMCs in humans. Pasque et al. discovered EXMC specification from naive hPSC cultures and propose that modelling EXMC specification will allow us to reach a deeper understanding of cell fate specification mechanisms in human peri-implantation embryogenesis. This further allows for the study of embryogenic defects which lead to failure in development. This work shows that naive hPSC cultures can specify into EXMCs, thereby creating a model of early human post-implantation development that can be studied and manipulated in vitro [21].
Naive human pluripotent stem cells (hPSCs), cells that have the ability to differentiate into any tissue cell type [22], were exposed to human trophoblast stem cell (hTSC) media, known as ASECRiAV, in the hope of deriving hTSCs. At day 30, colonies were observed with hTSC morphology and GATA3 expression, suggesting the induction of hTSCs, which was an expected developmental behaviour [28]. Surprisingly, an unexpected cell type was also present at day 30, which had mesenchymal morphology but largely lacked GATA3 expression. Both hTSCs and the surprise cell type were consistently observed in all attempts at cell conversion. hTSCs express the epithelial marker CDH1, a trait which allowed separation by fluorescence-activated cell sorting; this led to the observation that CDH1-negative cells appeared to self-renew and grow for over 70 days. This brought about the discovery that naive hPSCs differentiate into an unexpected CDH1-negative mesenchymal cell type under hTSC conditions [28]. These unexpected cells obtained by ASECRiAV conversion were compared to human embryo cells using single-cell RNA sequencing in order to establish their identity. The results suggested that the unexpected mesenchymal cells were extraembryonic mesodermal cells. The EXM cells obtained from hPSCs were also found to transcriptionally match human and monkey embryo EXM, and even express their specific key proteins. This was a significant finding, as it captures a primate-specific post-implantation human embryo cell type in vitro, thus making it a tangible subject of experimentation.
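The identity-assignment step described above, matching a cell's transcriptional profile against reference embryo cell types, can be sketched with a toy correlation-based classifier. All cell-type names, marker profiles, and values below are invented for illustration; this is not the analysis pipeline used by Pasque et al.

```python
# Illustrative sketch: assign a query cell to the reference cell type whose
# mean expression profile it best correlates with. The reference profiles
# (four toy "marker genes" per cell type) are invented, not real data.
import numpy as np

reference = {
    "trophoblast": np.array([9.0, 1.0, 2.0, 0.5]),
    "epiblast":    np.array([1.0, 8.0, 1.5, 1.0]),
    "EXM":         np.array([0.5, 1.0, 7.5, 6.0]),
}

def assign_identity(query):
    """Return the reference cell type with the highest Pearson correlation."""
    scores = {name: np.corrcoef(query, profile)[0, 1]
              for name, profile in reference.items()}
    return max(scores, key=scores.get)

# A query cell whose profile resembles the toy EXM reference.
query_cell = np.array([0.4, 1.2, 7.0, 5.5])
print(assign_identity(query_cell))  # prints "EXM"
```

Real single-cell RNA-seq identity mapping works over thousands of genes with clustering and marker-gene statistics, but the underlying idea, scoring a cell's transcriptome against reference profiles, is the same.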
While the origin of EXM cells remains unknown, this study suggests that, instead of deriving from pre-differentiated cells in hPSC cultures or a PrE intermediate, the EXM cells may arise from an intermediate state between the naive and EXM states [21]. In fact, when naive hPSCs and EXM cells were tested in a single-cell RNA sequencing time course, the majority of cells were at an intermediate epiblast state around day four. This suggests that the intermediate epiblast state has the potential to be the source of EXM cells [21]; a model is therefore proposed in which naive hPSCs become intermediate epiblast cells, which in turn become the source from which EXM cells derive (see Figure 3).
By utilising this stem cell technology to model specific cell lines, we can begin to chip away at the unknowns that burden our current breadth of knowledge surrounding early embryonic development and the mechanisms of how we come to be. Stem cells are being applied in almost every field of research science, and this ground-breaking study unveiled a fresh cell type that provides new foundational knowledge in developmental biology. Studying embryonic development specifically is critical in the journey of battling serious disorders that severely impact our society to this day.
Figure 3: Graphical abstract detailing the postulated origination of EXM cells. (Modified from Pham et al.)
In conclusion, stem cell research has revolutionised the field of biology and has led to a multitude of ground-breaking discoveries and innovative applications. From regenerating damaged tissue to creating "disease in a dish" models and utilising stem cells as "trojan horses" to deliver chemotherapy drugs, stem cells have proven to be a valuable tool in the fight against various diseases. In addition, the ability to generate functional human organs and understand crucial biological processes such as human development has the potential to greatly improve the quality of life for many individuals. While stem cell research is still in its early stages and there are many challenges yet to overcome, the future looks bright for this promising area of study.
1. University of Nebraska Medical Center. History of Stem Cell Use. unmc.edu https://www.unmc.edu/stemcells/educationalresources/history.html (2020).
2. Mayo Clinic. Stem Cells: What They Are and What They Do. Mayo Clinic https://www.mayoclinic.org/tests-procedures/bone-marrow-transplant/in-depth/stem-cells/art-20048117 (2022).
3. Stem Cell Key Terms. California's Stem Cell Agency https://www.cirm.ca.gov/patients/stem-cell-key-terms (2009).
4. Zakrzewski, W., Dobrzyński, M., Szymonowicz, M. & Rybak, Z. Stem cells: past, present, and future. Stem Cell Research & Therapy 10, (2019).
5. Stem Cells Heal Damaged Tissue, Doing What Surgery Can't. swspineandsports.com https://swspineandsports.com/orthopedic-blog/stem-cells-heal-damaged-tissue-doing-what-surgery-cant.
6. Repairing torn rotator cuffs. National Institutes of Health (NIH) https://www.nih.gov/news-events/nih-research-matters/repairing-torn-rotator-cuffs (2017).
7. Peach, M. S. et al. Engineered stem cell niche matrices for rotator cuff tendon regenerative engineering. PLOS ONE 12, e0174789 (2017).
8. Stem Cell Therapy. Virginia Spine Institute https://www.spinemd.com/how-we-treat/regenerative-medicine/stem-cell-therapy.
9. Zhang, W. et al. Application of stem cells in the repair of intervertebral disc degeneration. Stem Cell Research & Therapy 13, (2022).
10. Schroeder, J. Stem cells for spine surgery. World Journal of Stem Cells 7, 186 (2015).
11. Tiscornia, G., Vivas, E. L. & Belmonte, J. C. I. Diseases in a dish: modeling human genetic disorders using induced pluripotent cells. Nature Medicine 17, 1570–1576 (2011).
12. Dimos, J. T. et al. Induced pluripotent stem cells generated from patients with ALS can be differentiated into motor neurons. Science 321, 1218–1221 (2008).
13. Kim, J. J. Applications of iPSCs in Cancer Research. Biomarker Insights 10s1, BMI.S20065 (2015).
14. Caipa Garcia, A. L., Arlt, V. M. & Phillips, D. H. Organoids for toxicology and genetic toxicology: applications with drugs and prospects for environmental carcinogenesis. Mutagenesis (2021) doi:10.1093/mutage/geab023.
15. Levy, O. et al. A prodrug-doped cellular Trojan Horse for the potential treatment of prostate cancer. Biomaterials 91, 140–150 (2016).
16. Broda, T. R., McCracken, K. W. & Wells, J. M. Generation of human antral and fundic gastric organoids from pluripotent stem cells. Nature Protocols 14, 28–50 (2018).
17. Nie, Y.-Z., Zheng, Y.-W., Ogawa, M., Miyagi, E. & Taniguchi, H. Human liver organoids generated with single donor-derived multiple cells rescue mice from acute liver failure. Stem Cell Research & Therapy 9, (2018).
18. Valenti, M. T. Mesenchymal stem cells: A new diagnostic tool? World Journal of Stem Cells 7, 789 (2015).
19. Bonaventura, G. et al. Stem Cells: Innovative Therapeutic Options for Neurodegenerative Diseases? Cells 10, 1992 (2021).
20. Wu, S. et al. BLVRB redox mutation defines heme degradation in a metabolic pathway of enhanced thrombopoiesis in humans. Blood 128, 699–709 (2016).
21. Pham, T. X. A. et al. Modeling human extraembryonic mesoderm cells using naive pluripotent stem cells. Cell Stem Cell 29, 1346–1365.e10 (2022).
22. Pluripotent stem cell. www.cancer.gov https://www.cancer.gov/publications/dictionaries/cancer-terms/def/pluripotent-stem-cell (2011).
23. Sterodimas, A., de Faria, J., Nicaretta, B. & Pitanguy, I. Tissue engineering with adipose-derived stem cells (ADSCs): Current and future applications. Journal of Plastic, Reconstructive & Aesthetic Surgery 63, 1886–1892 (2010).
24. Advances in Stem Cell Therapy: New Applications and Innovative Therapeutic Approaches | Frontiers Research Topic. www.frontiersin.org https://www.frontiersin.org/research-topics/44258/advances-in-stem-cell-therapy-new-applications-and-innovative-therapeutic-approaches#overview.
25. National Institutes of Health (NIH) (2015). Stem Cell Therapy Rebuilds Heart Muscle in Primates. [online] Available at: https://www.nih.gov/news-events/nih-research-matters/stem-cell-therapy-rebuilds-heart-muscle-primates.
26. Zhao, X., Liu, J., Ahmad, I. (2006). Differentiation of Embryonic Stem Cells to Retinal Cells In Vitro. In: Turksen, K. (eds) Embryonic Stem Cell Protocols. Methods in Molecular Biology™, vol 330. Humana Press. https://doi.org/10.1385/1-59745-036-7:401.
27. Gargus, E. S., et al. (2022). An Ovarian Steroid Metabolomic Pathway Analysis in Basal and Polycystic Ovary Syndrome (PCOS)-like Gonadotropin Conditions Reveals a Hyperandrogenic Phenotype Measured by Mass Spectrometry. Biomedicines, [online] 10(7), p.1646. doi:https://doi.org/10.3390/biomedicines10071646.
28. Dong, C., Beltcheva, M., Gontarz, P., Zhang, B., Popli, P., Fischer, L. A., Khan, S. A., Park, K., Yoon, E.-J., Xing, X., Kommagani, R., Wang, T., Solnica-Krezel, L. and Theunissen, T. W. (2020). Derivation of trophoblast stem cells from naïve human pluripotent stem cells. eLife, 9. doi:https://doi.org/10.7554/elife.52504.
Reviewed and edited by T. Burton and L. Deen

ABSTRACT: The ancient Greeks believed that everything in the world was made up of four core elements: earth, water, air and fire. Later, some concluded instead that everything was made up of tiny indivisible pieces slotting together to form each object. These indivisible chunks, called atoms, remained the prevailing theory for over two thousand years, explaining how the tangible and observable world was built from blocks that exist on the smallest of scales.
In 1897, J. J. Thomson blew that understanding to pieces with the discovery of the electron. This even smaller, negatively charged particle was thought to be dotted throughout the atom like plums in a pudding. After years of acceptance, this idea was in turn shattered by Ernest Rutherford in 1911 with his discovery of the nucleus. In fact, the 20th century is characterised by barrier after barrier of the quantum world falling in rapid succession. Just 27 years after the discovery of the nucleus we were able to split it in two and, only shortly after, weaponise this process. More importantly, we learnt to shackle it and use it to generate untold amounts of energy compared to any other energy source known at the time.
Nuclear fission isn't perfect: its byproducts include incredibly radioactive elements, active enough to pose a danger to life for millennia, and constructing and decommissioning power plants is a costly business, leaving the technology inaccessible to developing nations. However, there is another process that has remained out of our grasp for over a century. It happens at the core of every star in the galaxy for most of its life: nuclear fusion. Fusion could hold the key to answering the biggest challenges that society is grappling with, including energy security and the climate crisis. It isn't an easy goal; scientists have been hitting obstacles for over 70 years, and only by bringing the world's greatest minds together will progress be made.
Before discussing nuclear fusion, it is important to understand nuclear binding energies. The binding energy of a nucleus is how much energy it would require to strip it apart into its constituent pieces (i.e. protons and neutrons). The higher the binding energy, the more stable the nucleus. It was Einstein who first showed that the mass of a nucleus is ever so slightly less than the combined mass of the protons and neutrons that make it up [1]. This deficit, Δm, corresponds to the binding energy through Einstein's famous equation E = Δmc².
It so happens that iron-56 has among the highest binding energies per nucleon of all nuclei. This means that any nucleus heavier than iron, if made so unstable as to decay or fission, will give off energy as it splits into lighter nuclei with higher binding energies. Conversely, light elements fuse together to create heavier elements with a higher binding energy and, as a result, give off large quantities of energy. Looking at Figure 1 [2], it is simple to see that fusion gives off much more energy per nucleon, as it corresponds to a much greater change in binding energy.
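The shape of the binding-energy curve can be checked with a short calculation. The sketch below is a minimal illustration, not part of the original article: the atomic masses are assumed from standard nuclide tables, and the helper function name is ours. It computes binding energy per nucleon from the atomic mass defect via E = Δmc²:

```python
# Binding energy per nucleon from the atomic mass defect, E = delta_m * c^2.
# Atomic masses (in unified atomic mass units, u) assumed from standard
# nuclide tables; 1 u of mass defect corresponds to about 931.494 MeV.

U_TO_MEV = 931.494      # energy equivalent of 1 u, in MeV
M_HYDROGEN = 1.007825   # atomic mass of H-1 (proton + electron), u
M_NEUTRON = 1.008665    # neutron mass, u

def binding_energy_per_nucleon(z, n, atomic_mass_u):
    """Binding energy per nucleon (MeV) for a nuclide with z protons,
    n neutrons and the given atomic mass in u."""
    mass_defect = z * M_HYDROGEN + n * M_NEUTRON - atomic_mass_u
    return mass_defect * U_TO_MEV / (z + n)

he4 = binding_energy_per_nucleon(2, 2, 4.002602)        # ~7.07 MeV
fe56 = binding_energy_per_nucleon(26, 30, 55.934936)    # ~8.79 MeV
u235 = binding_energy_per_nucleon(92, 143, 235.043930)  # ~7.59 MeV

# Iron-56 sits near the peak of the curve: fusing light nuclei upwards
# towards iron, or fissioning heavy nuclei downwards towards it, both
# move nucleons to higher binding energy and release the difference.
print(f"He-4: {he4:.2f}  Fe-56: {fe56:.2f}  U-235: {u235:.2f} MeV/nucleon")
```

Note that the jump from helium to iron (about 1.7 MeV per nucleon) is far larger than the drop from uranium to iron, which is why fusion releases so much more energy per unit of fuel than fission.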
While fission requires destabilising the nucleus of a heavy element to make it decay, fusion requires extremely high temperatures and pressures to cause two positively charged nuclei to combine. Ordinarily, the electrostatic repulsion between two positive charges would prevent this from happening. To overcome it, the particles need sufficiently high energy, which is achieved at very high temperatures and pressures. This forces the nuclei close enough that they begin to interact via the strong nuclear force, which has a much shorter range than electrostatic repulsion but is much stronger, so it ends up binding the nucleons together.
It is the requirement of these conditions that restricts fusion to the cores of stars; it is the fusion process that releases all the energy they radiate. In our Solar System, fusion occurs in only two places. The first, and the only continuous fusion reactor, is the Sun: the entire weight of its outer layers presses down on the core, causing nuclei to smash together at over ten million kelvin [3]. The second is the small purpose-built reactors here on Earth. These are only active for a couple of minutes at a time and have been known to reach temperatures of over 100 million kelvin; this extra-high temperature is required because it is impossible to reach the same pressure as in the core of a star. Problems arise when heating a substance to such temperatures: if the fuel comes into contact with any other surface, it will cool instantly and all fusion will stop.
During the first draft of this article, it was noted at this point that fusion experiments have always required more energy input than was released by the fusion. Then, on 13 December 2022, it was reported that researchers at the US National Ignition Facility had used 192 lasers to release 3.15 MJ of energy (roughly equivalent to the energy required to boil 15 kettles) from an initial investment of only 2.05 MJ [4]. While quantitatively small, this is the biggest proof of concept yet that hydrogen fusion could realistically become a significant source of energy as countries scramble to phase out their reliance on fossil fuels. For 70 years, the problem of not releasing enough energy has plagued physicists, who have long claimed that one day fusion will be the answer to the climate crisis.
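The reported figures make the headline result easy to verify. A minimal sketch (ours, using only the 3.15 MJ and 2.05 MJ values quoted above) computes the target gain:

```python
# Target gain of the December 2022 NIF shot, from the figures reported above.

laser_energy_in_mj = 2.05    # laser energy delivered to the target, MJ
fusion_energy_out_mj = 3.15  # fusion energy released, MJ

gain = fusion_energy_out_mj / laser_energy_in_mj       # ~1.54
surplus_mj = fusion_energy_out_mj - laser_energy_in_mj # ~1.10 MJ

print(f"target gain Q = {gain:.2f}, energy surplus = {surplus_mj:.2f} MJ")
```

A gain above 1 is what makes this shot "ignition" in the target-gain sense; note that it counts only the laser energy delivered to the target, not the much larger amount of electricity drawn from the grid to charge the lasers.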
Many will suggest that claiming the climate crisis can be solved, following an experiment that released only enough energy to boil 15 kettles, sounds ridiculous. While it is ambitious, the promise of fusion shouldn't be underestimated. As an energy source it is clean (in terms of both greenhouse gases and radioactive waste), cheap and practically unlimited.
However, it has become somewhat of a running joke that "fusion is always a decade away." No matter the developments, there is always a new and greater obstacle preventing real progress towards fusion-generated cheap, clean energy. With this latest breakthrough, let's consider the current state of play and, ultimately, what reasons there are to be hopeful and what good will come from investing in this sector.
Abstractly, the aim of any fusion reactor is to coax the plasma fuel into remaining confined for long enough to cause ignition. This was achieved decades ago inside atomic weapons: the first hydrogen bombs were devastating Pacific atolls by the mid-1950s. The challenge is confining this electrically charged plasma to allow a more controlled release of energy. Because the particles inside the plasma reach such dizzyingly high temperatures, they travel at very high speeds, and as they move randomly, the charge of the plasma is non-uniform throughout. This is where, in traditional reactors, confining the plasma becomes difficult: many reactors use magnetic fields to suspend the plasma, but since its electromagnetic field is constantly shifting, it is hard to predict how the two will interact.
To best understand the current technology in the sector, let's look at how the American team achieved their breakthrough. They used a pellet of hydrogen fuel, no larger than a peppercorn, held inside a small gold cylinder called a hohlraum. 192 powerful lasers were fired through two openings in the hohlraum, rapidly heating it; the capsule's outer casing flew outwards, and the deuterium and tritium fuel inside was compressed to a density 100 times greater than lead. The result was fusion lasting less than a billionth of a second [5].
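The deuterium and tritium in the pellet release energy through the reaction D + T → He-4 + n. As a small sketch (ours, with atomic masses assumed from standard nuclide tables), the well-known energy released per fusion event can be recovered from the mass difference between reactants and products, exactly as with the binding energies discussed earlier:

```python
# Energy released by a single D + T -> He-4 + n fusion event, computed
# from the mass lost in the reaction (E = delta_m * c^2).
# Atomic masses in u, assumed from standard nuclide tables.

U_TO_MEV = 931.494  # energy equivalent of 1 u, in MeV

m_deuterium = 2.014102
m_tritium = 3.016049
m_helium4 = 4.002602
m_neutron = 1.008665

delta_m = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
q_value_mev = delta_m * U_TO_MEV  # ~17.6 MeV per reaction

print(f"D-T fusion releases about {q_value_mev:.1f} MeV per reaction")
```

About 17.6 MeV per reaction, from roughly five atomic mass units of fuel, is why a peppercorn-sized pellet can release megajoules of energy.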
The scale of energy involved is nowhere near what could be considered a fusion power station, but most importantly, it is an experiment that proves the potential for large-scale power production in the future. This is not the miracle pill that will see the technology rolled out across the world in five to ten years; the wheel of innovation is turning slowly. But it is turning.
Why, then, is there reason to be hopeful? The dream of scientists, policymakers and environmentalists alike is for fusion power stations to be rolled out worldwide for the aforementioned reasons. The International Thermonuclear Experimental Reactor (Iter), currently under construction in France, is hoped to be the next big innovation in fusion research. It aims to beat the record for the most energy produced in a fusion reactor (over 500 MW for approximately 7 minutes) while giving off ten times the energy used to power it. It is a massive international project, first proposed in 1985, and today is the result of cooperation between members including the Russian Federation, the United States, China and the European Union. It hopes to tackle the problem of plasma containment by using superconducting magnets cooled to -269 degrees Celsius; beneath the surface, pipes of cooling water will capture the heat given off [6]. This project is the boldest step yet taken in the quest for a fusion power plant. It is the only one of its kind globally, and failure would be a major setback for the sector, not only for morale and the science behind it, but also for the pressing timeline of the climate crisis.
This is not the time for pessimism. Only through vast international cooperation has Iter become a reality. Still under construction, its results are eagerly anticipated as the next, potentially biggest, step in the race for a fusion power plant that can produce energy on a scale that may one day be harnessed for domestic and commercial use. However, governments and other stakeholders must take heed: in times of international tension and isolationism, forums such as the Iter project must be maintained. Core components of the reactor, including the all-important superconducting magnets, are being produced in Russia. As yet they have not fallen victim to the war in Ukraine, and their export currently proceeds as usual. In a time of economic and political uncertainty, projects like this cannot be relegated to the sidelines; they hold the keys to a safer future with greater energy and political security.
It is my belief that we will achieve sustainable nuclear fusion as a source of energy, but only through global collaboration and sufficient funding. This cannot be a small project, neglected and underfunded; the challenges that need to be overcome don't allow such half-heartedness. There can be no looking backwards: no new coal mines, no subsidies for oil and gas, no trying to bring back a time that has already passed. Investing in those fields is not future-proof. We all know the energy sector is transitioning away from fossil fuels; with a concerted, cooperative effort, that transition can be made more seamless with the technology of the future.
[1] Frisch, David H.; Thorndike, Alan M. (1964). Elementary Particles. Princeton, New Jersey: David Van Nostrand. pp. 11–12
[2] Bleam, William F. (2012). Soil and Environmental Chemistry. ScienceDirect, Section 1.6, Nuclear binding energies.
[3] Barbarino, Matteo (2022). What is Nuclear Fusion? International Atomic Energy Agency.
[4] Sample, Ian (Dec 2022). US Scientists Confirm 'Major Breakthrough' in Nuclear Fusion. The Guardian.
[5] Greshko, Michael (2022). Scientists achieve a breakthrough in nuclear fusion. Here's what it means. National Geographic.
[6] What is Iter? International Thermonuclear Experimental Reactor, Fusion for Energy.