

V I T A S C I E N T I A

Student-written articles covering Biomedical Engineering, Medical Ethics, Biochemistry, Neuroscience, and more.
VOL1.



The Dubai Life Sciences Society Student Journal Nov 2025








A NOTE FROM THE FOUNDERS.
Sheraya A. and Serin C.
Dubai Life Sciences Society (DLSS) began as a simple spark of curiosity during a GCSE Music lesson. What started as a passing idea soon grew into a shared passion of ours, one that has now taken shape as DLSS. We had a vision of schools interconnected around the country, a network of students all pursuing their passions in the ever-growing field that is the Life Sciences. Our goal was to cultivate an environment where multidisciplinary teamwork was encouraged at the high school level, where students could innovate, research and create in a field with endless possibilities.
The first volume of the Dubai Life Sciences Student Journal, ‘Vita Scientia’, is our first step in that direction. After a successful first half term of mentoring our students, we have been able to produce 19 student pieces in the span of three months, all from students at our school, Dubai College.
We are proud to present these articles, and we encourage you to read them keeping in mind the extraordinary power of peer-on-peer support and guidance to open students’ eyes to the marvels of the living world beyond the curriculum.
As Dubai Life Sciences Society continues to grow, we hope this volume marks just the beginning of our journey. In our next edition, we aspire to feature even more contributions, not only from our own school but from others across Dubai. We invite passionate students everywhere to join us in shaping the future of life sciences, one discovery at a time.
CONTENTS.
How can nanotechnology be used to treat skin conditions, and what are its future prospects in dermatology?
Sophia S, Year 11
What role does bioinformatics play in understanding the human genome and personalised medicine?
Maira A, Year 11
Levodopa and Its Role in Treating Parkinson’s Disease
Adam A, Year 9
Preventative Healthcare: A Cost-Effective Solution in Resource-Limited Settings
Sofia O, Year 11
To what extent is it ethically permissible for individuals to engage in biohacking, including body modifications like magnetic implants or sensory enhancements?
Ira C, Year 10
The Role of Neutrophils in Cancer Treatment
Aimal J, Year 12
Does Chronic Pain Change the Brain’s Response to Pain?
Rita W, Year 12
The Limbic System: Scent, Emotion, and Memory
Nour A, Year 10
The Biology of Depression
Leanna E, Year 9
The Effects of Early Childhood Environments on Brain Plasticity
Iman M, Year 10
Addiction on a Molecular Level
Jiwon Y, Year 9
AI in Healthcare
Shivaay S, Year 8
Neuroplasticity: How does the brain rewire itself?
Myra S, Year 9
In what ways is AI transforming medical practice today and what are its future prospects in healthcare?
Zaira S, Year 9
How does the development of designer babies challenge the balance between medical progress and ethics?
Zara M, Year 11
How CRISPR-Cas9 gene editing can help eliminate genetically inherited diseases and its possible ethical implications
Layla A, Year 11
What are the difficulties with stem cells being used for heart disease and what is the potential for the future?
Alexandra G, Year 11
Investigating Failure in the Björk–Shiley Convexo–Concave Heart Valve: A Case Study
Sheraya A, Year 12
How does the level of spinal cord injuries determine whether paralysis is complete or incomplete?
Serin C, Year 12
Winning Infographic of the DLSS Science Communication Competition
Myra S, Year 9, Zaira S, Year 9, Nour A, Year 10
HOW CAN NANOTECHNOLOGY BE USED TO TREAT SKIN CONDITIONS, AND WHAT ARE ITS FUTURE PROSPECTS IN DERMATOLOGY?
Sophia S
Introduction – What is nanotechnology:
First envisioned by physicist Richard Feynman, who is often regarded as the “father” of the practice, nanotechnology is the process of manipulating matter at a molecular and atomic scale. The term itself was defined by Professor Norio Taniguchi at Tokyo University as the “separating, consolidating and deforming of materials atom by atom or molecule by molecule”. Nanoparticles are among the most significant innovations within this field, having dimensions between 1 nm and 100 nm and the ability to take many shapes, such as spheres and rods. These features give them unique chemical and physical properties, making them highly valuable in medical applications.
Scientists have started to incorporate nanotechnology in various aspects of dermatology in order to enhance the effectiveness of skin treatments, such as by using nanoparticles to facilitate drug delivery. (https://pmc.ncbi.nlm.nih.gov/articles/PMC3938363/)
The layers of the skin:

Image source: (https://www.mdpi.com/pharmaceutics/pharmaceutics-13-01475/article_deploy/html/images/pharmaceutics-13-01475-g001.png)

The skin has multiple layers, including the epidermis, dermis, and hypodermis, each of which has different functions. The epidermis is the outermost layer of skin, providing protection against harsh external environmental conditions. The dermis sits underneath the epidermis and is the thickest layer of the skin, containing many structures such as blood vessels, sweat glands and hair follicles. The hypodermis, below the dermis, is responsible for storing fat and helping to control body temperature. These layers pose a challenge to the absorption of molecules through the skin, as they can prevent molecules from reaching the deeper layers where treatment is most effective. (https://pmc.ncbi.nlm.nih.gov/articles/PMC7429187/)
Main pathways for nanoparticles:

Image source: (https://www.mdpi.com/1422-0067/23/24/15980)
The main barrier particles have to pass through is the outermost layer of the epidermis, called the ‘stratum corneum’, which is made up of dead epithelial cells and other inactive cells. The stratum corneum is the strongest barrier in the skin, meaning that it limits drug penetration through the skin’s layers: a molecule must generally have a mass below about 500 Daltons (a unit of molecular mass) to diffuse through it. Nanoparticles, typically 1-100 nm in size, can nonetheless be designed to penetrate or bypass this layer.
This is the main pathway most nanoparticles travel through, as they can enter lipid channels between corneocytes (dead cells) by passive diffusion. The nanoparticles can be designed to be lipophilic so that they dissolve and move easily within the stratum corneum's lipid matrix. Nanoparticles entering the skin this way usually deposit drugs, which can then slowly diffuse deeper into the skin. (https://pmc.ncbi.nlm.nih.gov/articles/PMC2835875/)
However, many nanoparticles are still unable to reach the deeper layers of skin, as they accumulate within the stratum corneum or upper epidermis. Scientists have therefore begun to look at alternative transdermal pathways that avoid the stratum corneum, so that drugs can be delivered via nanotechnology to more specific and deeper layers of the skin.
One of these pathways, proposed by Professor Jürgen Lademann, is to transport these active ingredients through hair follicles, which are tiny openings that extend deep into the dermis. This means that they bypass most of the stratum corneum and can act as reservoirs, providing sustained drug release to deeper dermal layers. The follicular route has many advantages, including its large surface area for deposition, as well as how it can offer targeted delivery for skin conditions like acne and alopecia which form around hair follicles and sebaceous glands.
(https://www.mdpi.com/1420-3049/30/15/3308)

Image source: (https://www.mdpi.com/molecules/molecules-30-03308/article_deploy/html/images/molecules-30-03308-g004.png)
Topical molecules versus nanoparticles:
Topical (surface-layer) drug molecules often have a mass greater than 500 Daltons, meaning that their ability to penetrate the skin is quite limited. Most of these molecules are unable to cross the stratum corneum, so they may remain only on the surface of the skin. This means that skin products using conventional topical molecules do not target specific areas of the skin, which can make the treatment less effective, as less of the drug reaches the target site.
(https://www.sciencedirect.com/science/article/pii/S0378517322002113)
Meanwhile, nanoparticles are smaller than 100 nm in size. This means that they are able to travel deeper into the skin and target specific layers in order to maximise the effectiveness of the treatment.
They can provide controlled and sustained drug release as well as localised delivery, leading to fewer side effects. Nanoparticles can also carry drugs and other active molecules, including antibiotics, vitamins, and even genes and peptides during advanced procedures. Once they reach their target site, they gradually release the drug as the nanoparticle diffuses or breaks down.
(https://pmc.ncbi.nlm.nih.gov/articles/PMC7429187/)

Image source: (https://www.mdpi.com/molecules/molecules-30-03308/article_deploy/html/images/molecules-30-03308-g002.png)
Uses of nanotechnology:
To create products with specific properties, such as increased dermal penetration or controlled drug release, nanoparticles can be utilised in medicinal and cosmetic skin treatments. They are used to transport medication deeper into the skin, improving the treatment of conditions such as eczema, psoriasis and acne. For example, to treat acne, nanoparticles deliver antibiotics and retinoids such as tretinoin directly into the sebaceous glands where acne originates. Nanoparticles are also being tested for the treatment of melanoma, a type of skin cancer: they can target cancer cells more selectively, meaning that they are less cytotoxic (damaging) to healthy cells and more effective at killing melanoma cells. (https://pmc.ncbi.nlm.nih.gov/articles/PMC9780930/)
Nanoparticles have also been used in the cosmetic industry, such as in anti-ageing serums and creams, to deliver active ingredients such as retinol and vitamin C. Sunscreens can also contain nano-sized titanium dioxide and zinc oxide, which block UVA and UVB rays, leading to stronger UV protection. (https://pmc.ncbi.nlm.nih.gov/articles/PMC3938363/#sec20)

(Table by Nasir A. Nanodermatology: a glimpse of caution just beyond the horizon - part II. Skin Therapy Lett 2010;15:4-7) - Areas of application for nanomolecules.
Disadvantages of nanotechnology:
Due to nanotechnology being a relatively new field, there are still gaps in our understanding of it and of the potential risks that may come from its use, especially long-term impacts on the human body. One example of a potential risk is the small size and shape of nanoparticles, which allow them to move easily within the human body and cross membranes, meaning that they can access other cells, tissues, and organs and potentially damage them. For instance, using a sunscreen spray containing nanoscale titanium dioxide may lead to the inhalation of these nanoparticles, which can then travel along the nasal nerves into the brain and nervous system, posing the risk of the particles entering the bloodstream and damaging vital organs.
Furthermore, due to their high surface-area-to-volume ratio, nanoparticles are more chemically reactive than larger particles. This can lead to increased production of ROS (reactive oxygen species), reactive oxygen-containing molecules that can damage cells when they build up in large amounts, causing side effects such as inflammation and damage to cell structures.
(https://pmc.ncbi.nlm.nih.gov/articles/PMC8951203/#sec4-gels-08-00173)

Image source: figure from https://pmc.ncbi.nlm.nih.gov/articles/PMC8951203/ (gels-08-00173-g004.jpg)
The future of nanotechnology in dermatology:
In the future, nanotechnology may become a much more accessible option for skin treatment, as it offers controlled and sustained release of drugs for common skin conditions including acne, eczema, and psoriasis, as well as more targeted drug delivery for the precise treatment of skin cancer. Scientists may also design nanoparticles to be multifunctional, and create personalised formulations based on the patient’s skin type, genetics and condition.
Overall, nanotechnology is a promising, emerging field in various areas of medicine, including dermatology. Its ability to deliver drugs to targeted areas of the skin has the potential to revolutionise skin treatments in the future. However, the toxicity associated with nanotechnology may pose a concern to consumers, especially since the field has not yet been heavily researched. In order to prevent adverse health issues, nanoparticles used in products should go through sufficient testing to be declared safe before the products are commercialised. Clinical trials should also take place to assure the safety of the formulations in humans.
Regulations may also need to be put in place so that nanoparticles considered too harmful to the human body are restricted from consumer products, and so that these formulations are transported and stored safely. These potential actions rely mainly on the research being conducted today, so that existing gaps in key data can be filled in order to minimise harmful side effects and improve the effectiveness of nanotechnological skin treatments in the future.
WHAT ROLE DOES BIOINFORMATICS PLAY IN UNDERSTANDING THE HUMAN GENOME AND PERSONALISED MEDICINE?
Maira A
Bioinformatics has played a major role in deepening our understanding of the human genome and investigating the genetic causes of different diseases. With the rapid development in bioinformatics technology, personalised medicines have been designed and provided for patients with these genetic diseases.
Developments in Bioinformatics
There have been many recent developments in bioinformatics technology. For example, the introduction of next-generation sequencing (NGS) allows scientists to read and decode DNA cheaply and accurately. In addition, artificial intelligence tools and machine learning can analyse complex biological data such as gene expression and protein folding.

An example of this is AlphaFold, developed by the DeepMind software company, which predicts protein structures. Cloud computing helps with storing and processing large biological datasets. Metagenomics is the study of genetic material from microbial communities rather than single organisms, revealing the biodiversity of bacteria, viruses and fungi in environments such as soil, oceans, and even the human gut. Furthermore, bioinformatics has also been introduced in drug discovery, helping us understand how different drugs interact with biological molecules. Finally, multi-omics data integration combines DNA, RNA, protein and metabolite data to give us a full biological picture of a person’s genome. This reveals how genes are expressed and regulated, and their connections to cellular processes and diseases.
How do these developments help us have a better understanding of the human genome?
Each of the developments above has played a key part in helping us understand the human genome better. NGS allows rapid and accurate reading of a person’s DNA, which helps to identify genetic variations and mutations that may contribute to certain traits and to the diseases a person is more likely to suffer from. AI tools and machine learning can be used to identify patterns within the DNA, predict the functions of various genes and link different genes to diseases. Cloud computing lets scientists store, share and analyse massive biological datasets globally, speeding up discoveries about how genes interact with each other and how they affect different people’s genomes. Metagenomics involves the study of microbial DNA alongside human DNA, giving us information on the impact of microbes and the environment on human health. Bioinformatics in drug discovery helps target the specific genes or mutations that cause certain diseases. Together, these rapid developments have revolutionised and deepened our understanding of the human genome: technologies like NGS, AI tools and cloud computing allow scientists to make fast and precise interpretations of complicated genetic data, while metagenomics and multi-omics data integration have built our knowledge of how genes interact with the environment and with microbes.
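To make the variant-spotting idea above concrete, here is a deliberately simplified sketch in Python. The `find_snvs` helper and the example sequences are invented for illustration; real NGS pipelines align millions of short reads and also handle insertions and deletions. At its core, though, variant identification compares sequenced bases against a reference genome, position by position.

```python
# Toy illustration of variant identification: compare an aligned sample
# sequence against a reference and report single-nucleotide variants.
# (Hypothetical helper; not from any real bioinformatics library.)

def find_snvs(reference: str, sample: str) -> list[tuple[int, str, str]]:
    """Return (position, reference_base, sample_base) for each mismatch."""
    if len(reference) != len(sample):
        raise ValueError("sequences must be aligned to the same length")
    return [
        (i, ref_base, alt_base)
        for i, (ref_base, alt_base) in enumerate(zip(reference, sample))
        if ref_base != alt_base
    ]

reference = "ATGCGTACGTTAGC"
sample    = "ATGCGTACGTCAGC"  # one base differs from the reference

for pos, ref, alt in find_snvs(reference, sample):
    print(f"Variant at position {pos}: {ref} -> {alt}")
# prints: Variant at position 10: T -> C
```

Tools used in practice perform this comparison at genome scale and then annotate each variant with its likely effect, which is where links to traits and disease risk come from.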
How can this information be used for the development of personalised medicine?
With the help of the information discovered by these new advancements, we can have a better understanding of different individuals and their genetic makeup. This helps us to find out which genes or mutations may be the cause of a disease, or which ones make an individual more susceptible to it. This information also tells us how different genes and proteins may react to certain viruses and microbes, as well as to various treatments and medications. Scientists and doctors can now share and analyse biological data globally, which speeds up medical discoveries and allows new, personalised treatments to be manufactured and sold more quickly. Moreover, the interactions of DNA, RNA, and proteins can be studied, revealing the biological pathway behind a disease and allowing the design of medications and treatments that target the specific protein or gene at its cause. Additionally, metagenomics helps in studying a person’s microbiome and how it affects their health, which is useful in helping doctors recommend more precise remedies and diets for individual patients. Overall, each of these evolutions in bioinformatics allows us to understand a person’s genome in much greater depth, helping us to develop personalised medicine, in which diagnosis, prevention, and treatment are tailored to the individual’s needs, improving the accuracy and efficiency of medication.

Social and Economic Impacts
The new progressions in bioinformatics technology have had many positive social and economic impacts. Social impacts include improved health outcomes for patients due to personalised treatments, meaning that medication is more effective and has fewer side effects. Furthermore, diseases can be detected earlier, allowing earlier diagnosis and treatment and leading to an overall healthier lifestyle. In addition, with more specialised medication, patients have a better understanding of their health, meaning they can make informed decisions about their conditions and lifestyle. However, there are some negative social impacts as well; for instance, there are concerns around ethics and breaches of privacy, such as misuse of genetic data, potential genetic discrimination, and issues with informed consent. Moreover, advanced medications may not be affordable for everyone, but rather only for wealthier people and countries. On the other hand, there are some advantageous economic impacts. Bioinformatics technology can reduce healthcare costs in the long term by preventing diseases and increasing treatment efficiency. As well as this, there is a growing demand for genetic testing and precision treatments, which is helping to expand the biotechnology industry and creating many new job opportunities.
In conclusion, the developments in bioinformatics have increased our understanding of the human genome. In addition, they have led to the rapid production of personalised medicines, helping patients live healthier lives. Finally, these new advancements have helped to reduce healthcare costs and have provided many more job opportunities, allowing the biotechnology industry to continue expanding.
LEVODOPA AND ITS ROLE IN TREATING PARKINSON’S DISEASE
Adam A
Parkinson’s is a brain disorder that makes it difficult to control your movements, causing symptoms such as shaking, stiffness, and balance problems. It mainly affects older people, but anyone can develop it. Scientists have been studying it for many years, but there is still no cure. Despite this, drugs have been developed to treat the symptoms of Parkinson’s and make them more manageable. One of the most important of these drugs is called Levodopa.
Parkinson’s is caused when dopamine-producing cells stop working. This leads to lower dopamine levels, which impairs movement. Dopamine is a neurotransmitter, which means it carries messages between nerve cells. It’s especially important in a part of the brain called the substantia nigra, which helps control movement.
When someone has Parkinson’s, many of these dopamine-making neurons die or stop working properly. Scientists don’t know exactly why this happens, but the most likely explanation is a mix of genetic and environmental factors. For example, some people inherit genes that make them more likely to develop the disease, while in other cases, exposure to certain toxins or head injuries might play a role.
As dopamine levels drop, the brain can’t send smooth, controlled signals to the muscles. This leads to the jerky or slow movements that are common in Parkinson’s disease.
In a healthy brain, dopamine helps maintain a balance between different brain areas that control movement, especially between the basal ganglia and the motor cortex. These regions coordinate how we start, stop, and control motions.
When dopamine levels fall, this balance is disturbed. The brain’s “instructions” to muscles become unclear or delayed, which causes stiffness, tremors, and slowness. You can think of it like trying to drive a car with poor steering: it still moves, but it’s harder to control.
Levodopa, also called L-DOPA, is a chemical that the body naturally makes in small amounts. The body can turn it into dopamine through a simple chemical reaction. Scientists discovered that giving dopamine itself as a medicine doesn’t work because dopamine cannot cross the blood-brain barrier, a protective wall that keeps certain substances in the blood from entering the brain.
Levodopa, however, can cross this barrier. Once it reaches the brain, enzymes convert it into dopamine, replacing what was lost due to Parkinson’s. This makes Levodopa one of the most effective treatments for the disease, even though it doesn’t stop it from getting worse over time.
After someone takes Levodopa, usually as a pill, it travels through the digestive system and is absorbed into the bloodstream from the small intestine. From there, it crosses into the brain. Inside brain cells, an enzyme called aromatic L-amino acid decarboxylase (AADC), also known as DOPA decarboxylase, changes Levodopa into dopamine by removing a chemical group (a carboxyl group).
Once converted, dopamine can be released from neurons and bind to dopamine receptors on nearby nerve cells. This activates signalling pathways that help control movement and coordination. Essentially, Levodopa boosts the brain’s dopamine supply, helping restore more normal communication between nerve cells.
However, there’s a problem: if Levodopa is converted into dopamine before reaching the brain, it can cause side effects like nausea and low blood pressure. That’s why Levodopa is usually given with another medicine, such as Carbidopa or Benserazide. These drugs block the enzyme that turns Levodopa into dopamine outside the brain, ensuring more of it reaches where it’s needed most.
Levodopa doesn’t cure Parkinson’s, but it can greatly reduce its symptoms. Many patients notice that after taking Levodopa, their tremors lessen, their movements become smoother, and they can walk and talk more easily. It can help people maintain independence for many years.
However, as Parkinson’s progresses and more neurons are lost, the brain becomes less able to store dopamine. This means that the effects of Levodopa may wear off more quickly, causing “on-off” periods where symptoms return before the next dose. In later stages, some people may also experience dyskinesia, which means involuntary movements caused by too much dopamine activity. Doctors usually adjust doses carefully or combine Levodopa with other medicines to balance these effects.
Levodopa has been used since the 1960s and is still considered the most effective medication for treating Parkinson’s disease. It has allowed millions of people to live longer, more active lives. Scientists continue to research better ways to deliver it, such as controlled-release tablets or intestinal gels, to keep dopamine levels steady throughout the day.
Still, Levodopa isn’t perfect. It can’t prevent the loss of brain cells, and over time, it becomes less effective. That’s why researchers are also exploring other treatments, like deep brain stimulation (DBS), gene therapy, and experimental drugs that might protect or repair neurons. Some newer research is also studying GLP-1 agonists (diabetes drugs) for possible neuroprotective effects, though these are still being tested.
Parkinson’s disease is a complex and challenging disorder that affects how the brain controls movement. It’s caused mainly by the loss of dopamine-producing neurons, often with the buildup of Lewy bodies, which leads to tremors, stiffness, and slow movements. Levodopa works by replacing the missing dopamine, helping restore balance in the brain and improve movement.
Even though it doesn’t cure the disease, Levodopa remains one of the most important medicines in modern neurology. Its discovery showed how understanding chemistry at the molecular level can lead to life-changing treatments. For many people with Parkinson’s, Levodopa provides hope and the ability to keep living their lives with more freedom and control.
References
https://medlineplus.gov/druginfo/meds/a601068.html
https://my.clevelandclinic.org/health/drugs/20349-carbidopa-levodopa-tablets
https://www.drugs.com/medical-answers/levodopa-parkinsons-disease-3554930/
https://www.parkinson.org/living-with-parkinsons/treatment/prescription-medications/levodopa
https://www.apdaparkinson.org/article/common-questions-about-carbidopa-levodopa/
https://www.nhs.uk/conditions/parkinsons-disease/treatment/
PREVENTATIVE HEALTHCARE: A COST-EFFECTIVE SOLUTION IN RESOURCE-LIMITED SETTINGS
Sofia O

Introduction
During the spring of 2024, I had the opportunity to observe firsthand the inner workings of Lahore General Hospital, one of the largest public healthcare institutions in Pakistan. As a key provider of healthcare for millions of people, this hospital exemplifies the challenges faced by public healthcare systems in resource-limited settings. The hospital, with its overburdened staff and overstretched resources, operates under immense pressure, serving large volumes of patients daily.
In the ophthalmology department, where I spent the majority of my time, these challenges were particularly evident. The operating rooms were crowded, with multiple surgeries occurring simultaneously in a shared space, and clinic days involved managing upwards of fifty patients per day. Despite the high patient turnover and the obvious limitations in terms of privacy, equipment, and staffing, the healthcare providers worked with remarkable efficiency and professionalism, performing sight-saving surgeries in less-than-ideal conditions.
This experience underscores the critical need for improving public healthcare infrastructure in Pakistan, especially in specialties like ophthalmology where timely intervention can prevent irreversible blindness. The contrast between public and private healthcare facilities there was stark, with the latter being more accessible to the wealthy and the former being the only option for the majority. This disparity in access to quality care raises important questions about the equity and effectiveness of healthcare delivery in Pakistan, particularly in the context of preventable diseases such as cataracts, which remain the leading cause of blindness in the country.


Observations and Data from Pakistan

Figure 1. Operating Room
Figure 2. During Surgery
Figure 3. Clinic Day

This experience allowed me to gain first-hand insights into the healthcare system of a developing country with a large population of 240 million. My time alternated between clinic days, where patients came in for diagnosis and screening, and operation days, where the surgeries actually happened. For the clinic days I had originally planned a set of questions to ask patients, including a long list covering the patient's job, family history, and the appointment process. After just one day at the hospital, I realised that half of those questions were irrelevant and didn't even address the key issues. The patients came from low-income backgrounds with minimal education. I was surprised to find that some of the younger patients, who I had thought were closer to 8 years old, in fact turned out to be 12. Conversely, older patients who I thought were in their 70s were really in their 50s. This dual phenomenon of stunted growth and early ageing was linked to poor nutrition and a lack of basic healthcare. Surely, I thought, this should be an easy fix; the realisation drew my focus towards the importance of preventative care, and I changed my questions accordingly.
This article examines the role of preventative healthcare as a costeffective approach in resource-limited settings. During my time at Lahore General Hospital, I collected data from patients on eye health conditions, exploring why many had delayed screening and treatment. This data not only sheds light on the most common issues affecting low-income patients, but also illustrates how late intervention impacts both health outcomes and healthcare costs. By comparing preventative healthcare systems across different countries, the article highlights the strengths and weaknesses of various approaches and underscores the importance of early detection and intervention.
The data from page ... shows that, out of the people who could have come earlier (before their condition worsened), 60% came for treatment at a more advanced stage of their condition for various reasons, such as lack of time, fear of surgery, dependency on others to take them to the hospital, and family responsibilities. This excludes patients who were not relevant to the survey, such as those with trauma-related cataracts; for example, one patient had developed cataracts after a serious road accident. It also excludes individuals coming in for post-surgery follow-ups. It is important to note that this sample is not representative of the population, though it points to trends that Dr Jahangir, the doctor I had been shadowing, has seen throughout her practice. Patients who presented late required more invasive treatment, increasing their recovery time and associated costs. Even in a public hospital where basic services are free, additional injections or treatments needed for complications can cause significant out-of-pocket expenses.
According to a study done in another public hospital, the Jinnah Hospital Lahore, 75.2% of people over 45 years old had heard about cataracts, and among those, 80.8% correctly knew that cataracts cause the lens to become opaque¹. The high proportion of mature cataracts clearly shows a critical delay in early action by patients, despite cataracts being the most common eye disease globally. It shows that awareness alone does not necessarily translate into timely action. Delayed cataract treatment may very well be indicative of a wider trend of not seeking treatment for other illnesses until their advanced stages.
The Importance of Preventative Healthcare
Preventative healthcare focuses on preventing diseases and maintaining health rather than treating illnesses after they occur. This can include lifestyle changes, such as an emphasis on healthy eating and exercise, as well as regular screenings and check-ups. This approach requires widening healthcare coverage to include healthy individuals, but it lowers the cost of treatment because it reduces the incidence and severity of diseases, improving patient outcomes.
Reduces Disease Incidence: By preventing diseases, the overall burden on the healthcare system is reduced, allowing for more efficient use of resources.
Lowers Treatment Costs: Early detection and intervention can prevent complications, reducing the need for more expensive treatments and prolonged hospital stays for patients.
Wider Impact: Vaccinations, nutrition, sanitation and simple public health interventions can reduce both communicable and noncommunicable diseases.
Focus on High-Impact Strategies: Effective prevention strategies, such as decreasing tobacco use and encouraging exercise, can prevent a significant percentage of deaths.
Comparing Preventative Healthcare Around the World
1. Norway: Norwegian healthcare is consistently ranked in the top ten globally². Citizens are insured with a $250 cap on out-of-pocket expenses, funded by high government taxes (22% income tax³). 10.5% of GDP is spent on healthcare⁴, of which 3% goes to preventative measures⁵ such as early screenings, vaccination programmes, home visits, and addressing the social and lifestyle factors that impact health. You wouldn’t be wrong in thinking that developed countries can afford better healthcare systems that are able to encompass preventative measures. Let’s look at our next example to refute that.
2. USA: The USA is the only developed country in the world that does not provide universal healthcare, with only 36% of Americans covered by public health insurance⁶. In fact, despite being the highest spender on healthcare (17.3% of GDP), Americans experience the worst overall health outcomes of any high-income nation; they are more likely to die younger, and from avoidable causes, than residents of peer countries⁷. However, one area where the US has good outcomes is breast cancer, which correlates directly with the country’s strong cancer screening protocols: more screening leads to early detection, which improves outcomes. From this example, it seems that both a developed economy and intentional policies are required to improve a country’s healthcare services. The next example is an interesting contrast.
3. Cuba: Cuba ranks 138th in global GDP rankings, yet ranks 27th in global healthcare⁸, with 15% of GDP spent on healthcare. The country’s healthcare is free and universal, with annual doctor home visits the norm, checking on the whole family as well as their living conditions. Cuba has 8.4 physicians per 1,000 citizens, compared with just 3 in the USA. Well-equipped community polyclinics are spread throughout the country, with an emphasis on preventive medicine, hygiene, nutrition, and sports. The fight against risk factors is the backbone of the healthcare system developed in 1984.
This shows that it is possible for a country to have a better healthcare system despite economic challenges, if preventative universal healthcare is government policy and has citizen support.
4. Pakistan: Let’s look at Pakistan in light of the above examples.
Pakistan ranks 172nd in GDP per capita, with a healthcare ranking of 124. With just 1.1 physicians per 1,000 citizens, it lacks not only sufficient healthcare professionals but also hospitals and clinics, essential medical equipment, and medicines. Only 0.84% of GDP is spent on healthcare, which shows the government’s lack of investment in the area. With limited resources and sophisticated treatments unavailable, preventative care could be Pakistan’s most cost-effective healthcare policy.
Conclusion
Preventative healthcare is a severely underutilized tool that could create a better quality of life across Pakistan. For example, chronic diseases, which are often preventable, account for a significant portion of healthcare spending; by reducing their prevalence, countries can save substantial amounts of money. Countries like Norway and Cuba highlight the benefits of strong preventive healthcare systems, including reduced disease incidence, lower treatment costs, and improved patient outcomes. In contrast, despite higher healthcare spending in the US, Americans have worse outcomes due to the lack of universal healthcare prioritising early intervention. With Pakistan lacking both spending and policy on healthcare, there is an urgent need for reforms, ultimately saving lives, reducing healthcare burdens, and creating a productive workforce for the country.
References
1. https://www.jscimedcentral.com/jounal-article-info/Annals-of-Public-Health-and-Research-/To-Assess-the-Awareness-about-Glaucoma-and-Cataract-in--Patients-%28Aged-45-and-Above%29--Presenting-to-Outpatient--Department-%28OPD%29-of-Jinnah--Hospital-Lahore%2C-Pakistan-8818
2. https://www.statista.com/statistics/1376359/health-and-health-system-ranking-of-countries-worldwide/
3. https://www.skatteetaten.no/en/person/foreign/are-you-intending-to-work-in-norway/the-tax-return/what-are-you-liable-to-pay-tax-on-in-norway/#:~:text=The%20income%20tax%20rate%20is,those%20with%20a%20high%20income.
4. https://www.ncbi.nlm.nih.gov/books/NBK545732/
5. https://health.ec.europa.eu/system/files/2019-11/2019_chp_no_english_0.pdf
6. https://www.visualcapitalist.com/which-countries-have-universal-health-coverage/#:~:text=The%20State%20of%20Universal%20Health,some%20form%20of%20universal%20healthcare.&text=UHC%3F&text=The%20United%20States%20is%20the,for%20all%20of%20its%20citizens.
7. https://www.commonwealthfund.org/publications/issue-briefs/2023/jan/us-health-care-global-perspective-2022
8. https://www.statista.com/statistics/1376359/health-and-health-system-ranking-of-countries-worldwide/
TO WHAT EXTENT IS IT ETHICALLY PERMISSIBLE FOR INDIVIDUALS TO ENGAGE IN BIOHACKING, INCLUDING BODY MODIFICATIONS LIKE MAGNETIC IMPLANTS OR SENSORY ENHANCEMENTS?
Ira C
In our day and age, more and more individuals have turned to modifying the human body with technology, from inserting magnets into fingertips to implanting RFID (radio-frequency identification) chips for contactless payment. Although this may seem like an amazing way to enhance our capabilities, it can also be dangerous, and sometimes fatal. This practice is known as biohacking; to some, it is a form of freedom and innovation, while others are concerned about safety risks, ethical issues, and inequality within society. This raises the question: to what extent is it ethically permissible for individuals to engage in biohacking, including body modifications like magnetic implants or sensory enhancements?
What is Biohacking?
According to Oxford Languages, biohacking can be defined as exploiting genetic material experimentally without regard to ethical standards. In practice, it is a broad term for lifestyle self-improvement: it can involve making minor changes to someone’s body, diet, and lifestyle to improve their overall quality of life. Some enhancements are relatively safe and widely used, such as wearable technology like smartwatches. However, more serious cases can pose health risks and unpredictable outcomes. A subgroup of biohackers, known as ‘grinders’, consider themselves innovators of human augmentation, typically involving the implantation of devices inside the body, which is what this article will focus on.
Many people take part in biohacking in order to improve minor flaws or characteristics about themselves and increase their control over their health, to stay in good shape. On the other hand, some use biohacking to experimentally improve the human body using
technology and push the boundaries of human capabilities. For example, certain ‘grinders’ are trying to extend the human lifespan from the current average of ~73 years to 200 or even 300 years.
Benefits of Biohacking
First of all, it must be acknowledged that every person is their own individual human being, and we each have human rights and the freedom to govern our own decisions. Supporters of biohacking argue that people have the right to personal autonomy and should be allowed to engage in these practices at their own discretion. From a purely legal standpoint, others concerned about the health of these ‘grinders’ should not interfere with their choices, as the decision belongs to that person alone. To illustrate this point, we can compare ‘grinder’ activities to smoking or substance use, which is commonly seen around the world today: even though the health risks of these substances are well known, individuals choose to use them, and society does not interfere with that choice.
In addition, biohacking and ‘grinders’ contribute significantly to the development of technology and science, by conducting research into the limits of the human body whilst incorporating technology. This could be seen as innovation: an activity worth investing in to push human limits, provide new data for scientists, and serve as a means of technologically augmenting the human body. Grinders are also scientists themselves, experimenting with technology and developing inventions aimed at enhancing humans. This could shape the future of human health, well-being, and physical boundaries, improving billions of people’s quality of life.
Risks and Ethical Concerns
On the other hand, making humans more powerful beings raises the problem of societal inequality, as certain individuals would acquire more advanced capabilities, setting them apart from those who were unable to access these technological enhancements. This can lead to unrest and tension within the population, and even internal conflicts between people due to the perceived unfairness of the
situation. We must also consider the perspective of unmodified people, who could consider themselves inferior to ‘enhanced’ individuals and at a biological disadvantage. The resulting social tension and division may lead to resistance movements or opposition towards enhanced individuals, which could escalate into physical altercations.
Additionally, the technological augmentation of the human body presents a multitude of health risks and raises significant ethical concerns. Although individuals should have the right to their own body, one could argue that unrestricted autonomy can lead to misuse and harmful consequences. One example is Aaron Traywick, who was found dead after injecting himself with a supposed ‘herpes vaccine’ developed through experimental biohacking, with no scientific evidence to back it up. This could set a dangerous precedent: by normalising these invasive practices, many others online could be encouraged to imitate them, with potentially harmful outcomes. The misconduct of biohacking can have fatal consequences, so is it really ethically permissible to allow people to take part in these practices, knowing that they are subject to severe, possibly fatal, health risks?
Conclusion
This article has explored the practice of biohacking, highlighting both its potentially innovative discoveries and the risks it poses to society. While it offers a remarkable opportunity to conduct experimental research that pushes human capabilities and empowers individuals to take control of their own bodies, we must also address the ethical concerns regarding inequality, safety, and societal impact. Without regulation or strict rules in place to moderate participation, the practice could lead to harmful consequences, threatening the safety of individuals and those around them. This article highlights the need for laws or boundaries for anyone taking part in ‘grinder’ activities, to promote innovation and creativity whilst still ensuring human safety. Overall, as biohacking and self-experimentation become more common, it is essential that people exercise their personal autonomy in a safe and responsible manner.
Bibliography:
https://www.merriam-webster.com/dictionary/biohacking
https://www.jhsgo.org/article/S2589-5141(24)00057-4/fulltext
https://pmc.ncbi.nlm.nih.gov/articles/PMC11331163/#:~:text=It%20is%20estimated%20that%20between,people%20already%20have%20been%20chipped.
https://www.medicalnewstoday.com/articles/biohacking#does-it-work
https://blog.unguess.io/exploring-biohacking-the-intersection-of-biology-and-technology-in-human-augmentation
THE ROLE OF NEUTROPHILS IN CANCER TREATMENT
Aimal J
Neutrophils are the most abundant type of immune cell and are classified as phagocytes: white blood cells that defeat invaders in the body through phagocytosis, a process where they engulf the invader and then release digestive enzymes into the pocket where it is trapped, killing it. They play a key role in inflammation and are among the first responders to an invasion. However, despite their crucial role in the host immune response, when faced with cancer they perform their job only half of the time, eliminating the tumours; the other half of the time they seem to act as our antagonists, aiding the malignant cells rather than destroying them, almost “betraying” the body they exist to defend (1).
Neutrophils act as the body’s first line of defense against pathogens, alongside macrophages, another type of phagocyte. They form and develop within the bone marrow, starting out as myeloblasts until fully differentiated, after which they patrol the body through the bloodstream and the lymph nodes. Due to their incredibly aggressive behavior compared to other immune cells, they typically live only up to 24 hours and need to be replaced continuously. While their main method of killing is phagocytosis, they have an additional, unique weapon: Neutrophil Extracellular Traps, also known as NETs. In this process, neutrophils essentially “vomit” out their DNA, trapping and killing pathogens, usually at the cost of the cell itself (although neutrophils do occasionally keep fighting afterwards until they die of exhaustion!) (1)

Neutrophils are not cells specifically designed to kill the body’s own malignant cells, unlike Natural Killer (NK) cells, and for good reason. Neutrophils are the most prevalent white blood cells in your body, accounting for approximately 40-60% of the total (2); if they were designed to attack the body’s own cells, autoimmune diseases would be far more prevalent in our population. However, as the main white blood cell in the body, they are still capable of targeting virus-infected or cancerous cells should the need arise. The immune system recognizes malignant cells by requiring every cell with a nucleus (i.e., any cell that produces proteins) to display the proteins it produces using MHC class I molecules: receptors on the surface of the cell that act as a “window” into its inner workings. The human body operates on a “guilty until proven innocent” scheme: a cell is at risk of forced apoptosis (cell suicide) unless it displays its proteins and proves that it is functioning as it should. Cancer cells are cancerous due to mutations in their genetic code that prevent the cell from performing apoptosis and cause it to divide uncontrollably. These mutated genes are called “oncogenes” and produce “oncoproteins”, an obvious sign of a cell being cancerous. No matter the type of cell, no matter where in the body, all cancer cells carry oncogenes. The body recognizes this, and so immune cells are trained to trigger apoptosis in any cell that exhibits oncoproteins on its MHC class I molecules. But what if the cell develops a mutation that allows it to hide its MHC class I molecules? The body immediately recognizes this, too, as a sign that the cell is dangerously faulty, and prompts the cell into apoptosis. (1)
The anti-tumour effect of neutrophils works in this way: because neutrophils can recognize MHC class I molecules, they can trigger other immune cells, such as NK cells or killer T cells, into destroying cancerous cells. However, neutrophils are also capable of targeting cancerous cells in other ways. In a mouse cancer model, neutrophils were drawn to the tumour by its low-oxygen signals and, after recognizing the cells, were able to induce a separation of the cells from the membrane they lay on, cutting off their food and oxygen supplies and inhibiting the tumour’s growth. When tasked with killing cells, neutrophils secrete reactive oxygen species (ROS), molecules that can switch certain cell functions on and off, triggering apoptosis. Neutrophils also play a large role in triggering the adaptive immune system, the body’s second line of defense, which includes NK cells, specialized in killing cells that no longer adhere to the body’s collective (3).
However, despite their numerous anti-tumour effects, neutrophils also seem susceptible to being reprogrammed or manipulated by cancer cells into aiding the tumour. Neutrophils fight very aggressively, to the point where they frequently damage surrounding tissue. This can become a problem if they fight repeatedly in one area, since the repeated DNA damage in surrounding cells promotes mutation and the formation of a tumour. Additionally, tumour cells can often mislead neutrophils into believing they are damaged cells in need of aid, prompting neutrophils to release a chemical that triggers angiogenesis, the production of new blood vessels. These supply the tumour with nutrients and oxygen, promoting its growth. Neutrophils can also release signals that inhibit nearby immune activity. This is typically a good thing, as it means immune cells will not target and destroy damaged cells (and thus cause further damage); when it comes to cancer, however, it acts as a direct disadvantage, allowing the tumour to grow unchecked.
The dual-sided role of neutrophils in the growth of cancer poses significant problems for medical treatment. On one hand, inhibiting the function of neutrophils is difficult to justify, as they DO provide significant anti-cancer support, as well as general defense around the body; to inhibit neutrophils is to leave the rest of the body at risk of illness from pathogens. On the other hand, neutrophils have the potential to disable immune responses within a tumour, harming the efficiency of certain treatments. Three solutions have been suggested so far: restricting neutrophils from entering cancer sites, removing their ability to suppress other members of the immune system, and improving their anti-tumour effects. In trials performed so far, these methods have shown significant potential in halting or suppressing the development of cancer. (3)
Sources:
1.Immune by Philipp Dettmer
2.https://www.aafp.org/pubs/afp/issues/2015/1201/p1004.html
3.https://molecular-cancer.biomedcentral.com/articles/10.1186/s12943024-02004-z
DOES CHRONIC PAIN CHANGE THE BRAIN’S RESPONSE TO PAIN?
Rita W
Pain is one of the most essential warning systems in our body. It protects us by alerting us when something is wrong, for example, when touching something too hot or twisting an ankle. But what happens when pain doesn’t stop, even after the injury has healed? This kind of long-term discomfort is known as chronic pain, and it doesn’t just affect the body; it rewires how the brain processes and responds to pain itself. Over time, repeated pain can physically alter nerve pathways and brain regions which changes how our bodies respond to pain.
Understanding Chronic Pain
Chronic pain is typically defined as pain that lasts for more than three months. Unlike short-term, or acute, pain, which disappears as the body heals, chronic pain persists long after the body has recovered from the initial injury or illness. According to the MSD Manual, this can happen when nerves continue sending pain signals even when there is no longer a clear physical cause. This constant stimulation can “train” the nervous system to stay in a heightened state of alert, leading to pain that feels stronger and lasts longer than it should. People living with chronic pain often begin to avoid activities that may trigger or worsen their symptoms. This avoidance can lead to a cycle of reduced physical activity, muscle weakness, and even social withdrawal. The impact extends far beyond physical sensations: it influences mood, memory, and motivation, changing how a person lives their day-to-day life.
How Repetitive Pain Rewires the Brain
Pain isn’t just felt in the body; it’s processed in the brain. Normally, when you experience pain, nerve signals travel through the spinal cord and reach brain regions like the thalamus, somatosensory cortex, and limbic system. These areas decide how intense the pain feels and how you should react to it. However, when pain becomes chronic or repetitive, the brain’s pain circuits can change through a process called neuroplasticity, the brain’s ability to reorganize and form new connections. According to Lone Star Neurology, long-term pain can cause certain regions, like the prefrontal cortex and thalamus, to shrink in volume, while others involved in emotion and stress become overactive. These changes alter how we perceive pain and how we respond to it. In other words, the brain may start to expect pain, making it more easily triggered even by mild sensations. This process, called central sensitization, makes the brain and spinal cord more responsive to pain signals over time. Instead of “getting used to” pain in a way that dulls it, the nervous system can actually become more sensitive to it.
Research Findings: Motivation and Emotion
One of the most interesting studies on chronic pain came from Stanford University School of Medicine (2014). Researchers found that long-term pain affects not just sensory processing but also the brain’s motivation and reward systems. In their experiment, animals with chronic pain became less motivated to work for rewards, not because they didn’t want the reward, but because the pain altered how their brains balanced effort and reward. The study showed that the nucleus accumbens, a key part of the brain involved in motivation and pleasure, had reduced excitatory input due to a molecule called galanin. This disrupted how the animals linked effort with reward. In humans, this same mechanism may help explain why chronic pain is often linked with fatigue, loss of motivation, and depression. The researchers described this as a “brain drain”, where chronic pain slowly saps energy and drive, making even enjoyable activities feel like too much effort.
Can the Brain “Get Used To” Pain?
It might seem logical to think that if pain keeps happening, the brain will eventually adapt and “tune it out.” The reality, however, is much more complex: repeated pain doesn’t make the brain immune, it makes it more alert. Over time, neurons involved in pain transmission can become “trained” to fire more easily, even in response to harmless signals. This means the brain can start interpreting normal sensations, like pressure or touch, as painful. Rather than desensitizing, the system becomes hypersensitive. However, the brain also retains some ability to adapt in
the other direction. With consistent treatment and positive interventions, such as mindfulness, physical exercise, and therapy, the brain can gradually rewire to dampen unnecessary pain signals. This demonstrates the flexibility of neuroplasticity: the same mechanism that worsens pain can also help relieve it, but only if properly guided.
In conclusion, chronic and repetitive pain doesn’t just linger in the body; it leaves its mark on the brain. It changes how we feel, think, and act, altering motivation, memory, and emotional balance. Studies like those from Stanford University show that these effects are rooted in physical changes to brain circuits involved in reward and emotion. So, can our brains “get used to” pain? Not exactly. Instead of dulling over time, chronic pain often rewires the brain to become more sensitive and more focused on pain signals. Yet understanding these changes gives hope: because the brain is plastic, treatments that target both the mind and body can help retrain it to respond more normally again. Chronic pain isn’t just a symptom; it’s a neurological condition that shows just how deeply the body and brain are connected, and how important it is to treat chronic pain before it turns into a larger problem.
Sources:
https://www.precisionpaincarerehab.com/blog/long-term-effects-of-chronic-pain-on-the-brain-and-body-explained-33962.html#:~:text=Altered%20Central%20Nervous%20System%20Processing,and%20smell%2C%20can%20be%20increased.
https://lonestarneurology.net/others/the-connection-between-chronic-pain-and-neuroplasticity/#:~:text=Brain%20Changes%20Associated%20with%20Chronic,volume%20in%20certain%20brain%20areas.
https://med.stanford.edu/news/all-news/2014/07/study-reveals-brain-mechanism-behind-chronic-pains-sapping-of-mo.html
https://www.msdmanuals.com/home/brain-spinal-cord-and-nerve-disorders/pain/chronic-widespread-pain
https://www.moregooddays.com/post/why-brain-fog-with-chronic-pain
SMELL YOUR PAST, FEEL IN THE PRESENT
Nour A
Have you ever caught a whiff of a certain scent that transported you back to a long-forgotten memory? Maybe a single sniff of perfume was enough to remind you of your first childhood crush. Here is why: the limbic system is a small collection of structures in the brain that control your emotions and memories. Do not be fooled by its size, as it constantly collaborates with the rest of the brain to shape your interactions with the world around you. When you take a breath, invisible molecules travel through your nose until they reach sensory neurons, from where signals are transferred to the olfactory bulb in the brain to be interpreted. However, these electrical impulses also skip straight to the brain’s emotional control stations, making smell unique among the senses. This article delves into the links between the human sensorium and the limbic system, explaining why smell can evoke such strong emotional experiences.
The Anatomy
Some argue that reaching a consensus on the structures that make up the limbic system is too much of a simplification. The complexity of emotion calls upon all parts of the brain, some more than others, so a definitive list is impossible; however, these are the structures most frequently involved:

Hypothalamus: Regulates vital functions (body temperature, sleep, hunger, thirst, and mood).
Amygdala: Processes social interpretations and emotions (especially pleasure and fear).
Thalamus: Helps with memory and planning, and relays sensory information (for all senses except smell).
Hippocampus: Responsible for forming new memories and recall.
Olfactory bulb: Receives signals from the olfactory sensors in the nose and transmits them to the rest of the brain.
Why is scent unique?
The distinctive ability of odour to trigger emotion stems from the direct connections between the olfactory bulb, the hypothalamus, and the amygdala. For all other senses, information is first sent to the thalamus to be filtered, and only then are the signals passed to the relevant parts of the brain. Olfactory signals, however, travel directly from the olfactory bulb to the limbic system. This shortcut bypasses the thalamus, resulting in a faster, more instinctual response. This immediacy is what makes the limbic system so critical for survival, in humans and animals alike. Early humans relied on smell for the fight-or-flight response, sensing danger, finding food, and reproduction, creating a tight-knit web of emotional significance. Although auditory and visual stimuli may produce faster reaction times in some contexts, the uninterrupted pathway from smell to the memory and emotional control centres makes its impact extraordinarily deep and long-lasting.
How emotion is linked
Lilianne Mujica-Parodi (director of LCNeuro) led a study investigating whether humans can detect fear or stress from the odour of sweat without consciously realising it. A randomly selected group of participants was exposed to two sweat samples: one from people who had exercised on a treadmill, and the other from first-time skydivers. Although the participants thought there was no difference between the two types of sweat, their brains responded differently. Those who smelt the ‘fear sweat’ felt scared, experiencing high levels of activation in their amygdala and hypothalamus; the same participants showed no amygdala activation when smelling the ‘exercise sweat’. This suggests that scent carries emotional information that is recognised only subconsciously, allowing somebody to mirror another person’s emotional state without exchanging a single word. In turn, this demonstrates the limbic system’s role not only in providing warnings but also in social and empathetic interactions. Moreover, in a workplace or group setting, anxiety and stress could spread without anyone realising. The famous saying that ‘they can smell your fear’ may be scientifically true!
Scent and memory
Cues are triggers that your brain associates with specific memories. There are various types of cues, like sights or sounds; nevertheless, smell is the most powerful. Memories stimulated by a scent cue are called odour-evoked autobiographical memories. The olfactory bulb, amygdala, and hippocampus are all closely connected, and this link forms the foundation of the phenomenon. Information from the olfactory bulb is sent directly to the amygdala, which then transmits it to the hippocampus for long-term memory integration. This is why it is possible to recall an old, detailed memory instantly after smelling something. When a memory is first formed, the information is encoded: every experience you have changes your brain, forming new connections and pathways. For these memories to become long-term, however, they must be consolidated and strengthened. Once a long-term memory is formed, it can be retrieved. Recall is often triggered by cues in the environment that are closely associated with a memory, and smell can be a vital part of how a memory is encoded in the brain. As a result, when you smell something that reminds you of a past event, it can prompt a vivid and emotional recollection, so much so that scent has been used in dementia, Alzheimer's, and PTSD therapies, with strikingly positive results.
Conclusion
The limbic system continues to grow in significance as we gain a more thorough understanding of emotion, memory, and smell. Comprehending its function enables you to apply it to all areas of life, from social settings to the art of human behaviour. Each day brings a new discovery of how scent can be used to relieve the symptoms of neurocognitive disorders or even to advertise a product. With every inhale, the limbic system bridges science and sentiment, transforming simple scents into evocative memories.
References:
https://my.clevelandclinic.org/health/body/limbic-system
https://www.ncbi.nlm.nih.gov/books/NBK538491/
https://www.ncbi.nlm.nih.gov/books/NBK55967/
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0006415
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0005987
https://pubmed.ncbi.nlm.nih.gov/21208988/
https://pmc.ncbi.nlm.nih.gov/articles/PMC8927807/
https://news.harvard.edu/gazette/story/2020/02/how-scent-emotion-and-memory-are-intertwined-and-exploited/
THE BIOLOGY OF DEPRESSION
Leanna E
What is the definition of depression?
Depression (major depressive disorder) is a common and serious mental disorder that negatively affects how you feel, think, act, and perceive the world.
Symptoms of depression:
You may start to feel sad, irritable, empty, and hopeless, losing interest or pleasure in activities you usually enjoy.
A change in appetite, e.g., eating more or less than usual.
Sleeping too little or too much.
Loss of energy, or increased tiredness or fatigue.
Increase in purposeless physical activity (e.g., inability to sit still, pacing, hand-wringing) or even slowed movements.
Feeling worthless or excessively guilty, often over nothing.
Difficulty thinking or concentrating, and forgetfulness.
Self-harm (e.g., burning, cutting, or hitting).
Thoughts of death, suicidal ideation, or suicide attempts.

Types of depression:
Major Depressive Disorder (MDD)
This is a common mood disorder that affects how you feel, think, and behave. It involves an ongoing sense of sadness and a loss of interest in activities you once enjoyed. Common symptoms of MDD include continuous low mood, low self-esteem, and a lack of motivation. MDD can have a variety of causes, but it is usually linked to chemical changes in the brain, and it tends to run in families. It can also be triggered by life events, e.g., bereavement. It is most common in young adults in their 20s, but it can develop at any age. People with major depressive disorder have had at least one major depressive episode. For some, the disorder is recurrent, meaning they may experience episodes once a month, once a year, and so on. There is no guaranteed cure for MDD, yet it doesn't last forever, and its symptoms can be reduced with therapeutic help and antidepressants.
Persistent Depressive Disorder (PDD)
Persistent depressive disorder is a continuous, long-term, chronic state of low-level depressed mood, in which you may feel sad or empty, lose interest in activities, and suddenly lack motivation. It can last a long time, usually two or more years. The depressed state of persistent depressive disorder isn't as severe as with major depression, but that doesn't mean it can't be just as disabling. PDD tends to be triggered by traumatic or stressful events. It often begins in childhood, adolescence, or early adulthood. Treatment for PDD can come in many different forms; as with MDD, talking to a therapist or taking medication can improve mood over time.
Postpartum Depression
This occurs after childbirth and affects one in every nine women. It is usually caused by a dramatic drop in the hormones estrogen and progesterone. It generally starts 1 to 3 weeks after giving birth, but it can begin anytime within the first year. Postpartum depression is characterized by feelings of sadness, exhaustion, and related symptoms, and it can last for several months to years.
Bipolar Depression
People diagnosed with bipolar disorder often have mood swings involving both lows and highs. When people experience the lows of bipolar disorder, their symptoms are very similar to those of unipolar depression. Bipolar disorder is a lifelong illness; nonetheless, you can learn to manage your symptoms and live a healthy life. Episodes are typically triggered by changes in sleep, periods of high stress, or interpersonal conflict, and the disorder also has a genetic component (though genetics is usually not the sole cause).
Seasonal Affective Disorder (SAD)
Seasonal Affective Disorder, as the name states, is seasonal: it typically starts in the late fall and early winter and dissipates during the spring and summer. Depressive episodes linked to the summer can occur, but they are much less common than winter episodes of SAD. It is characterized by a recurrent seasonal pattern, with symptoms lasting about 4-5 months of the year. There is no clear cause of SAD, but less sunlight and shorter days are widely believed to play a part, and melatonin (a sleep-related hormone) may also be linked. SAD typically starts between the ages of 18 and 30, and it is commonly treated with antidepressants.
Psychotic Depression
Psychotic depression occurs when a major depressive episode is accompanied by psychotic features such as hallucinations and delusions; the psychotic symptoms generally have a depressive theme, such as guilt, worthlessness, or death. While it is somewhat similar to schizophrenia, psychotic depression is a subtype of major depressive disorder, whereas schizophrenia is a stand-alone condition. Two treatments are recommended for psychotic depression: an antipsychotic medication combined with an antidepressant, or electroconvulsive therapy (ECT). It normally occurs from around age 30 onwards, but it can also occur in one's 20s.

Nearly three in ten adults (29%) have been diagnosed with depression at some point in their lives, and about 18% are currently experiencing depression, according to a 2023 national survey. Women are more likely than men, and younger adults are more likely than older adults to experience depression. While depression can occur at any time and at any age, on average, it can first appear during one’s late teens to mid-20s.

For adolescents in 2021, the overall rate was much higher at 20.1%, with nearly one in three girls (29.2%) affected compared to about one in nine boys (11.5%). Depression increased with age, from 13.0% at ages 12–13 to 26.8% at ages 16–17. Multiracial adolescents (27.2%) reported the
highest levels, followed by Hispanic (22.2%) and White youth (20.7%), while Black (14.0%) and Asian (13.8%) youth reported the lowest.

For adults in 2021, 8.3% reported experiencing depression in the past year, with women (10.3%) showing higher rates than men (6.2%).
Young adults aged 18–25 were most affected at 18.6%, while those 50 and older had the lowest rate at 4.5%. By race and ethnicity, people of two or more races (13.9%) and American Indian/Alaska Native adults (11.2%) reported the highest levels, while Asian (4.8%) and Native Hawaiian/Other Pacific Islander adults (5.1%) reported the lowest.
Areas of the brain involved with depression
The regions shown here are mirrored in both hemispheres of the brain. The illustration does not show precise locations.

Amygdala: The amygdala is part of a group of structures deep in the brain that is associated with emotions such as anger, pleasure, sorrow, and fear. Recalling an emotionally charged memory, such as a frightening situation, activates the amygdala. Activity in the amygdala is higher when a person is sad or clinically depressed, and this continues even after recovery from depression. This increase in activity can cause the amygdala to increase in size.
Basal ganglia: The basal ganglia are a related group of structures deep in the brain. They are connected to and interact with structures closer to the brain's surface. They help with movement and are involved in memory, thinking, and emotional processing. Studies have found that during depression, the basal ganglia may decrease in size and change in structure.
Hippocampus: The hippocampus plays a key role in long-term memory. It is this part of the brain that registers fear when you are confronted, for example, by an aggressive dog. The memory of such an experience may make you wary of other dogs you come across later in life. The hippocampus also decreases in size in some depressed people, and research suggests that ongoing exposure to stress hormones impairs the growth of neurons in this part of the brain.
THE EFFECTS OF EARLY CHILDHOOD ENVIRONMENTS ON BRAIN PLASTICITY
Iman M
You may be wondering what brain plasticity even is. Brain plasticity is the brain's ability to modify its neuronal structures and thereby change its responses to stimuli. Plasticity determines whether an individual can form new synaptic connections as well as remove them, all based on changes in their environment.
Plasticity changes are very dependent on age, as seen in an experiment done with rats. Adult rats placed in complicated environments had drastic changes in their brains and spine density, whereas for young, newborn rats, this was not the case, and the results were much less severe.
In early childhood, however, over 1 million neural connections are created each second during the first few years of life, the most active period for forming neural connections. This happens in response to interactions with parents and caregivers, which are essential for healthy brain development, according to studies done by Harvard.
By the time a child has reached the age of 5, their brain should be nearly adult-sized as the basic framework of connections is established. Of course, there is still time to add languages, longer attention spans, etc., but the initial connections stay forever.
Our brains are made up of billions of connections; however, our early childhood can provide either a weak or a strong foundation for later connections to form. The main factor in healthy brain development is a child's social environment. In order for the brain to develop, children must have interactions with others. Without these interactions, children risk damage to their future cognitive, emotional, and social abilities. Our academic performance, mental health, and relationships are all built on the emotional development we achieved as children.
Other factors brain plasticity depends on:
1.Stress
2.Diet
3. Peer relationships
4.Parent-child relationship
5.Psychoactive drugs
6.Sensory and motor experiences
7.Intestinal flora (bacteria and organisms)
8.Hormones
*not in order
Ramón y Cajal (Nobel Prize winner in 1906) believed that "In adult centres the nerve paths are something fixed, ended, immutable. Everything may die, nothing may be regenerated." Although damaged neurons never regrow, we now know the brain continues changing and adapting throughout life.
Deprivation of family in childhood has statistically been linked to lower total brain volume, lower intelligence, and more hyperactivity-disorder symptoms later in life. Lack of family is associated with higher rates of mental disorders in adulthood, and one study shows that deprived children grow into adults with a lower right inferior frontal surface area and volume but greater right inferior temporal lobe thickness. Animal experiments suggest that the regions most vulnerable to early-life stress are the prefrontal cortex, amygdala, and hippocampus.

Hippocampus: Memory centre of our brains. Connections made can associate memories with the 5 senses. Adult stem cells can make new neurons here.
Amygdala: Responsible for our emotional responses, including fear and anger. It attaches us emotionally to memories, specifically fearful ones.
Physical, emotional, and sexual abuse during childhood often leads to negative mental health outcomes in adults. A study of over 44 thousand participants found that individuals who faced more than 3 forms of early-life stress were 4-12x more likely to have a substance use disorder or depression, and had a higher likelihood of suicide and suicidal tendencies. The study also revealed an increased risk of personality disorders and schizophrenia.
Early childhood stress can alter DNA methylation patterns in receptor genes, and these changes are seen in adulthood, which shows how there are long-term effects of early stress.
In a 2018 study, it was observed that 80 girls aged 14-16 who had previously experienced maltreatment had different shyness levels in comparison to the control group. Greater left frontal EEG activity when at rest and when stimulated is linked to approach-oriented emotions and behaviour, while right frontal EEG activity is associated with withdrawal tendencies, early shyness, and increased risk of anxiety and depression. It was found that the adolescent girls who experienced childhood maltreatment had higher levels of shyness than the control groups, but only when they had a greater right frontal EEG asymmetry, and those with left frontal EEG asymmetry were less influenced by environmental factors.
In another 2018 study, risk-taking behaviour was investigated in relation to children's family environments. The study involved 167 individuals aged 13-15 (53% boys). Participants completed decision-making tasks while undergoing brain scans. The study found that children whose parents were greatly involved in their activities tended to evaluate risky choices more carefully; however, these results mostly came from
participants in low-chaos environments. The results show that environmental stability can shape the brain’s processing of risky decision-making.
Therefore, in conclusion, early childhood trauma can negatively impact the brain’s plasticity and cause reduced brain volume, higher shyness, lower evaluation skills, and a higher chance of mental disorders. However, positive environments such as stable families and interactions can create a great brain foundation for adulthood.
Sources:
https://solportal.ibe-unesco.org/articles/neuroplasticity-how-the-brain-changes-with-learning/
https://developingchild.harvard.edu/resources/working-paper/childrens-emotional-development-is-built-into-the-architecture-of-their-brains/
https://www.pnas.org/doi/10.1073/pnas.1911264116
https://www.mdpi.com/2673-4087/3/1/8
https://pmc.ncbi.nlm.nih.gov/articles/PMC5948168/
https://pmc.ncbi.nlm.nih.gov/articles/PMC9291732
https://pmc.ncbi.nlm.nih.gov/articles/PMC5504954/
https://pmc.ncbi.nlm.nih.gov/articles/PMC3722610/
https://pmc.ncbi.nlm.nih.gov/articles/PMC3222570/
https://pmc.ncbi.nlm.nih.gov/articles/PMC7013153/
https://qbi.uq.edu.au/brain/brain-anatomy/limbic-system
ADDICTION ON A MOLECULAR LEVEL
Jiwon Y
Introduction:
Addiction is a disease in which, after a period of recreational use, a subset of individuals develops compulsive use that does not stop even in light of major negative consequences.
This can be explored through two questions.
Molecular-signalling question: How do drugs of abuse alter molecular processes in the brain, and how do these molecular pathways lead to long-term change?
Epigenetic-plasticity question: What are the lasting changes in gene expression, and how do life experiences influence vulnerability to these changes?
Neuroscientists have made significant progress on the question about molecular signaling. For example, NIH (2019) shows that excessive dopamine release during drug use triggers intracellular signaling cascades, gene transcription and synaptic alterations, challenging the assumption that addiction is simply a matter of choice. Nestler & Lüscher (2019) found that epigenetic mechanisms, including histone modifications and DNA methylation, sculpt the brain’s transcriptional response to drugs and thereby the trajectory of addiction.
Epigenetic and circuit-level frameworks address the epigenetic-plasticity question. Factors within one's life, such as early stress, shape the chromatin landscape and thus modulate how the brain responds to drugs, altering each user's vulnerability (Cadet et al., 2016).
These two questions cannot be fully separated. The molecular signaling events triggered by drugs require a permissive epigenetic background and produce synaptic and circuit modifications that are stabilized by chromatin re-wiring. Accordingly, addiction is best understood not as a purely behavioral or psychological phenomenon, but as a disease of long‐term neural plasticity, rooted in molecular, epigenetic, synaptic and circuit mechanisms.
Part 1: Molecular signaling in Drug exposure
1.1; Dopamine surges
The primary gateway into the molecular cascade of addiction is the excessive activation of the mesolimbic dopamine system. Repeated use of addictive drugs increases dopamine release into regions such as the nucleus accumbens (NAc) and the ventral tegmental area (VTA) (Volkow et al., 2016).
This dopamine surge triggers downstream signaling: elevated cAMP, activation of protein kinases (such as PKA), phosphorylation of transcription factors (for instance, the cAMP response element-binding protein, CREB), and induction of gene expression.

1.2; Transcription factors
Beyond immediate signaling, drugs of abuse induce changes in transcriptional regulators. One key player is the ΔFosB family of transcription factors, which accumulates in reward circuits after repeated drug exposure and remains stable, acting like a "molecular switch" for addiction-related adaptation.
Nestler & Lüscher highlight how drug-induced dopamine signaling and other mechanisms converge on transcriptional regulators, altering the expression of target genes that govern synaptic strength and plasticity.
1.3; Synaptic and circuit adaptations
From the molecular level we move to synaptic change. Drugs alter how brain cells communicate with each other in areas like the VTA and NAc. These changes can make some synapses stronger or weaker (LTP/LTD), especially in response to drug-related cues (Lüscher and Malenka, 2011). As a result, the balance between the brain circuits that control reward and behavior is disrupted: control signals become weaker and reward signals become stronger, so drug-seeking becomes harder to resist.
1.4; Summary of Part 1
Overall: drug exposure → dopamine surge → intracellular kinase signaling → transcription factor activation → gene expression changes → synaptic plasticity → circuit rewiring.
Part 2: Epigenetics, plasticity and vulnerability
2.1; Epigenetic modifications
Epigenetics plays a central role in addiction. Epigenetic mechanisms include histone modifications (acetylation, methylation), DNA methylation, and changes in chromatin structure, which control whether genes are turned on or off.
Drug exposure triggers epigenetic remodeling; for instance, repeated dopamine signalling can alter histone acetylation linked to neuron activity. These changes help make earlier molecular adaptations more permanent.

2.2; Life-experience, vulnerability and the epigenome
Addiction risk isn't caused only by drug use but also by life experiences that shape the epigenome. For example, early-life stress, the social environment an individual grew up in, or prior drug exposure can increase the brain's sensitivity to drugs. Thus, certain individuals have chromatin states that magnify the effects of drug-triggered signaling, making them more vulnerable to addiction. In short: individual history + drug exposure = heightened adaptation.

2.3; Long-term plasticity
Epigenetic and transcriptional changes feed into longer-term plasticity: modifications of dendritic spine density, synaptic architecture, connectivity of circuits. This structural plasticity makes the brain increasingly wired towards drug-seeking and drug-taking behaviors (Kalivas & O’Brien, 2008).
Such changes underpin the transition from voluntary use to compulsive use: when the reward circuitry, learning/memory circuits, and control circuits are re-organized by molecular and epigenetic changes, behavior becomes harder to redirect.
2.4; Summary of Part 2
In summary, epigenetic modifications provide a durable framework through which life experience and drug exposure shape vulnerability; these changes stabilize gene-expression shifts and drive structural synaptic/circuit rewiring; therapy might aim at reversing or modulating these pathways.
Conclusion:
The molecular basis of addiction shows that addiction cannot be reduced to a behavioral choice or simple chemical pleasure. Rather, it is a disease of neural plasticity, involving both short-term molecular changes and long-term rewiring.
By rejecting a simplistic "just willpower" model, we see addiction as stemming from the brain's adaptive capacity being hijacked by drugs: the same molecular and cellular mechanisms that serve learning and memory become repurposed to lock in drug-seeking. The epigenetic dimension shows how vulnerability is shaped by life experience, and how the brain is primed for adaptation.
Thus, effective treatment should not only address behaviour and environment, but also target the molecular and epigenetic biology: modulating transcriptional and chromatin states, restoring synaptic balance, and supporting circuit resilience.
References:
1. Cadet JL (2016). Epigenetics of stress, addiction, and resilience: therapeutic implications. https://doi.org/10.1007/s12035-014-9040-y
2. Volkow et al. (2016). https://www.nejm.org/doi/10.1056/NEJMra1511480
3. Walker DM, Cates HM, Loh YE (2018). https://www.biologicalpsychiatryjournal.com/article/S0006-3223(18)31447-1/abstract
4. Bellone C and Lüscher C (2012). https://www.frontiersin.org/journals/molecular-neuroscience/articles/10.3389/fnmol.2012.00075/full
5. Nestler E and Lüscher C (2019). https://pmc.ncbi.nlm.nih.gov/articles/PMC6587180/
AI IN HEALTHCARE
Shivaay S
Introduction:
Ever since AI and ML became common in diagnostic centers and operating theaters, doctors have found this advanced technology very useful in their daily routine. The other day, I was talking to a doctor friend of my parents'. As a urologist, he told me how the Da Vinci robot has made his surgeries more accurate and efficient, proving particularly useful in preventing human error and fatigue in operating theaters around the globe.
Common Uses of AI in Healthcare:
AI is currently being used in many hospitals around the world, and in many ways. For example, diagnostic work (whether blood tests or radiological imaging, like CT or MRI scans) has made tremendous progress thanks to AI. It is used in disease prediction and real-time image comparison, and it produces accurate summaries of reports. Early disease detection has also been a huge advantage here, speeding up patients' recoveries. AI models of diseases have helped doctors perform practice surgeries before operating: this not only helps junior doctors learn a procedure beforehand but also helps the surgeon decide where the incision should be made, which tissue to remove, and how the sutures will be done.
Hospitals have also benefited with scheduling, medicine inventory management, and appointment work with the use of efficient technology. All of this reduces workload on doctors, nurses, and hospital staff.
Possible Issues and Oversights with AI in Medicine
While technological advancements are hugely welcome, they do come with a cost. Ethical concerns, data security, AI errors and algorithm biases threaten to push policymakers towards stricter rules and guidelines.
Solutions to Possible Oversights
So, how does one prevent the above from happening?
1. Establishing a secure audit trail (a system that traces actions on data) helps control who has access to patient data that could potentially be misused.
2. Minimizing the amount of patient information that is fed into an AI system for security, training, and treatment. For example, if a patient is going into surgery, the information fed into a robot like the Da Vinci should be strictly restricted to the condition at hand. There should be no need to include the patient's name, number, address, etc. If sensitive information like this is released to unregulated companies, there could be a large patient–doctor privacy breach. This is why many hospitals identify patients using only their MRN (Medical Record Number), which preserves patient–doctor privacy while keeping treatment quick and efficient.
3. Multi-factor authentication systems are also used in hospitals these days to prevent sensitive medical records from reaching the wrong hands.
4. With regard to patient safety, practice is key. Data fed into a machine or robot must be checked thoroughly with rigorous methods so that the algorithm isn't faulty from the start. As the machine is trained on data, it adapts and processes the information more efficiently. Checks and balances in diagnostic work can ensure that no unnecessary tests are run on patients, thereby reducing costs, workload, and patient turnaround times.
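The data-minimisation idea in point 2 can be sketched in code. This is a hypothetical illustration, not any hospital's actual system: the field names and the MRN value below are invented for the example.

```python
# Hypothetical sketch of data minimisation: strip direct identifiers from a
# patient record before passing it to an AI system, keeping only the Medical
# Record Number (MRN) and clinically relevant fields.

SENSITIVE_FIELDS = {"name", "phone", "address", "email"}

def minimise_record(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

patient = {
    "mrn": "MRN-048213",          # pseudonymous identifier, safe to share
    "name": "Jane Doe",           # direct identifier, must be dropped
    "phone": "+971-50-0000000",   # direct identifier, must be dropped
    "diagnosis": "renal calculus",
    "procedure": "laser lithotripsy",
}

print(minimise_record(patient))
# keeps mrn, diagnosis and procedure; drops name and phone
```

A real deployment would go further (allow-lists rather than block-lists, audit logging of every access), but the principle is the same: the AI system only ever sees the minimum it needs.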
Conclusion
In summary, AI is currently being used in many shapes and forms throughout the medical world, but we must use it with extreme caution, as privacy breaches and data security issues threaten our safety.
NEUROPLASTICITY: HOW DOES THE BRAIN REWIRE ITSELF?
Myra S
Neuroplasticity is the brain’s ability to reorganise itself by forming and modifying neural and synaptic connections. Or, as neuropsychologist Dr Celeste Campbell puts it, “it [neuroplasticity] refers to the physiological changes in the brain that happen as the result of our interactions with our environment. From the time the brain begins to develop in utero until the day we die, the connections among the cells in our brains reorganize in response to our changing needs.”
Essentially, neuroplasticity allows the nervous system to change its activity in response to intrinsic or extrinsic stimuli. It plays a significant part in supporting brain functions such as memory and learning, and is also crucial in recovery from brain injuries such as strokes. Without this fundamental process, our brains would be fixed and unchangeable, and we wouldn't be able to develop into adults.
How Neuroplasticity Works
Neuroplasticity can be broken down into three main mechanisms: synaptic plasticity, structural plasticity, and functional reorganization.
Synaptic plasticity:
This involves changes in the strength and number of synapses.
For example:
(i) Synaptic pruning - At birth, a newborn has approximately 100 billion neurons, but only a few connections (synapses) per neuron, around 2,500. In early childhood, these connections form and multiply at an extremely fast pace. By age two or three, a child will have around 15,000 synapses per neuron. The average adult, however, has only about half that number. This is because of synaptic pruning, a process in which weaker or unused connections are eliminated so that more important pathways can become stronger and more efficient.
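The scale of these numbers is easy to underestimate. The quick arithmetic below multiplies the article's per-neuron figures by the neuron count; the adult value is assumed to be half the childhood peak, per the "about half" claim above.

```python
# Rough whole-brain synapse counts, using the approximate figures above.
NEURONS = 100e9  # ~100 billion neurons at birth

for stage, synapses_per_neuron in [
    ("newborn", 2_500),
    ("age 2-3 (peak)", 15_000),
    ("adult (after pruning)", 7_500),  # assumed: half the childhood peak
]:
    total = NEURONS * synapses_per_neuron
    print(f"{stage}: ~{total:.0e} synapses in total")
```

So pruning removes on the order of a quadrillion connections between early childhood and adulthood, which is why the surviving pathways can become so much more efficient.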

This image illustrates synaptic pruning. We can see that the top right neuron has many branches (dendrites), whereas the one below it does not.
(ii) Long-term potentiation (LTP) - this is when the connection between neurons that are used frequently is strengthened. When two neurons communicate often, the electrical signals between them are able to move faster.
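The "used frequently, therefore strengthened" rule is often summarised as Hebbian learning ("cells that fire together, wire together"). The toy sketch below illustrates that rule only; it is a teaching aid, not a biological model, and the weight values and learning rate are invented for illustration.

```python
# Toy Hebbian-learning sketch, loosely analogous to LTP: a synapse is
# strengthened only when the pre- and post-synaptic neurons fire together.

def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """Strengthen the synapse when both neurons are active together."""
    if pre_active and post_active:
        weight += rate  # co-activation -> potentiation (stronger synapse)
    return weight

# Simulate ten repeated co-activations of two neurons.
w = 0.5
for _ in range(10):
    w = hebbian_update(w, pre_active=True, post_active=True)

print(round(w, 2))  # after 10 co-activations the synapse is stronger: 1.5
```

Real LTP involves receptor-level changes far richer than a single number, but the core idea carried by this rule is the same: repeated joint activity leaves the connection stronger than before.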

Structural plasticity:
Structural plasticity includes the formation of new neurons (neurogenesis) and changes in neuron structure.
(i) Neurogenesis – Neurogenesis is the formation of new neurones, mainly in the hippocampus. It begins when neural stem cells divide and move to a target area, where they then grow into fully functioning neurons and connect with existing brain circuits.
(ii) Dendritic changes - Dendrites grow new branches or extend existing ones, increasing the number of connections a neuron can make.
(iii) Axonal sprouting – Axons develop new branches to reconnect neurons or compensate for damaged pathways.
Functional Reorganization:
Functional reorganization occurs when different areas of the brain take over roles that were previously handled by damaged regions. This can be broken down into:
(i) Vicariation – Healthy brain regions adopt functions lost due to damage in another area.
(ii) Equipotentiality – If one brain area is damaged early in life, other regions can take over its functions entirely.
(iii) Diaschisis – Damage in one area temporarily affects areas that are connected, but these areas may reorganize over time to compensate for the deficit.
Neuroplasticity in learning and memory
Acquiring a new skill, whether that is playing an instrument or learning a new language, strengthens the neural pathways that are involved in that activity. Repeated practice and experience cause longterm potentiation (LTP) and increase synaptic efficiency between relevant neurons, as explored above.
For example, a study of London taxi drivers conducted around 20 years ago showed that London cabbies have a bigger hippocampus, the region of the brain responsible for learning, memory, and navigation.
This demonstrates that the brain is able to adapt by reorganizing and forming new connections. In education, this means that engaging consistently, practicing, and actively learning can improve cognitive development and reinforce memory, attention, and problem-solving skills. Teachers and students would be able to optimize their learning strategies if they understood this concept, by aligning lessons with the brain's natural ability to adapt and reorganize itself. Neuroplasticity shows that your intelligence and skills are not fixed but can grow with practice and mental stimulation.

Factors that affect neuroplasticity
Neuroplasticity is influenced by many factors, such as:
(i) Age – Neuroplasticity is believed to peak at a young age and then gradually decline as one grows older. This is due to a decrease in the number of neurons and neurotransmitters, as well as changes in neural connections. Because younger brains are believed to be more adaptable and 'plastic' than older brains, it is crucial for children to receive a good education: they are better able to retain information and learn. An adult's brain, on the other hand, has a diminished capacity to change and adapt.
(ii) Sleep – Adequate sleep is also extremely important for neuroplasticity. Sleep supports blood flow, delivering oxygen to active neurons and removing waste, which aids synaptic remodelling. Moreover, sleep deprivation can impair attention, reaction time, and brain activity, specifically in regions like the cerebellum and hippocampus. Sleep loss also disrupts cortical excitability and the brain's ability to undergo 'plastic' changes, as shown in studies using transcranial magnetic stimulation (TMS) in sleep disorders.
(iii) Exercise – Countless studies have shown that exercise boosts brain cell growth and improves learning and memory. For example, a study published in Proceedings of the National Academy of Sciences (1999) by Henriette van Praag, Gerd Kempermann, and Fred H. Gage found that rats with access to a running wheel experienced a significant increase in the number of neurons in their hippocampus. These rats also performed better in tasks focused on spatial awareness, such as navigating a maze.
Future + Ethical Implications
The future of neuroplasticity holds promise for treating neurological conditions and disorders, transforming education through personalized learning, improving a human’s cognitive abilities and processing, and so much more. Modern applications include neurorehabilitation after strokes or injuries, brain training programs to improve memory and attention, and even AI-driven neuroscience tools that can model and optimize brain function.
While the possibilities are exciting, we must also consider the ethical concerns and limits. How far do you think we should push enhancement? What unintended consequences might arise?
To conclude, the brain’s ability to change its own structure and function through thought and activity is what makes us so adaptable and resilient.
According to Norman Doidge, "The brain is a far more open system than we ever imagined, and nature has gone very far to help us perceive and take in the world around us. It has given us a brain that survives in a changing world by changing itself."
Sources:
National Institutes of Health, Cleveland Clinic Health Essentials, VeryWell Mind, Positive Psychology, ScienceDirect
IN WHAT WAYS IS AI TRANSFORMING MEDICAL PRACTICE TODAY AND WHAT ARE ITS FUTURE PROSPECTS IN HEALTHCARE?
Zaira S
From diagnosing cancer to predicting heart attacks, artificial intelligence is becoming a prominent force in modern healthcare, reshaping how doctors treat patients. Yet as these systems continue to advance, questions around bias, unemployment and over-reliance are becoming increasingly harder to ignore.



One example of AI being used in healthcare is medical imaging. This is evident in Google DeepMind’s work analysing eye scans for signs of conditions such as diabetic retinopathy (high blood sugar damaging blood vessels in the retina) and macular degeneration (losing the ability to see fine detail clearly). A 2025 study published in JAMA Network Open tested an algorithm for detecting diabetic retinopathy in Indian hospitals. The system missed almost no cases, correctly identifying nearly every patient who needed urgent care. By rapidly analysing thousands of scans with consistent accuracy, AI can expand early diagnosis in areas where trained specialists are scarce, showing how technology can make healthcare faster and more accessible without replacing doctors.
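The ‘missed cases’ figure corresponds to what statisticians call a screen’s sensitivity. As a rough illustration (the counts below are hypothetical, not taken from the JAMA Network Open study), sensitivity can be computed like this:

```python
# Illustrative only: the counts here are hypothetical, not the study's data.
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of truly affected patients that the screen correctly flags."""
    return true_pos / (true_pos + false_neg)

# Suppose 200 scans show referable retinopathy and the algorithm misses just 1:
print(sensitivity(true_pos=199, false_neg=1))   # 0.995, i.e. "nearly 0 missed cases"
```

A screening tool deployed where specialists are scarce needs sensitivity close to 1, since a missed case may never get a second look.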
Another major area in which AI has proven highly effective is the early detection of cancerous tumours. A 2023 study by Harvard Medical School, published in Nature Medicine, introduced an AI model named Sybil that could predict a patient’s risk of developing lung cancer up to six years in advance from a single scan. The system identifies faint patterns and tissue changes invisible to the human eye, making it an incredibly powerful tool in medicine. Typically, lung cancer is diagnosed at a late stage, when treatments such as chemotherapy are no longer as effective. Although still in the research stage and not yet commercially available, Sybil could potentially save countless lives by enabling earlier screening and detection.
A more common use of AI in medicine is tailoring and prescribing drugs to patients. Machine learning algorithms can now analyse extensive datasets of chemical compounds and genetic profiles far faster than traditional lab methods. For example, in 2023, researchers used AI to predict molecules that could effectively target a rare form of leukaemia, cutting the drug-development timeline from years to months. Furthermore, models are being used to personalise prescriptions, suggesting dosages and treatment procedures based on a patient’s genetics, age and lifestyle. This minimises side effects and improves overall efficacy. It exemplifies how artificial intelligence is shifting medicine from a mainstream, one-size-fits-all approach to individualised, personal treatment.
On the more psychological side of healthcare, chatbots such as Wysa and Youper have been designed to offer emotional support and help patients manage anxiety in stressful situations, such as before treatments or operations. These models use evidence-based techniques drawn from Cognitive Behavioural Therapy, aiming to engage users in therapeutic conversations. In one study, patients with high levels of anxiety showed a decrease in symptoms of nearly 31%.
However, these models are not just important for patients undergoing treatment, but also for professionals such as doctors and nurses. A questionnaire completed in 2024 revealed that, of 527 staff members who completed at least two sessions with Wysa, 80% reported feeling more in control of their thoughts and emotions, empowering them to manage stress proficiently.

On the other hand, there are considerable limitations of AI that must be taken into account, such as cost and accessibility. Developing and maintaining systems can be exorbitant, costing $1,000,000 or more. This creates a barrier, especially in low-resource areas, preventing further exploration and improvement. For example, Dr. Gao Yujia suggested that ‘non-tertiary and academic research hospitals, such as district hospitals and primary care clinics, inherently have less funding and budget for AI projects.’ This means that smaller hospitals with fewer financial resources struggle to adopt AI models, even when the technology is available, because they cannot afford the costs of implementation. Another expert, Dr. Mohaymen Abdelghany, a UAE-based specialist, stated that ‘integrating new technologies like AI and robotics requires substantial investments in infrastructure, staff training, and interoperability solutions.’ This gives us an insight into an often overlooked part of AI in healthcare: it is not just about buying software but about the cost of upgrading everything around it, from systems to staff, so it works safely and reliably.
In the future, models can be developed further through increasing investment and market growth. According to Statista, the global artificial intelligence market in medicine is predicted to grow from $11 billion in 2021 to over $187 billion by 2030, a growth rate of roughly 37% every year. Furthermore, the WHO notes that over half of the world’s population lacks access to essential health services. AI can help bridge this gap by offering virtual consultations in rural or underdeveloped regions.
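The quoted figures are, in fact, internally consistent: compounding $11 billion at roughly 37% per year over the nine years from 2021 to 2030 lands very close to $187 billion. A quick sanity check:

```python
# Sanity-check the Statista projection: $11bn (2021) compounding at ~37%/year.
start_bn, rate, years = 11.0, 0.37, 9       # 2021 -> 2030 spans 9 compounding steps
projected_bn = start_bn * (1 + rate) ** years
print(round(projected_bn))                   # 187
```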
To conclude, artificial intelligence is undeniably transforming modern medical practice, achieving remarkable feats in research, diagnosis, and patient and staff care. From detecting lung cancer years in advance and screening for diabetic retinopathy to offering emotional support in the form of therapy, it is shaping a better way of delivering healthcare.
However, despite these promising accomplishments, one must acknowledge the weaknesses of these models and the barriers to adoption – high costs, the need for trained staff and difficult maintenance – that are preventing institutions from unlocking AI’s full potential.
Ultimately, I believe that AI should be used as a supportive tool that aids research and treatment, not as a replacement for doctors, who must also avoid becoming over-reliant on it. This will ensure better outcomes for patients across all levels of care, holding immense promise for the future of healthcare.
*During the process of writing this article I used Coursera, Smart Healthcare Solutions and BMC Medical Education. I was able to obtain statistics from the WHO and Statista, as well as NVIDIA.
HOW DOES THE DEVELOPMENT OF DESIGNER BABIES CHALLENGE THE BALANCE BETWEEN MEDICAL PROGRESS AND ETHICS?
Zara M
If it were possible to prevent a child from inheriting a hereditary disorder, would it be morally acceptable, or would it be ‘playing God’? The development of genetic engineering has made it possible to alter embryos before birth, protecting thousands of lives which would otherwise have been burdened by long-term genetic diseases. However, the pursuit of medical development has created moral controversy within different demographics, in which many have questioned whether this is ‘playing God’ or whether genetic engineering may lead to people creating the ‘perfect’ baby. As a result, countries have established different legislation, reflecting the views of each country’s domestic population, in an attempt to balance the scientific benefits and ethical concerns.
Designer babies are human embryos whose genome has been selected or altered before birth, influencing the ‘traits of the resulting children’. Designer babies are created through in-vitro fertilisation (IVF) using either preimplantation genetic diagnosis (PGD) or CRISPR technology. (Pang, R, Ho, P, 2016)
The PGD technique is used to detect and identify genetic defects before implantation, so that only embryos without genetic disorders are implanted. The CRISPR-Cas9 technique involves modifying and correcting genes prior to implantation to prevent genetic disorders caused by mutations. Both of these techniques mitigate the risk of the child being burdened with a lifelong genetic disorder. (New Hope Fertility Clinic, 2025)
1 in 10 people are affected by genetic conditions that impact the quality of their lives. (Sunny, 2020) Due to the advancement of medical technology, scientists have used the various techniques of creating designer babies to prevent hereditary, debilitating diseases such as cystic fibrosis and Huntington’s disease. By removing the faulty gene from an embryo’s genome, this protects children from a lifetime of suffering, yet it also raises moral questions about how far this science should go before disrupting the natural order of the world. (Sunny, 2020)
Designer babies are not only protected from genetic disorders but could also be made more resistant to diseases such as cancer. Arguably, gene editing creates a new pathway for future generations, in which people may become naturally immune to such debilitating diseases, putting an end to this kind of suffering. For example, in 2025 BBC News reported that eight babies had been born through IVF using DNA from three parents, each family carrying an incurable mitochondrial disease. These babies were born free of the disease, demonstrating the significant medical impact such techniques can have on a person’s life. (Gallagher, 2025)
However, whilst there are numerous medical benefits of creating designer babies, there are many ethical, moral, and philosophical concerns regarding their creation. Some believe that the introduction of gene editing, IVF and PGD may lead to these techniques being used for ‘genetic enhancement’ and not just to treat genetic diseases. This creates ethical controversy among many demographics who believe that this is ‘playing God’ by choosing and changing traits of a child, such as eye colour, hair, and skin tone. (Pang, R, Ho, P, 2016) This poses the question of whether it is morally acceptable to alter human embryos not only to prevent disease but possibly to enhance the traits of these embryos.
By choosing their child’s traits, parents are essentially dictating the characteristics of the future generation. This raises numerous social concerns regarding the psychological impact on children, as it may create more pressure on children to meet parental expectations of being the ‘perfect’ baby that they initially designed. Moreover, the choice of an embryo’s traits may not always be in the hands of a loving, caring parent but instead in the hands of ‘racist, eugenicist or genocidal governments of the future’, transforming this life-saving technology into a dangerous tool if in the wrong hands. This highlights how the advancement of designer babies challenges the balance between ethical safeguarding and medical development. (Anderson, 2018)
The ethical concerns of various demographics worldwide have influenced the regulations countries have set regarding gene editing to create designer babies. China has banned any genetic engineering that involves altering the DNA of early-stage embryos, even to prevent hereditary disorders. (Wei, 2024) However, the UK has taken a more lenient approach with the Human Fertilisation and Embryology Act 1990 (Great Britain, 1990), which permits gene editing of embryos to prevent the inheritance of genetic disorders only for human embryos which have not yet reached the ‘two cell zygote stage’. Finally, whilst there is no law prohibiting the creation of designer babies in the US, federal law forbids the use of federal funds for gene editing and human genetic engineering, effectively preventing clinical trials of germline gene therapy. (Global Gene Editing Regulation Tracker, 2020)
In conclusion, whilst there are many potential medical benefits in creating designer babies, such as the prevention of various life-threatening genetic disorders, significant ethical controversy surrounds this topic within different demographics, challenging medical advancement. Countries have therefore enforced differing levels of regulation on human genetic engineering, each striking its own balance between the pursuit of scientific development and the acknowledgement of ethical concerns. As science continues to challenge philosophy and morality, society may gradually oppose more and more developments, posing the question of where the line should be drawn between scientific advancement and ethical responsibility.
Bibliography:
Anderson, R. (2018) Just Because We Can Create Genetically Modified Babies Doesn’t Mean We Should. Available at: https://www.heritage.org/marriage-and-family/commentary/just-because-we-can-create-genetically-modified-babies-doesnt-mean [Accessed: 17 October 2025]
Sunny, S. (2020) Designer Babies: The Key to a Better Developed World. Available at: https://www.seisen.com/student-life/seisen-post/features/~board/seisen-post/post/designer-babies-the-key-to-a-better-developed-world [Accessed: 17 October 2025]
Gallagher, J. (2025) Babies made using three people’s DNA are born free of hereditary disease. Available at: https://www.bbc.com/news/articles/cn8179z199vo [Accessed: 17 October 2025]
Global Gene Editing Regulation Tracker. (2020) United States: Germline/Embryonic. Available at: https://crispr-gene-editing-regs-tracker.geneticliteracyproject.org/united-states-embryonic-germline-gene-editing/ [Accessed: 17 October 2025]
Great Britain (1990) Human Fertilisation and Embryology Act 1990, Chapter 3. London: The Stationery Office.
New Hope Fertility Clinic (2025) ‘Designer Babies’. Available at: https://www.newhopefertility.com/designer-babies/ [Accessed: 17 October 2025]
Pang, R, Ho, PC (2016) ‘Designer Babies’, Obstetrics, Gynaecology & Reproductive Medicine, 26(2), pp. 59-60.
Wei, A (2024) China bans clinical research in germline genome editing as ‘irresponsible’. Available at: https://www.scmp.com/news/china/politics/article/3270285/china-bans-clinical-research-germline-genome-editing-irresponsible [Accessed: 17 October 2025]
HOW CRISPR-CAS9 GENE EDITING CAN HELP ELIMINATE GENETICALLY INHERITED DISEASES AND ITS POSSIBLE ETHICAL IMPLICATIONS
Layla A
INTRODUCTION
Since the discovery of DNA, we have known that our genes, the things that define us, are predetermined. From eye colour to blood type, our genes have made us; and for years, they were unchangeable: you had what you were born with for the rest of your life, whether you liked it or not. However, over the past decade, a new technology – CRISPR-Cas9 – has given us the ability to change what we were born with, for good. But with every good thing comes risk. This article will explore the science behind CRISPR-Cas9 gene editing, as well as its use in the world today and the ethical challenges faced by this cutting-edge technology.
WHAT IS CRISPR-Cas9?
CRISPR-Cas9 is a tool used to edit genes. Research on its use in gene editing was first published in 2012 by Emmanuelle Charpentier and Jennifer Doudna. Since then, many breakthroughs across fields such as medicine, agriculture and genetics have been made with the help of CRISPR-Cas9.
HOW DOES IT WORK?
CRISPR stands for Clustered Regularly Interspaced Short Palindromic Repeats; these repeats were first discovered in E. coli bacteria by Japanese scientist Yoshizumi Ishino. Later research showed that when E. coli is infected with a virus, it stores small sections of the virus’ genetic information in portions of DNA called ‘spacers’; these are used to mount a faster immune response if the bacterium is infected by the same virus again. These spacer sections of DNA are separated by the CRISPR repeats, which help organise the spacers so they can be easily recognised by the cell. In 2012, scientists were able to recreate this system in a lab, where the spacer DNA contained the base sequence causing a genetic issue within an organism that they wanted to fix. This system allows fast and effective gene editing, and it consists of two main components that carry out this functionality:
Cas-9 protein – first discovered in E. coli bacteria, this protein cuts through DNA strands, exposing bases and allowing them to be removed, added or replaced.
Guide RNA – this is a strand of bases programmed to recognise certain sequences of DNA bases that scientists want to edit, e.g. a genetic sequence causing sickle cell disease. The ‘spacer’ genetic information in the CRISPR model helps to synthesise the guide RNA to match the desired DNA sequence.

The Cas-9 protein and guide RNA are introduced into target cells containing the faulty DNA sequence. They open up the double helix, and the guide RNA pairs with the template strand of DNA until it reaches the faulty sequence it was designed to recognise. Here, the Cas-9 protein cuts the DNA, allowing scientists to make edits to the sequence.
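The targeting step above is, at heart, a pattern-matching problem: the guide RNA pairs with a ~20-base DNA sequence, and Cas9 cuts only if the site is immediately followed by a short ‘PAM’ motif (NGG for Cas9, where N is any base). A minimal sketch of that search, using an invented DNA sequence rather than a real disease locus:

```python
# Toy sketch of guide-RNA targeting. The sequences below are invented for
# illustration and are not from any real gene.

def find_target(dna: str, guide: str) -> int:
    """Return the index of the guide-matching site followed by an NGG PAM, or -1."""
    for i in range(len(dna) - len(guide) - 2):
        if dna[i:i + len(guide)] == guide:
            pam = dna[i + len(guide): i + len(guide) + 3]
            if pam[1:] == "GG":          # 'N' in NGG can be any base
                return i
    return -1

dna   = "TTACGGATCCGATTACAGGCTTAAGGTGGATT"
guide = "ATCCGATTACAGGCTTAAGG"
print(find_target(dna, guide))   # 6: guide matches here, followed by PAM "TGG"
```

Real guide design also has to avoid ‘off-target’ sites that differ by only a base or two, which is one reason unintended edits remain a safety concern.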
A REAL LIFE BREAKTHROUGH
The world’s first patient to receive personalised CRISPR therapy was an infant in Philadelphia, Pennsylvania in February 2025. The young boy, KJ, was born with a rare genetic disease which is typically treated with a liver transplant, but due to his age, he was not eligible for the surgery. Instead, he received several doses of personalised CRISPR therapy, and is ‘growing well and thriving’, according to Penn Today. KJ’s case has given hope to thousands struggling with genetic disorders, and it is projected to “utterly transform the way we approach medicine”, according to Kiran Musunuru, a professor at Penn Medicine.

THE FUTURE OF CRISPR-Cas9
Since its development, CRISPR-Cas9 has been researched and is beginning to be used in clinical trials for different diseases and conditions:
Urinary tract infections caused by bacteria – 800 participants were enrolled in a Locus Biosciences trial to eliminate a bacterium causing these infections, meaning CRISPR could also be used for medical issues other than genetic disorders in the future.
Type 1 Diabetes – CRISPR has been applied to genes involved in the immune system in an attempt to modify pancreatic cells and increase insulin production.
Cancer – CRISPR is being used to enhance immune cells’ ability to recognise cancer cells, making the body’s attack against cancerous cells more efficient and responsive.
Penn Medicine’s Kiran Musunuru and Rebecca Ahrens-Nicklas holding KJ post infusion
ETHICAL RISKS AND CHALLENGES
With the rapid development of this technology, multiple ethical questions arise about a future in which CRISPR technology becomes more advanced and mainstream. One of the most pressing questions, which has already been widely discussed, concerns the editing of gametes and embryos. This is called germline genome editing, and it means that if a genetic change is made, it will be passed on to all future generations descending from the edited embryo or gamete. This raises serious ethical concerns; future generations cannot consent to changes being made to their genomes – so should we refrain from making these edits, even if it saves lives?
Additionally, there are some limitations to CRISPR-Cas9. For example, the Cas-9 protein makes a double-strand break in the DNA double helix, which carries a high risk of genetic mutations occurring – specifically cancer-related ones.
Looking to the future, ‘designer babies’ may also become a pressing ethical issue – where do we draw the line when making genetic changes to our children?
CONCLUSION
In summary, it is clear that CRISPR-Cas9 technology has already had a huge influence on the field of genetics over the past decade, and it will continue to reshape medicine and patient care in the coming years. Despite the ethical concerns it raises, I believe that the positives of this therapy far outweigh them; over 300 million people worldwide suffer from genetically inherited diseases, and with the possibility of gene therapy being used to treat other conditions such as cancer and diabetes, even more people will have the chance to live healthy lives as this state-of-the-art technology advances.
WHAT ARE THE DIFFICULTIES WITH STEM CELLS BEING USED FOR HEART DISEASE AND WHAT IS THE POTENTIAL FOR THE FUTURE?
Alexandra G
Since the introduction of the concept of stem cells in the late 19th century, scientists and doctors have identified countless possibilities for what could be done with them, ranging from biomolecular research to medical cures that were not previously possible. However, as we dive into this fascinating topic, it is important to establish a clear understanding of what stem cells are, and why there has been so much interest, yet so many barriers, over the past couple of centuries. In a nutshell, a stem cell is an unspecialised cell that can become specialised, meaning it can transform into another type of cell, such as a muscle, blood or brain cell. The most versatile type is the embryonic stem cell, which can only be derived from a 3–5-day-old embryo (which immediately makes them difficult to retrieve). The other type of natural stem cell found in the human body is the adult stem cell, which can be found in small amounts in tissues such as bone marrow. However, adult stem cells do not have as wide a range of differentiation potential as embryonic stem cells and can only divide into a certain range of cells within a specific tissue or lineage.
It is also important to consider why cardiac diseases have a reputation for being difficult to cure in the medical world. First of all, the adult heart cannot regenerate its muscle cells after damage, making lost function difficult to restore. As well as this, the heart is a vital organ with a very detailed and complicated structure, meaning that any surgery performed on it carries great risk and requires great precaution. Furthermore, the majority of cardiac conditions are almost impossible to fully cure without the patient making certain lifestyle changes, alongside lifelong medication or treatment. With all of these factors combined, any problems relating to the heart are often perceived as a lifelong burden, whether it is managing the intense medical treatments needed or the long-term impacts after any procedure.
From the more scientific side, there are other challenges that come with operating on the heart, such as transplanting stem cells into existing marrow or tissue. It is also crucial to note that research on embryonic stem cells is widely considered unethical and conflicts with many moral and religious beliefs, which has led to it being banned in many countries around the world. Naturally, this is a very complex issue given the potential such research provides, and some change has been witnessed; however, it is still subject to strict ethical guidelines. From a more biological point of view, it is also important to look at the medical risks that come with stem cell transplants and how a patient’s body responds to them. An example is graft-versus-host disease (GvHD), which happens when transplanted cells start to attack the other cells in the body. Other potential side effects of such a procedure include a reduced number of blood cells, which could lead to anaemia (a shortage of red blood cells), and the patient would also be likely to experience side effects from preparations before a stem cell transplant, such as chemotherapy or radiotherapy, which can cause hair loss and infertility.
From the findings mentioned above, it is very clear that there are many complications with such procedures. However, some developments made in this field have advanced the opportunity for such treatments. An example is the introduction of induced pluripotent stem cells (iPSCs). These are derived by reprogramming existing adult somatic cells; when compared to ESCs, some studies have shown only a few differences in genetic makeup. This discovery also opened a window for future research, as it eliminates the ethical and moral issues associated with the use of ESCs and could be available to a wider cohort of patients. These cells have shown potential for use in cardiac disease, as numerous efficient differentiation methods have been developed to replicate various cardiac tissues, such as cardiomyocytes, the specialised muscle cells responsible for the continuous contraction that pumps blood around the body. These can be used for
disease modeling and cardiac cell therapies, something which could greatly improve the effectiveness of medical operations on the heart.
In conclusion, despite the many obstacles facing potential stem cell therapies, it is clear that there is a great deal of promise in the various ways scientists are finding to overcome moral challenges. There is much ongoing research into how these interesting cells can be used for something as complex as cardiac disease, whether it is developing iPSC differentiation (meaning the cells could develop into other types of cardiac cells) or exploring gene-editing technologies such as CRISPR. It is evident that stem cell research and therapies will only become more prominent in future medicine, and their potential will certainly be explored in a field as complicated as cardiac disease. With more studies and tests being run, and more investigations and trials being carried out with many different treatments, we are bound to see major breakthroughs that will have a significant impact on how cardiac diseases are treated in the coming years.
Bibliography:
-https://beikecelltherapy.com/stem-cell-treatment-timeline/
-https://health.clevelandclinic.org/is-heart-disease-curable
-https://www.nhs.uk/tests-and-treatments/stem-cell-transplant/
-https://pmc.ncbi.nlm.nih.gov/articles/PMC5951134/
-https://pmc.ncbi.nlm.nih.gov/articles/PMC8522114/#Sec2
INVESTIGATING FAILURE IN THE BJÖRK–SHILEY CONVEXO–CONCAVE HEART VALVE: A CASE STUDY
Sheraya A
Introduction
The Björk-Shiley Convexo-Concave heart valves, developed by Dr Viking Björk and Shiley, Inc., were implanted in human hearts from 1979 to 1986 [1] to help patients' hearts function correctly when the patient’s own valve was diseased. However, of the 86,000 implanted valves, 663 had catastrophic failure from 1978 to 2012 [2].
The heart valve had two models, the 60-degree valve and the 70-degree valve, each of which opened to that respective angle. The 60-degree valve was approved by the United States Food and Drug Administration (FDA) in 1979. However, the 70-degree valve was never approved in the United States because Shiley withdrew their submission after testing revealed welding problems resulting in valve failures [3]. Despite this, the FDA provided an export license to Shiley, so the company’s distribution efforts were redirected to markets abroad [4].
The Björk-Shiley company was one of the leading providers of heart valves in the 1970s, and therefore one of the most trusted. Hence, there was justified outrage when it was discovered that a series of manufacturing issues and corporate scandals resulted in the valves having structural issues that led to the near-instant death of a proportion of their users.
The structure and physical failure

Fig 1. Photograph of Björk-Shiley 60° convexo-concave heart valve [5].
The physical structure was as follows:
A Teflon collar [6] used to suture it to the heart (as labelled ‘suture ring’ in Fig 1.)
A tilting disc that opened and shut with every heartbeat (it had a concave shape to help blood flow smoothly)
Struts to hold the disc in place, two on the inflow side (‘inlet strut’ in Fig 1) and two on the outflow side (‘outlet strut’ in Fig 1)
The underlying source of the unit’s physical failure is attributed to a flaw in the outlet strut, which could fracture, causing the disc to become loose and fall out, thus flooding the heart with blood. This sudden heart failure, in many cases, led to instant death.
The failure rate for the large-size 70-degree valve was more than seven times higher than for the 60-degree or later versions of the 70-degree, as shown by a study by Björk and two other surgeons. This study concluded that these large-size 70-degree valves carry a “major risk for mechanical failure”, to the extent that surgeons recommended removal despite the 1 in 20 death rate for replacement surgery, since the chance of a fracture was so high (1 in 8 over a seven-year period) [4].
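That recommendation amounts to comparing two expected risks. The sketch below assumes, purely for illustration, that around two-thirds of strut fractures proved fatal; that fraction is a hypothetical figure chosen to make the comparison concrete, not a number taken from the study:

```python
# Compare expected mortality: keep the large 70-degree valve vs. replace it.
# The fatality-given-fracture rate is an ASSUMED figure for illustration only.
fracture_risk_7yr   = 1 / 8    # chance of strut fracture over seven years [4]
death_if_fracture   = 2 / 3    # hypothetical fraction of fractures that are fatal
surgical_death_rate = 1 / 20   # quoted mortality of replacement surgery

risk_keep    = fracture_risk_7yr * death_if_fracture   # ~0.083
risk_replace = surgical_death_rate                     # 0.050
print(risk_keep > risk_replace)   # True: under these assumptions, replace
```

Under any assumption where more than 40% of fractures are fatal, the one-off surgical risk is the smaller of the two, which is why removal was recommended despite its own danger.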
The ethical failure
Firstly, the company may have known about the fractures far earlier than publicly admitted. However, instead of warning patients about the design issues, or halting production and distribution, the company continued to sell the valves whilst attempting to fix the design internally [7]. This hushed attempt to rectify their lack of initial testing had severe moral and ethical implications for the lives involved, particularly due to them continuing to sell the valves despite awareness of their flaws.
In addition, there is evidence of Shiley, Inc. advising Dr Björk against publishing data around strut fractures, and of similar events frequently occurring around withholding information about failures. Additionally, as the evidence of the failures increased, researchers requesting valves to study were refused. Similarly, when the FDA requested valves for testing, Shiley purposefully sent manipulated valves rather than a random sample, as had been requested. Further requests for more valves were rejected. [7] This is extremely worrying, as these tests around the fractures could have been used to alter the design and attempt to preserve the lives of future users. However, these lives were endangered, likely in favour of maintaining company reputation.
One of the most pertinent failures that extends beyond Shiley, Inc. is that after failing to get FDA approval for the 70-degree valve, Shiley opted for international distribution. [4] FDA approval maintains incredibly high standards of regulation for medical devices, and whilst those standards did not apply in the international markets that Shiley chose to target instead, it can be viewed as highly unethical that the company redirected distribution rather than improving the design to meet these rightfully high standards.
It is important to note that the failure rate was low (below 1% for the 60-degree model), and that the benefits of the valve could offset the risks associated. However, of the 86,000 implanted valves, 663 had catastrophic failure from 1978 to 2012 [2].
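The ‘below 1%’ figure follows directly from the counts quoted above:

```python
# Overall catastrophic-failure fraction from the cited counts (1978-2012) [2].
failures, implanted = 663, 86_000
print(f"{failures / implanted:.2%}")   # 0.77% -> consistent with "below 1%"
```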
What can be learned from this study
Firstly, incomplete testing for compliance of biomedical devices can have fatal impacts on a global scale. Trying to retrospectively correct this after the release of the product is not only extremely difficult but also highly unethical, compromising the lives of thousands. Redirecting distribution to countries with looser regulation standards is not an appropriate solution either, even if it may maximise profit. Patient lives should always be prioritised over profit: the purpose of a biomedical engineering company is to save patient lives, not to maximise monetary gain.
Secondly, it highlighted the importance of surveillance of products after they are released to market, to ensure that any problems that were not identified in testing are reported immediately.
Additionally, it introduced the possibility of monitoring the data that biomedical companies supply to official regulatory bodies, to minimise the risk of unethical practice.
Conclusion
The failure of the Björk–Shiley Convexo–Concave heart valve is a prime example of what can happen when profit and company reputation are valued over patient wellbeing. Whilst the failure percentage may be low, the effects were, and still are, deeply felt among the users of the valves; the perpetual fear that their valve may break can be a constant emotional burden.
Sources
1. Actis Dato GM, Centofanti P, Actis Dato A Jr, et al. Björk-Shiley convexo-concave valve: is a prophylactic re-replacement justified? J Cardiovasc Surg (Torino). 1999;40(3):343-346.
2. Harrison DC, Ibrahim MA, Weyman AE, Kuller LH, Blot WJ, Miller DE. The Björk-Shiley convexo-concave heart valve experience from the perspective of the supervisory panel. Am J Cardiol. 2013;112(12):1921-1931. https://doi.org/10.1016/j.amjcard.2013.08.020
3. Corrigan v. Bjork Shiley Corp. No. B015387. Court of Appeals of California, Second Appellate District, Division Four. June 9, 1986.
4. Frantz D. For export: a double standard? Heart valves the FDA banned as risky were legally sold abroad. Los Angeles Times. December 12, 1989. Accessed November 21, 2025.
5. Walker AM, Funch DP, Sulsky SI, Dreyer NA. Patient factors associated with strut fracture in Björk-Shiley 60° convexo-concave heart valves. Circulation. 1995;92(11):3235-3239. doi:10.1161/01.CIR.92.11.3235.
6. Fielder JH. Ethical issues in biomedical engineering: the Bjork-Shiley heart valve. IEEE Eng Med Biol Mag. 1991;10(1):76-78. doi:10.1109/51.70044.
7. Fielder JH. Defects and Deceptions – The Björk-Shiley Heart Valve. Villanova University, Department of Philosophy; IEEE Technology & Society Magazine, Fall 1995. Published September 2022. Accessed November 21, 2025.
HOW DOES THE LEVEL OF SPINAL CORD INJURIES DETERMINE
WHETHER PARALYSIS IS COMPLETE OR INCOMPLETE?
Serin C
Introduction
A spinal cord injury (SCI) refers to damage to the spinal cord, the bundle of nerves and nerve fibres responsible for transmitting and receiving signals between the brain and the rest of the body. Because the spinal cord is the primary pathway for nerve signals within the body, any damage to it can result in temporary or permanent changes in strength, feeling, movement, and autonomic functions below the site of injury. SCIs are generally classified as either complete or incomplete: in complete injuries, there is no nerve communication below the injury, resulting in a complete loss of movement and sensation below the injury site; in incomplete injuries, the spinal cord can still transmit some messages to or from the brain, allowing partial feeling and control of movement to remain below the site of injury. The extent and type of paralysis depend heavily on both the location and severity of the injury. Understanding how the level of a spinal cord injury determines whether paralysis will be complete or incomplete is crucial for improving diagnosis and future treatments.
The Structure of the Spinal Cord

The spinal cord is a soft, cylindrical column of nerve tissue that extends from the base of the brain down to the lower back through a canal in the centre of the vertebrae. Similar to the brain, the spinal cord has three layers of tissue (pia mater, arachnoid mater and dura mater) for protection, with cerebrospinal fluid (CSF) surrounding it to act as a cushion against shock or injury. The spinal cord can be split into five regions: cervical, thoracic, lumbar, sacral and coccygeal. The nerves of these five regions have different functions.
‘Cervical spinal nerves (C1 to C8) in the neck control signals to the back of the head, the neck, shoulders, arms, hands and the diaphragm.
Thoracic spinal nerves (T1 to T12) in the upper mid-back control signals to the chest muscles, some muscles of the back, and many organ systems.
Lumbar spinal nerves (known as L1 to L5) in the lower mid-back control signals to the lower parts of the abdomen and the back, the buttocks, some parts of the external genital organs, and parts of the legs.
Sacral spinal nerves (known as S1 to S5) in the lower back control signals to the thighs and lower parts of the legs, the feet, most of the external genital organs, and the area around the anus.' (Information obtained from National Institute of Neurological Disorders and Stroke, 2022)
The coccygeal nerve controls sensory feedback and motor function for the skin and muscles around the tailbone, otherwise known as the coccyx.
Unlike the brain, in the spinal cord the grey matter (neuronal cell bodies responsible for local processing and reflex actions) is surrounded by white matter (bundles of myelinated axons that carry signals up and down the body). Tracts in the spinal cord carry messages between the brain and the body: motor (descending) tracts carry signals from the brain to control muscle movement, whilst sensory (ascending) tracts carry signals from the body to the brain relating to sensations such as pain, cold, heat and the position of the arms and legs. Each segment of the spinal cord contains a pair of spinal nerves that transmit impulses between the CNS and areas such as specific muscles or organs. Due to this precise organisation, the location of an SCI is crucial to the level of injury someone may experience. Injuries that happen higher up the spinal cord (such as in the cervical region) can disrupt a larger number of neural pathways, whilst injuries that happen lower down (such as in the lumbar region) have more localised effects.
Pictures used: https://www.christopherreeve.org/todays-care/living-with-paralysis/health/how-the-spinal-cord-works/ and https://my.clevelandclinic.org/health/body/21946-spinal-cord
How the Level of Injury Determines Paralysis
The level at which the spinal cord is damaged directly affects how much of the body experiences paralysis. Cervical spinal cord injuries cause paralysis or weakness in both arms and legs, typically resulting in quadriplegia: the arms, hands, neck, shoulders and diaphragm are affected. In addition, all regions of the body below the level of injury may be affected, and this type of injury is usually accompanied by loss of physical sensation, respiratory issues, an inability to regulate body temperature, and bowel dysfunction. Thoracic spinal cord injuries, meanwhile, can cause paralysis or weakness of the legs, resulting in paraplegia along with loss of physical sensation and bowel and bladder dysfunction; unlike cervical SCIs, the arms and hands are in most cases unaffected. Similarly, lumbar-level injuries can also result in paraplegia with the same symptoms of sensation loss and bowel and bladder dysfunction, though lumbar injuries specifically affect the lower body, whilst thoracic SCIs can also affect the upper chest and abdominal muscles. Finally, SCIs at the sacral level primarily cause loss of bowel and bladder function and can cause weakness or paralysis of the hips and legs. It is important to note that the cause of the injury has little bearing on the outcome: the mechanism of injury provides little insight into how severe the resulting damage will be.
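The level-to-outcome relationship described above can be summarised as a simple lookup. The sketch below is a deliberate simplification for illustration (the names and categories are ours, not clinical terminology); real outcomes also depend on severity and completeness, not just level:

```python
# Simplified summary of typical outcomes by injury level, as described
# in the text above. Real outcomes vary with injury severity and
# completeness, not just the level of the lesion.
TYPICAL_OUTCOME = {
    "cervical": "quadriplegia: arms, hands, neck, shoulders and diaphragm "
                "affected; possible respiratory and autonomic problems",
    "thoracic": "paraplegia: legs affected; arms and hands usually spared",
    "lumbar":   "paraplegia: lower body specifically affected",
    "sacral":   "bowel/bladder dysfunction; possible hip and leg weakness",
}

def typical_outcome(level: str) -> str:
    """Return the broad outcome associated with an injury level."""
    return TYPICAL_OUTCOME.get(level.lower(), "unknown level")

print(typical_outcome("thoracic"))
```

Note how the higher (cervical) entry affects the most body regions, mirroring the point that injuries higher up the cord disrupt more neural pathways.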
What happens after a spinal cord injury?
According to the Christopher & Dana Reeve Foundation (2025), the following steps take place after a spinal cord injury:
1. Immune cells move to the site of injury, causing additional harm to some neurons and potentially killing other cells that survived the initial trauma.
2. The loss of oligodendrocytes leads to axons losing their myelination, severely reducing the transmission rate of action potentials and leaving existing connections inefficient. The neuronal information highway is further impaired as numerous axons are cut off, breaking links between the brain and muscles and between the sensory system and the nervous system.
3. Within a few weeks of the initial injury, the damaged tissue has been cleared away by microglia, leaving a fluid-filled cavity encircled by a glial scar. Molecules that prevent the regrowth of cut axons are now present at this location, and the cavity (sometimes referred to as a syrinx) acts as a barrier to the reconnection of the two segments of the injured spinal cord.
The Difference between Complete and Incomplete Paralysis
To put it simply, complete spinal cord injuries involve a total loss of sensory and motor function below the injury, whilst incomplete spinal cord injuries retain a partial ability of the spinal cord to relay messages between the brain and body, meaning some movement and sensation below the level of injury remain possible. There are six main types of incomplete SCIs:
Anterior Cord Syndrome: affects front of spinal cord and characterised by motor dysfunction, dissociated sensations, usually a result of compression injuries.
Brown-Séquard Syndrome: rare; a lesion on the spinal cord resulting in loss of some motor and sensory function, commonly affecting the body asymmetrically; caused by cord hemisection (damage to one half of the spinal cord)
Cauda Equina Syndrome: affects nerves around lumbar level of spinal cord and causes loss of sensation and muscle weakness
Central Cord Syndrome: caused by injuries that impact the centre of the cervical spinal cord, often resulting in loss of sensation
Conus Medullaris Syndrome: affects sacral cord and lumbar nerve roots, affects control of excretory functions, lower limb reflexes with similar symptoms to cauda equina
Posterior Cord Syndrome: caused by damage to rear of spinal cord, usually results in poor coordination skills but usually does not highly impact movement ability and posture
The American Spinal Injury Association (ASIA) publishes an Impairment Scale that medical professionals use to assess the severity of damage to the spinal cord, ranging from 'A' to 'E', with 'A' denoting a complete injury (the most severe) and 'E' denoting normal function.
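The grades can be sketched as a small lookup, paraphrasing the standard ASIA Impairment Scale definitions (the wording and helper function below are ours, for illustration only, and are not a clinical tool):

```python
# ASIA Impairment Scale grades, paraphrased from the standard clinical
# definitions: 'A' is the most severe outcome, 'E' is normal function.
ASIA_SCALE = {
    "A": "Complete: no sensory or motor function preserved below the injury",
    "B": "Sensory incomplete: sensation but no motor function preserved",
    "C": "Motor incomplete: some motor function, most key muscles weak",
    "D": "Motor incomplete: most key muscles retain useful strength",
    "E": "Normal: sensory and motor function intact",
}

def is_complete(grade: str) -> bool:
    """Only grade 'A' denotes a complete injury."""
    return grade.upper() == "A"

print(is_complete("A"), is_complete("D"))  # prints "True False"
```

This makes the complete/incomplete distinction from the previous section concrete: only grade A corresponds to a complete injury, while B through D are degrees of incompleteness.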

Source: https://www.orthobullets.com/spine/2006/spinal-cord-injuries
Recovery and Neural Plasticity
After magnetic resonance imaging (MRI), computerised tomography (CT) scans, or X-rays, medical professionals can determine treatment for SCIs. This treatment largely depends on the extent to which nerve fibres remain intact. As incomplete SCIs allow the spinal cord to retain some function, those with incomplete SCIs typically have a quicker recovery process than those with complete SCIs. The primary goals of treating spinal cord injuries are to stabilise the spine, manage pain and other symptoms, and promote recovery and rehabilitation. This is why immediate treatment of SCIs involves procedures such as realigning the spine using a rigid brace or mechanical force as quickly as possible, alongside surgery to remove any fractured bones or other objects pressing on the spinal column. In the long term, rehabilitation programmes are important for people with an SCI, as these programmes can support both physical and mental wellbeing; they may include physical therapy, occupational therapy, speech therapy and counselling. Although spinal cords cannot yet be completely repaired, the National Institute of Neurological Disorders and Stroke (2022) reports that current research focuses on advancing our understanding of four key principles of spinal cord repair: neuroprotection, repair and regeneration, cell-based therapies, and neuroplasticity. With further research into these fields, the potential for enhancing recovery in both types of SCIs only grows.
Conclusion
Overall, the level and severity of a spinal cord injury together determine the extent and completeness of paralysis. Higher injuries disrupt more neural pathways and therefore affect larger regions of the body, while lower injuries have more restricted impacts. Whether paralysis is complete or incomplete ultimately depends on how much neural communication is preserved at the injury site. A deeper understanding of how injury level and severity shape paralysis is essential for predicting patient outcomes, planning treatment, and developing future approaches aimed at restoring lost function.
Bibliography (1)
Antal Nógrádi and Gerta Vrbová (2013). Anatomy and Physiology of the Spinal Cord. [online] Nih.gov. Available at: https://www.ncbi.nlm.nih.gov/books/NBK6229/.
Villines, Z. (2020). Complete & Incomplete Spinal Cord Injuries: Everything You Need to Know. [online] www.spinalcord.com.
Available at: https://www.spinalcord.com/blog/complete-vs.incomplete-spinal-cord-injuries
Christopher & Dana Reeve Foundation. (n.d.). What Is A Complete Vs Incomplete Spinal Cord Injury? [online] Available at: https://www.christopherreeve.org/todays-care/living-with-paralysis/newly-paralyzed/how-is-an-sci-defined-and-what-is-a-complete-vs-incomplete-injury/
Bibliography (2)
Christopher & Dana Reeve Foundation (2025). How Does The Spinal Cord Work | Reeve Foundation. [online] Available at: https://www.christopherreeve.org/todays-care/living-with-paralysis/health/how-the-spinal-cord-works/.
Callahan, webmaster (2023). Callahan & Blaine. [online] Available at: https://www.callahan-law.com/how-spinal-cord-injuries-can-cause-paralysis/
Mayo Clinic (2024). Spinal cord injury - Symptoms and causes. [online] Mayo Clinic. Available at: https://www.mayoclinic.org/diseases-conditions/spinal-cord-injury/symptoms-causes/syc-20377890
National Institute of Neurological Disorders and Stroke (2022). Spinal cord injury. [online] www.ninds.nih.gov. Available at: https://www.ninds.nih.gov/health-information/disorders/spinal-cord-injury
Cleveland Clinic (2021). Spinal Cord. [online] Cleveland Clinic. Available at: https://my.clevelandclinic.org/health/body/21946-spinal-cord.
www.orthobullets.com. (n.d.). Spinal Cord Injuries - SpineOrthobullets. [online] Available at: https://www.orthobullets.com/spine/2006/spinal-cord-injuries.
WINNERS OF THE DLSS SCIENCE COMMUNICATION COMPETITION:
Introduction by Serin C
Science communication is a vital bridge between scientific knowledge and the wider world. It transforms complex ideas into accessible, engaging, and meaningful narratives that can inspire curiosity, inform public opinion, and empower individuals to make evidence-based decisions. In a rapidly advancing scientific landscape, the ability to communicate these developments clearly and responsibly has never been more important. Strong science communication not only educates the public but also cultivates trust in scientific institutions, supports informed policymaking, and encourages the next generation of thinkers to participate in scientific inquiry. For students especially, learning how to communicate scientific ideas nurtures critical thinking, creativity, and confidence: skills that are essential far beyond the classroom.
To emphasise this importance, and to encourage students to explore topics beyond the curriculum, we launched the DLSS Science Communication Competition, with the idea of the winning entry being featured in this first edition of the DLSS Journal. This competition was designed not just to assess scientific understanding but to challenge students to present advanced material in a way that their peers (and even those with little scientific background) could understand and appreciate. By doing so, we hoped to nurture curiosity and academic exploration across our community, showing fellow students that science is not confined to textbooks but is something dynamic, creative, and deeply connected to society.
Sheraya and I were incredibly impressed with our winners. Despite being in Year 9 and 10, they chose to tackle an A Level Biology topic, something far beyond their current syllabus, and managed to present it with great clarity and sophistication. Their infographic was visually compelling and scientifically accurate, demonstrating not only strong research skills but also an impressive ability to communicate complex concepts in a digestible format. Congratulations once again to our winners Myra, Nour and Zaira!
Winners:
Myra S, Nour A, Zaira S




