THINK. LEARN. DISCOVER. AI & Health
Interview with Dr. Abigail Ortiz Combatting Health Inequities ChatGPT
Integrating AI in Mental Health Research
Considering Intersectionality in Biomedical Research
Friend or Foe?
MAGAZINE
EDITORS-IN-CHIEF:
Jason Lo Hog Tian
Stacey J. Butler
EXECUTIVE DIRECTORS:
Niki Akbarian
Elizabeth Karvasarski
Iciar Iturmendi Sabater
Kyla Trkulja
PHOTOGRAPHERS:
Niki Akbarian (Director)
DESIGN EDITORS:
Xinyi Li (Director)
Joshua Koentjoro
Anais Lupu
Vanessa Nguyen
Emily Tjan
Livia Nguyen
Stephen Nachtsheim
Jayne Leggatt
Andrew Janeczek
Josephine Choi
Genevieve Groulx
Anne McGrath
Brendan Lazar
www.imsmagazine.com
SOCIAL MEDIA TEAM:
Elizabeth Karvasarski (Director)
Lizabeth Teshler
Mahbod Ebrahimi
JOURNALISTS & EDITORS:
Niki Akbarian
Kateryna Maksyutynska
Beatrix Wang
Denise Sabac
Bahar Golbon
Sipan Haikazian
Janet Z. Li
Ilakkiah Chandran
Mahbod Ebrahimi
Sonja Elsaid
Kristen Ashworth
Shu’ayb Simmons
Iciar Iturmendi Sabater
Nikou Kelardashti
Samantha Ricardo
Alex SH Lee
Samuel Lasinski
Madhumitha Rabindranath
Jennifer Ma
Benjamin Traubici
Dhvani Mehta
Usman Saeed
Sara Shariati
Maryam Sorkhou
We hope you are all enjoying your summer and getting some much-needed rest as we are happy to release our Summer 2023 issue of IMS Magazine! This issue, we have decided to focus on artificial intelligence (AI) and digital health, a hot-button topic that has taken the world by storm, which was reflected in the keen interest we had from journalists. The IMS community is always on the cutting edge of research and medicine, and we are pleased to highlight some of the great work that is underway in this quickly growing field.
In this issue, we highlight the work of Drs. Alexander Bilbily, Farzad Khalvati, Abigail Ortiz, and Robert Wu, who are doing cutting-edge research in AI and digital health to aid in diagnostic testing and predicting illness. We also have thought-provoking Viewpoint articles on AI and mental health, the use of ChatGPT, how AI is helping clinical judgement, and the implementation of AI in the Global South. This issue also covers three IMS events: the annual IMS Scientific Day, the Regenerative Medicine Symposium, and the IMSSA 3-Minute Thesis competition. Check them out to learn more about the wonderful events that happen at IMS! Lastly, we have our Diversity in Science article on intersectionality in biomedical research, another poignant topic that all scientists should be knowledgeable about.
That brings an end to our 2022/2023 year of IMS Magazine issues! We would like to thank all of our journalists, editors, and designers who have been part of the team this year and we wish those who are graduating and moving on all the best. We would also like to thank all of our readers online and in print for your support – this enables us to continue to work hard and make the best product possible to showcase the work and lives of the IMS community. Enjoy the summer and we will see you back in the Fall for the next issue!
Jason is a 5th year PhD student examining the mechanisms linking HIV stigma and health under the supervision of Dr. Sean Rourke.
@JasonLoHogTian
Stacey is a 4th year PhD student under the supervision of Dr. Andrea Gershon evaluating the quality of care for patients with respiratory disease using a population-based approach.
@StaceyJButler

Jason Lo Hog Tian
Stacey Butler

As the 2022/2023 academic year comes to an end, the IMS Magazine team is looking towards the future of healthcare in their Summer 2023 issue. With a focus on artificial intelligence (AI), this issue showcases how the IMS community is using technology to improve the quality of patient care and the efficiency of our healthcare system.
This issue features several IMS faculty who are pushing the boundaries of new technology. Drs. Alexander Bilbily and Farzad Khalvati are using AI to enhance the power of current imaging techniques like x-rays, MRIs, and PET scans, reducing the need for more invasive or costly diagnostic testing. Drs. Abigail Ortiz and Robert Wu are using wearables to monitor patients both in and out of the hospital and predict future illness episodes. All of the faculty in this issue also discuss how privacy, trust, and generalizability are important considerations with AI technology and emphasize that although the landscape of healthcare is changing, the doctor-patient relationship cannot be replaced.
In this issue we hear about how Julia Tomasi, a PhD student in the IMS, incorporated wearables and virtual reality into her thesis, allowing her to monitor patients remotely during the pandemic. We also hear about a different side of technology from IMS graduate Helen Liu, who is applying the skills she acquired during graduate school to a career as a healthcare investment analyst.
IMS faculty Dr. Cindi Morshead is also featured in this issue and discusses mentorship and following your passion, both of which were common themes at the 2023 IMS Scientific Day. This year we held a two-day event on April 24th and 25th at Hart House, featuring a new professional development initiative ‘Charting Your Own Course’. The event was a huge success and gave students the opportunity to network and hear from individuals who have been successful in both academic and non-academic careers. This was a record-breaking year with over 180 IMS students submitting abstracts to present their research findings at IMS Scientific Day.
On behalf of the entire IMS community, I extend my sincere congratulations to the new faculty joining the IMS and members who have recently been promoted. I would also like to thank the Editors-in-Chief, Jason and Stacey, along with the editors, journalists, photographers, and design team for producing another great issue of IMS Magazine. I hope you enjoy reading about the innovative ways that the IMS is using technology to improve our health.
Sincerely,
Dr. Mingyao Liu
Director, Institute of Medical Science
Professor, Department of Surgery
Senior Scientist, Toronto General Hospital Research Institute, University Health Network

Niki Akbarian is a first-year MSc student under the supervision of Dr. Linda Mah and Dr. James Kennedy. Her research focuses on better understanding the association between personality traits and biomarkers of Alzheimer’s disease and the genetic basis of this association. Outside of academia, Niki enjoys photography, playing the piano, and watching sitcoms.
Ilakkiah Chandran is a first year MSc student at IMS supervised by Dr. Danielle Andrade at the Krembil Brain Institute. Her thesis aims to understand the phenotypic and genotypic presentation of pediatric-onset developmental and epileptic encephalopathies in adults. In her free time, she enjoys reading, going on impromptu adventures and tuning into some true-crime!
Sonja Elsaid is a PhD student investigating brain function and cannabis use in individuals with social anxiety. Prior to going back to school, Sonja was a clinical research and medical communications professional with nearly 20 years of experience.
Kristen Ashworth is a first year MSc student working under the supervision of Dr. Brian Ballios at the Donald K. Johnson Eye Institute and Krembil Research Institute. Her thesis is focused on developing a retinal organoid model to evaluate stem cell therapies for USH2A- and CRB1-related inherited retinal diseases. Kristen loves cross country running, reading, going to Marshalls, and most importantly, doting on her two adorable golden retrievers.
Mahbod Ebrahimi is a first-year MSc student investigating the association between immune gene expression and suicide risk in schizophrenia patients under the supervision of Dr. James Kennedy. Outside of research, Mahbod enjoys a good book, playing chess, and listening to Jazz music. Mahbod is also an active member of IMS Magazine’s Social Media team.
Bahar Golbon is a second-year MSc student investigating the surgical outcomes of primary hyperparathyroid patients in Ontario under the supervision of Dr. Jesse Pasternak. In her free time, you can find Bahar completing her millionth puzzle, and drinking a cup of coffee!
Sipan Haikazian is a first-year MSc student researching the efficacy and safety of maintenance ketamine infusions for relapse prevention in patients with treatment-resistant bipolar depression, under the supervision of Dr. Joshua Rosenblat. Outside of research, Sipan enjoys playing the piano, exercising, and being around good company.
Nikou Kelardashti is a first year MSc student under the supervision of Dr. Karen Davis. Her research focuses on the relationship between neural oscillations and pain-attention interaction. Outside of academia, Nikou enjoys reading poetry and classic literature, watching old movies, and going for long walks.
Kateryna Maksyutynska is a PhD candidate investigating whether brain insulin resistance is a feature of the biology of depression under the supervision of Dr. Mahavir Agarwal and Dr. Margaret Hahn at CAMH. Outside of the lab, she can be found enjoying a good book, painting, or biking along the lake.
Iciar Iturmendi Sabater is a PhD student researching social processing and adaptation across neurodevelopmental conditions (autism, ADHD, learning disabilities, etc.) under the supervision of Dr. Meng-Chuan Lai and Dr. Hsiang-Yuan Lin. Iciar likes reading, exploring new places, and spending time with family and friends.
Janet Z. Li is a first-year MSc student studying brain-behavior relationships between conditioned pain modulation capability and functional connectivity of key pathways in the dynamic pain connectome under Dr. Karen Davis at the Krembil Brain Institute in Toronto Western Hospital. Outside of research, she can be found practicing piano, figure skating, creating fashion content, and café hopping.
Samantha Ricardo is a first-year MSc student studying mechanisms of Alport Syndrome under the supervision of Dr. Moumita Barua at PMCRT. Outside of the lab, you can catch her biking around the city, trying new cuisines, or attempting to play chess.
Denise Sabac is a first year MSc Student working with Dr. Felsky in the Krembil Centre for Neuroinformatics at CAMH. Her work aims to subtype mental illnesses in treatment-seeking youth using Similarity Network Fusion analysis of the Toronto Adolescent & Youth CAMH Cohort Study data. Aside from research, Denise enjoys playing sports, walking along sandy beaches, and drinking lots of coffee.
Kyla Trkulja is a third year PhD student at IMS studying under the supervision of Dr. Armand Keating, Dr. John Kuruvilla, and Dr. Rob Laister at Princess Margaret Hospital. Her work focuses on better understanding the mechanism of action of a novel cancer therapy for lymphoma so it can be better utilized in the clinic. Outside of academia, Kyla enjoys reading, writing, video games, and going for road trip adventures across the province. kylatrkulja_
Elizabeth Karvasarski (Lead) is a PhD student at IMS based in the Mount Sinai Catheterization Laboratory under the supervision of Dr. Susanna Mak. Her research involves investigating right ventricular and pulmonary arterial interactions in patients with pulmonary hypertension and heart failure. Outside of research, Elizabeth practices martial arts and is a 4th degree black belt.
Shu’ayb Simmons is a second year IMS MSc student working with Dr. Tripathy at the CAMH Krembil Centre for Neuroinformatics. Their work aims to quantify gene-environment interactions using robust data analysis and statistics to improve Black American psychiatric outcomes. In their free time, Shu‘ayb enjoys advocacy, fashion, travelling, producing music, and songwriting.
Beatrix Wang is a fourth year PhD student who, under the supervision of Drs. Freda Miller and David Kaplan, is trying to better understand neural stem cell behaviour during development. In her spare time, she enjoys reading, writing, and learning taekwondo.
Lizabeth Teshler is a first year MSc student at IMS supervised by Dr. Brian Feldman at The Hospital for Sick Children. Her research investigates physical joint health assessment in people with Hemophilia. Outside of research, Lizabeth loves biking, spending time outdoors and volunteering for various community initiatives.
The IMS Design Team is a group of second year MSc students in the Biomedical Communications (BMC) program. Turning scientific research into compelling and effective visualisations is their shared passion, and they are thrilled to contribute to the IMS Magazine.
9,673 publications on AI indexed via PubMed in the last 3 years (2020-2022), which is higher than the total of all years prior (8,304 publications between 1967-2019)
Number of yearly publications on PubMed regarding AI was almost 9x higher in 2022 than 10 years prior in 2012
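Counts like these can be tallied directly from PubMed's public E-utilities interface, which returns the number of records matching a query. The sketch below is a minimal, illustrative Python example using the requests library; the search term is a hypothetical stand-in, not the exact strategy used for the figures above.

```python
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str, year: int) -> int:
    """Return the number of PubMed records matching `term` published in `year`."""
    params = {
        "db": "pubmed",
        "term": f"{term} AND {year}[dp]",  # [dp] = date of publication field tag
        "retmode": "json",
        "retmax": 0,  # we only need the count, not the record IDs
    }
    response = requests.get(ESEARCH_URL, params=params, timeout=30)
    response.raise_for_status()
    return int(response.json()["esearchresult"]["count"])

if __name__ == "__main__":
    # Hypothetical query; the magazine's exact search strategy is not specified.
    term = '"artificial intelligence"[Title/Abstract]'
    for year in (2012, 2020, 2021, 2022):
        print(year, pubmed_count(term, year))
```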
Over
AI-enabled medical devices are FDA approved
AI refers to computer programs, or algorithms, that use data to simulate human intelligence in making decisions or predictions.
The computer analyzes data and makes decisions from a set of rules or instructions created and given to it.
Machine learning involves the AI algorithm teaching itself how to analyze and interpret data.
The algorithm may pick up on patterns that humans may miss.
Their ability to learn and interpret data improves as they are exposed to more information.
Deep learning is a subset of machine learning that uses multilayered networks like the human brain does.
They mimic how our brain cells take in, process, and react to signals from the rest of our body.
The AI will self-discover features unknown or unanticipated by humans.
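The difference between rule-based programs and machine learning can be seen in a few lines of code. The Python sketch below, using entirely made-up patient data, contrasts a hand-written rule with a decision tree that learns its own rules from labelled examples.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Made-up data: each row is [age, resting heart rate]; label 1 = "high risk".
X = np.array([[45, 88], [62, 95], [30, 70], [71, 102], [55, 80], [28, 64]])
y = np.array([0, 1, 0, 1, 0, 0])

# Rule-based "AI": a human writes the decision rule explicitly.
def rule_based(age: float, heart_rate: float) -> int:
    return int(age > 60 and heart_rate > 90)

# Machine learning: the algorithm derives its own rules from labelled examples.
model = DecisionTreeClassifier(max_depth=2).fit(X, y)

patient = np.array([[68, 99]])
print("Rule-based prediction:", rule_based(*patient[0]))
print("Learned prediction:   ", int(model.predict(patient)[0]))
```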
Risk Prediction
Predict risk of developing disease, or risk of outcomes (ex. hospitalizations)
Appointments
Virtual care (telehealth)
Benefits
Record-keeping
Electronic medical records; medications via e-pharmacies; smartphones or web-based apps to record symptoms or medication use
Accessibility
Increasing accessibility for remote and underserved populations
Improved Outcomes
Patients get more personalized care
Accuracy and Reliability
More accurate, reliable and precise diagnoses
1. U.S. Food & Drug Administration. Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices [Internet]. FDA; 2022. Available from: https://www.fda.gov/medical-devices/software-medical-device-samd/ artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices
2. Jaber N. Can artificial intelligence help see cancer in new ways? [Internet]. 2022 [cited 2023 May 14]. Available from: https://www.cancer.gov/news-events/cancer-currents-blog/2022/artificial-intelligence-cancer-imaging
3. Shreve JT, Khanani SA, Haddad TC. Artificial Intelligence in oncology: Current capabilities, future opportunities, and ethical considerations. American Society of Clinical Oncology Educational Book. 2022 Jun 10;(42):842–51. doi:10.1200/edbk_350652
Monitoring
Wearables to monitor physical activity or heart rate (ex. Fitbits or smartwatches)
Treatment
Identifying the best treatment; predicting response to treatment
Generalizability
Can the algorithm be generalized to broader populations?
Bias
Will lack of diversity used in training datasets bias the algorithm?
Transparency
How can doctors and patients understand how the algorithm came to a conclusion?
Retraining
Will the algorithm need to be retrained every time there’s new equipment?
Regulation
How will changes to existing algorithms be monitored?
4. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthcare Journal. 2019 Jun;6(2):94–8. doi:10.7861/futurehosp.6-2-94
Artificial intelligence (AI) has been integrated into various aspects of today’s society, and the medical field is no exception. In recent years, AI has shown great potential in improving disease diagnosis, prediction of treatment outcomes, equity in healthcare, and overall patient care.1 Some even say that AI can revolutionize our approach to healthcare in the future. According to Dr. Alexander Bilbily—a Radiologist and Scientist at Sunnybrook Health Sciences Centre and the Co-Founder of 16 Bit Inc.—“[AI] is the new electricity that has just been invented, and it is starting to be used everywhere to create things that were never possible before.”
Dr. Bilbily’s dual passion for medicine and computer science led him to specialize in radiology after receiving his MD from the University of Toronto. During his residency training, which coincided with the beginning of the modern era of AI, Dr. Bilbily recognized the potential for AI models to improve the medical field, especially for disease screening and diagnosis. As a result, he co-founded 16 Bit, a startup with the vision of improving healthcare quality, efficiency, and equity with AI. Dr. Bilbily also directs a research lab at Sunnybrook Health Sciences Centre where he and his team work on several projects to design and implement cutting-edge screening and diagnostic tools across Canada.
One of Dr. Bilbily’s initiatives at 16 Bit uses AI to augment screening programs for osteoporosis, a condition characterized by reduced bone mineral density and bone mass. This results in changes in the architecture and strength of bones, increasing the risk of fractures.2 Although there are precise tools such as dual X-ray absorptiometry (DXA) to diagnose osteoporosis, 75% of affected individuals have never been screened. This may be due to the silent nature of the disease, as patients typically live with low bone mineral density for many years before experiencing its detrimental consequences, such as bone fracture. Thus, Dr. Bilbily and his team at 16 Bit have developed a model that would allow the screening and identification of patients susceptible to osteoporosis with X-ray images, as 80% of the population aged over 50 undergo at least one X-ray in their lifetime for various medical conditions other than osteoporosis. Although it is not possible to directly measure bone density from X-rays like you would with DXA, other features,
such as bone quality and architecture that are also affected by osteoporosis but are not currently used as diagnostic measures, are captured by X-rays. Dr. Bilbily and his team at 16 Bit have trained an AI model, called Rho, which can map changes in bone architecture and other features detected by X-rays to changes in bone density as measured by DXA. In turn, Rho can extract clinical insight about the risk of each patient developing osteoporosis from X-rays and notify clinicians to run clinical fracture risk assessment and DXA to confirm the diagnosis for high-risk individuals. Similarly, Dr. Bilbily leads a project to train models that can predict patients’ cardiac risk from myocardial perfusion imaging, as well as covariates such as their age and gender.
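Rho itself is a proprietary product, but the underlying idea of mapping image-derived bone features to a DXA-measured density can be sketched as a simple supervised regression. The Python example below uses fabricated features and measurements purely for illustration; it is not 16 Bit's model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Hypothetical training set: features extracted from routine X-rays
# (e.g. cortical thickness, trabecular texture) paired with DXA bone mineral density.
n_patients = 500
xray_features = rng.normal(size=(n_patients, 8))  # stand-in image features
dxa_bmd = (
    1.0
    + 0.1 * xray_features[:, 0]
    - 0.05 * xray_features[:, 3]
    + rng.normal(scale=0.05, size=n_patients)      # stand-in DXA measurements
)

X_train, X_test, y_train, y_test = train_test_split(
    xray_features, dxa_bmd, test_size=0.2, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
predicted_bmd = model.predict(X_test)
print("MAE vs DXA (g/cm^2):", round(mean_absolute_error(y_test, predicted_bmd), 3))

# In a screening workflow, low predicted density could flag patients
# for confirmatory fracture-risk assessment and DXA.
flagged = predicted_bmd < np.quantile(predicted_bmd, 0.25)
print("Patients flagged for follow-up:", int(flagged.sum()))
```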
The objective of another project that Dr. Bilbily and his team work on is to improve the quality and affordability of positron emission tomography (PET) scans. PET is an imaging modality that utilizes radioactive drugs, called radiotracers, to demonstrate the metabolic and biochemical activity of tissues and organs.3 Although PET scans are great assets in the medical field, especially for oncological, neurological, and cardiovascular applications,3 radiotracers injected for scans are expensive and associated with significant radiation exposure to patients. Hence, Dr. Bilbily and his team are interested in using machine learning (ML) methods to minimize the noise and improve the
quality of imaging while using smaller doses of radiotracers. Specifically, the models created by ML would match full-dose PET scans to simulated scans generated with a quarter of the usual radiotracer dose. In other words, these models would take a low-dose image and turn it into an image with the same quality and detail as a full-dose scan. If the team achieves good model performance, each patient could be injected with a quarter of the dose originally required, allowing patients to be scanned with reduced radiation exposure and at a reduced cost. One challenge, however, is to prove that the de-noised low-dose images will not miss important details, such as small metastases. To overcome this barrier, physicians, blinded to the radiotracer dose used, are asked to interpret the full-dose images and their de-noised low-dose counterparts to evaluate whether it is possible to arrive at the same diagnosis using low-dose versus full-dose scans.
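At its core, this low-dose-to-full-dose mapping is an image-to-image regression problem. The sketch below shows the general shape of a training loop for such a de-noising model in PyTorch, with random tensors standing in for paired PET scans; the architecture and data are illustrative assumptions, not the team's actual pipeline.

```python
import torch
import torch.nn as nn

# Tiny convolutional de-noiser: maps a simulated quarter-dose PET image
# to an estimate of the corresponding full-dose image.
class Denoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Random tensors stand in for paired (low-dose, full-dose) training images.
full_dose = torch.rand(8, 1, 64, 64)                      # placeholder "ground truth"
low_dose = full_dose + 0.2 * torch.randn_like(full_dose)  # noisier quarter-dose stand-in

model = Denoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()  # pixel-wise loss between prediction and full-dose target

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(low_dose), full_dose)
    loss.backward()
    optimizer.step()

print("final training loss:", float(loss))
```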
The impact of AI tools developed by Dr. Bilbily and his team has gone beyond the research setting. For instance, Rho has been approved by Health Canada and has screened 18,000 patients since September 2022. Out of the thousands of individuals screened with Rho, approximately 50% were identified as at risk for osteopenia/osteoporosis, and 90% of those who had DXA after being identified as high-risk were diagnosed with the disease. Notably, tools such as Rho involve no additional costs for patients, as the information they require comes from X-rays already being obtained for other medical reasons. At the same time, it is essential to be transparent with patients about how AI tools are developed and how they can assist healthcare professionals in providing more efficient and precise care.
Overall, the benefits of AI for the healthcare system are numerous. Dr. Bilbily noted, “AI is no longer a nice to have. It has become a necessity to improve efficiency because our healthcare system does not seem to be sustainable in the long term. As far as I can see, AI is our best shot.” Yet, it is critical to
consider the potential limitations of deploying AI in medicine. The training of AI algorithms relies heavily on large, standardized datasets which may not be readily available in the healthcare setting. Due to the unstructured nature of electronic patient records and concerns about confidentiality, datasets are usually fragmented and incomplete. Furthermore, patient populations and treatments are rapidly changing, making it challenging to generalize and externally validate AI models trained with data from a specific patient population. Thus, physicians must understand how changes in interventions and patients can impact the performance of medical AI tools. As Dr. Bilbily states, “So much value can be potentially unlocked by [AI], but at the same time, we have to be very careful with how these [tools] are implemented in medicine…we need to make sure that [AI] is used in a safe and appropriate way.” Hence, training the next generation of clinicians with sufficient understanding of both computer science and medicine is critical for appropriately leveraging AI in healthcare.
Artificial intelligence (AI) has established itself as a powerful and transformative tool. This technology allows for the manipulation of large volumes of data to solve various problems, resulting in its application within diverse disciplines ranging from mundane to complex. Specifically, the implementation of AI in healthcare has revolutionized medicine with its ability to optimize algorithms to inform patient care, and in turn, the patient and user experience. Its use is constantly being expanded and perfected, including in the context of mental health research as scientists work to understand the biological underpinnings of mental illnesses.
Dr. Abigail Ortiz, a Clinician Scientist at the Campbell Family Mental Health Research Institute at the Centre for Addiction and Mental Health (CAMH), implements AI in her study of mood disorders. Her research focuses on the use of wearable devices to build personalized clinical prediction models for individuals with bipolar disorder. Using advanced nonlinear techniques, Dr. Ortiz and her multidisciplinary team of quantum physicists, mathematicians, biomedical engineers, and computational biologists analyze time-series data to forecast episodes of illness. Together, they study the unique architecture of patients’ mood regulation to better understand clinical trajectories and outcomes.
“The one question that, I think, will take my career to solve has to do with mood regulation… We all have good days and bad days. Why do we bounce back from a bad day, and how?”
Dr. Ortiz was inspired by her own use of wearable technology and took the opportunity to translate it to her clinical practice. Depending on the outcomes being studied, collected data ranges from tracking sleep cycles to objective measures of physical activity, all of which are key factors in the progression of the illness. She recognizes that although these devices are not foolproof, they provide more complete data and offer “a window into the physiology of the patient.” Over time, with more data acquisition and model training, these wearables can be universally integrated into clinical practice to serve as a form of personalized and preventative medicine.
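To make the idea of a personalized prediction model concrete, the sketch below turns daily sleep and heart-rate readings into a simple classifier that flags an elevated risk of a mood episode in the following week. The data, labels, and model are fabricated for illustration under stated assumptions; this is a generic example, not Dr. Ortiz's actual model.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Fabricated daily wearable data for one participant over two years.
days = 730
data = pd.DataFrame({
    "sleep_hours": rng.normal(7, 1.2, days),
    "resting_hr": rng.normal(65, 6, days),
})

# Hypothetical label standing in for "a mood episode occurred within the next 7 days".
risk = -0.8 * (data["sleep_hours"] - 7) + 0.1 * (data["resting_hr"] - 65)
data["episode_next_week"] = (risk + rng.normal(0, 1, days) > 1.5).astype(int)

# Seven-day rolling averages summarize recent physiology for each prediction day.
features = data[["sleep_hours", "resting_hr"]].rolling(7).mean().dropna()
labels = data.loc[features.index, "episode_next_week"]

# Keep time order when splitting so the model is evaluated on later weeks.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, shuffle=False)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print("AUC on held-out weeks:", round(auc, 2))
```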
Although this technology has great potential, there are important ethical considerations given the intricacy of some of the research questions that AI is being used to solve, and the scale of data that is required to draw conclusions. Dr. Ortiz emphasized that, “Before we get to developing a [prediction] model, we also need to talk about the ethics of using AI or machine learning into these processes, not only because, of course, they can be biased, but also because we need to understand ‘what do we want to do with it’? How can we better serve patients with this information? With all this information,
we need to be aware that privacy and confidentiality are critical.”
Therefore, steps must be taken to ensure the safety and confidentiality of data when prediction models are implemented beyond a clinical setting.
Another important consideration is the affordability of wearables to ensure equitable access to these devices. This is pertinent given that socioeconomic status is a predictor of various mental health disorders.1 Therefore, to develop accurate prediction models, it is essential for training data to be captured from diverse populations to allow for broad utilization in the future. Furthermore, as technology rapidly advances and certain populations may have difficulty adopting it into their daily lives, the accessibility of such devices must also be considered. Notably, Dr. Ortiz reported that, in her experience, elderly research participants were very open to the use of wearables, enjoyed partaking in the research, and were among the most adherent groups in terms of collecting the data. This stresses the need for patient engagement in research to seek the perspectives of individuals with lived experience at all stages of a study, from conception to execution. Considering the needs of key stakeholders allows for the design of studies that answer relevant questions and offers insight into how best to support the collection of quality data. Given that the introduction of AI to healthcare is relatively recent, such partnerships build trusting relationships
between patients and the care team through open dialogue.
Dr. Ortiz also took some time to reflect on her scientific journey and offered encouragement for future students hoping to pursue this area of research. In outlining her work, she highlighted that medicine is not limited to techniques just within the field. To foster growth, various skills and practices must be translated from different disciplines to be able to answer complex questions.
“[People felt that] combining mathematics and AI in psychiatry, for years, was just too complicated–not doable. What I would like to share with grad students is that, if
you think you have a good project, with a good idea… there is no cutting corners–you have to do the hard work. You have to tolerate the critiques and keep going if you feel that, that’s what you want to do to solve the problem; to help others; to keep moving forward.”
When asked about the future of AI in medicine, Dr. Ortiz had a very positive outlook on its ability to promote patients to take ownership of their health data and take on a more active role in their own care.
“I think it’s not so much that the technology is going to change or it’s that the use of technology is going to change... I think that how we all use [technology] is going to change… and it’s very empowering to see patients own it, for their own health benefit.”
Through this discussion with Dr. Ortiz, it is clear that the use of AI in medicine has the potential to revolutionize the understanding of multifaceted illnesses and provide more personalized treatment to patients. The subsequent integration of these techniques into standard clinical care can offer opportunities for personalized interventions and encourage patients to be engaged in their healthcare. With this field and technology rapidly expanding, there is a need for discussion surrounding the security and confidentiality of vulnerable patient data to ensure that it is being used ethically and stored securely. In addition, to facilitate the full integration of AI in medicine,
stakeholder engagement is essential to accurately collect data and effectively construct the study design. Overall, AI has the potential to reshape medical care offered to patients and transform the study of dynamic and multifaceted illnesses, such as in the field of mental health.
From everyone at the IMS Magazine, we thank Dr. Abigail Ortiz for sharing her passion for research and the innovative scope of her work in the field of mental health research.
If you would like to read more about Dr. Ortiz’s ongoing study, you can find it on PubMed (ID: 35459150).
Tumours are extremely diverse, owing to the massive network of cell cycle regulators that, when perturbed, trigger formation of cancerous tissue. Clinicians and scientists are increasingly recognizing this heterogeneity and are decoding how the genetic backgrounds of tumours inform how they respond to different drugs. However, there are often barriers to actually using this genetic information to treat patients.
Nowhere are these challenges more evident than in tumours of the central nervous system, which represent a leading cause of death and morbidity for children with cancer.
“Biopsying a brain tumour is not an easy task,” says Dr. Farzad Khalvati. “It’s a very invasive procedure which may actually be harmful.” That being said, performing biopsies, a process that typically involves drilling a hole in the skull and using a needle to remove the tissue of interest, remains the gold standard for identifying the genetic factors underlying brain tumour growth. Without this information, it can be difficult to provide patients with precision medicine—treatments that target specific tumour subtypes while minimizing negative side-effects. Therefore, oncologists and neurosurgeons must carefully balance the risks and benefits of performing such a procedure.
Dr. Khalvati, a scientist at the Hospital for Sick Children and an associate professor in the Departments of Medical Imaging and Computer Science at the University
of Toronto, thinks there is a faster and less invasive way to provide precise treatments for patients with paediatric low-grade gliomas (pLGGs), the most common type of brain tumour in children. For Dr. Khalvati and his lab, this solution involves the combination of artificial intelligence (AI) algorithms and medical imaging.
The premise is simple: because magnetic resonance imaging (MRI) is routinely performed for brain tumour diagnosis, there exists a great deal of information linking tumour appearance to genetic makeup. Dr. Khalvati believes that, by training AI algorithms on these MRI images and their corresponding genetic data, he can develop a program that can accurately identify the mutations driving glioma formation and growth.
This approach is promising in part because pictures are, by their very nature, dense with information that can be extracted and transformed into meaning, something that AI is particularly well-suited for. Through training, deep learning algorithms can take the data implicit within MRI scans and build predictive models using image features that help inform the underlying genetics of the tumours.
These features go beyond characteristics that are intuitive to humans, like tumour size and shape. They even go beyond more abstract radiomic features like pixel intensity, texture, and whether there is heterogeneity or homogeneity within the image. “[AI] looks at any possible information latent in
the tumour region,” Dr. Khalvati says. The result is thousands and even millions of potentially informative variables, which often have no concrete, understandable meaning. Dr. Khalvati continues, “With AI, we are dealing with an ocean of biomarkers—candidate biomarkers—and we want to find the best model that uses these biomarkers to make predictions.”
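A stripped-down version of this image-to-genotype idea can be sketched as a small convolutional classifier trained on labelled MRI slices. In the PyTorch example below, random tensors stand in for real scans and a hypothetical two-class mutation label stands in for the genetic subtypes; it only illustrates the general training setup, not Dr. Khalvati's actual pipeline.

```python
import torch
import torch.nn as nn

# Toy classifier: predicts which of two driver mutations underlies a tumour
# from a single MRI slice. Random tensors stand in for real, labelled scans.
class GenotypeClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * 16 * 16, n_classes)  # 64x64 input -> 16x16 feature map

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

mri_slices = torch.rand(32, 1, 64, 64)        # placeholder tumour-centred slices
mutation_labels = torch.randint(0, 2, (32,))  # placeholder genetic subtype labels

model = GenotypeClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(mri_slices), mutation_labels)
    loss.backward()
    optimizer.step()

predicted = model(mri_slices).argmax(dim=1)
print("training accuracy:", float((predicted == mutation_labels).float().mean()))
```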
Dr. Khalvati and his team have had significant success with their approach thus far. When looking at the two most common subtypes of pLGGs, they are currently able to correctly predict glioma type nearly 90% of the time, and the incorporation of less common subtypes yields an accuracy of roughly 80%. There is still much work to be done, but in this rapidly evolving technological landscape, Dr. Khalvati hopes that this application of AI can be deployed to clinical settings within the next five years.
That being said, before this approach can become reality, there are various challenges that need to be addressed, many of which are technical. For example, for AI-based diagnostic algorithms to be used widely, they must be generalizable across different MRI machines and settings. If such a program cannot recognize that the same tumour can look different when scanned under different conditions, then its usefulness will be limited.
One of the greatest barriers to the widespread adoption of AI-based diagnostic tools, however, is of a completely different variety. According to Dr. Khalvati, it has to do with trust. Will oncologists and neuroradiologists
trust the predictions made by AI enough to adopt this tool? How do you prove to clinicians that something as intangible as AI is as accurate as concrete laboratory results? How do you make clinicians believe in AI, to the extent that they would entrust the well-being of their patients to a computer program? And how do you ensure that patients feel comfortable knowing their diagnoses have been at least partially made by AI?
The question of how to build this trust is something Dr. Khalvati and his team are also actively working on. One solution, he says, is through explainability. If the extraordinarily complex models used to predict tumour genotypes could be made more comprehensible to the clinicians using them, and if their outputs were demonstrably
logical in ways that humans could follow, then clinicians would have fewer reservations about relying on them. These diagnostic decisions could then be more easily communicated to patients as well.
Another solution involves allowing for increased interaction between AI and clinicians in what is known as a ‘human-in-the-loop’ approach. “I think there should be a mechanism in place where clinicians can learn from AI, and AI can also learn from the clinicians,” Dr. Khalvati says. “There should be a two-way connection.” By having clinicians and AI working alongside one another and informing one another’s decision-making, not only could clinicians correct mistakes made by AI to prevent them from happening in the future, but the AI could also potentially flag cases of
human error. Such a platform could not only be beneficial for patient care but would go a long way towards establishing trust between clinicians and AI-based diagnostic tools.
According to Dr. Khalvati, this philosophy of AI and humans working hand in hand is crucial as we move forward into a world that is increasingly reliant on AI-based tools. Right now, there are many open questions about what this world will look like and what roles AI will play in it, just as there are many problems without clear-cut solutions that need to be addressed as we push into the future. To Dr. Khalvati, it is clear that we must implement human-in-the-loop platforms that prioritize explainability as we move forward with AI-based technologies. “By always keeping [humans] in the loop, I think we are all better off,” he says. “We can learn from AI, we can adjust AI, and we can have a better understanding of how decisions are made that definitely impact our lives.” Dr. Khalvati believes that it is through making this human-centric approach a reality— wherein we work with AI, shape it, and are also informed by it—that AI can be a force for human empowerment.
The role of technology in medicine continues to grow. This integration creates numerous opportunities to improve patient care along every step of the timeline. For example, communication technologies have facilitated interactions between healthcare providers to accelerate the delivery of care. Further, symptom monitoring technologies empower patients to monitor their symptoms outside of the hospital setting and enhance patient autonomy. While these developments may dramatically alter the composition of patient care, it is important to systematically evaluate these changes to understand how they ultimately affect patients. Dr. Robert Wu, Associate Professor with the Department of Medicine at the University of Toronto and General Internist at the University Health Network, is a leading scientist in the creation and implementation of communication systems to coordinate care and internet-based tools for management of chronic diseases.
Dr. Wu currently focuses on the use of wearable devices, such as Fitbit, to monitor physiological data. These wearables are able to continuously monitor patients both inside and outside of the hospital setting to provide large amounts of data for patients and providers. Dr. Wu emphasizes the benefit of in-hospital patient monitoring to identify risk factors for future adverse events. For example, many patients experience post-hospital syndrome, a period after hospital discharge when there is an increased risk of adverse events and
rehospitalization. Possible contributors to post-hospital syndrome include factors such as poor sleep and low activity during hospitalization, which can be better understood through patient monitoring using Fitbits. Dr. Wu’s findings revealed that Fitbit-recorded heart rate correlated well with nurse-recorded heart rate, and that the devices were better able to measure activity and sleep than existing assessment methods.1 These results suggest the possibility for wearables to inform management practices through greater data availability. In addition, this work provides a foundation for further research on how these devices can be used alongside regular assessments to optimize patient care.
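This kind of agreement check essentially amounts to correlating two measurement streams. The sketch below, using simulated values rather than the study's data, shows how paired Fitbit and nurse-recorded heart rates might be compared in Python.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Simulated paired measurements: nurse-recorded heart rate at routine checks,
# and the Fitbit reading taken at (approximately) the same time.
nurse_hr = rng.normal(80, 12, 200).round()
fitbit_hr = nurse_hr + rng.normal(0, 4, 200)  # wearable adds some measurement noise

paired = pd.DataFrame({"nurse_hr": nurse_hr, "fitbit_hr": fitbit_hr})
r, p_value = pearsonr(paired["nurse_hr"], paired["fitbit_hr"])
mean_bias = (paired["fitbit_hr"] - paired["nurse_hr"]).mean()  # Bland-Altman-style bias

print(f"Pearson r = {r:.2f} (p = {p_value:.1e}), mean bias = {mean_bias:.1f} bpm")
```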
Furthermore, Dr. Wu explores the benefits of wearables outside the hospital setting. More specifically, he focuses on the use of wearables to monitor chronic obstructive pulmonary disease (COPD), a group of diseases that cause airflow blockage and breathing problems. Applications that monitor oxygen saturation, heart rate, and activity can be useful to this patient population by predicting early exacerbations and improving their awareness and ability to manage their condition. To better understand the role of wearables for COPD management, Dr. Wu employs patient-directed and inclusive practices when developing new technologies. He conducted a qualitative study to identify specific factors that patients liked or disliked about the wearables, and differences between patient
preferences. For example, some patients showed a preference for sharing data recorded from wearable devices with their physicians, while others had some reservations. Further, he shares that some patients were reluctant to use the device due to discomfort caused by continuous audio recordings or features related to the device’s aesthetics.2 In using this approach, he could identify areas of improvement to not only enhance the application but also increase uptake by patients. In addition, he identifies a lack of consistent protocol for physician use of the data collected by these devices; therefore, as reliance on technology grows, so should procedures for consideration of available data to inform care.
Dr. Wu further elaborates on additional challenges in the development of wearables for symptom monitoring, particularly for out-of-hospital use. He shares the difficulty in filtering vast amounts of information available through consistent monitoring, and using only relevant information to predict outcomes or ascertain the severity of an event. Furthermore, he explains the importance of understanding how patients interpret the data of their wearable devices. For example, Dr. Wu describes his process of systematically evaluating the perception of dyspnea intensity and dyspnea-related distress and anxiety (DDA) in patients with COPD. In this study, his team found that presenting live physiological data during exercise can reduce DDA and encourage physical activity.3 As such, understanding these perspectives and their
impact on behavior may better inform the development of health technologies to optimize benefits for patients.
Dr. Wu describes the activity of developing new health technologies as requiring many different components. First, he describes the process as “a team of people working together,” where it is important to ensure that everyone is on board and open to making changes. Second, he emphasizes the need for a holistic approach during development, and consistent evaluation following implementation. The importance of revisions is demonstrated in one of Dr. Wu’s first projects piloting the transition from pagers to smartphones to enhance communication amongst hospital staff. The project was conducted
over two decades ago and demonstrates the evolving nature of communication systems and technologies in general. The initial study findings revealed an overall perceived improvement in the efficiency of communication, while it also increased the amount of communication between staff by reducing barriers to contact.4 Subsequently, the protocol for using smartphones underwent many revisions in past decades, and eventually switched to an entirely different system. This reveals not that the initial system was flawed, but rather that new technologies should be explored and implemented when beneficial.
When asked about the optimal balance between reliance on technology versus clinical judgment in the healthcare setting, Dr.
Wu expressed his belief that technology should support rather than replace existing practices. In particular, he emphasizes the importance of “the actual interactions with the patients and bedside care” provided by hospital staff. He also describes the beneficial role of speech recognition and artificial intelligence in aiding with documentation to increase efficiency in the delivery of care, reflecting his current research trajectory which aims to supplement and enhance the existing patient care models.
Lastly, Dr. Wu emphasizes the importance of collaboration on his projects. He references the involvement of the “researchers, the clinicians, but also technology people” in the successful implementation of new technologies. Furthermore, he directly involves patients in project development through qualitative studies exploring patient preferences. He believes that it is through learning from others and remaining open-minded towards change that health technologies can continue to evolve and benefit patients.
Chimeric Antigen Receptor (CAR)-T cell therapy is a novel immunotherapy that has recently emerged as a promising pillar to treat blood-borne cancers such as acute lymphoblastic leukemia (ALL) and non-Hodgkin’s lymphoma (NHL). This Master’s Research project aims to inform a lay public audience on the current advances into CAR-T cell therapy at Princess Margaret Cancer Centre.
This project aims to address the lack of comprehensive skeletal physiology instruction for undergraduate students at Western University. The goal is to develop a comprehensive multimedia interactive platform that supplements the PHYS 3120/2130 curriculum, providing students with a deeper understanding of bone formation, remodeling, and pathologies, while demonstrating the effectiveness of integrating innovative teaching tools to enhance their learning experience.
ChatGPT has garnered heavy media attention in the past year as a breakthrough in artificial intelligence (AI). It was created by OpenAI and introduced on their website on November 30, 2022, for public use. Its launch was met with intrigue and heavy controversy, resulting in the expression of highly polarized opinions.
ChatGPT, or ‘Chat Generative Pre-trained Transformer’, was named after its ability to produce human-like responses and its development based on the GPT-3.5 model. The AI software’s data is limited to September 2021, meaning it cannot retrieve information on events beyond this point and does not learn from its experience. Nevertheless, ChatGPT can produce content such as news articles, translate information between languages, and provide personalized recommendations for products or content based on user data. As such, uses of the model span many industries. On March 14, 2023, OpenAI revealed the newest version of the AI technology, GPT-4. It is currently in limited beta testing, but ChatGPT Plus has been trained with GPT-4 and is available for public use at $20 USD/month.
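For context, developers access these models through OpenAI's API rather than the chat website. The sketch below is a minimal example using the openai Python package as it worked around the time of writing (the pre-1.0 interface); it assumes an API key is set in the environment, and model availability, pricing, and the client interface may have changed since.

```python
import os
import openai  # openai-python < 1.0 interface (circa 2023)

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes a key is set in the environment

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # GPT-4 access required a separate waitlist at the time
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what a large language model is in one sentence."},
    ],
    temperature=0.2,
)

print(response["choices"][0]["message"]["content"])
```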
Despite its impressive abilities, the scientific research community has been skeptical about accepting ChatGPT in practice. High-impact journals such as Science and Nature have expressed concern about using this tool in publications. Recent authorship guidelines published
by Nature state: “Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria.”1 Similarly, Science released: “Text generated from AI, machine learning, or similar algorithmic tools cannot be used in papers published in Science journals, nor can the accompanying figures, images, or graphics be the products of such tools...In addition, an AI program cannot be an author of a Science journal paper.”2
All authors are responsible for producing truthful and ethical research for which they are held liable; ChatGPT does not fit this definition. It has been known to make fabricated statements and cannot be held responsible for any ‘hallucinations’. An example is the software’s ability to convincingly present information from publications that do not exist.3 Although some believe that the tool was never meant to be used for data collection but for text generation instead, ChatGPT can still exacerbate misinformation. As researchers, our main concern is mitigating any bias, which can severely skew and cause misinterpretation of findings. Since this tool has been trained using vast amounts of data produced by humans, it carries inherent biases in its outputs. OpenAI recognizes ChatGPT’s unfortunate gender, race, cultural, and political biases. For example, a recent article from Brookings referenced two separate queries evaluating President Biden and President Trump. ChatGPT highlighted more positive events, such as his notable accomplishments, when
describing President Biden.4 Although scientific research is never wholly objective and flawless, it is crucial to be aware of the biases that our tools introduce. Thus, mindlessly employing ChatGPT in all aspects of one’s research can be incredibly dangerous.
There have also been questions surrounding the privacy regulations of ChatGPT. On March 31, 2023, Italy banned the use of ChatGPT due to this rising concern.5 The Italian Data Protection Authority was concerned that ChatGPT was unlawfully collecting user data and providing access to inappropriate media to underage users. However, as of late April 2023, it has been reinstated in Italy as OpenAI has implemented tools to protect European users.6 Unsurprisingly, other countries, including Canada, Germany, France, Ireland, and Spain, have jumped on the bandwagon and are considering opening investigations into the software.6
Conversely, other organizations clearly see a benefit in this software as a means to improve autonomy and accessibility among disadvantaged individuals. On March 14, 2023, OpenAI announced its collaboration with Be My Eyes, an assistance app for the visually impaired, with the introduction of the Be My Eyes Virtual Volunteer.7 The chatbot is an image-to-text generator where users can upload various images. The AI software provides spoken language back, thus significantly improving autonomy in decision-making among
visually impaired individuals. For example, users can snap a picture of a bottle of sauce in the grocery store, and the software can name it (even if the bottle is written in a different language). It can also provide step-by-step recipes using the ingredient! Currently, the Virtual Volunteer is in closed beta testing; however, a waitlist is available for eager users. On the same day, OpenAI announced its partnership with Duolingo, a widely used language-learning app, by offering Duolingo Max.8 It provides learners with the basic Duolingo interactive exercises and two new features: Explain My Answer (a chatbot that explains why your answer was correct or incorrect) and Roleplay (a chatbot that engages in conversation). Duolingo Max aims to provide highly personalized feedback to all users at an affordable price ($30 USD/month or $168 USD/year). This is substantially more reasonable, considering English tutors charge $30/hour on average,9 ultimately improving access to language lessons. However, both companies understand the
shortcomings of GPT-4 and are carefully reviewing feedback to ensure the quality of their apps. Nonetheless, a growing number of companies are integrating GPT-4 into their apps or websites, including Stripe (international payment), Khan Academy/Khamingo (education), Snapchat (social media), and most notably, Microsoft’s search engine, Bing. Other mega-companies seem to be envious of this widespread use of ChatGPT and are challenging OpenAI by launching their own AI-powered chatbots, including Google.10
It would be naïve to deny the growing gaps AI software can address and the billions being contributed to this industry by influential tech giants; however, continuing the conversation about its benefits and drawbacks is critical. Even in its infancy, AI has profoundly impacted humanity, so I, along with millions of others, am anxious to see what comes next. Who knows, maybe this article was authored by ChatGPT (or should I say “me”)…?
1. Authorship | Nature Portfolio [Internet]. Nature. 2023. Available from: https://www.nature.com/nature-portfolio/editorial-policies/ authorship
2. Science Journals: Editorial Policies [Internet]. Science. Available from: https://www.science.org/content/page/ science-journals-editorial-policies?adobe_mc=MCMID% 3D79730734082570706754102817179663373464%7CMCORGID%3D242B6472541199F70A4C98A6%2540AdobeOrg%7CTS%3D1675352420#authorship
3. Welborn A. ChatGPT and Fake Citations [Internet]. Duke. 2023. Available from: https://blogs.library.duke.edu/blog/2023/03/09/ chatgpt-and-fake-citations/
4. Baum J, Villasenor J. The politics of AI: ChatGPT and political bias [Internet]. Brookings. 2023. Available from: https://www.brookings. edu/blog/techtank/2023/05/08/the-politics-of-ai-chatgpt-and-political-bias/
5. McCallum S. ChatGPT banned in Italy over privacy concerns [Internet]. BBC News. 2023. Available from: https://www.bbc.com/ news/technology-65139406
6. Robertson A. ChatGPT returns to Italy after ban. The Verge. 2023.
7. Introducing Our Virtual Volunteer Tool for People who are Blind or Have Low Vision, Powered by OpenAI’s GPT-4 [Internet]. Be My Eyes. Available from: https://www.bemyeyes.com/blog/introducing-be-my-eyes-virtual-volunteer
8. Duolingo Team. Duolingo Max Uses OpenAI’s GPT-4 For New Learning Features [Internet]. 2023. Available from: https://blog. duolingo.com/duolingo-max/
9. How Much Do English Tutors Cost? [Internet]. TutorOcean. Available from: https://corp.tutorocean.com/costs/how-much-doenglish-tutors-cost/
10. Kleinman Z. Bard: Google launches ChatGPT rival [Internet]. BBC News. 2023. Available from: https://www.bbc.com/news/technology-64546299
The impact of artificial intelligence (AI) on the healthcare system is becoming increasingly hard to overlook. The global market for AI technology is predicted to increase at a compound annual growth rate of 37% by 2030.1 With its ability to analyze and make predictions from a large amount of data, leading experts believe that AI can help healthcare professionals prevent disease, make accurate diagnoses, and suggest treatments tailored to specific patients.2
Although relevant in many aspects of healthcare, AI is set to play an extremely significant role in clinical judgement, an aspect of healthcare that is often the most relevant to patients. Clinical judgement refers to the application of knowledge and skills about best medical practices, gained over time through the analysis and synthesis of patient data. Clinical judgement can be a complex task for healthcare providers, and AI technology has the potential to greatly aid them in the important decisions they make about patient treatment. However, there are concerns that AI may lead to a loss of critical thinking in physicians, eventually fully replacing human judgment in decision-making.3 While it is hard to believe that AI will ever autonomously make diagnoses and treatment recommendations for patients, it is important for us to understand where AI stands in the clinical decision-making process. And it is crucial that AI is viewed as merely a supportive tool.
But what exactly is AI referring to, and why is there so much hype about its use in healthcare? The term “AI” has been around
for decades now. AI in itself is a catchall term for a multidisciplinary field that focuses on creating computers which perform tasks normally associated with human intelligence. Within AI, there is a branch called machine learning, where computers use patterns from structured data (also known as the training data) to construct algorithms: a set of rules that computers follow to carry out pre-specified operations. These operations are carried out on data that is significant to a particular goal (also known as the testing data). Technology that uses machine learning differs from earlier forms of AI that relied on pre-programmed rules to perform tasks.
However, a further subset of machine learning called deep learning is what is driving the hype behind AI in recent years.4 One 2017 study published in Nature found that a deep learning system was capable of classifying skin cancer, both common and uncommon types, at a level of competence comparable to certified dermatologists.5 And that study was published six years ago. Recently, one study found that a deep learning algorithm could predict whether patients who suffer from less severe forms of acute kidney injury (AKI) would progress to a more fatal form given their current symptom characteristics.6
In deep learning, computers recognize patterns from unstructured data and use these patterns to build algorithms, ultimately leading to predictions. This “new” AI finds patterns and constructs algorithms from training data with less human intervention required.
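The training/testing distinction described above is easy to see in code. The short example below, built on synthetic data with scikit-learn, fits a model on training data and then checks how well its learned rules transfer to testing data it has never seen.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Synthetic "patient" records: 1,000 rows, 10 features, binary outcome.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Training data teaches the algorithm; testing data checks whether what it
# learned generalizes to cases it has never seen.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("accuracy on unseen cases:", round(accuracy_score(y_test, model.predict(X_test)), 2))
```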
Some advantages of deep learning are apparent. These algorithms can identify patterns and correlations in health data that may be missed by humans, leading to more accurate diagnoses and treatment strategies.7 This is especially useful as a preventative measure. AI has the potential to predict patient outcomes based on their medical history, potentially leading to preventative treatment strategies. An example of this was presented earlier with the deep learning system that classified cancer. Furthermore, AI can be trained to recognize differences in disease characteristics between patients, and then suggest interventions that are tailored to the patient.6 This feature is relevant as healthcare is increasingly adopting the precision medicine model of personalized and tailored diagnoses and interventions.
Its predictions are impressive, especially those relevant to the field of healthcare.
Other advantages of AI-based technologies in healthcare include an objective assessment of patient data. All healthcare professionals may harbour some amount of bias in the clinical setting based on patients’ race, age, and socioeconomic status, but since AI is only trained on raw data (in the absence of any external manipulation), it produces more objective predictions.
However, because AI systems are only as good as the data on which they are trained, AI-generated predictions have their own faults. If the training data is incomplete or biased in some way, the predictions that AI makes can be inaccurate. Furthermore, AI may not take into account patients’ social and cultural contexts. For example, if the data used to train a model comes primarily from white, middle-class males, its predictions will be less accurate for individuals belonging to other demographic groups. If clinicians rely too heavily on AI-generated treatment recommendations, these important contextual factors may be overlooked. AI may also reinforce biases and discrimination through its data-driven predictions, deepening inequalities in healthcare.
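One simple, concrete way to surface this problem is to report a model’s accuracy separately for each demographic group rather than as a single overall figure. The sketch below is hypothetical: the groups, data, and the way group B is underrepresented are all synthetic, chosen only to illustrate how subgroup evaluation can reveal hidden performance gaps.

```python
# Hypothetical illustration: overall accuracy can hide poor performance on
# groups that were underrepresented in the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic patients whose feature-outcome relationship depends on 'shift'."""
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Training data dominated by group A; group B is barely represented.
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Evaluate on fresh data from each group separately.
for name, (X, y) in {"Group A": make_group(500, 0.0),
                     "Group B": make_group(500, 2.0)}.items():
    print(name, "accuracy:", round(accuracy_score(y, model.predict(X)), 2))
```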
While developers of AI may address some of these aspects as advancements are made, human oversight is still required. AI systems should be continually monitored and evaluated to test for accuracy, especially as the technology becomes increasingly autonomous. AI algorithms should be transparent, and the data used
to train them should be representative of diverse populations. Moreover, healthcare professionals need to ensure that AI follows ethical principles and that the predictions that are made consider factors not captured by patient data (e.g., the aforementioned sociocultural context). And no matter how advanced AI becomes, there is something comforting about having a human physician make the final call for diagnosis and treatment.
If AI is used solely as a supportive tool alongside human judgement, healthcare will only benefit. AI can help healthcare professionals draw conclusions from large amounts of data, implement preventative interventions, and support patient monitoring and follow-up. However, it is important to recognize the flaws inherent to AI, which call for its cautious integration into healthcare. For healthcare providers, it should remain a supportive tool rather than a substitute for clinical judgment. Moreover, AI should be implemented ethically and appropriately, with oversight from all stakeholders in healthcare. Ultimately, AI presents opportunities for massive improvements in how we deliver healthcare, but to realize them, healthcare providers must follow a model of patient-centered, evidence-informed care in which AI plays a supportive role in clinical decision-making.
1. Stewart C. Artificial Intelligence (AI) in healthcare market size worldwide from 2021 to 2030 [Internet]. 2023 Mar [cited 2023 Apr 17]. Available from: https://www.statista.com/statistics/1334826/ai-in-healthcare-market-size-worldwide/
2. Bohr A, Memarzadeh K. The rise of artificial intelligence in healthcare applications. In: Artificial Intelligence in Healthcare [Internet]. Elsevier; 2020 [cited 2023 Apr 16]. p. 25–60. Available from: https://linkinghub.elsevier.com/retrieve/pii/B9780128184387000022
3. Froomkin AM, Kerr IR, Pineau J. When AIs Outperform Doctors: The Dangers of a Tort-Induced Over-Reliance on Machine Learning and What (Not) to Do About it. SSRN Journal [Internet]. 2018 [cited 2023 Apr 16]. Available from: https://www.ssrn.com/abstract=3114347
4. Pettit RW, Fullem R, Cheng C, Amos CI. Artificial intelligence, machine learning, and deep learning for clinical outcome prediction. Emerging Topics in Life Sciences. 2021 Dec 21;5(6):729–45.
5. Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017 Feb 2;542(7639):115–8.
6. Wei C, Zhang L, Feng Y, Ma A, Kang Y. Machine learning model for predicting acute kidney injury progression in critically ill patients. BMC Med Inform Decis Mak. 2022 Jan 19;22:17. doi: 10.1186/s12911-021-01740-2
7. Johnson KB, Wei W, Weeraratne D, Frisse ME, Misulis K, Rhee K, et al. Precision Medicine, AI, and the Future of Personalized Health Care. Clin Transl Sci. 2021 Jan;14(1):86–93.
In 2020, there were approximately 20,000 registered working psychologists in Canada, a number that has nearly doubled since 2015.1 While at first glance this may seem like a steep improvement in access to mental health care, the increase looks insignificant next to the approximately 5 million Canadians who express a need for mental health treatment every year.2 Factoring in the effects of the COVID-19 pandemic, during which over 50% of Canadians reported worsening mental health,2 the need for therapeutic services is critical.
Over the last decade, there has been a global rise in mental health initiatives, such as Bell Let’s Talk and Time to Change, designed not only to encourage much-needed conversation with affected individuals but also to bring a more holistic approach to supporting and educating the general population. Programs like these highlight how debilitating untreated mental illness can be, while emphasizing the benefits of therapy for improving wellbeing and for better understanding one’s own mental health. However, while our communities are becoming better educated, access to prompt, affordable, and personalized psychotherapeutic services is becoming increasingly unattainable.
In a parallel timeframe, the world of artificial intelligence has also seen
explosive growth, particularly within the healthcare sector. Currently valued at roughly 20 billion US dollars and estimated to exceed 180 billion US dollars by 2030,3 the market is beginning to create tension within the field itself as recent technological advances drive a rise in AI-assisted techniques. Will AI replace psychologists?
In 2022, almost 50% of psychologists in the U.S. reported feelings of burnout and an inability to meet increasing demand.4 While practitioners in all clinical subfields of psychology and psychiatry are being pushed to their limits, the plea for psychiatric help is only getting louder: studies show that mental health and addiction-related emergency department visit rates in certain Canadian provinces increased by 89.1% between 2006 and 2017,5 a number that continues to climb. Another consideration is the cost of training mental health specialists. In North America, an individual must complete either a doctorate followed by clinical training or a medical degree with specialization. Altogether, the process can take up to 15 years of higher education, and with the average doctoral degree in the U.S. costing 40 thousand dollars per year in tuition alone6 and medical school costing 60 thousand,7 incoming practitioners can wind up with hundreds of thousands of dollars of debt before even starting their official practice.
Looking beyond practitioners themselves,
the system is crumbling from within due to prominent barriers to accessibility and affordability. Structural barriers plague the field, particularly in countries experiencing socioeconomic conflict and in rural areas of developed countries. In the United States, surveys show that 45% of individuals with a clinical-level mental health problem in 2020 did not seek professional help, with over half of these cases due to high expenses and a lack of access.8 Even with extensive healthcare coverage, therapy can easily cost over $100 out-of-pocket per session, which quickly snowballs with recurring appointments.
As such, arguments can sensibly be made in favor of implementing a fully AI-based mental health care system that eliminates these long-lasting impediments. Interactive chatbots and avatars that engage with patients in the form of a virtual psychotherapist can remediate
accessibility issues with the touch of a button. Travel times no longer have to be considered when weighing cost-benefit, living in remote rural areas no longer has to come with disadvantages, and above all, ridiculous month-to-year-long waitlists for one appointment can finally be a thing of the past. Instead of one psychologist working with 15-25 clients, a well-programmed AI intervention has the potential to cater to entire communities and populations. In addition to creating a more equitable framework, digitized mental health care has the allure of a clean slate—the possibility of creating an upgraded system free of inconsistent
human biases and inequalities. Stigma-free initiatives in machine and deep learning are gaining a foothold worldwide and demonstrating the benefits of virtual care that is both flexible and accurate. Studies show that certain algorithms have over 90% accuracy in spotting behavioral symptoms indicative of anxiety and 100% accuracy in predicting which at-risk teens are likely to develop psychosis.9
It’s easy to think that AI could be the solution to most of these problems, and while that may have merit, a completely digital mental health care system is not without its own limitations. A central point of concern is the ability of AI to comprehend and reciprocate the entire
spectrum of human emotion. This is especially important in mental health, since psychology is, by definition, the study of people’s minds and behaviours. A second concern stems from the human attachment to tradition and apprehension toward novelty: people may push back against opening up to, or placing their trust in, a robot. Technological mistakes are a normal part of any automated system, and with skeptics underscoring the dangers of such advanced AI, many are hesitant to rely on non-human technologies that could override human control. Research also shows that data and models predominantly remain private and that there is little collaboration between researchers,10 putting the transparency and real-world viability of AI models into question.
Will AI replace psychologists? The answer is still unclear. The debate becomes even more nuanced when delving into the four levels of AI, which range from reactive machines to self-aware entities. Current research hovers in the middle of this spectrum and is a long way from fully conscious AI with the same empathic abilities as human beings.10 At present, lab-based initiatives are being translated into clinical applications, though only in a supporting role in therapy. Compared to other healthcare fields such as radiology or pathology, where AI has matched or exceeded human accuracy on certain tasks, digitized mental healthcare has yet to fully substantiate the bold claims and aspirations we have imposed upon it. However, if advancements continue to be
made at the rate they are today, we have a good chance of welcoming AI robots into psychotherapeutic practice in the near future.
1. Number of psychologists Canada 2008-2018. Statista. Available from: https://www.statista.com/statistics/806108/psychologist-number-in-canada/
2. Government of Canada. Mental Illness in Canada - Data Blog - Chronic Disease Infobase. Public Health Agency of Canada; 2016. Available from: https://health-infobase.canada.ca/datalab/mental-illness-blog.html
3. AI in healthcare market size worldwide 2030. Statista. Available from: https://www.statista.com/statistics/1334826/ai-in-healthcare-market-size-worldwide/
4. apa.org. 2023 [accessed 2023 Apr 9]. Available from: https://www.apa.org/monitor/2023/04/psychologists-covid-burnout
5. Chiu M, Gatov E, Fung K, et al. Deconstructing The Rise In Mental Health-Related ED Visits Among Children And Youth In Ontario, Canada. Health Aff (Millwood). 2020 Oct;39(10):1728-1736. doi: 10.1377/hlthaff.2020.00232. PMID: 33017254.
6. How Much Does a Ph.D. Cost? BestColleges. [accessed 2023 Apr 9]. Available from: https://www.bestcolleges.com/research/cost-of-phd/
7. Tuition at Every Medical School in the United States (Updated in 2019). Shemmassian Academic Consulting. Available from: https://www.shemmassianconsulting.com/blog/medical-school-tuition
8. Nietzel MT. Almost Half Of Americans Don’t Seek Professional Help For Mental Disorders. Forbes. [accessed 2023 Apr 9]. Available from: https://www.forbes.com/sites/michaeltnietzel/2021/05/24/why-so-many-americans-do-not-seek-professional-help-for-mental-disorders/
9. LaFrance A. The Algorithm that Predicts Psychosis. The Atlantic. 2015 Aug 26. Available from: https://www.theatlantic.com/technology/archive/2015/08/speech-analysis-schizophrenia-algorithm/402265/
10. Daren S. Will AI replace Psychologists | Future of Psychology. InData Labs. 2021 Oct 12. Available from: https://indatalabs.com/blog/will-ai-replace-psychologists
The National Institutes of Health has committed to investing $130 million USD by 2026 in innovations focused on artificial intelligence (AI) to accelerate its application to biomedical research and healthcare.1 AI includes any tool that uses existing data and algorithms to effectively identify and solve problems in different systems. It has become more common in healthcare, whether as a diagnostic tool for chronic illnesses like diabetes, a rehabilitation tool, or a component of cancer therapy. Researchers continue to look for ways to use AI to improve access to care, promote positive patient outcomes, and increase the healthcare system’s efficiency. Despite the growing acceptance of this technology in the Global North (i.e., the developed world), perspectives in the Global South (i.e., the developing world) are more skeptical.2
Clinicians and the general public in the Global South often view AI as an unfavourable tool that could compromise their safety, promote discrimination, and exacerbate challenges to their day-to-day living.3 Because we perceive AI as effective and applicable in the Global North, it is easy to dismiss the fears and skepticism of the Global South as unjustified. Nevertheless, it is crucial to consider the possible reasons behind this fear.
Firstly, AI has been described as a “new colonialism” given its continued ability to learn from data and to survey people and their activities.4 Colonialism and neocolonialism, the experience of having one’s spaces and ways of life dominated by settlers, are firmly rooted realities for citizens of the Global South, who continue to face their consequences today.5 The loss of aspects of culture, lifestyle, and identity to colonization has left long-term impacts and fostered mistrust of the Global North.6 This is not new; during COVID-19, vaccine mistrust was highly prevalent among people in Sub-Saharan Africa, who connected their experiences of colonialism to vaccination.7 Since health-oriented AI often originates from the Global North, this ingrained mistrust makes citizens of the South doubtful. Similarly, fears of potential discrimination and oppression persist, given past experiences.3
Secondly, imagine needing a screwdriver but having only a hammer. Sounds frustrating, right? Frontline health workers in the Global South are often put in similar situations, expected to learn and implement technology from the North with minimal training.8 For example, in rural India, the implementation of AI-enabled mobile health applications left community health workers feeling the tool was ineffective and, in some cases, doubting their own abilities as they struggled to adapt to a system they were not trained in.9 Differences in medical training, familiarity with AI-based technology, needs, and perceptions are often neglected when implementing healthcare AI in the Global South.8 This challenges both the implementation of AI and workers’ current practices, as they must allocate time and effort to learning tools that are not tailored to their needs.
Thirdly, the implementation of AI in the Global South is often not designed to be sustainable.10 In many cases, healthcare AI is dropped off in these countries with the expectation of achieving the same success seen in the Global North.9 Yet even if patients and clinicians trust these tools and have adequate training to implement them, the health system must be prepared to sustain them. The implementation of health-based AI often neglects the role of the supply chain (i.e., available natural resources, production lines, and equipment) in building and maintaining these tools.9 The failure to acknowledge how these factors limit the safe and responsible use of health AI is reflected in the lack of policies prioritizing its successful implementation in the Global South.10
The barriers that prevent the successful implementation of health-oriented AI in the Global South can be addressed with adequate consideration. Acknowledging the consequences of colonialism and neocolonialism on the Global South before implementing health-oriented AI can be the first step in successful
translation. Preparing both the Global North and South for discussion of health-oriented AI creates the opportunity to address hesitations and foreseeable challenges and to improve its implementation and sustainability. The potential applications remain vast as we work towards a future that will inevitably rely on AI for day-to-day tasks. Collaboratively tackling these underlying concerns can ensure that everyone benefits from this new era of health.
1. National Institutes of Health. NIH launches Bridge2AI program to expand the use of artificial intelligence in biomedical and behavioural research. National Institutes of Health, U.S. Department of Health and Human Services; 2022 [cited 2023 Apr 10]. Available from: https://www.nih.gov/news-events/news-releases/nih-launches-bridge2ai-program-expand-use-artificial-intelligence-biomedical-behavioral-research
2. Wall PJ, Saxena D, Brown S. Artificial intelligence in the Global South (AI4D): Potential and risks. arXiv preprint arXiv:2108.10093. 2021 Aug 23.
3. Dubber MD, Pasquale F, Das S. AI and the Global South: Designing for Other Worlds. In: The Oxford Handbook of Ethics of Ai. Oxford: Oxford University Press; 2021.
4. Sahbaz U. Artificial intelligence and the risk of new colonialism. [Internet]. 2019 Jul 1(14):58-71. Available from: https://www.jstor.org/stable/48573727
5. Iyer L. Direct versus indirect colonial rule in India: Long-term consequences. The Review of Economics and Statistics. 2010 Nov 1;92(4):693-713.
6. Wietzke FB. Long-term consequences of colonial institutions and human capital investments: Sub-national evidence from Madagascar. World Development. 2015 Feb 1;66:293-307.
7. Mutombo PN, Fallah MP, Munodawafa D, Kabel A, Houeto D, Goronga T, Mweemba O, Balance G, Onya H, Kamba RS, Chipimo M, Kayembe JN, Akanmori B. COVID-19 vaccine hesitancy in Africa: a call to action. Lancet Glob Health. 2022 Mar;10(3):e320-e321. doi: 10.1016/S2214-109X(21)00563-5. Epub 2021 Dec 20.
8. Damoah IS, Ayakwah A, Tingbani I. Artificial intelligence (AI)-enhanced medical drones in the healthcare supply chain (HSC) for sustainability development: A case study. Journal of Cleaner Production. 2021 Dec 15;328:129598.
9. Okolo CT. Optimizing human-centered AI for healthcare in the Global South. Patterns. 2022 Jan 3:100421.
10. Naidoo S, Bottomley D, Naidoo M, et al. Artificial intelligence in healthcare: Proposals for policy development in South Africa. S Afr J Bioeth Law. 2022 Aug 5;15(1):11-16.
Dr. Mark Boulos is a stroke & sleep neurologist, associate professor, and clinician-investigator in the Division of Neurology at the University of Toronto and Sunnybrook Health Sciences Centre, as well as the Medical Lead for the Sunnybrook Sleep Laboratory. Dr. Boulos oversees a research program that investigates the association of sleep disorders with TIA/stroke, dementia, and other neurological disorders.
Dr. David Cescon is a breast medical oncologist and clinician-scientist at Princess Margaret Cancer Centre. His research integrates laboratory and clinical studies that are focused on the identification of breast cancer therapeutic vulnerabilities and determinants of drug response and resistance.
Dr. Helen Cheung is an abdominal radiologist at Sunnybrook Health Sciences Centre and an assistant professor in Medical Imaging with a research interest in the use of imaging biomarkers for cancer research. Her main research focus is the use of MR biomarkers to predict biology and long-term outcomes in colorectal liver metastases.
Dr. Sage is an assistant scientist at the Toronto General Hospital Research Institute and an assistant professor at the University of Toronto. His research is focused on artificial intelligence and novel medical devices that guide surgical decisionmaking during lung transplantation.
Dr. Robert Grant is a clinician-investigator and a medical oncologist at Princess Margaret Cancer Centre. He applies machine learning to electronic health records and high-dimensional biological datasets to improve outcomes for patients with cancer.
Dr. Brigitte Zrenner is a clinician-scientist with the Temerty Centre for Therapeutic Brain Intervention and the Mood and Anxiety Ambulatory Services. Her research interests include the mechanisms of pathophysiology in major depressive disorder and obsessive-compulsive disorder, and the translational development of individualized brain stimulation protocols.
Dr. Kate Nelson leads the IDEA lab (Integrating Data, Experience and Advocacy) at SickKids, which develops strategies to navigate medical uncertainty for children with life-threatening illnesses. The lab’s current focus is on children with medical complexity arising from neurologic conditions.
Dr. Kazuyoshi Aoyama is an associate professor in the Department of Anesthesiology and Pain Medicine. He is a staff anesthesiologist at the Hospital for Sick Children and an associate scientist at SickKids Research Institute. Aoyama’s main research interest is Health Services Research.
Dr. Amin Madani’s research focus is in surgical expertise and the optimization of performance in the operating room, including the development of technologies and innovations that incorporate artificial intelligence, computer vision, and advanced simulation.
Dr. Osnat C. Melamed is a family and addictions physician at the Centre for Addiction and Mental Health. She is also an assistant professor in the Department of Family and Community Medicine at the University of Toronto. As a clinician-scientist, her research explores tobacco addiction and gender-related intersections with substance use, mental health, and physical health. She specifically investigates the efficacy of digital health tools like apps and chatbots for smoking cessation.
Dr. Hung’s research focuses on characterizing individual molecular profiles related to complex disease etiology and progression. Dr. Hung has expertise in integrative multi-omics data science based on machine learning analytics using high-dimensional data for disease prediction. She has been leading several large-scale international studies, including studies of aero-digestive tract cancer, pancreatic cancer, and childhood cancers. In addition, Dr. Hung is leading work on early-life determinants of complex diseases in longitudinal cohort studies.
Dr. Cindi Morshead believes that uncovering effective, long-term treatments to repair the nervous system in neurological diseases reflects one of the last frontiers yet to be crossed in medical research. Harnessing the regenerative potential of stem cells could be our passport to the other side.
With decades-long expertise in the field of stem cells and neural repair, Dr. Morshead is leading the charge at the University of Toronto to find cures for neurological diseases. As Division Chair of Anatomy in the Department of Surgery and cross-appointed to a multitude of UofT-affiliated research institutions, including the Institute of Medical Science, Institute of Biomedical Engineering, Donnelly Centre, and Rehabilitation Sciences Institute, Dr. Morshead has propelled her research program onto the international stage with an integrative mindset. Her lab brings together an impressive docket of neuroscientists, stem cell biologists, and biomedical engineers, uniting their expertise for a common goal: healing the damaged brain.
The challenges in treatment discovery for neurological diseases are substantial. For one, neurological diseases are vast, complex, and variable, and encompass a broad spectrum of conditions, such as Alzheimer’s disease, cerebral palsy, multiple sclerosis, stroke, and spinal cord injury. To add to the challenge, unlike most other major organs, the brain is one of few that isn’t meant to repair
itself. As such, neurological diseases are devastatingly progressive, and at present, largely incurable. However, Dr. Morshead explains that the regenerative potential of stem cells holds promise in the ability to promote neural repair in the brain and nervous system. The Morshead research team uses the foundational qualities of stem cells to study a versatile array of strategies for brain self-repair, including endogenous cell reprogramming, electric field stimulation, and metformin-induced neural stem cell activation, in both in vivo and in vitro models of disease.
‘Neural stem cells’ are three words that have been in Dr. Morshead’s vocabulary for quite a while; in fact, they date back to the very beginning of her research career as a summer undergraduate student in the lab of Dr. Derek van der Kooy at UofT in 1985. Though at the time she was completing a degree in human physiology and philosophy, Dr. Morshead developed a keen interest in neuroscience after taking an upper-year neuropsychopharmacology course—a far cry from her philosophy prerequisites. Today, Dr. Morshead reflects that it was likely the convergence of the philosophical nature of cognition with the physiological operation of the human brain that drew her towards research in neurobiology and neurodegeneration. After all, neurodegeneration implicates the very structures that allow us to interact with the world—in a sense, the brain defines our humanity.
Fueling her passion for these intersecting disciplines, Dr. Morshead continued her journey in research as a PhD student in the van der Kooy lab. Through her thesis work, she identified the neural stem cell niche in the adult brain. At the time, the scientific community was aware of the seminal finding that neural stem cells were present in the adult brain, and Dr. Morshead’s pioneering discovery of their locale provided pivotal insight into the cellular and molecular microenvironment that regulates neural stem cell function, fate, and behaviour. This laid the fundamental groundwork for the advancement of neural stem cell biology for decades to come.
After the successes of her PhD, Dr. Morshead continued her momentum as a postdoctoral fellow in the van der Kooy lab, where she developed an interest in the application of neural stem cells to neural repair. This paved the way for her to start her own research program in 2003 within the Department of Surgery. Throughout
her 20-year tenure at UofT, Dr. Morshead has mentored hundreds of students in her lab, served as an advisor outside her lab for those seeking her expertise, and previously spent several years as an IMS Graduate Coordinator. Her significant contributions to research, leadership, teaching, and mentorship have been recognized with UofT’s highly esteemed Lister Prize, the Award for Excellence in Graduate Teaching and Mentorship, and the Institute of Medical Science’s 50 Faces, to name a few.
With such a rich background of accomplishments and experiences, I asked Dr. Morshead what she considers to be the most pivotal achievements in her life. The first she shared was deeply personal: “Being a mom,” she said. “I have two boys, and they’re everything to me.” She added that having a family has allowed her to maintain a healthy balance between work and life and has kept her motivated in her career. In addition to motherhood, Dr. Morshead described the great personal reward she has found in collaborating with her scientific peers. “I love collaborating
in the same way that I love mentoring. You’re just always learning something new. And it’s also knowing that they [her peers] respect you and want to work with you that is very rewarding.” After all, gaining respect as a woman in science is something with which Dr. Morshead has become well-acquainted throughout her career. As one of six female scientists in a building of 30 faculty at the Donnelly Centre where her lab resides, she acknowledges that there is still a lot of work to be done to improve the landscape for women in science and academia.
If finding a cure for neurological diseases is a last frontier in medical research, Dr. Morshead stands at the forefront. In the future, her team’s contributions in the field of stem cells and regenerative medicine may help improve millions of lives. How does one reach this stature, which many of us graduate students would regard as the pinnacle of success as a scientist? Dr. Morshead offered two mainstay pieces of advice: one, find a mentor—someone with whom you can truly connect and whom you can trust. Two, put in the hard work to achieve the goals you are passionate about.
At the outset of her academic career, Dr. Morshead adopted a simple yet powerful motto: “I’m going to keep doing things as long as I like them.” She credits this mantra for guiding her journey and shaping her into the person she is now. Her remarkable achievements—as a scientist, mother, professor, mentor, and trailblazing woman—serve as concrete evidence to us all that the principle of her motto is indeed effective: when you are driven by passion, you really can change the world.
While having a degree in the field you want to work in is helpful, there are other ways to gain the required skills. In fact, it is common for professionals to develop new goals as their careers progress, and often these goals involve switching fields entirely. The desire to pursue a new career path may arise right after graduating; other times, the transition happens even sooner, halfway through training. Helen Liu, a recent graduate of the Master of Health Science (MHSc) program at the Institute of Medical Science (IMS), is a perfect example of a student who shifted her professional interests from neurocognitive science to finance during her graduate training. IMS Magazine recently interviewed Helen to learn about the steps she took to transform her career.
Helen started her course-based MHSc program in 2017. However, shortly after, she realized that a career in scientific research was not the right fit for her. “I did not see myself in basic science. I was more interested in what happens after you publish a paper,” Helen said. Indeed, Helen wanted to help scientists advance their research and innovations by securing the necessary funding to conduct additional studies. For instance, imagine a scientist who has conducted animal studies and discovered a small molecule with therapeutic properties. If the scientist has an entrepreneurial mindset, they will approach technology transfer offices at their university or
hospital to protect their innovation by applying for a patent. The technology transfer office teams at these academic institutions will then leverage the research and the patent to look for business opportunities. They could do so by partnering with an investment management organization, such as MaRS Innovation. MaRS Innovation will then investigate how many more animal studies are needed before moving to clinical trials and how much funding is required to complete the pre-clinical work. If the product is determined to be worth developing, the scientist could either set up a start-up company and keep developing the molecule or sell it to a larger pharmaceutical organization. Often, organizations such as MaRS Innovation help find larger investors eager to see start-ups grow and small molecules develop into profitable products.
“I was interested in learning how a molecule becomes a profitable product.” Helen echoed, and this exciting process made her pursue the field of finance. Once Helen decided to become a healthcare investor, she started taking elective MBA courses at the Rotman School of Management to gain the necessary skillset for her future career. These courses provided her with invaluable training. For instance, in the portfolio management and security analysis course, she learned how to perform financial analysis on publicly traded companies to decide whether their shares should be bought or sold. A business law class taught her basic areas
of law that typically affect a business’s operations, including how corporate contracts are structured and what happens when contract agreement provisions are violated. A pharmaceutical strategy course taught her about drug life cycle management. In all her jobs so far, she has been able to leverage something different from each class and apply these skills to her different roles.
With these qualifications in hand, after finishing her MHSc degree, Helen was ready to embrace her first position as an Analyst on the Life Science Technology and Venture Development team at MaRS Innovation. Within this role, Helen was constantly looking for investment opportunities for biopharmaceutical products at their early development stage, as she was part of a team that was responsible for determining whether the biopharmaceuticals were worth investing in. Following this role, Helen worked as an Associate on the Active Equities team at the Canada Pension Plan Investment Board, where she invested future retirement funds in healthcare companies.
Today, Helen is employed as an Associate at Sagard and is focused on investing in healthcare royalties and credit opportunities. Royalty investments are upfront payments to purchase the rights to use or profit from biopharmaceutical products that could generate future revenues. At Sagard, Helen performs financial forecasting of pharmaceutical products and evaluates whether investing in
their future product sales would be profitable. This includes understanding and predicting the drug’s pricing, reimbursement, competition, and other variables. Proceeds from these investments help biopharmaceutical companies continue to advance their research and development.
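To give a rough sense of the arithmetic behind such an evaluation, the sketch below discounts a hypothetical stream of royalty payments back to present-day dollars and compares it to an upfront price. Every number in it (sales forecast, royalty rate, discount rate, upfront payment) is invented for illustration and does not reflect Sagard’s methods or any real deal.

```python
# A deliberately simplified, hypothetical sketch of how one might compare an
# upfront royalty payment against forecasted drug sales. All figures invented.
forecast_sales = [400, 550, 650, 600, 500]   # projected net sales, $M per year
royalty_rate = 0.08                          # share of sales paid to the investor
discount_rate = 0.11                         # reflects risk and time value of money
upfront_payment = 120                        # $M paid today for the royalty stream

# Discount each year's expected royalty back to today's dollars.
present_value = sum(
    (sales * royalty_rate) / (1 + discount_rate) ** year
    for year, sales in enumerate(forecast_sales, start=1)
)

print(f"Present value of royalties: ${present_value:.0f}M")
print(f"Net present value of deal:  ${present_value - upfront_payment:.0f}M")
```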
One of the things that Helen likes most about working at Sagard is that her position provides endless growth opportunities. “I very much enjoy that I am constantly learning about pharmaceutical innovation,” Helen noted. “Because my organization invests in new pharmaceutical products, I am always learning about new types of diseases and how these diseases can be treated and cured. Also, I am learning more about how to invest in healthcare products
and complete deals with multibillion-dollar pharmaceutical and biotechnology companies.” However, working in this field does not come without its challenges.
Working in a high-paced investment firm has its drawbacks, too. “You must learn a lot on the job, such as building financial models and conducting analysis. This can be stressful when you are simultaneously managing multiple investments.” Helen said. Even after acquiring that one skill, there is always the next level of learning that must be conquered. Moreover, the expectation is to meet tight deadlines. This means that Helen sometimes works overtime on the weekends and even pulls all-nighters. The unpredictability of her work schedule will always keep Helen on her toes and require her to keep her schedule flexible.
Since IMS is trying to implement good equity, diversity, and inclusion (EDI) practices, we asked Helen to share what kind of EDI policies are practiced in her professional field. “Finance has been a male-dominant industry,” Helen mentioned. However, she also pointed out that in recent years, policies have been introduced to hire and promote women into financial roles. For example, by 2025, a stock exchange based in New York City will require its publicly listed companies to have at least two diverse directors (one who self-identifies as female, and another who self-identifies as an underrepresented minority). Sagard, where Helen is currently employed, also
emphasizes diversity. Thus, EDI is highly prioritized, which would benefit the demographically diverse IMS students looking to pursue careers in finance.
Helen offered advice to the IMS students finishing their graduate training and wishing to follow in her footsteps. “It is never too early to think about what you want to do after graduation,” she stated. She pointed out that, at the end of their training, students often focus too much on completing their studies and writing and defending their theses. Yet, talking to people about potential work opportunities and looking for a job while still in school is essential. “Get involved with the Graduate Consulting Association at the University of Toronto,” Helen said. “Through the association, you could learn about organizations ready to hire students after graduating.” She also explained that building faculty networks is a way to explore job opportunities because the professors may know about available employment vacancies. They could even connect their current students with alumni working in the field. “So, every month, make a point of meeting one of these former students for a coffee,” Helen said. “IMS has such a professionally diverse faculty; you never know whom you will meet next, and that person may be your future colleague in the job you are looking for.”
The graduate school journey is both enriching and rewarding. However, it is a path filled with uncertainties and obstacles. Moreover, the COVID-19 pandemic has exacerbated the challenges faced by graduate students. Despite the adversities, Julia Tomasi, a PhD candidate at the Institute of Medical Science (IMS) in her sixth and final year of her PhD studies, stands as a testament to resilience and perseverance. Julia had to make a major pivot in her research due to the pandemic, but she refused to succumb to uncertainty. Instead, she embraced the hurdles as opportunities to flourish and thrive.
Julia started her post-secondary studies at the University of Toronto (UofT), double majoring in neuroscience and psychology. Throughout her time at UofT, Julia was very interested in being involved in innovation and research, especially exploring psychiatric genetics and the human brain. “In psychiatry, I felt there’s a special need for research because the treatments have not changed much over the past several years,” Julia explains. This inspired her to join Dr. James Kennedy’s lab at the Centre for Addiction and Mental Health (CAMH). Specifically, Julia was fascinated by anxiety disorders and dedicated her research efforts to identifying genetic factors that contribute to their development. “In the context of anxiety, it’s not just a feeling; it’s certain thoughts, behaviours, and physiology,” Julia explains. Since each person could experience these symptoms associated with anxiety differently and to various
degrees, Julia sought to identify underlying genetic risk factors for the disorder. Anxiety is most often studied using self-report measures that are prone to bias, and thus there is a need for more biologically based markers. To achieve this, Julia needed to establish objective measures of anxiety. She explored physiological indicators associated with anxiety, including startle response (the magnitude of one’s reaction to a sudden noise) and heart rate variability (the variation in the time intervals between consecutive heartbeats). Julia then delved into genes linked to these objective measures, examining whether they could also provide insight into anxiety risk.
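For readers curious how a measure like heart rate variability is actually quantified, the sketch below computes one common metric, RMSSD (the root mean square of successive differences between heartbeats), from a short, made-up series of inter-beat intervals; the numbers are illustrative and are not data from Julia’s study.

```python
# Illustrative only: compute RMSSD, a common heart rate variability metric,
# from a made-up series of inter-beat (RR) intervals in milliseconds.
import math

rr_intervals_ms = [812, 798, 830, 845, 820, 805, 790, 815]  # hypothetical data

# Differences between consecutive heartbeats.
successive_diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]

# Root mean square of those successive differences.
rmssd = math.sqrt(sum(d ** 2 for d in successive_diffs) / len(successive_diffs))

print(f"RMSSD: {rmssd:.1f} ms (higher values generally indicate greater variability)")
```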
As Julia commenced her recruitment efforts for her primary PhD project, the COVID-19 pandemic restricted research operations, shutting down research facilities. Before the pandemic, Julia had devised a plan for her PhD project that entailed collecting startle response data from patients with anxiety disorders and healthy individuals, determining whether genetic markers that predict exaggerated startle also contribute to the risk of developing anxiety disorders. Although she had intended to start recruitment in March 2020, restrictions precluded her from doing so. Since remote measurement of startle response was impossible, Julia had to alter her research approach to incorporate a different physiological measure. Despite the immense stress accompanying this situation, Julia leveraged her resourcefulness and transformed stress
into an opportunity to innovate. As she notes, “I used the stress to buckle down and try to find something that would work for the pandemic situation.”
During her first committee meeting following the research shutdown, Julia presented an alternative project plan that would enable her to remain on track. Her proposed project centered on heart rate variability, which is another objective physiological measure linked to anxiety disorder, assessed with a wristband device that can be mailed to participants (similar to what Apple watches do). She also wanted to find a method to model real-time anxiety reactions, to assess adaptive versus pathological responses.
Traditionally, researchers have attempted to induce anxiety in participants using methods such as electric shocks to observe the anxiety-altered state. However, given the remote nature of Julia’s research and plan to create an anxiety scenario that more closely mimics the real world, she explored an innovative approach involving a virtual reality (VR) environment. The objective of this VR environment was to induce mild anxiety in participants, allowing Julia to collect heart rate variability data and investigate differences in responding between individuals with and without anxiety disorders. To accomplish this, Julia teamed up with Dr. Richard Lachman, an experiential media expert at Toronto Metropolitan University, to create a method for inducing an anxiety-altered state. She drafted the scripts, and together with Dr.
Lachman, they designed the VR scenes. They created the videos to be viewed on a participant’s own smartphone with a cardboard VR headset. Julia also contacted and visited Dr. Kerry Ressler at Harvard to obtain expert advice on measuring physiological phenotypes, including heart rate variability.
Julia has successfully achieved her goal of recruiting 240 participants for her project over the course of two years, thanks in part to the utilization of videoconferencing and remote technology. The required equipment (wristband, cardboard VR headset, and saliva DNA kit) was shipped to participants.
Julia supervised their set-up via videoconferencing and monitored data collection. Additionally, she identified an opportunity to examine the impact of COVID-19 on anxiety in her study sample by integrating a validated COVID-specific questionnaire. Reflecting on the unique circumstances brought on by the COVID-19 pandemic, Julia acknowledges that the project may not have progressed in the same way had it not been for the shift to remote work. She recognizes that a remote approach to research, which allows people to participate from the comfort of their own homes, is very valuable and can still be used even when in-person interactions become more feasible.
Besides her academic pursuits, Julia also engages in a range of extracurricular activities. These include her role as an IMS mentor, crisis responder for the Kids Help Phone Crisis Text Line, and membership in the University Consulting Group (UCG). In fact, she finds that these activities can boost her productivity, as they push her to have better time management skills. Moreover, Julia’s experience with the IMS mentorship program has provided her with the opportunity to guide newly admitted students through their academic journeys, giving her valuable insights into the importance of a good supervisor. Julia also recognizes that having a supportive supervisor who is invested in their students’ professional development has played a significant role in her success. After graduation, Julia plans on pursuing a postdoctoral position that would bridge
the gap between academic research and industry, with the ultimate goal of making a positive impact on people’s lives by contributing to innovative solutions on a larger scale. She aspires to take the research she has conducted and translate it into practical applications that benefit society.
Overall, through her journey, Julia has demonstrated that perseverance and resourcefulness are key to achieving success as a graduate student. Julia notes, “entering grad school, there could be a lot of anxieties, a lot of unknowns. You might feel like a small fish in a massive pond. It’s normal to feel this way at the beginning and you will meet people that will support you. There will be obstacles that come in along the way, but you are capable of pushing through them. If you seek the right people and find the field you are passionate about, all of the exciting things will far outweigh any stress. IMS has a really great community and wide range of research. Try to enjoy the ride and as long as you are motivated and resourceful you can absolutely work through any challenges that come your way”.
‘Intersectionality’ is a term that has gained increasing recognition in recent years, and for good reason.
Coined by seminal legal scholar and civil rights activist Kimberlé Crenshaw in 1989, intersectionality refers to the connectedness of an individual’s social identities and how they shape one’s lived experience.1 Crenshaw initially used the term to describe how Black women experience racism and sexism simultaneously—thus, creating a unique form of oppression that cannot be fully understood by examining either identity in isolation.
Looking further, however, intersectionality is relevant in various fields, especially biomedicine, where it has become increasingly important to consider the role of social identities in shaping health outcomes in order to better address health inequities.2 Social factors such as race, gender, socioeconomic status, sexual orientation, and immigration status can significantly impact an individual’s health. Failing to acknowledge intersectionality can therefore lead to health disparities and perpetuate inequities in outcomes.
Therefore, considering intersectionality and understanding its overarching implications in biomedicine is crucial. In this article, we will explore the concept of intersectionality and its relevance, then discuss practical strategies for incorporating it into research.
In biomedicine, it is crucial to recognize that social identities and interactions can significantly impact health outcomes. Health outcomes can be thought of as the result of a particular health condition (e.g., a cure, improvement, or death), and they are often used to assess the effectiveness of medical interventions or public health programs. Social determinants of health, on the other hand, are the conditions in which people are born, grow, live, and age, and they can significantly shape overall health outcomes.3 Importantly, health outcomes are not uniform across social groups, and social determinants of health play a critical role in perpetuating these disparate outcomes, known as health inequities. Health inequities are differences in health outcomes that are avoidable, resulting from inequitable barriers that limit opportunities and access to resources for specific groups.
While race, gender, and socioeconomic status have been well-documented as significant determinants of health, it is essential to acknowledge that other identities, such as sexual orientation and immigration status, also play a critical role.4 For example, 2SLGBTQI+ individuals may face unique social-level challenges related to healthcare access and discrimination, leading to poorer health outcomes. Similarly, immigrants
can experience barriers to healthcare access due to language differences, lack of documentation, and discrimination, resulting in delays in diagnosis and treatment. Due to differences in healthcare access, exposure to social stressors, and experiences of systemic discrimination and racism, minority populations are more likely to suffer worse health outcomes than their White counterparts.5 Therefore, understanding intersectionality within the context of biomedicine is crucial to promoting health equity for all and addressing health disparities in and beyond the Canadian medical system.
Intersectionality operates in complex and compounding ways by shaping individuals’
lived experiences6 and health outcomes among individuals who belong to multiple socially marginalized groups.7 The compounded effects of intersectionality have been documented across numerous identities, including gender identity, sexual orientation, and immigration status.8 For example, a person who is both a racial minority and from a low socioeconomic status household may face unique challenges in healthcare access and discrimination based on both their race and their socioeconomic status. Decision-makers therefore need to consider the powerful and far-reaching impact that intersectionality continues to have on the design of healthcare policies
and interventions. Importantly, by intentionally applying the intersectionality perspective, decision-makers in healthcare can start to address the unique barriers that marginalized groups face while accessing healthcare services. Therefore, by recognizing the compounding effects of intersectionality, biomedical professionals can take meaningful steps toward addressing the health disparities that marginalized groups continue to face.
So, what are some pragmatic ways to intentionally incorporate intersectionality into medical research? First, we must address the need for diversity in the populations participating in biomedical research. A 2017 scoping review of 99 Canadian health studies published between 1978 and 2014 found that only five examined nationally representative data.9 Such a paucity of racially and ethnically diverse data severely limits our understanding of how social identities impact health outcomes and continues to perpetuate population-level disparities. To combat these disparities, it is essential to prioritize the inclusion of diverse populations in biomedical research. Specifically, researchers should implement culturally sensitive research protocols that recognize and incorporate different populations’ unique experiences and needs. Furthermore, researchers must also leverage publicly available data
containing sociodemographic variables at all steps of the study development pipeline. Second, if collecting sociodemographic data, researchers must co-design studies with communities and engage in equitable data sharing while employing culturally appropriate outreach strategies. Finally, in addition to recruiting diverse study participants, it is imperative for researchers to consider intersectional variables in study design and analysis carefully. This can include using stratified analyses to explore differences in health outcomes across different demographic groups and including intersectional variables as covariates within statistical models. By doing so, we can better understand the unique relationships between social factors and health outcomes while maximizing study validity.
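As a hypothetical illustration of that last point, the sketch below simulates a dataset in which an extra health burden appears only at the intersection of two identities, then fits a regression with an interaction term and, alternatively, stratifies the data by subgroup. The variable names, data, and effect sizes are invented; the point is only to show how intersectional variables can be represented in an analysis.

```python
# Hypothetical sketch: modelling an intersectional effect with an interaction
# term, using statsmodels' formula interface. Data and variable names invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "minority": rng.integers(0, 2, n),   # 1 = racialized minority (synthetic)
    "low_ses": rng.integers(0, 2, n),    # 1 = low socioeconomic status (synthetic)
    "age": rng.normal(50, 12, n),
})
# Simulated outcome with an extra burden only at the intersection of both identities.
df["poor_outcome_score"] = (
    0.02 * df["age"]
    + 0.3 * df["minority"]
    + 0.3 * df["low_ses"]
    + 0.8 * df["minority"] * df["low_ses"]   # the intersectional effect
    + rng.normal(0, 1, n)
)

# 'minority * low_ses' expands to both main effects plus their interaction.
model = smf.ols("poor_outcome_score ~ minority * low_ses + age", data=df).fit()
print(model.summary().tables[1])

# A stratified alternative: examine each subgroup separately.
print(df.groupby(["minority", "low_ses"])["poor_outcome_score"].mean())
```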
By taking a more intersectional approach to biomedical research, we can better understand how social identities and lived experiences interact to impact health and promote health equity. This includes prioritizing diversity in research, developing culturally sensitive protocols, and carefully considering intersectional variables in study design and analysis. As we move forward, biomedical researchers must continue to recognize the importance of intersectionality in health outcomes and promote health equity for all.
1. Crenshaw K. Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics. University of Chicago Legal Forum. 1989;1989: Article 8. Available from: https://chicagounbound.uchicago.edu/uclf/vol1989/iss1/8/
2. Braveman P, Gottlieb L. The Social Determinants of Health: It’s Time to Consider the Causes of the Causes. Public Health Reports. 2014;129(Suppl 2):19-31. https://doi.org/10.1177/00333549141291S206
3. Public Health Agency of Canada. Social determinants of health and health inequalities. Canada.ca. 2022 Jun 14 [cited 2023 Apr 27]. Available from: https://www.canada.ca/en/public-health/services/health-promotion/population-health/what-determines-health.html
4. Ghavami N, Katsiaficas D, Rogers LO. Toward an Intersectional Approach in Developmental Science: The Role of Race, Gender, Sexual Orientation, and Immigrant Status. Advances in Child Development and Behavior. 2016;50:31-73. https://doi.org/10.1016/bs.acdb.2015.12.001
5. Williams DR, Yan Yu, Jackson JS, et al. Racial Differences in Physical and Mental Health: Socio-economic Status, Stress and Discrimination. J Health Psychol. 1997 Jul;2(3):335-51. https://doi.org/10.1177/135910539700200305
6. King DK. Multiple Jeopardy, Multiple Consciousness: The Context of a Black Feminist Ideology. Signs: Journal of Women in Culture and Society. 1988. https://doi.org/10.1086/494491
7. Cole ER. Intersectionality and Research in Psychology. The American Psychologist. 2009;64(3):170–180. https://doi.org/10.1037/a0014564
8. Grzanka PR. From Buzzword to Critical Psychology: An Invitation to Take Intersectionality Seriously. Women & Therapy. 2020;43(3-4):244–261. https://doi.org/10.1080/02703149.2020.1729473
9. Khan MM, Kobayashi K, Vang ZM, et al. Are visible minorities “invisible” in Canadian health data and research? A scoping review. International Journal of Migration, Health and Social Care. 2017;13(1):126-143. https://doi.org/10.1108/IJMHSC-10-2015-0036
Like a Scottish kilt, every person is differently woven. Based on the tartan yarn colours and patterning, the final designs can vary endlessly. Just as every kilt is unique, no person is 100 per cent neurotypical. Instead, neurodiversity means that no one stands exactly at the population mean of every psychological trait, be it concentration, social skills, restrictiveness of interests, or repetitiveness of behaviours (to mention a few). People are rather woven like a kilt — with unique and complementary sets of traits.
Traveling to Scotland in March was my first chance to attend an in-person conference: It Takes all Kinds of Minds (ITAKOM). As Professor Nick Walker put it, its purpose was that attendees could ‘undergo some neuroqueering’, that is, to acknowledge the richness in the differences between us.
It’s been said that neurodiversity-focused research is currently in crisis.1 To put it simply (perhaps too simply), the field appears split between two competing perspectives. On one end are those who consider neurodivergent conditions such as autism, attention-deficit/hyperactivity disorder, or learning disorders (amongst others) as medical conditions characterised by inherent difficulties for which support should be provided. For instance, a teenager with a tic disorder may seek behavioural or pharmacological help to alleviate tics that get in the way of daily tasks.
On the other end are those who look at neurodivergent conditions from a more social lens—differences are not uniquely inherent to a person but also stem from
social expectations. For example, the social difficulties typically attributed to autistic children may instead be the product of an interaction between persons with differing neurotypes and social schemas. The difficulties arise in the interaction, rather than from empathic deficits within the autistic child (this is a myth!). Understandings rooted in the social perspective were held by the majority of attendees at ITAKOM.
Available literature tells us that information transfer is similarly efficient in autistic-only and neurotypical-only groups, whereas communication efficiency is reduced in mixed, neurodiverse groups.2 But cooperation between neurotypes is not necessarily negative: a poster presented at ITAKOM offered preliminary evidence that when autistic-only, neurotypical-only, and mixed neurodiverse pairs were asked to build towers out of spaghetti and modeling clay, the towers built by the mixed pairs were ranked as the most innovative.3
My takeaway from ITAKOM is that the medical vs. social debate in neurodiversity research is partly flawed, since the social model also emphasises the need for support. As speaker Holly Sutherland suggested, neurodivergent people’s ‘capacity jug’ (which represents one’s capacity to cope with stress) fills up more quickly because of prevailing neurotypical expectations in society, and that calls for support. The social model promoted at ITAKOM suggests that we must first ask whether the environment could be changed to free up space in the ‘capacity jug’, rather than focusing directly on remediating individual characteristics. For example, if a child’s school performance drops because they are easily distracted, one should remove external distractors from the test-taking environment.
Kilts find it easier to thrive in gloomy Scotland, but if one is worn at a wedding in Madrid (as my Scottish uncle did when marrying my Spanish aunt), guests may need a quick primer on Scottish traditions beforehand. For me, a neurotypical researcher, ITAKOM was the ultimate neuroqueering experience!
Raw Talk is a graduate student-run podcast at the University of Toronto about medical science, and the people who make it happen. We focus on the journeys, perspectives, and expertise of health researchers, professionals, students, patients, and community members at the University of Toronto and beyond.
Listen wherever you get your podcasts or at www.rawtalkpodcast.com
Follow us for updates, photos, and videos
@rawtalkpodcast
Get started with some of our favourite episodes:
Ep. 102 Healthcare Behind Bars
Ep. 101 The Many Faces of Burnout in Healthcare
Ep. 100 100 Years Later: Insulin and Beyond
Ep. 99 Refugee Healthcare in Canada
Ep. 98 Podium Pills: Fame or Folly?
Ep. 97 Let’s Talk Grad School
The Institute of Medical Science (IMS) Scientific Day was held on April 24th and 25th, 2023. This two-day event was an opportunity for the IMS community to come together to showcase students’ research, honour their accomplishments, and foster networking. Both students and faculty members appreciated the chance to connect, share ideas, and learn about the cutting-edge research taking place in the IMS department.
The first day of the event was themed “Charting Your Own Course”, featuring interactive career panels and workshops. The career panels offered insights into working in academia as well as in industry. Attendees enjoyed learning from esteemed panellists who shared their personal and professional perspectives and offered valuable guidance on navigating these different career paths. Two interactive workshops were also held. The first focused on optimizing students’ use of their Individual Development Plan, a document that covers essential topics for students to discuss with their mentors. The second emphasized the importance of leveraging one’s strengths to achieve professional goals. Overall, the first day offered participants a wealth of practical knowledge to help them succeed in their future careers.
The second day of the event started with the highly anticipated Alan Wu Poster Competition. This event brought together students from diverse academic disciplines
to present their research projects to a panel of judges. After each presentation, the judges posed thought-provoking questions that pushed students to think critically about their work. This year’s well-deserved winners were Danica Johnson and Brendan Santyr. Reflecting on her experience, Danica said her favourite part of IMS Scientific Day was the opportunity to learn about research topics she would not usually be exposed to. She was also delighted to hear about the many important contributions that students and faculty members are making, not only at the institution but also in our community and in science more broadly. The Laidlaw Manuscript Competition was another highlight of the day, featuring four talented students who each gave a compelling ten-minute oral presentation and answered questions posed by the judges. After much deliberation, Andreea Furdui emerged as the well-deserved winner.
The distinguished keynote lecture of the event, entitled “The therapeutic promise of the enteroendocrine cell for cardiometabolic disorders”, was delivered by the renowned Dr. Daniel Drucker. Dr. Drucker is a Senior Scientist at the Lunenfeld-Tanenbaum Research Institute, Sinai Health, and a Professor in the Department of Medicine at the University of Toronto. In his captivating lecture, Dr. Drucker discussed his ground-breaking research on glucagon-like peptides and proglucagon-derived peptides, which has led to significant advances in the
treatment of conditions such as diabetes, short bowel syndrome, and obesity. Furthermore, Dr. Drucker shared insights into the translation of his research from the lab to the clinic. He emphasized that this translation happens at a “sweet spot” where molecular research uncovers new mechanisms with therapeutic potential and human studies provide proof of concept with real effect sizes. When asked about the most inspiring aspect of the IMS Scientific Day, Dr. Drucker stated, “The highlight always is seeing young people start their scientific journey.” His remarks emphasized the importance of supporting and empowering young scientists to ensure the continued advancement of science.
The awards ceremony marked the end of this thrilling day, which celebrated the achievements of both the students and faculty members. It was truly inspiring to witness the hard work and dedication of award winners recognized by the IMS community. Congratulations to all the participants, honourable mentions, and award winners for their contributions to science.
Overall, IMS Scientific Day 2023 was a resounding success. It not only fostered interactions among the IMS community, but also allowed students to share their research and be inspired by the incredible work of their fellow researchers.
Research is a story – sometimes, a very long and complicated story. For many graduate students, presenting research in a condensed manner is difficult. In most graduate defences, you have between 20 and 40 minutes to present years of research. Now, imagine doing that in under 3 minutes. Could you do it? The IMS Student Association (IMSSA) 3-Minute Thesis (3MT) competition returned to an in-person format this year on Saturday, March 25th. Hosted by the IMSSA Academic Affairs Subcommittee, this annual competition challenges the presentation skills of IMS graduate students by inviting competitors to present their thesis research in under 3 minutes. This is an extremely challenging task!
This year, fourteen IMS students were selected from a pool of applicants to participate in the competition. Presentations were preceded by a welcome address from Dr. Mingyao Liu, Director of the Institute of Medical Science. A variety of topics were showcased in the participants’ presentations, including congenital heart block, Parkinson’s disease, and late-life depression. Participants were evaluated by a panel of three judges, which included Drs. Istvan Mucsi, Daniel Felsky, and Yaping Jin.
After difficult deliberations, the judges selected three winners. The winners of the 2023 IMS 3MT competition were Katharina Göke (first place), Lisa E. Lee (second place), and Alexander Koven (third place). Honourable mentions were given to Carmen K. Chan and Amanda Mac.
I caught up with a few of the winners after the event to get a sense of what it was like to present at the IMS 3MT competition. Alexander Koven, urology resident and master’s student, said that the opportunity to engage in storytelling about his research is what drew him to participate. Katharina Göke, a second-year PhD student in Dr. Daniel Blumberger’s lab, gave advice to future presenters and emphasised the importance of authenticity and simplicity when presenting.
Telling a story matters in research. Understanding your audience is also important. Competitions like IMS 3MT are great opportunities to tell your research story to different audiences.
The event was organised by Raesham Mahmood and Kristen Ashworth, Director and Deputy of IMSSA Academic Affairs, respectively. In conjunction with the other members of the IMSSA Academic Affairs Subcommittee, the two worked tirelessly to plan and implement this event. Although the process was a lot of work, both organisers agreed that the result was extremely rewarding.
Congratulations to all the participants and winners, and to the IMSSA Academic Affairs Subcommittee for planning a phenomenal event! IMS is truly brimming with exceptionally talented and passionate students. To future presenters, we cannot wait to see you next year!
The judges also shared their insights for future presenters. Drs. Felsky and Jin echoed the statements made by the winners, stressing the importance of knowing your audience and communicating the “why” of your research as opposed to just explaining what your research is. They also encouraged supervisors to be actively involved in supporting their mentees.
As a former IMS PhD student, I understand the IMS student experience (especially presenting) can be intimidating. It is so important as supervisors and mentors to be engaging and encourage the strengthening of scientific communication.
The great thing about hearing a 3-minute snapshot from a series of researchers is that something about each pitch will stick, and by the end of the program, you will have learned a little something new about a lot of different research topics! I think we all walked away from 3MT with more knowledge in our pockets and something new and interesting to talk about at the dinner table.
From April 26th to 27th, 2023, graduate students gathered at the Courtyard Marriott Hotel to showcase their groundbreaking research in a two-day symposium, the grand finale to this year’s MSC7000Y course, Regenerative Medicine. Led by the renowned Dr. Sonya MacParland, this highly selective full-year course is an inter-provincial collaboration with limited enrollment, drawing students from across Canada. Throughout the year, students immersed themselves in a diverse range of topics within the field of regenerative medicine. From exploring key scientific components to delving into ethical and economic considerations, they gained a comprehensive understanding of this groundbreaking discipline.
The symposium kicked off with a riveting keynote speech by the brilliant Dr. Cindi Morshead, titled “Promoting Self-Repair of the Injured Brain: A Stroke of Genius.” In her speech, she touched on exciting advancements in research on metformin (an FDA-approved antidiabetic agent), highlighting its potential role in neurogenesis. She also shared fascinating findings from her lab’s work on sex differences in behavioral recovery in mouse models and the age-dependent effects on the size of the definitive neural stem cell pool. After the keynote speech, 20 students gave 10-minute presentations on their thesis projects in the categories of “Cell Engineering to Promote Recovery,” “Harnessing the Potential of Stem Cells,” “Maternal Health and the Gut Microbiota,”
and “Disease Mechanisms and Cell Therapies.” An esteemed panel of judges, including U of T faculty members Drs. Stephen Juvet, Ana Konvalinka, Katheryn Lye, and Shin Ogawa, assessed the students’ presentations.
On the second day, another 21 students took to the stage to showcase their work in 10-minute presentations followed by brief question periods. Once again, talks were grouped into categories, this time focusing on “Engineering Approaches in Regenerative Medicine,” “Innovations in Organ Transplantation: The Kidney,” “Innovations in Organ Transplantation: The Lungs,” and “Mechanisms in Organ Diseases.” Esteemed U of T faculty members, including Drs. Elmar Jaeckel, Michael Sefton, and Golnaz Karoubi, formed the judging panel for the second day. Dr. Bo Wang delivered an enthralling keynote titled “Opportunities and Challenges of Machine Learning in Regenerative Medicine,” in which he explored the role of artificial intelligence (AI) in various aspects of regenerative medicine, including ex vivo lung perfusion and genomic data acquisition and analysis. He used a relatable racecar analogy to explain the ABCDs of AI success: A for algorithms (the engine), B for business (the steering), C for computing (the wheels), and D for data (the oil). He also introduced DeepVelo, a revolutionary method utilizing Graph Convolution Networks (GCNs) to estimate cell-specific dynamics of splicing kinetics in single-cell studies.
Presentations were interspersed with networking coffee breaks, where students and guests had the opportunity to chat with physicians, principal investigators, and the keynote speakers. Many students expressed their delight with the format of the event, as the formal setting created an authentic “conference” environment that enhanced their experience. Heather Booth from the University of Calgary and Grace Riddell from Queen’s University were chosen as the two winners. Their presentations were titled “Standardization of Adipose Mesenchymal Stem Cell Culture Parameters to Maximize Exosome Yield” and “Examining the Effect of Supra-Physiological Insulin in an In Vitro Human Insulin Infusion Cannula Host Response Model”, respectively.
Overall, the Regenerative Medicine Symposium was a success and showcased how MSC7000Y continues to be a standout course within the Training Program in Regenerative Medicine (TRPM). To learn more about the program and the symposium, visit their website at https://www.regenmedcanada.com/.
A positive deviant is a person who takes individual action – outside ‘the norm’ – to better a pre-existing system in small increments so that, over time, big changes ensue. Positive deviants have continuously instigated significant progress in healthcare, enabling faster diagnoses, greater treatment success, and improved patient quality of life. In a collection of stories from the bedside, Better, by surgeon and New York Times best-selling author Dr. Atul Gawande, explores the actions of a few physicians – unsung heroes – who have conscientiously gone against the grain as positive deviants to create big changes in medicine.1 Gawande illustrates what it means to be a positive deviant through anecdotal evidence from his own clinical experience and that of his colleagues, underpinning his stories with three main themes: diligence, doing right, and ingenuity. Though Better is written from the perspective of a surgeon, the larger lessons extend far beyond the walls of the hospital and can be applied to us as researchers striving to better the progress of scientific discovery.
Gawande begins his book by elucidating the importance of diligence—an often overlooked virtue in medicine. “Diligence is both central to performance and fiendishly hard,” Gawande explains. He expounds on this thought by discussing the challenges of infection control in hospitals due to a lack of diligence in hand hygiene by medical teams. Gawande then uses anecdotes from medical staff on the frontlines to demonstrate how a small yet diligent
commitment to improving triage and treatment strategies has significantly reduced the mortality rate of soldiers over the last century. Most touching, perhaps, was Gawande’s recounting of his journey to India in the early 2000s to help implement an emergency polio eradication campaign covering a region of 4.2 million people in just three days. Working under an almost impossible time constraint, the team remained committed to knocking on every door, in every village, to ensure no child was missed. The inoculation campaign was a success.
The duty of ‘doing right’ was the next theme covered by Gawande, who uses ethical topics such as medical malpractice and physician-assisted dying to explore what a doctor owes to a patient. Though many of his stories speak to American healthcare, litigation, and government processes, the key takeaway still rings true: when handed great responsibility (for physicians, the responsibility of human lives), moral principles must guide deviance. That’s in part what makes positive deviance positive. Gawande concludes his book by speaking of ingenuity and innovation in medicine, and the importance of applying a scientific mindset to improve clinical care. He points to the introduction of obstetric innovations, such as the C-section and the Apgar score, to illustrate the power of initially radical, yet relatively simple, ingenuity that has significantly improved both child and maternal outcomes.
Overall, Better was a thought-provoking read, accessible and relatable to a broad audience. Despite being published in 2007, Gawande’s
message remains relevant to our approach to healthcare in the post-pandemic world. If anything, the last three years have only further highlighted Gawande’s core thesis on the importance of building positive deviants and bettering our performance in the face of obstacles, limited resources, and a culture of complacency. The distinct patient anecdotes Gawande recounts are expertly threaded into a coherent story of how to become ‘better’ at igniting change and inspiring others to do the same. Admirably, he does not refrain from taking a clear stance on controversial topics, which elicits a sense of trust in the reader. It makes the journey through his book personally reflective and leaves us convinced to better ourselves in our own lives.
As a first-year Master’s student, I have experienced the challenges of being a novice in my research field, while also trying to become a positive deviant in my own right. As such, some of Gawande’s final words resonated deeply with me: “Arriving at meaningful solutions is an inevitably slow and difficult process. Nonetheless […], better is possible. It does not take genius. It takes diligence. It takes moral clarity. It takes ingenuity. And above all, it takes a willingness to try.”
Canada is experiencing an unprecedented rise in its aging population. A growing number of older Canadians require personal and bedside support, a need often fulfilled by personal support workers (PSWs). The pandemic generated a great sense of urgency to support long-term care facilities and address the multifaceted challenges that front-line workers like PSWs have been facing. In 2021, the Ontario government funded the creation of accelerated, hybrid caregiver programs at 24 colleges to combat staff shortages.
“The government offered to find PSWs to go to school and pay for their education. We started accelerating the time needed to become a PSW,” explained Taylor Booroff, spokesperson for the Ontario PSW Association. “Essentially, we are creating PSWs as quickly as possible.”
However, there is significant doubt about the quality of a “hybrid program” and whether its graduates are adequately trained. In fact, sacrificing the quality of education in an effort to accelerate it may weaken the PSW workforce, as it overlooks deeper, systemic issues: a lack of workplace support and a failure to keep up with the changing landscape of healthcare.
Firstly, relying on online modules and limiting hands-on practice of bedside skills in order to accelerate graduation cannot capture the rigorous nature of caregiving. Some practical aspects of bedside care simply cannot be taught online, especially given the rising complexity of resident needs and the staff shortages seen during the pandemic.
Secondly, inadequate training in culturally safe care will disproportionately impact those from marginalized communities. For instance, Indigenous communities place great emphasis on emotional and spiritual well-being, which calls for a kincentric approach to care, meaning that people are treated as though they are extended family members. Cultural competence is especially important for delivering high-quality care to Indigenous populations, one of the fastest-aging communities in Canada.
Lastly, we can agree that much of professional learning occurs on the
job. Unfortunately, PSWs and other caregivers do not have this luxury, because educational and longitudinal resources in their communities are lacking. For this reason, the accelerated hybrid model misses the mark on retaining PSW staff. One can argue that it exacerbates turnover, because new graduates will feel even more undertrained without community resources to support their continued growth as professionals. While increasing workforce capacity may be effective in the short term, there must be more long-term institutional support that PSWs can lean on to create a better, well-staffed work environment.
“One of the main things I’ve heard from students is that they feel alone in the workplace. They were thrown into school really quickly. This is what you’re going to learn, now go out into the field, do it all on your own,” said Booroff. “There might not be people out there to help you because we’re all drowning in our own work.”
Canadian healthcare is facing workforce shortages across the board, especially among physicians. Yet physician training is not being shortened to address the problem. In fact, residency training for family physicians has been extended by an additional year to cover increasingly important topics such as senior care, new technologies, and mental health. Evidently, more healthcare training must be provided as times change. But why are PSWs getting less training?
“We need to start putting our money into resources that are actually going to carry us until the end and play the long game here,” said Booroff.
PSWs go through a deeply personal journey: an opportunity to recognize their own strength and give back to the elderly who once cared for them. Their work is as personally rewarding as it is societally impactful. The government must recognize the importance of the PSW role, put resources into proper education, and listen to those on the ground who are screaming for help.
As Booroff aptly states: “No more conversations about how we need to change things. Make the change happen.”
To learn more about the issues faced by caregivers, as well as what is being done by government and community advocacy groups to combat them, we invite you to listen to episode #109 of Raw Talk Podcast, titled “Caregivers: The Forgotten Pillars of Health Care.”
Additional resources on healthy aging and supporting Canadian caregivers can be accessed through the following organizations: AGE-WELL, the Ontario PSW Association, and the Centre for Aging + Brain Health Innovation.
We would like to acknowledge Raw Talk Podcast’s episode #109 team. This episode was hosted by Helen and Prisca with content development by Prisca and Junayd. Noor, Helen, and Frank conducted interviews on which both the podcast and this article are based. Episode audio engineering was completed by Alex. Co-executive producers Noor and Junayd oversee the production of all Raw Talk Podcast episodes.