

PEO's Jennifer Quaglietta and her vision for excellence in protecting the public interest
BC lawyer Will Tao on humanizing immigration processes through AI
COMMUNICATIONS CORNER
Purposeful communications enhanced by AI
Dr. Michael P. Cary's perspectives on AI in American health regulation
Journalists and Contributors
Damian Ali
Oluwatoyin Aguda
Dean Benard
Collette Deschenes
M. Daniel Roukema
Graphic Designer & Production Manager
Allison Wedler
Editor in Chief
M. Daniel Roukema
Photo credits
DA Photography
Dean Benard
Edward Chang, V Saran Photo
Tolu Kukoyi
MDR Strategy Group Ltd.
Adobe Stock
The Registrar magazine is produced and published by
MDR Strategy Group Ltd.
800-1701 Hollis Street
Halifax, Nova Scotia Canada B3J 2T9
editor@theregistrar.ca
www.theregistrar.ca
www.mdrstrategy.ca
© 2025. All rights reserved
Connect. Learn. Lead.
Welcome to the 13th issue of The Registrar. For those of you who’ve been with us from the start, you’ll know that our goal has always been to provide you with great articles, insights, and practical knowledge that support your work in the regulatory sector. If this is your first issue—welcome! We hope you find this publication both useful and thought-provoking.
If you haven’t made one yourself, I’ll make a New Year’s resolution for you: resolve to connect. Meet the people we profile, engage with regulatory peers you’ve never met, and take advantage of the many opportunities to learn from other leaders. Having attended conferences around the world for a decade, I’ve seen firsthand how these events generate new ideas and actionable solutions to the challenges regulators face. In fact, this very magazine would never have existed had it not been for several conversations with other regulatory communications practitioners at a conference in Portland, Oregon in 2016.
Conferences offer far more than networking opportunities—they are where you deepen your understanding of regulatory fundamentals. These events bring together regulators from diverse backgrounds to share their experiences and best practices, giving you access to fresh perspectives that can inform your own approach to regulatory challenges.
This year, consider attending CLEAR’s conferences in Chicago, U.S., and Wellington, N.Z., or CNAR’s in Calgary. If AI in regulation is something you’re interested in, don’t miss our Toronto event on Feb. 11. This one-day conference brings together experts from three continents to discuss the role of AI in regulatory processes. The hard work happens in sessions, but the real connections are made when you step outside the conference room—over a coffee or drink, sharing ideas and solutions that will shape the future of our field.
If I had to make a resolution myself, it would be to see you at one of these events. The exchange of knowledge and ideas is what drives meaningful progress in regulation. And who knows? The next great regulatory breakthrough could be the result of a conversation you have this year. To those of you I had the pleasure of meeting in Savannah, Georgia, last month, it was great connecting with you.
M. Daniel Roukema
NSBS handed long-awaited report on systemic discrimination.
Jennifer Quaglietta on PEO's journey towards regulatory excellence.
Exploring AI use cases in regulatory communications.
Dr. Michael P. Cary's unique perspectives on AI streamlining American health regulation.
President and CEO of Benard + Associates, Dean Benard, on using AI to enhance regulatory investigations.
Immigration lawyer Will Tao on how AI could humanize the immigration process in Canada.
Provincial and territorial health plans will cover primary care provided by nurse practitioners, pharmacists, and midwives in 2026, federal health minister Mark Holland announced. As part of a new interpretation of the Canada Health Act, non-physician health care professionals will be able to provide the full spectrum of care they are qualified to deliver.
The British Columbia Securities Commission has fined a cryptocurrency trading platform and its director more than $18 million after determining the company misled customers by diverting nearly $13 million of their investments into gambling websites and personal accounts.
The Nova Scotia College of Pharmacists has attracted over 100 internationally trained pharmacists one year after launching a streamlined licensing program. The initiative has led to faster licensure decisions and improved recruitment of international professionals to support provincial health care needs.
CPA Ontario and Ordre des CPA du Québec (CPA Québec) have separated from the national accounting body, CPA Canada. Announced in June 2023 and formally in effect since December 2024, the decision was driven by various factors, including the goal of strengthening regulatory frameworks to better serve the public in their respective jurisdictions.
The College of Physicians and Surgeons of Nova Scotia has opened a new assessment centre to support streamlined licensure for international medical graduates (IMGs). The initiative aims to address physician shortages in the province and encourage long-term service from participants after completing their assessment and obtaining licensure.
The Ontario government is recommending that medical care in long-term care (LTC) facilities be overseen by nurse practitioners instead of physicians under a new bill.
The College of Licensed Practical Nurses of Prince Edward Island and the College of Registered Nurses and Midwives of Prince Edward Island are exploring the possibility of establishing a single regulatory body. Consultations are underway to gather feedback from the public, registrants, and other influencing parties and groups.
The Registrar staff
Nova Scotia has a rich judicial history. Distinguished lawyers and changemakers, including human rights activist Lee Cohen, Senators Wanda Thomas Bernard and Donald Oliver, and former Prime Minister Brian Mulroney, all graduated from Dalhousie Law School. Yet, alongside these achievements, Nova Scotia’s legal system carries the weight of historical and systemic inequities that have left generations-long scars.
The province has grappled with significant challenges in achieving justice and equity, exemplified by cases such as the wrongful conviction of Mi’kmaq man Donald Marshall Jr. While Dalhousie University’s landmark Indigenous Black and Mi’kmaq (IB&M) program was designed to address the underrepresentation of marginalized groups in the legal profession, Indigenous and Black lawyers remain noticeably underrepresented in the province’s law firms. Compounding this issue, then-Premier Russell MacLellan stirred controversy in the late 1990s by publicly questioning the preparedness of IB&M program graduates for roles in major law firms.
In 2021, the Nova Scotia Barristers' Society (NSBS) launched an investigation into systemic discrimination in the legal profession. The regulator appointed Douglas Ruck, K.C., a seasoned labour and human rights lawyer, to lead the review. Over the course of the investigation, Ruck interviewed more than 200 individuals who shared their experiences. The NSBS reported that the racial equity survey collected feedback from practising, non-practising, and retired members, as well as articling clerks, who identified as racialized, Mi’kmaq, Black, Indigenous, or of African Nova Scotian descent. Ruck’s findings, published in October 2024 in Regaining Trust: The Ruck Report, paint a sobering picture of systemic racism that continues to impact career advancement, workplace inclusion, and access to justice in Nova Scotia.
The Ruck Report reveals a troubling pattern of institutional and interpersonal biases in Nova Scotia’s legal system. Racialized lawyers—particularly African Nova Scotians—reported barriers that hindered their professional progress and created inequities in legal services. Ruck identified three key areas of concern:
• Hiring and promotion practices: Many respondents said they were overlooked for opportunities due to implicit biases and the lack of diversity in leadership and mentorship roles.
• Workplace dynamics: Racialized professionals frequently experienced microaggressions and isolation in predominantly white workplaces.
• Complaint mechanisms: A significant number of participants expressed distrust in the Society’s complaint-handling processes, citing fears of retaliation or inaction.
“A recurring theme was the reluctance of individuals to come forward, believing their voices would not be heard or their complaints would lead to negative consequences,” Ruck stated in the report. “Racism in the legal profession isn’t always overt. It’s embedded in the policies, practices, and everyday interactions that disadvantage racialized lawyers and clients.”
Despite decades-long challenges facing Nova Scotians of all communities, Ruck emphasized that transparency remains essential for rebuilding trust, both within the legal profession and with the public the NSBS is mandated to protect. This was a key point raised in the report, with the 2020 murder of George Floyd drawing much attention to the inequities resulting from systemic discrimination of racialized and marginalized communities at the micro and macro levels in Canada and the United States.
“Transparency is not optional; it is the cornerstone of any regulator that seeks to earn the trust of the public,” Ruck wrote. “Without openness about how decisions are made and why, we risk perpetuating the very inequities we aim to eliminate.”
The report cautioned that without visible accountability, public confidence in the NSBS—and by extension, the legal profession—could erode. “The public relies on lawyers to be champions of justice, and when the body regulating those lawyers fails to uphold principles of equity and transparency, it sends the wrong message,” Ruck wrote.
The Ruck Report outlines 21 recommendations to address systemic barriers and foster lasting cultural change. Key proposals include:
1. Independent ombudsman: The creation of an impartial ombudsman to oversee complaints of discrimination and bias, ensuring grievances are addressed without fear of retaliation.
2. External reviews: Regular evaluations of NSBS policies and practices to identify and dismantle systemic barriers.
3. Enhanced complaint processes: Making the complaint process more accessible and protective for complainants.
4. Mandatory anti-racism training: Embedding equity and inclusion as core principles in the Society’s governance and operations.
5. Community engagement: Actively collaborating with underserved communities, particularly African Nova Scotians, to ensure reforms reflect lived experiences.
“The Society has an opportunity—and an obligation— to lead by example, demonstrating that meaningful change is possible when accountability and transparency are prioritized,” Ruck noted.
The NSBS has publicly acknowledged the Ruck Report as a valuable tool to improve oversight and public trust.
Ruck views the report as a foundation for transformative change, and while it may not resolve all issues, he expressed hope that it will encourage the legal community to refine strategies and achieve better outcomes.
“This report is not a panacea to eradicate all instances of systemic discrimination in the province, but it is much more than a starting point,” Ruck concluded. “It is a shining example of how the legal community can, and should, lead the way—to inspire everyone to take up the mantle as we move towards a future without racism.”
The Nova Scotia Barristers' Society declined requests from The Registrar to collaborate on this article and did not provide a response regarding its action plan in relation to the Ruck Report.
The Motor Vehicle Sales Authority of BC (“VSA”) builds public confidence in the motor dealer industry in BC by engaging and educating industry and consumers, ensuring a safe and reliable motor vehicle buying experience.
The Registrar plays a leadership role in delivering on this purpose. As a member of the Executive Team, this role will carry out responsibilities with a high degree of independence, initiative, and discretion, demonstrating good judgment and prudence. This position includes active involvement in strategic and annual business planning, oversight of ongoing operations, and review of organizational performance.
Read More
How to feature your organization’s career opportunities: Sign up for RegulatoryJobs.org today and unlock a wealth of resources and opportunities to find and retain top talent in the regulatory field. From job postings to board and committee postings and RFPs, RegulatoryJobs.org is your ultimate partner in recruitment success. Join us and take your recruitment efforts to the next level! Contact our team to learn more.
"For engineers looking to lead, true leadership means looking in places where you might not think you'll find it."
– Jennifer Quaglietta, CEO and Registrar, ICD.D, P.Eng, MBA, CHE, PMP, LLSSGB
The Registrar staff
What does it take to lead a century-old organization into a rapidly changing future?
For CEO and Registrar Jennifer Quaglietta, ICD.D, P.Eng, MBA, CHE, PMP, LLSSGB, the answer lies in bold vision, empathetic leadership, and a relentless commitment to innovation. As the first woman to hold the dual role at Professional Engineers Ontario (PEO), Quaglietta is paving the way for a new chapter of regulatory excellence. Under her guidance, the organization has embraced modernization, preparing Ontario’s engineering profession to meet the demands of the future while upholding its regulatory mandate.
In 2022, PEO celebrated a century serving the public interest. Quaglietta highlighted this milestone both as a celebration and a call to action to continue advancing public protection, particularly in an ever-evolving world shaped by emerging technologies and societal expectations. “We’ve embarked on a journey to become a data-driven, purpose-driven organization,” Quaglietta says. “By leveraging tools like business intelligence and promoting a learning culture, we’re ensuring PEO evolves to meet the challenges of tomorrow.”
An unwavering commitment to empathy has shaped Quaglietta’s career. She earned her MBA at the University of Toronto, combined it with her chemical engineering (ChemE) education, and embarked on her professional journey in consulting. “Consulting was instrumental for me because it taught me how to communicate with different audiences, prepare polished materials, and navigate the complexities of diverse stakeholder interests,” she reflects.
These early experiences laid the groundwork for her multidisciplinary approach to leadership and problem-solving, eventually leading to her roles in multiple industries, including pharmaceuticals, government, health care, and insurance.
After spending six years in government, followed by her work in Crown corporations, she honed her expertise in managing large-scale system transformations. “I gained a deeper appreciation for how to effect meaningful, system-level change,” she notes. “Being in a regulated environment taught me the importance of transparency, fairness, and understanding the needs of the public,” Quaglietta explains. “It also instilled in me the value of listening to diverse perspectives and synthesizing them into actionable strategies.”
Her time in health care, particularly in acute care settings, was eye-opening. “I learned so much from the remarkable leaders I worked with,” she recalls. “But perhaps the most valuable lessons were about empathy and compassion: understanding that these values are essential in everything you do.”
Throughout her storied career, Quaglietta has carried these lessons with her, emphasizing their importance in shaping her leadership at PEO since becoming CEO and registrar in 2023.
Under the direction of PEO Council and in collaboration with staff, Quaglietta has prioritized ensuring the organization meets the demands of a rapidly evolving profession, making evidence-based decisions. She led the regulator to take bold steps to align its operations with the principles of transparency and fairness. Quaglietta credits this approach with maintaining relevance by actively listening to PEO’s stakeholders and leveraging data and technology to drive meaningful change.
This commitment has led to significant regulatory advancements, including the removal of the Canadian experience requirement for licensure ahead of the timeline mandated by the Fair Access to Regulated Professions and Compulsory Trades Act (FARPACTA). This milestone positioned PEO as a trailblazer among regulated professions in Ontario.
To further enhance its systems, Quaglietta explains, PEO also relaunched its competency-based assessment guide and adopted the ISO plain language standard, which removed jargon from written communications and augmented them with video media. “As adults, we have different modes of learning,” she notes. “We're going to continue to expand this concept and philosophy of multimodal ways of support and go back to basics with plain language.”
“With more than 60 per cent of our prospective applicants being internationally trained engineers, we recognized the need to adapt our processes to reflect the realities of our applicants,” she says. “By introducing digital tools, such as an online application portal, and embracing data-driven decision-making, PEO has enhanced accessibility and efficiency in its licensing and enforcement processes.”
These changes have resulted in measurable outcomes, including faster application processing and increased compliance with professional development requirements. PEO now acknowledges receipt of applications for 90 per cent of applicants within 10 days and makes licensing decisions within 180 days. Additionally, 88 per cent of license holders who’ve completed at least two PEAK elements now comply with mandatory continuing professional development (CPD), a significant increase from 73 per cent in 2023. This is a major success as the organization transitioned from voluntary to mandatory continuing education.
“We’ve exceeded targets by providing faster acknowledgment and decisions for applicants,” she notes, adding that such initiatives underscore PEO’s role in setting the bar for regulatory excellence. “[These] important accomplishments completely changed the way individuals experience our organization.”
Quaglietta highlights PEO’s exploration of artificial intelligence (AI) as part of its modernization efforts and describes the regulator’s digital transformation journey as promising.
“I think there's a lot of opportunity for AI to play a role as a tool in supporting, for example, folks who are looking for something on our website,” she says. “There's a role for AI to play in ensuring that provided answers are comprehensive and line up with [our] 34 competencies, or support scanning environments when it comes to our unlicensed practice department.”
Quaglietta explains that while AI has the potential to enhance regulatory operations, it’s crucial to address the ethical considerations to mitigate risks. “I published a paper called ‘Preparing for the future: How organizations can prepare boards, leaders, and risk managers for artificial intelligence’ and I'll share a number of risks that need to be considered before moving ahead,” she says.
“There are governance, performance, implementation, and security risks that all need to be understood first,” she says. “We need to understand the risks involved and how we can either mitigate or accept them from that perspective. But from an ethical standpoint, even at a high level, we must ensure that we build a culture around it. We have to understand the culture of AI, the permissions around it, and know where to apply it if [we’re] going to use it.”
“We are a creature of statute — is government considering AI in how it sets the mandate, how it sets direction from a legislative perspective?” she says. “From enhancing processes to supporting applicants, there’s tremendous potential to use AI as a tool, provided we address ethical considerations and build a culture of trust around its use.”
Quaglietta’s forward-thinking leadership extends beyond regulatory modernization to addressing gender representation within the engineering profession. In 2023, she was named one of Canada’s Most Powerful Women: Top 100 by Women’s Executive Network (WXN). This honour recognizes women making a transformational difference in their fields and actively shaping a more inclusive future, particularly in underrepresented areas. She describes how this recognition is deeply meaningful and reinforced her vision for inspiring more young women to pursue STEM (science, technology, engineering, and mathematics).
“I think what meant the most to me was that the nomination came from staff,” she reflects. “In my career, I've used my engineering experience to lead impactful system-level transformations across several industries, and I've always advocated for women. In our profession, women comprise only 13.6 per cent of all licensed engineers in Ontario, and I hope to inspire more women to study engineering and take on leadership roles, because they can.”
Quaglietta acknowledges that challenges persist in achieving gender parity within the profession. However, she emphasizes the importance of first identifying where gender-related issues arise during the licensing process before addressing them. “Our gender audit, conducted with researchers from the University of Toronto and the University of California, Los Angeles [publishing the report’s results in 2025], analyzed a large data pool of women and found that overall, women took longer to get licensed than men and report higher intentions of quitting the licensure process,” she explains.
“They found the experience requirement to be more challenging, particularly for women with young children. They also looked at why the processes take longer, why women take longer to get licensed, and why they are more likely to quit the process. Two major themes emerged from these findings: a lack of support from educational institutions, and a lack of support from employers. That said, they found no perceived differences between women and men in terms of meeting the academic requirement for writing the National Professional Practice Exam.”
“When we listen and learn [about these issues], we can synthesize priorities, adapt our frameworks to meet changing needs, and promote a professional culture where every engineer and aspiring engineer can see a place for themselves and that they belong,” Quaglietta says.
In the book ‘The Fifth Discipline’ by Peter Senge, servant leadership is described as a leadership style that begins with the natural feeling that one wants to serve, and serve first. It’s this style of leadership that Quaglietta says influences not just her approach as CEO and registrar, but her efforts to create a culture of belonging in engineering environments.
“In the last couple of years, I've had the privilege of speaking to thousands of women who have obtained a license or are in the process of getting licensed,” Quaglietta reflects. “I've heard their stories, their difficulties, and worked with and met many women who have inspired me, women who are engineering leaders.”
As she considers the legacy she hopes to leave behind, Quaglietta highlights how she wants engineers to be seen as leaders, full stop. She acknowledges the mentors and colleagues who inspired her along the way to take risks, innovate, and be an exceptional listener, which she says she hopes to replicate as CEO and registrar.
Leadership, she says, is not a title but a behaviour, and true leadership often means looking sideways and down, not just up.
“It's about understanding the perspective of the public so that the public looks to engineers as leaders,” she concludes. “For engineers looking to lead, true leadership means looking in places where you might not think you'll find it. That’s where you'll find the best forms and examples of great leadership.”
Jennifer Quaglietta will appear as an expert panelist at the national AI IN REGULATION conference this February in Toronto. Learn more about the event and register.
Industry leaders in AI, government, and the licensing and regulation sector from Canada and abroad will meet in February 2025 to discuss, discover, and determine the impacts and approaches to protecting the public interest in the new era of computer-generated reasoning, decision-making and problem-solving.
This critical meeting of Canadian regulatory bodies intends to provide a deeper understanding of AI’s use in regulation and its impact on professional standards, investigations, jurisprudence, and complaints and discipline.
Collette Deschenes
Scroll through your LinkedIn feed. Can you spot any AI-generated content? Maybe it's the subtle clues, like em dashes or a rocket ship emoji. Or maybe you can’t tell that your favourite follow is using ChatGPT to share their content.
Perhaps you haven’t embarked on a purposeful AI discovery journey. Even so, you’re likely using many tools day-to-day, like Buffer, Grammarly and Outlook, that have AI features built in. Simply put, AI is becoming more and more entrenched in our professional and personal lives.
It’s no secret that generative AI tools like ChatGPT, Gemini, Jasper, and Claude are transforming the content we create and consume. Tools like Otter.ai and Fireflies are helping us manage meeting workload and the pressure to multitask by transcribing our conversations. Zapier and other automation tools are helping enhance our everyday workflow processes and track our organization’s stakeholder engagement.
Regardless of how AI-savvy you are, the rapid pace of change is transforming communications across the board. Staying informed about the tools available, understanding their implications, and learning how they can enhance or impact our work in communications is essential. Like any digital transformation, it’s about more than just using tools; it’s also about understanding their role in shaping the way we connect and communicate with individuals and communities.
Reflecting on digital transformation, I’m reminded of my first role in regulatory communications, when the sector was grappling with whether to embrace social media as a way to engage key audiences. Back then, platforms like Instagram were brand new, still spaces for sharing snapshots of pets and your latest meals, far from the tools they are today for sharing regulatory updates or educating the public about a regulator’s role.
I recall how we began drafting “social media releases,” as organizations adapted to tailor traditional press releases to align with the emerging social media landscape and shift in how we all consumed media. The rise of social media was a transformative time of rapid change that required regulators to rethink how they engaged with stakeholders and adapt to constantly evolving expectations.
To this day, as we all know very well, social media continues to evolve moment by moment at a relentless pace. As communicators and regulators, we need to be proactive, agile and strategic in how we engage. It’s essential to stay connected with how regulated professionals, the public and other key partners want to be communicated with. Which platforms are they actively using? Should we still be engaging on X? Are they reading our emails? Would SMS updates be more effective? Should we explore an AI chatbot? These are frequent questions underlying how we are constantly adapting and aiming to refine our approaches.
The evolution of AI feels like a similar turning point, though on an even larger scale, with greater impact. Like the rise of social media, it requires curiosity, critical thinking, and strategic planning to ensure AI is used effectively and responsibly.
Curiosity is an important first step in exploring AI’s ever-growing potential in communications. It can begin by asking yourself how AI might help address a challenge your organization faces in regulatory communications.
There are countless possibilities for AI adoption, so the process can feel overwhelming. To focus your efforts, as with all strategic communications, reflect on your organizational values and strategic goals.
Perhaps your organization values inclusivity, and enhancing partnerships and ensuring meaningful engagement is a current strategic goal. The tone of your communication is essential: it helps build trust, and your messaging should consistently reflect these priorities and reinforce your organization’s commitment to inclusivity and collaboration.
An AI use case in this context could be utilizing generative AI to ensure your tone and messaging align with your organization’s values and broader goals. You could aim to utilize AI to:
• Enhance accessibility by identifying and simplifying regulatory jargon and complex terms in your copy, ensuring content is accessible to your organization’s diverse audiences.
• Enhance readability and clarity of your messaging by identifying passive voice, lengthy paragraphs and run-on sentences. AI tools can offer actionable recommendations to make your organization’s messaging more concise, engaging, and digestible for your key audiences.
• Ensure consistency in tone and alignment with your organization’s voice. AI can analyze and align content with your organization’s established tone, ensuring messaging reflects your values.
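For readers who like to tinker, the readability idea above can be prototyped without any AI tool at all. The following sketch is purely illustrative (not drawn from any regulator's actual workflow): a simple Python script that flags long sentences and likely passive-voice constructions in draft copy, the kind of first pass you might run before asking a generative AI tool for rewording suggestions. The thresholds and the passive-voice heuristic are rough assumptions, not established rules.

```python
import re

def readability_flags(text, max_words=25):
    """Flag overly long sentences and likely passive-voice constructions.

    A heuristic first pass only: the passive-voice pattern (auxiliary verb
    followed by a word ending in -ed/-en) will miss irregular participles
    and produce some false positives.
    """
    # Split on sentence-ending punctuation followed by whitespace.
    sentences = [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]
    passive = re.compile(
        r'\b(is|are|was|were|been|being|be)\s+\w+(ed|en)\b', re.IGNORECASE
    )
    flags = []
    for s in sentences:
        if len(s.split()) > max_words:
            flags.append(('long sentence', s))
        if passive.search(s):
            flags.append(('possible passive voice', s))
    return flags

sample = ("The application was reviewed by the committee. "
          "We publish decisions promptly.")
for kind, sentence in readability_flags(sample):
    print(kind, '->', sentence)
```

Running this flags the first sentence as possible passive voice and leaves the second untouched, which is exactly the kind of targeted, explainable feedback worth having in hand before turning a draft over to a generative tool.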
Ultimately, identifying the right AI use case isn't just about improving efficiency. It’s about enhancing and complementing your existing communications efforts. Thoughtfully aligning AI with your organization’s goals and values helps ensure AI supports you strategically.
Learning lessons from past digital transformations, asking the right questions, and identifying AI use cases that align with your organization's goals are just the tip of the iceberg. This conversation goes well beyond content creation and enhancing our messaging, as there are many challenges and opportunities that require thoughtful exploration.
If you’re interested in delving deeper into the intersection of AI, regulation, and communications, join me at MDR's AI in Regulation Conference on February 11, 2025. I’ll be facilitating a session with Melissa Peneycad, MDR’s director of public engagement, where we’ll share insights on integrating AI to maximize impact in regulatory communications. We’ll dive into additional AI use cases that enhance regulatory communications and highlight strategies to adopt AI while maintaining trust and transparency.
Let’s continue the conversation.
www.mdrstrategy.ca
MDR’s regulatory communications expertise can support your organization in building trust, improving stakeholder relationships, and achieving your communication goals.
Join other regulators who choose the MDR solution to deliver their messages with meaning and purpose. Contact us at info@mdrstrategy.ca.
MDR Strategy Group is an award-winning agency that provides strategic communications, public engagement, and organizational design services exclusively to non-profit and public sector organizations.
Headquartered in Halifax, Nova Scotia, with additional staff in the Greater Toronto Area, we are recognized as one of Canada’s trusted advisors to regulatory bodies. These organizations, by government statute, oversee the licensing and professional regulation of dozens of occupations nationwide. We also serve organizations in the sustainability, housing, education, and social justice sectors.
We are hiring an Associate, Communications, a mid-level consulting professional to join our team. This role will support our firm’s strategic communications services by executing tasks and activities that drive impactful client outcomes. With a primary focus on crafting content, this position is integral to delivering top-notch communications strategies for our clients.
Read More
How to feature your organization’s career opportunities: Sign up for RegulatoryJobs.org today and unlock a wealth of resources and opportunities to find and retain top talent in the regulatory field. From job postings to board and committee postings and RFPs, RegulatoryJobs.org is your ultimate partner in recruitment success. Join us and take your recruitment efforts to the next level! Contact our team to learn more.
Global Insight
Dr. Michael Paul Cary on leveraging AI to achieve better health equity outcomes in the U.S.
Licensed professionals and the organizations that regulate them around the world are finding innovative ways to harness artificial intelligence (AI) to streamline operations and enhance efficiencies.
In North Carolina, Duke University’s School of Nursing is actively exploring AI to enhance patient care and broaden access to health care services.
Dr. Michael Paul Cary, PhD, RN, is a leader in applying AI within health care, focusing on health disparities related to aging and developing strategies to advance health equity. As the inaugural AI Health Equity Scholar at Duke Health, Dr. Cary and his team scrutinize clinical algorithms and applications that could perpetuate harm against diverse communities, while promoting equitable health outcomes for all patients at Duke and beyond.
The Registrar spoke with Dr. Cary about his vision for reducing health care disparities through AI innovation and how human-centered AI interfaces can equip health care professionals with skills to use AI effectively.
Dr. Cary’s appointment as Duke Health’s first AI Health Equity Scholar was just one of many accomplishments in his career grounded in the intersection of social justice and health equity. After earning his master of science in nursing (MSN) at the University of Virginia in 2006, Dr. Cary worked as a registered nurse in postacute care (PAC) facilities and community-based settings before joining Duke University’s faculty in 2016.
“These experiences exposed me to a diverse patient population and highlighted significant health disparities, especially among older adults requiring PAC rehabilitation after an acute illness or injury,” he explains.
“Motivated to make a broader impact, I pursued advanced degrees in nursing and health services research [at the University of Virginia]. My focus has always been on improving care delivery and health outcomes for populations susceptible to adverse outcomes, particularly older adults.”
2022 was a year of recognition for Dr. Cary’s professional accomplishments, highlighted by his induction as a Fellow of the American Academy of Nursing for his significant contributions to improving health and health care. His appointment as the inaugural AI Health Equity Scholar in 2024 expanded his work beyond the university, enabling him to address systemic health care challenges.
“As the inaugural AI Health Equity Scholar at Duke AI Health, my primary goal is to evaluate healthcare algorithms and AI-enabled decision tools across the Duke University Health System (DUHS) to identify those that may exacerbate or perpetuate disparities based on race, ethnicity, national origin, age, or disability,” he says.
“By addressing and mitigating biases in these tools, we aim to promote health equity and improve health outcomes for all patients at DUHS.”
Dr. Cary’s work directly impacts four distinct levels of society, using various AI-influenced models:
• Community level: Redesigning algorithms to include social determinants of health (e.g., housing and food insecurity) to ensure AI addresses marginalized populations' needs, where nurses' involvement helps to build trust and bridge participation gaps.
• State level: Duke’s AI Governance committee sets equitable AI deployment models across North Carolina, reducing rural-urban disparities and ensuring safety, fairness, and compliance.
• National level: Empowering nurses to deploy tools like the Bias Elimination for Fair AI in Healthcare (BE FAIR) framework, which mitigates AI bias to align with federal policies, while establishing frameworks for national AI equity standards.
• Global level: At the International Association of Medical Regulatory Authorities (IAMRA), Dr. Cary emphasized governance, ethics, and strategies for responsible AI integration, proposing global collaboration, accreditation frameworks, and ongoing AI competency in health education.
"AI has the potential to streamline these processes by automating document verification, using natural language processing to review applications, and standardizing evaluation criteria thereby minimizing redundancies and subjective biases."
– Dr. Michael P. Cary
“I serve as a bridge between clinical practice and technological innovation, a role strengthened by my clinical and research background,” Dr. Cary explains. “My experience as a nurse, health services researcher, and applied data scientist provides a comprehensive understanding of patient care and the systemic challenges within healthcare.”
“Specifically, my research program focuses on developing machine learning models to predict patient outcomes in postacute care (PAC) settings. These models are designed to reduce bias and support data-informed clinical decisions, particularly for older adults who are at risk of adverse outcomes. This allows me to collaborate effectively with data scientists, clinicians, and policymakers, ensuring that AI tools are not only technically robust but also practically applicable.”
Dr. Cary notes that current medical licensing processes in the U.S. involve extensive paperwork and manual credential verification. This is compounded by the varying licensing requirements that must be fulfilled if a medical professional wants to practice across different states. He explains how this leads to inefficiencies that can be problematic in rural and underserved communities that need health care services. He highlights how AI could be used to streamline these processes but stresses the need to proceed with caution.
“AI has the potential to streamline these processes by automating document verification, using natural language processing to review applications, and standardizing evaluation criteria thereby minimizing redundancies and subjective biases,” he says. “However, as regulators consider new and innovative ways to improve licensing processes, it’s crucial to proceed with the same cautionary principles applied to other AI systems. These tools must undergo rigorous evaluation for bias and discrimination before being deployed. By ensuring equity and accuracy in AI-driven systems, regulators can enhance efficiency while maintaining public trust and fairness in medical licensing practices.”
State-specific licensing requirements, enhanced by AI-driven systems, can expedite evaluation processes for international medical professionals seeking quicker licensure without added drawbacks, he says.
This, Dr. Cary explains, comes down to the regulator’s ability to track relevant data.
“The Physician Information Exchange (PIE) platform, for example, serves as a critical tool in ensuring transparency and accountability,” Dr. Cary says. “It has facilitated the sharing of information between medical regulatory authorities, allowing agencies to identify doctors with fraudulent applications or those with sanctions who may attempt to practice in another jurisdiction. By leveraging platforms like PIE alongside AI tools, regulatory agencies can maintain robust oversight, track individual progress through the licensure process, and contribute to a safer, more sustainable global health workforce."
Dr. Cary acknowledges that biases and subjective human judgments, often influenced by unconscious bias, can affect credential assessments. To ensure fairness in the standards used for evaluation, he stresses that AI frameworks must first be developed with health equity at the core.
His team employs the Bias Elimination for Fair AI in Healthcare (BE FAIR) framework to support the credentialing process. “Although BE FAIR was originally designed to address bias in clinical algorithms, its principles of reducing bias and promoting equity among groups can be adapted to support the credentialing process,” Dr. Cary says. “Applying BE FAIR to credentialing could allow AI systems to:
• Analyze credentialing data to uncover patterns of inequity.
• Adjust algorithms to ensure factors unrelated to professional competence, such as socioeconomic background or place of education (and other factors that potentially discriminate against individuals), do not negatively impact evaluations.
• Provide explanations for credentialing decisions to promote accountability and fairness.
• Regularly review and update algorithms to address emerging biases.
"Particularly for vulnerable populations, AI tools must incorporate robust encryption methods, strict access controls, and comply with regulations like HIPAA to protect patient information."
– Dr. Michael P. Cary
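The equity review these principles call for can be illustrated with a simple disparity check on credentialing outcomes. This is a minimal sketch only: the data, group labels, and tolerance threshold are hypothetical, and it is not the actual BE FAIR implementation, which works with far richer models and clinical data.

```python
# Illustrative sketch of a disparity check in the spirit of BE FAIR.
# All data, group labels, and thresholds are hypothetical.

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) records."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparities(decisions, tolerance=0.10):
    """Flag groups whose approval rate trails the best-performing group
    by more than the tolerance -- a signal for human review, not a verdict."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if best - r > tolerance)

# Hypothetical credentialing outcomes: (applicant group, approved?)
records = [("A", True), ("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False), ("B", False)]
print(flag_disparities(records))  # group B trails group A by 0.50
```

A flag here is only the starting point: as Dr. Cary stresses, the follow-up is human review of the factors driving the gap and adjustment of the underlying algorithm.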
Looking ahead to a future where AI is harnessed by medical professionals with enthusiasm, Dr. Cary underscores the need for strong governance committees to ensure the safe and ethical use of AI tools, including those used in the licensing process. He emphasizes that these tools must undergo rigorous evaluation for bias and discrimination before deployment and be continuously monitored for potentially discriminatory effects, while also ensuring patient health data is managed responsibly.
“Particularly for vulnerable populations, AI tools must incorporate robust encryption methods, strict access controls, and comply with regulations like HIPAA to protect patient information,” Dr. Cary says. “Additionally, these tools should employ data anonymization techniques to safeguard patient identities while allowing for valuable insights. Routine checks and highlighting areas of concern in AI tools will allow regulatory bodies to maintain efficiency without compromising the thoroughness required to ensure patient safety and uphold professional standards.”
Dr. Cary also anticipates more personalized and predictive health care to continually develop. “[The] integration of AI with electronic health records and wearable technology could enable continuous monitoring and early detection of health issues,” Dr. Cary says. “These innovations hold great promise for improving health outcomes and advancing health equity.”
With over 30,000 registrants, the Ontario College of Social Workers and Social Service Workers (“the College”) is the regulatory body for social workers and social service workers in Ontario. Our mandate is to serve and protect the public interest through self-regulation of the professions.
We are seeking a dynamic Coordinator, Digital Content and Design to join the Communications team. This role requires an exceptionally creative, collaborative and detail-oriented professional, with a passion for producing impactful content. Reporting to the Director, Strategic Communications and Government Affairs, you’ll combine your passion for design, storytelling and digital strategy to create visually compelling content that aligns with the College’s broader strategic goals and objectives and resonates with the public we serve.
Read More
Dean Benard, President and CEO Benard + Associates
Have you ever watched a movie and thought, “Wait! We have that now! We can do that!”? Let’s consider Minority Report and Mission Impossible. These two movies provide compelling foreshadowing of potential artificial intelligence (AI) applications in investigations. Both films showcase futuristic technologies that resonate with the real-world development of AI tools, particularly in the context of predictive analytics, evidence gathering, and ethical challenges in investigations.
The integration of AI into investigative processes is not just a fictional vision anymore; it is quickly
becoming a reality. As we strive to uphold standards, protect the public, and maintain trust in various professions, AI offers tools that can enhance our efficiency and accuracy. However, with these advancements come challenges that require our careful consideration.
In my upcoming talk for the AI in Regulation Conference: Global Perspectives and Local Leadership, I will explore how AI will reshape investigations and introduce the opportunities and ethical dilemmas that come with this technology. For now, consider this article a primer for what’s to come as I describe several AI applications we might soon see in practice.
Traditionally, regulatory investigations have been reactive, addressing issues as they surface. AI can potentially shift this paradigm by enabling predictive risk modeling, allowing us to forecast potential violations based on historical data (yes, kind of like Minority Report). For instance, in healthcare, AI can analyze billing patterns to identify anomalies indicative of fraud, enabling early intervention and prevention of significant harm. This proactive stance is applicable across various professions, from law to engineering, where early detection of non-compliance can prevent escalation. However, as exciting as this technology is, it requires a delicate balance between innovation and fairness. We must question the accuracy of these predictions and establish safeguards to prevent false positives or undue targeting. Predictive tools can only be valuable if they enhance, not replace, our commitment to justice and equity.
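As a toy illustration of that billing-pattern analysis, an anomaly screen can flag practitioners whose totals depart sharply from the norm. The figures, IDs, and threshold below are invented, and real predictive models are far more sophisticated; the sketch also shows exactly why false-positive safeguards matter, since a statistical outlier is not proof of fraud.

```python
# Toy anomaly screen: flag billing totals far above the group average.
# Hypothetical figures; an outlier is a prompt for review, not a finding.
import statistics

def flag_outliers(billing, z_threshold=1.5):
    """Return practitioner IDs whose billing exceeds the mean
    by more than z_threshold population standard deviations."""
    amounts = list(billing.values())
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:  # all identical: nothing stands out
        return []
    return sorted(pid for pid, amt in billing.items()
                  if (amt - mean) / stdev > z_threshold)

# Hypothetical monthly billing totals per practitioner ID
monthly_billing = {"P-101": 12_400, "P-102": 11_900, "P-103": 12_750,
                   "P-104": 13_100, "P-105": 41_000}
print(flag_outliers(monthly_billing))  # flags P-105 for human review
```

Note that the threshold choice is itself a fairness decision: set it too low and honest practitioners are targeted, too high and genuine anomalies slip through.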
Let’s face it: bias is a part of being human. It creeps into our investigative reports, sometimes in ways that we don’t even realize. AI offers a fascinating solution by analyzing text for subtle biases that may escape human reviewers. Through natural language processing (NLP), AI identifies patterns in language that suggest prejudice, favoritism, or stereotyping. Take, for example, a misconduct investigation.
AI might flag language like describing a complainant as “emotional” while calling a respondent “confident.”
Such descriptors can subtly influence perceptions decision makers generate about the parties. By highlighting these biases, AI ensures reports remain neutral and focused on facts. This isn’t just about catching mistakes; it’s about reinforcing our commitment to fairness. With proper training and transparency, AI can help us eliminate unintentional bias, ensuring that everyone involved in an investigation is treated equitably.
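A full NLP pipeline is beyond a short example, but the basic idea of flagging loaded descriptors can be sketched with a simple lexicon check. The word lists below are illustrative only, not a validated bias lexicon, and production systems rely on trained language models and context rather than bare word matching.

```python
# Minimal sketch of lexicon-based bias flagging in report text.
# Descriptor lists are illustrative, not a validated bias lexicon.
import re

LOADED_DESCRIPTORS = {
    "emotional": "may minimize credibility",
    "hysterical": "may minimize credibility",
    "confident": "may inflate credibility",
    "articulate": "may carry stereotyped praise",
}

def flag_descriptors(report_text):
    """Return sorted (word, concern) pairs for loaded descriptors in text."""
    words = re.findall(r"[a-z]+", report_text.lower())
    return sorted({(w, LOADED_DESCRIPTORS[w]) for w in words
                   if w in LOADED_DESCRIPTORS})

text = ("The complainant appeared emotional during the interview, "
        "while the respondent gave a confident account.")
for word, concern in flag_descriptors(text):
    print(f"{word}: {concern}")
```

Even this crude version surfaces the asymmetry described above, flagging "emotional" against the complainant and "confident" for the respondent so a human editor can decide whether the wording is justified by the evidence.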
In regulatory work, not all cases are created equal. Some demand immediate attention due to public safety concerns, while others carry less urgency. AI-driven dynamic case prioritization enables us to allocate resources effectively by scoring cases based on criteria like severity and public interest. For instance, a minor licensing infraction might rank lower than a case involving professional misconduct that risks harm to clients. By automating this process, AI allows us to focus on high-impact cases, improving efficiency without sacrificing thoroughness. That said, we must approach this technology cautiously. What criteria should drive prioritization? How do we ensure transparency in these algorithms? We have seen criticism of regulators in the past where these types of decisions were alleged to have been poorly implemented, and that was when humans did it!
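The scoring idea can be made concrete with a weighted-criteria sketch. The criteria, weights, and case descriptions here are invented for illustration; as noted above, choosing and transparently justifying the real criteria is the hard regulatory problem, not the arithmetic.

```python
# Illustrative weighted scoring for case triage.
# Criteria and weights are hypothetical and would need transparent
# justification before any real-world use.

WEIGHTS = {"severity": 0.5, "public_risk": 0.3, "recurrence": 0.2}

def priority_score(case):
    """Weighted sum of 0-10 criterion ratings; higher = more urgent."""
    return sum(WEIGHTS[k] * case[k] for k in WEIGHTS)

def triage(cases):
    """Sort cases from most to least urgent by score."""
    return sorted(cases, key=priority_score, reverse=True)

cases = [
    {"id": "minor licensing infraction",
     "severity": 2, "public_risk": 1, "recurrence": 3},
    {"id": "misconduct risking client harm",
     "severity": 9, "public_risk": 8, "recurrence": 5},
]
for c in triage(cases):
    print(f'{c["id"]}: {priority_score(c):.1f}')
```

Publishing the weights and criteria alongside the scores is one straightforward way to answer the transparency question the paragraph above raises.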
If you haven’t encountered a deepfake yet, let me tell you—they’re as impressive as they are alarming. These hyper-realistic forgeries of video, audio, or images are
already complicating our lives. Imagine a video submitted as evidence showing a professional behaving inappropriately during a consultation. It looks real, but AI video authentication tools might detect unnatural blinking patterns or inconsistent lighting, exposing it as a deepfake. Deepfakes highlight the necessity of advanced detection tools in maintaining the integrity of digital evidence. But they also remind us of the cat-and-mouse game between technology and those who seek to manipulate it. Staying ahead of these developments is crucial.
Now let’s talk about what might seem like science fiction but is closer than you think: AI as a “digital witness.” Imagine AI systems monitoring professional interactions and flagging potential misconduct in real-time.
This might sound outrageous to some and appealing to others, but it also raises ethical questions about privacy, data ownership, and accuracy. From the regulated professional’s side, we are currently seeing digital note-taking programs in use, ostensibly to save time and create greater efficiency. But what about using AI technology to monitor activities as a type of defensive practice against potentially fabricated complaints? Will we eventually see AI chaperones in rooms rather than other people?
The possibilities are almost exponential as we consider AI applications. Emerging technologies are exploring neural signal interpretation: yes, reading brain activity. Imagine AI detecting dishonesty or intent based on neural patterns. While this remains largely theoretical,
the implications are staggering. Then there’s the idea of “digital regulators.” These systems could autonomously handle tasks like ensuring professionals meet licensing requirements. But what happens when AI misinterprets ambiguous rules? This tension between automation and nuance will undoubtedly have a major impact on the future of regulation.
In discussions about AI, the focus is often on how we can work more efficiently, be proactive, and guard against unscrupulous uses of the technology. However, AI can also be used to research and create strategies and to inform operational decisions and policy. The dynamic case prioritization mentioned earlier is a means of making decisions, but AI can also analyze thousands of past cases and their outcomes to help establish the criteria used to determine prioritization. It could likewise provide a clear history of decisions to measure historical consistency in decision making and inform new decisions, supporting consistent outcomes in cases involving similar issues and even similar evidence.
Just as Minority Report showed us a world where crimes could be predicted and Mission Impossible dazzled us with its use of cutting-edge technology to expose deception, we find ourselves standing at the intersection of innovation and reality.
AI is no longer a cinematic fantasy; it’s becoming an integral partner in our efforts to ensure fairness and
integrity in investigations. However, as those movies also warned, technology comes with risks and ethical complexities. As we adopt AI to enhance efficiency, uncover hidden patterns, and protect the public, we must remain vigilant to its limitations and potential misuse.
Like the heroes of these films, our mission, should we choose to accept it, is to harness AI’s power responsibly, keeping humanity and justice at the core of every decision.
The future may feel like science fiction, but it’s ours to shape. Let’s meet this challenge head-on, equipped with both curiosity and caution.
The Canadian Network of Agencies for Regulation (CNAR) was established in 2003 as a federation of national organizations whose provincial and territorial members are responsible for protecting the public through profession and occupation regulation. Today, CNAR connects Canada’s provincial and national regulators, licensing boards, accrediting agencies, examining bodies, and government officials at all levels to discuss challenges, share ideas and develop best practices related to a wide range of issues relevant to organizations engaged in the regulation of professions and occupations.
The CNAR Board of Directors is seeking an Executive Director (ED) to provide visionary leadership and strategic direction for the organization, advancing CNAR’s mission of fostering engagement among Canada’s professional regulatory community and enhancing regulatory excellence through its programming. The ED will also ensure alignment of CNAR’s programming and initiatives with the evolving needs and priorities of its members, while building meaningful relationships and delivering measurable value to its members.
Read More
The Registrant
Immigration lawyer Will Tao on rethinking immigration systems with artificial intelligence
The Registrar staff
At a time when immigration policies are more systematic than they are humanistic, British Columbia-based immigration lawyer Will Tao (he/him) is pushing for change by bringing his unique perspective and expertise to the table.
Tao has carved a professional niche for himself by combining legal expertise with a deeply empathetic approach to his clients' experiences. A prolific writer and policy advocate, he is also the cofounder of Heron Law Offices and creator of the award-winning Vancouver Immigration Blog.
Drawing from his career in immigration law, Tao emphasizes the critical intersection of immigration and modern regulation and the potential of emerging technologies to humanize the processing of applications for those seeking to immigrate to Canada.
Tao's professional trajectory is shaped by his personal experiences as a child of Chinese immigrants, his connection to ethnocultural and Indigenous communities in Canada, and his deep interest in people and their stories.
Having worked with highly respected law firms in Vancouver, including Larlee Rosenberg and Edelmann & Co, he developed a deep understanding of the legal field, particularly in navigating complex cases involving immigration. However, Tao always thought about the
structural inequities in legal practice, both internally and externally. “I didn’t resonate with the fully corporate practice, where I’d just help people move in masses here and not really think about the people at the end of the day,” he says. “[Immigration law] is whiteness personified. You're in a white space, and I felt I was always being that kind of outsider on the inside. Am I invited to the leadership conversations? Do you want me in the future role of a partner? When should I speak? When should I not speak?”
These experiences directly tied into his professional philosophy when establishing the award-winning Heron Law Offices, Tao’s law firm that placed first in national and regional Tier 1 rankings from Best Lawyers for immigration law practice areas in 2025.
Tao is deeply passionate about new technologies, particularly artificial intelligence, and their application in regulation. He emphasizes the critical role of regulatory safeguards in maintaining standards and ensuring accountability, especially as emerging technologies present both opportunities and challenges for end users. He envisions frameworks that integrate technology without losing the human touch, and he actively highlights how AI and automation can streamline immigration processes in Canada.
“That's leading to the work I'm doing around automation and technology,” Tao says. “From the applicant's perspective, individuals want to tell their stories, differentiate their cases, and have their unique situations fully understood, with their documents and letters genuinely considered.”
He asserts that AI cannot replace the nuanced judgment of human processing and cautions legal professionals against a potential over-reliance on technology. This reliance may stem from the mindset of ‘tech solutionism,’ the belief that technology will improve over time. “The current system, driven by advanced analytics, often relies on algorithms to profile applicants based on factors such as their country of origin, citizenship, gender, and age—elements beyond their control and, at times, beyond their understanding.”
One solution Tao calls for is ensuring that tech tools are designed with inclusivity in mind, so that vulnerable and underserved communities aren’t marginalized in the immigration process. He emphasizes the need to address the foundational issues in how these programs were initially implemented in Canada to ensure automation and technology tools are fair and equitable.
"The starting point [in Canada] is examining how these programs were initially rolled out,” Tao says. “Initially, there was no tool to bulk-process visitors, work permits or render certain decisions. Instead, these systems were quietly implemented, often through litigation, without public transparency or announcements about their purpose or functionality. It wasn’t until issues were raised—such as the disproportionately high refusal rates for Francophone students, which became a political debate and led to parliamentary committee hearings where I testified—that the government started sharing information. Even then, the information presented was, in my view, somewhat misleading."
"That's leading to the work I'm doing around automation and technology. From the applicant's perspective, individuals want to tell their stories, differentiate their cases, and have their unique situations fully understood, with their documents and letters genuinely considered" – Will Tao
Drawing on his experience as chair of the City of Vancouver's 'Cultural Communities Advisory Committee', Tao compares the current approach to a flawed development model.
“Sometimes, these project developers would come to our committee right at the last minute, when their project was already done, just to get us to rubber-stamp it and have a report saying they talked to us, without showing they had done appropriate studies on racial bias and gender assessments or offered opportunities for critique before approval,” Tao explains.
“Individuals want to tell their story and differentiate their cases without a system working off advanced analytics that treats them as algorithmic data points,” he adds. “We’re advocating to humanize this discussion and remind decision-makers there needs to be meaningful human involvement in the process first.”
The concept of "human in the loop" remains pivotal to AI applications used in operations, Tao explains, but he cautions against assuming that human intervention eliminates bias. “Humans can bring their own biases or simply act as rubber stamps, perpetuating the same issues automation seeks to address,” he says. “There needs to be meaningful human involvement, not just a human in the loop to actually have human feedback on the system, but being invited to the table to discuss it, speaking to the people who are most impacted.”
Rethinking how decision-making processes in the AI era are implemented is crucial, Tao highlights, as courts now require disclosure on an individual’s use of AI during proceedings. “The courts do want to know when folks are using these tools and trying to figure out ways to protect individuals, especially self-represented applicants, from the harms of hallucination or misrepresenting in their work.”
As with any emerging technology that garners organizational interest, Tao admits that his views are still evolving as he continues his research on the topic. “I would like to use the word expert very lightly,” he says, but affirms that he is committed to exploring these complexities and advocating for systems that prioritize fairness and equity.
When asked what advice he would offer to professionals in the regulatory sector, Tao was clear: focus on building relationships. He believes regulation is not just about enforcing rules but creating trust and fostering engaging dialogue with stakeholders, including lawyers, advocacy groups, and the communities they serve.
“If you set some good guiding standards now and at least get people to be aware of both the benefits and harms of these technologies, you can shape some behaviour to avoid negative consequences,” Tao concludes. “Bringing AI to the masses and thinking about ways to reach those who’ve historically been impacted, could lead to systems that are more inclusive and better reflect the realities and needs of immigrants.”
Our Communications on Retainer (CORe) program enables regulators with limited resources or temporary capacity constraints to retain MDR Strategy Group to support new and ongoing communication priorities.