
ARTIFICIAL INTELLIGENCE AND THE EVOLVING REGULATOR
Protecting the public interest in an age of algorithms
Melissa Peneycad, director, public engagement strategy and special advisor on AI, MDR Strategy Group
Regardless of the professionals they license and regulate, all regulatory bodies exist to protect the public interest.
They do this by ensuring that professionals such as doctors, engineers, teachers, and social workers meet rigorous standards to enter and remain in practice. They enforce legislation, standards of practice, and codes of conduct; manage complaint and discipline processes; and apply disciplinary actions when needed. While these core responsibilities are universal across regulators and have not changed, the context in which regulators operate is shifting rapidly.
Artificial intelligence (AI) is no longer a future consideration, but part of our present reality. From clinical decision support and algorithmic hiring platforms to generative design, process automation, and predictive analytics, AI is transforming the work of the professions regulators license and regulate. But what about regulators themselves? Can AI help them better uphold their public protection mandate? Should it? And if so, how can it be implemented responsibly?
In this article, I explore how AI can support and strengthen regulators’ public interest mandates, and I highlight key strategic, ethical, and operational considerations for regulatory leaders seeking to evolve their organizations with care, clarity, and credibility.
Upholding the public interest in a changing world
Regulators are tasked with protecting the public by ensuring that only qualified and competent professionals are licensed to practice. This involves setting entry-to-practice standards, maintaining codes of professional conduct, managing complaint and discipline processes, and applying corrective action where needed, from mandatory training to license suspension or, in the most serious cases, revocation.
These are not abstract responsibilities. They directly affect people’s health, safety, rights, and financial well-being. So as the world changes technologically, socially, and economically, new risks emerge, public needs and expectations shift, and decision-making grows more complex, our approach to licensing and professional regulation must keep pace.
Enter AI.
While AI has raised legitimate concerns around bias, transparency, and overreach, it also holds real promise for regulators.
AI as a tool for public protection
AI is already used in sectors like healthcare, education, and finance to analyze large datasets, detect anomalies, and support decision-making. For regulators, similar applications are beginning to take shape, and they deserve attention. For example:
• Triage and intake: Natural language processing (NLP) models can help analyze and categorize complaints as they arrive, flagging potentially high-risk cases for faster human review. This doesn’t replace human judgment; it augments it. The same approach can be extended to applications for licensure. In the last issue of The Registrar, we interviewed Dr. Michael P. Cary, Duke University’s inaugural AI health scholar, whose work explores how AI can enhance licensure processes through careful, human-centred evaluation.
• Public support: AI-powered chatbots can assist members of the public in navigating complex regulatory websites and enhance accessibility when designed with clear boundaries and appropriate oversight. These tools can provide 24/7 support, answering frequently asked questions, directing users to relevant resources, or helping them understand how to file a complaint or check a professional’s status.
• Data-driven insights: AI-powered analytics can help regulators identify trends in complaints, disciplinary outcomes, or professional conduct, offering a more proactive approach to risk management.
• Continuous competence: AI can support monitoring of continuing professional development (CPD) through tools that track, assess, or even personalize registrant learning over time.
• Administrative efficiency: Automating repetitive administrative tasks, such as document review, redaction, or case categorization, can free up skilled staff to focus on higher-value work.
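To make the triage idea above concrete, here is a minimal, hypothetical sketch: a keyword-based risk scorer that flags incoming complaints for priority human review. The keywords, weights, and threshold are invented for illustration, not a production model; a real deployment would use a properly trained and audited NLP classifier, with a human always making the final call.

```python
# Illustrative complaint-triage sketch: score free-text complaints against
# hypothetical risk keywords and flag high scorers for priority human review.
# Keywords, weights, and threshold are assumptions for demonstration only.

RISK_KEYWORDS = {
    "harm": 3, "injury": 3, "assault": 3,
    "fraud": 2, "misconduct": 2, "impaired": 2,
    "delay": 1, "rude": 1, "billing": 1,
}
FLAG_THRESHOLD = 3  # scores at or above this go to the fast-review queue

def triage(complaint_text: str) -> dict:
    """Return a risk score and a flag for faster human review."""
    words = complaint_text.lower().split()
    score = sum(RISK_KEYWORDS.get(w.strip(".,!?"), 0) for w in words)
    return {
        "score": score,
        "priority_review": score >= FLAG_THRESHOLD,  # a human still decides
    }

if __name__ == "__main__":
    print(triage("Billing delay on my renewal."))
    print(triage("The practitioner caused serious harm while impaired."))
```

The point of even a toy like this is the workflow, not the model: the tool only reorders the queue, and every case, flagged or not, still receives human review.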
These innovations are not about doing more with less. Rather, they’re about doing better with what we have: improved risk identification, enhanced public protection, and more thoughtful, fairer support for registrants during the licensure application, renewal, and even investigation processes.
Strategic fit matters
Despite the potential, not every regulatory task is well suited for AI. One of the most important things a regulator can do is ask: Is AI the right tool for this problem?
Several strategic questions can help guide the decision, some of which include:
• Is the task high-volume, repetitive, and data-heavy? AI excels at these.
• Does the task require nuanced, ethical, or context-rich decision-making? AI may assist, but human oversight is essential.
• Can the outcomes of AI use be explained and justified? If not, it may erode trust.
• Will this improve fairness, efficiency, or effectiveness for the public? That’s the bottom line.

Being intentional about AI adoption helps regulators avoid implementing tech because it’s “on trend” and instead focus on tools that genuinely support their mandate.
Ethics, transparency, and trust
Perhaps the most significant challenge for regulators adopting AI is maintaining trust with registrants, staff, and the public.
AI tools are only as good as the data they’re trained on. Poor or biased data can lead to unfair outcomes, and algorithms that cannot be explained, or that appear to “black box” important decisions, risk undermining credibility and procedural fairness.
To uphold trust, I argue that regulators must ensure human oversight of all AI-driven decisions, use explainable models wherever possible, establish clear and transparent policies on when and how AI will be used, and conduct regular audits to ensure fairness, accuracy, and alignment with legal and ethical standards.
AI should never replace human judgment; it should support it. After all, public protection depends not only on what decisions are made, but on how they’re made and who is accountable for them.
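The regular audits mentioned above can start simply. The sketch below assumes decision records carry a cohort (e.g., demographic or applicant-type) field, and compares how often an AI tool flags cases in each cohort; the record structure, cohort labels, and data are hypothetical. A gap between cohorts is not proof of bias, but it is exactly the kind of signal that should trigger human review.

```python
# Illustrative audit sketch: compare how often an AI tool flags cases across
# cohorts. Record structure and cohort labels are hypothetical examples.
from collections import defaultdict

def flag_rates(records):
    """records: iterable of dicts with 'cohort' and 'flagged' keys.
    Returns the fraction of flagged cases per cohort."""
    counts = defaultdict(lambda: [0, 0])  # cohort -> [flagged, total]
    for r in records:
        counts[r["cohort"]][0] += int(r["flagged"])
        counts[r["cohort"]][1] += 1
    return {c: flagged / total for c, (flagged, total) in counts.items()}

records = [
    {"cohort": "A", "flagged": True},
    {"cohort": "A", "flagged": False},
    {"cohort": "B", "flagged": True},
    {"cohort": "B", "flagged": True},
]
print(flag_rates(records))  # {'A': 0.5, 'B': 1.0}
```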
Becoming an AI-ready regulator
Integrating AI into the operations of a regulatory body should not be viewed simply as a tech initiative but as an organizational transformation, and I contend it starts with education and mindset.
1. Build digital fluency:
Regulators should understand how AI affects, or could affect, the professions they regulate and their regulatory responsibilities. This doesn’t mean regulators need to become data scientists; it means becoming informed decision-makers.
2. Promote a culture of innovation:
Regulatory bodies are, by necessity, risk-aware. But innovation and public protection aren’t mutually exclusive. Regulatory leaders can create safe spaces to test, learn, and iterate, anchored by strong values and clear accountability.
3. Communicate with purpose:
Introduce AI changes with transparency and clarity. Explain why a new tool is being adopted, what it will (and will not) do, and how it aligns with the public interest.
4. Engage stakeholders early and meaningfully:
Understanding how AI is perceived by registrants, staff, and the public is essential. Conduct surveys, host roundtables, or include registrant representatives in working groups. Engagement builds trust, identifies concerns early, and helps shape solutions better aligned with stakeholder expectations.
5. Establish an AI vision, goals, and meaningful KPIs:
Begin by articulating a clear vision for how AI aligns with your regulatory mandate. Set measurable goals that reflect both efficiency and fairness. Then, define KPIs to monitor progress, such as turnaround times for resolving complaints, application processing improvements, registrant satisfaction, or early risk detection. Combine quantitative and qualitative metrics to capture a complete picture of success.
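One of the KPIs named above, turnaround time for resolving complaints, is straightforward to compute from case records. The sketch below is illustrative; the field names and dates are assumptions, and a real dashboard would draw from the regulator’s case-management system.

```python
# Illustrative KPI sketch: median days from complaint receipt to resolution.
# Field names and sample dates are hypothetical.
from datetime import date
from statistics import median

cases = [
    {"received": date(2025, 1, 6),  "resolved": date(2025, 1, 20)},  # 14 days
    {"received": date(2025, 1, 10), "resolved": date(2025, 2, 14)},  # 35 days
    {"received": date(2025, 2, 3),  "resolved": date(2025, 2, 24)},  # 21 days
]

def median_turnaround_days(cases):
    """Median elapsed days between receipt and resolution."""
    return median((c["resolved"] - c["received"]).days for c in cases)

print(median_turnaround_days(cases))  # 21
```

The median is used here rather than the mean so a handful of long, complex investigations don’t mask improvements in routine cases; tracking both, alongside qualitative measures like registrant satisfaction, gives the fuller picture the article describes.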
6. Develop an implementation plan:
A roadmap is essential for responsible integration. This includes identifying priority areas, selecting appropriate tools, setting timelines, ensuring adequate training, and defining accountability. Pilots, phased rollouts, and clear evaluation checkpoints can help regulators learn and adapt as they go.
7. Collaborate across the sector:
AI readiness doesn’t need to be done in isolation. Sector-wide dialogue, resource sharing, and collaborative pilots reduce duplication and strengthen outcomes. This was a central theme at our February 2025 AI in Regulation conference, which brought together regulators from across Canada, the United States, and Europe to share use cases, explore governance frameworks, discuss ethical considerations, and collectively advance sector-wide readiness.
Going from risk to readiness
Like most organizations, regulatory bodies don’t have the luxury of standing still. As AI continues to reshape the world of work, and the very professions regulators license and oversee, regulatory bodies must rise to meet the moment.
Being AI-ready isn’t about adopting every new tool or chasing trends. It’s about understanding the landscape, asking the right questions, and ensuring that any use of AI serves one purpose above all else: protecting the public.
At MDR Strategy Group, we believe that readiness is more than just awareness—it’s a way of thinking and operating that supports responsible innovation across the regulatory sector.
As the conversation around AI in professional regulation continues to evolve, we are working closely with regulatory leaders to explore how best to prepare for the opportunities and challenges ahead. If your organization is beginning to think about AI’s role in licensing, conduct, complaints, or organizational operations, we’d be happy to start the conversation. Whether through a tailored educational session or strategic dialogue, we can help you explore what AI readiness could look like in your context.
In today’s world, protecting the public interest isn’t just about managing risk; it’s about anticipating it. This requires tools, systems, and strategies built for the complexity of the world in which we live. AI is not going away.
Love it or hate it, it’s a technology we must all be prepared to understand and use, to our benefit and the benefit of the public we are charged with protecting.