
INSURANCE INNOVATION MEETS ACCOUNTABILITY

RIBO’s approach to regulating AI and automation

The Registrar staff

Inside the typical insurance professional’s office, fax machines still hum, spreadsheets are filled in with (sometimes excessive) manual data entry, and multilayered CRM systems handle communication. But in an increasingly automation-driven industry, efficiency must be redefined.

As regulated insurers and brokers explore artificial intelligence (AI) to streamline processes, improve decision-making, and enhance customer experiences, questions about regulation, accountability, and transparency remain at the forefront. Recognizing the urgency of these issues, the Registered Insurance Brokers of Ontario (RIBO) commissioned a study to assess AI’s impact and establish regulatory guidelines.

“We know AI isn’t just coming, it’s already here,” says Jessica Harper, RIBO’s director, policy, licensing and standards. “The challenge isn’t whether to regulate AI, but how to do it in a way that protects consumers while allowing for innovation.”

Jessica Harper, director, policy, licensing and standards, RIBO

Discovering initial findings

RIBO’s study, completed in summer 2024, aimed to understand the implications of AI in Ontario’s insurance sector, and ensure that both regulated insurance professionals and the clients they serve could benefit from the technology without undue risks. “Our goal with this study was to better understand where AI is already being used and where it’s heading,” says Harper. “Just as importantly, we needed to determine where regulatory gaps exist so we can establish proper guardrails.”

Early findings from the study highlighted several key regulatory areas of concern:

Bias and fairness: AI models, if not carefully managed, may unintentionally discriminate based on gender, race, or other protected characteristics.

Transparency: Many AI-driven processes operate as “black boxes,” meaning their decision-making mechanisms are not easily explainable.

Accountability: If an AI model makes an incorrect or unfair decision, questions will arise about who is responsible: the broker, the insurer, or the software provider.

These concerns underscore the need for proactive regulation, Harper stresses. “If we don’t establish accountability from the start, we risk creating a system where no one is responsible for AI-driven mistakes,” she says.

Behavioural touchpoints in AI decision-making

Accountability remains one of the biggest challenges in widespread AI adoption, according to Harper. The technology introduces new layers of automation and complexity, making it difficult to pinpoint responsibility when errors occur. “We have to ensure that AI remains a tool, not a decision-maker in itself,” Harper emphasizes. “[Regulated] brokers must remain accountable for the decisions made on behalf of their clients.”

This need for accountability extends beyond professional regulation because of its impact on consumer trust. Research from the Behavioural Insights Team (BIT), which specializes in applying behavioural science to policymaking, highlights how individuals react differently to AI-driven decisions than to those made by humans.

“There’s a well-documented psychological effect where people are more likely to trust human judgement than machine judgement, even in situations where humans make more errors,” explains Sasha Tregebov, director, Canada at BIT. “When AI makes a mistake, such as denying an insurance claim unfairly, it feels impersonal and opaque, leading to greater frustration.”

Tregebov notes that transparency plays a critical role in public perception. “People are more accepting of AI when they understand how it works and why a particular decision was made,” he says. “That’s why explainability should be a core principle of AI regulation in insurance. We’ve seen that when companies are upfront about how AI models make decisions, even if the decision isn’t what the consumer wants, the consumer is more likely to accept it. The frustration comes when people don’t know why they’ve been denied coverage or received a different premium than expected.”

Sasha Tregebov, director, Canada, BIT

Another key insight from BIT’s research is the importance of human oversight in AI-driven decisions. “Even if AI makes an accurate decision 99 per cent of the time, that one per cent where it fails can have real consequences for people,” Tregebov explains. “That’s why a regulatory framework that ensures humans can step in and review decisions is still very important.”

Harper says these findings have informed regulatory thinking, prompting consideration of policies that would require AI-driven insurance decisions to be interpretable. “If a consumer can’t challenge or understand an AI decision, that’s a problem,” says Harper. “We need to ensure that people feel empowered, not sidelined, by this technology.”

Towards a delicate balance

Tregebov highlights a growing regulatory concern: Who takes responsibility when AI makes an incorrect decision? “If a client is denied coverage due to an AI model’s assessment, what’s the recourse? Who’s accountable? These are questions regulators need to answer now before AI becomes more deeply embedded in the industry.”

Recognizing this, and building on the findings of BIT’s research, Harper says RIBO is developing a regulatory framework that will establish clear expectations for AI use in insurance brokerages. The framework’s regulatory mechanisms include developing clear AI guidelines and making AI-driven decisions more transparent, explainable, auditable, and open to challenge by clients of insurance brokerage services.

Ongoing monitoring, adjustment, and industry collaboration also remain critical to this initiative, so that brokers, insurers, technology developers, and policymakers can engage in a regulatory environment that supports innovation without compromising ethical standards. “The worst-case scenario is an AI-driven system that reinforces biases or creates barriers for consumers,” Harper says. “Our job as regulators is to make sure AI is working for the public good, not just for efficiency or cost savings.”

Tregebov sees this as a critical moment for future-proofing insurance regulation. “The way AI is introduced and governed now will set the stage for years to come,” he says. “Getting it right today means avoiding significant issues down the road.”

With many tools in the insurance professional’s toolkit that were once seen as cutting-edge now feeling like relics of another era, Harper says RIBO remains committed to supporting professionals before and after a strong, regulated AI framework is established in Ontario. “The biggest misconception is that AI will replace brokers,” she says. “That’s not the future we see. AI should enhance a broker’s ability to serve their clients, not take their place. That’s the framework we’re building toward, one where AI supports human expertise, not overrides it. It [AI] has the potential to improve insurance outcomes in incredible ways, and our role is to ensure that happens responsibly.”

"This is just the beginning,” Harper concludes. “AI in insurance will evolve, and so will our approach. What matters is that we continue listening to brokers, insurers, regulators, and, most importantly, the public, to ensure that AI serves everyone fairly and effectively. How we regulate it today will define its role in insurance for years to come.”
