
Build AI you can trust

Peter Nyman interviewed Ann-Elise Delbecq about Ethical AI at the AI & Business Strategies event in October.

Artificial Intelligence solutions can improve many aspects of running a business. They can support sustainability, improve customer experiences and drive innovation, to name just a few areas. But we have to make sure we can trust that using AI won’t result in unintended consequences.


TEXT: DAVID J. CORD

Ann-Elise Delbecq is the Program Director of IBM’s Data Science and AI Elite team in EMEA Client Engineering. She explains that the challenge is observability. In general, observability is the extent to which we can understand the internal state of a complex system from its external outputs. Data observability focuses on the data layer, while model observability focuses on the machine learning model.

“Many companies fly blind when it comes to observability,” she says. “We need to improve control, bring AI into the business processes and make sure we are compliant.”

One example Delbecq’s team worked on involved a large US bank. The bank was concerned about how long it took to roll out a governed AI solution. It wanted to automate parts of the process, integrate it with existing model-building tools, and monitor fairness and drift in model behaviour to remain compliant. The bank worked with IBM to build a solution that met these requirements.
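The article describes this case only at a high level, so the sketch below is purely illustrative and not the bank’s or IBM’s actual tooling. It shows roughly what “monitoring fairness and drift” can mean in practice: a population stability index to flag data drift between training and live traffic, and a disparate impact ratio as a simple fairness check. The function names, thresholds and synthetic data are all assumptions.

```python
# Illustrative monitoring checks: data drift via the population stability
# index (PSI) and fairness via the disparate impact ratio. Thresholds shown
# are common rules of thumb, not regulatory requirements.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline (training) sample and a current (live) sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) and division by zero for empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

def disparate_impact(predictions, group):
    """Ratio of favourable-outcome rates between two groups (1.0 = parity)."""
    rate_a = predictions[group == 0].mean()
    rate_b = predictions[group == 1].mean()
    return rate_b / rate_a

# Synthetic data standing in for real scoring traffic.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # feature values at training time
live = rng.normal(0.3, 1.2, 10_000)       # feature values in production
preds = rng.binomial(1, 0.4, 10_000)      # model decisions (1 = approve)
group = rng.binomial(1, 0.5, 10_000)      # protected attribute

psi = population_stability_index(baseline, live)
di = disparate_impact(preds, group)
print(f"PSI: {psi:.3f} (values above ~0.2 often trigger an alert)")
print(f"Disparate impact: {di:.2f} (the 'four-fifths rule' flags values below 0.8)")
```

In a governed pipeline, checks like these would run continuously against live scoring traffic and feed their alerts back into the compliance workflow rather than being run by hand.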

The five pillars of trust built into the lifecycle of an AI application are fairness, explainability, robustness, documentation and privacy.

– Ann-Elise Delbecq, Program Director, Data Science and AI Elite Team, EMEA Client Engineering, IBM

More regulation of AI is coming. Policymakers want AI to respect fundamental rights and to be technically robust and reliable. The proposed EU AI Act would categorise applications and systems by risk level, with specific legal requirements for each category. For example, CV-scanning tools would be considered high risk.

Yet this isn’t just about compliance, risk management and preserving a good corporate reputation. A well-designed and well-managed system can improve safety, reliability and efficiency, with a direct positive impact on your company’s bottom line. IBM estimates that unreliable data could reduce annual revenues by 6%.

“If you standardise the design, use and management of AI models across the enterprise you can improve control, capitalise on existing models and accelerate new deployments,” Delbecq says. “You have a better overview of the entire process and system.”

Delbecq works in IBM’s Client Engineering, which co-creates solutions with customers: together they examine challenges and opportunities and develop solutions. AI is still a rapidly developing field, not just the technology but also its uses and regulation.

“The five pillars of trust built into the lifecycle of an AI application are fairness, explainability, robustness, documentation and privacy,” says Delbecq. “If you are interested in learning more about how we can help build trustworthy AI, you can find more information at ibm.com/artificial-intelligence/ethics or contact tarja.leporanta@fi.ibm.”
