Fintech Finance presents: The Insurtech Magazine 07


AI & AUTOMATION: RESPONSIBLE AI

Judgement calls

Intelligent AI’s Anthony Peake was motivated by a tragedy to improve the quality of data used to identify risk. But with that comes great responsibility.

“The reason we started Intelligent AI was Grenfell Tower.” Anthony Peake, co-founder and CEO of the platform for risk underwriters, is talking about the night in June 2017 when fire ripped through a 24-storey block of flats in West London, leaving 72 people dead and exposing a shocking catalogue of safety failures. From the combustible cladding that fed the flames, to the woeful lack of readily available, detailed information on the building’s construction and layout that hampered rescue efforts, the data that might have flagged Grenfell as a disaster waiting to happen had never been fully collated or, more importantly, shared. Insurers were as much in the dark as anyone else.

Like many watching the tragedy unfold live on TV, Peake, who’d spent a lifetime in IT, was first appalled and then angry that information technology could have been used to avert this and other disasters, but wasn’t.

At the time of the Grenfell Tower fire, Peake was already involved in analysing fire service call-out data on behalf of insurers elsewhere, and he was curious to see the records for the block. It emerged that firefighters had been called to the tower 15 times during the previous year, but no claims had been made by residents as half did not have insurance, and the other half could not afford it, so there was no obvious data trail to inform any underwriting process.

“You had two different views of risk, one side completely blind (insurers), the other side (fire services) data-rich. And none of the insurers involved with Grenfell at that time were looking at fire service call-out data,” says Peake.


And so, with insurance professional Neil Strickland, he set about creating a solution that made sure insurers did see the full picture, using AI, data analytics and satellite image analysis to help deliver more accurate real-time commercial property underwriting and risk management – an improvement that, crucially, could save lives.

Intelligent AI plugs the knowledge gaps by taking a ‘digital twin’ approach to generating a statement of value (SoV), the file of information, often compiled manually, that traditionally underpins risk assessments – and which, according to Intelligent AI, often only contains 40 per cent of the available data. The insurtech uses AI to first cleanse, then analyse and compile data from more than 300 sources to create a virtual 3-D mirror image of a property. This visual representation of an SoV not only provides an assessment of the precise property, but also of the surrounding area in which it sits.

Such an approach to information gathering means that autonomous machines are surfacing and making judgement calls on ever more granular data in complex situations. And, while better data is of obvious benefit in terms of preventing loss of life and reducing the liability on insurers’ books, it potentially raises questions over its ethical collection and use. That’s a judgement call Anthony Peake is well aware of, which is why he’s currently working with Innovate UK on its ethical AI project.

“You could unlock data that an insurer did not have, and present a profile of an organisation as riskier than the insurer initially thought”

With growing reliance on immediate automated services, AI is fast becoming an integral part of insurance, but in parallel to its growing presence is the call for more robust regulation and surveillance of autonomous processes. IBM found that, while 71 per cent of insurers have data-centric products and services in their portfolios, many still lack a cohesive data strategy. And yet the information and level of detail they have access to is only likely to grow. According to Accenture’s 2021 Global Insurance Consumer Study, seven out of 10 people would share their data, including medical records and driving habits, with insurers if it meant more personalised pricing.

The responsible application of AI in determining outcomes based on this data requires insurers to do two things: first, ensure transparency in their sourcing of data and, secondly, monitor for bias in decision-making algorithms. Since last year, heavy-hitters like Google and Microsoft have committed to encouraging more ethical digital practices, providing advice on data security and AI development to the wider industry. And, in 2021, Intelligent AI teamed up with tech organisation Digital Catapult as part of the Innovate UK ethical AI project, which aims to increase support for companies like Peake’s as they develop and deploy AI. The aim is to make sure it is used in a way that does not unintentionally harm any individual or society at large.

“When you unlock a lot of data, you create profiles of organisations, in our case commercial businesses,” says Peake. “Along with some of the consortium members, I was very concerned that, quite often, you could unlock data that an insurer did not have, and, in our process, for example, present a profile of an organisation as riskier than the insurer initially thought.”

A big area of focus for the Innovate UK project was to make sure that AI does not discriminate against smaller companies in particular. For large businesses, insurers will often look past potential risk because they are guaranteed reliable returns, but the same latitude rarely extends to SMEs.

Unintended consequences

Though the road to unbiased claims processing is paved with good intentions,

