AI & Business Strategies
AI STILL HAS TO EARN ITS WINGS
Consumers are often uneasy about how the technology uses their data, says Saara Hyvönen, co-founder of AI consultancy DAIN Studios. Companies need to make AI more trustworthy.
When you next board an airplane, have a look at your fellow travelers and ask yourself who had to pay more, and who less, for the same kind of seat as yours. Airlines have, of course, always had variable prices for seats – from great-value economy-class deals to the-sky-is-the-limit first-class offerings. But in recent years many carriers have adopted so-called dynamic pricing, which takes that idea a crucial step further. It uses artificial intelligence to price your ticket according to what the carrier knows about you – an algorithm takes all “your” data and decides whether you are in a position to pay more (or, yes, less) for that still-empty seat.

In theory, harnessing reams of individual data like this will produce the most consumer-friendly pricing practices – everybody on your aircraft paid only what they could afford, and the route you frequent stays profitable (in a world in which unprofitable routes get cut). In practice, however, dynamic pricing can make consumers extremely uncomfortable. Its use by large online retailers over the years has stirred more public suspicion than delight; ride-hailing companies regularly spark controversy for so-called surge pricing, the practice of charging (a lot) more at times of peak demand – or, allegedly, when your phone’s battery level is low.

The latter claim speaks volumes about one big risk of using AI to help with – or even take – business decisions. Consumers are often uneasy about what happens to their data in the AI-powered “black boxes” of the companies they do business with, and even slivers of anecdote can, rightly or wrongly, create distrust that drives them away. Ironically, as AI usefully spreads to more and more industries and corporate functions, the danger of consumer backlashes against the technology is rising. If companies want to continue leveraging it, they must address these concerns by adopting a more transparent – and so more trustworthy – AI.
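The pricing logic described above can be sketched in a few lines of Python. The features, weights, and caps below are purely illustrative assumptions for this article, not any carrier’s actual model, which would draw on far richer data:

```python
# Illustrative sketch of rule-based dynamic pricing. The "scarcity" and
# "urgency" signals, their weights, and the caps are hypothetical choices
# made for this example; real airline models are far more complex.

def dynamic_price(base_fare: float,
                  seats_left: int,
                  total_seats: int,
                  days_to_departure: int) -> float:
    """Return an adjusted fare from a base fare and two demand signals."""
    # Scarcity: the fewer seats remain, the higher the price (up to +50%).
    scarcity = 1.0 + 0.5 * (1 - seats_left / total_seats)
    # Urgency: booking within two weeks of departure raises it (up to +30%).
    urgency = 1.0 + 0.3 * max(0, 14 - days_to_departure) / 14
    return round(base_fare * scarcity * urgency, 2)

# A near-full flight booked at the last minute costs more than a
# half-empty one booked weeks ahead.
early = dynamic_price(200.0, seats_left=150, total_seats=180, days_to_departure=60)
late = dynamic_price(200.0, seats_left=10, total_seats=180, days_to_departure=2)
```

A model like this is at least auditable: each factor and its ceiling can be published and tested against the ethical tenets discussed below, which is exactly what opaque, personal-data-driven pricing makes hard.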
TO GET THERE, companies have to adopt clear ethical guidelines about how they use algorithms and other tools. What should they be used for? I would draw the line at AI that subliminally manipulates behavior, but give a thumbs-up to dynamic pricing and many other uses – on condition that we are clear about how AI should be used in each field: individual rights have to be respected, privacy protected, and discrimination against and manipulation of customers ruled out. Companies have to test their algorithms against these tenets before using them commercially – and make sure these standards never slip once they are on the market.

THE EU IS CODIFYING ETHICAL GUIDELINES in its AI Act, which could be passed in 2023 or 2024. So companies had better start to understand where they are using AI, how its algorithms work, and what effects they are having on consumers. The AIA is looking to ban AI applications that carry unacceptable risks, such as social scoring; force high-risk applications in areas like critical infrastructure to undergo conformity assessments; and oblige even limited-risk applications like chatbots to be more transparent about their use of AI. Companies should not view this as a threat, but as an opportunity to make AI more trusted – dynamic pricing included.

DAIN Studios is a Finnish-German data, AI, and insights consultancy. www.dainstudios.com

BUSINESS CLASS | HELSINKI-VANTAA EDITION | SUMMER 2022