US, UK and EU all make moves to regulate AI - What does it mean for telemedia?

The UK and US have signed a memorandum of understanding to look at the regulation of AI, hot on the heels of the EU’s AI Act being ratified in March, as the world starts to take seriously the regulation of this new technology. But what are the ramifications for the telemedia industry, asks Paul Skeldon?

The UK and US have signed a memorandum of understanding to look at the regulation of AI, hot on the heels of the EU’s AI Act being ratified in March (see panel). The governments of the world are finally waking up to the need to, if not control, then at least try to tame rampant AI development and turn it into a force for good – and to head off the clichéd Terminator-style ‘AI destroys the world’ outcome while they’re at it.

But what is it likely to mean for the telemedia sector, which has become so inherently reliant on AI?

Well, the main use of AI in telemedia – parsing vast data sets to better target consumers – is unlikely to be directly affected by these attempts at corralling the wild west of AI. The thrust of both the US-UK and EU rulings is to outlaw things like biometric targeting and profiling, as well as the European catch-all of “other high-risk AI systems”, which includes – today, at least; who knows what the future may class as high-risk AI? – critical infrastructure, education and vocational training, employment, and essential private and public services, to name but a few. The laws also heavily target emotion recognition and profiling, particularly on and around social media, and even more particularly of the young. None of this has a direct impact on telemedia today. However, as we have seen over the past 24 months, AI changes rapidly and readily, and has gone from having no hand in telemedia at all to being pivotal to pretty much all facets of the industry.

Who knows if emotion recognition, say, will be part of an interesting service offering down the line – probably not, now it’s been outlawed, but you get my point: we don’t know yet what impact AI will have next week, let alone next year.

There are some pointers, though, as to what these AI regulations might do to the wider entertainment industries and to fintech – both of which have a direct bearing on many telemedia services.

THAT’S ENTERTAINMENT

For the entertainment and media sectors, the EU and US-UK initiatives are lacking in real detail, and it seems (and I am guessing here, just to be clear) that these sectors haven’t been widely consulted.

The ‘wait and see’ approach taken by both sets of legislators offers little concrete guidance for any industry, and the lack of real insight into where this is heading is palpable.

As Tim Levy from Twyn points out, “AI is the opportunity of a generation for the entertainment sector. But the industry is hamstringing itself by refusing to make the case for its responsible and positive use. By only painting AI as a threat, be it to IP rights or writers’ jobs, the creative sectors have shut themselves out of the conversation and ceded the ground to big tech and regulators”.

Levy argues that the media and entertainment industries have viewed AI as a threat and, where they have fed into governmental discussions around AI regulation, they have done so from a standpoint of wanting that threat stopped.

To my mind this isn’t the stance in the telemedia industry, especially around content creation, where AI is now seen as a tool to manage and personalise services, as well as to create content itself to meet changing demand.

This view could well offer a huge boost to mainstream media and entertainment were it to be adopted more widely. It would also help the entertainment sector take, as Levy wants, a more proactive and experimental approach – using AI to generate amazing things rather than treating it as the harbinger of the end times.

FUN WITH FINTECH

The stance in fintech – which has implications for the billing and payments side of the telemedia industry – is much more open. The sector has already seen some of the benefits of AI in data handling and in enhancing cybersecurity, but it too has reservations, both about AI getting out of hand in future and about how little regulatory clarity the US-UK and EU moves actually deliver.

According to Scott Dawson, Head of Sales and Strategic Partnerships at payments processing company DECTA, the EU’s AI Act is a positive step for the fintech sector, and one other regions could learn from.

“Ideally, the role of regulation should be to facilitate innovation, and the EU’s AI Act is a good example of regulation that has the potential to do just that,” he says. “Classifying AI systems based on risk will allow fintech companies to benefit from the new capabilities of the technology while keeping a regulatory eye on the ‘black box’ problem. As AI models become more complex and opaque, their workings and reasoning are ever more difficult for any one human to understand. The act emphasises the need for transparent AI, ensuring companies can explain how algorithms arrive at decisions. Naturally, there are considerations presented by this approach, but by creating a conceptual structure for firms to innovate within, the EU is creating a regulatory framework we can pre-emptively manage.”

A NOTE OF CAUTION

But he sounds a note of caution. “While the UK hasn’t enacted similar legislation, its ‘wait and see’ approach poses challenges,” Dawson warns, especially because fintech firms aiming for the EU market will need to comply with the Act’s requirements. This includes increased transparency and robust due diligence for AI used in areas like credit scoring.

“Without clear guidelines and alignment with other key jurisdictions, it will be difficult for firms in the UK to innovate in a manner that can be effective and scalable in the future,” he adds.

Dawson concludes: “During the UK’s AI Safety Summit in Bletchley Park last year, Rishi Sunak suggested the UK should be a leader in this space. However, it will need to make more decisive moves than just waiting and seeing if that is to be the case. If it does take the reins, regulating for the sake of innovation is very much an option.

“While the UK hasn’t adopted its own AI legislation, the EU Act’s influence is undeniable. The UK’s fintech sector must adapt to the new standards to ensure continued access to the EU market and foster trust in AI-driven financial services.”

EU becomes first to regulate AI

The EU has become the first major jurisdiction to implement a law regulating AI. It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.

BANNED APPLICATIONS

The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.

Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.

Regulatory sandboxes and real-world testing will have to be established at the national level, and made accessible to SMEs and start-ups, to develop and train innovative AI before its placement on the market.

During the plenary debate prior to passing the Act into law, the Internal Market Committee co-rapporteur Brando Benifei (S&D, Italy) said: “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very centre of AI’s development”.

Civil Liberties Committee co-rapporteur Dragos Tudorache (Renew, Romania) said: “The EU has delivered. We have linked the concept of artificial intelligence to the fundamental values that form the basis of our societies. However, much work lies ahead that goes beyond the AI Act itself. AI will push us to rethink the social contract at the heart of our democracies, our education models, labour markets, and the way we conduct warfare. The AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice”.
