TVBE 120 - Mar 2026 DIG

Stop the slop!

In January, I received a press release announcing award-winning director Darren Aronofsky’s collaboration with TIME Studios on an AI-created series about the American Revolution. Aronofsky is often at the forefront of new technologies.

Using the Big Sky camera system, he directed the debut film, Postcard from Earth, for the Sphere in Las Vegas. He also founded Primordial Soup, an AI-focused production studio, which employed Google’s DeepMind and other generative AI technology to create On This Day…1776, a series of short episodes that include real human voices alongside AI-created visuals.

You may not know this about me, but I’m fascinated by the American Revolution. I’ve read biographies of almost all the Founding Fathers, I love Hamilton, and I watch John Adams on repeat every year. So, I was excited to see what Aronofsky had created. Alas, it was not good. In the first two episodes, viewers saw snow falling upwards, flames flickering outside a lantern, and the word “AMERLED” instead of “America” on Thomas Paine’s Common Sense.

Having such a well-known director put his name to a series that is not up to his usual standard of filmmaking is worrying. Do we want audiences watching what many have described as AI slop and thinking, well, if it’s made by Darren Aronofsky, then I’ll forgive the mistakes? What does that mean for the standard of film and TV going forward?

I’m not trying to stick my head in the sand; I know that AI-created TV and films are coming. It might not be in the near future, but it’s not that far away. The errors in a production like On This Day… make me question who approved the final videos. What seems obvious is that humans still very much need to be part of the process.

As AI-created video develops, there needs to be a focus on catching glaring mistakes that are easily spotted by the audience and, right now, that’s down to human involvement. As great as AI could be, it’s human quality control that will save us from the slop.

FOLLOW US

X.com: TVBEUROPE / Facebook: TVBEUROPE1 / Bluesky: TVBEUROPE.COM

CONTENT

Content Director: Jenny Priestley jenny.priestley@futurenet.com

Senior Content Writer: Matthew Corrigan matthew.corrigan@futurenet.com

Graphic Designers: Cliff Newman, Steve Mumby

Production Manager: Nicole Schilling

Contributors: David Davies, Larissa Görner-Meeus, Graham Lovelace

ADVERTISING SALES

Publisher TVBEurope/TV Tech, B2B Tech: Joseph Palombo joseph.palombo@futurenet.com

Account Director: Hayley Brailey-Woolfson hayley.braileywoolfson@futurenet.com

SUBSCRIBER CUSTOMER SERVICE

To subscribe, change your address, or check on your current account status, go to www.tvbeurope.com/subscribe

ARCHIVES

Digital editions of the magazine are available to view on ISSUU.com. Recent back issues of the printed edition may be available; please contact customerservice@futurenet.com for more information.

LICENSING/REPRINTS/PERMISSIONS

TVBE is available for licensing. Contact the Licensing team to discuss partnership opportunities. Head of Print Licensing: Rachel Shaw, licensing@futurenet.com

MANAGEMENT

SVP, MD, B2B Amanda Darman-Allen

VP, Global Head of Content, B2B Carmel King

MD, Content, Broadcast Tech Paul McLane

Global Head of Sales, B2B Tom Sikes

Managing VP of Sales, B2B Tech Adam Goldstein

VP, Global Head of Strategy & Ops, B2B Allison Markert

VP, Product & Marketing, B2B Andrew Buchholz

Head of Production US & UK Mark Constance

Head of Design, B2B Nicole Cobban

In this issue

MARCH 2026

Tilly is my Mickey Mouse

Jenny Priestley talks to Particle6 CEO Eline van der Velden about the human effort behind the creation of AI actress Tilly Norwood, and why her existence proves that the future of TV and film is here

Lighting the way to sustainability

As the UK’s first off-grid production company, Factual Fiction is determined to minimise its environmental impact. Matthew Corrigan meets directors Emily and Tom Dalton

SMPTE’s AI taskforce, the EBU and ETC have updated the AI Engineering Report to keep pace with the rapid evolution of the technology, writes David Davies

Why AI

Mark Harrison, founder and chief content officer at DPP, explains why some companies are losing patience with AI-led productivity

Last year, Levira Media Services unveiled its brand new facility in the centre of Leeds. Matthew Corrigan visits the historically significant site to uncover some of its technological surprises

Fear, not complexity, is what holds AI transformation back

For years, I’ve heard leaders across media and technology say the same thing: “Transformation is hard.” After leading a few of them, some successful, one not, I’ve come to believe something different. Transformation itself isn’t what’s difficult. What’s difficult is the fear that surrounds it. Nowhere is that clearer than in the current wave of AI transformation. The technology itself isn’t the obstacle; it’s people’s hesitation to trust, experiment, and relearn that slows progress.

In almost every company, a few individuals hold deep institutional knowledge. They become indispensable. Leadership hesitates to make changes that might disrupt that balance, and that hesitation slows innovation long before any real change even begins.

But you can’t build the future while clinging to the past, and you can’t drive forward fixated on the rearview mirror. Progress requires looking ahead, even when the road feels uncertain, so the first step is lifting your gaze. Examine everything: how teams work, how technology is built, and where focus has drifted. Then begin rebuilding for the future, not to discard what worked, but to stop letting legacy thinking set the pace.

Lessons from breaking things

Early in my career, I tried to push a large-scale transformation too quickly during an acquisition by a global technology company. The change itself was right, but I underestimated the culture around me. The effort stumbled.

Since then, I’ve learned that transformation isn’t demolition; it’s balance. You can’t abandon continuity while building the future. The art is to maintain what people rely on while steadily creating the conditions for what comes next.

A startup mindset inside an established company

Strategy plans don’t transform companies; people and culture do. One of the most effective approaches to transformation is to create a startup within the larger organisation. In the context of AI, that kind of startup team becomes a safe place to experiment, encouraging testing of new workflows, learning fast, and translating what works back into the core business. Building a small, cross-functional “startup” team inside an established enterprise can ignite that cultural shift from within.

Look for people with that spark: those curious enough to challenge assumptions and determined enough to carry ideas through friction. Be prepared to move fast, work in short bursts, and learn constantly. Each small success builds belief.

Once people believe speed and innovation are possible, even within a large organisation, the culture begins to shift, and that belief is contagious.

From faster horses to faster cars

Innovation rarely happens by polishing what already exists. The world doesn’t need a faster horse; it needs a car. AI is testing this mindset daily. Many organisations simply bolt AI onto old systems—a “faster horse” approach. Real transformation asks: what if we reimagined how humans and AI collaborate from the ground up?

Media organisations often mistake modernisation for transformation, taking old systems and simply re-coding them with new tools. Real transformation re-imagines the purpose, not just the process. It asks: what could this be if we started fresh? Who else could it serve? When leaders and engineers are freed from the limits of legacy thinking, they start creating for the future audience, rather than the past system. That’s when change accelerates.

Coaching, not commanding

The longer I lead, the more convinced I am that sustainable change comes from coaching, not commanding. My daily question isn’t “What did we achieve?” but “Did I help something/someone move forward today?”

Good coaches connect the right people, set direction, and then step back. They build belief and let others shine. There’s a scene in The Shawshank Redemption that captures this perfectly: Andy gets his crew a few beers on the roof after a long day’s work. He doesn’t drink one himself; he just takes satisfaction in watching others enjoy them.

That’s leadership to me. Too many leaders want to be the hero. Coaches build heroes around them. As AI reshapes roles and responsibilities, that coaching mindset becomes even more essential, helping people build the confidence to use new tools creatively, rather than fearfully.

Cultivating a finer AI experience

I want to take you on an excursion, away from the buzz around AI. We will go to a vineyard, because the transformation we are going through is not mechanical, but cultural: a long, disciplined journey of turning raw natural potential into a refined and unforgettable experience. It is like making wine. I am a WSET-3 sommelier, and the process of making great wine is a private passion of mine.

While goals differ in our multi-surface, multi-platform business, we are all on the same trip: using data in a way that helps create the most attractive content and remain relevant in the future. We define clear outcomes: audience growth, content performance, monetisation, quality of experience, and operational excellence. It is choosing the right soil to plant our vine.

The soil and nutrients

A great wine starts with healthy soil—our data foundation. We need to understand the condition of the data living beneath our systems. Is it clean? Is it accessible? Is it structured?

The reality of most environments is hundreds of systems, legacy products, disconnected metadata, IP, cloud. The soil is largely depleted. You can plant the best vines in the world, but if the soil is poor, they will never thrive. The same is true for AI.

The nutrition in that soil comes from our data sources: production metadata, network telemetry, encoder logs…all the raw elements of intelligence. None of us lacks data; we lack intentional data selection. Because not all data is equal—and not all vines make a good wine.

The nutrition feeds the vines—our data ingestion. Here we start to feel the scale: logs moving, packets flowing, metadata accumulating. The vineyard is still wild. There is raw potential everywhere, but it has no structure. To build trustworthy measures, a clean map of data domains is required: audience and identity signals, content metadata, QoE telemetry, QoS and security, to name a few.

A trellis makes the grapes strong

Left alone, vines grow in every direction and collapse under their own weight. Guided by a trellis, they grow upward, healthy and strong. The trellis is our data governance: standardised metadata, naming conventions, rights management etc.

With a good trellis in place, the vines finally produce grapes: data that is structured, enriched, and full of potential. This is where raw signals gain context such as player recognition, predictive network analytics, audience segmentation or sustainability scoring.

At this stage, data turns into intelligence. Instead of endless metrics, a focused set of decision KPIs is key, such as under-30s retention, content completion, downtime or error rates. Here we are moving from isolated alarms to signal-chain correlation, and from box-level monitoring to service-centric operational modelling. We get correlated incidents, service impact analysis and audience impact tied to failures: a real-time intelligence layer that changes behaviour.
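To make the shift from box-level alarms to service-centric incidents concrete, here is a minimal sketch in Python. The topology, alarm data and figures are invented for illustration; a real deployment would pull both from monitoring systems.

```python
from collections import defaultdict

# Map box-level sources to the audience-facing service they feed.
# This topology and the alarms below are invented, illustrative data.
TOPOLOGY = {"encoder-3": "SportsHD", "origin-1": "SportsHD", "cdn-edge-7": "NewsUHD"}

alarms = [
    {"source": "encoder-3", "error": "frame drops", "viewers": 40_000},
    {"source": "origin-1", "error": "5xx spike", "viewers": 40_000},
]

def correlate(alarms):
    """Roll isolated box-level alarms up into per-service incidents."""
    incidents = defaultdict(list)
    for alarm in alarms:
        service = TOPOLOGY.get(alarm["source"], "unknown")
        incidents[service].append(alarm)
    return incidents

for service, grouped in correlate(alarms).items():
    impact = max(a["viewers"] for a in grouped)
    print(f"{service}: {len(grouped)} correlated alarms, ~{impact} viewers affected")
```

Two alarms that would otherwise page two separate teams surface here as one service incident with an audience-impact estimate attached.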

Harvesting to extract the value

Grapes must be harvested at the right moment. This step is turning insights into value: predictive routing, network failover decisions, automated highlights. This is when your data starts working for you. But grapes are still not wine, value extraction is just the beginning.

Fermentation—building AI models and automation

Through the magic of transformation, grapes are crushed, fermented, monitored, refined and, finally, become something new. Fermentation is the AI process: Models are trained, workflows become intelligent and predictions are connected to decisions.

Predictive AI uses structured machine learning to anticipate what comes next, identifying churn risk or triggering next-best actions. Generative AI extends this intelligence into everyday workflows: summarising issues, detecting sentiment, and accelerating classification and processing that once took days. Media-specific knowledge graphs and semantic layers enable recommendations that are not only personalised, but explainable.

Time and care are what make wine stable, reliable, and worthy of trust. It is best done in a barrel. AI follows the same logic. In practice, this means continuous model monitoring, data-drift detection, quality assurance, ethical guardrail updates and strong governance. Then AI becomes trustworthy in mission-critical production environments.
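As one illustration of what data-drift detection can look like in practice, the sketch below compares a model’s training-time feature distribution with live traffic using a two-sample Kolmogorov-Smirnov test from SciPy. The data here is synthetic; in production both samples would come from logged feature values.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Feature values the model saw at training time vs. in live traffic.
# Synthetic for illustration; the live distribution has shifted slightly.
training = rng.normal(loc=0.0, scale=1.0, size=5_000)
live = rng.normal(loc=0.4, scale=1.0, size=5_000)

statistic, p_value = ks_2samp(training, live)
if p_value < 0.01:
    print(f"Drift detected (KS={statistic:.3f}): review model, consider retraining")
else:
    print("No significant drift")
```

Run on a schedule, a check like this turns “time and care in the barrel” into an operational habit rather than a hope.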

Bottling and serving—delivering the experience

Finally, the wine is bottled and served. For our audiences this is personalised content, intelligent highlights, seamless streaming, and greener production.

The audience only ever sees the bottle. They don’t buy the vineyard, they buy the wine. Quick wins in AI are like drinking cheap, industrial wine—it works, but it’s forgettable. Crafting a Grand Cru experience requires patience, discipline, and care. This is our vineyard, and the future is ours to cultivate. Cheers!

Is AI ready for primetime?

Amazon is about to make an overture to the TV and film industry, offering a suite of generative AI tools aimed at speeding production and cutting costs. The tech giant’s newly formed AI Studio—an internal start-up comprising engineers, scientists and creatives—has been testing products at Amazon MGM Studios ahead of inviting industry partners to join a closed pilot. Amazon’s move marks Big Tech’s latest attempt to woo production companies into integrating AI solutions within their workflows. Last year Google launched Flow, a tool aimed at filmmakers that incorporates its video generator Veo alongside image model Imagen and flagship chatbot Gemini. Shortly after, AI start-up Moonvalley released Marey, the first fully licensed AI video generator for professional users. Marey is a rarity among AI models in that the bulk of its training footage has been obtained by consent from indie creators and agencies.

Other AI developers have sought collaborations with individual studios. Runway is working with Lionsgate Entertainment on a custom video model trained on the studio’s archive. OpenAI has a licensing agreement with Disney for its video app Sora. That $1 billion deal last year also committed the “House of Mouse” to become a “major customer” of the ChatGPT-owner’s services, building “new products, tools and experiences”.

Video generators have come a long way in a short time. Advanced models now offer improved consistency, so a character’s facial features are more likely to remain the same in different scenes—an issue that plagued the Sora-generated Toys ‘R’ Us ad in 2024. There’s also more control over ‘camera’ pans, tilts, zooms and angles, and objects in outputs now observe the laws of physics more often. All that said, AI outputs still suffer from an all too perfect sheen. Human faces lack characterful imperfections and mouth movements often fail to sync with speech audio.

That makes AI detectable. And that’s an issue if AI-generated content is ever to move beyond comical clips on social media and make it to primetime, since people are wary of AI. They know that CGI animation has been around for decades, but they can recognise the human voices behind the computer-generated characters and form an attachment. They also understand that AI has been used to de-age actors in several movies, but are aware that a real person lurks beneath the reassembled pixels—someone they can identify with, and become a fan of. AI ‘actor’ Tilly Norwood might have an Instagram account, but no one believes it is a sentient human with real emotions and a lived experience.

AI’s lack of authenticity is one reason why studios are right to be treading cautiously. Another is the need to avoid another bitter dispute with talent unions. It’s why Disney’s landmark deal with OpenAI didn’t include any recognisable human talent, and why both parties pledged they would “respect the rights of individuals to appropriately control the use of their voice and likeness”.

Away from what viewers see on their screens, AI is making a difference as it’s used to perform tasks such as generating metadata, translating audio into multiple languages and automating dubbing. And what of the savings? We’re told AI will free creative people from repetitive chores so they can do more creative things. I’d love to think this will happen, and that creative teams won’t merely default to AI when human-made approaches to on-screen content are able to elicit a far deeper and far more meaningful connection with viewers.

A perfect example of that was BBC Creative’s Trails Will Blaze stop-motion promo for the Winter Olympics. The 45-second animation used 700 3D-printed athlete figures and deployed 14 different combustion techniques to create real fire. Working out how to capture it all in-camera took several weeks. The results were stunning. Another example is Apple’s handmade promo for its streaming TV service, which captures light reflected from a sculptured glass logo in glorious cinematic quality. Other tech giants have eschewed AI in ads that promote their AI services for the same reason: the need to deliver a humanly relatable and truly authentic message that resonates with audiences.

The quality of generative video will continue to improve, as will AI’s ability to mimic human sincerity. But it will never fully master the distinctive traits and quirky mannerisms that mark us out as being separate from machines, and will retain a ‘sloppy’ quality. Increasingly the most important decision among discerning creative teams will not be where to use generative AI in a project, but whether to use it at all.

Graham charts the global impacts of generative AI on human-made media at grahamlovelace.substack.com

Keeping an eye on AI

For some time, key trends have been changing the way the media streaming and broadcast segments operate. Perhaps the most important of these drivers is artificial intelligence. It touches almost every part of the streaming platform ecosystem, leading to intrinsic changes that are altering everything from how content is generated to how it is experienced.

AI is about to play a huge role in transforming the living room viewing experience, by enriching content and making it much more personal to the viewer. But AI needs to be managed carefully. What may come as a surprise to some is that its success will depend on human oversight. That means independent testers combining manual and automatic processes to grade the performance of AI agents and models.

New touchpoints for the living room

Streaming services have already integrated AI and machine learning into their platforms to analyse user behaviour, drive recommendations and automate workflows and processes. Streaming AI agents can now serve highly personalised and tailored recommendations, adverts and other features directly to viewers. They can digest metadata in order to create personalised content experiences based on user profiles, preferences, behaviour and context.

Innovation in areas such as interactive and wearable devices, extended reality (XR), and virtual reality (VR) can enrich content delivery even further, adding new transformation touchpoints for the living room. We should expect more narrative elements based on viewer input or preferences, such as AI-generated trailers, supplemental storylines and interactive overlays. In sports, broadcasts could be augmented with AI-driven statistics. However, service providers need to deploy comprehensive testing and QA strategies that ensure these AI agents are trustworthy and reliable, and indeed actually capable of improving the viewing experience for audiences. If left unchecked, new AI-driven experiences could frustrate audiences of live events and streamed content. Already, some cases of overreliance on gen AI tools in streaming platforms scaled across international markets have led to complaints regarding incorrect captions and subtitles.

In an industry where user engagement is everything, rolling out untested and unproved AI features—especially those prone to inaccuracy or bias—represents a serious risk.
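Automatic checks can back up the human testers described below. One standard measure for caption and subtitle accuracy is word error rate (WER) against a human-written reference; a self-contained sketch, with invented example captions:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Levenshtein distance over words: (subs + ins + dels) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# Two substitutions out of six reference words -> WER of 0.33
print(word_error_rate("the match kicks off at three", "the match kicked off at tree"))
```

A WER threshold per language and market gives a simple, objective gate before AI-generated captions reach viewers; human reviewers then focus on the failures.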

User-centric testing delivers community-driven insights

Quality software testing and validation will always require a human element to validate that streaming media and content services meet the highest standards. Service providers understand audience expectations are constantly rising. They have found that testing and QA strategies play a crucial role in helping them maintain high-quality customer experiences.

These programmes employ humans who apply critical thinking and create tests for complex functionality or non-standard customer workflows. They emphasise a user-centric testing focus that not only improves customer experience but also increases sales and customer loyalty. The community-based approach allows service providers to test and validate AI-infused features, using real testers, on real devices, across global markets. They can deploy in-market testers to ensure that AI-powered services and other features are attuned to local languages and nuances.

This level of testing goes beyond functional accuracy—it can be used to assess whether AI features actually deliver value in context. Human testers are able to judge whether or not AI-generated content aligns with customer preferences and expectations. They can help streaming platforms deliver the new quality-driven experiences that audiences demand and expect. This includes bringing human insights to support real-time evaluations during major sports and live content events.

Making AI personalisation work for viewers

A fundamental shift is happening, where AI isn’t just supporting the streaming experience—in some ways it’s becoming the experience. However, AI-driven personalisation is only as good as the experience it enables. You can have the most advanced recommendation engine in the world, but if the UX is clunky or the platform doesn’t reflect how people actually use the service, it will fall short. Never forget, subscribers just want to see what they’ve paid for, especially with extras such as overlays in sports broadcasts.

Even as AI continues to become more prevalent, human software testers are the essential safety net in making sure it actually improves services. Successful streaming platforms won’t necessarily be the ones that adopt AI soonest. In an age of zero tolerance for not meeting user expectations, they’ll be the ones that deploy it thoughtfully, with a laser focus on quality, safety, accessibility and trust.

MARKING 100 YEARS OF TELEVISION AND THE LEGACY BUILT ON INNOVATION AND HUMAN CONNECTION

RTS Technology Centre’s Kara Myhill reflects on how the medium has transformed from a technological marvel into something that’s everywhere, constantly evolving, and shaping the way we understand each other.

THE OLYMPIC SHIFT: HOW STREAMERS ARE REDEFINING SPORTS VIEWING IN EUROPE

TVBEurope’s newsletter is your free resource for exclusive news, features and information about our industry. Here are some featured articles from the last few weeks…

UK’S CREATIVE INDUSTRIES MINISTER SUGGESTS PSBS SHOULD ‘MOVE CLOSER’ TO BATTLE THE STREAMERS

Ian Murray has written to Ofcom and the Competition and Markets Authority to suggest there would be “significant benefits to public-service media providers pursuing deeper and more strategic partnerships”.

DO YOU SUBSCRIBE TO TVBEUROPE’S FREE NEWSLETTER? SIGN UP VIA THIS QR CODE

Image courtesy of Simon Kennedy/BBC
Ampere Analysis’ Daniel Monaghan explains how streaming services are challenging Europe’s public-service broadcasters for rights, audience, and content spend, marking a dramatic shift in sports viewing.

TILLY IS MY MICKEY MOUSE

Created by AI-first production studio Particle6, Tilly Norwood garnered attention around the world when she debuted last autumn. Jenny Priestley talks to CEO Eline van der Velden about the human effort behind Tilly’s creation, her role in an evolving industry, and why her existence proves that the future of TV and film production is here

Artificial intelligence is poised to fundamentally transform the creative industries in the near future, permeating every aspect of TV and film production, both behind-the-scenes and on-screen, and playing a major role in everything from advanced editing tools to the creation of hyperreal digital characters.

Leading the charge is Particle6, an AI-first production studio founded by Eline van der Velden, who first came to the UK with ambitions of becoming an actress, before pivoting to physics and studying at Imperial College London.

“Throughout my 20s, I was acting professionally, and I then set up a production company and made a lot of shows for the BBC,” van der Velden explains. “I was always playing characters. That was really my favourite thing, comedy characters. Then, about three years ago, I started getting interested in AI, and I thought, this is where the future is going to go, and I want to know everything about it.

“I thought, if we're making AI content, then we also need to understand AI characters and the technology behind them. When I started using AI, I was shocked by what it could do. I was just blown away by it, so I felt compelled to show it to the creative community in the UK.”

This led to the creation of AI influencer Tilly Norwood, who first appeared on Instagram in July 2025. Initially, says van der Velden, the reaction to Tilly in the UK was “relaxed. [People] were like, well, yes, this is coming, we're used to having smaller budgets and whatever we can do to bring more value on screen.”

BELOW: Eline van der Velden

When Tilly debuted at the Zurich Film Festival in September, Hollywood took notice and “people lost their minds and thought I was going to kill all actors and the end of the world was near, which it's not.” However, she stresses it’s important for the industry to “not bury our heads in the sand” over what AI could mean.

Creating Tilly

Tilly was generated using a combination of publicly available tools. Van der Velden is keen to stress that she has not been trained on any proprietary data. “I wanted to create an original character. I didn't want to infringe on anyone's likeness, so I didn't want to do a deepfake or anything like that.”

A year ago, the consistency of AI engines wasn’t quite as advanced as it is now, which meant it took almost 2000 iterations to get Tilly right. “I remember that moment where I was like, yes, that's her!”

It was important that Tilly both looked British and had a name that matched. “She is an extension of me in a way,” muses van der Velden. “What's funny is, when meeting lots of these AI influencers, I realised that all of them are just creating extensions of themselves in the digital world. It's really fascinating, people are bringing all their skills and experience into this AI world that's coming.”

In fact, Tilly is such an extension of her creator that van der Velden will often use performance capture to bring out the best of her model.

“There are two ways of bringing performance to an AI character and I do both for Tilly,” she explains. “You can either prompt Tilly on what to do, and you might get lucky and get a good performance. You might also get unlucky and not get a good performance. The second route is performance capture, that's where I act, and she's overlaid. Think Avatar. We use a hybrid in order to get the best performance out of her. But that performance capture was only recently introduced, probably three months ago.”

The biggest issue for many in the industry is the idea that their content could be used to train AI models. Van der Velden states that even though Tilly was created using publicly available software, all of her content—imagery, videos, and voice—is protected by copyright.

ABOVE: Tilly Norwood made her debut in 2025, sparking conversations across the TV and film industry

“You could use an image of Tilly and upload it into Veo 3 [Google's AI video generator] and use it as a reference, but you're using a copyrighted image,” she adds. “The way the tools shape a face is by using billions and billions and billions of pieces of data, and the neural network learns how a female face is shaped, and then it de-noises and goes down more and more pixels to get to a higher grain. That is the reason why there's copyright, because of the human effort that goes into that process.

“If you just have a one-click AI model that's spitting out an AI image, then you're not putting any human creation work into it. We spent months getting to the final version of Tilly, and then months more creating who she is as a person, her backstory, name, look and everything.”

All of the above might lead you to think that AI actors aren’t going to help broadcasters and production companies meet their sustainability targets, but van der Velden argues that isn’t so. “I studied nuclear fusion because I wanted to get a green energy source out in the world, so I do comparisons all the time in order to check the carbon footprint of a production, whether it's using AI or a traditional production.

“What we find is that AI actually has up to a 99 per cent lower carbon footprint than traditional production. You have to think about it as going straight into post. You're skipping all of the pre-production, the flights, the catering, the generators on set, you name it. The carbon footprint is so much lower. I always say to people, one flight across Europe, from Paris to Denmark, for example, is about a million AI prompts. You're better off not taking the flight and making an AI product.”

The future

While Hollywood’s reaction to Tilly can be described as somewhat sceptical, van der Velden says there are already plenty of creatives who are queuing up to work with her. “We're really passionate about retooling and reskilling the workforce in the creative industry. Everybody needs to learn how to work in the new AI space, so we're working with directors, production designers, costume designers, DoPs, everyone. We welcome anybody who's really keen to work with these new tools to come and play with Tilly, whether it's a short scene or part of a longer story.”

Tilly will continue to evolve, with the team at Particle6 already working on what van der Velden describes as a “prompt terminal for her brain”, which will allow creatives to interact with her autonomously. Van der Velden likens the development process to raising a child from newborn to 18 in a couple of months, then setting them free with the hope they’ll be a decent human being. “We have to craft everything, her personality, how she responds, etc. It's quite a long process, with guardrails in place so she doesn't talk about certain topics.

“I think people underestimate how much work goes into producing high-quality AI content. We've all seen AI slop. That's not the business we're in. We only want to produce best in class AI content. It can take quite a long time to produce traditional film and TV, we usually say that AI is about half the time, half the cost.”

Van der Velden says that for the moment, she expects humans to continue to be involved in content creation, adding that there is still a lot of “human work” being invested in Tilly. “Sometimes, people just think, oh, it's AI, so there's no humanity behind it. I would say the opposite. We're hiring lots of people, and all the experience and skills translate.”

However, in the future, she doesn’t rule out a film being created at the click of a button, adding that she believes at some point the TV and film industry will use either a mix of humans and AI, or full AI. “In a year, there'll be almost no production without a little aspect of AI in it, whether that’s using ChatGPT to check something. Every single person will use AI somehow. It's going to be unavoidable, like WiFi or electricity.

“Every single studio going forward will need to become an AI production studio, and they'll need the best talent,” she continues, adding there are big plans for Particle6 going forward: “Right now, the ambition is to grow into a bigger studio. Tilly is my Mickey Mouse, and I am building the next Disney.”

It took almost 2000 iterations to get Tilly right, says van der Velden

Scan the QR code to watch Tilly in action

Just because we can

Matthew Corrigan wonders if we are all guilty of talking a good game on sustainability

When asked, almost everyone in the media and entertainment industry seems to nod solemnly and agree that sustainability is of paramount importance.

A glance at the end credits of pretty much every production from the last 15 years or so will almost always reveal a reference to its impeccable green credentials. Strict rules are often in place, with broadcasters insisting on adherence to their environmental policies. Tickbox exercises must be completed to ensure everyone is fully on-board and compliant with the standards set by experts steeped in best practice.

The same is true of companies working to supply services and solutions, all of which are keen to ensure they are seen to be meeting sustainability goals. International events and trade shows encourage people to fly in from around the world, then make a big deal of asking them to use public transport from the airport. It is almost as though we are trying to convince ourselves we are doing The Right Thing.

Elsewhere in this issue is an article about North Yorkshire’s Factual Fiction, the UK’s first off-grid production company. Researching it provided a fascinating insight into what can be achieved when sustainability truly is placed at the heart of a business’s efforts. It also raised an uncomfortable question—one that I wonder if the industry avoids all too often. Let’s face it, image is integral to media and entertainment and, with that in mind, I wonder whether too many of us are guilty of talking a good game on sustainability without actually stopping to consider the impact of our actions. Could it be that there is a collective belief that simply acknowledging the problem is somehow tantamount to solving it?

Naturally, there are certain inescapable realities that have to be faced. We work in a business that is utterly reliant on technology for its very existence. There always has been, and always will be, a need to use energy. It’s a well-worn cliché, but technology is expanding at an exponential rate. New and exciting use cases are finding their way into the mainstream almost as fast as they can be imagined. The very language of media and entertainment is beginning to mimic that of the IT world, with terms such as IP migration, cloud computing, interoperability and machine learning as likely to be heard in Pinewood or Hollywood as Silicon Valley.

By way of example, I recently read of an innovation that was designed to provide fans of a globally popular sport—ok, it was football—with even more options to access match, player and club statistics while watching the game. While I understand the hard commercial realities at play, I couldn’t help but wonder who had actually asked for this. How much more saturation coverage is really necessary? At what point do fans become so overwhelmed with available content that they simply lose the ability to process it? And, to return to the subject at hand, has anybody considered the massive cost in terms of energy required to produce it?

I was reminded of the famous line uttered by Dr Ian Malcolm in Jurassic Park: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

And it isn’t just in production. When I’ve finished writing this column, I will unthinkingly save it to the cloud, as I do with documents all the time, often three or four versions of essentially the same thing. We all hold lots of meetings using platforms such as Teams or Google Meet. How many are really necessary when a simple email or phone call would suffice? It all takes computing power (incidentally, I wonder if this might in part explain the recent rise to prominence of the less-loaded euphemism ‘compute’). It all uses energy. Could we—all of us in the industry—be using tech for tech’s sake?

It is genuinely enlightening to discover what is being accomplished by those who have really considered the impacts of their actions. And the good news is they have a sound business case: energy isn’t a free commodity—using less of it is good for the bottom line. Maybe there’s a lesson we can all learn. Maybe, where our industry’s inarguable overreliance on technology is concerned, just because we can, doesn’t mean we should.

“I wonder whether too many of us are guilty of talking a good game on sustainability without actually stopping to consider the impact of our actions”

TECHNOLOGY Triumphant

Future’s Best of Show Awards recognise standout broadcast-grade technology products and solutions from the ISE Show in Barcelona.

This year’s winners in the TVBEurope category are showcased here

BROMPTON TECHNOLOGY

Tessera SQ200

Judges’ comment: The Tessera SQ200 is an 8K LED video processor featuring multiple 100G Ethernet ports, supporting the latest AV-over-IP protocols. It powers canvases up to 64K pixels wide or tall, and offers precise colour correction with a 33x33x33 3D LUT. As virtual production technology continues to evolve, the SQ200 has been designed around software that will continue to improve with future releases.

EVERTZ AV

NEXX with NEXX-SCORPION

Judges’ comment: Taking a piece of hardware and making it flexible with software shows the direction of travel within the industry. Providing a pathway for future IP expansion and enabling integration of cloud services into various workflows is a clear positive.

Quikbeam

Judges’ comment: Designed for film sets and broadcast productions, QuikBeam features swappable, fully waterproof batteries with fast-charging when needed. The swappable battery supports three charging methods: via PoE while wired in the QuikBeam itself, in the optional ChargingDock, or through its USB-C port.

CLEAR-COM

HelixNet four-channel beltpack

Judges’ comment: As live broadcast productions become ever more complex, a 4-channel beltpack will be welcomed by production teams, providing them with greater flexibility for multi-team coordination in complex setups.

NETGEAR AV

NETGEAR M4350-16M4V (MSM4320)

Judges’ comment: [Designed for] live event broadcasts, ensuring they stay securely connected during load-in, throughout performances, and during teardown. Each port lights up, allowing users to easily spot any problems straight away.

VIDENDUM

Autocue PTZ Prompter

Judges’ comment: With the rise of PTZ cameras in broadcast, this product delivers on-camera prompting previously available via human-operated cameras. The fact that it's already been field-tested by broadcasters shows there is a demand for its capabilities.

LYNX TECHNIK

Yellobrik CDE 1922

Judges’ comment: The Yellobrik CDE 1922 from LYNX Technik is purpose-built for the transition from SDI to IP workflows, providing a compact, bi-directional bridge between 3G-SDI systems and SMPTE ST 2110 IP infrastructures without forcing broadcasters to choose between the two.


EVERTZ AV

7882IRDA-S2X

Judges’ comment: The solution supports SMPTE ST 2110 and interoperates with AV networks using the IPMX open specification (built on ST 2110 and AES67 audio), enabling end-to-end broadcast-to-AV convergence. As broadcasters look at IPMX as an alternative to 2110, this could solve the problem.

SHENZHEN TECNON EXCO-VISION TECHNOLOGY CO, LTD

Zeus

Judges’ comment: Exceeding 3000 nits, with a refresh rate over 30,000 Hz compared to the typical 7,680 Hz for shooting, Zeus appears to be pushing the boundaries for virtual production technology.

MCP SERVERS

AND THE NEXT PHASE OF AI

In an incredibly short space of time, the video industry has moved from using AI to analyse large datasets, to using generative AI and LLMs to make systems less prescriptive, able to adapt to context, and easier to interact with in natural language. This type of AI is powerful, but its use is limited if it can’t interact with external data, tools and systems. The next step along this evolution is agentic AI, where agents work across separate systems and are given licence to act autonomously to achieve a defined objective.

MCP (Model Context Protocol) is emerging as the accepted standard for connecting AI tools to external systems. It was only launched just over a year ago, in November 2024, so there is still uncertainty about what it is, what it enables and what it means for the video industry.

What is MCP?

MCP is an open standard for connecting LLMs to external systems and data sources such as file storage, CRM systems, analytics platforms, or workflows with specialised prompts. It acts as the bridge that connects LLMs to these tools, enabling agents to complete their set objectives.

It works differently to APIs, which are programmed by developers with defined endpoints so that data can only be exchanged in an explicit way. With MCP, each endpoint has an additional property that exposes a natural language ‘context’ that the LLM can understand. This context explains what a tool can do, what it outputs, as well as why and when it should be used. The LLM is then able to understand what all available tools do and what they’re for, so that it can decide which tool to use based on the task and context provided by the user.
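As a rough illustration, here is a minimal MCP server sketch using the FastMCP helper from the official Python SDK (the `mcp` package). The tool name, the metric and the hard-coded data are invented for the example; the point is that the docstring becomes the natural-language context the LLM reads when deciding whether, and when, to call the tool.

```python
# pip install mcp  (the reference Python SDK for the Model Context Protocol)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("video-analytics")

@mcp.tool()
def error_rate(date: str) -> str:
    """Return the playback error rate for a given ISO date (e.g. '2026-03-02').

    Use this when investigating QoE regressions or sudden error-rate spikes.
    """
    # Hypothetical lookup; a real server would query the analytics platform.
    rates = {"2026-03-02": 4.7}
    return f"{rates.get(date, 0.0)}% of sessions reported playback errors"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```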

What does MCP mean for video services?

MCP allows video teams to interrogate data systems using natural language. Rather than needing to create new reports in order to examine the cause of a recent spike in error rates, a non-technical team member can simply ask the LLM: “why did error rates increase on Monday?”

MCP servers are a key component of agentic systems, connecting AI tools to external systems. All compatible AI tools can use the MCP server, so it acts as a shared interface connecting AI tools and other relevant components together. This removes the need for developers to build a custom integration between every agent and every system, which is time-consuming and resource-intensive.
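Because the interface is shared, any MCP-compatible client can discover and call the same tools without bespoke integration code. A sketch using the same Python SDK, assuming the server above has been saved as video_analytics_server.py (a hypothetical file name):

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the example server from the previous sketch over stdio.
params = StdioServerParameters(command="python", args=["video_analytics_server.py"])

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])  # discover available tools
            result = await session.call_tool("error_rate", {"date": "2026-03-02"})
            print(result.content)

asyncio.run(main())
```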

In video player operation, for example, an MCP server can enable AI agents to create and manage complex test setups, run playback tests automatically, analyse results, identify issues and carry out corrective actions, all without the need for manual setup or integration code. This enables testing to be scaled efficiently across devices and platforms to help video providers accelerate testing cycles, improve video quality, and reduce operational costs.

Technical considerations

A number of technical considerations need to be factored in before building and deploying an MCP server. As with any emerging technology, a cautious approach is advised because best practices are still taking shape, and potential problems are yet to be discovered, addressed and mitigated.

Care needs to be taken around context window management, which essentially sets how much data the LLM can refer to in order to answer the query. If the context window is not big enough, the LLM will quickly become overwhelmed and not operate as expected.
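One crude but common guard is to budget what each tool call is allowed to return before it reaches the model. A sketch, using a character count as a rough stand-in for tokens:

```python
def fit_to_budget(tool_output: str, budget_chars: int = 8_000) -> str:
    """Trim a tool result so it doesn't swamp the model's context window.

    Keeps the head and tail, which for logs usually carry the most signal.
    A character budget is a crude proxy for tokens; a real server would
    count tokens with the model's own tokenizer.
    """
    if len(tool_output) <= budget_chars:
        return tool_output
    half = budget_chars // 2
    return tool_output[:half] + "\n...[truncated]...\n" + tool_output[-half:]
```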

Other important considerations include security and performance constraints. If the MCP server acts as a unified interface for all AI agents, thereby allowing a single AI agent to access all connected systems, this introduces vulnerabilities that weren’t there previously.

Developers therefore need to safeguard against unauthorised access by incorporating robust authentication and authorisation mechanisms. Performance also requires serious consideration: supporting large numbers of agents at scale can place significant demands on infrastructure.

Governance and risk management are also critical. If we let AI take control of critical video workflows, there have to be effective guardrails for the safe and responsible deployment of AI agents. This will likely require clear policies defining acceptable use, robust access and security measures, observability, monitoring, and human-in-the-loop oversight.

The video industry is moving towards a point where MCP is the foundational layer that allows AI agents to interconnect, democratising data access and creating new capabilities that were not previously possible.

LIGHTING THE WAY TO SUSTAINABILITY

As the UK’s first off-grid production company, Factual Fiction is determined to minimise its environmental impact. Matthew Corrigan meets directors Emily and Tom Dalton to find out how they are achieving their aims

The picturesque Yorkshire Dales are perhaps not the first place most people think of as the location of a high-tech production house. But for Tom and Emily Dalton, the directors of Factual Fiction, the remote site was perfect for what they hoped to achieve.

Formed six years ago, the company's launch coincided with the first wave of the Covid-19 pandemic and its concomitant lockdowns. As a result, the Daltons never actually worked from the office they initially planned to occupy. However, the situation forced a new way of thinking. At the time, businesses around the country had to urgently explore new ways of working. With offices closing up and down the land, a new normal began to emerge, with remote and hybrid models quickly coming to the fore. Explaining the sink-or-swim reality facing the fledgling company, Tom Dalton says, “We were left with a very practical problem: everything that we did had to happen where we lived.”

The pandemic caused the cancellation of two of the initial three commissions they had started with, but there was still one remaining. “We had a show to make, and we had to make it work,” says Tom.

Formerly the MD at a London-based company owned by Endemol (now under the Banijay umbrella), Emily explains the decision to set up the company in Yorkshire was in part informed by Channel 4’s move to Leeds, and a wider push to expand production beyond the M25. Added impetus came from the couple’s determination to operate a more sustainable company. “We wanted to tell people stories in an environmentally friendly way,” says Emily. The benefits were stacking up and, having lived in Yorkshire before, both relished the opportunity to make shows in an area they knew well.

Notwithstanding the pandemic, there were numerous practical challenges to overcome, with the need for services and a reliable source of power being particularly pressing. “[There are] a lot of energy demands as a production company, particularly if you do post work, which we've always done,” Tom continues.

“We had no services, we had no gas, we had no water, we had no main sewage. You have to build all of these things yourself. We had problems to solve. We have a notoriously unreliable power supply. In the first year of living here, I think we had seven power cuts, one of which lasted three days.”

“We had to find ways to work with people that don't rely on conventional means,” adds Emily, who acknowledges that most UK production companies gravitate towards cities and urban areas, often because they are centres of population. “So we found a solution, originally through necessity, now through choice, setting up the systems that allow us to work remotely.”

Driving change

Establishing a power source came first, as Emily explains, “If you're uploading rushes and it takes 12 hours to do it, you don't want a power failure in the middle of the night. Tom is extremely technically minded, which perhaps not a lot of producers are, [and did] a lot of research into the different bits of software required and methods to upload media. He built in a lot of redundancy, but beyond that, I don't think it needed a lot of hardware or upfront investment.”

Tom and Emily Dalton

“We just started building capacity,” says Tom, making the installation of a robust alternative energy solution sound like a simple task. Wind generation was quickly ruled out as impractical, meaning solar was the route of choice, despite the obvious geographical limitations. An array comprising 18 solar panels was built, with attention then turning to storing the energy.

“We put in the solar, then started building up our battery capacity, because we quickly realised that that was key. [We needed] a big enough battery that allowed us to ride out the notoriously poor North Yorkshire sunshine.”

The battery solution, which Tom describes as “very large”, is in itself an ingenious design, utilising 126 recycled modules from Nissan Leaf electric cars to feed a 16kW Deye inverter, which is managed using Solar Assistant and a JK-BMS. Here, the company provides an elegant demonstration that business goals can exist in harmony with sustainability. While recognising cost is a primary driver in any business, the principles driving Factual Fiction are bringing very real financial benefits.

“We are a little bit different in so far as a lot of what we do is fed by sustainability ideas,” says Tom. “We didn’t go out and spend £80,000 on a 100kWh battery. We built it out of recycled modules. We’re taking something that’s already there and reusing it in a very productive way, but the cost difference is incredibly significant. Our entire battery and solar setup cost about £15,000 and it paid for itself very, very quickly.”

This kind of can-do mentality has been instrumental in why Factual Fiction is able to call itself the first completely off-grid production company in the UK. “We're prepared to get stuck in and get our hands dirty and figure out these things for ourselves, as opposed to paying someone else.”

Emily agrees, “We just get on and do stuff. Small things that most people had to do through Covid, but not many have kept up. We still do regular production meetings but rather than having to force everyone to get in the car and drive to the office or to our base, we replaced all physical meetings with Zoom.”

The concept was put to the test in 2023, when Factual Fiction was commissioned to work on The Greatest Show Never Made for Prime Video. It passed with flying colours. The company was able to run the production from its base, while also handling “a good portion of the post work.”

It is very evident that Emily and Tom are driven by a shared ethos and a desire to take positive action. Although they are aware of environmental concerns being expressed throughout the industry, the Daltons are determined to do things their own way. They have recently replanted six and a half hectares of land with trees, repurposing an area of scrubland and replacing trees damaged in recent storms. “We’re not doing it through Carbon Credits or certification schemes or for any PR value,” says Emily. “We just thought, OK, what can we do?"

Tom and Emily on set

Working in an industry which necessitates high energy consumption, Emily offers an explanation as to how this can be reconciled with the need to meet environmental targets. “We were granted protected worker status during the pandemic. It was a complete surprise and we didn’t capitalise on it but [it made us realise that] people do see TV as a necessary thing.” This supports conversations taking place in the wider industry about the importance of broadcast and its relevance in the Critical National Infrastructure space. She goes on to stress the importance of mitigation, which is at the heart of what Factual Fiction is trying to achieve.

“There are practical things that everyone can do,” says Tom. “And I think what we are trying to do is present an alternative to the tick box approach. This is what we’re doing—and it’s quite specific to our circumstances, of course—but is there some little bit of what we are doing that is relevant to you, and is that something you could do?

“Some of the best conversations we have are with people who are genuinely enthused by what we’ve done and want to understand how they can implement some of these things. Once they understand we always carefully consider the bottom line—we’re not just ignoring the need to support our business—they see that these things can coexist. It leads to somebody doing something.”

A sunny outlook

Looking to the future, the company is only too aware of the increasing demands placed on energy use by the expanding use of technologies such as AI and cloud computing. Factual Fiction is already working to ensure these innovations do not compromise its standards.

The company has just set up a post production facility in France’s Lot Valley. As with the Yorkshire operation, it is entirely off-grid. The site benefits from substantially higher broadband performance than that available at home. “It's beautiful countryside, middle of nowhere, and the broadband speed we get there is eight times the speed that we can get in central London,” says Tom, “and all of the post production technology is housed there.”

“We have five offline editing machines (Mac Studios) that run Avid, all of which are networked to a central server running about 80TB of solid-state drives, allowing us to edit natively in 4K. We grade in Resolve on a PC, with an eye on moving to Baselight if we can find the time to retrain. Small VFX jobs are handled by After Effects, whilst sound is done in Pro Tools and mixed on a Dolby Atmos Genelec setup. All work is done remotely using Parsec, with a mixture of our own tools and off-the-shelf software (such as Blip) for media transfer and management. Our broadband link is 8Gbps up and down.”

Given its location, the solar capacity in France is greater, with around a hundred kilowatts available and the potential to increase to a megawatt if needed. Aware of the often hidden environmental costs of cloud computing, Factual Fiction is taking steps that appear opposite to the prevailing industry wisdom, shifting away from cloud computing, with the new facility able to provide all the capacity needed.

They are taking a similarly enlightened approach to their use of AI. “We've started, obviously, like everyone else, having to do a lot of work in the AI sphere, and particularly generative AI, which is incredibly power hungry,” says Tom. “As soon as you start doing it, you watch it create your six or seven second clip, and you see how much power it's sucking up. It's shocking. So we've been doing our own internal experiments, running these models locally, as opposed to through the cloud. And we have started producing content.”

In the near future, all of their AI activities will be done locally, which may very well set a precedent for the industry to follow. “I feel like the question of cloud computing is a really important one, because it's really timely,” Tom says. “We haven't talked about this with anyone before. It's very new.”

Concluding, he returns to the subject of budgets, maintaining a firm belief that what is good for the environment will ultimately be good for the bottom line. “[At the moment] it may be 10 or 20 per cent more expensive than if we went the conventional route. But we'll do it because we want to hit our sustainability targets. If we do it ourselves, we'll be able to drive that cost down a lot more.”

There are some fantastic developments already available in sustainable power generation that have yet to be adopted by the industry, continues Tom, adding, “Suddenly, all of those costs have been driven right, right down.”

The Daltons built their own array comprising 18 solar panels
The company's solar capacity in France is around a hundred kilowatts, with the potential to increase to a megawatt if needed

AUTOMATIC FOR THE PIRATE HUNTERS

Max Eisendrath, CEO, Redflag AI, explores how automation is helping tackle the issue of live sports piracy

Live sport has become the most valuable and most vulnerable form of broadcast content. As audiences shift toward streaming and digital-first viewing, piracy has evolved alongside legitimate distribution. Illegal sports streams now appear at scale, often within minutes of a match going live, and they spread rapidly across websites, social platforms and messaging apps.

The financial impact is significant. A major organised piracy operation in Europe was estimated to generate roughly €3 billion in annual illicit revenues and more than €10 billion in total damage to legitimate rights holders. For rights holders, broadcasters and advertisers, this is not simply lost revenue; it is lost confidence in the integrity of distribution and measurement at a time when margins are already under pressure.

While piracy itself is not new, the way it operates has changed. Today’s illegal streaming ecosystem is automated, cloud-based and designed to scale. Addressing it with manual processes or legacy protection tools is no longer viable. The industry must respond with systems that move at the same speed and scale as the problem itself.

Piracy now operates at network speed

Traditional anti-piracy workflows were built for static media such as films, highlights or delayed uploads. Teams could identify infringing links, issue takedown notices and gradually limit exposure. Live sport does not allow that luxury. The value of a match lies in the moment, and piracy spreads during that moment, not after it.

Data from live monitoring illustrates the challenge. In a recent three-month period covering dozens of professional sports broadcasts, more than 11,000 unauthorised streams were detected across nearly 200 piracy sites and platforms. For major fixtures and playoff events, infringement volumes rose sharply, often doubling compared with regular season matches. In some cases, illicit audiences reached into the hundreds of thousands for a single event.

By the time a human team identifies one illegal stream, many others have already appeared. Each new link draws viewers away from authorised services, diluting audiences and advertising value. The speed of this activity renders manual enforcement reactive and incomplete.

To be effective, protection must operate at network speed. That means automated detection, verification and response, executed continuously and in near real time. Automation is no longer an efficiency upgrade. It is a baseline requirement.
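
In code terms, that baseline is a loop that never stops running while an event is live. The sketch below is a generic asyncio skeleton of such a detect, verify and respond cycle; discover_streams, matches_reference and send_takedown are hypothetical stubs, not any vendor's actual API.

```python
import asyncio

# Hypothetical placeholder implementations: a real system would plug in a
# crawler, a fingerprint matcher and a notice-and-takedown client here.
async def discover_streams(event_id: str) -> list[str]:
    return [f"https://pirate.example/{event_id}/live1"]   # stub crawl result

async def matches_reference(url: str, event_id: str) -> bool:
    return True                                           # stub: fingerprint hit

async def send_takedown(url: str) -> None:
    print(f"takedown issued for {url}")

async def monitor(event_id: str, scans: int = 3) -> None:
    """Continuous detect -> verify -> respond loop for one live event."""
    seen: set[str] = set()
    for _ in range(scans):                 # in production: while the event is live
        for url in await discover_streams(event_id):
            if url in seen:
                continue                   # this link has already been handled
            seen.add(url)
            if await matches_reference(url, event_id):   # verify before acting
                await send_takedown(url)                 # respond immediately
        await asyncio.sleep(1)             # re-scan well inside the match window

asyncio.run(monitor("fixture-2026-03-14"))
```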

Automation alone is not enough. The systems deployed must be intelligent. Modern piracy operations are adept at evasion, using stream cropping, re-encoding, URL rotation and mirror sites to stay ahead of static detection methods. AI-driven approaches address this by analysing video, audio, images and metadata at scale. When paired with watermarking or fingerprinting embedded directly into live distribution workflows, each authorised stream carries a unique, invisible identifier. If that stream appears elsewhere, automated systems can trace its origin and initiate action within minutes.
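
Fingerprinting, at its simplest, reduces each frame to a compact perceptual hash that survives re-encoding and mild cropping far better than byte-for-byte comparison. The sketch below implements a generic difference hash ("dHash") to illustrate the idea; it is a textbook technique, not any vendor's proprietary method.

```python
# Difference-hash frame fingerprinting. Requires Pillow: pip install Pillow
from PIL import Image

def dhash(img: Image.Image, size: int = 8) -> int:
    """Reduce a frame to a 64-bit perceptual fingerprint (difference hash)."""
    g = img.convert("L").resize((size + 1, size))   # grayscale, 9x8 pixels
    px = list(g.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = px[row * (size + 1) + col]
            right = px[row * (size + 1) + col + 1]
            bits = (bits << 1) | (left > right)     # 1 if brightness falls
    return bits

def distance(a: int, b: int) -> int:
    """Hamming distance between fingerprints; small means likely the same frame."""
    return bin(a ^ b).count("1")

# Sampled frames from the authorised feed and a suspect stream are compared
# like this; a distance under ~10 of 64 bits suggests a pirated copy:
ref = Image.new("RGB", (1920, 1080), "navy")        # stand-ins for real frames
suspect = Image.new("RGB", (1280, 720), "navy")     # re-encoded, rescaled copy
print(distance(dhash(ref), dhash(suspect)))         # 0: same underlying image
```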

This approach changes the nature of content protection. Instead of reacting to individual links, broadcasters gain visibility into how and where their content travels. Each detection contributes data that improves future accuracy, helping systems adapt as piracy tactics evolve.

The value of this intelligence extends beyond takedowns. Automated monitoring reveals which regions generate the most illicit traffic, which hosting providers are repeatedly involved and how viewers access pirated content. For rights holders, this insight supports better risk modelling, stronger contractual enforcement and more informed investment decisions around distribution and security.
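
Even a simple aggregation over detection records yields this kind of intelligence. A toy example, with invented record fields and sample data:

```python
# Aggregate detection records into enforcement intelligence.
# The record fields and sample data below are invented for illustration.
from collections import Counter

detections = [
    {"host": "bulletproof-host.example", "region": "EU",   "viewers": 40_000},
    {"host": "bulletproof-host.example", "region": "EU",   "viewers": 25_000},
    {"host": "cdn-mirror.example",       "region": "APAC", "viewers": 12_000},
]

by_host, by_region = Counter(), Counter()
for d in detections:
    by_host[d["host"]] += d["viewers"]      # which providers recur
    by_region[d["region"]] += d["viewers"]  # where illicit audiences sit

print(by_host.most_common(1))   # repeat-offender hosting providers
print(by_region.most_common())  # regional risk profile
```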

Collaboration defines the next phase

No single organisation can solve live sports piracy in isolation. Effective protection depends on cooperation across the broadcast ecosystem, including rights owners, platforms, CDNs and hosting providers. When automation is paired with trusted relationships, response times improve and repeat offenders face meaningful disruption.

The industry has already embraced automation in production, playout and analytics. Applying the same mindset to content protection is the logical next step. Piracy is no longer a peripheral issue. It is a parallel distribution network that mirrors legitimate workflows in sophistication and reach. AI-powered protection does not replace human expertise. It amplifies it. By handling scale and speed, automation allows people to focus on strategy, governance and collaboration. It preserves the value of live sport by keeping audiences and revenue aligned with authorised services.

As digital distribution continues to expand across devices and platforms, the organisations that invest in intelligent, automated protection will be best positioned to safeguard their rights and sustain trust. The future of live sports broadcasting depends not only on how content is delivered, but on how effectively it is defended.

SMPTE’s AI Taskforce, the EBU, and ETC have updated the AI Engineering Report to keep pace with the rapid evolution of the technology in broadcast, highlighting new standards for interoperability and advanced applications in the newsroom, reports David Davies

standardisation speeds up AI

If anything, it would be an understatement to suggest that the ascendancy of AI in broadcast and media applications has been rapid since SMPTE established a dedicated AI Taskforce in 2020. In fact, its trajectory has been nothing short of phenomenal, taking even Frederick Walls, AI Taskforce co-chair and AMD fellow, by surprise.

“Absolutely I’ve been surprised by the pace of development, especially when I think back to 2020 when not that many people were talking about AI,” he recalls. “From conversations with various people, I would get the impression that some companies had a couple of staff looking at AI in the corner and seeing what it might be able to do.”

But if it all felt rather tentative back then, AI in broadcast has moved decisively from theory to practice during the last two years. Hence SMPTE, in conjunction with the European Broadcasting Union (EBU) and Entertainment Technology Center (ETC), has now updated the Engineering Report it first issued in 2023, acknowledging that “the AI in media landscape is evolving at an unprecedented pace, and the SMPTE AI Taskforce remains committed to keep up with this transformation,” notes SMPTE standards director Thomas Bause Mason.

In particular, the new report covers multiple recent developments around standards and technical frameworks. “One of the recommendations we made in this most recent revision of the report was to sort of standardise on things like Model Context Protocol (MCP) and A2A,” explains Walls. “Then, within about two weeks, Google and a number of big AI companies announced that they were going to standardise MCP.”

Likened by its developers to a “USB-C port for AI applications”, MCP is an open-source standard for connecting AI applications—such as Claude or ChatGPT—to external data sources, tools and workflows. Regarded as a complementary rather than competing development, the Agent-to-Agent (A2A) protocol enables AI agents “to communicate with each other, securely exchange information, and coordinate actions on top of various enterprise platforms or applications,” say its developers.
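
Under the hood, MCP is built on JSON-RPC 2.0: a client asks a server which tools it exposes, then invokes them by name. A minimal illustration of the two core message shapes follows; the tool name and its arguments are invented for the example.

```python
# The two core MCP tool messages, expressed as plain JSON-RPC 2.0.
# "search_archive" and its arguments are hypothetical, for illustration only.
import json

list_tools = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",            # ask the server what it can do
}

call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",            # invoke one tool by name
    "params": {
        "name": "search_archive",      # hypothetical newsroom tool
        "arguments": {"query": "election night rushes", "limit": 5},
    },
}

print(json.dumps(call_tool, indent=2))
```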

The report also addresses the latest developments around ISO/IEC 42001, the first international certification standard for Artificial Intelligence Management Systems (AIMS). Published in late 2023, the standard provides guidance to organisations that provide, produce or use AI systems, and can help them achieve compliance with legislation such as the EU AI Act and national laws like Italy’s notably comprehensive AI Law, which entered into force in October 2025.

In short, the foundational building blocks that will allow AI to develop in a standardised and interoperable fashion are beginning to fall into place. “If you want interoperability and for everyone to be able to use something without fear of intellectual property issues and so on, it’s best to go through a standardisation project with due process,” says Walls. “Making sure that everyone’s on the same page and doing things the same way is very important; without that, you can end up with security issues and all kinds of unexpected problems.”

The fact that progress is being made in multiple areas of standardisation is just as well, given that “industry-wide we are now seeing a huge level of interest in AI and, increasingly, a breakneck pace of change. From our perspective as SMPTE, we’re doing all we can to keep our members and others who are reading the report up to date with what is a very rapid pace of evolution,” says Walls.

Revolutionising the newsroom

Application-wise, Alexandre Rouxel—who is senior project manager, data and AI at the EBU—agrees that deployment is most advanced in tasks such as localisation services and camera automation. “For real-time transcription and captioning, it’s becoming the norm,” he says. “We are reaching a plateau [for these applications] where there will still be small improvements, but in many ways it has reached maturity.”

Rouxel is also keeping a keen eye on the development of AI tools in the newsroom for production and fact-checking. “We are now at the age of edge computing, which involves AI running locally for production, [in other words] on your devices and a lot of things that are more or less linked to the race of the LLMs [Large Language Models] and everything related to production.”

BELOW: Frederick Walls
Image courtesy of Birdlkportfolio/Getty Images

Also focusing on the newsroom, Walls anticipates further innovations around automation and quality control. “You already see vendors coming out with live production tools that are AI-enabled, so for example, you might have the news anchor saying something and then the software underneath sort of reacting to that. We see a lot of opportunities for improving the quality and efficiency on the broadcasting side of things, and think that people are recognising opportunities and starting to bring products to market in those areas. It’s a value proposition that basically aims to help people do what they do, but do it better.”

Meanwhile, the SMPTE AI Taskforce is an ongoing entity, with members usually meeting once a month, says Walls, to “talk a bit about the various projects going on in SMPTE, and the other areas where SMPTE could make a difference, for example, are there other opportunities for standards or recommended practices? So there are plenty of strands to keep on track and ensure that we can provide our members with quality information that helps them navigate this world of AI.”

Inevitably, this will also involve further revisions to the Engineering Report. “After this latest version came out, I asked the Taskforce whether we should already be thinking about the next one, and the answer was unequivocally ‘yes’,” says Walls. “So it could be that there will be another edition in two years’ time, which underlines how intense the pace of change has become.”

To download SMPTE Technology Reports, including the latest revision to the AI and Media Engineering Report, please visit https://www.smpte.org/technology-reports-downloads

BELOW: Alexandre Rouxel

WHY AI GIVES MEDIA COMPANIES A

syncing feeling

Mark Harrison, founder and chief content officer at DPP, explains why some media companies are losing patience with AI-led productivity

The story of technology innovation over the past three years has become synonymous with artificial intelligence. It is now almost impossible to discuss the future of media and entertainment without invoking AI in some form. And yet, despite the scale of investment, the speed of technical progress, and the intensity of public debate, there remains a profound uncertainty about how AI will actually deliver for media organisations in the near term.

Very few doubt AI will prove important. But there is a syncing issue. Move too soon, waste money, and lose faith. Go too late, miss the opportunity, and lose face.

The syncing challenge can be found at two levels: the global economy and the media economy.

In the global economy, there is a widening gap between consumer adoption and business economics. Consumers are embracing generative AI tools at extraordinary speed. AI assistants, image generators, and content-creation apps are among the fastest-growing categories in app stores worldwide. But the financial contribution made by each individual consumer to the huge cost of delivering these services is tiny. Explosive demand paired with fragile sustainability has caused some to fear we’re in an AI bubble—the bursting of which could have repercussions that reach far beyond the timing of AI adoption.

Image courtesy of Yuichiro Chino/Getty Images

[Chart: Firms using AI to produce goods & services (%, United States)]

Within the media economy, there is a similar conflict between the consumer and business realms. Audiences are already using AI to search, create, and consume. This costs them little or nothing—but has the potential to profoundly impact the world of content discovery. Media companies, meanwhile, are attempting to deploy AI to make themselves more productive so they can deliver to consumer demand. These two trajectories are related, but they are not the same. And they threaten to throw the media industry wildly out of sync with the evolution of AI elsewhere.

Here’s why.

AI is often presented as an urgent necessity: a tool that must be adopted quickly because it is the only solution to the problem of an ever-expanding media economy. That sense of urgency is real. Media companies are under constant pressure to handle more content, in more formats, at greater speed, and with tighter margins. The traditional media factory—with its complex workflows, specialist roles, and long time-to-revenue—no longer seems fit for purpose.

And AI does indeed appear, at least on the surface, to offer a solution. The productivity potential is now undeniable. The trouble is that the evidence of consistent, transformative, and measurable returns on investment is still limited. Even among the most optimistic forecasts, meaningful impact is often projected two years into the future. This creates a tension between board-level expectations and operational reality.

This tension is already visible in adoption data. Surveys suggest that AI use within businesses in general is widespread but plateauing. Initial experimentation has given way to a more cautious phase, as organisations discover the limits of what current tools can deliver without significant redesign of processes, skills, and culture.

In the specific case of the media industry, AI seems to work best either at the very start of the content supply chain—where it can assist creative exploration—or at the very end, where it can personalise, recommend, and monetise content for consumers. But the industrial middle that is the media factory remains stubbornly resistant to transformation.

This resistance is not because of a reluctance to change, but because the constraints are large. The inputs are from a production world that is highly specialised and deeply human. The outputs are to a consumer world that is unpredictable and capricious. Meanwhile, the systems that underpin media management and distribution are complex, sometimes fragmented, and often poorly suited to rapid change. The result is that AI adds value in pockets, rather than reshaping the whole.

At the same time, the most dramatic gains from AI are often occurring in environments with very few people. The potential for a ‘one-person unicorn’ and the celebration of minimal staffing among AI startups underline a difficult truth: AI thrives where there is little organisational inertia. Large media companies, by contrast, are defined by legacy structures, workflows, and cultures. The very scale that gives them reach also slows their ability to adapt.

The result of all these conflicting forces? The boards of media companies look set to lose patience with the cost of investments in AI-driven productivity, and to return their attention to what they know best: driving as much revenue as possible from content IP.

And this is where the syncing issue returns. Established media organisations will try to remain competitive in a rapidly reshaping content ecosystem without fundamentally transforming their technology or their operating models. That means that by the time AI has evolved to provide the reliability and productivity they crave, the transformational challenge—and the urgency to undertake it—will be still greater. Meanwhile, challenger entities will have had another couple of years to spot their opportunity and make their mark.

The forecast that media organisations will lose patience with AI-led productivity is just one of five predictions in the newly released DPP Tech Trends report. Four other predictions explore how other developments in business and consumer technology will impact media and entertainment, and why it’s never been more important to understand the complex web of dependencies now shaping media businesses.

2026 will be the year in which successful media businesses equip themselves for the future by letting go of their wishful thinking about the potential of AI and getting themselves in sync with reality, frustrating and painful though it may be.

Mark Harrison image courtesy of Lars Hübner

ON INNOVATION
Levira Leeds

Last year, Levira Media Services unveiled its brand new facility in the centre of Leeds. Matthew Corrigan visits the historically significant site to uncover some of its technological surprises

With offices in Tallinn’s iconic 314 metre TV Tower (Tallinna Teletorn), which it also owns and operates, Estonian media services provider Levira has been at the leading edge of broadcast technology for more than seven decades. Last year, as part of a strategic move to drive engagement and long-term collaboration with broadcasters in the region, the company launched a UK entity, establishing Levira Media Services UK in May under CEO Martti Kinkar and COO Stephen Stewart.

In September, the company began operations from its new headquarters, located in a somewhat unexpected building in the centre of Leeds. The city itself is a major internet hub, housing the first independent internet exchange outside of London, and was home to the UK's first free internet service provider in 1998. An estimated third of the country’s internet traffic passes through Leeds. Thanks to its extensive full-fibre infrastructure, the city is able to offer speeds up to 100 times faster than the national average, providing Levira with obvious connectivity benefits.

Speaking ahead of the official opening, Tiit Tammiste, CEO of Levira, said, “The decision to invest in the UK was driven by our belief in the strength and creativity of this market. Establishing a state-of-the-art facility in Leeds, backed by a dedicated UK team, ensures we can deliver the same quality, resilience, and innovation that Levira is known for across Europe, now tailored to local needs. This is about long-term partnership and building a sustainable foundation for growth.”

In a classic case of hiding in plain sight, Levira’s Yorkshire base gives no hint of the high-tech work that goes on inside. The Grade II-listed Salem Chapel is owned by telecoms provider aql and houses a state-of-the-art data centre, a hub for global connectivity and satellite operators. Originally opened in 1791, the sympathetically restored chapel is steeped in local history, and was the birthplace of Leeds United Football Club over a century ago. It's also home to the Estonian Consulate to the UK and Isle of Man, which is how it first came to the attention of Levira.

The chapel incorporates a glass-floored auditorium situated directly above the data centre, offering a fascinating view of the high-tech workings below. The venue has hosted many UK:Estonia delegations and launched numerous government initiatives. Later this year, it will play host to the Society of Motion Picture and Television Engineers (SMPTE) UK’s annual Bill Lovell Memorial Lecture.

Fully integrated with aql’s Tier 3+ colocation and telecoms hub, the environment blends broadcast grade infrastructure with the flexibility of cloud computing, delivering scalable, low-latency and high-availability media operations.

Victoria Butt, media operations director at Levira, explains that the facility has been designed to provide a “one-stop shop” for its users, enabling real-time delivery across linear and non-linear workflows. “We provide an end-to-end service through live and reactive operations. Our assets support everything from content preparation, media management, transcoding for different formats, then playout and distribution,” she says. Monitoring is handled by a TAG multiviewer, with Techex’s tx darwin and tx edge deployed for media processing and IP transport. BCNexxt’s Vipe platform enables simplified AI-driven playout automation.

Levira works with several major global organisations including Warner Bros Discovery, PBS and Viaplay, as well as owning a European distribution network. Following a successful series of operational tests, broadcast tests are about to get underway. The company is already looking to expand its offer. “In the medium term, we aim to see increased collaboration and growing relationships,” says Butt. “It’s all about helping clients to monetise their content.”

Reinforcing Levira’s core ethos of combining technological innovation with local teams on the ground, the site provides operational support for broadcasters and content owners both nationally and internationally. The environment enables services such as pop-up channels to be up and running with minimal delay, providing on-demand coverage for sports and other live events. “Flexibility, scalability and agility are key,” says Butt. “Everything is cloud-based and can be scaled as needed to meet the changing needs of the rapidly evolving broadcast space.”

In October, Levira joined partners from across the industry to support Media Talent Manifesto’s (MTM) inaugural On Air broadcast. The company provided playout for the world’s largest student-led broadcast, bringing together more than 1,000 students from participating universities, staff and others based around the world. From a central hub at London’s Ravensbourne University, the project was the first to use the Time Addressable Media Store (TAMS). MTM has confirmed the project will return in October with even greater ambitions. Levira will once again be providing playout. “On Air will have five channels and possibly one live sports channel this year,” says Butt, explaining what involvement means for participants. “It’s great for establishing contact with students, both to provide work experience and to tackle the skills gap. It allows students access to a live test facility where there are no real-world consequences if any mistakes happen.”

Levira opened its Leeds premises with the aim of combining broadcast experience with cutting-edge IT-driven innovation. The company is already thinking of expanding the services it offers, enabling organisations of all sizes to face forward, boost collaboration and easily work with the technology that is driving the future of media and entertainment.

The site provides operational support for broadcasters and content owners both nationally and internationally

Why volumetric 3D is finally ready for broadcast

Dynamic 3D content has always had a scaling problem. Broadcasters and media companies have spent years exploring real-time 3D to create more immersive experiences—from virtual studio environments to live performance delivered specifically to XR (Extended Reality) devices. Capture and rendering have advanced significantly, but volumetric 3D has struggled to move beyond trials and controlled demonstrations. The core limitation is technical: while tracked mesh compression works well for predictable, animation-like motion, the industry has lacked an efficient way to handle non-tracked dynamic meshes. In simple terms, natural, real-world 3D content does not stay structurally consistent from one moment to the next, making it difficult to compress and stream reliably at scale.

Each captured moment in dynamic 3D content generates a 3D mesh made up of thousands, or millions, of vertices. When capturing people or dynamically complex environments, those vertices do not remain consistent. A person walking does not move as one fixed object: posture changes, clothing crumples, and surface details shift continuously. With no stable correspondence from 3D frame to 3D frame, each frame effectively becomes a new, irregular 3D shape.

For broadcasters and content creators, this creates an operational problem. Compression becomes inefficient and unpredictable with no control over the bitrate, and large volumes of data must be transferred repeatedly between the CPU and GPU in order for the GPU-based AI algorithms to generate the 3D meshes in real time. That transfer overhead increases processing cost, introduces latency, and makes real-time streaming difficult to sustain at scale. These constraints explain why most volumetric content in broadcast and XR contexts is pre-recorded or restricted to controlled environments.
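
To make the scale of the problem concrete, here is a back-of-the-envelope estimate of the raw data rate of an uncompressed dynamic mesh; the vertex count, frame rate and per-vertex layout are illustrative assumptions, not figures from any particular capture rig.

```python
# Back-of-the-envelope data rate for an uncompressed dynamic mesh.
# Vertex count, frame rate and attribute layout are illustrative assumptions.
VERTICES = 1_000_000
FPS = 30
BYTES_PER_VERTEX = (3 + 2) * 4        # xyz position + uv texture coords, float32

bytes_per_frame = VERTICES * BYTES_PER_VERTEX   # connectivity (triangle indices)
bytes_per_second = bytes_per_frame * FPS        # must also be re-sent whenever
                                                # the topology changes per frame
print(f"{bytes_per_frame / 1e6:.0f} MB/frame, {bytes_per_second / 1e9:.1f} GB/s raw")
# -> 20 MB/frame, 0.6 GB/s before textures or connectivity
```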

A new ISO standard to solve the problem

Video-based Dynamic Mesh Compression (V-DMC) changes how dynamic 3D data is delivered. Instead of transmitting a full, detailed mesh for every moment, the standard converts original meshes to a simplified base version, with time-varying detail—such as motion or fine surface change—mapped to 2D video frames.
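
Conceptually, the decoder rebuilds each frame as a coarse base mesh plus per-vertex displacements recovered from the video layer. The toy numpy fragment below illustrates only that decomposition idea, not the actual V-DMC bitstream or its subdivision scheme.

```python
# Toy illustration of the split: a static base mesh plus per-frame
# displacements that the encoder packs into ordinary 2D video frames.
import numpy as np

VERTS = 100
base = np.zeros((VERTS, 3), dtype=np.float32)            # simplified base mesh
# One frame's displacements, already unpacked from the video layer to
# per-vertex xyz offsets (values invented for the example):
displacements = 0.01 * np.ones((VERTS, 3), dtype=np.float32)

reconstructed = base + displacements                     # detailed mesh, this frame
print(reconstructed.shape, reconstructed[0])             # (100, 3) [0.01 0.01 0.01]
```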

With V-DMC, decoding can take place directly on the GPU using hardware-accelerated video pipelines, reducing decoding complexity and minimising the need to move large amounts of data between the CPU and GPU.

This shift delivers two major benefits. Firstly, it enables high-quality, real-time rendering while preserving the detail and fluidity required for immersive XR, volumetric media, and digital twins. Secondly, because V-DMC is built on established video coding technologies, it ensures broad compatibility and can run on existing devices without specialised hardware. V-DMC also delivers a significant improvement in compression efficiency, reducing multi-gigabyte 3D mesh sequences to just a few megabytes, with typical compression ratios in the range of 250:1 to 300:1.
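
Applied to a nominal sequence, the quoted range works out as follows; the 3GB starting size is an assumption for illustration.

```python
# What the quoted 250:1 to 300:1 ratios mean for a nominal 3 GB sequence.
SEQUENCE_GB = 3.0                        # assumed size of a captured mesh sequence

for ratio in (250, 300):
    mb = SEQUENCE_GB * 1024 / ratio      # compressed size in megabytes
    print(f"{ratio}:1 -> {mb:.1f} MB")
# 250:1 -> 12.3 MB
# 300:1 -> 10.2 MB
```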

Until now, volumetric content creation projects have relied on bespoke pipelines, specialist hardware, and custom software stacks. While effective for experimentation, this approach does not fit mainstream broadcast operations, where repeatability, cost control, and long-term support are critical. V-DMC removes much of this friction. Built on existing video coding technologies, it can be integrated into existing production, delivery, and playback workflows with fewer changes. Volumetric content that is encoded once can be decoded across a wide range of devices, improving interoperability and reducing deployment risk.

Driving value

For media organisations, the implications are significant. By aligning more closely with video-based operational models, volumetric content can move beyond demonstrations and into real-world deployments. Broadcasters can begin to use live or on-demand 3D content for XR platforms, special events, and interactive viewing experiences, allowing audiences to move around and engage spatially.

While V-DMC addresses a major technical barrier that has held dynamic 3D back, widespread adoption depends on how quickly the ecosystem matures. Interoperable implementations, reference tooling, and integration into existing multimedia frameworks, engines, and production pipelines will determine how fast and efficiently broadcasters can work with volumetric formats at scale.

As with earlier transitions in broadcast technology, success depends on shared standards rather than isolated deployments. With V-DMC, real-time 3D is closer to operational reality than ever before. The challenge now is to translate this technical foundation into production-ready workflows that broadcasters and content creators can rely on.