TVB Europe UK 0116 - Print Sept 2025



Realising the promise of virtual production

A MODERN ROAR FOR A PREHISTORIC WORLD

Walking with Dinosaurs returns

An ever-changing world

In the run-up to IBC Show, I’m often asked the same question: which themes are you tracking ahead of the show? Sometimes I find it hard to answer because of the sheer number of different subjects we cover in both TVBEurope’s Daily newsletter and each of our magazine issues. But this year, there are definitely a few that stand out.

Everyone has been talking about artificial intelligence for the past 18 months, and it shows no sign of slowing down. Of course, I know that AI is not new to the media and entertainment industry; it’s generative AI that is shaking things up and generating the most headlines. In a way, it’s almost helping to democratise things like video generation, maybe even helping young people see a way into both the production and technology sides of the industry. I’ll let you decide what to make of that.

I’m also hearing a lot about the Media eXchange Layer (MXL), part of the Dynamic Media Facility reference architecture developed by the EBU. It aims to enable interoperable exchange of media between traditional broadcast and professional AV software. I would definitely recommend checking out the July/August issue of TVBEurope for a proper deep dive into the subject. But what I am seeing is media tech vendors starting to talk about it a lot more. Sometimes when that happens, vendors get excited about something that their customers take longer to adopt—cloud and IP being good examples. But I feel that having an organisation like the EBU helping to push MXL will bring broadcasters on board much faster.

“Virtual production doesn’t have to include a huge LED screen; you can use the phone in your pocket to create amazing backdrops”

Virtual production isn’t new; it’s been around for a few years now, but again I feel like traditional broadcasters are becoming more interested in it. I always say virtual production went mainstream when Coronation Street used it back in 2022. Now we’re seeing it employed on shows like Doctor Who, Good Omens and Brassic. Broadcasters are realising the potential of virtual production and the benefits it offers. It also doesn’t have to include a huge LED screen in a building on an industrial estate somewhere; you can use the phone in your pocket to create amazing backdrops. It’s even being used by corporate entities to enhance their content, and is also finding its way into visual podcasts. You’ll find examples of all three in the following pages.

That’s just three of the themes I’ve been talking about ahead of IBC2025. I’m sure you’ve probably been talking about completely different areas of innovation. That’s the great joy of this industry: there’s always something to talk about. All I can say is, make sure you’re signed up to TVBEurope’s Daily newsletter to stay up to date with all that’s going on in this dynamic, ever-changing world.

www.tvbeurope.com

FOLLOW US

X.com: TVBEUROPE / Facebook: TVBEUROPE1 / Bluesky: TVBEUROPE.COM

CONTENT

Content Director: Jenny Priestley jenny.priestley@futurenet.com

Senior Content Writer: Matthew Corrigan matthew.corrigan@futurenet.com

Graphic Designers: Cliff Newman, Steve Mumby

Production Manager: Nicole Schilling

Contributors: David Davies, Helen Dugdale, Kevin Emmott, Kevin Hilton, Graham Lovelace, Matt Stagg

Cover image: Courtesy of BBC Studios, Lola Post Production, Getty Images

ADVERTISING SALES

Publisher TVBEurope/TV Tech, B2B Tech: Joseph Palombo joseph.palombo@futurenet.com

Account Director: Hayley Brailey-Woolfson hayley.braileywoolfson@futurenet.com

SUBSCRIBER CUSTOMER SERVICE

To subscribe, change your address, or check on your current account status, go to www.tvbeurope.com/subscribe

ARCHIVES

Digital editions of the magazine are available to view on ISSUU.com. Recent back issues of the printed edition may be available; please contact customerservice@futurenet.com for more information.

LICENSING/REPRINTS/PERMISSIONS

TVBE is available for licensing. Contact the Licensing team to discuss partnership opportunities. Head of Print Licensing Rachel Shaw licensing@futurenet.com

MANAGEMENT

SVP, MD, B2B Amanda Darman-Allen

VP, Global Head of Content, B2B Carmel King
MD, Content, Broadcast Tech Paul McLane

Global Head of Sales, B2B Tom Sikes

Managing VP of Sales, B2B Tech Adam Goldstein
VP, Global Head of Strategy & Ops, B2B Allison Markert
VP, Product & Marketing, B2B Andrew Buchholz

Head of Production US & UK Mark Constance

Head of Design, B2B Nicole Cobban

Virtual production beyond The Mandalorian

Virtual production is no longer a niche or high-end technique. Broadcasters, sports organisations, and creators are now using real-time tools to deliver high-quality content faster, more consistently, and without traditional overheads.

Green screens, camera tracking, and real-time engines are making content workflows more flexible and accessible. From matchday studios to branded social campaigns, virtual production is quickly becoming part of the everyday production toolkit. What used to need a Hollywood soundstage now fits in the corner of a club media room.

What virtual production really means today

Virtual production blends live action with real-time graphics. It works with LED walls or green screens, using camera tracking to align virtual backgrounds with real-world movement. Broadcasters and content teams now use compact virtual production setups to build reusable environments that support everything from live programming to fast-turnaround digital content. Virtual scenes can be reused, modified quickly, and adapted across formats without the delays of physical set changes.
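To make the mechanics concrete, here is a minimal sketch of the green-screen half of that workflow: keying out the screen colour and dropping a rendered background behind the presenter. It is illustrative only, written in plain Python/NumPy with an assumed green_dominance threshold; a production keyer, and the camera tracking that keeps the background aligned with lens movement, is far more sophisticated.

```python
# Minimal green-screen composite, assuming 8-bit RGB frames as NumPy arrays of
# equal size. Illustrative only; real virtual production keyers handle spill,
# edges, and motion blur, and camera tracking re-renders the background per frame.
import numpy as np

def chroma_key_composite(foreground: np.ndarray, background: np.ndarray,
                         green_dominance: float = 1.3) -> np.ndarray:
    """Replace strongly green foreground pixels with the virtual background."""
    fg = foreground.astype(np.float32)
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    # Treat a pixel as "screen" when green clearly dominates red and blue.
    is_screen = (g > green_dominance * r) & (g > green_dominance * b)
    out = foreground.copy()
    out[is_screen] = background[is_screen]
    return out
```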

With the pressure to produce more content across more platforms, virtual production helps deliver speed, flexibility, and consistency while keeping control of production quality. Visual identity can be locked in and reused, even when producing from different locations or with remote contributors.

Virtual production also solves problems around physical space and set availability. It reduces the need for repeated set builds and allows a single asset to be adapted for different audiences. For sport and live-event teams working to tight schedules, it can be the difference between making content and missing the window.

There is also a clear sustainability benefit. Reducing travel, set construction, and on-site logistics lowers the environmental impact of production. As broadcasters and brands face rising pressure to hit sustainability targets, virtual production offers a way to cut emissions while improving output—and saving budget in the process.

Real-world use cases

Sports clubs are using virtual production for interviews, tactical segments, media days, and sponsor content. A basic green-screen setup with camera tracking and a real-time engine can serve multiple teams and outputs, from digital match previews to studio clips for international feeds.

Broadcasters are turning to virtual environments to add flexibility and consistency to daily production. Studio-based formats such as magazine shows, discussion panels, and hybrid news-sport formats benefit from adaptable sets that match brand identity and allow for quick format changes.

Creators and branded content teams are using similar workflows. Virtual production allows them to film different scenes in a single session and deliver tailored assets for multiple platforms with a consistent look and feel. This is especially valuable when turnaround is tight or when clients demand location-style output without travel.

Virtual production is often misunderstood as expensive or overly technical. In reality, many teams are running small-footprint setups with green screen, basic lighting, camera tracking, and real-time rendering. It does not require an LED wall, a VFX supervisor, or a Hollywood-grade facility. The tools are becoming more intuitive, and training is more accessible. Roles like virtual studio operator and real-time content designer are now common within traditional broadcast teams.

The technology is also helping to bring in a new generation of creators. Tools like Unreal and Unity are widely used in gaming, animation, and interactive design, meaning many younger professionals are already using them. This opens the door for a more diverse workforce and new creative approaches. When paired with experienced broadcast operators and editorial teams, these creators are building a stronger pipeline that reflects how content is being made today.

What’s coming next

Virtual production is shifting from a specialist tool set to shared infrastructure. Cloud-based control, remote production, and AI-driven automation will all continue to influence how these environments are created and used. Graphics, lighting, and data feeds can already be updated in real time. Virtual production will soon be more deeply connected to editorial and commercial workflows. Broadcasters and rightsholders who build this capability now will be ready to respond faster, scale production intelligently, and take advantage of new commercial formats. In short, virtual production won’t just support the next wave of content; it will shape it.

Virtual production is not a trend. It is already transforming how content is made and delivered, giving teams more control, creative flexibility, and production consistency while reducing cost and complexity. From sports clubs to entertainment studios to digital-first creators, the same tools are now in play, and those who adopt early will not just keep up; they will lead.

Studios face a decision over AI

Google stole a march on its AI rivals in May by launching a video generator that introduced sound. Not just music and sound effects, but dialogue, too, with near-perfect lip-sync. Within hours of its release, the first Veo 3 outputs lit up social media with AI-generated street interviews (more “faux pops” than vox pops) and a chilling fake news reel on YouTube labelled “It’s time for the thing that everyone feared”: deepfakes looking and sounding as real as genuine live news.

Shortly afterwards, advertising legend Sir Martin Sorrell urged agencies to move from “time-based to output-based compensation” as 30-second commercials that previously required months and cost millions would now “take days and cost thousands”, thanks, of course, to AI. Speaking on the sidelines at Cannes Lions, Sir Martin said AI would transform copywriting and visual production.

US audiences watching the NBA Finals in June got a glimpse of that transformation—a totally wild AI-generated TV ad that delighted its client Kalshi, the online prediction market, and startled its creator, video producer PJ “Ace” Accetturo. Why the surprise? That Accetturo had been hired “to make the most unhinged” TV ad possible, and that a TV network had approved it. Accetturo blogged how he’d turned Kalshi’s rough ideas into prompts, using Google’s AI assistant Gemini, then fed those prompts to Veo 3.

Hundreds of Veo 3 generations resulted in 15 usable clips. The project took two days and cost $2,000. Anyone can do this, right? “You still need experience to make it look like a real commercial,” he said. “I’ve been a director 15+ years, and just because something can be done quickly doesn’t mean it’ll come out great.”

A month later, Netflix co-CEO Ted Sarandos admitted generative AI had been used for the first time in one of its original TV series. Sarandos told analysts that AI was used to simulate a building collapse in the making of the hit Argentine sci-fi series The Eternaut. “Using AI-powered tools, they were able to achieve an amazing result with remarkable speed and, in fact, that visual effects sequence was completed 10 times faster than it could have been completed with traditional VFX tools and workflows,” said Sarandos.

“We remain convinced that AI represents an incredible opportunity to help creators make films and series better, not just cheaper.”

We don’t know which AI model was used, but soon after Sarandos’ revelation, Bloomberg reported that Netflix had been trialling Runway AI’s generative tools. The same report claimed Disney was also testing Runway’s technology, though none of the parties wanted to comment.

Their reticence is understandable. Memories of the 2023 Hollywood actors’ and writers’ strikes are still raw. The long-running disputes centred on the potential for AI to replace artists by scanning their faces and creating digital replicas, as well as substitute writers with AI prompts generating scripts. The studios made major concessions to creatives, but now those hard-won victories feel more like a temporary truce. The temptation for studios to at least dabble with AI that grows more powerful and more cost-effective by the week is so great that we’re bound to see a re-run of those labour battles.

The other tightrope TV and film producers are walking is the degree to which they want to be seen to be on the right side of AI’s ethical debate.

Video generators from the major AI developers were largely trained on material scraped from the web without the consent of rightsholders. Runway is being sued in the US by a group of artists who accuse it of infringing on their copyright. But Runway has another approach: training video models on a studio’s archive. That bespoke arrangement is at the heart of the company’s landmark deal with Lionsgate Studio, struck last year.

TV and film studios can now take a bigger leap along the ethical path. In July, AI start-up Moonvalley released Marey, the first fully licensed AI video generator for professional filmmakers, with around 80 per cent of its training footage coming from indie creators and agencies. While Marey has been trained on a smaller dataset, its architect says he’s overcoming the shortfall by using better technology.

TV and film studios now have three options: dirty data, bespoke data, clean data. The question is no longer when they will embrace AI, since they’re doing it now, but how? Will they do it in a way that respects creators, or replaces them?

Graham charts the global impacts of generative AI on human-made media at grahamlovelace.substack.com

Real skills, global reach

It seems crazy that when I attended NAB Show earlier in the year, On Air was only really a concept. Fast forward a few months, and it’s now on track to become the world’s largest global student-led broadcast, with giants of the media industry such as AWS and ITV Studios supporting the project and 17 universities across six continents taking part—all of whom are challenged with producing one hour of live content that will be streamed to a global audience via YouTube.

What has been striking over the last few months is how quickly partners, whether education or industry, have come on board for this project. And it has made me ask, why has it resonated so strongly?

Bridging talent and industry

At its core, On Air is about connection. Between students and industry. Between education and opportunity.

At a time when headlines are filled with talk of AI replacing entry-level roles, it’s no wonder young people are feeling uncertain about their future in media. Many of today’s students have come through disrupted education experiences during the Covid-19 pandemic and are graduating into a rapidly evolving sector. AI, automation, remote workflows and structural shifts are all changing the shape of the media industry, and with them, the types of roles available. While some companies are making cuts, others are scaling up. Start-ups are innovating fast. Entire disciplines, from production to post, are being reshaped. All of them need talent with the right skills to futureproof their operations.

We need diverse talent entering this global industry now more than ever. Students won’t just be entering a job role, but a global, networked industry built around innovation and change. They won’t be focused on a local business, but a sector that spans every part of the globe, with jobs and career paths available for them to grasp and adapt with.

This is what On Air was built to support. It offers students practical experience, global exposure, and a network that starts before their first job does. At the same time, it provides the industry with an early window into the talent shaping tomorrow.

Bringing education and industry closer together has always been beneficial, and as the industry adapts to new skill demands, education must do the same. Collaboration is vital—and that’s exactly what On Air is designed to support.

The benefits for students are both immediate and long-lasting. As well as broadcast-ready experience, participants gain production credits, creative confidence, and international exposure—all before graduation. They’ll be building networks with peers from around the world and connecting with some of the biggest names in the sector. With over 500 students predicted to be involved in On Air, this initiative empowers them to start building international networks from day one—and gives companies the chance to support and engage with emerging talent that’s ready for the future.

In addition to this, we are also offering students personal branding training, sustainability workshops, access to major trade shows like IBC and NAB Show, and opportunities to contribute to the OTTRED app and community. It’s an extraordinary chance to become fully immersed in the media and entertainment technology ecosystem from the outset.

Shaping what comes next

On Air is proudly delivered by the Global Media and Entertainment Talent Manifesto, which I co-founded in 2023 to address mounting skills challenges across the sector. This project is one example of what can happen when we stop talking about the talent pipeline and start building real bridges.

We believe On Air has the potential to become a repeatable model for how industry and education can collaborate—not just as a one-off. With advertising and sponsorship opportunities still open, and the livestream staying online post-event, the channel will also become an enduring showcase of student work, accessible to employers and educators worldwide. It’s a practical way to build skills, visibility and confidence in the people who will shape the future of our industry, and a chance for us all to play a more proactive role in supporting them.

To find out more or get involved, visit: mediatalentmanifesto.com/on-air

“Collaboration is vital—and that’s exactly what On Air is designed to support”

The new era of virtual production

Virtual production has progressed significantly since LED walls first captured the imagination of filmmakers with their promise of real-time, in-camera visual effects (VFX). Over the past few years, studios and vendors have adopted a spectrum of techniques ranging from simple 2D plates to full 3D environments created using game engines. But as both the technology and practices mature, a middle ground built on the 2.5D techniques established by VFX over the years is emerging as the most viable approach, poised to best realise the promise of virtual production.

Real-time 3D environments offer the broadest flexibility, but the cost, complexity, and performance demands of supporting them are often prohibitive, and image quality can suffer. Using 2D plates is simpler and more cost-effective while also offering higher quality, but the lack of depth means the technique is only applicable to a narrow set of use cases.

In contrast, 2.5D techniques use photographic content projected onto geometry, creating parallax and depth while maintaining the highest image quality, and doing so without the computational overhead or production demands of full 3D. It’s a creative sweet spot where many productions are finding the control, quality, and flexibility they need.
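A rough back-of-envelope illustration of why those parallax cues work: a plate projected onto a card at a chosen depth shifts across the frame in inverse proportion to that depth as the camera translates. The numbers below are assumptions chosen for illustration, not figures from any production.

```python
# Approximate horizontal image shift of a 2.5D "card" as the camera dollies
# sideways, using a simple pinhole model. All values are illustrative assumptions.

def parallax_shift_px(focal_length_mm: float, sensor_width_mm: float,
                      image_width_px: int, camera_move_m: float,
                      card_depth_m: float) -> float:
    focal_px = focal_length_mm / sensor_width_mm * image_width_px
    return focal_px * camera_move_m / card_depth_m

# A 35mm lens on a ~25mm-wide sensor, 4K frame, 0.5m lateral camera move:
near = parallax_shift_px(35, 24.9, 4096, 0.5, card_depth_m=10)   # ~288 px
far = parallax_shift_px(35, 24.9, 4096, 0.5, card_depth_m=200)   # ~14 px
print(round(near), round(far))  # nearer cards shift far more, which reads as depth
```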

Take, for example, the rooftop sunset scene in VFX Oscar nominee The Batman. The Gotham skyline was created from photography of New York, stylised and enhanced with just enough 2.5D geometry to deliver the right amount of perspective and parallax. The result: a striking, cinematic backdrop with a fraction of the complexity and cost of a fully modelled city. This approach ensured the production remained efficient and manageable, and delivered a spectacular end result.

The creative advantages of 2.5D are clear. It offers high-resolution imagery based on real photography, with believable parallax and depth cues as the camera moves. But 2.5D techniques were previously difficult to achieve on set, largely because the tools and technologies were cobbled together from other industries and not built with filmmaking in mind. At Foundry, we serve VFX artists who drive innovative techniques and live at the intersection of technology and artistry. Our goal is to accelerate their craft with tools that empower them to create breathtaking imagery. Tools like Nuke Stage are designed specifically to bring VFX sensibilities into virtual production. Instead of treating LED walls and on-set visualisation as a separate virtual production pipeline, Nuke Stage integrates them into established VFX pipelines.

Artists can now fine-tune imagery on set using live compositing to bridge the virtual and physical elements. This approach enables productions to work with seasoned VFX studios and artists during filming, empowering supervisors who excel at capturing convincing digital imagery in-camera and maximising both quality and believability.

By integrating tools like Nuke Stage into 2.5D workflows, VFX artists effectively become virtual art departments, with the ability to prep, preview, and refine scenes while also contributing creatively on set. This stands in contrast to the more technically-focused workflows often seen in “brain bar” operations on typical LED wall shoots. The result is fewer iterations, greater consistency, and the integration of pre- and post-production thinking. In doing so, it fulfils the original promise of virtual production: to streamline creative decision-making and lower costs by placing skilled artists closer to the point of capture.

At a time when studios are under pressure to deliver more with less, these efficiencies matter. Virtual production, once seen as a bleeding-edge experiment, can be a practical tool in the filmmaker’s kit—and an efficient one in the eyes of producers and department heads. In many ways, virtual production is entering a more mature phase. And as the technology settles from dizzying hype to sustainable reality, it’s becoming clear that VFX professionals are central to making it work.

The early excitement around LED walls, game engines, and media servers generated plenty of buzz, but not all of it led to better filmmaking. Today, successful virtual productions blend creative disciplines, build on the foundations of established VFX practices, and understand how to make image-based content truly shine.

Tools like Nuke Stage are helping to usher in this new phase. By giving artists precise control over 2.5D environments, integrated colour pipelines, and the ability to work directly with photographic assets using established compositing tools, we’re bridging the gap between traditional VFX and real-time production.

It’s not about replacing one workflow with another. It’s about building a continuum where artists can choose the right level of dimensionality for the job.

Virtual production doesn’t need to be all-or-nothing. And it doesn’t need to mean reinventing the wheel. With the right tools and the right mindset, we can turn the promise of virtual production into a practical, creative, and sustainable reality.

TVBEurope’s newsletter is your free resource for exclusive news, features and information about our industry. Here are some featured articles from the last few weeks…

OBS TO SHIFT TO A FULLY IP- AND IT-BASED INFRASTRUCTURE FOR LA 2028

Aiming to blend cutting-edge technologies such as immersive camera systems with insights into performance data, OBS will shift to a fully IP- and IT-based infrastructure for the Games. Cloud-integrated, virtualised workflows will be enabled across venues, with key focus areas including AI-driven content processing, increased use of 5G and wireless systems for agile camera operations, and sustainable practices across the entire event.

The streamer has revealed some of the technology behind its move into live content.

BUILDING THE SOUND OF BUILDING THE BAND

Sound supervisor Oliver Waton talks exclusively to TVBEurope about his work on Netflix’s new music series, and why Shure’s new Nexadyne microphones turned out to be the perfect fit.

DON’T RECEIVE THE NEWSLETTER? YOU CAN SUBSCRIBE FOR FREE VIA THIS QR CODE

NAVIGATING THE FUTURE OF public service broadcasting

As the EBU marks its 75th year, Jenny Priestley sits down with director-general Noel Curran to discuss its enduring mission and the critical challenges facing broadcasters today

While the core mandate of the European Broadcasting Union (EBU), to defend and promote public service media, remains steadfast, the organisation itself has undergone a dramatic transformation since it first launched in 1950. “It started out as a very old-fashioned international organisation,” explains Noel Curran, the EBU’s director-general. “Now, it’s a more dynamic, agile organisation providing a much broader range of services, and it’s much more vocal.”

The EBU’s importance extends beyond public service media, serving as a critical unifier for the entire European broadcasting industry. “Within Europe, what we are realising more and more is, while we think we’re big, we’re actually quite small,” says Curran. This awareness has helped foster a growing imperative for collaboration, not just among public service broadcasters but also with the commercial sector.

“We will always have flashpoints and things we disagree on, but I think in Europe we are beginning to realise that there is a common challenge which is significantly greater than us in size, and that we need to work closer together,” he states.

This shared understanding of an external threat, particularly around the scale and influence of big tech, is a driving force behind the EBU’s collaborative efforts as it aims to act as a crucial facilitator, opening doors for communication and cooperation that would otherwise be difficult to achieve.

Looking back at the EBU’s biggest achievement over its 75-year history, Curran points to the organisation’s unwavering commitment to defending and promoting public-service media. Beyond that, the EBU has been crucial in fostering an environment in which leaders, often insular due to the intense national pressures they face, can look beyond their borders.

“As the head of a public service media organisation, you’re in the middle of everything, usually storms and controversies, and you can become a bit insular,” he says. “The EBU’s achievement is to open people’s minds to what is happening elsewhere, collaborate, communicate, learn, and work together. I think that’s very special.”

A challenging political and technological landscape

Since his appointment in 2017, Curran’s role has mirrored the EBU’s own expanding scope, as it offers a much broader range of services, encompassing content, technology, AI, and legal aspects, all of which have impacted his responsibilities. The relationship with technology and big tech, particularly with the rise of AI, has become a significantly larger part of the EBU’s work.

Perhaps the most notable change has been the increase in political pressure on public-service media across Europe (and beyond). “When I joined, 40 to 50 per cent of my travel was to do with policy, either in relation to political parties or regulation,” Curran states. “Now the vast majority of my travel is to meet governments and regulators, as well as to help broadcasters who are in some difficulties and facing political pressure.”

The EBU also actively works with a wide array of international organisations, from the World Broadcasting Union and the United Nations to commercial broadcasters and journalism advocacy groups like Reporters Without Borders. “It’s one of the most fascinating elements of the job for me, but it also shows how organisations want to work together more and more. How do we achieve scale of impact? That’s why we’re talking to each other much more than we did previously.”

In recent years, broadcasters have been widening their impact and saving costs by sharing content, pooling research and development, collaborating on innovative projects, and undertaking co-productions. This collective approach also provides a “much stronger voice when it is a unified voice in Brussels and even nationally,” says Curran.

Asked about the main challenges facing public-service broadcasters today, he cites increasing political pressure and the overwhelming influence of big tech. “We see that right around Europe, and that is problematic and dangerous.”

Regarding big tech, he explains, “we need to work with them, but also we need to find ways of making sure that what they do is not detrimental to public service media and to culture and business in Europe.”

Despite these challenges, public service media maintains a critical advantage: “Trust is our lifeblood. It’s core to what we do. In 91 per cent of European countries, public service media is the most trusted media, and we shouldn’t lose sight of that or be complacent about it. We need to show our independence, impartiality, transparency and be prepared to put our hands up when we make mistakes.

“Infallibility is never a virtue that I have claimed for public service media. We will make mistakes, and it’s important that we own those mistakes and learn from them, but we shouldn’t feel that public service media has fallen off some trust cliff. It hasn’t.”

Even in times of crisis, younger audiences turn to public service media, demonstrating that trust is earned and must never be taken for granted. “If you’re complacent, you lose it, and it’s very hard to get it back when you lose it,” Curran warns.

Leading the way in technology

The EBU’s role in technology development continues to be important in a world where change, particularly in areas like generative AI, is moving at a spectacular pace. It serves as a vital reference point for members, allowing them to understand what others are doing, learn from successes and failures, and find opportunities for collaboration. “A lot of our members feel they’re too small to really have an impact,” Curran says. “So I think all of those things just show how important it is to have a central reference point, and that’s the EBU.”

While the EBU isn’t a regulator, it plays a crucial role in shaping the technological landscape by working on approaches and standards. It also advocates for key issues like prominence and transparency, helping regulators craft effective policies.

The EBU Academy is a great example of this commitment: its School of AI has seen extraordinary success since launching last year, with more than 1,000 individuals completing the programme. “Training is the absolute bedrock of that for us and for all our members,” says Curran. “It’s about shared experiences across the members on a range of different topics.

“EBU Academy is growing in terms of impact and influence, and I fully support that. I see it developing even further in the years ahead.”

Talking of the future, Curran’s current seven-year term as director-general is due to end in 2028. While he stresses he has no plans to step down, he does have aspirations for the future of the organisation.

“I would hope that the EBU will have strengthened public service media by helping the members transform themselves, and by bringing the members together,” he says.

“The EBU has changed fundamentally as an organisation,” Curran continues. “We are much more agile, much more responsive, much more dynamic. We offer a much broader range of services, and that’s down to everybody in the EBU who has instituted that change.”

While Curran says it’s “a bit early for me on the legacy front,” his vision for a stronger, more collaborative, and resilient public service media in Europe is clear.

Broadley Studios uses Brainstorm’s InfinitySet with Unreal Engine to enable clients to film in any virtual world they can imagine

DRIVING GROWTH AND SUSTAINABILITY WITH virtual production

Over the past few years, virtual production has become a common technique, widely used in film, broadcast, and live events worldwide. The industry consensus is that virtual production is here to stay and responds to the profound changes the digital age has driven in content creation. Broadcasters and production companies are aware of the opportunities virtual production brings to improve the carbon footprint, content quality, and flexibility in creation.

With real-time rendering, augmented reality, and LED volume stages now accessible to a broader range of creators, the industry has reached a turning point. No longer confined to big-budget studios, virtual production is now powering everything from independent films and news broadcasts to corporate presentations and branded content. Helping shape this future is Broadley Studios, a London-based virtual production facility known not only for its cutting-edge Brainstorm technology but also for its commitment to sustainable production practices.

Broadley has built a reputation for making high-end virtual production technology both accessible and adaptable. The studio uses Brainstorm’s InfinitySet with the latest Unreal Engine 5.3 in combination with a versatile chroma set to enable clients to film in any virtual world they can imagine without leaving the building.

The increasing popularity of virtual production has enabled companies like Broadley to take on a significant role in the filmmaking and broadcast industry because of the technology’s ability to notably cut time and economic resources. Unlike traditional filmmaking methods, virtual production leverages real-time rendering technologies, LED volume stages, chroma sets, and sophisticated motion capture systems to bring creative visions to life with precision and flexibility, and it can even make them indistinguishable from reality.

“Our virtual production setup, headed by Brainstorm’s InfinitySet, allows us to put together high-quality productions in an insanely short period of time,” says Richard Landy, managing director of Broadley Studios. “Productions like the video clip for Oliver Andrew’s single Saviour used a hybrid approach that allowed us to plan, shoot, and composite the entire video in just two days, with fantastic results.”

At Broadley Studios, this flexibility and precision are delivered through a fast-paced, real-time pipeline that prioritises creativity, efficiency, and environmental responsibility. With an agile approach to virtual production, Broadley offers content creators the chance to work within photorealistic virtual environments without the high carbon footprint of location shoots. Whether enabling remote shoots or powering fast-paced productions like podcasts or live shows, this flexibility supports a diverse array of creative projects.

Elevating the holiday spirit with the Jonas Brothers

This last holiday season, Jimmy Fallon’s Holiday Seasoning Spectacular featured over 10 artists and celebrities, and although the special was primarily filmed in a studio in New York, Broadley Studios was brought in by the showrunners to contribute a key segment to the show with a unique scene featuring the Jonas Brothers, Jimmy Fallon, and LL Cool J.

The shoot required three motion-tracked cameras, each with its own render engine running on Brainstorm’s InfinitySet, which allowed Broadley to produce precise, high-quality footage that would align with the production values of NBC’s festive special. The result was a realistic, immersive scene that matched the show’s festive aesthetic and blended seamlessly with the on-location footage.

Virtual production and podcasting

When Al Arabiya News decided to launch their flagship football podcast, The Dressing Room, they turned to Broadley Studios to create a cutting-edge production that matched the energy and ambition of the show. Hosted by football legends Joe Cole, Wayne Bridge, and Carlton Cole, the weekly series dives into global football stories, blending expert analysis with humour.

At the core of the production’s virtual workflow is Brainstorm’s InfinitySet, enabling a high-end, real-time set-up that delivers stunning visuals on a tight weekly schedule.

Combined with Unreal Engine 5.3, the system runs five motion-tracked cameras and render engines, capturing dynamic multi-angle shots with smooth movement inside a custom-designed virtual environment created specifically for the podcast.

The Dressing Room has launched across major platforms, including Spotify, Apple Podcasts, and Al Arabiya News’ website, where it has quickly gained traction with football fans. The visually striking virtual production has elevated the podcast, setting it apart in a highly competitive market.

How virtual production is transforming content creation

Virtual productions like The Dressing Room, Jimmy Fallon’s Holiday Seasoning Spectacular, and many others not only deliver high-quality results, they also help reduce costs, enhance industry sustainability, and open new creative horizons for filmmakers.

With a variety of local and international clients like NBC Universal, BBC Studios, Al Arabiya, and Novo Nordisk, the team at Broadley Studios has experience in both mixed and fully virtual productions, empowering creatives to choose the best path for each project while maintaining sustainability and budget goals.

As the technology continues to evolve, its impact on the industry is sure to grow, setting new standards for creativity, efficiency, and sustainability in visual storytelling. Broadley Studios, supported by Brainstorm’s virtual production tools, offers a clear example of how innovation can meet responsibility, and how the future of storytelling can be both beautiful and green.

“Virtual production isn’t just transforming how we tell stories; it’s showing that high-quality content can be made faster, greener, and more affordably,” says Landy. “At Broadley Studios, we’re proving that innovation and sustainability can go hand in hand.”

Al Arabiya News' flagship football podcast The Dressing Room
Jimmy Fallon’s Holiday Seasoning Spectacular employed virtual production to film the Jonas Brothers

animation PAST/PRESENT

Walking with Dinosaurs was one of the first TV natural history series to focus on dinosaur life using state-of-the-art visual effects. Following its return to screens, Kevin Hilton looks at how a new selection of dinosaurs were created using modern CG animation techniques and creative sound design

Compared to how long ago prehistoric creatures lived, the 26 years since the BBC natural history documentary series Walking with Dinosaurs was first shown is a relatively short time. But in that intervening period, the visual effects technology that re-created the likes of Tyrannosaurus, Iguanodon, Stegosaurus, and Diplodocus has evolved significantly, and that has allowed today’s VFX to realise a fresh cast of terrible lizards for a rebooted series.

The new series comprises six hour-long episodes, each focusing on not just one dinosaur species but a specific individual that existed in a particular location and how it interacted with its family and the other creatures there. Also part of the narrative is footage of experts unearthing the fossils that helped unlock each story. Backgrounds were filmed on location, with members of the visual effects team from Lola Post on-site so they could get a feel for the surroundings and how they would place their animated creations in them.

“The film crew scouted for locations that had similar vegetation to where the dinosaurs lived, although it is different today,” comments executive VFX supervisor Rob Harvey. “For example, the Cretaceous apparently didn’t have grass, and certain trees and flowers didn’t exist. We found a location that got a lot of the way there and then had to use props of the correct scale for the camera operator to frame on. For a Raptors sequence, one of our supervisors ran around the forest dressed in a blue suit. We’d be shooting empty plates the whole time and then imagining what the creatures would be doing within those, although it was all based on storyboards or previsualisation.”

Reference shot of Spinosaurus

Crafting a prehistoric world

Prehistoric animals are now well-established on screen thanks to the Jurassic Park/World franchise and TV documentaries such as Dinosaurs: The Final Day with David Attenborough (2022), for which Lola Post also provided effects. Because of this, Harvey says, the team had to ensure the creatures in the new Walking with Dinosaurs were as convincing as possible. “There have been many creature shows now and the audience expectation is for quality,” he says. “The trick was trying to do something that worked within the budget and schedule but also looked good and told the stories.”

To do this, the Lola VFX artists used established animation and modelling systems such as Autodesk Maya and SideFX Houdini, as well as Unreal Engine. “I think it was the first time we’d used Unreal Engine at previz and then produced the entire background and environment,” Harvey says. Among the sequences created were the heavily armoured Gastonia fighting the vicious Utahraptors and the immense Spinosaurus in the forests of ancient Morocco. “For the underwater Spino, we transferred the Houdini creature into an Unreal environment and rendered it there. It sped things up and gave us freedom with lighting and atmosphere. It was a real groundbreaker.”

While there is a degree of suspension of disbelief on the part of viewers, who know dinosaurs no longer exist, the creatures still had to fit and move believably in the real filmed locations. This delicate balancing act was carried out during the colour grade, performed at ENVY Post Production, along with the online edit and sound mix. Senior colourist Sonny Sheridan says part of his job was to make the creatures sit in the backgrounds and create a sense of reality. “There was a lot of softening involved and contrast came into play massively because we really needed the dinosaurs to bed into the plates,” he explains. “It was about creating a world for each story. The experts were leading us but for me it was about making the worlds believable, where the dinosaurs sat in them and felt real.”

The images were further finessed during the online edit by editor Adam Grant. “My primary focus was working on the final plates, ensuring any objects that didn’t belong in the time period were removed, stripping it back to prehistoric times,” he says. “In some instances, I was painting in layers such as sand and dirt to build up the environment.”

The sounds of the past

The ongoing study of fossils, particularly those discovered more recently, has given palaeontologists a better idea of what dinosaurs looked like and how they moved. There is still some interpretation and speculation in this, but not as much as for an important element in a TV production: how the creatures sounded. “In the last ten years there have been huge leaps in the knowledge based on re-examining fossils and different ideas about how the dinosaurs might appear and certainly how they sounded,” comments Jonny Crew, sound designer and editor at Wounded Buffalo Sound Studios.

Crew has previous experience creating “dinosaur vocalisations” from his work on the 2022 series Prehistoric Planet. The key to creating noises that approximate how these long-silent creatures might have sounded, he explains, is to use source material as close to them as possible. “Anything that’s a descendant or related to some of the species,” he says. “The wider brief was no mammals, we had to stick to birds and reptiles. But those are much smaller, so you’ve got to use them as a starting point and beef things up to make it sound more impressive. There were some interesting behavioural notes from the palaeontologists, who think that, for example, the Albertosaurus was a social reptile that made [quieter] bonding noises, which was an interesting challenge.”

Based in Bristol, Wounded Buffalo is a specialist in natural history programming and has built up an extensive library of animal sounds, which provided the basis for many of the creatures in the series.

A dinosaur new to Crew, and one that generated some excitement when it was announced it would feature in the series, was the sail-backed, crocodile-snouted, mostly aquatic Spinosaurus.

“The main note from the producers was that this was a crocodilian character, so I used a lot of crocodile and alligator sounds, both for the adult and its babies,” he says. “The story was about the Spino Dad and his brood, and I worked with the producer on getting the right sounds for the kids. I pitched the effects down and then brightened them back up again, but kept the depth by adding lots of bass and resonance to give a sense of scale, because these were pretty big.” Among the plug-ins Crew employed were the Waves Aphex Vintage Aural Exciter and Sonnox Oxford Inflator.

Assistant producer Sam Wigfield pushes a Spinosaurus model through water to create realistic water ripples
A cut-out Spinosaurus snout shape used to give scale
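The pitch-down-and-reinforce approach Crew describes can be sketched in a few lines of NumPy: lower the pitch by resampling, then blend in a low-passed copy to restore weight. This is an illustration of the general idea only, not a recreation of the plug-in chain used on the series.

```python
# Rough sketch of "pitch it down, then add bass for scale". Assumes a mono
# float signal; real sound design uses dedicated pitch-shifters and exciters.
import numpy as np

def pitch_down(signal: np.ndarray, semitones: float) -> np.ndarray:
    """Lower pitch by resampling; the clip also lengthens, as with tape slowdown."""
    factor = 2 ** (semitones / 12)                    # 12 semitones = one octave down
    positions = np.linspace(0, len(signal) - 1, int(len(signal) * factor))
    return np.interp(positions, np.arange(len(signal)), signal)

def add_weight(signal: np.ndarray, sample_rate: int,
               cutoff_hz: float = 120.0, gain: float = 0.6) -> np.ndarray:
    """Blend in a one-pole low-passed copy to reinforce the low end."""
    alpha = 1.0 - np.exp(-2 * np.pi * cutoff_hz / sample_rate)
    low, acc = np.zeros_like(signal), 0.0
    for i, x in enumerate(signal):
        acc += alpha * (x - acc)
        low[i] = acc
    return signal + gain * low
```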

Another plug-in, The Cargo Cult’s Envy, was used on the Foley work by Arran Mahoney, supervising sound editor and re-recording mixer at Mahoney Audio Post.

A Triceratops takes on the T. rex

“It allowed us to extract the performance envelope, including the dynamics, transients and rhythmic flow, of our original Foley and apply it to new textures,” he explains. “For example, we could ‘borrow’ a well-synced pass on wet clay to simulate a muddy surface, and map it onto other materials, such as stone, foliage, and water. We recorded footstep performances using a variety of materials, everything from sand and gravel to thick mud and heavy cloth, which were selected based on the terrain and weight class of each dinosaur.”
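A minimal sketch of that envelope-transfer idea, assuming mono float recordings at the same sample rate: measure the loudness contour of the well-synced pass, flatten the new texture's own contour, and impose the first onto the second. It is a simplification of what a dedicated tool such as Envy does, intended only to show the principle.

```python
# Transfer the amplitude envelope of a synced Foley pass onto a new texture
# (e.g. wet clay onto stone). Illustrative only; names and parameters are assumptions.
import numpy as np

def envelope(signal: np.ndarray, sample_rate: int, window_ms: float = 20.0) -> np.ndarray:
    """Smoothed amplitude envelope: moving average of the rectified signal."""
    win = max(1, int(sample_rate * window_ms / 1000))
    return np.convolve(np.abs(signal), np.ones(win) / win, mode="same")

def transfer_envelope(source: np.ndarray, texture: np.ndarray, sample_rate: int) -> np.ndarray:
    """Shape the texture with the source's dynamics, transients and rhythmic flow."""
    n = min(len(source), len(texture))
    src_env = envelope(source[:n], sample_rate)
    tex_env = envelope(texture[:n], sample_rate) + 1e-8   # avoid divide-by-zero
    return texture[:n] / tex_env * src_env                # flatten, then re-shape
```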

The Foley and Dino vocalisations were mixed into the overall soundtrack by Bob Jackson, senior dubbing mixer at ENVY. This was something of a return to the past for Jackson, who worked on the original Walking with Dinosaurs. “It was a stereo production back then,” he recalls. “This time I mixed the different dinosaur environments in 5.1, consisting of maybe 40 tracks, but did produce a stereo mix-down for editing. We started work in May 2024, with each of the six shows set in a different time period, millions of years apart.”

With no location sound for the dinosaur scenes Jackson used tracks from ENVY’s “enormous” effects library. These he mixed together with the music by Ty Unwin and narration from actor Bertie Carvel on an Avid S6 desk into a Pro Tools digital audio workstation.

“I’d decided quite early on that the dinosaur scenes would be 5.1 and the ones at the dig with the experts would go down to stereo, so the audience would be subconsciously brought back to the present day,” he says. “Ty’s music was constant but we went to some lengths to make the effects come through.”

And come through they do, not perhaps in the big, blockbuster style of the Jurassic Park films, but in a believable way to work with the visual effects that have given life to long-extinct creatures.

SAFEGUARDING TRUST IN modern media

The Sony PXW-Z300 is the world’s first camcorder to support recording of authenticity information in video.

For news in particular, content authenticity is no longer a theoretical concern—it’s an urgent, growing challenge for broadcasters, news organisations, and audiences alike. As synthetic content becomes more sophisticated and widespread, particularly across social media platforms, the industry is grappling with the implications of fake or manipulated imagery entering trusted channels.

Navigating a sea of synthetic content

From deepfakes to AI-generated imagery, the volume of synthetically created content has exploded. While generative technologies can unlock creative potential, they simultaneously pose a critical threat to trust in journalism and media, a trust already partially eroded in the age of social media. Editorial teams are now under pressure not only to assess content for editorial value but to explain to their viewers why it can be trusted, outlining its provenance and the changes it might have gone through. It is a situation that many organisations are tackling head-on.

This new reality has created multiple challenges, as illustrated by many news stories over the past few years. Newsrooms must be able to evaluate the authenticity of incoming and user-generated footage, especially if it is to be featured within their own channels. They need to be prepared to prove the inauthenticity of media falsely attributed to their brand. And, most importantly, they must be able to stand behind the provenance of their own output—especially in an era when misinformation can go viral in seconds.

Erosion of trust: what’s at stake

Audiences rely on trusted media brands to inform them about the world. When that trust is compromised—even by accident—the reputational damage can be significant and lasting. In an environment where misinformation spreads rapidly and often elicits strong emotional responses, media organisations risk losing credibility if they unwittingly publish manipulated footage. Conversely, hesitation to publish while verifying footage can result in competitors breaking news faster, albeit maybe less responsibly.

The stakes are high. In a climate of increasing scepticism and polarisation, audiences are more likely to question what they see, even from reputable sources. Broadcasters are increasingly finding that it’s no longer enough to be accurate—they must prove it. And that means rethinking the relationship between capture and delivery to ensure that what viewers see can be verifiably traced back to the original event.

Industry-wide collaboration

Recognising the scale of the challenge, leading industry players have moved to build consensus around content authenticity. Initiatives like the Coalition for Content Provenance and Authenticity (C2PA), where Sony has been part of the Steering Committee since 2022, are working to create open technical standards that define how authenticity metadata can be captured, stored, and validated across multi-vendor workflows.

These standards represent a major step forward, not just technologically but culturally. The implementation of cryptographically secure metadata—embedded directly into video files—opens the possibility for organisations to build end-to-end verification into their production pipelines. When universally adopted, this would allow content to pass between organisations in a tamper-proof digital “wrapper,” clearly identifying its origin and any modifications.
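As a simplified illustration of that trust model, the sketch below hashes a clip, signs the hash with a key that would in practice live in camera hardware, and verifies it later; any edit or re-encode breaks verification. Real C2PA manifests carry much richer, structured assertions and certificate chains, so treat this purely as a toy example.

```python
# Toy "sign at capture, verify before broadcast" flow using Ed25519.
# Illustrative only; C2PA defines full manifests, not a bare signature like this.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey, Ed25519PublicKey

def sign_clip(path: str, private_key: Ed25519PrivateKey) -> bytes:
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    return private_key.sign(digest)                # stored alongside the clip

def verify_clip(path: str, signature: bytes, public_key: Ed25519PublicKey) -> bool:
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    try:
        public_key.verify(signature, digest)       # any modification invalidates this
        return True
    except InvalidSignature:
        return False

camera_key = Ed25519PrivateKey.generate()          # in practice, protected in-camera
```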


A game-changer for video authentication: the PXW-Z300

The PXW-Z300 camcorder applies a digital signature to each video file at the moment of capture. For broadcasters, this changes the equation. In a fast-moving news cycle, knowing the provenance of a piece of content in this way, from the moment it is shot, means it can be processed, verified, included in the news item, and aired with greater confidence and speed. For legal teams, the verifiable chain-of-custody this signature provides could offer a powerful safeguard against disputes over provenance.

The Z300 thus becomes more than just a tool for recording. Combined with Sony’s connected ecosystem and secure cloud platforms, this camera represents a cornerstone of next-generation media workflows.

The work by Sony and the wider industry is far from done. Video poses additional challenges that still images do not, from larger file sizes and synchronised audio to complex post-production workflows.

But the progress made with the Z300 proves that these hurdles are surmountable.

Sony is now collaborating with broadcasters, technology vendors and standards organisations to implement C2PA-compliant metadata throughout the production and distribution pipeline.

Sony’s role: enabling trust from lens to viewer

Sony has long been a trusted partner to news organisations and media companies around the world. As a member of C2PA and a key participant in IBC’s high-profile Accelerator Project on content provenance, Sony is deeply invested in helping the industry with the tools and infrastructure it needs to adapt.

Sony’s leadership in still-image authenticity, particularly through the Alpha series of cameras, laid the foundation for the work in video verification. These cameras were among the first to support C2PA standards, embedding metadata such as who created the content, when and where it was shot, and what devices were used— all recorded directly in the image file.

Building on this foundation, Sony has now extended authenticity workflows to video with the launch of the PXW-Z300, the world’s first camcorder to embed digital signatures directly into video files, enabling content authentication to address the evolving needs of the content creation industry. This groundbreaking development marks a significant evolution in broadcast journalism and professional video production.

Initiatives such as the IBC Accelerator Stamping Your Content project, which Sony and major broadcasters are part of, are also helping to showcase the industry’s work on content provenance.

Trust in media can no longer be taken for granted. But it can be rebuilt—through technology, transparency, and collaboration. By embedding authenticity into the very fabric of visual storytelling, Sony and its partners are offering a way forward in an uncertain digital age.

At this year’s IBC, Sony invites media professionals to see the PXW-Z300 in action and discover how authenticity can once again become the foundation of public trust. With C2PA-compliant technology now a reality, the journey from camera to screen can become not just a path of content, but a chain of trust.

Come and see the new Z300 on the Sony stand in Hall 13 and learn more about authenticity in the IBC Accelerator Zone at stand 14.A21.

HOW LED PROCESSING IS transforming BROADCAST STUDIOS

In the fast-moving world of broadcast television, standing out is about more than good content: it is about delivering an experience that captivates, feels real, and keeps audiences engaged from start to finish. The evolution from traditional green screens to Extended Reality (xR) and high-performance LED walls has opened a new era for broadcasters, where dynamic visual storytelling meets reliability and cinematic quality.

Betfred's Nifty 50 employs a virtual backdrop to deliver an engaging experience for viewers

Betfred, a family-owned brand with over 50 years in the betting and gaming industry, is committed to “delivering a fair, safe, and enjoyable experience”. While continuing to grow and innovate, the company remains true to its core values of integrity, innovation, and customer satisfaction. As part of a major refurbishment at their UK headquarters in Birchwood, Warrington, the company commissioned a 5m by 2m hi-tech studio for their new Nifty 50 lottery game. The game required a virtual backdrop to deliver an engaging experience for viewers.

The existing Betfred TV production setup, including a traditional studio with a simple grey backdrop, was functional but lacked scale, flexibility, and visual appeal. The aim was to introduce live presenters for the first time, create a more interactive atmosphere, and give the Nifty 50 draw the polished energy of a television broadcast, all while remaining within the footprint of the current space.

To realise this vision, Betfred turned to d&b solutions, a company that provides cutting-edge solutions for live events, corporate clients, and broadcasters. Working closely with Betfred from the initial design phase through to installation and commissioning, d&b solutions provided a complete end-to-end service. The centrepiece of the new studio is a 4.9m-by-1.8m LED wall, installed and calibrated by d&b solutions and driven by Brompton Technology’s Tessera LED video-processing platform. This combination offers both the flexibility to transform the studio environment at a moment’s notice and the image quality required for broadcast.

LED: small space, big impact

Unlike green screens, which require post-production compositing and careful lighting to avoid spill, LED backdrops deliver instant realism. The set looks and feels real to the presenters, which in turn makes their delivery more natural. With xR environments, broadcasters can make a small studio appear like a grand stadium, bustling trading floor, or sweeping cityscape, all without the costs or logistics of physical location shoots. For Betfred TV, this means rapid set changes to match programming needs, greater creative freedom for producers, and a much richer viewing experience for audiences.

Central to the upgrade is the role of Brompton’s Tessera processing, which ensures that the LED wall delivers consistent, flicker-free performance across all camera angles. For live television, reliability is non-negotiable; a single dropped frame or colour mismatch can undermine audience confidence.

The Tessera platform brings precise colour reproduction, synchronisation between LED refresh rates and camera shutters, and the benefits of High Dynamic Range (HDR), producing deep contrast, vibrant colour, and cinematic detail that stand up under studio lighting. This level of performance gives Betfred TV a consistent, high-quality output day after day, even under the demands of live production.
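That synchronisation requirement can be illustrated with some simple timing arithmetic. The sketch below is a toy Python check, under assumed figures, of whether a given frame rate and shutter angle expose a near-whole number of LED refresh cycles; real LED processing also has to account for PWM bit depth, multiplexing and genlock, so this is only an illustration of the underlying relationship, not Brompton's actual method.

```python
# Illustrative only: a simplified look at how camera exposure relates to LED
# refresh, the kind of relationship LED processing and genlock keep aligned.
# All figures and the tolerance are assumptions, not Brompton's method.

def led_cycles_per_exposure(frame_rate_hz, shutter_angle_deg, led_refresh_hz):
    """Number of LED refresh cycles captured during one camera exposure."""
    exposure_s = (shutter_angle_deg / 360.0) / frame_rate_hz
    return exposure_s * led_refresh_hz

def flicker_risk(frame_rate_hz, shutter_angle_deg, led_refresh_hz, tol=0.05):
    cycles = led_cycles_per_exposure(frame_rate_hz, shutter_angle_deg, led_refresh_hz)
    # If the exposure does not span a near-integer number of refresh cycles,
    # successive frames sample different fractions of a cycle and can band.
    return abs(cycles - round(cycles)) > tol, cycles

for fps, shutter, refresh in [(50, 180, 7200), (50, 172.8, 7200), (60, 90, 3840)]:
    risky, cycles = flicker_risk(fps, shutter, refresh)
    print(f"{fps} fps @ {shutter} deg, {refresh} Hz refresh -> "
          f"{cycles:.2f} cycles, {'banding risk' if risky else 'aligned'}")
```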

For Betfred’s presenters, the change has been transformative. Instead of imagining a backdrop and timing their performance to cues, they can now interact with the visual environment as if it were a physical set. This immediacy not only improves delivery but also creates a stronger connection with the audience.

For viewers, the difference is equally striking: the flat, static backgrounds of the past have been replaced with immersive visuals that bring energy, depth, and a heightened sense of occasion to the Nifty 50 draw.

Setting a new standard for studio production

The success of the project lies in the collaborative approach between Betfred, d&b solutions, and Brompton Technology. d&b solutions took responsibility for the complete technical delivery, ensuring that every element of the installation met the requirements of a fast-paced broadcast environment. Brompton Technology, as a trusted supplier, provided the processing expertise that enabled the LED wall to perform flawlessly on camera. The result is a studio that is as robust as it is flexible, capable of supporting both Nifty 50 and any future programming Betfred chooses to produce.

This transformation reflects a broader shift within the industry. Smaller studios are now capable of producing content that rivals big-budget productions, thanks to advances in LED technology and video processing. The combination of efficient space usage, rapid set changes, and broadcast-grade visuals is redefining what is possible, making high-quality production accessible without the overheads of larger facilities.

For Betfred, the investment has delivered more than just a new look. It has created a platform for ongoing innovation, allowing the brand to evolve its in-house broadcasting and explore new formats with confidence. For d&b solutions, the project demonstrates how integration expertise, combined with high-performance technology, can deliver lasting value for clients in the broadcast sector. And for Brompton Technology, it is a showcase of how Tessera processing can enable studios of any size to achieve uncompromising image quality.

With more LED elements planned for its studio, Betfred TV is continuing to build on its commitment to innovation. The upgraded facility has already set a new benchmark for what can be achieved in a compact broadcast space, and its impact is clear in the energy and engagement of the on-air product. In a media landscape where audience attention is harder than ever to capture and keep, Betfred TV’s transformation shows how the right combination of vision, integration expertise, and advanced processing technology can turn a studio into a powerful storytelling tool.

Ultimately, it is about more than just looking good on camera. It is about creating an environment where technology fades into the background, and the story, whether it’s a game draw, a news segment, or an entertainment show, takes centre stage.

And thanks to LED processing, that story has never looked better.

The centrepiece of the new studio is a 4.9m-by-1.8m LED wall

REWRITING THE PLAYBOOK

Steve Reynolds, CEO at Imagine Communications, explains how IP is revolutionising live sport production and playout

In today’s live sport production landscape, SMPTE ST 2110 has become the de facto norm. Nearly every new project, whether a greenfield build or a facility refresh, starts with IP. That said, SDI remains an essential part of various operations, so many organisations are navigating the transition to IP with hybrid deployments—by creating IP “islands” within an otherwise SDI-based environment, or encapsulating legacy SDI workflows within an IP core. This multi-path approach reflects a simple reality: facilities must balance the business-driving flexibility and scalability that IP brings against the practical reality that much of their existing SDI equipment remains adequate for today’s needs.

For facilities with established SDI infrastructures or tight budgets, maintaining the status quo eliminates the need for retraining, replacing control panels, and overhauling existing workflows. But that doesn’t mean media companies should put IP entirely on hold, as gateway technologies allow for incremental modernisation without a full rip-and-replace.

The IP advantage

From a performance standpoint, IP becomes the dominant option once a facility approaches the limits of traditional SDI routing—a 1Kx1K matrix. Even at smaller sizes, perhaps all the way down to 200x200, IP has become the better approach from the standpoints of cost, flexibility, and time to implement.

There’s also a long-term benefit to IP: future readiness. For 3G HD facilities with no plans for UHD or high-density routing, SDI may still be suitable. But if plans include a future move to UHD and 4K workflows, where each stream demands 12Gbps, SDI limitations and workarounds become impractical and costly, making the deployment of a native IP infrastructure economically sound.
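The bandwidth sums behind that argument are easy to reproduce. The sketch below is a rough Python calculation, assuming uncompressed 10-bit 4:2:2 active video and ignoring blanking, audio and ST 2110 packetisation overhead (which is why 12G-SDI is nominally 11.88 Gbps for a UHD signal), showing roughly how many streams fit on common IP trunk speeds.

```python
# Back-of-the-envelope bandwidth sums behind the SDI-vs-IP argument.
# Simplified: uncompressed 4:2:2 10-bit active video only, ignoring blanking,
# audio, ancillary data and ST 2110 packetisation overhead.

def video_bitrate_gbps(width, height, fps, bits_per_pixel=20):
    """Approximate uncompressed bitrate (4:2:2 10-bit is ~20 bits per pixel)."""
    return width * height * fps * bits_per_pixel / 1e9

hd_1080p50 = video_bitrate_gbps(1920, 1080, 50)    # ~2.1 Gbps of active video
uhd_2160p50 = video_bitrate_gbps(3840, 2160, 50)   # ~8.3 Gbps, beyond 3G-SDI

for link_gbps in (25, 100):
    print(f"{link_gbps}GE trunk: ~{int(link_gbps // uhd_2160p50)} uncompressed UHD "
          f"streams or ~{int(link_gbps // hd_1080p50)} HD streams")
```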

Virtualisation and cloud workflows

Another powerful advantage of IP in live sport production is virtualisation. SDI relies on a point-to-point architecture with a cable connecting one device to another. In an IP environment, however, signal and transport are decoupled, with video treated as data packets rather than waveforms. Any signal can be routed anywhere—from on-prem appliances to virtualised environments in a data centre or fully cloud-native platforms.

The ability to extend the virtualised model into the cloud is a gamechanger for remote production. Cameras remain at the venue, but switching, graphics, and replay occur entirely in the cloud, allowing technical directors, graphics teams, and replay operators to work from anywhere via centralised systems, using browsers or remote applications.

These cloud-native advantages are also reshaping playout. Functions like master control, switching sources, aligning audio, inserting graphics, and regionalising feeds can be executed in a virtualised, software-based environment. However, cloud infrastructure comes with ongoing costs, making it less economical for 24/7/365 channels. Where robust IT infrastructure already exists, running core services on-prem—while leveraging the cloud for overflow, disaster recovery, or rapid channel deployment—is often the most cost-effective approach.

Occasional-use scenarios further highlight the value of cloud-based infrastructure. For a short-term event, like a multi-week sport broadcast, spinning up cloud capacity offers better economics than a permanent investment in hardware. The same applies to redundancy: cloud-based disaster recovery provides a scalable, on-demand alternative for business continuity that reduces waste and simplifies maintenance.
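A simple break-even comparison makes the same point. The figures in the sketch below are hypothetical placeholders rather than real vendor pricing, but they show why per-hour cloud capacity wins for short events while amortised on-prem hardware tends to win for always-on channels.

```python
# Hypothetical break-even sketch for cloud versus on-prem playout capacity.
# All rates are illustrative placeholders, not real vendor pricing.

CLOUD_RATE_PER_CHANNEL_HOUR = 12.0     # assumed cloud cost per channel-hour
ONPREM_CAPEX_PER_CHANNEL = 60_000.0    # assumed hardware and integration cost
ONPREM_OPEX_PER_YEAR = 8_000.0         # assumed power, space and support
AMORTISATION_YEARS = 5

def annual_costs(hours_on_air_per_year):
    cloud = hours_on_air_per_year * CLOUD_RATE_PER_CHANNEL_HOUR
    onprem = ONPREM_CAPEX_PER_CHANNEL / AMORTISATION_YEARS + ONPREM_OPEX_PER_YEAR
    return cloud, onprem

# A four-week event, a 12-hours-a-day channel, and a 24/7/365 channel
for hours in (6 * 7 * 4, 12 * 365, 24 * 365):
    cloud, onprem = annual_costs(hours)
    cheaper = "cloud" if cloud < onprem else "on-prem"
    print(f"{hours:>5} h/yr: cloud {cloud:>9,.0f} vs on-prem {onprem:>9,.0f} -> {cheaper}")
```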

Latency: the next frontier

In live sport production, latency gets between the producer and the action. Milliseconds can mean the difference between staying ahead of the moment or falling behind it.

A camera feeding directly into a switcher, then to playout, and out through transmission—all over copper or fibre—represents one of the fastest possible signal paths. It’s a tightly integrated chain that remains difficult to match with cloud-based or virtualised workflows, which introduce latency at various points from contribution to the final mile. This is a problem set that is rapidly being addressed, with technologies such as JPEG XS and other low-latency codecs.
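As a back-of-the-envelope illustration, the latency budgets below compare a tightly coupled SDI chain with a remote, cloud-switched chain. Every figure is an assumption made for the sake of the sum, not a measurement of any specific product or codec implementation.

```python
# Rough, illustrative latency budgets in milliseconds. The individual figures
# are assumptions made for the sake of the sum, not measurements.

sdi_chain = {
    "camera and CCU": 1.0,
    "vision mixer": 1.0,
    "playout and transmission": 2.0,
}

cloud_chain = {
    "camera and JPEG XS encode": 4.0,
    "contribution network": 10.0,
    "cloud switching and graphics": 17.0,
    "JPEG XS decode and distribution": 5.0,
}

for name, chain in (("SDI on-prem", sdi_chain), ("remote/cloud", cloud_chain)):
    print(f"{name}: {sum(chain.values()):.0f} ms end to end")
    for stage, ms in chain.items():
        print(f"  {stage:<34}{ms:>5.1f} ms")
```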

This also points to perhaps the greatest benefit of the move towards IP technologies: the ability for the broadcast and sport production segments to tap into the much larger global IT industry and leverage the investments made into that economy around scalability, security, and performance.

Ready for what comes next

Major rebuilds don’t happen every year, so planning for future requirements should be part of any significant facility transformation. While UHD and HDR might not be on today’s menu, the call could come at any time, and the facility infrastructure must be able to adapt without requiring another costly rip and re-do.

The scalability and flexibility of IP at the core, coupled with the extensibility of virtualised and cloud computing, positions production and playout facilities to be ready for whatever comes next.

MIXING THE MODERN WITH THE ANCIENT

In 7 Wonders of the World with Bettany Hughes, the historian and adventurer goes on a journey across three continents to investigate the world’s first travel bucket list. In a unique collaboration with Snapchat’s AR Studio Paris, viewers can use their phones to scan an on-screen QR code, bringing to life each of the ancient sites in a virtual, immersive experience.

Where did the idea for 7 Wonders of the World come from?

Shula Subramaniam, series producer, SandStone Global: Bettany Hughes dedicated almost 10 years to her book of the same name, and upon its release, we knew instantly it would make a fantastic series. The core idea was to go beyond the familiar list: many people have heard of the 7 Wonders list, but they often don’t know the original sites or that ancient tourists and travellers actually journeyed to visit them, recognising them as the most significant human achievements of their era. Given that most of these Wonders no longer stand and Bettany had already spent years researching and consulting with experts and archaeologists for her book, we saw a clear path to use stunning graphics to accurately illustrate all the detail from the book. This combination of travel, surprising historical revelations, and deep archaeological exploration was the perfect formula for a new series our viewers would absolutely love.

Did you always plan to include the AR experience?

SS: At SandStone, our core mission revolves around connecting with people–delivering authentic and accessible global storytelling to our audiences. With evolving TV viewing habits, particularly the rise of on-demand content and second-screen usage among younger generations, we’re continuously exploring digital platforms to engage viewers while maintaining the high quality of our broadcast projects.

For 7 Wonders specifically, we saw an incredible opportunity to use our detailed graphics to offer an additional layer to the viewing experience. Our collaboration with the Snapchat team was perfectly timed. By combining their cutting-edge AR technology with our compelling storytelling, we could bring these ancient Wonders directly into people’s living rooms in a whole new way. This approach not only deepens engagement for our viewers but also helps us reach a digital-first audience who could come across the experience on Snapchat, ultimately drawing them to the series. Together, our teams wanted to demonstrate how AR can enhance traditional media and enrich cultural content.

Scan the QR code to watch a behind-the-scenes video of the AR in action

AR version of the Hanging Gardens of Babylon

How does the AR experience work alongside the TV series?

Antoine Gilbert, Snap Paris AR Studio senior manager: The beauty of AR is that it allows you to bring a new layer of creativity to traditional experiences. AR transforms how we learn about the world today, from cultural experiences to art and history, making them more engaging and immersive. This is especially true for 7 Wonders of the World with Bettany Hughes. It’s really easy to access: viewers are prompted during each episode to explore an experience that brings the 7 Wonders to life in a beautiful virtual form. For example, the Pyramid of Giza experience kicks off with a map slowly expanding on the floor as it reveals the location of the Wonder and its surroundings in Egypt, including the Mediterranean Sea and the Nile. Once the map is fully revealed, a golden pyramid will start to form with mysterious sounds and visuals of a structure being physically created. Once it is fully formed, people see the famous British landmark, Big Ben, alongside the Pyramid so they can get a sense of the scale of this incredible site when compared to a popular landmark. The experience ends with a small pop-up, which reveals the name of the Wonder and the era it was built in.

Can you explain the process behind recreating the Wonders?

AG: Let me start by explaining the mission of our AR Studio Paris. In 2021, we opened our AR studio to raise awareness of the potential of augmented reality and to show how it can impact cultural, entertainment, and education sectors. Since then, our AR Studio Paris has worked on some of the most incredible AR experiences, from partnering with Daft Punk for the launch of its new album, to working with the Louvre and National Portrait Gallery to enhance the museum experience. AR is transforming how we experience culture.

Bettany Hughes on location

Here is where it gets a bit technical. For this series, our talented team of developers and 3D, visual, and concept artists worked closely with SandStone to develop the 3D model of the 7 Wonders for the AR experience from the visual effects they had created with Flow Postproduction for the series. Once the 3D model was created, we developed visual and sound effects to help create a compelling virtual experience. This was a fun discovery process as we played around with a few effects, working closely with SandStone to make sure the look and feel of the AR experience complemented the series. The AR experience was developed on our software called Lens Studio, which lets anyone easily develop an AR experience. Thousands of developers use it to create AR experiences and lenses for Snapchat and our AR glasses (Spectacles).

Our second challenge was more technical. It’s easier to develop 3D models for TV than in AR, as there are some technology challenges between TV and mobile which our team had to work through to ensure the 3D models shown in the series on TV were as accurate as possible in AR. While these challenges stretched the thinking of our AR Studio team, we were able to work through it successfully to create an AR experience that truly reflects the beauty of the 7 Wonders.

What were your biggest achievements?

AG: It’s been great to partner with a historian like Bettany Hughes to bring her research to life in AR. Her expertise and insights on the Wonders really helped us bring these historical discoveries to life for millions of people who were able to travel virtually to moments in time to explore, learn, and see the Wonders.

How faithful a representation is each end result—was there any element of ‘guesswork’ involved?

SS: Accuracy was paramount for us, and the detective story of what these Wonders actually looked like is the backbone narrative of the series. Bettany spent a decade on her book, so we had a wealth of research to hand, but it was crucial for us to visit all the sites and get access to the archaeological teams and experts who dedicate their time to understanding these Wonders. We meticulously drew upon every available piece of evidence: ancient texts, archaeological findings, and the latest scholarly consensus, and fed this into every iteration of the models our graphics team, Flow Postproduction, produced.

While some elements require informed academic reconstruction due to the passage of time, it’s not guesswork but careful interpretation of fragments and evidence, and we’re really proud of the results. What often pops up online if you just search these Wonders can be quite different from the historical truth, so we’re thrilled to be putting our Wonders out there.

What were the biggest challenges you faced throughout the production?

AG: The beauty of the 7 Wonders is that they are truly breathtaking sights with so much rich history. Imagine trying to bring the beauty of these Wonders to life in an immersive experience—we wanted to ensure that the experience we created would help people understand the scale, the detail of each Wonder and create the feeling of excitement and awe that would be felt from seeing them in real life. This is where the magic of AR comes in, as it’s a powerful creative tool to bring moments to life.

Do you expect to update the experience in the future?

AG: It’s too early to say what we might add to the experience in the future; however, as the pace of technology continues to accelerate, particularly with artificial intelligence, we’re able to supercharge our technical capability to create more quality AR experiences.

What did you learn about augmented reality while working on the project?

AG: We are grateful for the opportunity to work on this project. It’s another great example of how AR can completely transform how we experience and learn about the world around us. From helping people travel to Egypt’s Old Kingdom to check out the Giza Pyramids to seeing the beautiful Hanging Gardens of Babylon, millions of people around the world will be able to experience the 7 Wonders from their homes.

7 Wonders of the World with Bettany Hughes is available to stream on 5
Antoine Gilbert
Statue of Zeus at Olympia

LEADING THE MEDIA REVOLUTION

Grass Valley returns to IBC this September with major new innovations covering every stage of the media production workflow. The company will demonstrate end-to-end solutions that empower media organisations around the world, setting the pace for industry transformation.

"For Grass Valley, IBC is more than a trade show: as a global leader, it’s an annual opportunity to engage with customers and partners, notably across EMEA, and showcase the latest innovations in the GV Media Universe (GVMU),” says Grass Valley CEO Jon Wilson. “It’s a key event in our calendar, and we’re excited to share our continuing growth story in Amsterdam.”

Grass Valley arrives in Amsterdam on the back of impressive organic growth, including strong year-over-year gains in bookings and revenue across its product portfolio.

The company’s commitment to innovation is helping to drive this momentum, alongside the rapid adoption of its AMPP ecosystem (with bookings up over 120 per cent YoY), and the continued success of core products such as the newly updated Karrera and K-Frame VXP production switchers, and the refreshed LDX 100 series camera range, including the new LDX 180 Super 35 system camera.

“Our strong organic growth is setting the pace for industry transformation, driven by continuous innovation in our GVMU hardware and software offerings, powered by AMPP. Stand 9.A01 will show how our open-platform approach—together with our expanding partner ecosystem—is leading a media revolution,” adds Wilson.

LDX 180 camera

Visitors will be able to see the company’s LDX 180 camera, first launched in April and making its European debut at IBC2025.

Boasting an in-house-developed 10K Super 35 Xenios imager, the new camera combines true cinematic depth-of-field with the speed and precision required for live production, giving media companies the ability to redefine storytelling with native UHD output in unparalleled quality.

“The LDX 180 is built for premium live production where cinematic storytelling meets broadcast speed, delivering unmatched UHD quality for high-end sport, entertainment, concerts, and studio shows. Built on the LDX 100 platform, it integrates natively with the wider camera chain, for a shared shading solution, and consistent look and feel no matter the shot–giving directors creative freedom and greater emotional depth.”

The company is also unveiling the new LDX C180 compact version of the camera at IBC2025. Designed for Steadicam and PTZ operation, the C180 empowers content creators to achieve new levels of dynamic and dramatic camera angles to elevate their visual storytelling at the speed of live. Featuring the same in-house-developed 10K Super 35 Xenios imager as its flagship sibling LDX 180, the new compact offering captures true cinematic depth-of-field while being purpose-built for the fast-paced realities of live production environments.

Elsewhere, Grass Valley will be showcasing the K-Frame VXP production switcher and Karrera V2 control panels. The agile, virtualised systems are designed for production teams working in space- and budget-conscious environments and have become the trusted foundation for hybrid and software-based operations, transforming the way many forward-thinking companies work.

Also set to make their debut at IBC2025 are the company's newest networking solutions, which combine high-density FPGA processing with the flexibility of virtualisation and software-defined workflows across hybrid compute platforms—all managed through a single control layer.

All of the solutions integrate seamlessly into SDI, IP, cloud, or hybrid setups via standard formats, flexible APIs, and GV Alliance partner integrations, allowing customers to adopt at their own pace, extend asset life, and unify operations under AMPP without disruption.

GVMU’s ability to orchestrate the entire production chain, from first cue to final cut, will be demonstrated by major new developments in integrated replay and end-to-end production automation. Alongside upgrades to the LiveTouch X replay system and the Framelight X ingest and content management platform, Grass Valley will showcase how the core capabilities of AMPP OS are providing tightly integrated workflows that reduce effort, increase flexibility, and deliver measurable efficiency.

The company will also showcase its instantly deployable playout and FAST offerings that provide media organisations with the tools to launch, adapt, and scale with ease. These solutions are developed in close collaboration with key GV Alliance partners.

GV Forum

Once again, Grass Valley will be hosting its annual GV Forum, which takes place on Thursday, 11th September, at the Rosarium, Amstelpark. Wilson will be joined by the company’s leadership team to provide insights on strategic industry perspectives and updates on the latest innovations in software-defined technology. “We’ll be sharing our business outlook, unveiling new product innovations, and offering insights and approaches to the leading technology trends shaping the industry. Attendees will also hear from our customers—including members of the GVx Council—on how they are adapting to the realities of hybrid production.”

Among the topics expected to be covered is the issue of sustainability, something Wilson says Grass Valley is hearing a lot about from its customers. “Through GVMU, we’re delivering agile, sustainable, cost-effective solutions that help customers transform with confidence.

“IBC continues to be a fantastic opportunity to connect face-to-face with our customers and peers from across the globe. It’s a chance to share our latest innovations and collaborate first-hand on how media companies can evolve their operations in shaping the future of agile, sustainable, and hybrid production.”

Visit Grass Valley at stand 9.A01

“Our strong organic growth is setting the pace for industry transformation, driven by continuous innovation in our GVMU hardware and software offerings, powered by AMPP”
Jon Wilson
GV Forum

Jetset is a compact, mobile-based virtual production toolkit

PUTTING VIRTUAL PRODUCTION IN every CREATOR’S POCKET

Virtual production has long been the preserve of creators with blockbuster budgets and is often seen as inaccessible to indie filmmakers. Helen Dugdale meets Lightcraft, a forward-thinking tech firm changing the narrative

By leveraging the power of mobile devices and intelligent software, Lightcraft is making Hollywood-grade virtual production tools available from the phone in our back pockets. The company’s mobile filmmaking tool, Jetset, was born out of first-hand experience and frustration, after Eliot Mack, founder and CEO of Lightcraft, made a short film and realised just how inaccessible quality visual effects were for independent creators.

“I started Lightcraft to make the ‘missing pieces’ I wanted for myself after I directed a short film,” he explains. “I couldn’t believe how hard it was just to make something that was bad. I wanted a tool that let me explore creative possibilities in film the way I was used to when designing robots.”

With a background in engineering and a deep understanding of maths, Mack developed a tool that allows small teams or solo makers to create without the need for big production crews. Jetset is a compact, mobile-based virtual production toolkit that brings advanced visual workflows into the hands of anyone with an iPhone or iPad.

The app is a welcome product that tackles a longstanding gap in virtual production: the lack of affordable and seamless integration between camera tracking and post production.

“The key insight from the beginning of Lightcraft was that you could build a good 3D virtual set once, and then use it for dozens, even hundreds, of shots, assuming you could track the camera very accurately and bring that data into post production,” Mack adds. “We did exactly that with our Emmy-winning Previzion and Halide systems, but the hardware cost was six figures.”

But once the team realised iPhones could track well, they began to build a pipeline that automated the handoff between production and post. This made it possible to create a low-cost but very powerful product that was within the reach of small teams and individual users.
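The underlying idea is easy to show in miniature: if every frame carries an accurate camera pose and lens intrinsics, a virtual element can be projected into the plate and will stay locked to the live action. The sketch below is a generic pinhole-camera illustration in Python, with made-up pose and lens values; it is not Lightcraft's code or pipeline.

```python
# A generic pinhole-camera illustration, not Lightcraft's code: given a tracked
# camera pose and lens intrinsics for a frame, project a virtual 3D point into
# pixel coordinates so a CG element can be composited in the right place.
import numpy as np

def project(point_world, cam_to_world, fx, fy, cx, cy):
    """Project a world-space point into pixel coordinates for one frame."""
    world_to_cam = np.linalg.inv(cam_to_world)            # invert the tracked pose
    p_cam = world_to_cam @ np.append(point_world, 1.0)    # move into camera space
    x, y, z = p_cam[:3]
    if z <= 0:
        return None                                       # behind the camera
    return (fx * x / z + cx, fy * y / z + cy)

# One frame's tracked pose: camera 2 m back from the origin, looking down +Z
# (OpenCV-style convention), with made-up intrinsics for a 1080p frame.
pose = np.eye(4)
pose[2, 3] = -2.0
virtual_prop = np.array([0.1, 0.0, 0.0])   # a CG object 10 cm off-centre

print(project(virtual_prop, pose, fx=1600, fy=1600, cx=960, cy=540))
```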

A new era for Lightcraft

Founded in 2004, Lightcraft initially emerged from a startup incubator led by Avid Media Composer creator Bill Warner, who would later become a key investor and co-founder. The early years focused on hardware, influenced by Mack’s time spent experimenting in robotics. But by 2018, the team realised that hardware alone wasn’t going to be enough to scale the business, so Warner helped the company rethink its next steps.

“We briefly experimented with remote rendering, using an external workstation to render the on-set images, but the lag and connection problems soon convinced us that we wanted to run the entire virtual production process completely inside the iPhone,” reveals Mack.

Today, Jetset relies on Apple’s ARKit for camera tracking, uses onboard LiDAR for mapping environments, and incorporates machine learning through Apple’s Vision Framework for real-time image segmentation and AI matte generation.

Jetset brings accessibility to virtual production and is turning heads amongst indie and YouTube creators.

“It’s the big break that indie creators have been looking for,” Mack said. “It’s the ‘ILM in a box’ I always wanted for myself. It lets creatives make the movie they want to make, instead of waiting around for their ‘turn’. Jetset automates so much of the tedious, difficult 'grunt work' of visual effects and lets artists focus on the parts that end up on screen.”

The app automates complex tasks such as matte generation and compositing, allowing creators to focus on their stories, not on tech hurdles. Now, with a broad user base, the Lightcraft team has designed multiple tiers of Jetset: Jetset Free, a no-cost option with full pipeline integration; Jetset Pro, which adds 3D scanning, remote operation, and pro-level features; and Jetset Cine, designed for cinema cameras, enabling real-time compositing and post-processing automation.

While Mack admits he developed Jetset for solo creators, it’s also powerful enough for experienced professionals. “The most difficult part of these tasks is the integration. We solve this for both beginners and professionals the same way by figuring out the best way to go about creating projects and doing a deep integration to make that work as smoothly as possible. The professionals want more control, so we handle that by building interfaces with the tools that they know and need, like Blender, Maya, SynthEyes, Fusion, and Nuke.”

The versatility of the tech has led to it being used by some of the major industry players, such as Amazon. While Jetset isn’t specifically aimed at broadcast crews, its capabilities have certainly sparked interest across the production world. One example is a father-son team who used Jetset to create Entrenched, a Star Wars-inspired fan film. After showcasing the trailer at industry conference AI On the Lot, the duo began collaborating with established directors.

Competing in an evolving landscape

Jetset is carving out a unique niche. While traditional tracking systems like Vive Mars or Stype offer robust solutions, they often fall short when it comes to ease of integration with post-production workflows. Jetset fills that void by offering an all-in-one solution.

The company is also preparing to debut a new collaborative storytelling platform, designed to work seamlessly with its existing technology and promising a lightweight, team-oriented approach to production.

While broadcast workflows still rely on dedicated hardware, mobile devices are becoming more important for visual storytelling.

Mack believes that mobile-first workflows will shape the next wave of filmmaking, making it cheaper, more flexible, and more creatively empowering than traditional methods. But as virtual production becomes more accessible, it’s also becoming more misunderstood. The Lightcraft founder warns aspiring tech lovers not to get distracted by flashy trends.

“The media is frequently very focused on the use of extremely expensive technologies like LED walls. These can be great, but they come with many limitations, from the expense to the need to have the 3D elements at a production state on the shooting day. The core ‘maths’ of virtual production is simply the integration of live action with virtual elements, with real-time feedback. You can achieve this at a radically lower cost.”

Instead, Mack suggests creators should start small, think big, and choose tools that can grow with their creative ambitions.

PICTURED ABOVE: Jetset is described as ‘ILM in a box’

Lightcraft’s ambitions are only just getting started. With immersive media on the rise and the boundaries between digital and physical worlds continuing to blur, the company is well on the way to leading the charge in the next wave of innovation in production tech. And with a collaborative platform set to launch soon, the company is on a serious mission to help make professional filmmaking tools accessible to every creator, no matter their budget.

“The new tool is designed to work hand in hand with Jetset to enable a team to go from concept to delivery while working in the fast, lightweight manner that our users have come to expect from us,” concludes Mack.

KEEPING IT REAL IN a galaxy far, far away

Just because it stars a 7-foot-tall, modified KX-series security droid with a dry sense of humour doesn’t mean that series two of Andor isn’t grounded in realism. In part two of Kevin Emmott’s interview with production sound mixer Nadine Richardson, she explains why it’s all about keeping it real

Despite being set in a fictional universe across a variety of alien planets in a galaxy far, far away, Star Wars is the real deal. Boasting physical set pieces and working props, the franchise actively pursues realism as an aesthetic, and its interweaving stories about farmers, thieves, and galactic emperors are mired in the dust and the dirt of everyday life. With a cinematic look that is entirely faithful to the wider Star Wars universe, the Emmy Award-nominated TV series Andor acts as a precursor to the events of Rogue One: A Star Wars Story and not only shares the same production standards as its big-screen sequel, but also its use of mechanical, physical and everyday equipment. This meant that the ships and props on Andor all had working buttons, lights, and switches, with the wider environments also as real as possible. To create the agricultural planet of Mina-Rau, the production team even convinced a local farmer to grow an ancient variety of grain in their fields located close to the studio where the show is filmed.

For an experienced sound engineer like Nadine Richardson, this attention to detail is a dream, and having spent nine months as the production sound mixer on season two of Andor, she instinctively knows that all of these details add value that she could use to layer on even more ambience.

Field of dreams

“What’s lovely about Star Wars is there are all these different planets to travel to, and each time it’s a completely different set and a completely different feel,” she says. “It makes recceing the production sets very important, even if you’re just on set at Pinewood Studios. You have to work closely with the art department to really get an idea of what the sets entail, see how it’s been planned out and built; it’s vital to assess whether there are going to be any sort of issues for sound capture.”

Outside of the studio, it’s even more important. The aforementioned location set for the Mina-Rau planet was in Watlington, Oxfordshire, where the fields of rye planted specifically for the series took a full year to grow and were used to illustrate huge expanses of space on the agricultural planet.

“The recce helps you to work out where you might be placed as a sound team and where the director and DoP might be thinking of shooting; it all affects the logistics and the practicalities of sound capture, like cable lengths and how far away from the set you will be. For these scenes, for example, we were about 200m away from the set, so I knew I needed to have fibre way in advance. These have very practical implications for how you make things work.”

Sound of the switches

Interior shots are no easier to plan, and capturing the audio inside Andor’s meticulously designed ships enabled Richardson to record not just the dialogue but the mechanical nature of the environment to add more realism to the production.

“The ships get completely locked down during filming and so the actors need to hear comms on a number of levels,” she explains. “On a practical level, they need directorial comms to be able to speak to the director or the first AD because they can’t hear anything from outside the ship, but we also provided comms that drive the story forward with hidden speakers fed by cast members with handheld mics, which we fed into the ship.

“Because the set design is so detailed, with lots of mechanical switches and otherworldly quarter-inch sockets, we try and grab the sound of the environment as well. Our first priority is always to capture clean dialogue, but otherwise we grab all the sound that we can because it adds to the overall ambience of the scene. While the audio post teams at Skywalker Sound will have libraries full of sounds, everything we can give them from set leads to a more authentic ambience.”

Pre-post reality

Capturing as much ambience as possible is important to Richardson, who has always looked to deliver a product as close to the finished article as possible throughout her 17-year career in film sound. On set, Richardson’s aim with the dailies is always to recreate the environment accurately and ensure the dialogue is crystal clear.

“That means trying to quieten down any extraneous noises and sounds,” she says. “Meanwhile, the ISOs (isolated tracks) that I am recording on each microphone are of utmost importance, so getting good gain structure with nothing clipping is really important, because that is what the dubbing mixer is going to work with later down the line.

“My process is to send rushes on a daily basis, which include everything we have shot during the day as well as a sound report that details all the information I have about the shoot.

"Although Skywalker Sound does the post sound

mix, we still have conversations about what we have shot and discuss any issues that need attention.”

Real even when it’s not

We previously discussed how Richardson worked with the production crew to augment physical environments with real voices in real time, but it is impossible to avoid visual FX entirely on a production as complex as Andor.

Having worked on blockbusters like Doctor Strange in the Multiverse of Madness and The Marvels, Richardson is no stranger to visual effects, but even their use on Andor was pragmatic.

“Set immediately before Rogue One, everything had to have the same look and feel to do justice to the story arc, but it also meant it shared much of the same cast,” she says. “It meant that Alan Tudyk was on set wearing a full motion-capture suit as the robot K2SO, and to get to an appropriate height for the other cast members, he did it all while wearing stilts. It meant he could walk around on set to interact with the cast, and we could still capture his lines of dialogue as K2SO.”

Now working on season two of Ahsoka for Disney, Richardson is still adding to the Star Wars legend and ensuring that it sounds just as authentic. She says she’s grateful for the importance that audio is given across the properties.

“The sound of the show is a hugely important element and Andor was one of those shows that allowed us to get a lot of sound,” she adds.

“What I love about these jobs is that everybody collaborates together to make it work. In this age of progressive VFX, they really want everything to sound realistic, and that was so joyful. It makes it a really interesting job to do and a joy to work on.”

To read the first part of our interview with Nadine, scan this QR code
Alan Tudyk was on set wearing a full motion capture suit as robot K2SO
PICTURED ABOVE: Nadine Richardson (right) on set

REFRAMING VIRTUAL PRODUCTION’S ROLE IN 2025

Over the last five years, virtual production has rapidly evolved from a bold experiment into a credible, scalable part of modern storytelling. But as we move through 2025, we’re hitting a pivotal point. Virtual production is no longer the new kid on the block, but it’s still not fully understood. The technology has matured, but workflows, talent pipelines, and industry-wide education are still catching up. One big question lingers: where does virtual production truly sit within a production team, and who leads the charge?

A general understanding of “The Art of the Possible” is still missing from many traditional departments. We know from our Starting Pixel community of more than 1,200 virtual production professionals that there remains a widespread lack of awareness and confidence in virtual production’s role in the traditional production hierarchy. Questions around responsibility, budget implications, and potential for failure are still prevalent, particularly among those who’ve not yet worked with the process directly.

If left unanswered, these questions become blockers. Quite rightly, if producers or creatives can’t see clear, predictable outcomes, they’ll default to more familiar workflows, and virtual production will remain on the sidelines despite its potential. So much of virtual production’s success comes from the pre-planning process, and overcoming adoption barriers is the single most important hurdle it currently faces.

If we overcome that hurdle, there is a lot to be excited about. Our community is telling us that virtual production is growing up, and so are its storytellers. A new “VP native” generation is developing, telling stories that lean into the tech rather than simply using it as a production tool that replaces locations or set builds. Virtual production supervisors are starting to be appreciated and considered as heads of department, so that expectations can be fully set and explored. That shift is essential. When virtual production leads are involved from the start, expectations can be properly scoped, creative ambition can be aligned with real-time capabilities, and production planning becomes proactive rather than reactive.

The next challenge is that senior virtual production supervisors are difficult to train. The role is as much about soft skills: discussing and advising on what is and isn’t possible in pre-production and on set with directors and cinematographers, while managing the technicians responsible for the imagery. That requires clear communication and a certain amount of clout on a busy set. These experts must be able to translate creative visions into technical execution while also knowing when to recommend traditional production methods. This nuanced understanding of both the technical and creative sides of filmmaking is crucial to the technology’s success and broad adoption. As virtual production becomes more common, the demand for professionals who can navigate both worlds will only grow.

Demystifying virtual production is the only way to help decision-makers feel more comfortable with what to expect, and what can be achieved with the technology. At Starting Pixel, we’re aiming to share as much of the community’s knowledge as possible with decision-makers across the industry: we’ll be hosting a virtual production jam session at IBC2025 to showcase the latest thinking and applications, and our third Starting Pixel Live event, taking place in London in October, is going to be bigger and better than ever. We’re spreading the word through our channels, and we’re lucky to have an incredibly positive, passionate, and proactive community that wants to see the technology thrive.

So what does the future hold for virtual production? Less a single big leap, more a series of small steps. Producers and creatives across film, HETV, advertising, brand comms, and social media will get to understand its creative capabilities and find the sweet spot for their productions. Standards will start to emerge, and virtual production supervisors will increasingly be recognised as heads of department for the value they bring. The revolution won’t be flashy, but it will be foundational. Get this right and virtual production won’t just support the storytelling of the future, it will help shape it.

“Demystifying virtual production is the only way to help decision-makers feel more comfortable with what to expect, and what can be achieved with the technology”

A NEW ERA OF AUTOMATED GRAPHICS PRODUCTION

Billed as the broadcast industry’s ‘first agentic and multimodal AI platform’ for graphics production, Highfield AI is intended to reduce time-consuming editorial tasks, writes David Davies

There has been a steady stream of notable artificial intelligence product launches throughout 2025, but the recent announcement by Highfield AI of an eponymously-named graphics production platform feels especially significant in terms of its potential to streamline everyday workflows. Through direct integration with newsroom and graphics systems, the platform can take on repetitive, time-consuming tasks, such as populating story templates and sourcing visual assets, thereby allowing journalists to spend more time on storytelling and other key tasks.

With AI being such a fast-moving area of technology, clarity of terminology is especially important. Invited to unpick Highfield AI’s description of itself as “the broadcast industry’s first agentic and multimodal AI platform for automating graphics production”, co-founder Ofir Benovici replies that it is “built around the concept of agentic and multimodal AI–meaning it’s not just a tool, but a system of intelligent agents that can reason, decide and act across various data types. In short, we created a set of AI agents that are tuned specifically for the production needs of media.

The Highfield AI team

‘Agentic’ refers to AI agents that can autonomously handle tasks, make decisions and collaborate.

‘Multimodal’ means the system understands and processes different types of inputs–text, images, video and structured data–making it ideally suited for the rich, fast-paced environment of media production.”

And it is specifically the repetitive aspects of graphics production that Highfield AI is targeting with its platform: “Highfield AI dramatically streamlines the graphics creation process, focusing specifically on the tedious process of populating graphics templates with content,” states Benovici. “Currently, this process is manually handled by journalists or graphics operators, or outsourced to external companies. Our solution is to extract the story from the newsroom system, choose the right templates, match media assets, and generate fully assembled graphics ready for editorial review. It’s important to mention that our solution is not a generative AI model, meaning we will not create content that is not there, but we will help broadcasters monetise their own content.”
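As a thought experiment, the sequence Benovici describes (extract the story, choose a template, match media assets, assemble a graphic for editorial review) can be sketched as a few plain functions. Everything in the snippet below, from the data structures to the template names, is hypothetical and for illustration only; it is not Highfield AI's API or model.

```python
# A toy sketch of the flow described above: take a story, pick a template,
# attach media and queue the result for editorial approval. Every name and
# structure here is hypothetical; this is not Highfield AI's API or model.
from dataclasses import dataclass

@dataclass
class Story:
    slug: str
    headline: str
    tags: list
    body: str

@dataclass
class Graphic:
    template: str
    fields: dict
    media: list
    status: str = "awaiting editorial review"

TEMPLATES = {                      # hypothetical template catalogue keyed by tag
    "markets": "lower_third_market_update",
    "weather": "fullscreen_weather_map",
    "politics": "videowall_candidate_profile",
}

def choose_template(story):
    for tag in story.tags:
        if tag in TEMPLATES:
            return TEMPLATES[tag]
    return "generic_lower_third"

def match_media(story, asset_index):
    # Naive keyword match standing in for multimodal asset search.
    return [path for tag in story.tags for path in asset_index.get(tag, [])]

def assemble(story, asset_index):
    return Graphic(template=choose_template(story),
                   fields={"headline": story.headline, "summary": story.body[:120]},
                   media=match_media(story, asset_index))

story = Story("ENERGY-PRICES", "Energy prices climb again", ["markets"],
              "Wholesale gas prices rose for the third week in a row...")
print(assemble(story, {"markets": ["assets/price_chart.png"]}))
```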

Of the cited potential 75 per cent reduction in manual work related to graphics production, Benovici elaborates: “In discussion with our customers, the average time it would take to populate a template with content is around 30 minutes, which includes selecting the appropriate template, copying and pasting information, searching for media, formatting, approvals, and so on. The 75 per cent efficiency improvement comes from automating all of those steps and reducing them to seconds, and gives journalists various options to approve before going on air. This adds up quickly in daily workflows with dozens or hundreds of graphics, and the result is much higher efficiency in news but, equally important, better and more tuned content delivery.”
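The arithmetic behind that claim is simple enough to write down. In the sketch below the per-graphic times are assumptions chosen to match the figures quoted in the article, with an added allowance for editorial review.

```python
# The quoted saving as plain arithmetic. The per-graphic times are assumptions
# chosen to match the figures in the article, plus an allowance for review.
manual_min = 30.0          # quoted average to populate a template by hand
automated_min = 0.5 + 7.0  # assumed: ~30 s generation plus ~7 min editorial review

for graphics_per_day in (12, 60, 120):
    manual_h = graphics_per_day * manual_min / 60
    auto_h = graphics_per_day * automated_min / 60
    saving = 100 * (1 - auto_h / manual_h)
    print(f"{graphics_per_day:>3}/day: {manual_h:.0f} h manual vs "
          f"{auto_h:.1f} h automated ({saving:.0f}% less)")
```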

Interaction with broadcasters has been a major element of the developmental process, and is still ongoing. “When designing the solution we interviewed and brainstormed with editorial, graphics and engineering teams from key broadcasters to understand their pain points and workflow intricacies. Early on, we thought the value was primarily in time savings, but user feedback showed just how critical it is to also preserve editorial control and context. This shaped our agent design–they assist, but always keep the journalist or producer in the driver’s seat.”

AI streamlines the graphics creation process

This collaborative ethos has also ensured the platform has hit the ground running, with integrations already confirmed with leading graphics and newsroom solutions from Vizrt, Unreal Engine, CGI OpenMedia, Avid iNews, ENPS, and Saga.

Core editorial canvas

Benovici cites a number of scenarios where Highfield AI could make a major difference to graphics production, perhaps the most intriguing being its potential to optimise the use of studio video walls.

He explains: “In many 24/7 newsrooms, large studio video walls are high-potential storytelling tools but often go underused. Why? Because creating custom visuals for these screens–such as dynamic explainers, data dashboards or contextual backdrops–is time-consuming and requires a dedicated design team.

As a result, they’re often limited to generic loops or reused graphics. With Highfield AI, producers can now generate tailored, high-quality visuals for video walls in real time as part of the editorial workflow.”

Examples include AI pulling live market data and generating a multi-part visual during a segment on rising energy prices, including a price chart, a map depicting affected regions, and a comment from a government source, all formatted for the video wall’s resolution. Or a broadcaster might air a health bulletin in which the system builds a dynamic explainer showing the spread of a virus over time, employing animated maps and statistic panels that update automatically with new data. For political coverage, it could create interactive backgrounds with candidate profiles, polling trends and timelines, allowing presenters to provide “deeper narratives” without additional preparation time.

“What used to take a full day of design work can now be produced in minutes, directly from the editorial script, enabling the newsroom to visually enhance more stories throughout the day–even ones that previously wouldn’t have justified the effort. This transforms the video wall from a static background to a core editorial canvas, elevating the viewer experience with context-rich, real-time storytelling.”

Freeing human creativity

ABOVE AND LEFT: Integrations have already been confirmed with Vizrt, Unreal Engine, CGI OpenMedia, Avid iNews, ENPS and Saga

In terms of getting journalists up to speed with the platform, Benovici says: “When designing our solution, it was critical for us not to disrupt the current workflow. Therefore, training is minimal because the system integrates into existing workflows–NRCS, graphics systems, and media management. Users interact with Highfield AI through tools they already know. We do offer onboarding and optimisation sessions for editorial and tech teams to get the most out of the platform.”

Whilst it’s not difficult to see how the Highfield AI platform might be applied to other aspects of broadcast production, it seems that the focus will remain on graphics for the foreseeable future. “Our current plans for both the short and mid-term are focusing on graphics. We want to make sure that we deliver excellence, and that requires focus. We have big plans to continue evolving our graphics solution, further enhancing the value we deliver to our customers,” says Benovici.

“That said, our platform architecture is modularly designed so we can further expand our solution and automate other areas in the media production chain that are repetitive or time-consuming tasks handled by agents, freeing human creativity for journalism.”

“It’s important to mention that our solution is not a generative AI model, meaning we will not create content that is not there, but we will help broadcasters monetise their own content” OFIR BENOVICI

Should broadcast be categorised as CNI?

As the UK tests its disaster alert system, Matthew Corrigan wonders if it’s time for governments to recognise the importance of broadcast media

Turning on the TV a couple of weeks ago, I found the morning news channels filled with reports of a massive earthquake centred on a faraway Russian peninsula and ominous warnings of the potential tsunami that could follow in its wake. Alerted by eerily wailing sirens, coastal communities across the Pacific basin were on the move, desperate to reach the relative safety of higher ground. In Japan, memories of the devastating 2011 tidal wave that overwhelmed the country’s north-eastern seaboard and triggered the Fukushima nuclear disaster were raw. The Japanese take natural disasters seriously, with a comprehensive early warning system designed to buy time in case the seas should rise up again.

Thankfully, the predicted tsunami never came. As the risk subsided, those who might have been in the firing line returned to their homes, no doubt keeping a nervous eye out for any reports of aftershocks. Watching from the geographical safety of Western Europe, I wondered about the affected region. Not every country has the highly developed economy of Japan, with the infrastructure in place to protect its citizens. How many people were relying on their televisions for news, waiting for information that might literally be of existential importance?

So often, turning on the TV is an almost instinctive reaction whenever a big story breaks. Although we are fortunate that the threat from tsunamis is low in our part of the world, we still experience events that are felt nationally and internationally. Think back to the pandemic. How many viewers in the UK tuned in to the nightly press conferences that were broadcast during the lockdown period? Some wanted to find out about the latest restrictions, others to learn what efforts were being taken to combat the virus, maybe there were those who sought comfort in a kind of televisual connection as we all tried to make sense of those strangest of times. Regardless of our reasons, so many of us watched that several new phrases entered the national lexicon, while we waited to see what the “next slide, please” might reveal.

The pandemic was instrumental in demonstrating the immense value of broadcast media in times of national emergency, as a way of disseminating information, maintaining vital connections and even boosting morale among the populace. To inform, to educate, and to entertain, as someone once said.

While the pandemic might have revealed, at least in part, how much we all depend on the broadcast industry at certain times, it most definitely had a significant impact on how the industry itself operates. Facing an urgent need to react, broadcasters embraced remote production at scale. Ideas and methodologies that had been talked about for years suddenly became essential, with migration to IP-based systems instantly transformed from being merely nice to have to absolutely mission-critical. Earlier this year, on the fifth anniversary of the original lockdown, TVBEurope published a wide-ranging article featuring opinions from a broad spectrum of M&E companies, almost all of which shared the consensus that the crisis instigated a revolution in the way TV is produced, distributed, and consumed.

Thanks to the exponential rate of technological advancement, new possibilities are opening up almost as quickly as they can be imagined. All this progress, however, presents challenges of its own. As innovations such as cloud computing become the norm, the need to ensure systems and data are adequately protected becomes ever more pressing.

Bad actors, both criminal and state-sponsored, are constantly probing for vulnerability. As the IP migration rush continues, new endpoints widen the available attack surface. Cybersecurity must be placed at the front and centre of operations. Now, more than ever, organisations across the media and entertainment landscape need to maintain a robust defence against those who would seek to harm others.

Yet while the industry accepts the responsibility, could now be the time for stronger action at state level? Governments across the continent have realised the infrastructure upon which their nations depend is vulnerable, and recognise the need to ensure robust measures are in place to protect it. In the UK, several sectors are designated as critical national infrastructure (CNI). The agency responsible, the National Protective Security Authority, defines CNI as “critical elements of infrastructure whose loss or compromise could severely impact the delivery of essential services or have significant impact on national security, national defence or the functioning of the state”.

The broadcast media in general—and television in particular— has often proved itself an invaluable asset during troubled times. Perhaps it is time for governments to recognise its necessity, and ensure it is afforded the protection it deserves.

FLEXIBLE VIEWS

It doesn’t matter which area of live production you work in, more than likely, you will use a multiviewer of some kind. They are the tools that manage and display multiple sources on a screen and can be used for both live video and audio. TVBEurope speaks to two end users to hear how they are deploying the screens, how interoperable they are with other technology, and the one thing they’d like to see added to the multiviewer’s capabilities

Cloudbass Ltd is a UK-based outside broadcasting company with clients including the BBC, Sky Sports, ITV Sport and more. It recently installed two manifold multiviewer systems in its new IP remote trucks; the larger truck has 21 heads (or screens), and the smaller has eight.

Cloudbass recently installed two manifold multiviewer systems across its new IP remote trucks

“They are used in our two new remote production trucks, for both engineering and production monitoring,” explains Cloudbass technical director, Michael Beaumont. “The trucks are used for top-tier English football coverage. One of the trucks can also operate in a hybrid mode with the director onsite, so low latency is essential.

“Each of these systems is based on small (but powerful) FPGAs with 4x 100GE connectivity, housed in a single 1U system, with the capacity to add more in the same chassis in the future if required.”

Both multiviewers are IP-based, although Beaumont admits finding what he describes as “good” IP multiviewers can be challenging. “They all have some limitations, be that input count or being limited to only a few heads, especially when using UHD heads. manifold offered a scalable solution, where, yes, there are limitations, but due to the scalable architecture, those limitations are way outside what we require.”

Real estate in any outside broadcasting truck is severely limited, so opting to use a multiviewer instead of single screens helps with flexible layouts. Multiple monitors are also impractical given the number of cameras and feeds used on a production. While Cloudbass had a few initial challenges with configuration and set-up, Beaumont says the multiviewers have had “no real issues” interoperating with other sources.

“The biggest development with manifold is the scalability: adding an extra head in a year’s time is a simple process that requires no extra hardware or connections into your switch fabric (assuming you have some spare FPGA capacity),” he adds.
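
As a purely illustrative sketch of that kind of scaling decision (the names `Chassis`, `canAddHead` and the per-FPGA figures are invented for this example, not manifold specifications), the capacity check amounts to: add a head only if the chassis still has spare FPGA capacity, with no new cabling or switch-fabric connections required.

```typescript
// Illustrative capacity check for adding a multiviewer head to an existing chassis.
// Names and numbers are assumptions for the sketch, not vendor specifications.

interface Chassis {
  name: string;
  fpgaSlotsTotal: number;   // FPGA capacity installed in the 1U frame
  fpgaSlotsUsed: number;
  headsPerFpga: number;     // how many heads one FPGA can drive in this configuration
  headsActive: number;
}

function canAddHead(chassis: Chassis): boolean {
  const headCapacity = chassis.fpgaSlotsUsed * chassis.headsPerFpga;
  // Room on an already-active FPGA, or a spare FPGA slot still free?
  return chassis.headsActive < headCapacity || chassis.fpgaSlotsUsed < chassis.fpgaSlotsTotal;
}

function addHead(chassis: Chassis): Chassis {
  if (!canAddHead(chassis)) {
    throw new Error(`${chassis.name}: no spare FPGA capacity for another head`);
  }
  const headCapacity = chassis.fpgaSlotsUsed * chassis.headsPerFpga;
  const needsNewFpga = chassis.headsActive >= headCapacity;
  return {
    ...chassis,
    fpgaSlotsUsed: chassis.fpgaSlotsUsed + (needsNewFpga ? 1 : 0),
    headsActive: chassis.headsActive + 1,
  };
}

// Example: a truck system with one spare FPGA slot left in the same 1U chassis.
const truckA: Chassis = { name: "Truck A", fpgaSlotsTotal: 4, fpgaSlotsUsed: 3, headsPerFpga: 8, headsActive: 21 };
console.log(canAddHead(truckA));          // true: capacity remains in the existing frame
console.log(addHead(truckA).headsActive); // 22
```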

ABOVE: Satellite Mediaport Services employs TAG Video Systems' Realtime Media Platform

Asked what one thing he would like to see added to the capabilities of a multiviewer, Beaumont says vendors should think more about the layout tools. “We use a whole raft of various multiviewer systems, and they almost all have clunky layout tools,” he states. “In OBs, we change the layouts on almost every job, so unlike MCR operations, where the layout is relatively fixed, we use the layout tools regularly.

"Being able to make changes easily, and it not looking a complete mess with unaligned PIPs scattered across the screen, is crucial. Being able to

AND KVM FEELS RIGHT.

You can’t always see G&D right away. The products and solutions are often hidden. But they are systemically relevant and work. Always!

You can rely on G&D. And be absolutely certain. That‘s quality you can feel. When working in a control room. With every click. When installing in a server rack or at workplaces.

G&D ensures that you can operate your systems securely, quickly, and in high quality over long distances.

People working in control rooms can rely on G&D.

G&D simply feels right.

create a default ‘PIP look’ with the UMD and Tally boxes across all the windows that scale to any size, is a key feature that often gets overlooked.”
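
To make the idea concrete, here is a small, hypothetical sketch (the names `PipLook` and `layoutPip` are invented, not a manifold or any vendor API) of how a default ‘PIP look’ defined in relative terms could be rendered at any window size, keeping the UMD strip and tally border proportional.

```typescript
// Illustrative only: a default "PIP look" expressed as fractions of the window,
// so it scales cleanly to any head resolution or PIP size.

interface PipLook {
  umdHeightFrac: number;    // UMD (source label) strip height, as a fraction of PIP height
  tallyBorderFrac: number;  // tally border thickness, as a fraction of PIP height
}

interface PipGeometry {
  video: { x: number; y: number; w: number; h: number };
  umd:   { x: number; y: number; w: number; h: number };
  tallyBorderPx: number;
}

function layoutPip(x: number, y: number, w: number, h: number, look: PipLook): PipGeometry {
  const umdH = Math.round(h * look.umdHeightFrac);
  const border = Math.max(1, Math.round(h * look.tallyBorderFrac));
  return {
    video: { x, y, w, h: h - umdH },             // picture above the UMD strip
    umd:   { x, y: y + h - umdH, w, h: umdH },   // UMD strip along the bottom
    tallyBorderPx: border,                       // red/green border drawn around the PIP
  };
}

// Same look applied to a small PIP and a large one on a UHD head.
const look: PipLook = { umdHeightFrac: 0.08, tallyBorderFrac: 0.015 };
console.log(layoutPip(0, 0, 640, 360, look));
console.log(layoutPip(1920, 0, 1920, 1080, look));
```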

Multiviewers in a Network Operations Centre

Satellite Mediaport Services operates a Network Operations Centre (NOC), which monitors and maintains its Teleport 24/7/365. As part of its monitoring, the company employs TAG Video Systems’ Realtime Media Platform with 12 channels and transport stream over IP (TSoIP) inputs.

“We use the multiviewer in our NOC to monitor channels visually and give us a quick indication of an issue,” explains the company’s deputy CEO, Valentin Kislyakov. “This allows us to troubleshoot issues more efficiently. There are usually between two and six people watching it at any one time.”

Key to the multiviewer’s success is its flexibility, says Kislyakov, “as well as the number of monitoring features and capabilities that can integrate with other monitoring systems. However, we would always like better compression algorithms in order to utilise fewer hardware resources.”

MONITORING: THE KEY TO STREAMING SUCCESS

Interra Systems

In today’s hyper-competitive streaming landscape, delivering compelling content is only part of the equation. According to GWI, 76 per cent of global consumers watch streaming content daily, and with subscription fatigue rising by 77 per cent since 2020, the ability to consistently provide a seamless, high-quality experience has become a defining factor in subscriber retention.

The explosion of on-demand content and the proliferation of streaming platforms have fundamentally reshaped consumer behaviour. Viewers are no longer passive recipients—they’re active participants with high standards and countless alternatives. They expect personalised recommendations, binge-ready libraries, and seamless playback across devices. Features like intuitive navigation, accurate subtitles, and consistent audio quality are now baseline expectations. At the same time, concerns around privacy, data security, and ad intrusiveness are influencing platform preferences.

In this environment, even minor disruptions can lead to churn. Monitoring the preparation and delivery of audio-visual content has emerged as a strategic imperative—one that supports monetisation, compliance, and long-term viewer loyalty.

The challenge of a multi-network, multi-device world

Today’s streaming environment is defined by complexity. From a technical standpoint, providers must manage a wide range of content types—live sports, linear TV, VOD, and user-generated clips—across diverse formats, resolutions, and codecs. This content travels through a patchwork of delivery infrastructures, including cloud, private cloud, and public internet, each introducing its own variables.

For live premium content, minimising latency is critical, especially in OTT environments where bandwidth fluctuations are common. Viewers expect seamless playback whether they’re watching on a smart TV, tablet, or smartphone, and whether the stream is live or on-demand. But inconsistencies in codec support, DRM, and streaming formats across platforms often lead to buffering, resolution drops, AV sync issues, and even ad-insertion glitches. Captioning errors, missing language tracks, and sudden loudness changes during ads can further degrade the experience.

To meet these challenges, monitoring must be deployed at key touchpoints, from video processing to final delivery, and must provide accurate, real-time quality metrics, quick alerts, and root-cause analysis. Only then can providers ensure a consistent, high-quality experience across all devices and networks.
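
As an illustration only, the sketch below uses invented names such as `QualitySample` and `checkThresholds` (not Interra’s or any vendor’s API) to show the kind of threshold check a monitoring probe might run at a delivery touchpoint, raising an alert when a metric drifts outside an agreed range. Real systems expose far richer metrics and root-cause tooling.

```typescript
// Minimal sketch of a per-touchpoint quality check.
// All type and function names here are illustrative, not a real product API.

interface QualitySample {
  touchpoint: string;      // e.g. "encoder", "origin", "cdn-edge"
  bitrateKbps: number;     // measured video bitrate
  avSyncMs: number;        // audio/video offset, positive = audio late
  captionDelayMs: number;  // caption timing drift
  droppedFrames: number;   // dropped frames in the sample window
}

interface Alert {
  touchpoint: string;
  metric: string;
  value: number;
  message: string;
}

// Example thresholds; in practice these would come from service-level targets.
const LIMITS = {
  minBitrateKbps: 2500,
  maxAvSyncMs: 45,
  maxCaptionDelayMs: 500,
  maxDroppedFrames: 0,
};

function checkThresholds(sample: QualitySample): Alert[] {
  const alerts: Alert[] = [];
  if (sample.bitrateKbps < LIMITS.minBitrateKbps) {
    alerts.push({ touchpoint: sample.touchpoint, metric: "bitrate",
      value: sample.bitrateKbps, message: "Bitrate below target" });
  }
  if (Math.abs(sample.avSyncMs) > LIMITS.maxAvSyncMs) {
    alerts.push({ touchpoint: sample.touchpoint, metric: "avSync",
      value: sample.avSyncMs, message: "AV sync drift beyond tolerance" });
  }
  if (sample.captionDelayMs > LIMITS.maxCaptionDelayMs) {
    alerts.push({ touchpoint: sample.touchpoint, metric: "captions",
      value: sample.captionDelayMs, message: "Captions lagging the video" });
  }
  if (sample.droppedFrames > LIMITS.maxDroppedFrames) {
    alerts.push({ touchpoint: sample.touchpoint, metric: "droppedFrames",
      value: sample.droppedFrames, message: "Frames dropped in window" });
  }
  return alerts;
}

// Example: a sample taken at the CDN edge with an AV sync problem.
const alerts = checkThresholds({
  touchpoint: "cdn-edge", bitrateKbps: 4200, avSyncMs: 80,
  captionDelayMs: 120, droppedFrames: 0,
});
alerts.forEach(a => console.log(`[${a.touchpoint}] ${a.metric}: ${a.message} (${a.value})`));
```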

Real-time data: the engine behind engagement and monetisation

Real-time monitoring is the linchpin of modern streaming operations. It enables providers to detect and resolve issues instantly, often before viewers are even aware. By continuously tracking parameters like bit-rate shifts, AV sync, caption timing, and playback errors, providers can pinpoint the root cause—whether it’s a transcoding glitch, CDN congestion, or player malfunction—and take corrective action immediately.

For example, during a live sport broadcast, even a minor AV sync drift caused by a transcoder issue can be flagged within seconds, allowing operations teams to fix it before it disrupts the viewing experience. Real-time insights can also trigger automated failover, rerouting traffic or switching to backup streams when quality drops are detected. If a stream is suffering from CDN congestion or becomes unavailable, traffic can be redirected in real time to maintain smooth playback.
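
A highly simplified sketch of that failover idea, with hypothetical names (`OriginHealth`, `pickActiveOrigin`) and an assumed threshold, might look like this: keep probing each source, and switch to a backup once the primary has failed several consecutive checks.

```typescript
// Illustrative failover logic: names and thresholds are assumptions,
// not a description of any particular vendor's implementation.

interface OriginHealth {
  name: string;
  healthy: boolean;           // latest probe result
  consecutiveFailures: number;
}

const FAILOVER_AFTER = 3; // switch after this many consecutive failed probes

function updateHealth(origin: OriginHealth, probeOk: boolean): OriginHealth {
  return {
    ...origin,
    healthy: probeOk,
    consecutiveFailures: probeOk ? 0 : origin.consecutiveFailures + 1,
  };
}

function pickActiveOrigin(primary: OriginHealth, backup: OriginHealth): string {
  // Stay on the primary unless it has failed several probes in a row
  // and the backup currently looks healthy.
  if (primary.consecutiveFailures >= FAILOVER_AFTER && backup.healthy) {
    return backup.name;
  }
  return primary.name;
}

// Example: the primary CDN has failed three probes, so traffic moves to the backup.
let primary: OriginHealth = { name: "cdn-primary", healthy: true, consecutiveFailures: 0 };
const backup: OriginHealth = { name: "cdn-backup", healthy: true, consecutiveFailures: 0 };

[false, false, false].forEach(ok => { primary = updateHealth(primary, ok); });
console.log(pickActiveOrigin(primary, backup)); // "cdn-backup"
```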

Monitoring systems that support DRM standards can detect access issues before they reach users, while subtitle monitoring ensures that missing or delayed tracks are corrected quickly. These capabilities reduce user complaints and support costs.

The benefits extend beyond performance. Real-time insights ensure that ad insertions are executed smoothly and in full, maximising monetisation opportunities. They also help deliver a consistent experience across devices and networks, an essential factor in building brand trust. By minimising disruptions and ensuring playback quality, providers can reduce churn, increase engagement, and turn satisfied viewers into loyal advocates.

Monitoring as a strategic investment in long-term success

More than just a back-end function, monitoring has become a strategic pillar for streaming providers aiming to thrive in a saturated market. By enabling real-time visibility into content performance, monitoring empowers teams to identify and resolve issues before they impact viewers. It supports root-cause analysis, ensures compliance with evolving standards, and provides the data needed to optimise workflows and delivery strategies.

As streaming services scale across devices and geographies, monitoring becomes the connective tissue that ensures consistency, reliability, and trust. Providers that invest in robust, flexible monitoring solutions are better equipped to adapt, innovate, and retain their audiences over the long term.

STREAMING FIFA CLUB WORLD CUP TO THE masses

No longer the sole domain of broadcast and pay-TV, live sport is now increasingly being watched on streaming services. A recent study found fans are now as likely to watch sport on streaming services as they are on broadcast networks. These shifting viewing habits are driving huge changes in how live sport is broadcast and experienced by viewers. Streaming services are reimagining and enhancing the viewing experience by offering a whole host of features, as well as improved discoverability and personalised elements that were not achievable with traditional sports broadcasting.

FIFA Club World Cup, live on 5

Recognising the growing demand for streamed live sport, earlier this year UK free-to-air network 5, owned by Paramount Global, secured a sublicensing deal with DAZN, the global rights holder for the FIFA Club World Cup, to show 23 games from the tournament, which features 32 teams from around the world and takes place every four years. This year, the tournament took place in the United States over four weeks in June and July. Anticipating increased traffic during this time, 5 scaled its infrastructure to ensure a high-quality viewing experience for fans that felt just as dynamic and immediate as traditional TV, but with an enhanced user experience.

Indeed, coverage of the final, in which Chelsea triumphed over Paris Saint-Germain, drew 459,000 viewers to the live stream on 5, alongside 2.4 million watching on 5’s linear channel, according to overnight figures from UK ratings company Barb. To achieve these aims, 5 partnered with Accedo, a global provider of video streaming software and services, to enhance its streaming service with a robust set of new features designed specifically to improve live events. “To support the broadcast of the FIFA Club World Cup across the 5 platform, we collaborated closely with Accedo to deliver the best possible experience for our audience,” explains Sam Heaney, VP product, tech and ops at 5 Streaming.

Delivering the best possible viewing experience

From smarter scheduling tools to refreshed UI elements and new content rails, every update had a clear goal: to make the action easier to find, and even easier to watch.

5 rolled out upgraded support for live and simulcast content, ensuring matches were surfaced prominently and streamed seamlessly across every platform where 5 is available, including traditional pay-TV, connected TVs from Samsung, LG, and Google, as well as Fire TV, Roku, tvOS, and both iOS and Android devices.

A standout addition was the introduction of live-tagged tiles in the Featured section. These sat alongside popular on-demand shows but were designed to spotlight live events, letting viewers jump straight into a match with a single click.

Real-time progress bars and countdown timers gave users instant visibility into what was live, what was next, and how far along each broadcast was. As matches moved from “Upcoming” to “Live”, the UI shifted dynamically, offering subtle, helpful cues that kept fans engaged throughout the tournament.
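
The logic behind those cues is simple enough to sketch. The snippet below uses hypothetical names (`tileState`, `progress`, `countdown`) and is not Accedo’s or 5’s actual code; it simply derives a tile’s state and progress bar from the scheduled start and end times, so the UI can flip from “Upcoming” to “Live” and show how far along the broadcast is.

```typescript
// Illustrative sketch of live-tile state and progress, not production code.

type TileState = "upcoming" | "live" | "ended";

interface ScheduledEvent {
  title: string;
  start: Date;
  end: Date;
}

function tileState(event: ScheduledEvent, now: Date): TileState {
  if (now.getTime() < event.start.getTime()) return "upcoming";
  if (now.getTime() < event.end.getTime()) return "live";
  return "ended";
}

// Progress through the broadcast, 0..1, used to draw the progress bar.
function progress(event: ScheduledEvent, now: Date): number {
  const total = event.end.getTime() - event.start.getTime();
  const elapsed = now.getTime() - event.start.getTime();
  return Math.min(1, Math.max(0, elapsed / total));
}

// Countdown text shown while a match is still upcoming.
function countdown(event: ScheduledEvent, now: Date): string {
  const mins = Math.max(0, Math.round((event.start.getTime() - now.getTime()) / 60000));
  return `Starts in ${mins} min`;
}

// Example: a match that kicked off 30 minutes ago.
const match: ScheduledEvent = {
  title: "Final",
  start: new Date(Date.now() - 30 * 60000),
  end: new Date(Date.now() + 75 * 60000),
};
const now = new Date();
console.log(tileState(match, now), Math.round(progress(match, now) * 100) + "%");
```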

Under the hood, Accedo delivered a series of front-end upgrades to sharpen performance, including code refactoring, a more efficient homepage architecture for faster data fetching, and the introduction of pagination for smoother content loading. Channel switching, a notorious pain point in streaming, was also tuned to reduce lag. While still not quite as instantaneous as linear TV, the experience is now notably quicker and more responsive than many comparable streaming services.
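
Pagination of this kind usually means fetching one page of rail items at a time rather than the whole catalogue up front. A minimal sketch, assuming a hypothetical `/rails` endpoint that accepts `page` and `pageSize` query parameters (not 5’s or Accedo’s real API), is shown below.

```typescript
// Illustrative paginated fetch; the endpoint and response shape are assumptions.

interface RailItem { id: string; title: string; }
interface Page<T> { items: T[]; nextPage: number | null; }

async function fetchRailPage(railId: string, page: number, pageSize = 20): Promise<Page<RailItem>> {
  const res = await fetch(`/rails/${railId}/items?page=${page}&pageSize=${pageSize}`);
  if (!res.ok) throw new Error(`Failed to load rail ${railId}: ${res.status}`);
  return res.json() as Promise<Page<RailItem>>;
}

// Load pages lazily as the user scrolls, instead of fetching everything at once.
async function loadUntil(railId: string, wanted: number): Promise<RailItem[]> {
  const items: RailItem[] = [];
  let page: number | null = 1;
  while (page !== null && items.length < wanted) {
    const result = await fetchRailPage(railId, page);
    items.push(...result.items);
    page = result.nextPage;
  }
  return items;
}

// Example: fetch enough items to fill the first two screens of a rail.
loadUntil("featured", 40).then(items => console.log(`Loaded ${items.length} items`));
```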

To enhance the live viewing experience, 5 rolled out a series of UI updates designed to make content discovery feel seamless and intuitive. One of the key changes was the introduction of mixed-content rails, where live streams, FAST channels, and on-demand episodes sit side by side. This approach removes the sense of jumping between different parts of the service, letting viewers explore across formats as if they were browsing a single, unified catalogue.

5 also added portrait-format rails, creating a more visually led browsing experience that feels tailor-made for mobile users. Striking imagery and vertical scrolling enable quick and effortless discovery on smaller screens.

On top of that, a new “Top 10” trending rail gives audiences a pulse on what’s hot right now, whether it’s live matches, highlight reels, or fan-favourite shows. Updated in real time, it captures the energy of the tournament and helps surface content that’s generating buzz in the moment.

Another major win came from a newly introduced flexible scheduling system. This gave 5’s editorial team the ability to manually adjust match windows, critical for games that ran into overtime, and allowed the app to auto-refresh at key transitions to stay aligned with live broadcasts.
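
Conceptually, the scheduling piece can be reduced to a small data model: an editable window per match plus a refresh triggered at each transition. The sketch below uses invented names (`MatchWindow`, `extendWindow`, `nextTransition`) and is not the system 5 deployed.

```typescript
// Illustrative model of an editable match window with auto-refresh at transitions.

interface MatchWindow {
  matchId: string;
  start: Date;
  end: Date;   // editors can push this back when a game runs long
}

// Extend a window, e.g. when a match goes to extra time or penalties.
function extendWindow(w: MatchWindow, extraMinutes: number): MatchWindow {
  return { ...w, end: new Date(w.end.getTime() + extraMinutes * 60000) };
}

// Find the next moment the app should refresh its schedule-driven UI.
function nextTransition(windows: MatchWindow[], now: Date): Date | null {
  const times = windows
    .flatMap(w => [w.start, w.end])
    .filter(t => t.getTime() > now.getTime())
    .sort((a, b) => a.getTime() - b.getTime());
  return times[0] ?? null;
}

// Example: a final that runs 30 minutes over its planned window.
let final: MatchWindow = {
  matchId: "cwc-final",
  start: new Date("2025-07-13T19:00:00Z"),
  end: new Date("2025-07-13T21:00:00Z"),
};
final = extendWindow(final, 30);
console.log(nextTransition([final], new Date("2025-07-13T20:30:00Z"))); // 21:30 UTC
```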

“Accedo enhanced our UI capabilities by promoting live events alongside our premium VOD content on the homepage,” explains Heaney. “They also worked with other key vendors to improve app performance and optimise backend integrations, resulting in faster load times for both live and on-demand content.”

The evolution of sport broadcasting

Sport broadcasting is in the middle of a revolution and video providers are on the front line, enhancing the viewing experience to make live sporting events even more engaging and enjoyable. Streaming services are constantly refining and enhancing how live sport is discovered and experienced, which is set to reshape what it means to watch sport.

In the coming years, streaming is going to become millions of fans’ first choice for watching live sport, and this presents huge opportunities for video services that can capitalise on that shift. Having proven the service’s ability to manage something as popular as the FIFA Club World Cup, 5 is now in a great position to scale for future high-profile events across sports, entertainment, and beyond.

ABOVE: 5 rolled out a series of UI updates designed to make content discovery feel seamless and intuitive

Lessons from public media’s AI classroom

As you read this, in September 2025, the EBU Academy School of AI will have been teaching media professionals how to harness the potential of artificial intelligence for 18 months. In real terms, it’s a short period of time. In AI terms, it seems much longer, given all that’s happened!

As the learning arm of the European Broadcasting Union, it’s the academy’s job to support EBU members, Europe’s public service media (PSM), based on their strategic priorities, including artificial intelligence. The School of AI is an evolving curriculum of classes, courses and other activities to meet the needs of staff. To date, learners have come from more than 70 EBU members and the academy serves other media organisations.

The academy opened its virtual classrooms by supporting a lot of discovery: explaining what AI is, why it’s relevant to public media, and how to use ChatGPT. Now, we are receiving more specific requests from EBU members, such as strategy workshops for leaders or teaching journalists techniques to fight AI deepfakes. Quite rightly, broadcasters’ plans have been formulated with care and with some caution—public media has much to lose if AI harms the trust audiences have in it—but I feel that, for some, learning needs are now developing almost as quickly as the tools.

Training isn’t always received positively within media organisations. Sometimes resisted by busy leaders and staff who “don’t have time”, it can be an easy target for budget cuts when classes are viewed as a “nice to have”, but not essential. If anything can change the minds of those who are less enthusiastic about training, it’s AI. It is a transformative technology that affects everyone; a challenge that requires the development of critical thinking as much as new technical skills.

It’s about people, not tech, as Minna Mustakallio, head of responsible AI from Finland’s YLE, explained to me; it’s a significant culture-change exercise. And, yes, AI is changing. Continuously. Rapidly. Furiously. It means continuous learning must be part of a media organisation’s AI strategy. It is vital.

In fact, my colleague Natalia Beregoi, a respected media transformation consultant at the EBU, told an audience of heads of learning and development from public media outlets that AI gives them their chance to take a seat at the top table of their organisations. (L&D colleagues, take note!)

At EBU Academy School of AI, we also believe that training gives media organisations an excellent opportunity to bring together editorial (and other) teams with technology colleagues. Idea sharing and collaborative working on AI projects are not a “nice to have”, either! We even discounted the cost of training for content editors and innovators from the same organisation attending our Master Class on developing recommendation and personalisation systems. Successful partnerships can start in the classroom.

So, what’s to come in the next 18 months of the EBU Academy School of AI? It’s an impossible question to answer given the speed of change. But we will continue to focus on serving the learning needs of media staff today, whilst identifying emerging trends and the next likely classes. Course development is constant. This task is supported by our own highly collaborative way of working: plugging into the pioneering work of EBU members, drawing on the expertise of our faculty, and benefitting from the insights of the EBU’s technology and innovation department. I can say one thing for sure: working at the EBU Academy School of AI will never be dull. Happy learning!

EBU Academy School of AI offers support to EBU members and other media organisations via its website, academy.ebu.ch/schoolofai
