Broadcast sustainability grows up



How cutting-edge technology brought the award-winning musical to the big screen
Now that the dust has settled on the 2025 NAB Show, I can gather my thoughts and reflect on some of the biggest talking points. Of course, everyone was talking about President Trump’s planned tariffs, which changed even while we were in Las Vegas. Most vendors told me they were in “wait and see” mode, and I think that’s probably the best place to be at the moment with the constant, almost daily, changes.
During all of my meetings, I asked everyone to give me one word to sum up the current state of the broadcast industry. What I found is that most people feel the industry is in a period of transition, brought about by the likes of artificial intelligence and automation.
One phrase I’ve heard a lot recently is “revolution, not evolution”.
As broadcasters and manufacturers work towards changing their business models, rethinking how they do things and “doing more for less” have become paramount.
That’s leading broadcasters and media companies to think differently about how they build and operate their facilities, driving the industry towards new and innovative technologies.
I expected to talk a lot more about AI while I was at NAB, but actually automation was one of the biggest topics of conversation (of course, AI fits into that). Automation can both improve efficiency and reduce mundane tasks. We know that automation is becoming key to areas such as playout and news workflows, but we’re also starting to see it have an impact on things like graphics, with the first multimodal, agentic solution for enterprise broadcast graphics automation unveiled in Las Vegas.
Of course, using automation increases a company’s power needs. All of that technology needs electricity to keep it running. So, if we move towards more automation, are we making our industry less green? This issue of TVBEurope focuses on the latest developments within the broadcast industry’s sustainability journey, with some interesting points raised about the impact AI and the cloud are having on our carbon footprint. For example, did you know that every image created by an AI model is estimated to use as much electricity as it takes to fully charge a smartphone?
My point is, if the industry is transforming, then it needs to think seriously about what impact that will have going forward. Can media companies afford to use these energy-greedy technologies if broadcasters want to be net zero by 2050? As always, please get in touch and share your thoughts.
X.com: TVBEUROPE / Facebook: TVBEUROPE1 / Bluesky: TVBEUROPE.COM
Content Director: Jenny Priestley jenny.priestley@futurenet.com
Senior Content Writer: Matthew Corrigan matthew.corrigan@futurenet.com
Graphic Designers: Cliff Newman, Steve Mumby
Production Manager: Nicole Schilling
Contributors: David Davies, Kevin Emmott, Graham Lovelace, Neil Maycock, Neal Romanek
Cover image: Everett Collection Inc / Alamy Stock Photo; Universal Pictures
Publisher TVBEurope/TV Tech, B2B Tech: Joseph Palombo joseph.palombo@futurenet.com
Account Director: Hayley Brailey-Woolfson hayley.braileywoolfson@futurenet.com
SUBSCRIBER CUSTOMER SERVICE
To subscribe, change your address, or check on your current account status, go to www.tvbeurope.com/subscribe
Digital editions of the magazine are available to view on ISSUU.com. Recent back issues of the printed edition may be available; please contact customerservice@futurenet.com for more information.
LICENSING/REPRINTS/PERMISSIONS
TVBE is available for licensing. Contact the Licensing team to discuss partnership opportunities. Head of Print Licensing: Rachel Shaw, licensing@futurenet.com
SVP, MD, B2B: Amanda Darman-Allen
VP, Global Head of Content, B2B: Carmel King
MD, Content, Broadcast Tech: Paul McLane
Global Head of Sales, Future B2B: Tom Sikes
Managing VP of Sales, B2B Tech: Adam Goldstein
VP, Global Head of Strategy & Ops, B2B: Allison Markert
VP, Product & Marketing, B2B: Andrew Buchholz
Head of Production US & UK: Mark Constance
Head of Design, B2B: Nicole Cobban
MAY/JUNE 2025
06 AI and sustainability: something better change
By Graham Lovelace, AI strategist, writer and keynote speaker
08 How green is your cloud?
By Neil Maycock, strategic business advisor
10 The Wizards of Oz
Dimension Studio’s Ozan Akgun explains how the company used volumetric capture to create pixel-perfect crowd scenes for Jon M Chu’s record-smashing Wicked
16 Broadcast sustainability has grown up
Neal Romanek looks at how UK broadcasters are turning green plans into corporate ambitions
18 Best of show
Future’s Best of Show at the 2025 NAB Show highlighted innovative products and solutions on show in Las Vegas. We celebrate the winners in the TVBEurope category
24 Scanning the horizon
Jenny Priestley discovers how Visualskies is driving the world of TV and film production forward with cutting-edge scanning technologies
30 Understanding the cloud’s environmental impact
David Davies explores a new white paper from the EBU and sustainable analytics specialist Humans Not Robots highlighting the carbon footprint of cloud-based infrastructure
34 What you see is what you get
Kevin Emmott speaks to James Medcraft, founder of Cyclops POV, and finds out about the physics of perception and why effective POV footage is all in the eyes
37 Ultra accessible
Thanks to its imaginative use of Object-Based Media, children’s animation Mixmups is widening accessibility for viewers with additional needs. Matthew Corrigan meets series creator Rebecca Atkinson and Kate Dimbleby, co-CEO at Stornaway.io, to find out how
By Graham Lovelace, AI strategist, writer and keynote speaker
For all its magic, AI has a dark side. Dark as in unpleasant: biased models perpetuate stereotypes, hallucinate lies and spread misinformation. These are things we know, things we can partially mitigate. But dark can also mean hidden. Every new AI model release is accompanied by boasts about what it can do. What we’re not told is the cost of how it performs its conjuring tricks—the cost to our planet, and our ability to maintain its precious resources. This needs to change.
AI improves efficiency, automating tasks performed by humans. Robots first replaced unskilled labour performing repetitive tasks but AI is now coming for knowledge workers, including those in the screen industries, potentially replacing skilled professionals with systems that imitate their human-crafted expressions. Paradoxically, while AI is expected to deliver significant efficiency gains, the way its newfangled generative form works is wildly inefficient and resource-intensive.
Vast amounts of computational power and energy go into AI training (the initial step in which models learn from the information they’re fed) and inference (the process by which trained models generate outputs based on the probability that they’re plausibly correct). Training OpenAI’s GPT-4 required an estimated 50 GWh of electricity—enough to supply nearly 250,000 UK households for one month. It’s further estimated that every image output by an AI model uses as much electricity as fully charging a smartphone. Using AI to answer queries requires 10x the amount of electricity of classic search. Total electricity consumption by data centres—the near-workerless factories of the AI industrial revolution—is forecast to roughly equal Japan’s annual consumption next year.
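As a rough sanity check on that household comparison (assuming a typical UK household uses around 2,700 kWh of electricity a year, a commonly cited average rather than a figure from the studies above):

\[
\frac{50\ \text{GWh}}{250{,}000\ \text{households}} = 200\ \text{kWh per household} \approx \frac{2{,}700\ \text{kWh/yr}}{12} \approx 225\ \text{kWh per month}
\]

so the one-month claim is broadly consistent.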
Data centres don’t just consume huge amounts of electricity (nearly a third of Ireland’s total demand by 2026, according to studies); they also have a gigantic thirst, since they evaporate water to keep cool. Creating a 100-word email using generative AI requires the equivalent of a 500ml bottle of water to prevent servers from overheating. Between now and 2030, data centres could emit 2.5 billion tonnes of greenhouse gases globally—three times what they would have emitted if generative AI hadn’t happened.
That’s a lot of stats. The problem is that almost every one is now out of date. They were calculated before the recent release of even bigger AI models; before a plethora of energy-sapping video generators went live; before search went generative, spitting out AI-written summaries whether we want them or not; before image generators were integrated within social networks; and before their use went mainstream.
In March, AI-generated images in the style of Japan’s Studio Ghibli went viral thanks to OpenAI’s new image generator, allowing outputs resembling Ghibli’s distinctive hand-painted animations. OpenAI CEO Sam Altman said while it was “super fun” to see people loving Ghibli images, the craze was causing its graphical processing units (GPUs) (the workhorses of the generative revolution that perform all that power-hungry inference) to melt. Altman’s short-term fix was to cap the number of images users could generate to three per day. That at a time when OpenAI’s weekly active users soared to 800 million, doubling since February.
Longer-term fixes to the AI companies’ demand for more energy include calls for deep investment in nuclear power. That sounds like a clean solution, but hardly anyone talks about the spent fuel rods. Much of the radioactive waste is simply buried underground for future generations to deal with. Elon Musk’s ginormous AI data centre in Memphis is powered by portable methane gas turbines. In April, President Trump signed an executive order saying new coal-powered generators were needed to support America’s AI developers. Yes, coal. Sound sustainable to you? Me neither.
What can we do? Here are three ideas. Our industry should commit itself to only using AI when it’s absolutely necessary. Think of the number of discarded AI images and video sequences and the wasted energy that went into producing those warping, morphing, physics-defying clips that never make it to the edit. We should extend industry initiatives such as BAFTA’s albert from calculating the carbon footprint of TV productions to include AI, making clear how much AI material (including discarded content) was generated and how much energy it consumed. And we should lobby for AI companies to be totally transparent about their energy usage, adding consumption labels to models and outputs similar to those slapped on electrical appliances in the UK and across Europe.
In their punk-era hit Something Better Change, The Stranglers said we were “too blind” to see what’s “happening right now”. Decades later we’re largely oblivious to the environmental damage being caused by AI. “Ain’t got time to wait,” they sang. And we don’t. Something better change.
By Katherine Nash, business operations manager, The Bottle Yard Studios
In April, BAFTA albert and Arup released their annual Studio Sustainability Standard Report. The global, voluntary scheme designed to help studios measure and reduce environmental impact is growing in reach, attracting 31 studios in 2025/26. In its third year, the report named The Bottle Yard Studios’ TBY2 facility as the highest scoring studio of the year—making it officially the world’s most sustainable studio.
At The Bottle Yard Studios, we approach sustainability with longevity in mind. It may be tempting to think that studio sustainability is all about physical infrastructure, but it is much more than that. The Standard’s scorecard is aligned with this way of thinking, taking a more holistic approach. It assesses across six themes: Climate, Circularity, Nature, People, Management and Data. It looks at how you interact with biodiversity on site, for example, how you embed circular economy strategies into day-to-day operations, and how you build a culture, within your own team and the production teams you host, that supports the transition to net zero.
We’ve participated in the Standard since its launch and still recall how rewarding we found the process of completing our first scorecard. For the first time, we had a place to document all the wonderful things we were already doing across the themes, and where there was room for improvement, the scorecard and albert report created a detailed action plan.
Bristol is a city with ambitious net zero targets and, as a Bristol City Council-owned and managed studio, sustainability has always been front and centre of our culture and business planning. As a dual-site complex, the studios have a varied portfolio of physical infrastructure, all of which consists of re-purposed accommodation. Re-purposing over 30 acres of buildings previously deemed ‘end of life’ into a thriving creative hub has saved a tremendous amount of embodied carbon, as well as establishing a resilient production hub that protects employment opportunities for local crew and provides a training ground for the workforce of the future.
At TBY2, we were able to work with contractors to prioritise sustainability and energy conservation at every stage of design and build. The most meaningful sustainability initiative is that the power supply is supported by a 1MW rooftop array, community-funded by Bristol Energy Cooperative, which saves nearly 200 tonnes of CO2 per year. Aside from providing productions with onsite-generated green power, the array also benefits the wider city, being part of Bristol’s City Leap network, which ‘sleeves’ surplus energy to other buildings, helping Bristol reach its goal to be net zero by 2030.
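That 200-tonne figure stacks up on the back of an envelope, assuming an indicative UK solar capacity factor of around 10 per cent and a grid carbon intensity of roughly 0.2 kg CO2 per kWh (both indicative assumptions rather than measured figures):

\[
1\ \text{MW} \times 8{,}760\ \text{h/yr} \times 0.10 \approx 876\ \text{MWh/yr}, \qquad 876{,}000\ \text{kWh} \times 0.2\ \text{kg CO}_2/\text{kWh} \approx 175\ \text{tonnes CO}_2\ \text{per year}
\]

which is in the same ballpark as the quoted saving.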
Our other eight stages are located at our original site, TBY1, offering more affordable accommodation and presenting a different set of sustainability challenges to overcome. Our two sites may accommodate productions of different sizes and budgets, but we strive for a sustainability commitment that is consistent across the board. Both sites scored 100 per cent in the Standard’s themes of Circularity, People and Management, whilst TBY2 also scored 100 per cent in the Nature and Data categories.
Our strategy has to work for our entire operation, and circular economy activity is at the heart of our commitment. We know that by encouraging a thriving local filming ecosystem where our low-carbon studio infrastructure sits alongside a robust supply chain, there will be a long-term, sustainable future for filmmaking in the West of England. This extends to nurturing crew depth too: we are a lead delivery partner for the BFI Lottery-funded All Set West skills package, supporting local emerging talent behind the camera in scripted film/TV production.
We’re often asked what our sustainability reputation means to us. Everything! It’s a huge part of our identity, driven by real passion within our team. It’s also fast becoming a deciding factor for clients weighing up where to shoot. Some of the biggest productions we’ve hosted recently—like Rivals, Wolf Hall: The Mirror and the Light, and The Outlaws—have grabbed our sustainability offering with both hands. We firmly believe that as a studio facility, our role is to enable, influence and incentivise productions to make sustainable choices, not just when basing at the studios, but also when filming in the wider region, and we work closely with Bristol Film Office and many local community groups and charities to implement this strategy.
We’ve created systems that unlock access to tangible support that makes a real difference. That might be our sustainability toolkit, with information about everything needed to work sustainably in our locality, including green suppliers and accommodation lists; our Tenant Hub of 15 supply chain businesses; or the community network we’ve built that helps with material repurposing and reuse. Productions trust that we have done the hard work to connect them to sustainable solutions—and with more productions now arriving with a sustainability budget, we are thrilled to see this hard work being put into effect.
By Neil Maycock, strategic business advisor
Over recent years there has been a tremendous amount written on the economics of the cloud, primarily in our industry looking at the migration of traditional media technology to cloud applications. Much of the complexity comes from trying to accurately model the costs of the traditional approach, such as real estate, power, cooling, infrastructure, to compare with the ongoing subscription of a cloud service. As the market matures, companies are increasingly able to make informed decisions on this direct use of cloud in their businesses, but there is also a trend of increasing indirect cloud usage, which is less well understood.
In many areas of our lives, we are using the cloud more and more, often not even aware of how or when we are using it. Take a simple example: the photos and videos on your phone. Are they all on local storage, or have you exceeded the capacity of your device? Most phones automatically and seamlessly use cloud storage for media, synchronising with the local device as content is accessed. Of course this is initially free, but charges for storage can follow, and how many people have a clear understanding of this cloud usage?
Given the increased use of cloud in so many areas of our lives and businesses, an important question to ask is just how green is it? The cloud providers will argue that it provides economies of scale and efficiency in terms of the previously mentioned overheads like power and cooling. That is a reasonable argument when we are considering replacing X with Y, but there is one area in particular where cloud usage is soaring, and it is incremental usage rather than a replacement for traditional compute: AI.
Much like the example of photos on a phone, the impact or carbon cost of using an AI service is almost entirely invisible to the user. Typing a query into a web page and getting a text response doesn’t give the appearance of high computing demand in the same way that processing video might, but some of the data being published paints a different picture. One estimate for training the GPT-3 large language model is that it consumed 1,287 megawatt-hours of electricity and generated 502 tonnes of carbon dioxide emissions. Of course, it’s not just OpenAI; there are many services being rolled out. Google’s emissions have increased 48 per cent in five years, partly due to the energy demands of AI. Before continuing, I should point out that I can’t credit the above-mentioned studies or data because they come from an AI summary of a web search. This was generated automatically, without being asked for, incurring additional cloud compute and power consumption, which goes back to the point of the energy/carbon cost not being apparent to the user.
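Taking the two GPT-3 figures above at face value (and remembering the caveat about their provenance), the implied carbon intensity of the electricity used is:

\[
\frac{502\ \text{tonnes CO}_2}{1{,}287\ \text{MWh}} = \frac{502{,}000\ \text{kg}}{1{,}287{,}000\ \text{kWh}} \approx 0.39\ \text{kg CO}_2\ \text{per kWh}
\]

roughly what a largely fossil-fuelled grid mix would produce, a reminder that where the compute runs matters as much as how much of it there is.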
Doing this small amount of research leads me to think that there needs to be much more transparency on the carbon impact of using these services. Placing my used plastic in the recycling bin feels a little futile if I then sit at my keyboard asking AI random questions and burning a gigawatt-hour of electricity! It has also led me to question how any of this can be free. If these AI systems consume so much power, then they cost a lot to run, and I don’t buy into the altruistic corporate mission statements that claim they are doing this to help humanity! Ultimately, we will be paying for these ‘free’ services, with the payments falling into two main categories.
The first is that we are paying with our time and knowledge; every time we submit information to an AI system, we are helping train the model it operates on. There is a lot of debate about what this means for intellectual property, from Hollywood actors’ images and voices being duplicated, through to engineers using AI to check their source code, which could effectively make that code open source because the AI can reproduce it for another user.
The second model is where a free service is provided to build a dependency on that service; over time, as it becomes essential to the user, charges and subscriptions can be introduced. The phone storage example at the start of the article is an innocuous one, but the extrapolation is potentially concerning. In the latest series of Black Mirror on Netflix there is an episode where a woman’s life is saved by brain surgery, but the solution requires a medical device in her brain which is linked to a cloud subscription service. Over time, the costs increase, and advertising is introduced for a cheaper subscription—interesting that Netflix allowed that!
Of course that’s an extreme fictional example, but it serves as a lesson that we need to be careful about what we subscribe to, and what the real cost is, both personally and for the planet.
TVBEurope’s website is your online resource for exclusive news, features and information about our industry. Here are some featured articles from the last month…
PROVIDING THE VIRTUAL PRODUCTION BUILDING BLOCKS FOR A MINECRAFT MOVIE
Disguise’s Talia Finlayson and Laura Bell discuss their work both on-set and off, working to create the amazing virtual environments used in the film.
In a special video interview, TVBEurope content director Jenny Priestley sits down with new Grass Valley CEO Jon Wilson to reveal his vision for the company going forward, and what that means for customers.

The strategy aims to modernise the BBC’s broadcast infrastructure, ensuring audience resilience and cost-effective content distribution.
WHY EVERY ARTIST AT THE VE DAY 80 CONCERT WORE TWO MICROPHONES
Jonathan Edwards, head of RF at Terry Tew Sound & Light, details the technology used to deliver both the broadcast and FoH sound for the special concert.
NETFLIX MEDIA PRODUCTION SUITE ‘DEMOCRATISES ACCESS’ TO INNOVATION
Adopting open standards, the solution aims to provide workflow standardisation, allowing for automation and other innovations across a diverse range of markets.
The first part of Jon M Chu’s adaptation of Wicked has smashed global box office records, garnered countless award nominations, and had everyone singing Defying Gravity for months. Dimension Studio’s Ozan Akgun explains how the company used volumetric capture to create the film’s pixel-perfect crowd scenes
Ozan Akgun
How did Dimension get involved in Wicked?
Wicked’s production team approached Dimension to extend crowd scenes for some of the Munchkinland and Emerald City shots, which they wanted to look busier and more vibrant.
It was Dimension’s role to volumetrically capture the background actors who were in full costume and make-up, process the data and hand it over to Framestore, who would then composite them into the shots of Munchkinland and Emerald City, alongside the other VFX work they were doing on the scenes.
The nature of the project, and how quickly it came together, meant that Dimension needed to run pipeline tests with the VFX team, and then be able to deploy our volumetric capture shoot team at short notice, while our processing team were on standby to handle the data and produce the final assets before hand-off. Our team was about 16 people who looked after the production, logistics, capture and delivery.
How did you use the Polymotion Truck to capture the extras’ performances?
The Polymotion Truck is a partnership between Dimension and MRMC. Dimension originally designed the truck to integrate an expanding state-of-the-art volumetric capture stage that leveraged our Microsoft volumetric license.
For Wicked, we deployed the stage at Elstree Studio and, over two days, we captured the supporting cast while they were in full costume and make-up. To accommodate the wardrobe that was green—a lot of green for Emerald City—we repainted and covered the stage interior from green to blue screen, ensuring the background wouldn’t interfere when we needed to key it out to create the assets that Framestore could use. This set-up allowed Dimension to capture actors as 3D assets without them needing to move locations or revisit the costume, hair, or make-up departments.
For our volumetric capture work in film, we tend to deploy stages of between 70-110 cameras, in order to provide optimal data capture and asset quality. For Wicked, the stage was equipped with 106 cameras: 53 RGB and 53 Infrared cameras, with 96 positioned around the capture volume facing inward, and 10 mounted above, pointing downward, to achieve optimal results with any kind of performance. Our tech stack is based on the Microsoft Mixed Reality Capture solution, which is now licensed from Arcturus.
This technology is considered best-in-class and supports 10-bit colour, which is important for the VFX pipeline and our work in film. Dimension is the only volumetric capture studio that has developed a proven volumetric digital crowds pipeline. Previously this was used for Sony Pictures’ Whitney Houston: I Wanna Dance With Somebody and Peacock’s Those About To Die. So our pipeline has been used to bring crowds to life in ancient Rome, ’80s pop concerts, and now the land of Oz.
Were the performances captured in 4K or 8K, and why?
Performances were captured in 4K, which provides a strong balance between visual fidelity and manageable file sizes, allowing the VFX team to efficiently place and composite them where needed. Dimension also has a range of tools that support the VFX teams when working with the assets, such as motion blur, prop tracking and replacement.
How many performances did you capture? And how did you ensure that each one was different?
Dimension volumetrically captured 76 members of the supporting cast, under the direction of the Wicked team. The choreography and the desired action from the performances were pre-planned ahead of the talent being filmed in the volumetric studio.
Each actor performed for one minute, cycling through the actions and reactions so that the crowd in the end result was consistent with what was happening in the scene. Dimension then processed 76 minutes of final content.
Has the same workflow been used on any other project?
Our virtual crowds pipeline has been used in several different projects. On Whitney Houston: I Wanna Dance With Somebody, we volumetrically created huge crowds at Wembley and at several other performances in the film.
And for Those About To Die, our virtual crowds pipeline came together with our virtual production pipeline. We captured 90 actors and over 500 individual performances to create a crowd of more than 32,000 people, which could be scaled up to 90,000. This crowd populated the Circus Maximus and Colosseum environments used for the ICVFX shots on an LED volume.
Why was it important to track props?
We used prop tracking for the magnificent hats worn by the citizens of Munchkinland and the Emerald City. Hats can shift, tilt or bounce during the performance, and without tracking, this subtle motion can be lost or appear disconnected in the final volumetric asset.
By tracking the hats as separate props using an OptiTrack motion capture system, we ensured accurate capture of their position and motion relative to the performer, which allows for cleaner reconstruction, better fidelity, and flexibility if any adjustments or replacements are needed in post.
What happened to the volumetric footage once it had been captured?
The 75 minutes of footage—equaling about 100,000 frames—was processed, the background keyed out and cleaned up so the 3D performances could be handed over to the VFX team at Framestore.
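That frame count is roughly consistent with a cinema-style capture rate; assuming 24 frames per second (an indicative rate rather than one specified here):

\[
75\ \text{min} \times 60\ \text{s/min} \times 24\ \text{fps} = 108{,}000\ \text{frames}
\]

with higher capture rates pushing the total up accordingly.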
What was the biggest challenge of working on Wicked?
From our perspective, the turnaround time was one of the biggest challenges on this project. The production team didn’t decide they wanted to bulk out the crowds until they’d started the shoot, so it was important that we could move quickly and get the truck ready to go fast. Not to mention needing to also repaint everything from green to blue in that prep time.
How can volumetric capture help creatives reduce the cost of large crowd scenes?
One of the key benefits of using volumetric capture for digital crowds is that it provides the creative and VFX teams with individual 3D assets for each crowd member. This allows complete creative freedom: directors can change camera paths or crowd composition, select specific performances, or swap out real performances for volumetric ones late in the production if a creative decision requires it.
This flexibility helps to reduce costs where typically you might have to reshoot the plates in order to change the shot. It provides a practical, cost-effective option for filmmakers to create large crowds using existing cast members, rather than additionally hiring, dressing and directing hundreds of background extras or using traditionally more expensive VFX approaches to creating clothing and animating fully 3D digital extras.
With volumetric, the performance is a verbatim one-to-one representation of the actors’ performance, including the movement of garments and hair, which can be expensive to authentically replicate using CGI and animation.
In most cases, volumetric offers a cost-effective and efficient solution for scaling large crowds. For Wicked in particular, it provided the VFX team with realistic, authentic performances of background actors to use in the scenes, while preserving the beautifully crafted costumes they were wearing.
Jenny Priestley sits down with the DPP’s Rowan de Pomerai and Mark Harrison to discuss the organisation’s major milestone, plans for the future and challenges within the media industry
Originally launched in 2010 as an informal collaboration between the BBC, ITV, and Channel 4, the DPP officially became a new entity in 2015 under Mark Harrison’s leadership. After transforming into a fully independent organisation in 2022, it is now headed by Rowan de Pomerai, with Harrison serving as chief content officer.
The DPP began with a focus on building connections and relationships between different parts of the media supply chain to help the industry gain insight, but 10 years later, the organisation has evolved, with a strong international focus and a greater breadth of involvement from a wider range of companies.
According to de Pomerai, the evolution of the DPP has been continuous, with conversations developing over the years around the ways that content providers reach their audiences while adopting some of the newer technologies that have emerged, including the cloud. “That’s meant that the types of conversations we’re having are different,” he adds. “Often they’ve been much more about the technology strategy, whereas in those very early days, there was quite a big implementation piece around file-based delivery. It’s very much a continuation of that same theme about the big changes that matter to media companies.”
Key to the DPP’s success is providing both an understanding of what’s happening and where a company sits within the industry, and the connections members are able to make within the organisation’s membership. “Those two are really the things for me that anchor what we try and do,” states de Pomerai. “Different types of companies will get different benefits out of it, but everything really stems from that.”

Challenges faced by the media industry have certainly changed over the past decade, especially with the impact of the global streaming services. This has led to a change in consumer behaviour, which itself has seen the media industry moving from supply-led to demand-led, says Harrison.
“Really, the consumer sets the agenda,” he adds, “so the biggest challenge for any organisation is how you respond as those things keep changing. The picture’s become more and more complicated and interesting with more and more different content suppliers. That’s one of the key things that we always try to project when we are looking at insight and predictions of what’s going to happen in the future: any change that’s happening right now will never continue in a linear way.”
That evolution will continue, agrees de Pomerai, who suggests that the makeup of the media industry will change in the coming years. “We’re seeing new types of players in the industry, both content companies and suppliers. I think that there will be different types of companies involved, and I think there’ll be different topics to address, ones that reflect the change in the industry as a whole.”
Asked how they envision the DPP in another decade’s time, Harrison says he hopes it will continue to respond to the industry as it changes, and remain relevant and useful. “I’ve often said to people that what’s important about the DPP is we’re not defensive. If things change, we don’t protest about it, we go with it, and we look at what that change means.
“So, however the industry might have changed in another 10 years, I just hope that we’ll still have that relevance.”
Scan the QR code to watch the full video interview
As sustainability becomes the price of doing business, UK broadcasters are turning green plans into corporate ambitions, writes Neal Romanek
As foreign governments cut environmental institutions and Europeans spend more money on Russian fossil fuels than they do on Ukrainian aid, we could be forgiven for not knowing how much dedicated, frantic paddling is going on beneath the surface in the world of sustainability and decarbonisation.
The broadcast world has been making green improvements for over a decade. You could mark the “launch” of the transition from the unveiling of the albert carbon calculator at the Edinburgh TV Festival in 2011. albert was then bequeathed to BAFTA, and a young team turned what was a piece of software into the top hub for sustainability across the industry, primarily serving the broadcast production sector.
Given the dearth of comparable organisations globally—and the influence of UK broadcasters— BAFTA albert quickly became the global expert in media sustainability, a prospect which threatened to overwhelm the small, green team.
Now dozens of media sustainability consultancies have sprung up around the world, a few of them in loose partnerships with BAFTA albert, that cover a variety of specialities. Many of these are for-profit consultancies offering sustainability education, support and management to everything from huge corporations to individual productions. As these organisations have picked up the slack over the past few years, BAFTA albert has turned its focus back home and reprioritised UK production. The result has been that UK broadcasters have led the world in sustainability awareness and practice.
Formalising the change
We are now seeing UK broadcasters systematise and formalise their sustainability planning in new ways. While broadcasters might have had a dedicated staff member to act as a source of sustainability strategy, departments tended to attack problem areas piecemeal, rather than overhauling the organisation from the ground up. But those years of experience are now being codified into more organisation-transforming approaches, with sustainability becoming embedded not just in daily practice, but in business strategy.
Europe’s Corporate Sustainability Reporting Directive (CSRD) requires large companies to disclose details of ESG issues both internally and in their supply chains. This means that vendors and service providers are also trying to clean up their act across industries, and momentum is moving toward normalising sustainability action across European businesses.
ITV unveiled its Climate Transition Plan in Spring 2024, which set the bar for other broadcasters. The plan was built on six pillars: Operations, Value Chain, Internal Culture, Engaging Audiences, Climate Resilience and Industry Transformation. But ambition on this scale rarely happens without government help. In this case, UK broadcasters are aware of the legally binding government mandate of “net zero” by 2050.
The government’s Transition Plan Taskforce (TPT), running from 2022 to 2024, also helped UK companies to better align with government policies.
The ITV Plan follows the recommendations of the SBTi (Science Based Targets initiative), a UK-based non-profit that provides businesses with advice on decarbonisation. The ITV Transition Plan has also engaged its teams with direct incentives. Since 2022, part of the annual bonus packages of ITV management has been linked to achieving climate targets, including meeting year-on-year emissions targets for the company and achieving 100 per cent BAFTA albert certification for content produced and commissioned. All other employees in the organisation have specific ESG targets tied to their annual bonuses as well.
Plans into action
But corporate roadmaps don’t create change by themselves. We’re all very familiar with detailed PowerPoint presentations that lead to zero action. Jeremy Mathieu, head of sustainability at ITV—and once BAFTA albert’s international manager—said when the ITV Plan was launched: “Publishing the plan doesn’t mean we have all the answers. There’s still a lot of work to do to align business strategy and climate objectives, define the activities, metrics and targets we need to bring the transition plan to life.”
Mathieu also kept in mind that the goal of sustainability is about human wellbeing: “It’s much more than a decarbonisation roadmap, it’s about ensuring we are fit to thrive in a net zero future.”
The BBC has taken a systemic approach to its sustainability practice, attempting to incorporate sustainability awareness in everything from facilities to content, but they too are aware that sustainability practice needs to keep evolving. Over the next few months they will be launching their ‘TV Production Sustainability Strategy’, which will help inform decisions, both internally and for vendors, about greening content creation.
BAFTA albert continues to be a key hub for communication and collaboration in the industry, but broadcasters are more and more taking responsibility for their own sustainability destinies. Early in its history, albert really was home to the main experts in the industry, but now, as sustainability awareness has boomed, broadcasters and streamers can bring on board some of the best experts in the field—and pay them at rates well beyond the capacity of a charity like BAFTA albert, which has become more of a neutral space for competing companies to come together and coordinate.
“BAFTA albert has always worked closely with the broadcast and production community, over the last couple of years our working relationship has got even closer,” says April Sotomayor, head of industry sustainability at albert.
“Last year we introduced Task Forces, which are made up of BAFTA albert consortium members with specialised knowledge across key areas aligned with the Climate Action Blueprint,” she adds.
“The groups cover Standards, Measurement and Reporting, Climate Content, Sustainable Production, and Culture and Capability. The task forces have played an important role in the development of some important projects, including the introduction of uniform climate content tracking across six broadcasters and the introduction of BAFTA albert’s new suppliers directory.”
The big transformational impact both albert and the broadcasters can have is not internal, but on the wider industry, including their production companies, facilities and vendors. In ITV’s Climate Transition Plan, the bulk of the job ahead is reducing the company’s Scope 3 emissions, which include all those supporting companies and supply chains whose emissions can be very hard to pin down.
In addition to the albert task forces, working groups across the likes of news and sports continue to reach out to the rest of the industry. The albert Sports Working Group has been developing guidance for production at venues where events are held. This includes the creation and roll-out of a document that offers simple advice for venue management and sports production teams about how to cut emissions on location.
BAFTA albert has just relaunched its Suppliers Directory, which highlights industry suppliers who offer more sustainable solutions. Another big success is the Studio Sustainability Standard, which gives participating studios and facilities a sustainability rating and a path to continuous improvement.
Broadcast sustainability has grown out of its adolescence. There are very few companies in the industry that can claim ignorance, at least about the first steps in decarbonising. But of course, knowledge and action don’t always go together. Broadcasters are moving in the right direction. We can only hope that they move fast enough.
Future’s Best of Show at the 2025 NAB Show celebrated innovative products and solutions on show in Las Vegas. Here, we celebrate the winners in the TVBEurope category
Xplorer MAX
AEQ’s Xplorer MAX is a wireless intercom terminal that redefines communication in professional environments. With its unique digital radio technology in the 5 GHz band, the Xplorer MAX ensures quality of service (QoS), protection against interference and an exceptional range of up to 600 metres in open field with a single antenna.
Thanks to its advanced design, the Xplorer MAX minimises the need for multiple access points, making it easy to install and ideal for temporary deployments and large-scale environments. Each access point can manage dozens of terminals simultaneously.
At the core of CLOUDPORT’s advancements is its reimagined playlist management, which transitions from playlist-level to show-level publishing. The architectural shift reduces edit publishing time down to just a few seconds, significantly improving operator efficiency.
This foundation supports the innovative ‘Always Edit Mode’, allowing continuous content manipulation with automatic publishing, while offering a protective ‘View-only Mode’ to prevent accidental modifications. The platform now supports comprehensive recording options (Raw, Clean, and Branded feeds) with fast turnaround times, enabling nearly immediate content re-use across channels.
As the industry moves from the proof-of-concept phase for generative AI applications to production-ready, Amazon Nova is becoming an essential component to company success. Amazon Nova has deep integration with Amazon Bedrock, including features such as knowledge-based Retrieval Augmented Generation (RAG) data grounding, fine-tuning, distillation, multi-agent collaboration and guardrails. Amazon Nova includes understanding models, which accept text, image, or video input and generate text output; and creative content generation models, which accept text and image input and generate image or video outputs.
As the media industry transitions towards hybrid and cloud-based workflows, VX Media Gateway ensures seamless content transport with robust security, redundancy, and high-performance streaming across IP networks. One of VX Media Gateway’s key differentiators is its ability to operate in diverse deployment environments, whether in a private data centre, on-premises broadcast infrastructure, or a public cloud. The platform integrates natively with Appear’s award-winning X Platform hardware to enable best-in-class hybrid processing solutions that optimise efficiency, reduce operational complexity, and future-proof media workflows. Leveraging Appear’s expertise in secure IP video transport, the platform supports end-to-end encryption, seamless network redundancy, and industry-standard transport mechanisms such as SRT, SMPTE ST 2022-7, and IP-FEC.
Bitmovin’s AI Scene Analysis uses advanced AI models to provide customers with contextual metadata at scene-level granularity. This service can provide the basis for many downstream workflows or pipelines to provide additional value to customers, ultimately defining better viewer experiences. The most obvious is using this metadata to enrich content search and recommendation engines, a huge focus in the industry to keep users engaged and provide more personalised experiences. Another is with content monetisation, where contextual scene metadata can be used for contextual advertising, or with AI Scene Analysis’ pre-integration with other Bitmovin products like Bitmovin’s VoD Encoder, unlocking out-of-the-box pipelines.
DaVinci Resolve 20
DaVinci Resolve 20 introduces more than 100 new features, including powerful AI tools designed to assist users with all stages of their workflow. Use AI IntelliScript to create timelines based on a text script, AI Animated Subtitles to animate words as they are spoken, and AI Multicam SmartSwitch to assemble a timeline with camera angles based on speaker detection. AI Audio Assistant analyses the timeline audio and intelligently creates a professional audio mix. Blackmagic Cloud’s folders let users easily share extra clips, images or graphics for a project with other collaborators. All cloud content appears as virtual clips and folders until used in a project, after which it is synced locally. Creatives can also now review projects in Presentations with clients who don’t have a Blackmagic Cloud account.
EASY-IP
EASY-IP is a modular, future-proof routing solution built using standard Ethernet switches and arkona’s programmable AT300 PACs (Programmable Acceleration Cards). Each AT300 PAC features dual 100G Ethernet ports and supports hot-swappable SDI/MADI I/O modules with up to 16 inputs and 16 outputs per module. For UHD, each PAC handles up to 16 inputs and 16 outputs without redundancy, or 8 in/8 out with full redundancy. Each PAC can also serve as a PTP master, JPEG XS encoder/decoder, or audio DSP engine with thousands of meters, filters, and faders—all dynamically licensable. Internal crossbars ensure full flexibility between SDI and IP workflows.
The Tessera SQ200 is a powerful 8K LED video processor, 20x more powerful than the current industry-leading Tessera SX40. It can drive up to 36 million pixels from a single processor, power canvases up to 64K pixels wide, and supports AV over IP at 100 Gbps with full network redundancy. It natively supports full 8K at 60fps 12bpc 4:4:4 and embraces the latest 100G Ethernet technology to drive an entire 8K LED wall down a single fibre cable. For input, it supports DisplayPort 2.1, HDMI 2.1 and 12G SDI with 4 sets of inputs enabling both 8K and 4x4K workflows.
ChurnIQ AI-ssistant
ChurnIQ AI-ssistant from Cleeng is an intelligent AI-powered assistant tailormade to help streamers and other D2C companies get precise answers from their data without requiring deep analytic expertise. By asking the AI-ssistant for any answer or report using simple prompts, it will quickly generate data visualisations, and uncover hidden trends and patterns from subscriber data. It equips users with real-time alerts on churn risks, automated intervention strategies, and deep insights into customer behaviour.
CuttingRoom’s cloud video editor transforms live video editing with its innovative Growing Live Timeline, allowing real-time manipulation of live feeds. It streamlines content creation, editing, and publishing, eliminating delays and enabling instant delivery. Its responsive UI, fast ingest, upload, rendering, and publishing make it the preferred cloud video platform for scalable, collaborative editing from anywhere. CuttingRoom allows users to capture content directly from live streams, the CuttingRoom Reporter iPhone app, or cloud services.
The ProDeck 24 is a professional-grade control surface, designed for desktop control in the broadcast and AV industry. The ProDeck 24 boasts a robust, embedded computer, ensuring smooth performance for tasks like content streaming and multimedia editing. The device also features a high-resolution 8” IPS display with 1280 x 800 resolution, offering a tactile touchscreen experience. Each of the ProDeck’s 24 programmable buttons serves a dual purpose, functioning as a control and providing interactive menu or real-time video
enTranslate Mobile
ENCO introduces enTranslate Mobile, a novel, disruptive solution to the challenge of making broadcast and in-venue content accessible to every viewer or audience member. For broadcasts, enTranslate mobile offers direct access to multilingual translations through QR codes displayed on TV screens that redirect users to a mobile-friendly website where captions immediately populate. In any scenario, including broadcast, enTranslate’s native on-prem capabilities ensure translations continue uninterrupted should customers lose network connectivity. enTranslate Mobile also gives audiences a way to follow along on their personal devices in their language of choice.
Studer Vista in VUE
Studer Vista in VUE introduces the expanded Vista control using the Evertz VUE Intelligent User Interface. This powerful combination offers over 2,000 bidirectional controls that cater to the needs of production control rooms, multiple small production suites and remote productions. The bi-directional control via MAGNUM-OS 2000+ control points ensures seamless interaction between the user and the system, providing a fluid and intuitive experience. The Vista control widgets include faders, EQ, filters, dynamics, mic pre gain, digital trim, aux control, panning, isolated PFL, mute, and snapshots.
Falkon X2
Designed for broadcasters who thrive on delivering the highest-quality live coverage of sports, breaking news, and events from the most remote corners of the globe, Falkon X2 sets a new standard for reliability, versatility, and innovation in live production. Dual-modem, quad-antenna 5G technology with state-of-the-art 2x2 MIMO support ensures seamless connectivity, long range, and maximum efficiency, even under challenging conditions. Its capabilities include 4:2:2 10-bit HDR video and both SDI and HDMI inputs. The Falkon X2 offers broadcasters a highly efficient solution to deliver pristine live video over any network while maximising the benefits of the latest 5G infrastructure.
Nuke Stage is a virtual production tool that gives VFX artists full creative control over imagery and colour from start to finish, increasing efficiencies and simplifying virtual production workflows for projects of all sizes. A purpose-built, standalone software solution, Nuke Stage requires no other virtual production, ICVFX tools, or game engines. It is hardware agnostic, so productions can use their preferred hardware and easily synchronise across render node clusters to support stages of varied sizes. It also uses open VFX standards like USD and EXR, so artists are free to use both the hardware and creative applications that best suit their production. Nuke Stage unifies virtual production and VFX pipelines in a simple, scalable manner that makes virtual production a viable technique for projects of all sizes. It was purpose-built for virtual production, based on extensive study and deep understanding.
Tier 1 premium sports demand the highest quality video outputs and the MCC-UHD is designed to deliver the best possible results, converting from frame rates as low as 23.98fps to 60fps in UHD 2160p and HD (1080i/p and 720p), incorporating the best deinterlacer and a full HDR-SDR conversion and colour management suite. InSync’s newest premium hardware occupies 50 per cent less rack space than the company’s previous premium UHD device, at only a single rack unit (RU), with power consumption of only 80-100W. The product now features SDI and an optional ST 2110 interface with 25GbE, with NMOS IS-04/05, PTPv2 and 2022-7 support. The HDR-SDR conversion (HLG/PQ/Slog3) offers custom LUT management and BT 709/2020 colour space conversion.
MwareAI Support Assistant
The MwareAI Support Assistant integrates artificial intelligence into streaming services to take user experience and operational efficiency to the next level. The MwareTV platform already has the ability for operators to create rich, branded user interfaces across 16 supported platforms, including mobile devices, tablets, and popular TV apps such as AndroidTV, FireTV, Samsung, LG, Vidaa and Titan. By adding intelligent functionality to these user interfaces, MwareAI Support Assistant empowers users and transforms the support experience by delivering faster, smarter and more intuitive solutions. Contained within the platform is the unique no-code App Builder, which allows operators to develop highly functional and fully branded interfaces for all the common platforms, without the need for any coding knowledge.
OBSBOT Tail 2
OBSBOT Tail 2 builds upon the legacy of the original OBSBOT Tail, the world’s first Auto-Director AI camera. Pushing the boundaries of smart videography, Tail 2 debuts as the world’s first three-axis PTZR 4K camera, setting a new benchmark for AI-powered live production. Designed for exceptional visual storytelling, Tail 2 is equipped with a 1/1.5” CMOS sensor and a 12-element optical lens, delivering 4K/60fps. With 5x optical and 12x hybrid zoom, it ensures crisp imaging across various shooting distances. Tail 2 is powered by AI Tracking 2.0, offering enhanced human tracking precision and expanded subject recognition. The new “Only Me” mode ensures unwavering focus on the selected subject, with an Auto Zoom feature maintaining optimal proportions during tracking.
Media companies sell more inventory at higher prices when they have better forecasts, but many are faced with a major challenge: the more data they collect to make sense of their complex business, the harder it is to gain accurate insights. Operative’s new forecasting product OnTarget delivers accurate forecasts quickly and at scale. Built with the latest AI and machine learning technologies, OnTarget delivers hyper-accurate predictive audience analytics. OnTarget forecasts have proven to be 30-40 per cent more accurate than manual methods for media customers.
The demand for ultra-high-resolution content, global streaming, and AI-driven content is causing explosive data growth in the media and entertainment industry and threatening to overwhelm content managers and their budgets. The Quantum Scalar i7 RAPTOR is purpose-built to meet this demand, offering unmatched storage density, long-term preservation, and seamless integration into media workflows. The Scalar i7 RAPTOR is the highest-density tape archive solution available, delivering up to 200 per cent more storage density than traditional enterprise tape libraries. A single cabinet can store over 36 PB of native capacity with LTO-9 and up to 72 PB using LTO-10, dramatically reducing the physical footprint and cost of large-scale content archives.
Raiden is transforming weather graphics production in newsrooms, empowering meteorologists, producers, and designers to deliver captivating, data-driven weather stories. Raiden acquires, processes, and visualises preferred weather data from various sources for the graphics engine. By combining XPression and Voyager, Raiden seamlessly integrates real-time data, immersive visuals, and intuitive production tools to produce high-impact Augmented and Extended Reality (AR/XR) weather content. Raiden simplifies complex weather storytelling by using the XPression DataLinq™ Plugin. This eliminates manual data entry, enabling broadcasters to generate data-driven graphics in minutes.
Streamline Pro is a web-based Media Asset Management (MAM) and video editing system designed to meet the fast-paced demands of content creators. Unlike legacy MAM systems that require expensive infrastructure and siloed workflows, Streamline Pro is a web-based platform that simplifies content organisation, editing, and distribution, eliminating inefficiencies and reducing costs. The platform empowers broadcasters, editors, and producers to organise, edit, and distribute high-quality media faster, reducing time-to-air and enabling quicker response times for breaking news.
The MKH 8018 stereo shotgun RF condenser microphone is Sennheiser’s latest broadcast microphone. Its sonic sensitivity and low noise allow recording professionals to capture pristine sound. With its dual capsules, the MKH 8018 provides a coincident stereo signal that creates an immersive sense of directionality and spatial realism. A switch permits the choice of flat response or low-frequency roll-off via an integral 70 Hz high-pass filter to help control undesired ambient noise.
The Diamond C1 chassis is a modular, high-end frame designed to support VITEC OG cards, offering a reliable, accessible, and cost-effective solution for video contribution and IPTV head-end applications. It allows users to integrate MGW Diamond or Ace Decoder OG cards within desktop or rack-mounted setups. Engineered for continuous operation, the chassis features adaptive cooling and dual redundant power supplies, with a front panel LCD providing real-time status and OG card type information. Its robust architecture ensures continuous operation, even in the most demanding environments.
tx darwin
tx darwin is Techex’s innovative, modular software platform engineered for live media transport, processing, and monitoring that’s designed to underpin the most demanding Tier 1 workflows. With a microservice architecture, tx darwin supports seamless encoder switching, real-time SCTE-35 manipulation, and high-performance SRT delivery, making it uniquely suited for modern, hybrid, and fully cloud-based broadcast operations.
SGAI for extended DVR
The way we watch live TV is changing rapidly. In recent years, demand for viewing via streaming has increased significantly, and viewers’ expectations for better viewing features, like extended DVR/rewind windows, have grown. But extended DVR windows have been hard to monetise, as they require a lot of server power when applying server-side ad insertion (SSAI). Up to now, broadcasters with a live rewind mode have had to serve the same ads from the original live stream without generating any extra ad revenue.
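SGAI (server-guided ad insertion) is the approach the headline refers to: rather than stitching a unique stream for every viewer, the server only signals each ad opportunity in the manifest and the player fetches and splices the ads itself, so an extended rewind window no longer multiplies server load. As a rough illustration only (SGAI is commonly implemented with HLS interstitial tags, but the IDs and URLs below are hypothetical and the exact attributes depend on the packager), a helper that emits such a signal might look like this:

```python
from datetime import datetime, timezone


def interstitial_tag(break_id: str, start: datetime, duration_s: float, asset_list_url: str) -> str:
    """Build an HLS-interstitial-style EXT-X-DATERANGE tag for one ad opportunity.

    With SGAI the server only signals the break; each client resolves the
    X-ASSET-LIST URL itself and splices the returned ads locally, so no
    per-viewer stream has to be stitched on the server side.
    """
    start_utc = start.astimezone(timezone.utc).isoformat().replace("+00:00", "Z")
    return (
        "#EXT-X-DATERANGE:"
        f'ID="{break_id}",'
        'CLASS="com.apple.hls.interstitial",'
        f'START-DATE="{start_utc}",'
        f"DURATION={duration_s:.1f},"
        f'X-ASSET-LIST="{asset_list_url}"'
    )


# Hypothetical ad break two hours back inside an extended DVR window
print(interstitial_tag(
    "break-0042",
    datetime(2025, 5, 1, 18, 30, tzinfo=timezone.utc),
    30.0,
    "https://ads.example.com/assets?break=0042",
))
```

Because the ad decisioning happens when each client reaches the marker, the same signal can be monetised whether the viewer is watching live or two hours behind.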
Jünger Audio flexAI Platform
Jünger Audio’s flexAI software version v2024-04r11 introduces Next Generation Audio S-ADM metadata authoring and rendering for immersive and personalised Dolby Atmos workflows. Other features include advanced support for Ember+, NMOS IS-04 and IS-05, SMPTE ST 2110-30 and -31 Level A, B, and C, SMPTE ST 2110-41 metadata transport, a new stereo and 5.1-channel Automixer, as well as Jünger Audio’s FM Conditioner channel strip for radio production.
With the surge in 4K, 8K, HDR, and virtual production, legacy storage systems have become a critical bottleneck, risking delays, data loss, and rising costs. TrueNAS H30 solves this challenge head-on, delivering high-performance, scalable, and resilient storage infrastructure tailored for modern European broadcast and post-production workflows. TrueNAS H30 addresses Europe’s unique needs — strict data protection laws and the need for full data sovereignty — with a solution that is scalable, secure, and fast. It helps broadcasters protect high-value assets, meet compliance standards, and scale for growth, all without the complexity and cost of legacy vendors.
Jenny Priestley discovers how UK-based volumetric data experts Visualskies are driving the world of TV and film production forward with cutting-edge scanning technologies, innovative data capture techniques, and a commitment to delivering seamless solutions from set to screen
Specialising in volumetric capture with techniques such as LiDAR and photogrammetry, Visualskies first launched in 2016 and is the brainchild of Joseph Steel and Ross Dannmayr.
Originally focused on utilising drones for visual effects, the company has quickly grown, broadening its services to create “digital versions of real things” and working across film, TV and advertising. Visualskies’ data is also used in other industries, with the company recently creating a digital twin of the UK’s Houses of Parliament as part of the venue’s restoration project.
“We’ve been using photogrammetry technology since 2016 and combining it with LiDAR data,” explains Steel. “That’s really been our USP, so you get the best of both worlds, the high-resolution geometry from LiDAR and the high-resolution textures from photogrammetry.”
Visualskies expanded into volumetric video around five years ago, capturing people in 4D, enabling clients and vendors to create a precise likeness or performance with no animation required. One of its first projects involved Justin Bieber performing in a computer-generated environment.
“Currently, we’re part of a Bournemouth University research project looking at Gaussian Splat rendering or 4D avatars,” reveals Steel. “You can retain the precise likeness of everything from material surfaces to hair, and that’s something that volumetric data has found very difficult to achieve.”
For 4D capture, the team uses a rig made up of the IOI Volucam and Z CAMs to capture performances in 4K, with human intersections segmented in order to achieve a 16K resolution. Nikon and Sony cameras are used for stills.
Please look after this bear
Visualskies’ recent high-profile projects include Paddington in Peru, the third part of the hugely successful film series from StudioCanal. The team was asked to travel to Colombia and Peru and digitise landscapes for the production. Among the locations scanned was Machu Picchu, the 15th-century Inca citadel.
“We had the honour of digitising that entire landscape and walking up and down the Stairs of Death with our LiDAR scanners and our drones, and mapping that landscape to the highest level of detail possible,” says Steel. “Our remit was very much location scanning on that film. We also scanned the UK sets with drones, did some LiDAR and photogrammetry, and worked on the rivers that Paddington needed to travel through during the story.”
Joseph Steel
Access to Machu Picchu had to take place before it opened to tourists at 10am, which meant a very early start for the Visualskies team. “We were there for five days and at 4am the team would travel to Machu Picchu, and start scanning at about 6am. There aren’t any driving routes. They had to get the train there and then hike the equipment up every day. It was quite a challenge," adds Steel.
"Unfortunately, I was on another project also in Peru at the same time, so I didn’t get to go. Lydia Fauser, who’s head of 3D processing, and one of our main drone pilots, did the drone capture, Duncan Lees did the LiDAR, and then Mat Hay was the photographer doing the photogrammetry. It was a really good team because it’s a tough location, and very special for them to go and do that.”
The team captured around 16,000 images and 600 LiDAR scans for Machu Picchu alone.
Once all the images had been captured, they were brought back to the UK and ingested into Visualskies’ servers, with each one processed individually. “We have created our own image processing tool, VS Labs, that processes the images automatically. It isn’t publicly available as it’s an in-house tool, but it automatically colour corrects all the thousands of images so you don’t have to do it manually.”
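VS Labs itself is proprietary and Steel doesn’t describe its algorithms, but the general idea of automatic, batch colour correction can be sketched with a simple grey-world white balance applied uniformly across a folder of captures. This is a minimal illustration of the concept, not Visualskies’ actual pipeline, and the file paths are hypothetical:

```python
from pathlib import Path

import numpy as np
from PIL import Image


def grey_world_balance(img: np.ndarray) -> np.ndarray:
    """Scale each RGB channel so its mean matches the overall grey mean."""
    means = img.reshape(-1, 3).mean(axis=0)          # per-channel means
    gains = means.mean() / np.maximum(means, 1e-6)   # gains that equalise them
    return np.clip(img * gains, 0, 255).astype(np.uint8)


def batch_correct(src_dir: str, dst_dir: str) -> None:
    """Apply the same automatic correction to every image in a folder."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src_dir).glob("*.jpg")):
        pixels = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
        Image.fromarray(grey_world_balance(pixels)).save(out / path.name)


# batch_correct("captures/machu_picchu", "corrected/machu_picchu")  # hypothetical paths
```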
The images were then combined with the LiDAR data, which was processed in Leica Cyclone REGISTER 360. Once all the LiDAR data was prepared and all images processed and colour corrected, the data was entered into Reality
Capture, which created a combined model of the images and the LiDAR. “It aligns them all together and then processes it into a textured mesh. That process, usually for a landscape, takes between a week and a couple of weeks. Machu Picchu, being a super large data set, took a couple of weeks to process.”
Travelling to and digitising far-flung places isn’t the only location service Visualskies offers. The company has developed VS Scout, which allows users to scout locations from the comfort of their sofa. The iOS application is built in Unreal Engine and was initially developed in 2018 for use on the National Geographic series Lost Cities with Albert Lin.
“They wanted to be able to visualise scan data in context with the landscape, and also see virtual elements, like rebuilt fortresses and things like that. We developed the application for that show, and then that brought us into Covid just shortly afterwards.
“We modified the app and tailored it to be able to shoot content for recce, and to place cameras and add digital actors, digital sets so that you could recce your locations. That was first used on season one of House of the Dragon for the Dragonstone bridge location. Since then, it’s been deployed on a few different Disney and Netflix shows to visualise content.”
Steel describes the app as a precursor to virtual production, enabling production teams to create shots and solve technical problems. “They used it for the virtual production scene in House of the Dragon, where they had Dragonstone as a background, and a real foreground. One thing they found out by using our app was that the steps actually needed to be rotated ever so slightly so that they could see the castle in the background when tilting the camera up. That was just a really simple thing they solved, and without being able to see it all digitally in one scene, they wouldn’t have been able to figure it out so easily.”
As well as digitising landscapes, Visualskies is an expert at creating 3D versions of humans. This was demonstrated on Ridley Scott’s epic, Napoleon, when the team helped create an army. Visualskies first began digitising actors with their Cyber Rig during the pandemic. The company built all of the computing and hardware in-house.
“We built and created all of the systems to operate it ourselves and manufactured the trussing that it gets attached to. We designed it with transportation in mind, so we could take it to locations. On Napoleon, for example, we were in a different location for around
10 weeks, so that meant scanning with it Monday to Friday, sometimes Saturday as well, packing it up on Sunday and moving it to the next location. It was pretty crazy,” says Steel.
The rig was used to digitise actors for the film’s battle scenes which needed hundreds of thousands of people on screen. “Shooting that many people is almost impossible. If you think about just the catering involved, it’s more than an army! You have to have some element of digital actors in those scenes, and that’s what we were scanning, all of the range of cast for the different battles that happened across the film.”
The visual effects industry has undergone huge changes in recent years with the advent of virtual production and Unreal Engine. Steel and the Visualskies team saw which way the wind was blowing in the late 2010s, adapting their scanning and processing workflows to suit the requirements of Unreal Engine. “That’s become easier these days with the advent of Nanite, which enables you to import large data sets into Unreal Engine without affecting real-time rendering.
“Over the past year or so, we’ve been developing solutions with a new rendering technique called Gaussian Splatting, which essentially allows you to retain the material quality, the transparency, the reflections within your 3D scan—something that photogrammetry doesn’t do. Photogrammetry captures the texture and the shape of something, but Gaussian Splats capture the material qualities as well. We see
that as being one of the next big things to be adopted by visual effects.”
Steel says he also expects artificial intelligence to have an impact on visual effects, especially as it converges with real-time rendering, enabling creatives to enhance their output and visualise content in real time. “Currently, that technology has obviously been used in virtual production stages and volume stages, but we see it going outdoors very soon, things like drone shots, drone plates, you can shoot final pixel quality outdoors,” he states.
“Also, VFX companies owning their own generative AI models to do things like de-ageing, like Metaphysic which was used on Robert Zemeckis’ Here. The important thing that they’ve done is train their own data and they own their own models.”
In terms of what the future holds for Visualskies, Steel says the company is focused on automation, Gaussian Splat versions of rendering and 3DGS, which is creating geometry from Gaussian Splats. “This year for us is all about automation and refining our pipelines,” he adds.
The company is also working towards a reduction in hardware, and the previously mentioned partnership with Bournemouth University is looking at how to reduce the number of cameras required to generate Gaussian Splats.
“Everything is progressing so fast at the moment,” admits Steel. “We’re a future-focused scanning company, and we’re always looking to that next thing to try and stay on top of it.”
In the latest of our series focusing on the day-to-day realities of working in media and entertainment, TVBEurope meets production development producer Hannah Robinson, an integral part of the team at dock10 Studios
What is your job title and what does it entail?
I’m a production development producer in the innovation team at dock10, where I work closely with our studio and post teams to support visiting clients and production teams. My role is all about helping clients navigate the world of virtual studio production, motion capture, and occasionally, integrating AI into their projects. This can involve everything from giving tours of our facilities and running tech demos, to developing schedules and quotes, and managing the pre-production process. I also produce and host events for dock10’s key partners, and sometimes take on hands-on roles in the studio—like directing or floor managing for small projects and charity events. I regularly attend industry events and sometimes I am invited to speak at those events as a virtual production and dock10 representative.
Tell us about your most recent project.
We’ve just completed a 12-month InnovateUK BridgeAI grant-funded R&D project called ALL:VP. This collaborative research project, developed in partnership with the University of York and 2LE Media, explored how cutting-edge computer vision AI and generative AI can work with green screen virtual studio technology to enable real-time lighting interactions between physical and virtual worlds. I was responsible for coordinating efforts across all consortium partners, managing timelines, and ensuring we delivered on everything outlined in our original proposal. This included reporting progress and outcomes to InnovateUK, as well as helping to oversee the design and build of a new R&D studio at dock10, equipped for motion capture, facial capture, talent tracking, and real-time animation. Alongside our ALL:VP project, our team have delivered several commercial productions, including most recently Sally Lindsay’s 70s Quiz Night for Channel 5’s 70s Week, where my team created the show’s virtual set.
Describe your daily working routine.
I’m very lucky because, as cliché as it sounds, every day really is different. We get so many projects sent in our direction that can be in the very early stages of development (sometimes they are not even named yet), but clients trust us to help them to develop their initial concepts so they can take them to commissioners to fund a sizzle or a pilot. This means one day we could be researching the skeletal structure of a Quetzalcoatlus and the next day, we’ll be in the studio experimenting with ageing technology—I have genuinely seen what I’m allegedly going to look like in 30 years’ time! I manage our team’s workload, coordinate where everyone needs
to be to get projects delivered, and try to stay across everything going on in the studios that could require or affect our team.
What sort of technology do you work with on a daily/frequent basis?
For my specific role, I use the full Microsoft Office suite to try and keep the team and myself organised. I regularly delve into Adobe Creative Cloud, with Premiere Pro being my favourite platform to use. I’ve edited demonstration pieces we’ve filmed in the studio for clients as well as submission videos for industry awards which have helped us bring back the trophies. I’ve filmed for these pieces on our Sony 3500 studio cameras and a trusty GoPro Hero 10.
I have been utilising ChatGPT, InvokeAI and other AI platforms such as Adobe’s Firefly for project-specific reasons, as the interest in AI in our industry grows rapidly. In pre-production and in the studios, our team uses technologies and platforms such as Unreal Engine, Mo-Sys, Zero Density, Vicon, Xsens, Faceware, Epic’s Live Link and much more. We’ve also run R&D sessions testing various virtual reality headsets.
How has technology changed your life since you started your career?
Technology and innovation have always been at the forefront of the companies I’ve worked for and in my roles. Before dock10, I worked in the world of sports media as a producer and director where I developed, alongside Wolverhampton Wanderers, the first pyro, firework, laser, DJ and light show to come to a Premier League football stadium before a match. The content I developed for Molineux’s big screens and advertising boards ran alongside the pre-match show to integrate and produce an incredible display. All only possible thanks to developments in technology that gave the matchday audience a show they’d never forget.
Now, I work in the field of innovation for a broadcast facility, which means we’re constantly looking for ‘what’s next’. Technology is developing at a rapid rate, which keeps us on our toes; that challenging and exciting nature is why we’re all working in the world of innovation!
What piece of equipment can you absolutely not do without?
Personally and professionally, it is absolutely my Bose wireless earbuds. As someone who loves a walk to start/end my day, I can’t go without music. And professionally, music and sound are such an important and integral part of our industry, both in live production and post production; crisp, clear sound is always key. If I’ve not got access to a pair of Sennheisers, my Bose earbuds are the next best thing!
And what do you wish someone would invent to make your job easier?
For a producer, one of the most common challenges is dealing with the unexpected; plans change last minute, and sometimes your Plan A, B, and C all go out the window in quick succession. So, anything that gives even a 30-minute heads-up before things shift would be a game-changer! That said, navigating those curveballs is also part of the thrill of my role. Working as a tight-knit team to adapt and come up with a solution can be incredibly rewarding.
What has been your favourite/most memorable assignment?
One of my favourite projects, both professionally and personally, was working on an episode of BBC Studios’ anthology series Inside No. 9. Growing up watching The League of Gentlemen means I’m a huge fan of Steve Pemberton and Reece Shearsmith. Our team was able to work on an episode where we developed a virtual quiz show set. Being a part of the set design team meant we were able to see the script as it was in development, hold demonstrations for the production team and then Steve and Reece themselves. It was a real privilege to have been a part of!
And least favourite (names may be withheld to protect the innocent/guilty)?
Working in the innovation team means we’re constantly pushing the boundaries and trying something completely new. This, as you can imagine, can certainly come with its challenges. But the challenges and harder days in the studio have always led to developments that all parties have taken a lot of learning and understanding from.
What do you see as the next big thing in your area?
With the success of our ALL:VP project, I believe the next big thing in virtual production and broadcast in general will be the continued blending of real and virtual worlds. The advancements we’ve made in lighting mean that we no longer need to be on a physical beach to replicate that exact lighting environment; we can now achieve it all within a green-screen studio. I really believe this is going to be developed even further with the integration of virtual reality, allowing viewers to become fully immersed in a programme or film, to not just watch a scene, but to be in it.
A new white paper from the European Broadcasting Union and sustainable analytics specialist Humans Not Robots provides welcome insight into the power consumption and carbon footprint of cloud-based infrastructures, writes David Davies
As more effective methods of measuring and reporting the carbon footprint of broadcast operations have emerged over the past few years, one particular area of activity has remained decidedly elusive.
With the tech giants behind the main platforms not always inclined to provide full and transparent data, it has become increasingly apparent that the true impact of cloud-based infrastructures is going to be difficult to ascertain.
With migration to the cloud accelerating in many areas of broadcast production, it’s an issue that is only set to become more acute—hence a new white paper on the challenges associated with understanding the power consumption and carbon footprint of broadcasting cloud-based infrastructures feels both timely and welcome. Entitled Cloud Energy Use Tools, it’s the result of a collaboration between the European Broadcasting Union (EBU) and Humans Not Robots (HNR), the company behind HNR to Zero, an analytics platform that helps data-heavy
companies optimise their technology operations in order to run more sustainable and efficient businesses.
As to how the collaboration came about, EBU senior sustainability lead, Dr Hemini Mehta, says that “the actual sustainability community is very small, so it’s pretty easy to find everyone.” But upon speaking with the HNR team, led by founder and CEO Kristan Bullett, it was clear they shared concerns about “the fact that cloud-based operation is going to be a very big challenge in terms of both understanding and propagating sustainability. A lot of the time it’s like a black box, especially with the big cloud providers, so trying to measure the tech stack in general is really difficult.”
“The weighting between on-prem versus cloud is going to move more and more towards cloud,” notes Bullett, “and I don’t think it’s overly contentious to say that the cloud providers aren’t being as transparent as we would like them to be when it comes to understanding [their cloud services].” However, he stresses that he doesn’t think this is “something that’s happened as a point of deceit. For example, it’s easy for marketing teams to get excited over new topics and take [some claims at] face value.”
Nonetheless, with both organisations especially aware of the prevalence of unrealistic assertions about renewables, it was apparent that “trying to take a more independent view on this is something that is needed and wanted,” says Bullett.
The starting point for what ultimately became the Cloud Energy Use Tools white paper (which can be downloaded from the EBU website) was an analysis of the EBU’s own relationship with the cloud.
"We wanted to do an analysis piece on how we were using our cloud services, the times that we were using them, and whether there were any recommendations regarding whether we were using the cloud during a more intensive period of energy consumption and if it was possible to change things around,” explains Mehta.
The analysis was based around an ingestion of three months of data concerning the EBU’s use of AWS, fed into the EBU PEACH platform. To assess the impact of cloud usage, the Thoughtworks-sponsored open source Cloud Carbon Footprint (CCF) tool was employed as it “provides a methodology for estimating power consumption and carbon footprint derived from cloud usage for AWS, GCP and Azure.”
Focused on providing a location-based cloud emissions estimate, the CCF tool collates compute, storage, networking and other usage data from major cloud providers in order to calculate estimated energy (Watt-Hours) and greenhouse gas emissions expressed as carbon dioxide equivalents (metric tons CO2e).
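In essence, a location-based estimate of this kind multiplies a usage metric by an average power figure, a data-centre overhead (PUE) factor and a regional grid intensity. The sketch below shows the shape of the calculation; the coefficients are placeholders for illustration, not CCF’s published values:

```python
def estimate_compute_emissions(
    vcpu_hours: float,
    avg_watts_per_vcpu: float = 3.5,    # placeholder average power per vCPU
    pue: float = 1.135,                 # assumed data-centre overhead factor
    grid_kg_co2e_per_kwh: float = 0.23, # assumed regional grid intensity
) -> tuple[float, float]:
    """Location-based estimate in the spirit of the CCF tool:
    usage -> watt-hours -> kWh (with PUE) -> kg CO2e via grid intensity."""
    energy_kwh = vcpu_hours * avg_watts_per_vcpu * pue / 1000.0
    return energy_kwh, energy_kwh * grid_kg_co2e_per_kwh


# e.g. three months of a 16-vCPU instance running around the clock
kwh, kg = estimate_compute_emissions(vcpu_hours=16 * 24 * 90)
print(f"{kwh:.0f} kWh, {kg:.0f} kg CO2e")
```

The same pattern repeats for storage and networking, each with its own energy coefficient.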
The report notes that “good work” has been initiated by the EBU
in terms of being able to gain a clearer breakdown by activity type, but also highlights the broader requirements – including the need to measure more consumption on a continual basis, define KPIs that make sense for the business, and optimise tag resource utilisation so that data can be separated by customer, resource and workflow – that continue to confront the industry as it attempts to gain a better understanding of cloud sustainability.
It also presents some pressing questions to be addressed across the media business, including ‘How to create a baseline so that reporting is made against a marker?’ “Without a baseline there is nothing to measure against,” notes Bullett. “So having a baseline that means you can look at the trends and how that changes over time is [hugely beneficial].”
“The weighting between on-prem versus cloud is going to move more and more towards cloud and I don’t think it’s overly contentious to say that the cloud providers aren’t being as transparent as we would like them to be when it comes to understanding [their cloud services]”
KRISTAN BULLETT
The paper also notes the importance of being able to deal with attributional (direct impact) vs consequential (indirect impact) emissions, and to apportion data in an effective way, in garnering a meaningful understanding of an organisation’s cloud usage impact.
The collaboration between the EBU and HNR is ongoing; in addition to a second project that aims to deepen understanding of cloud tech stacks, the two organisations are also involved in the ECOFLOW IBC Accelerator project which continues with a second phase in 2025 that will “zoom in on some linear IP-based distribution workflows with a view to exploring progressive technology that will support a reduction in the environmental impact of those workflows,” explains Bullett.
In the meantime, there is a hope that the HNR/EBU white paper will provide a starting point and basic methodology for broadcast stakeholders everywhere to begin reviewing their own cloud usage. “The idea is that a lot of this analysis piece hasn’t been done before, or at least not many broadcasters or content providers have done it. And by looking at this white paper they can see that it doesn’t have to be overly onerous,” notes Mehta.
“We would like to get to a point where there can be a next phase to this,” adds Bullett. “It would be great if we can look at the report as a blueprint, take it to broadcasters, and ultimately come back with some other examples of how organisations are looking at the environmental impact of their cloud usage.”
Presenting a story from a character’s perspective might be all the rage, but getting effective POV footage is difficult if you can’t see what you are doing. Kevin Emmott speaks to Cyclops POV founder James Medcraft about the physics of perception and why it’s all in the eyes
These days, you can’t switch on the TV without seeing the world through someone else’s eyes. Immersive point of view (POV) footage might not be anything new—it’s been around in various forms since the 1920s—but today it’s everywhere, from films and television to advertising and gaming.
But here’s the rub: POV footage is impossible to do properly if you can’t actually see what you are doing. It’s down to the simple physicality of it, and it means that most POV footage just isn’t all that good. Or easy to do. Or especially efficient.
James Medcraft is a full-time director of photography with a background in 3D design, photography and cinematography. Much of Medcraft’s professional career is spent in virtual production and virtual reality environments, and he’s done his fair share of POV work.
“There’s definitely been a shift in the way content is created, shot and consumed,” he says. “There is a much younger audience growing up with first-person video games, and the market is responding by steering them towards what they are used to seeing, but I’ve never really been satisfied with the user experience or the resulting footage.
“I always found both were a bit of a compromise. In 2021 I worked on a project that required a week of
Using Cyclops POV to film a cyclist's point of view
POV work on a virtual production stage, and not only was it difficult to operate but it was very difficult to see what you were doing. While the footage ultimately looked really good, there was lots of stopping and starting to review it, and my experience on that project confirmed that there was a gap in the market for a better solution. I’ve been working on the Cyclops POV project ever since.”
Cyclops POV
Medcraft is the founder and developer of Cyclops POV, a self-contained helmet-cam built around Sony’s E-Mount system and weighing as little as 3.4kg for a basic build, with a full-frame camera system that integrates camera accessories, filters, wireless video, and power into its design.
Uniquely, it uses custom optics to reflect the operator’s view into the camera with nothing obscuring them from interacting with their subject, which Medcraft says captures exactly what the operator sees, with a depth of field comparable to that of the human eye.
“The way camera operators currently film POV footage is to get the camera as close to the head as possible, but there are a number of challenges with that,” he explains. “The biggest problem is that the closer you try and match reality, the closer the camera sensor needs to be to the operator; and the closer the sensor, the less that operator can see. Meanwhile, although chin and head cams provide more visibility, neither of these options enables the camera to arc in the same way as your eyes move around your neck. It means that the user experience isn’t great, but more importantly, the resulting footage isn’t accurate.”
“As a cinematic tool, Cyclops is able to create true-to-life, cinematic POV shots…it means that exactly what you see is exactly what you shoot”
JAMES MEDCRAFT
Repositioning the camera further forward to give the operator more visibility only compounds this. “Doing this requires a much wider lens which means that the footage is always too wide. But it’s also from the wrong perspective because the sensor is four or five inches in front of your face, so when you move, the sensor arcs in the space around your head.”
A different perspective
It’s a conundrum; a more realistic perspective obstructs the operator’s vision, while moving the camera further out to provide more usability ruins the illusion. So how did Medcraft solve this? With a cunning combination of maths and clever design.
Using a mirror system to align the camera sensor with the operator’s eye line, Cyclops POV adopts a down-facing camera on the front of the helmet and a reflective plate located just below the user’s eyeline to align the camera exactly with the user’s perception.
“The nodal point of the sensor, where the footage is captured, is virtually between my eyes, so the footage mirrors the way the head arcs exactly and everything is totally aligned with human perception,” explains Medcraft. “It looks simple, but there’s a lot of complex maths at work, and the physics of it all recreates the operator experience exactly.
Checks and balances
“The design also keeps weight down. If I have a camera on the front, I need an equal weight on the back to ensure it is balanced; the more weight on the front, the more weight I need to add on the back. By placing the camera close to the top of the head I need less counterbalance, and Cyclops’ counterbalance is actually a standard Anton Bauer battery that supplies power to the unit. This design distributes weight more evenly to deliver a far more comfortable user experience.
“In fact, everything has a dual use, so just as the battery acts as a counterbalance, the structure for the mirror box supports the rod bowlers for the focus monitors. Every aspect has a functional and structural use to optimise the design for functionality and weight.”
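The counterbalancing Medcraft describes is a straightforward moment balance. As a rough worked example (the symbols are illustrative, not his actual figures), the battery mass needed to keep the helmet level satisfies:

```latex
m_{\text{cam}}\, d_{\text{cam}} \approx m_{\text{batt}}\, d_{\text{batt}}
\quad\Rightarrow\quad
m_{\text{batt}} \approx \frac{m_{\text{cam}}\, d_{\text{cam}}}{d_{\text{batt}}}
```

so halving the camera’s forward offset from the head’s pivot halves the counterweight required, which is why a single standard battery mounted at the rear can do the job.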
As a cinematic tool, Cyclops POV also adopts industry-standard equipment to ensure it is easy
to update. In order to match footage from different cameras, a drop-in filter is another integral component, and Medcraft says he is also developing an RF comms system to interface with standard comms equipment via an internal speaking unit.
“For a rental house it has to be upgradeable,” he says. “To make it as accessible and universal as possible, it needs to use standard components and the system supports four standard camera types: the Sony Venice Rialto 1 and 2, the Sony FX3 and the new Sony Venice Extension System Mini, which is 300g lighter than the FX3 and enables a whole system payload weight of just 3.4kg.”
A constant work in progress, the helmet has spent the last three years on set with Medcraft as he refines his design, 3D printing a succession of prototypes and assembling confirmed designs with industry partners such as camera support specialist Romford Baker.
And while he admits that there are other options on the market, he’s seen a real boom in terms of POV shots in the last 12 months.
“I see POV footage all the time on YouTube and in advertising,” states Medcraft. “As a cinematic tool, Cyclops is able to create true-to-life, cinematic POV shots and delivers professional camera control like focus monitors, filter integration and depth of field. It captures the image quality required for grading and post. And it means that exactly what you see is exactly what you shoot.”
And being able to see what you’re shooting is a good place to start.
Thanks to its imaginative use of Object Based Media, children’s animation Mixmups is widening accessibility for viewers with additional needs. Matthew Corrigan meets creator Rebecca Atkinson and Kate Dimbleby, co-CEO at Stornaway.io, to find out how
When stop-motion animation series Mixmups first appeared on Milkshake, Channel 5’s children’s platform (the channel has since rebranded as 5), it broke new ground in representing disabilities in a programme for 3 to 5 year olds. Now, thanks to a collaboration with interactive video provider Stornaway.io, it’s become even more accessible for viewers with additional needs.
Making its 5 debut in March, Mixmups with Ultra Access goes beyond the traditional subtitles, audio description and British Sign Language (BSL), allowing viewers to tailor the experience to meet their needs.
Explaining the background to the show, creator Rebecca Atkinson begins, “I’m partially deaf and I’m partially sighted, and I’m in the unusual position of being the executive producer and the creator and also a disabled person. So when I was creating Mixmups, I thought very hard about how different kinds of children with different abilities would access the storytelling, and I wondered whether there was anything more that we could do beyond just subtitles and audio description and in-vision sign language.”
However, the accessibility options available were rather limited in range. Atkinson was sure they could be improved upon. “I knew that there were some problems with those offers,” she says. “Some streaming platforms won’t allow you to have audio description and subtitles at the same time because they think that hearing impairment and sight impairment are binary. There are lots of children who have both and can’t turn off sign language interpreters. So often, signed content lives in dead of night graveyard slots and the viewer can’t choose to turn them on and off like they can with subtitles and audio description," she continues.
While developing Mixmups, Atkinson became aware of You vs Wild, the Netflix Originals series in which viewers are able to interact with the programme, making decisions for Bear Grylls as he tries to complete missions in some of the planet’s harshest environments. “[It] had interactive TV technology that allowed you to influence a narrative so you could send the presenter down the river or up the mountain,”
Atkinson explains, “and I wondered if this technology could allow for more access features to viewers than was currently being offered by mainstream streaming platforms and terrestrial TV.”
Would it, for example, be possible to turn down the background sound to make the dialogue more accessible? Could the visuals be simplified, removing the background colour to allow the viewer to focus on the character, without having to process additional visual information? Atkinson wondered if Makaton—a language based on symbols and signs—might be used alongside British Sign Language, and if full introductions to the series could be offered in audio description, delivering additional information in a form that met the specific needs of the audience.
“When you watch TV, it’s a sandwich of pictures, special effects, music, dialogue, subtitles, audio description,” says Atkinson, providing a perfect analogy for the make-up of a programme. “But we’re only serving viewers a completed compressed file. So I wanted to know whether we could separate all those ingredients and then allow customers to go for a menu [and say] ‘I want this. I want that,’ to be assembled and played out on the fly.”
Object-based media
Enter Stornaway.io, the interactive video solutions provider that is bringing innovative new ideas to audience engagement with its concept of content islands. Launched in 2020, the company was founded by Kate Dimbleby and Ru Howe, following an extensive period of beta testing across the film/TV and gaming industries. Atkinson approached the company and outlined her concept, asking if it would be possible.
Not only was it possible, but the timing was fortuitous, as Kate Dimbleby explains. “I think what’s so lovely about this project is that we actually had created this platform at the same time as Netflix was doing Bandersnatch and experimenting and interacting, just like Rebecca says. But Ru’s background was at the BBC and working with the Natural History Unit, for example. [He was] seeing a lot of money spent on this kind of project, like coding it from scratch, not learning anything, throwing away the code.”
What had been developed was termed Object-Based Media (OBM) by BBC R&D. By separating the media into “objects” of video, audio and layers, each object could be selected—as from the menu in the sandwich analogy—and combined in a multiplicity of ways to enable personalisation and adaptation to suit a viewer’s particular preferences and needs. The innovation may be seen as an example of the technological convergence taking place across the broadcast and gaming industries, among others. Recognising its significance, Ofcom published a report about OBM in 2021. A working group was established to further explore its potential, including the EBU (European Broadcasting Union), Sky, Channel 4, NHK Japan, CBC, Dolby, Fraunhofer and MIT Media Lab, alongside Stornaway, the RNIB, RNID and a number of universities.
The technology seemed to be looking for a reason to exist. It needed a use case. The concept worked, but a creator was required to effectively breathe life into the innovation. With Mixmups, Stornaway was able to deliver a viable, production-ready project.
“…one of the things to remember about preschool years is that deaf children can’t read the subtitles”
REBECCA ATKINSON
The numbers are impressive. ChatGPT has calculated that in picking two or more of the 14 variables, there are more than 4,000 permutations. “It’s as personalised as you can get,” says Dimbleby. “With interactive TV, which has been looking at narratives, generally it’s splitting the narratives maybe two or three ways. If ChatGPT’s maths is right, it is telling you there are 4,000 ways for you to build your viewing experience.”
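As a back-of-the-envelope check (assuming each of the 14 access features is simply an independent option that can be switched on or off), the number of ways to pick two or more of them is:

```latex
\sum_{k=2}^{14}\binom{14}{k} = 2^{14} - \binom{14}{0} - \binom{14}{1} = 16384 - 1 - 14 = 16369
```

comfortably “more than 4,000”; the practical figure depends on which combinations of features are actually meaningful together.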
“The reason we created Stornaway as an interactive video platform for creators was to solve this problem,” Dimbleby continues, “And Rebecca came in with all this imagination, a really significant use case that is actually transforming the possibilities of how children and their carers can watch television together, can have their accessibility seen, and they can talk about it. And I think that’s where we hoped that, by enabling the technology for this kind of content, the use cases would come.”
Commissioned by Paramount, the ten Mixmups with Ultra Access episodes mark a groundbreaking achievement for OBM. Part-funded by the British Film Institute, they will be archived in recognition of their national significance.
Each episode was built from more than 200 individual modular short video ‘objects’ which were edited in Stornaway’s non-linear “Story Map” editor. The objects, which the company calls “Story Islands”, can be
moved around and connected within the Story Map editing interface, with clickable button overlays added to allow the viewer to make choices. An in-built game engine sets up the logic of what the viewer should see next, based on what has been chosen before.
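Stornaway’s engine itself isn’t detailed in the article, but the object-based principle is easy to sketch: each viewer’s accessibility choices determine which media objects are assembled into the playout. The object names and tags below are hypothetical, purely to illustrate the idea:

```python
from dataclasses import dataclass, field


@dataclass
class MediaObject:
    """One separable 'ingredient' of an episode: a video, audio or overlay layer."""
    name: str
    tags: set[str] = field(default_factory=set)


# One scene's ingredients, in the spirit of the sandwich analogy (names are illustrative)
SCENE_OBJECTS = [
    MediaObject("pictures_full", {"default"}),
    MediaObject("pictures_simplified", {"simplified_visuals"}),
    MediaObject("dialogue", {"default"}),
    MediaObject("music_and_effects", {"default"}),
    MediaObject("audio_description", {"audio_description"}),
    MediaObject("subtitles", {"subtitles"}),
    MediaObject("bsl_overlay", {"bsl"}),
    MediaObject("makaton_overlay", {"makaton"}),
]


def assemble(preferences: set[str]) -> list[str]:
    """Pick the objects to play for this viewer, based on the choices made so far."""
    chosen = []
    for obj in SCENE_OBJECTS:
        if obj.tags & preferences:
            chosen.append(obj.name)  # a layer the viewer explicitly asked for
        elif "default" in obj.tags:
            # default layers play unless the viewer swapped them for an alternative
            if obj.name == "pictures_full" and "simplified_visuals" in preferences:
                continue
            chosen.append(obj.name)
    return chosen


# A viewer who wants simplified visuals, audio description and Makaton
print(assemble({"simplified_visuals", "audio_description", "makaton"}))
```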
The workflow was designed around a single template built in the editor and duplicated across the ten Ultra Access episodes. This allowed for a consistent structure enabling creatives and executives to easily review each episode.
Content is produced by Manchester-based animators Mackinnon and Saunders, whose credits include Pinocchio, Moon & Me and Fantastic Mr Fox, with Stornaway providing the framework that underpins and enables Ultra Access. “Underneath the hood is a kind of game engine for video,” says Dimbleby. “We’ve got integrations with Adobe, so we’re set up to work with Mackinnon and Saunders, a traditional video production company.”
Dimbleby describes them as a “really special use of the technology” which is opening the door to new use cases. “Once you can see the potential for this, it can break down resistance for what is possible for accessibility,” she says, adding that there has been interest from different platforms. “The blocker in the industry was always that people couldn’t conceive [of it] because it felt too hard and too complicated… The point is that when you get the technology to meet the need, it becomes less hard.”
“The point is that you need to work with creators to build the technology that serves them, and do it in a way where you’re bridging that gap between. There’s such a silo between the two. Ru’s background, because he worked with natural history producers with huge, weighty pieces of media, [meant he] understood that you can’t just swap one system out for another, that there is a shift happening, but that we need to bring traditional production people into this world.”
Developing additional material naturally has a cost but, as Dimbleby points out, “some of that extra material already existed through the broadcaster, so they were able to supply us with [for example] the object of the sign language interpreters, because they’re signing all episodes anyway. They’re doing audio description on all episodes anyway, and subtitles, so they will be, if you like, recycled.”
Innovations such as AI may bring costs down in the future, perhaps creating synopses or shortening dialogue, but both Atkinson and Dimbleby remain united in the belief that human creativity will always be at the core.
Both also believe that what has been accomplished should be seen as a starting point. “In terms of Ultra Access in future, it would be brilliant to see public service broadcasters like the BBC offering a proportion of their—especially children’s—content with this,” says Atkinson. “Because one of the things to remember about preschool years is that deaf children can’t read the subtitles, and most are born into hearing families so they don’t have sign language. So this is about literacy opportunities. If [broadcasters] say they will put one per cent of our output out with Ultra Access, they’re actually opening up literacy to thousands of children in that age group who are currently not getting it through television.”
By Barbara Lange, co-founder, Media Tech Sustainability Series (MTSS)
The media technology sector stands at a pivotal point, with sustainability evolving from a peripheral concern into a strategic business priority. Spurred on by a convergence of global regulations, shifting stakeholder expectations, and rapid technological advancements, companies across Europe and North America are rethinking their environmental impact and long-term resilience.
While environmental responsibility is a driving force, many media tech organisations are initially approaching sustainability through the lens of cost savings, especially by improving energy efficiency. High-performance computing, data centres, and production infrastructure consume significant power, and optimising energy use delivers immediate benefits: lower emissions, reduced operational costs, and faster ROI. These early wins help build internal momentum. Once companies see the savings, the shift to broader sustainability, through circular product design, emissions reporting, or responsible procurement, becomes a far more attainable next step.
The European Union’s Corporate Sustainability Reporting Directive (CSRD), adopted in late 2022, originally introduced sweeping requirements to increase corporate transparency around sustainability impacts. Under the initial scope, the CSRD would have applied to approximately 50,000 companies across the EU and beyond, including large listed and non-listed companies, many non-EU companies with significant operations in Europe, and even small and medium-sized enterprises (SMEs) that were publicly listed. Companies were required to report extensively on environmental, social, and governance (ESG) matters using the European Sustainability Reporting Standards (ESRS), with disclosures tied to double materiality, both how sustainability issues affect the company and how the company impacts society and the environment. The first wave of reports was set to begin for financial years starting in 2024, with subsequent years expanding the scale of corporate sustainability reporting across the European market. As the CSRD implementation approached, however, concerns mounted over the administrative burden it placed on businesses, especially smaller companies. Policymakers recognised that the original timeline and breadth of the directive risked creating compliance bottlenecks and competitive disadvantages for European companies.
To address these challenges, the European Commission introduced the Omnibus legislation in early 2025, aiming to “stop the clock” for smaller and non-complex companies. By extending deadlines, raising reporting thresholds, and allowing voluntary opt-ins for listed SMEs, the EU sought to maintain its leadership in sustainable finance while ensuring a more manageable and phased rollout for businesses. As a result, most media tech companies will see a reprieve in reporting deadlines, in order to help them prepare.
The United Kingdom has established a robust framework for corporate sustainability reporting, focusing on transparency and climate risk management. Since April 2022, over 1,300 of the UK’s largest companies and financial institutions have been mandated to disclose climate-related financial information in alignment with the Task Force on Climate-related Financial Disclosures (TCFD) recommendations. This includes publicly traded companies and private companies with more than 500 employees and £500 million in turnover.
In addition, the Streamlined Energy and Carbon Reporting (SECR) regulations require large UK companies to report on their energy use and greenhouse gas emissions. This applies to public companies and large private companies meeting specific financial and employee thresholds.
North American developments: California and beyond
In March 2024, the United States Securities and Exchange Commission (SEC) adopted a rule requiring publicly traded companies to disclose climate-related risks and greenhouse gas emissions. However, in April 2024, the SEC issued a voluntary stay on the rule’s implementation pending judicial review due to legal challenges. In March 2025, the SEC voted
to cease defending the rule in court, effectively halting its enforcement. Despite this, the rule technically remains in effect, though its future is uncertain. Companies are advised to monitor developments and consider aligning with other climate disclosure frameworks, such as California’s SB 253 (see below) and the EU’s Corporate Sustainability Reporting Directive (CSRD).
California continues to lead the United States in advancing climate-related disclosure regulations. In September 2023, Governor Gavin Newsom signed two groundbreaking laws—SB 253 and SB 261—which require large companies doing business in California to report on their greenhouse gas emissions and climate-related financial risks. These laws are the most ambitious climate disclosure mandates in the US, affecting thousands of large companies regardless of whether they are headquartered in California. Their requirements parallel many aspects of the EU’s CSRD and are seen as a domestic alternative amid uncertainty surrounding federal SEC rules.
In March 2025, both laws were amended to extend the timeline for finalising details as well as delaying the reporting of Scope 3 emissions. Additionally, businesses are now allowed to submit consolidated reports at the parent company level, easing the compliance burden for multinationals. Like the CSRD Omnibus, this change gives companies more time to collect and verify complex emissions data.
Canada advanced its sustainability agenda with the introduction of the Canadian Sustainability Disclosure Standards (CSDS) in December 2024. These standards align with international frameworks and aim to enhance transparency in sustainability reporting.
The regulatory landscape, while more defined than in previous years, remains fragmented. From the EU’s tightening of ESG disclosures to California’s pioneering state laws and Canada’s voluntary frameworks, media tech companies operating across borders must manage compliance across a mosaic of standards, requiring strategic alignment and agile reporting systems.
Media tech leading by example
Irdeto, a leader in digital platform security, has embedded sustainability into its core operations through its Sustainability@Irdeto programme. The company’s initiatives include integrating ESG
principles into all programmes and policies, focusing on protecting the planet, empowering communities, and adhering to international sustainability standards.
EVS has been recognised with the MTSS Excellence in Sustainability Honours and other awards for its leadership in ESG practices. The company’s sustainability strategy centres on reducing emissions from product use, which account for over 60 per cent of its carbon footprint. With commitments aligned to the Science Based Targets initiative, a DPP Commitment to Sustainability badge, and a Silver EcoVadis rating, EVS continues to integrate eco-design and responsible governance into the heart of its media technology solutions.
Finland-based Genelec has long been a sustainability frontrunner in the media tech sector, embedding environmental responsibility into every aspect of its product design and operations. Known for its energy-efficient loudspeakers and durable enclosures made from recycled aluminium, Genelec manufactures all products at its ISO-certified factory powered by 100 per cent renewable energy. The company’s commitment to longevity, serviceability, and circular design has earned it global recognition and consistent praise across the pro audio community for setting a high bar in sustainable manufacturing.
MetaBroadcast, an emerging ESG leader in the media tech sector, is taking early but impactful steps toward sustainability. The company has begun integrating environmental considerations into product development and internal operations, focusing on energy-efficient software architecture and cloud optimisation. Socially, MetaBroadcast fosters an inclusive culture, emphasising mental health, flexible working, and equitable hiring practices. Governance efforts include transparency in decision-making and a growing commitment to ethical business practices. As a small, agile team, MetaBroadcast is well-positioned to innovate responsibly, setting the foundation for a purpose-driven approach to growth and ESG leadership in the evolving media technology landscape.
The Media Tech Sustainability Series (MTSS) presented the Excellence in Sustainability Honours at the 2025 NAB Show, celebrating organisations and individuals leading the way in ESG practices. Awards were given in three categories: Top ESG Company Award, ESG Leader Award, and ESG Person to Watch Award. These honours highlight the industry’s commitment to sustainability and recognise those making significant impacts in the media tech sector.
On June 17, the 3rd annual MTSS Summit will be presenting the latest in sustainability and ESG initiatives within the media tech sector.
As the media technology industry navigates this evolving regulatory and market landscape, sustainability is no longer a nice-to-have—it is a strategic imperative. Organisations that embrace sustainable innovation, invest in responsible technologies and social responsibility, and report transparently will be best positioned to thrive. The road ahead requires coordination, creativity, and commitment—but it also offers an unprecedented opportunity to align industry growth with global climate goals.
By Lorenzo Zanni, lead data analyst, The Hive Group
NAB and IBC have long served as milestones in the media tech calendar, bringing the industry together to connect, launch products, and generate momentum. But their role is changing. Faced with tighter budgets, evolving buyer behaviour, and the growing need for digital engagement, exhibitors are rethinking not only what they say, but how, where, and when they say it.
This shift is being driven by a combination of business pressure, buyer behaviour, and digital acceleration, and as one leader at NAB told us candidly, “marketing spend has been pressured even more as we prioritised engineering and product development.”
Trade shows still dominate media technology vendors’ marketing budgets, yet attendance at major events has been falling. At NAB Show, attendance has dropped from a peak of over 103,000 in 2016 to 55,000 registrations in 2025, leading marketers to ask hard questions about ROI. It’s not just whether to attend, but how to show up. What are we really achieving? Who are we reaching? And what happens beyond the booth?
The result is visible. Booth sizes are shrinking. Big brand spends are being reallocated toward more targeted experiences. Increasingly, marketers are looking beyond the stand and toward curated, insight-led events. These formats allow for deeper engagement and are better suited to today’s buyer behaviour.
Today’s buyer is different
It’s not just the events that have changed, it’s the people attending them. Media organisations are undergoing structural shifts, with new leadership bringing different expectations. Buyers are now more digitally native, more data-driven, and more values-led. Many arrive at shows with decisions already made. They’re using the event to validate choices, not explore them, changing how time is spent on-site.
While, anecdotally, meeting “no-shows” appear to have decreased, competition for buyer attention is still high. Some vendors have noted shorter, more focused meetings, with less time available for exploratory conversations or unplanned discussions. As one show attendee put it: “You can’t afford to wing it anymore.”
This is where the transition from “moments” to “movements” becomes real. Marketers know that spontaneous booth traffic and post-show emails have never been reliable ways to drive engagement at events like NAB or IBC. But today, buyer attention
is limited and carefully managed, which means meaningful conversations must be earned in advance. The most effective strategies are those that build momentum long before the show and continue delivering value afterwards. NAB is no longer the destination in itself, but a key touchpoint within a broader, integrated campaign.
In parallel, we’re seeing a dramatic shift in content strategy. The rise of AI has flooded the market with lookalike messaging. What was once a differentiator—being fast, being visible—is now just noise. To stand out, media tech marketers are turning back to original, high-quality insight.
Vanity metrics like booth footfall or social media likes are no longer enough. As one marketing leader put it, “Where’s the ROI?”
Measurement is becoming more granular and more outcome-focused. Businesses want to know who’s engaging with them and what behaviour those engagements are driving, with CMS and CRM systems being used to track intent and influence over time.
This is also driving a closer integration of marketing and sales teams. Traditional boundaries are dissolving. Revenue is now a shared responsibility!
One final transition is gaining momentum. More vendors are investing in community strategies. These aren’t just mailing lists or user groups. Done well, they are platforms for feedback, advocacy, and long-term loyalty.
At NAB, multiple marketers told us they see community building as a core priority—it’s resource-intensive, but it is also a powerful way to move beyond transactional marketing toward sustained engagement.
The future of marketing in media technology won’t be defined by bigger booths or louder campaigns. It will be shaped by those who create continuity, not just attention. The strongest brands will understand that today’s buyer is on a journey, and that vendors, and specifically marketers, need to walk alongside them every step of the way.