TVB Europe UK 0118 - Dec. 2025


Don’t look back in anger

Well, that was the year that was. 2025 has turned out to be a year of ups and downs within the industry. For me, it has once again been a learning curve, as new ideas and technologies have begun to make their way into the media technology mainstream.

Many of the things I expected to dominate industry conversations at the start of the year have been overtaken by brand-new topics. I thought 2025 would be dominated by artificial intelligence, and it has been a hot topic. However, what I’m finding is that in the media tech industry, agentic AI is much more of a talking point than generative AI. I’m interested to see how that continues next year.

Heading into NAB in April, I certainly wasn’t expecting the conversation to be dominated by tariffs, but that proved to be the case. There’s still no real consensus on how America’s tariffs will impact the industry, from a proposed tax on non-Hollywood films to possible extra charges on certain kinds of hardware. Maybe we’ll get more clarity in 2026 (although I doubt it).

Three acronyms have entered my vocabulary this year: DMF (Dynamic Media Facility), MXL (Media eXchange Layer) and TAMS (Time Addressable Media Store). I’m still learning more about all three, but it seems to me that they will definitely become major topics of conversation within the next few months, and I’m looking forward to educating both myself and hopefully others.

Elsewhere, it’s been a very busy year in terms of business. From Skydance’s acquisition of Paramount, to Comcast selling Sky Deutschland to RTL, there have been some major media ownership changes. As I write, there’s speculation over the future owners of both Warner Bros Discovery and ITV, which is likely to rumble into 2026.

As always, my highlights of the year included both NAB and IBC. I always find it incredibly inspiring to catch up with industry colleagues, hear about the latest trends, and sometimes I even get to try out the technology I spend so much time writing about.

To conclude the year, this issue of TVBEurope focuses on one of the most important parts of the media industry, but also one of the most overlooked: audio. As our new columnist Larissa Görner-Meeus writes, audiences will tolerate a pixelated video stream far longer than a bad sound mix. That’s why audio is so important to the whole viewing experience. We explore the rise of immersive sound, Dolby Atmos, and MPEG-H Audio and ask, just how modern is modern broadcast audio?

All that remains is to say thanks for your ongoing support of TVBEurope during 2025. We look forward to continuing to bring you all the latest news and in-depth features from our industry in 2026. See you then!

FOLLOW US

X.com: TVBEUROPE / Facebook: TVBEUROPE1 / Bluesky: TVBEUROPE.COM

CONTENT

Content Director: Jenny Priestley jenny.priestley@futurenet.com

Senior Content Writer: Matthew Corrigan matthew.corrigan@futurenet.com

Graphic Designers: Cliff Newman, Steve Mumby

Production Manager: Nicole Schilling

Contributors: David Davies, Kevin Emmott, Larissa Görner-Meeus, Kevin Hilton, Graham Lovelace

Cover image: Olena Agapova / Getty Images

ADVERTISING SALES

Publisher TVBEurope/TV Tech, B2B Tech: Joseph Palombo joseph.palombo@futurenet.com

Account Director: Hayley Brailey-Woolfson hayley.braileywoolfson@futurenet.com

SUBSCRIBER CUSTOMER SERVICE

To subscribe, change your address, or check on your current account status, go to www.tvbeurope.com/subscribe

ARCHIVES

Digital editions of the magazine are available to view on ISSUU.com. Recent back issues of the printed edition may be available; please contact customerservice@futurenet.com for more information.

LICENSING/REPRINTS/PERMISSIONS

TVBE is available for licensing. Contact the Licensing team to discuss partnership opportunities. Head of Print Licensing: Rachel Shaw, licensing@futurenet.com

MANAGEMENT

SVP, MD, B2B Amanda Darman-Allen

VP, Global Head of Content, B2B Carmel King

MD, Content, Broadcast Tech Paul McLane

Global Head of Sales, B2B Tom Sikes

Managing VP of Sales, B2B Tech Adam Goldstein

VP, Global Head of Strategy & Ops, B2B Allison Markert

VP, Product & Marketing, B2B Andrew Buchholz

Head of Production US & UK Mark Constance

Head of Design, B2B Nicole Cobban

In this issue

Europe’s broadcasters need to tune in to MPEG-H Audio

As TV broadcast video technology advances, Kevin Hilton wonders why audio developments are not adopted at the same pace and asks: how modern is modern audio?

Jenny Priestley sits down with Warner Bros Discovery CTO Avi Saxena to discuss technological advancements driving the company’s global streaming platforms

Blueprint Studios London is a new facility offering broadcast-quality technology to corporate clients, broadcasters and video-first podcasts. Jenny Priestley pays a visit to find out more

A new paradigm in ‘content-centric’ workflows

Following a live broadcast demo at On Air 2025, David Davies talks to AWS and Techex about the potential of Time Addressable Media Store

Key creatives from Downton Abbey: The Grand Finale discuss the thoroughly modern technology that helped create a timeless classic

Matthew Corrigan visits dock10 and Versa Manchester to explore production in the regions

Roland Heap of Sound Disposition explains to Kevin Emmott why spatial audio represents the future of sound design

The constant underdog

Early in my career at IRT, I had the privilege of meeting Gerhard Stoll, co-inventor of MP3. He told me at the time: “Larissa, audio innovation is always ahead of video innovation—we just don’t make such a buzz around it.”

Some 20 years later, that perception is still with me. While I’m not an audio expert myself, I decided to talk to a few of my audio industry friends to find out why audio still feels like the underdog, whether it’s still innovating, and what might come next.

For decades, audio has been the invisible constant of our media experience—technically advanced, commercially resilient, and often more innovative than the moving image it supports. Still, when budgets tighten or roadmaps are drawn, audio rarely gets the front-page treatment. It’s the classic underdog: less visible, but no less vital.

“There’s a video director, but no audio director,” says Christian Gobbel, senior technology advisor. “A video director switches between 40 cameras, but an audio engineer manages 200 sources, balancing, composing, and making them all fit together. Hearing is a more refined sense than seeing; auditory information builds imagery in the mind.”

Ahead of its time, but rarely recognised

Long before cloud and IP became buzzwords, the audio world had already made the leap. Audio over IP reshaped workflows years before ST 2110 became standard for video. Audio engineers were doing remote production before it even had a name.

Phil Hey, director of global business development at Riedel, notes that this early maturity came with a price: “Audio came first because, in the past, data volumes were smaller and easier to process and transport—but that also meant we experienced the challenges earlier. Workflows, budgets, and perceptions reflected that imbalance.”

He adds, “You see giant cameras, lenses, lighting rigs, and one audio engineer with a boom mic in the corner. The perception of value is different, even though the work can be more complex.”

And yet, audiences will tolerate a pixelated stream far longer than a bad sound mix. The moment audio fails, the experience collapses.

Immersive audio: the next frontier

The rise of immersive and object-based audio is turning sound into a storytelling tool of its own. Sports broadcasters now offer customised mixes; streaming platforms deliver spatial audio experiences that go far beyond stereo.

“At events like theme-park rides or stadium shows, you realise how much identity and emotion you can create through sound,” says Craig Newbury, managing director at Stagetec. “Video can show you what’s happening, but audio makes you feel where you are.”

Christian Struck, senior product manager, audio infrastructure, at Lawo, remembers how early the industry saw the potential: “We had 9.1 demos at NAB more than 10 years ago; we were simply too early. Today, immersive still belongs to big events, but it’s coming home fast through simpler speakers and soundbars. The emotion is the driver.”

AI and audio: the powerhouse for personalisation

Artificial Intelligence is rapidly reshaping the soundscape. Automated mixing, transcription, and translation are already mainstream, but what’s happening now goes far beyond efficiency. AI is making audio one of the most powerful tools for personalisation in media.

From multilingual commentary feeds to alternative tracks, accessibility mixes, or familiar synthetic voices, sound has quietly become the most flexible and inclusive layer of content.

“AI will allow a tremendous amount of personalisation,” explains Newbury. “Think of a sports event where the raw energy of the original commentary can be remixed to suit how you want to experience it. Audio can finally adapt to how you want to feel.”

That adaptability makes audio far more dynamic than video, where personalisation often stops at recommendation algorithms. Audio, by contrast, speaks directly to the listener’s preferences, identity, and mood.

As the industry chases visual innovation, we risk undervaluing the medium that most directly connects with emotion. Sound carries the story when the screen fades.

Hey concludes: “Ultimately, audio and video are growing together. It’s less about two disciplines and more about one experience. The audience doesn’t separate them, they just feel the story.”

That’s precisely why everyone in our industry, from engineers to executives, should pay closer attention to what’s happening in the world of audio. Its engineers have quietly solved problems of latency, interoperability, and user experience that video is only now beginning to face, and have mastered emotional storytelling.

If we listened more, not only to the sound, but to the people behind it, we might discover approaches that make our entire industry more human, more connected, and more future-ready.

Maybe the underdog has been leading all along. We just haven’t been listening closely enough.

Smarter sound

Media outlets and live productions are continuing to multiply, putting pressure on audio teams to deliver more with fewer hands on deck. Traditional hardware-based workflows aren’t built for this pace. Engineers spend hours manually routing signals, configuring I/Os, and troubleshooting signal paths, often without remote access. These limitations slow down production and increase the risk of error.

As audio systems grow more complex and engineers oversee multiple productions simultaneously, remote access has become essential. Imagine one operator monitoring commentary mixes for 15 football matches, each with separate stereo and 5.1 outputs in multiple languages. A traditional console surface with a large mix core quickly reaches its limits. Since only one mix output can be monitored at a time, cycling through all of them may take minutes, long enough for dropouts or loudness drift to go unnoticed. Smarter design, automation, and remote accessibility with auto-follow audio monitoring are vital to help close that gap.
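To make that gap concrete, here is a minimal sketch, in Python, of what auto-follow monitoring logic might look like. All names, thresholds, and the simple RMS measurement are illustrative assumptions rather than any vendor’s implementation; a real system would use K-weighted LUFS metering per EBU R 128.

```python
import numpy as np

# Illustrative sketch only: names and thresholds are assumptions, not any
# vendor's product. One operator supervises many mixes; each scan checks a
# short audio block per mix for dropouts and loudness drift, and returns
# the first mix that needs attention so monitoring can "auto-follow" it.

TARGET_DB = -23.0          # EBU R 128 programme target (approximated here)
DRIFT_TOLERANCE = 2.0      # flag mixes more than +/-2 dB from target
SILENCE_THRESHOLD = -60.0  # treat blocks below this level as a dropout

def block_level_db(samples: np.ndarray) -> float:
    """Rough RMS level in dBFS; a stand-in for K-weighted LUFS metering."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20 * np.log10(rms) if rms > 0 else float("-inf")

def scan_mixes(mix_blocks: dict) -> str | None:
    """Return the name of the first mix needing attention, else None."""
    for name, samples in mix_blocks.items():
        level = block_level_db(samples)
        if level < SILENCE_THRESHOLD:
            print(f"ALERT {name}: possible dropout ({level:.1f} dBFS)")
            return name
        if abs(level - TARGET_DB) > DRIFT_TOLERANCE:
            print(f"ALERT {name}: loudness drift ({level:.1f} dB)")
            return name
    return None
```

Run once per audio block per mix, the scan surfaces the first problem feed so the operator’s monitoring can switch to it automatically, rather than cycling through dozens of outputs by hand.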

A major limitation of traditional, hardware-based infrastructures lies in their rigidity. Manual routing depends on static physical resources, requiring operators to know which devices or processes are available and how to reconfigure them. If a commentator moves booths, a technician must reroute signals manually. With automated signal management, that step disappears. The system assigns processing, manages routing, and presents the right control interface, streamlining operations and eliminating manual overhead.

The case for automation and virtualisation

Across the industry, development is moving toward systems that manage this complexity automatically with intelligent infrastructures where the operator defines what needs to happen, not how or where it happens. A promising way forward is role-based production models, where users log in and are automatically assigned to productions, with routing and configuration handled behind the scenes. This accelerates setup and improves security by tying access and control to authenticated profiles.
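As a toy illustration of the role-based idea (the profile store and field names below are invented for this sketch, not a real product API), logging in resolves the operator to a production, and routing and control follow from that assignment:

```python
# A toy sketch of role-based assignment; the profile store and field names
# are invented for illustration, not a real product API.

PROFILES = {  # in practice this lives in an identity provider
    "a.lopez": {"production": "matchday-05", "role": "commentary-mix"},
    "j.chen": {"production": "matchday-05", "role": "supervisor"},
}

def on_login(user_id: str) -> dict:
    """Resolve an authenticated user to a production and derive their setup."""
    profile = PROFILES.get(user_id)
    if profile is None:
        raise PermissionError(f"{user_id} is not assigned to any production")
    return {
        "production": profile["production"],
        # The operator declares what they do; the system decides how and where:
        "auto_routes": f"sources tagged for {profile['role']}",
        "control_surface": f"{profile['role']} panel",
    }

print(on_login("a.lopez"))
```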

A connected concept is rule-based processing. Rather than requiring technical operators to fine-tune every signal path, systems can now make those adjustments automatically, following predefined rules. Automatic gain staging prevents clipping, adaptive equalisation aligns tonal differences between voices, and loudness-based auto-levelling maintains consistent output. These processes make professional-grade audio achievable even in smaller productions or remote environments. By embedding this intelligence directly into the signal chain, modern broadcast systems can move toward self-optimising, self-configuring operation.
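A minimal sketch of one such rule, loudness-based auto-levelling with a clip guard, might look like the following; the constants and the RMS shortcut are assumptions for illustration, not a broadcast-grade algorithm:

```python
import numpy as np

# A minimal sketch of one rule: loudness-based auto-levelling with a clip
# guard. Constants and the RMS shortcut are assumptions for illustration.

TARGET_DB = -23.0   # desired programme level
MAX_STEP_DB = 1.0   # adjust gently, per block, to avoid audible pumping

def auto_level(block: np.ndarray, gain_db: float) -> float:
    """Return an updated make-up gain (dB) for the next block."""
    rms = np.sqrt(np.mean(np.square(block)))
    level_db = 20 * np.log10(rms) if rms > 0 else TARGET_DB
    error = TARGET_DB - (level_db + gain_db)
    # Rule 1: loudness-based auto-levelling, limited to small steps
    gain_db += float(np.clip(error, -MAX_STEP_DB, MAX_STEP_DB))
    # Rule 2: automatic gain staging -- never let the block's peak clip
    peak = np.max(np.abs(block)) * 10 ** (gain_db / 20)
    if peak > 1.0:
        gain_db -= 20 * np.log10(peak)
    return gain_db
```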

Virtualisation extends automation by decoupling processing from specific hardware. Instead of relying on a fixed audio core with a set number of channels and buses, virtualisation allows resources to be allocated dynamically. One day, a server might host a 200-channel mixer; the next, it could run hundreds of processing plug-ins or fully automated voice-over mix applications. The same infrastructure can support multiple productions and improve scalability.
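The allocation idea can be sketched in a few lines; the class and server names here are hypothetical, standing in for whatever orchestration layer a real deployment would use:

```python
from dataclasses import dataclass, field

# An illustrative sketch, not a real orchestration API: productions request
# DSP channels from a pool of servers instead of owning a fixed mix core.

@dataclass
class Server:
    name: str
    capacity: int  # DSP channels this host can run
    used: int = 0

@dataclass
class Pool:
    servers: list = field(default_factory=list)

    def allocate(self, production: str, channels: int) -> str:
        """Place a production on the first server with spare capacity."""
        for s in self.servers:
            if s.capacity - s.used >= channels:
                s.used += channels
                return f"{production}: {channels} channels on {s.name}"
        raise RuntimeError("pool exhausted -- spin up another server")

pool = Pool([Server("dsp-a", 256), Server("dsp-b", 256)])
print(pool.allocate("football-mix", 200))   # a 200-channel mixer one day
print(pool.allocate("voiceover-app", 64))   # a plug-in host the next
```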

Automation and virtualisation are reshaping day-to-day operations. While engineers will continue to work hands-on, their role is shifting towards system supervision, troubleshooting, and creative oversight. Even as AI and automation advance, human expertise remains vital. While systems can automatically match a commentator’s spectral fingerprint, an engineer must still define what ‘good’ sounds like by analysing and tuning voices in controlled conditions. AI can then compare and evaluate based on this qualified data.

Professionals exploring these new approaches are enthusiastic about concepts like automated resource management, simplified interfaces, and device virtualisation. Yet they also acknowledge the challenges of integrating flexible, software-based systems into existing infrastructures. Transitioning to these environments requires not just technical adaptation, but a shift in mindset.

What’s next for broadcast audio?

Looking ahead, machine learning will increasingly influence broadcast audio workflows. The effectiveness of AI rises and falls with the quality of available data. With high-quality datasets, AI can execute tasks with extreme accuracy. Even if the quality of the data doesn’t allow full automation, AI can offer smart suggestions or pre-selections, allowing operators to confirm or refine the system’s decisions.

I believe emerging technologies like Direct Memory Access (DMA) and Remote Direct Memory Access (RDMA) will redefine infrastructure efficiency. These technologies make it possible for an audio mixer to extend beyond hardware boundaries, distributing channels and buses across multiple servers to maximise resource use and scalability. In the long run, automation, virtualisation, and AI will converge to create a new paradigm: dynamic, adaptive audio infrastructures that respond in real time to production needs.

AI won’t kill the radio star

Artificial intelligence of the generative variety is proving a useful ally in audio broadcasting, automating repetitive tasks and removing drudgery. The BBC is using AI to add subtitles to programmes on its audio app BBC Sounds as part of a series of trials that also include instant transcripts of local football commentaries and translations across BBC language services. Elsewhere in the industry, AI is being used to brainstorm call-in topics, curate playlists and tag files for archive retrieval.

These are low-stakes uses of generative AI in production. While everything needs to be checked by humans, they have the potential to improve efficiencies. Risks rise when AI is used to replace the dulcet tones of real human voices.

There’s something incredibly personal about the experience of listening to radio and podcasts. Perhaps it’s because in our infancy, we listened to stories being told by those we trust. The stories fired our imaginations and strengthened our bond with the storyteller. In a similar way audio conjures imaginative scenes and fosters a connection with presenters. We think they are speaking to us, as individuals.

For AI to succeed, it must sound authentic and appear relatable. Early synthetic audio had a robotic quality, so it was easy to spot. Uses need to be clearly disclosed, as happened in 2023 when a midday radio host in Portland, Oregon, trained an AI model on her voice. ‘AI Ashley’ started as Ashley Elzinga’s co-host, then filled in on Live 95.5 when she went on holiday. Live 95.5 listeners were informed throughout.

However, that didn’t happen in Australia. In April this year, a radio host was exposed as an AI-generated avatar, trained on the voice of a station employee who happened to work in accounts. ‘Thy’, as the host was called, had presented a music show on Sydney radio station CADA for nearly six months before a local blogger spotted something was off. A group representing voice artists criticised station owner ARN for leading listeners to “trust a fake person they think is a real on-air person”.

Labelling AI is clearly vital. And so is knowing when not to use it. Last year OFF Radio Kraków replaced human presenters with Gen Z-friendly AI ‘hires’ as part of an experimental revamp. In a bizarre twist, the AI avatars interviewed a Nobel Prize-winning Polish poet who had died in 2012. Listeners complained, and a petition signed by more than 23,000 people calling for humans to return forced station chiefs to abandon a three-month pilot that didn’t make it to the end of its first week.

Knowing how to position AI ‘talent’ is also key. In January this year, musician and tech entrepreneur will.i.am launched an AI-themed radio show on SiriusXM with AI co-host qd.pi (‘cutie pie’). Will.i.am told The Hollywood Reporter that while he was “ultra-freaking colourful and expressive”, qd.pi was “ultra-freaking factual and analytical”.

Could AI presenters move beyond the “factual and analytical”? To learn how text-to-audio technology is improving, I tested three popular podcast generators. In each case, I uploaded a document—warning: never share anything with an AI model you wouldn’t want to see reproduced in part by someone else—and within a few minutes I was presented with an audio file with two hosts discussing its content. A 402-word article submitted to Jellypod turned into a 516-word script and four-minute audio recording, which sounded, well, synthetic. ElevenLabs’ script was twice the length of the original article, with its five-minute 20-second output including fillers such as one host asking the other a question, and using generic phrases such as “that really puts things in perspective” and “here’s another aspect to consider”. It was better, but sounded soulless.

Google’s NotebookLM took things to another level. Its 2,200-word script generated a 12-minute 40-second podcast chock-full of conversational tricks, interruptions, invitations to go over something complex, pithy explanations and human-like reactions such as “Right!”, “Exactly!” and “Wow!”, plus slowly pronounced “Mm-hmms”, “Yeaaahs”, and “Okaaays”. NotebookLM deviated massively from the contents of my article, but in ways that didn’t feel like padding. While there was one error in the script, the output came close to the experience of eavesdropping on two well-informed humans chewing over a topic. I found it useful, but not captivating in the way that radio and podcasts are.

What was missing, and what AI will never fully replace, is what humans bring to the party: a lived experience. AI will make it to air, as a sidekick. But it won’t kill the human radio star.

Graham charts the global impacts of generative AI on human-made media at grahamlovelace.substack.com

Why Europe’s broadcasters need to tune in to MPEG-H Audio

Next Generation Audio (NGA) is the set of technologies that moves audio beyond fixed-channel mixes into immersive, object- and metadata-driven experiences that are personalised, accessible, and scalable across devices. NGA lets a single programme carry discrete elements (channels, objects, dialogue tracks, audio description) plus metadata, so that a receiver or renderer can adapt the sound to the listener’s playback system and preferences, such as boosting dialogue, switching languages, or rendering height channels for a soundbar or headphones.
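A simplified, hypothetical illustration of that metadata-driven model (this is not the actual MPEG-H or ADM serialisation, just the shape of the idea): a programme carries several elements, and the receiver applies the listener’s preferences within broadcaster-set limits.

```python
# A hypothetical, simplified illustration of the NGA idea; this is not the
# actual MPEG-H or ADM serialisation, just the shape of the model.

programme = {
    "elements": {
        "ambience": {"type": "channels", "layout": "5.1.4", "gain_db": 0.0},
        "dialogue_en": {"type": "object", "gain_db": 0.0, "selected": True},
        "dialogue_fr": {"type": "object", "gain_db": 0.0, "selected": False},
        "audio_description": {"type": "object", "gain_db": 0.0, "selected": False},
    },
    # Limits the broadcaster allows the listener to change
    "personalisation": {"dialogue_gain_range_db": (-6.0, 9.0)},
}

def boost_dialogue(prog: dict, boost_db: float) -> None:
    """Apply a listener's dialogue preference within the allowed range."""
    lo, hi = prog["personalisation"]["dialogue_gain_range_db"]
    clamped = max(lo, min(hi, boost_db))
    for name, element in prog["elements"].items():
        if name.startswith("dialogue") and element["selected"]:
            element["gain_db"] = clamped

boost_dialogue(programme, 6.0)  # e.g., clearer commentary on a soundbar
print(programme["elements"]["dialogue_en"]["gain_db"])  # -> 6.0
```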

Among the technologies enabling this shift, MPEG-H Audio has emerged as a key standard, offering an end-to-end framework for object-based and scene-based sound that integrates seamlessly with broadcast and streaming workflows. It enables features like adjusting dialogue levels or choosing commentary tracks, and delivers audio across everything from broadcast TV to streaming platforms.

Two codec contenders

MPEG-H Audio supports channel-based, object-based and scene-based representations, extensive personalisation and efficient delivery for live and file-based workflows. It is attractive for European broadcasters not only because it aligns with broadcast-centric workflows and standards, but also because it has seen successful real-world deployments in live sports, concerts, and streaming.

MPEG-H’s design aligns neatly with the operational priorities of public and commercial services, including accessibility, multi-language distribution, and live-event immersion, which are high on the agenda for European public-service media and regional networks.

The other leading technology is Dolby Atmos, which has achieved remarkable success in cinema and high-end home theatre installations; much of that brand prestige has trickled down to the consumer device level as well. Atmos is a comprehensive ecosystem: a creative format integrated into digital audio production toolchains, a marketing-strong brand, and a commercial model that spans device vendors and content platforms. That combination helped Atmos secure placement with major streamers and wide device support, which matters when broadcasters want on-demand content to play consistently across smart TVs, soundbars, phones, and tablets. This is one of the main catalysts for consumers wanting more NGA content and experiences.

Atmos’s strengths in cinema and the premium home theatre market make it a natural choice for content creators who want precise object placement, a familiar production workflow, and a recognisable consumer proposition.

MPEG-H is increasingly appealing to broadcasters for several practical reasons. First, it aligns with broadcast-specific profiles and integrates seamlessly with Serial Audio Definition Model (S-ADM) workflows. Its open metadata and interoperability naturally fit into S-ADM-based production and distribution chains, which are favoured by many public and consortium-driven broadcasters.

Second, MPEG-H tools prioritise live production and IP media workflows, facilitating integration into playout and live-mix operations common in sport and live entertainment. It is widely used for delivery across broadcast, streaming, and mobile networks because it is designed specifically for efficient transmission and flexible rendering under varying network conditions and diverse endpoint capabilities.

Finally, licensing and device certification for broadcast-grade rollouts of MPEG-H are often easier to align with broadcasters’ procurement and regulatory requirements (though specific commercial terms still require negotiation). Where operational costs, regulatory expectations for accessibility, and multi-device interoperability are key concerns, MPEG-H proves a pragmatic, standards-forward choice.

In short, MPEG-H’s strengths include a broadcast-first design, strong personalisation and accessibility features, a native fit with production workflows, and an emphasis on efficient delivery over broadcast, streaming, and mobile networks alike; however, it enjoys lower consumer-brand recognition than Dolby and varying levels of consumer playback support across markets. It is strategically attractive to European broadcasters that prioritise standards-aligned workflows, live-event scalability, and open metadata interoperability.

In the end, broadcasters should choose the technology that best aligns with their operational model. Standards-driven, delivery-centric broadcasting and live sport workflows naturally point to MPEG-H, while cinema-grade creative workflows and broad consumer device reach make Dolby Atmos compelling.

For viewers, the net effect is positive whichever route a broadcaster chooses: enhanced immersion, better dialogue clarity, richer accessibility options, and personalised listening experiences.

TVBEurope’s newsletter is your free resource for exclusive news, features and information about our industry. Here are some featured articles from the last few weeks…

BROADCAST IN 2026: NINE TRENDS AND PREDICTIONS

Richard Jonker, vice president of commercial business development at Netgear, predicts nine operational challenges facing the broadcast industry in 2026, and explores how network technology can help overcome them.

THE FUTURE OF FREE TV: FREELY’S INNOVATIVE FEATURES AND GROWTH PLANS

Sarah Milton, chief product officer at Everyone TV, discusses the success of Freely, the UK’s free-to-air IPTV service, which is set for significant growth with new device launches and a focus on personalisation.

‘IT’S BEYOND WHAT I IMAGINED WE COULD ACHIEVE’: INDUSTRY AND STUDENTS JOIN FORCES TO GO ON AIR

TVBEurope meets the students and members of the industry who helped get a 24-hour, worldwide livestream on air.

DO YOU SUBSCRIBE TO TVBEUROPE’S FREE NEWSLETTER? SIGN UP VIA THIS QR CODE

WELCOME TO THE SOUND

TV broadcasting is pushing forward with up-to-the-minute video technologies, including UHD and HDR. Audio is also making advances, but implementation is slow, prompting Kevin Hilton to ask, how modern is modern broadcast sound?

Sometimes an area of technology can lag behind others in terms of implementation, despite developments being made at the time. Sound for television is a classic example of this. It languished in the shadow of visual milestones, such as the transition to colour, while its own evolution, notably the eventual shift to NICAM stereo in Europe during the early 1990s, advanced fitfully.

Today, the variety of audio formats and systems has broadened substantially, with Audio over IP (AoIP) now becoming part of broadcast production workflows and distribution chains. Spatial sound is also regarded as the way ahead for TV and streaming as they move further into 4K/UHD (Ultra-HD) and HDR (high dynamic range). And, just as it is in other media areas, the cloud is offering great potential for many areas of sound. As ever, technological innovation is often ahead of real-world implementation. But, unlike the slower development of the past, the situation does appear to be moving in the right direction.

France Télévisions began its transition towards full IP—working to the SMPTE ST 2110 standard—in 2022, a process that was expected to last at least three years. "All the technical facilities we have renewed over the past few years are now IP-based," explains 2110 technical lead Yannick Olivier. "We are focusing on ST 2110-30 [a version of AES67 for AoIP interoperability], as it is compatible with other standards and keeps us fully aligned with the broadcast TV environment. The major challenge of AoIP is integration, not bandwidth or synchronisation."

Mikael Vest, director of operations and chief operating officer of digital routing systems developer NTP Technology, agrees that new broadcast centre installations and refurbishments are moving towards IP-based audio infrastructures. "But for audio, the picture is more differentiated than video," he says. "ST 2110-30 and -31 [AES3 for exchanging digital audio signals] are used, as are Dante, RAVENNA and AES67, as well as proprietary AoIP formats like Livewire."

Is cloud the next step?

A more circumspect view comes from Henry Goodman, director of product management at console manufacturer Calrec Audio. "Broadcasters are certainly building AoIP infrastructures now rather than baseband ones," he says. "But there's still some way to go. If you've got a lot of investment in baseband audio and video, it's not a slam dunk that when you upgrade one of those studios, you would necessarily choose to go IP because of the impact it has on the rest of the system. And we're still selling some consoles that are not ST 2110."

A BBC source observes that AoIP "has been with us for a while" and sees the next step for audio being in the cloud, which is now beginning to happen.

Last year BBC R&D published the TAMS (Time Addressable Media Store) API as a way of working with material, including audio, in the cloud. The open-source interface has been implemented by AWS and demonstrated as part of cloud-native workflows for fast-turnaround editing. Principal R&D engineer Robert Wadge explained in an online feature that it "fuses object storage, segmented media and time-based indexing" and "lays the foundations for a multi-vendor ecosystem of tools and algorithms".
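In practice, a client reads from a TAMS store over plain HTTP. The sketch below is modelled on the concepts in the openly published BBC spec (flows, segments, timeranges); the base URL, flow ID, and field names are illustrative assumptions, so consult github.com/bbc/tams for the actual interface.

```python
import requests

# A minimal sketch of reading from a TAMS-style store, modelled on the
# concepts in the openly published BBC spec (flows, segments, timeranges).
# The base URL, flow ID, and field names are illustrative assumptions;
# see github.com/bbc/tams for the actual interface.

BASE = "https://tams.example.com/api"  # hypothetical deployment
FLOW_ID = "0000-demo-flow"             # hypothetical flow identifier

# Ask the store which segments cover seconds 120-180 of the timeline. The
# store answers with references into object storage, not the essence itself.
resp = requests.get(
    f"{BASE}/flows/{FLOW_ID}/segments",
    params={"timerange": "[120:0_180:0)"},  # TAMS-style timerange notation
    timeout=10,
)
resp.raise_for_status()

for segment in resp.json():
    # Each segment pairs a timerange with URLs into object storage, so a
    # tool can fetch just the span it needs for fast-turnaround editing.
    print(segment.get("timerange"), segment.get("get_urls"))
```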

NTP Technology's Vest observes that cloud-based audio "is already being implemented", with outside broadcasts and other collaborative applications being the most likely early adopters. "However, many broadcasters are reluctant to have too much in the cloud since it may be safer to have servers based in-house," he says.

France Télévisions is beginning to shift some of its operations into the cloud, with a major implementation for last year's coverage of the Olympic Flame in Paris based on the TVU Producer platform. "The main problem with cloud solutions is that most apps are 'all-in-one'," says Olivier. "An app that tries to do everything is rarely the best at every part, especially when it comes to audio. We're currently focusing on deployment in our external data centre, managed by a third party, using our own infrastructure. We're following the Dynamic Media Facility initiative, part of our shift towards software-based systems, to eventually have the same infrastructure both on-prem and in the cloud, with MXL exchanges between hosts."

For the majority of viewers, at least, the most obviously modern example of broadcast sound today is immersive capability. Also known as spatial audio, due to its ability to reproduce or mimic how humans hear and locate sounds—immersive is more an experiential description—it is part of the move towards Next Generation Audio (NGA), which is object-based with a foundation of channels. While the enveloping sound experience (providing height as well as width and length) is the main selling point, there is also potential for personalisation, including alternative commentaries and languages. Among the systems that offer all this are Dolby AC-4, which includes Atmos, and MPEG-H Audio.

Many TV programmes today, both drama and documentary, are being mixed in Dolby Atmos and then folded down to meet broadcaster requirements (usually 5.1 or stereo). While Netflix, Amazon Prime and other streamers have made Atmos part of premium services, the take-up by more traditional broadcasters ranges from zero to slow. In the UK, Sky helped pioneer surround sound (5.1) for sports coverage and is now a proponent of Dolby Atmos. German public broadcaster ARD supports Dolby Atmos on its Mediathek app, although this is only available for Apple devices (iOS 17 and above).

France Télévisions is also looking at NGA and Dolby Atmos, says Olivier, to "enhance the audio experience for our viewers." Right now, this is mostly for live events, with Atmos being a major part of the broadcaster's coverage of both the French Open tennis tournament and Paris 2024 Olympic Games. "These workflows are still complex and have not yet become standard at France Télévisions," he comments.

NGA has been promoted as the audio partner for 4K broadcasts, although the UHD World Association now has its own contender in the form of Audio Vivid. This combines objects, channels and higher-order Ambisonics for its spatial aspects with an AI-based codec for audio compression, among other functions. It may have taken some time—and there is probably still a way to go—but broadcast sound is definitely part of the modern world and looks as though it can go further into the future.

Henry Goodman
PICTURED BELOW:
Yannick Olivier

AI: ARE WE SPOOKING OURSELVES?

Watching reactions to Channel 4’s groundbreaking experiment, Matthew Corrigan wonders, are we overthinking the threat of generative AI?

Well, someone finally did it. Viewers of Channel 4’s Dispatches: Will AI take my job? found all was not as it seemed as the programme drew to a close. The experiment pitted four professionals against AI, seeking to determine whether humans or machines would prevail in a series of work-based challenges. Rather unsettlingly, the digital creations were represented by convincing replicas of the humans they were competing with. The four, a doctor, musician, lawyer and photographer, were all visibly shocked upon first encountering their avatars.

The documentary marked journalist Aisha Gaban’s screen debut. A straight-from-central-casting presenter, easy on the eye and articulating fluently in accentless English, Gaban handled things with aplomb as the plucky foursome battled their algorithmic adversaries.

Thankfully, for those of us with hearts that beat, the result went our way. Flawed though we undoubtedly are, it was the organics ‘wot won it’, narrowly outperforming the digital upstarts. It was, however, no pushover. AI did not disgrace itself, demonstrating how close it is to at least achieving parity, with all the potential repercussions that outcome might unleash.

Underlining the significance of what had just happened, Dispatches saved its big reveal to the end. Rather than being a hitherto unseen television anchor, Aisha Gaban announced that it was itself an AI creation.

Channel 4 was careful to point out that the stunt was under control. By revealing the robot, it ensured adherence to the ethical policies it unveiled earlier this year, designed to ensure transparency in its use of the technology. Perhaps surprisingly for a broadcaster not known for its reticence to court controversy, it seemed to fear a public outcry.

It needn’t have worried. If the producers hoped for howls of fury, they were disappointed. Most of the comment around the event seemed to come from within the media and entertainment industry itself where, possibly unwittingly, Channel 4 had tapped into a rich seam of strife. Concerns around the use of AI—more specifically, generative AI—are growing. Barely a week passes without a doom-laden prediction of mass unemployment brought about by competition from the robots. From Croydon to California, workers in the creative industries are increasingly worried about the march of the machines.

But do they need to be? First of all, let’s not forget that AI isn’t really anything new. Sure, the mass collection of data and the power to process it are increasing at an exponential rate, but as Moore’s Law tells us, that was always an inevitability. What maybe wasn’t so inevitable was the technology’s ability to fool us—or, more accurately, our ability to fool ourselves.

Losing the imitation game

Arguments around the ability of machines to pass the so-called Turing Test, in which “an average interrogator will not have more than 70 per cent chance of making the right identification [human or machine] after five minutes of questioning”, still rage, even after UC San Diego researchers achieved a 73 per cent rate earlier this year. Indeed, some experts suggest Turing’s ‘Imitation Game’ is no longer a reliable measure.

The technology behind AI is certainly ingenious, its capacity for data processing phenomenal (as is our own), but it just isn’t human. Aisha Gaban might look and sound the part, but it is simply incapable of reacting with empathy to an upsetting story. Nor can it hold a captivated audience in the palm of its hand as it relays the still-breaking details of a world-changing event, or leap excitedly out of its chair to cheer on a recordbreaking athlete closing in on a gold medal victory. Gaban would never have “thought” to count them all out and count them all back again.

We have been here before. When the Luddites smashed the machinery of the Industrial Revolution, they did so because they feared their jobs would be taken. Instead, the world of work evolved. Humanity co-existed with its mechanised assistants and they jointly advanced our species.

While there are understandably concerns over AI, and regulation is urgently needed to head off a Wild West-style free-for-all nightmare, we would all do well to remember it is just another machine. AI is an incredibly powerful tool, but we built it. It exists to serve us. And, at least so far, it just can’t do what we do—it can’t successfully replicate spontaneous, emotionally-driven human behaviour. Can it?

Aisha Gaban

INNOVATIVE entertainment

Jenny Priestley sits down with Warner Bros Discovery CTO Avi Saxena to discuss technological advancements driving the company’s global streaming platforms

Avi Saxena joined Discovery Networks (as it then was) back in 2019, having previously worked at tech giants including Amazon and Microsoft. His initial mission was to build a direct-to-consumer product, which led to the creation of discovery+. This paved the way for a larger role when WarnerMedia and Discovery merged, forming Warner Bros Discovery.

As chief technology officer, Saxena now oversees all digital products, including HBO Max, discovery+, and Eurosport, uniting them under a single backend platform called Bolt, to ensure a consistent, high-quality streaming experience across all devices and content genres. With HBO Max now available in over 100 countries and 30 languages, the past three years have been a period of intense growth and expansion.

Saxena cites sport as a key area for innovation within the industry, especially in terms of personalisation and dynamic viewing experiences. “If you watch football on discovery+, you’ve probably noticed some of the innovations we have brought to the product, like timeline markers,” he states. “If you watch the Olympics on discovery+, we do medal alerts to make it easy to switch between different sports going on at the same time.”

All of these innovations are transforming the way viewers engage with content, particularly if it is live. “When people watch a movie, they want a more lean-back experience. They get their popcorn and coffee and just sit down and watch a movie and really enjoy it. But when it comes to sport, if it’s golf, football or cricket, it could last hours. People don’t always have hours to watch the game, so they’re looking for things such as timeline markers to quickly show them the next goal, the next yellow card.

“Multiview lets viewers switch between games. There might be three football matches going on, and they might want to keep an eye on the score and quickly flip between them when something happens. We are working on a lot of innovation and then scaling how we do this for all of the different sports like football, cricket, and the Olympics.”

As CTO, Saxena’s role encompasses technology strategy, product development across global markets and platforms, and building an effective technology organisation, which leads to him asking a lot of questions. “On the technology strategy side, it’s about exploring how we build a platform that supports all of our products. How do you build a technology organisation to build the product, because as you know, our organisations are split across the globe? How do you build an organisation which is effective and delivers on the promise of one global platform and a great consumer experience?

“Operations is also a very big aspect of my role. For example, during the Olympics, there were 60-70 concurrent events going on. How do you operate that, all the way from live encoding and getting the right sport to the right consumer? It’s a very complex undertaking.”

AI: The new foundational technology

Artificial intelligence is at the core of WBD’s innovation strategy. Saxena reveals that AI is being applied across every aspect of the business, from internal operations such as content localisation to enhancing consumer personalisation.

He cites the previously mentioned timeline markers as a prime example of how AI is helping identify exciting moments in sports streams. “You can have humans sit and press a red button and say, there’s a goal here, there’s a yellow card here. But that doesn’t scale if there are 20 games at the same time,” he explains. “We use AI to identify if something is happening in a stream, and where it started. It puts in a marker to detect exactly what took place.”

As well as employing AI for its consumer experiences and content processing, WBD is also leveraging the technology in engineering best practices, code development, and system testing. “You name it, we are using AI in that area.”

Asked what words of advice he would offer his creative colleagues about the impact of AI, Saxena states that while it is helping with certain aspects of content, it is not a content creator.

“Warner Bros Discovery is really a company of creators. Content is our product. People come to us to consume the best-of-breed content that our creators produce. We don’t think AI is ready to create new content,” he stresses.

Instead, when talking to creatives, he encourages them to consider AI as an

extension of what they do, not a replacement. “There was initially a little bit of anxiety around CGI, but once people adopted it, they realised they could create more and offload some of the more manual and mundane tasks to technology.

“The other thing I would say is, technology evolves continuously. Look at the last 100 years in media or any other industry, technology has constantly evolved, but storytelling has endured. It’s about how you tell the story, how you engage the customers. There are all these disaster scenarios about AI and how it can make a movie by itself. But AI is not going to make a movie by itself. I work with all the top AI model providers in the world, and none of them are even remotely close. They can create short-form video, but making a movie? That’s not going to happen, not in the foreseeable future,” he states.

“Storytelling is all about creating new content, new concepts, new characters, new emotions. That’s very hard for AI. However, being a technologist, I would never say never. Someday, in future, that might just happen.”

Don’t forget the audio

Beyond visual innovation, WBD is heavily invested in delivering immersive audio experiences to its at-home audiences.

“We really believe audio is more than 50 per cent of the experience, especially when it comes to theatrical content. More and more consumers are watching theatrical content in their living room, so when you’re watching a Superman movie or a Barbie movie, you really want to have a theatrical experience.”

To help meet those expectations, all of the content produced by WBD’s studios is created in Dolby Atmos, even if viewers at home don’t have that capability. “We give you stereo or 5.1, whatever your device is capable of, created from the Dolby Atmos feed. That means the quality is much, much superior to something which was originally recorded in 5.1 or stereo.

“These things are really helping our audience enjoy our content at home,” states Saxena. “We’ve also gone back and re-encoded a lot of our library so that the dialogue is front and centre, like Dolby Atmos.”

A look into the future

The broadcasting industry faces significant challenges, primarily driven by evolving consumer habits. Viewers increasingly prefer ondemand content, watching it across multiple devices, whether they’re at home or travelling.

“These disruptions in broadcast are creating an opportunity for us to meet our consumers where they are,” continues Saxena. “We are investing very heavily into digital extensions, FAST channels, streaming platforms, download features, and then making our products, HBO Max and all other digital products, available on different platforms.

“We need to focus on multi-platform storytelling. How do you seamlessly transition between these experiences? This is where we are investing the most, because we know that the days of somebody sitting in front of one TV all day long are gone.”

MAKING waves

La Monnaie Opera in Brussels, Belgium, first opened its doors in 1700, well before broadcasting was even a possibility.

With the advent of TV and streaming, the venue has developed a special audio workflow to make its productions sound as immersive as possible for audiences watching at home.

La Monnaie employs two Lawo mc²56 consoles, with the broadcast desk located in a different building but connected to the same IP network as the desk mixing the live production. Each production is recorded twice and then assembled, edited and post produced using Avid’s Pro Tools with the resulting audio or video files available to stream on the opera’s own and third-party streaming services.

The sound team at La Monnaie describe themselves as “avid” users of Waves Audio’s processing tools for immersive audio environments following their integration into Lawo’s mc² production consoles.

“This allows us to remain in control of the entire audio chain, including our Waves effects,” explains sound engineer Niels De Schutter. “Our Lawo consoles come with excellent dynamics DSP processing, so we use our Waves plug-ins chiefly for effects that the consoles do not provide.”

This includes reverbs and special effects, as used in the opera’s recent production, ALI, which included a character with a distinct walkie-talkie voice. “I used a dynamic EQ to boost the strident character of the voice, followed by an enhancer to alter certain harmonics, a distortion effect, and finally a compressor.”

At the start of each new production, De Schutter assigns Waves inserts (Waves channels) to as many console channels as possible, even if only a few will actually be used. “Other sound engineers prefer to add inserts as they go, which I find more time-consuming,” he adds.

La Monnaie employs the Manny Marroquin, Abbey Road, Horizon, Renaissance, and SSL 4000 collections from Waves. All effects are played back via the venue’s speaker system, which is occasionally controlled by d&b Soundscape for an enveloping listening experience.

“One of the most intricate effects I have ever prepared for a live production was for The Turn of the Screw. It was based on a 5.1 reverb plug-in whose signal was transmitted to four speakers on stage,” explains De Schutter. “The director wanted the female soloist’s voice to move around—and the reverb signal to follow her. I sent the voice to a group that was processed by four reverb channels as an insert effect. This allowed the guest audio engineer to use the PAN control to track the female soloist.

“Unlike other sound engineers who tend to rely on their personalised effects stack, I prefer to start from a preset and its default settings. Most productions are so different in nature that a standard effects stack would be of little help anyway.”

For live broadcasts, De Schutter and the venue’s other sound engineers tend to employ a much wider gamut of Waves plug-ins in order to recreate the listening experience inside the opera hall. “We mainly use bus processing for the singers’ headsets, the orchestra and the choir.

“Even classical music broadcasts require effects to make the mix shine, but unless a heavily processed sound is called for, I look for something that sweetens rather than alters the sound.”

A SMALL STUDIO WITH BIG AMBITIONS

Blueprint Studios London is a new broadcast and content creation facility offering professional broadcast-quality technology to corporate clients, traditional broadcasters, and video-first podcasts. Jenny Priestley pays a visit to find out more

Nestled in the leafy streets of Fulham, Blueprint Studios London is part of Blueprint Partners, an agency specialising in marketing and events, which also has experience in working on multi-camera outside broadcasts.

The company celebrates its 30th anniversary in 2026 and has satellite offices in Dallas and California, with London as its headquarters.

When an opportunity arose to expand the office footprint by repurposing a kitchenette, the team jumped at the chance, building a new facility capable of hosting both corporate and traditional broadcast productions as well as video-first podcasts.

“We were originally thinking about starting a production company,” explains Mark Anand, chief creative and executive officer of the Blueprint Group.

“Through the process of thinking about that, we identified that it could be a space where we could create content.”

After some consideration, the company chose to forgo the idea of a production company and turn the space into a studio instead.

“We knew that the first 20 per cent of our capacity would be taken up by our existing client base,” adds Anand. “Anyone can shoot content on a phone or a laptop, but these days, there’s an expectation of quality combined with the crunch in budgets.”

He cites an example from before the pandemic, when Blueprint’s agency side would be asked to shoot a video with a corporate CEO. This often involved spending upwards of £5,000 once the costs of a crew, travel, parking and lighting were factored in.

“As corporate clients’ budgets shrink, we can say, if your CEO can come to us, it’s going to cost you £200 to shoot rather than a couple of thousand, and it will look better than shooting in a meeting room, which is what we inevitably ended up doing.”

However, Blueprint Studios London isn’t just about corporate clients and visual podcasts. The studio is also capable of TV-grade production, and with studio space hard to come by in London, that could turn out to be Blueprint’s USP. “We are aiming at the top end of podcast producers and the bottom end of broadcast or sports,” explains Anand.

“The studios you can currently hire are black, white, green, or might have a view of Tower Bridge, and you spend a day dressing it, hiring stuff in, and it ends up with a plasma screen and a logo on it, which doesn’t look great. We wanted a turnkey solution that can handle a certain level.”

Anand is also keen for Blueprint Studios London to tap into the new trend of sports leagues awarding rights to non-traditional broadcasters. “Suddenly, you’ve had these podcasts that have existed for ages, and all they needed was a couple of cameras. But now, they’re going to need clips, they might have live links, or highlights and replays. They need telestration because they’re basically producing Match of the Day, and podcast studios aren’t going to be able to deliver the facilities that they need.

“We can broadcast out of the building. We’ve got all the capabilities that you would get somewhere like Timeline Television, but we’ve done it in a very small footprint that you can use for an hour, a half day, or a day. Also, if a project wants to bring its own director and producer, our team will act as support, or if a content creator wants to come in and needs some help, the team will be there to help them figure it out.”

“I think, particularly as sports podcasts become more like TV shows but they don’t have huge budgets, there’s a gap in facilities that can support them. Blueprint is there and ready to go. In fact, we are talking to a broadcaster about wraparound content for the FIFA World Cup next year. It’s a long way from being confirmed, but we’ve got everything they need.”

Another key factor for any creatives considering using Blueprint’s studio is its ability to expand to meet their needs. The studio and agency are located at flexible workspace Uncommon, Fulham. “There’s a coffee shop, a green room. We can provide dressing rooms. We’ve got showers in the building. We’ve got parking if needed, and we’re between two tube stations on the District Line. We feel that that’s quite a compelling case,” states Anand.

Although it hasn’t happened yet, there are plans to connect the London studio to Blueprint’s offices in the US. “We put so much investment in the gallery, and because we’ve got the connectivity, we could easily spin up a studio in our Dallas office that relies on the infrastructure here. All we would need is cameras, lighting and an internet connection.

“That could also be true for upstairs in our London building if we decide to expand out or open another office in the city. We are called Blueprint Studios London, with a view that there’s expansion in that. We’ve built the studio and the gallery with the bandwidth so that we could expand quite easily without having to make the same investment again.”

From kitchen to content creation

As mentioned earlier, the studio is a reconfigured, soundproofed and extended kitchenette, capable of offering seven different sets. It includes a 2.4 mil LED screen that can be used for branding, a wide shot of a football ground or sports scores if needed.

“When we were building it, we wanted to avoid anything that looked like a podcast studio, such as the vertical wooden slats that you see everywhere and in six months, everyone’s going to be sick to death of. We decided to have kind of an abstract background, where we can use the RGB, DMX colours, and we can change and make it on brand.”

Mark Anand

PICTURED ABOVE:

The gallery can be operated by just one person or more if needed

PICTURED BELOW:

The studio includes traditional Shure SM7B podcast mics

Away from the LED set, the studio features a ‘display set’ which has been used by authors who have had their books on show. “And then we have a very nondescript wall, which, if you just want a plain background, gives you a bit of texture.

“We’ve also got white, black and green screen cloths and the tab tracks go all the way around the room, so we can make any corner work. The studio also has a green floor. There are lots of options within one space.”

In terms of audio, Anand says he prefers to use Sennheiser lavalier microphones, but the studio can also accommodate the traditional Shure SM7B podcast mics.

It utilises four Canon C80 cameras, chosen for their depth of field. “We run them fully open, which on camera helps make the set feel a lot bigger. The LED screen falls nicely into their depth of field. We also have an automated jib, which can be remote-controlled from the gallery.”

In the gallery

The gallery itself has been designed to work as a one-person operation, but can be easily expanded to include EVS or additional graphics.

It has an Allen & Heath SQ-5 48-channel digital mixer for sound, with a full Dante card and virtual sound cards. For talkback, Blueprint uses Glensound’s Beatrice. “We can do in-ears for presenters, but also, if we’re doing OBs or remote galleries, we can very quickly tie into those if we need to. Again, all patched through Dante.”

The gallery uses vMix for the LED output and virtual sets. “In fact, we have both vMix and a Blackmagic ATEM Constellation with 40 channels in and out. Sometimes we’ll be going through vMix to Blackmagic and other times we’ll swap them over, so it’s very flexible, and we can repatch anything wherever we need it to be.”

Both the studio and gallery operate in an SDI world. Anand explains they do have IP capabilities, but only for sound and streaming. “Any input, as soon as we get it, if it’s not SDI, we convert it. We run everything through SDI so we can patch it anywhere.”

The gallery also includes four Blackmagic HyperDecks, with recording sent to Blueprint’s NAS drive, enabling the team to edit in real time. “We made sure we futureproofed both the studio and the gallery. There are lots of inputs everywhere for extra audio and pictures, there’s Ethernet because producers want to come in with their own graphics, or we might want to bring additional equipment, so we can expand the gallery out if need be.”

All of Blueprint’s monitors have been supplied by Eizo, and are the company’s grade one models. “It has a little sensor that, when we calibrate it, flips down and does a self-calibration, and that colour profile goes across all the monitors in the gallery, in the studio and in the office. So everyone is looking at the same picture colour, which is really valuable.” Blackmagic DaVinci Resolve is used for colour correction.

PICTURED ABOVE:

The ‘display set’ has been used by authors who have had their books on show

“We had to custom-build our racks. We ran out of space with the first one we brought in. Media Powerhouse worked with us to do the installation, and then our in-house IT director handled the storage and everything else.”

Anand says that Blueprint is an Adobe company “through and through”, partly because the company is one of its clients. “We use Adobe’s AI tools for clipping. We can take an hour-long show and, for a flat fee, give you 10 subtitled 9x16 clips ready for you to share on social media. So we’re using AI where it helps for tasks which, frankly, are boring to do, but we’ve got a skilled in-house team to craft the messaging that needs care and attention.”

What’s next?

While the studio has only been open for a few weeks, Anand has lots of plans for its future, including a possible push into virtual production. “You see virtual sets done well for Wimbledon or the Olympics, and I think we are not far away from being able to rival that quality in a much smaller and more cost-effective space.”

He’s also hopeful Blueprint could be a useful base for anyone covering next summer’s FIFA World Cup, where the cost of sending pundits and crew could prove to be prohibitive. “Broadcasters might send their key presenters, but for wrap-around content, Blueprint would be perfect. We could run in the mornings as a normal facility, and give it over to a broadcaster for the afternoon, and if they want to take a production office here, we can facilitate that in the building.

“The phrase I like to use is, we have everything you need and nothing you don’t. We’re also not restricted by space, so if we had to spin up another studio temporarily, there are plenty of offices within the building. We could even convert the yoga studio into a set and use our gallery.”

It’s not hard to see why Anand is so enthusiastic about the future of Blueprint Studios London. While it may be small, it has big potential. “They say, if you build it, hopefully they’ll come, and we spent a lot of money building it. I spent every day of the build thinking back to all of the years standing in broadcast studios and everything that frustrated me.

“This studio was designed by people who work in broadcast, day in, day out, and have done so for a long time. Some of the decisions we’ve made are not necessarily the most architecturally obvious solutions, but they are for broadcasters and for producers. They’re designed to let them get on with doing their job and let us worry about what they need to get it done.”

A NEW PARADIGM IN ‘CONTENT-CENTRIC’ WORKFLOWS

In the wake of an inaugural live broadcast demo at On Air 2025, David Davies speaks to AWS and Techex about the potential of Time Addressable Media Store

The decision to implement a Time Addressable Media Store (TAMS) API in conjunction with Techex’s tx Darwin modular software platform in a debut live case at On Air 2025 underlined the extent to which, in only a couple of years, the concept has established itself at the cutting edge of broadcast. Yet it’s arguable that, in general, the extent to which it diverges from existing hybrid and cloud storage models is not yet widely understood.

TAMS, which was open-sourced by BBC R&D in 2023, received its public debut at IBC 2024 with a demo on the stand of early enthusiast Amazon Web Services (AWS). Since then, the BBC has continued to work with AWS, “building the TAMS community to add more partners and end-users through a programme of AWS Cloud Native Agile Production partner enablement sessions on both sides of the Atlantic.”

Chris Swan, a principal solutions architect specialising in content production at AWS, recalls that the company’s awareness of TAMS emerged from a “critical inflection point in 2023” when it was confronting fundamental challenges with customers’ cloud migration trajectories: “What became immediately clear in our conversations with customers like the BBC was that traditional ‘lift and shift’ approaches weren’t going to deliver the transformational benefits our customers needed.”

PICTURED BELOW:

Chris Swan

When AWS became aware that BBC R&D had published the TAMS specification as open source, adds Swan, it was a “eureka moment”. “Here was a cloud-native approach that could fundamentally reimagine how media workflows operate, moving from file-based to content-centric architectures. The application potential was immediately apparent: this wasn’t just about storage, it was about enabling true interoperability and breaking down the vendor silos that have constrained our industry for decades.”

Chunked media

In a nutshell, TAMS enables media storage in chunked form, facilitating the transfer of only the section of content required at any one time. Swan explains: “At its heart, TAMS uses standard object storage to hold the media as small chunked segments with timing and identity as key primitives. This removes the need for high-performance file systems and duplication between the edit file storage system and the archive object storage, all of which drives cost savings. Being software-driven, it also allows for faster resource deployment times compared to traditional workflows and the ability to scale event-driven workflows up or down in minutes.”
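To make the model concrete, here is a minimal sketch of what a content-centric read from a TAMS-style store might look like. The base URL, flow ID and timerange syntax are illustrative only, not taken from any particular deployment:

```python
# Minimal sketch: ask a TAMS-style store for the segments covering a
# 30-second window, instead of downloading a whole file.
# The endpoint shape, flow ID and timerange syntax are illustrative.
import requests

TAMS_BASE = "https://tams.example.com"            # hypothetical store
FLOW_ID = "0e58d2bb-6a3f-4e6e-9f7a-1c2d3e4f5a6b"  # hypothetical video flow

resp = requests.get(
    f"{TAMS_BASE}/flows/{FLOW_ID}/segments",
    params={"timerange": "[0:0_30:0)"},           # first 30 seconds
    timeout=10,
)
resp.raise_for_status()

for segment in resp.json():
    # Each record points at a small object in standard object storage;
    # timing and identity travel with it as first-class metadata.
    print(segment["object_id"], segment["timerange"])
```

The client asks for a window of time rather than a file, and the store replies with only the segment records, and object-store locations, that cover it.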

David Mitchinson, solutions director at Techex, highlights some of the other benefits offered by the API: “The TAMS timeline means that media sections are indexed uniquely and synchronisation between essences is retained, making it possible to store video and audio components separately. Once stored, the original asset is never changed (it is ‘immutable’) but can be registered multiple times in metadata to efficiently implement a range of functions from editing to time delays. Multiple operators can be working on the same content at the same time, and with the scalability of cloud it’s easy to see how powerful this methodology can be.”
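Mitchinson’s immutability point can be sketched in a few lines: a clip, edit or time delay is just a new metadata record over existing segments, never a rewrite. The identifiers and timerange notation below are invented for illustration:

```python
# Sketch: in a TAMS-style model, an 'edit' is new metadata over
# immutable segments. All names and notation here are illustrative.
original_segments = [
    {"object_id": "seg-0001", "timerange": "[0:0_10:0)"},
    {"object_id": "seg-0002", "timerange": "[10:0_20:0)"},
]

# A highlight clip is simply a new flow that references existing
# objects; the stored media is never rewritten, so any number of
# operators can build different clips over the same segments at once.
highlight = {
    "flow_id": "highlight-flow-0001",   # hypothetical identifier
    "references": [s["object_id"] for s in original_segments],
    "timerange": "[5:0_20:0)",          # a 15-second window
}
print(highlight)
```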

Swan is equally sure that it is a transformational technology, describing it as a “paradigm shift” that delivers “transformational advantages across multiple dimensions. Unlike proprietary systems that lock customers into single-vendor ecosystems, TAMS provides an open framework that enables best-of-breed tool selection across the entire production pipeline, fundamentally changing how organisations approach media infrastructure.”

PICTURED BELOW:

David Mitchinson

The technology can also play a significant role in preparing broadcasters for the AI era. "Perhaps most importantly, TAMS creates a foundational layer for AI-powered workflows, enabling real-time analysis and automated content enhancement that simply wasn't possible with legacy file-based systems, positioning organisations for the next generation of media production capabilities,” says Swan.

Taking TAMS to market

Having recognised the potential of TAMS, AWS formalised the Cloud Native Agile Production (CNAP) programme in early 2024, partnering with the BBC and Sky as anchor customers alongside eight technology partners—including Adobe, Techex and CuttingRoom—to create a comprehensive ecosystem approach. A period of validation and enhancement of the TAMS API specification preceded the industry launch at IBC 2024, where AWS’ TAMS demonstration was presented over 60 times, generating interest from 23 customers and 24 partners, reports Swan.

Subsequently, there has been further work with the BBC to develop the TAMS specification, a series of jointly-run enablement workshops to help attendees “dive deep” with the TAMS API, and then, in October, the first-ever live production use case of TAMS at the global student-led broadcast event, On Air, hosted at Ravensbourne University in London. The ambitious project brought together over 900 people, including students and industry professionals, to deliver a 24-hour continuous live programme streamed worldwide on YouTube.

John Biltcliffe, a senior solutions architect at AWS, explains how TAMS was used during On Air: “Given that this was the debut live use case, the deployment was carefully scoped to ensure no impact on the broadcast. The core function of TAMS was to perform a continuous single, 24-hour record of the content into the TAMS store, completely replacing traditional video server infrastructure.”

He points to simultaneous access and clipping as the “breakthrough capability” for this project. “As the live broadcast was running, students around the world were able to log into the TAMS UI, view the live ingest, select specific time-addressable sections, and clip/download that content. For instance, one student focused on editing clips for social media, relying on TAMS to scour the stream for content immediately after it happened. The final highlights of the 24-hour broadcast were created using content sourced directly from TAMS, and after the event, all the YouTube deliverables were created from the TAMS recording.”

PICTURED BELOW: TAMS demo at IBC 2025

Reflecting on the demo, Biltcliffe says the key learning was the validated ability of TAMS to enable workflows where live content can be utilised right away during ingestion, regardless of location. “This real-life environment successfully tested and pushed the technology on an ‘epic scale’. The success has reinforced our strategy to adopt TAMS for future live production use cases, validating the model of making content instantly available for repurposing (such as creating social media clips or highlights packages) during the event itself.”

With AWS and the BBC working towards the management of the TAMS specification being undertaken via an open-source foundation, and demand expected to spread from initial news and sports applications to reality TV, live entertainment and post production, the next few years are looking extremely busy.

Indeed, Swan concludes by offering the following prediction: “By 2027, I expect TAMS-based workflows to become a significant component of new media infrastructure deployments, driven by the compelling combination of cost efficiency, interoperability, and AI-readiness that makes adoption inevitable for organisations seeking competitive advantage.

“The ultimate vision is an industry where content flows seamlessly between organisations, tools and platforms, shifting focus from managing files to creating compelling experiences for audiences worldwide while enabling new forms of collaboration and content monetisation not previously possible.”

John Biltcliffe

Empowering storytellers

Everett Salyer, senior specialist of business development at Shure, explores the creative impact of array microphones on modern broadcast production

Broadcast audio is entering a transformative era. As technology, audience expectations, and production workflows evolve, the industry is reimagining how sound is captured and delivered. Array microphones are emerging as a pivotal innovation, offering new ways to create immersive, authentic experiences that bring fans closer to the action than ever before.

Why array microphones, and why now?

Several trends are converging to make array microphones essential for the future of broadcast audio:

• Evolving workflows: The shift to IP-based and cloud-driven production environments demands flexible, remotely controllable audio solutions. Array microphones integrate seamlessly, supporting hybrid and REMI workflows that are becoming the industry standard.

• Budget and resource pressures: As traditional revenue streams shift, broadcasters need tools that maximise efficiency without sacrificing quality. Array microphones offer multi-lobe coverage and remote control, reducing the need for extensive on-site crews and equipment.

• Exploring new revenue streams: Broadcasting companies are moving beyond cost-cutting to explore new revenue opportunities. By integrating AI software with array microphones, they can truly capture the dynamic energy of live events and offer paid, premium audio experiences previously unattainable with legacy technology.

• Audience immersion: Array microphones capture the nuance and energy of the environment, delivering a sense of ‘being there’ that traditional microphones struggle to match. As broadcasting companies compete for viewership, immersive experiences have become strong, unique selling propositions that can separate them from the pack.

How array microphones are changing the game

Array microphones represent a leap forward for the industry in both technology and creative potential. For example, multiple virtual lobes can be directed to capture specific sources, adapting in real time to the flow of the event. This flexibility is invaluable in dynamic settings like sports, concerts, and live interviews.

Moreover, engineers can adjust pickup patterns and coverage from anywhere, supporting safer, more efficient workflows and enabling rapid response to changing production needs. Built-in auto-mixers and advanced algorithms help isolate key sounds and minimise unwanted noise, resulting in cleaner, more focused audio that enhances storytelling.
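The ‘virtual lobes’ described here are, at root, beamforming. As a rough illustration of the underlying idea rather than any vendor’s implementation, the sketch below uses a simple delay-and-sum beamformer: each capsule’s signal is delayed according to the array geometry so that sound from one direction adds coherently. The capsule count, spacing and sample rate are assumptions:

```python
# Illustrative delay-and-sum beamforming: the basic mechanism behind
# steering a 'lobe' from a mic array. Real systems layer adaptive
# algorithms and auto-mixing on top of this.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, room temperature
N_MICS = 8               # assumed capsule count
SPACING = 0.04           # assumed 4 cm spacing on a linear array

def steering_delays(angle_deg: float) -> np.ndarray:
    """Per-capsule delays (seconds) that point one lobe at angle_deg."""
    positions = np.arange(N_MICS) * SPACING
    return positions * np.sin(np.radians(angle_deg)) / SPEED_OF_SOUND

def steer(block: np.ndarray, angle_deg: float, rate: int = 48000) -> np.ndarray:
    """Delay each channel by whole samples and sum into one steered output."""
    delays = np.round(steering_delays(angle_deg) * rate).astype(int)
    out = np.zeros(block.shape[1])
    for ch, d in enumerate(delays):
        out += np.roll(block[ch], d)   # crude integer-sample delay
    return out / N_MICS

# block: (channels, samples) of captured audio; steer 30 degrees off-axis.
block = np.random.randn(N_MICS, 48000)
focused = steer(block, angle_deg=30.0)
```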

Everett Salyer

Real-world impact: from the court to the crowd

The adoption of array microphones is already reshaping the broadcast landscape. At major sporting events, array microphones have delivered audio so true to life that even athletes are surprised by the details, like pinpointing the location of a hit in AMMA (Armored Mixed Martial Arts) or capturing the exact feel of a basketball court.

When broadcasters experience array technology, it sparks new ideas for integrating sound into their storytelling, from immersive crowd audio to customisable at-home listening experiences.

AI and the next evolution in audio

Artificial intelligence is amplifying the power of array microphones. AI-driven tools, like Edge Sound Research’s Virtual Sound Engine, can identify and prioritise key sounds, like a ball bounce or a referee’s whistle, while suppressing distractions, ensuring the most important audio always comes through. Machine learning enables microphones to follow the action, activating lobes where the energy is, and adapting instantly to the pace of the event.

As part of the AI Incubator at IBC 2025, Shure developed an agentic AI assistant that enables an array microphone to respond instantly to spoken or typed commands. The project included a cloud-based audio agent that controlled an array mic in real time over standard network protocols, enabling engineers to adjust settings using natural language. This approach allowed a single array microphone to replace multiple analogue mics, streamline setup, and provide flexible, consistent audio coverage that could be changed mid-programme with a simple command.
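The control path can be pictured as something like the sketch below, where a toy ‘agent’ maps a natural-language request to a structured message sent over the network. The command schema, address and transport are entirely hypothetical; this is not Shure’s actual protocol:

```python
# Hypothetical sketch of natural-language mic control. The schema,
# host address and TCP transport are invented for illustration.
import json
import socket

def command_from_text(text: str) -> dict:
    """Toy 'agent': map a natural-language request to a control payload."""
    if "lobe" in text and "presenter" in text:
        return {"action": "steer_lobe", "lobe": 1, "target": "presenter"}
    if "mute" in text:
        return {"action": "mute_all"}
    return {"action": "noop"}

def send_command(host: str, port: int, payload: dict) -> None:
    """Ship the payload to the device over a plain TCP socket (assumed)."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(json.dumps(payload).encode("utf-8"))

send_command("10.0.0.42", 9000,
             command_from_text("move lobe one to the presenter"))
```

In a production system, the mapping step would be an LLM-backed agent and the transport a standard control protocol, but the shape of the flow, from text to validated command to device, is the same.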

The demonstration showed that AI-driven systems and array microphone technology can manage broadcast tasks quickly and accurately, reducing manual workload and supporting more creative storytelling. Intelligent, adaptive audio is now proven to be achievable, marking a significant step forward for immersive and responsive broadcast production.

Looking forward: the broadcast audio landscape ahead

The next five years will see array microphones become foundational to broadcast audio. As venues upgrade to IP and cloud-based systems, array microphones will be built into the fabric of production, always ready for remote access and creative deployment.

The detailed coverage that arrays provide opens new possibilities for analytics, from crowd engagement to player performance, supporting both production and coaching. With less time spent on setup and more on creative decisions, audio professionals can focus on what matters most: crafting compelling stories and unforgettable experiences.

Empowering creativity: the true value of array microphones

The most exciting aspect of array microphones isn’t just their technical prowess; it’s how they empower broadcasters to imagine new workflows, experiment with sound, and tell richer, more impactful stories. Every deployment reveals new possibilities, proving that the future of broadcast audio is limited only by our creativity.

AI-driven audio systems free engineers from repetitive technical adjustments, allowing them to focus on crafting compelling stories and enhancing the emotional impact of live events. By automating routine microphone management, production teams gain more time and flexibility to experiment with sound design, respond to dynamic moments, and tailor audio experiences to the unique atmosphere of each broadcast.

This shift to automation empowers creative professionals to push boundaries, innovate with new workflows, and deliver richer, more immersive content that resonates with audiences. Ultimately, intelligent audio tools become collaborators in the creative process, supporting engineers and storytellers as they shape the future of broadcast production.

As the industry moves forward, array microphones will play a central role in defining the next generation of broadcast audio. By enabling immersive, flexible, and creative sound capture, they are helping broadcasters meet new challenges and deliver experiences that resonate with audiences everywhere.

Array microphones deliver a sense of ‘being there’ that traditional microphones struggle to match
Machine learning enables microphones to follow the action, activating lobes where the energy is, and adapting instantly to the pace of the event

A day in the life…

What is your job title and what does it entail?

I’m a development producer specialising in unscripted television, shaping ideas from the spark of a concept through to a pitch-ready deck for broadcasters, streamers and brands in the UK and beyond. Alongside development, I frequently work across pre-production, on location and in the edit when the project requires me to. This tandem production experience really informs and strengthens the ideas that I’m able to develop. For the last two years, I have been working at King of Sunshine Productions in Salford. No two days are the same, and that’s exactly why I love it.

Tell us about your most recent project.

I recently returned from filming a documentary in Norway about the famous Bergensbanen railway, following its journey and exploring the communities who live and work along the train route as they prepare for the busy Christmas season. I worked closely with the managing director, Sohail Shah, in choosing which train we would feature, as well as casting the main contributors beforehand and on location. We shot the entire film with three camera teams in a single week, an intense but brilliantly collaborative effort made possible by a very skilled crew. A personal highlight was staying in a remote, snow-covered hotel only accessible by the train itself.

Describe your daily working routine.

My day starts with a quick commute into MediaCity, where I work closely with our MD to shape and manage the company’s development slate. We prefer working through briefs and ideas in person as it makes the creative process faster and far more collaborative. Most days are spent building decks and pitch materials across factual entertainment, formats, documentaries, reality and everything in between. When I’m not designing a deck, I’ll be writing treatments, researching stories, meeting specialists or casting potential contributors. One day I might be deep in the history of the Bernina Express, the next I’m learning about health gurus or tropical, remote islands.

What sort of technology do you work with on a daily/frequent basis?

The technology I use mainly comprises websites and apps that help me create my vision for a show. A finished deck may sometimes include a taster tape or sizzle, and it’s part of my job to make all our materials as coherent as possible, so every element we bring to the table when we pitch helps to build the ‘world’ that the idea inhabits.

Canva has become increasingly popular and it’s one of my go-to tools, especially for quick-turnaround designs that still make an impact. ChatGPT is also incredibly useful; I often use it to streamline early research, cutting hours off the initial digging stage.

How has technology changed your life since you started your career?

When I started out, decks were mostly built in PowerPoint and everything from research to image sourcing took considerably longer. Now, AI tools have transformed the early stages of development, from generating bespoke visuals in Midjourney to research support through ChatGPT. Used thoughtfully, they make the process far more efficient, especially for smaller indies where time and resources are precious. I try to stay in the loop when it comes to these tools and how they’re evolving, because they’re already reshaping what development teams can achieve.

What has been your favourite/most memorable assignment?

I’ve loved different elements of every production I’ve worked on, but BBC’s Idris Elba’s Fight School for Workerbee will always be a standout. Casting took nearly two years, and I met a lot of incredibly inspiring people along the way. Filming the series in London was just as rewarding and even encouraged me to start boxing, which I still do now.

Another favourite is Alpine Train at Christmas, my first commissioned idea for Channel 4 and my first for King of Sunshine. I love taking an idea from concept straight through to delivery and I’m lucky to work somewhere that actively supports that journey.

And least favourite (names may be withheld to protect the innocent/guilty)?

My least favourite development assignments are the ones that refuse to click. These are usually formats where something in the gameplay just won’t land, no matter how many versions we try. They can be very frustrating, but there’s usually a moment, after a lot of staring out the window, when it finally falls into place.

What do you see as the next big thing in your area?

It’s always hard to predict exactly where development is heading, but AI is already shifting the landscape. With new cost-effective technologies, we may well start to see smaller development teams achieving the same quality of output as their larger competitors. From visualising how a gameshow might look in a studio, to early-stage world-building for original IP, these elements can now be developed much earlier, which could dramatically speed up the journey from concept to pitch to commission. Social media is also playing an increasingly important role, creating space for new talent to emerge and build audiences, which offers production companies more opportunities to champion unheard stories and fresh voices.

On location in Norway

NAVIGATING MUSIC RIGHTS IN A fragmenting TV landscape

The dramatic changes in broadcasting this century have transformed broadcasters’ relationship not only with consumers, but also with the music used in their productions. Video-on-Demand (VoD) and Free Ad-supported Streaming Television (FAST) have exploded, and the TV, film and music industries are more international than ever. With more content being produced and crossing borders than at any time before, the process of licensing music and reporting its usage is increasingly difficult. Traditional national blanket licences, built for linear TV, no longer cover the global and permanent accessibility of digital platforms.

Broadcasters now face complex individual rights negotiations, burdensome reporting rules, and the challenge of manually handling increasing numbers of cue sheets—the documents essential for royalty distribution. This increases financial and operational burdens just as advertising revenues are squeezed and competition rises. Fortunately, broadcasters are not facing this perfect storm alone, with new technologies offering automated, one-stop music rights reporting that help navigate this fragmented landscape.

Changing media consumption

The growing complexity of music rights coincides with rapid shifts in European consumption habits. Paid streaming revenues overtook public TV revenue for the first time in 2024, with global streaming subscriptions rising from 1.1 billion in 2020 to around 1.8 billion in 2025. FAST is booming too: active channels have grown 76 per cent since 2023, with EU revenues forecast to jump from $2.2 billion in 2024 to $7.5 billion by 2030.

Digital distribution requires detailed cue sheet reporting. As the number of FAST channels grows, manual workflows can buckle under the weight.

Meanwhile, public service broadcasters, who commissioned 43 per cent of European TV titles in 2024, face stagnant funding. If limited budgets are diverted from commissioning programmes to managing music rights compliance inefficiently, it could undermine this critical area of the European production sector, including the cultural diversity and local storytelling it provides.

Music licensing challenges

There are further challenges as the entertainment sector becomes increasingly international. The EU’s 2014 copyright framework introduced the concept of multi-territory licensing, enabling musical compositions to be licensed directly across multiple territories. Such licences have, however, proved complicated to navigate. If, as expected, they are to become the norm for audiovisual content, EU broadcasters will have to deal with a kaleidoscope of rights owner representatives to secure all the rights necessary to broadcast a programme.

The framework also doesn’t include neighbouring rights—the rights to specific sound recordings—creating additional confusion as these rights must still be licensed on a territory-by-territory basis.

Cue the music

The bridge between the productions that use music and the composers who make it is the cue sheet, the critical document listing all music used in a production, including copyright details, duration and context of use, which is necessary for paying royalties.

Every broadcaster must deliver cue sheets, yet there is no standardised format or submission system across Europe, with different requirements for different rightsholders and regions. Dealing with these processes manually often leads to mistakes and can even delay the delivery of a production while problems are fixed.

Solutions

Cue sheets themselves are not the problem. They are invaluable tools to ensure those who create music are paid when it is used in audiovisual productions.

The solution to these issues for broadcasters lies in automation and standardisation. Manual workflows cannot handle digital scale. In the streaming economy, Music Recognition Technology that detects and identifies tracks automatically and generates accurate cue sheets is not just useful, it is essential.
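As a rough illustration of that automation, the sketch below turns recognised music segments into cue sheet rows. In practice, the detections would come from a music-recognition service; here they are hard-coded, and the column set is illustrative rather than any formal standard:

```python
# Sketch: from recognised music segments to a cue sheet file.
# Detections are hard-coded stand-ins for a recognition service's
# output; the columns are illustrative, not a formal standard.
import csv

detections = [
    {"title": "Opening Theme", "composer": "A. Composer",
     "start_s": 0.0, "duration_s": 32.5, "usage": "theme"},
    {"title": "Newsroom Bed", "composer": "B. Composer",
     "start_s": 410.0, "duration_s": 95.0, "usage": "background"},
]

with open("cue_sheet.csv", "w", newline="") as f:
    writer = csv.DictWriter(
        f, fieldnames=["title", "composer", "start_s", "duration_s", "usage"]
    )
    writer.writeheader()
    writer.writerows(detections)
# Each row carries what a collecting society needs to pay the right
# composer: what played, for how long, and in what context.
```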

Industry-wide standardisation efforts such as the Global Cue Sheet Standard 2.0, backed by major industry bodies, promise to harmonise data collection for all types of music rights. However, we are still some way off widespread adoption.

The key to further progress is in maximum adoption across the ecosystem. From news and sports networks to film production studios, automating these processes can free up resources to focus on making great programming. Spending time on creativity over compliance is crucial to protecting the future of the European broadcast sector.

THE FUTURE OF agile audio

Broadcasters are redefining how premium content is produced. The maturation of innovative IP technologies has empowered them to embrace new remote and distributed workflows that reliably decouple audio and video from physical hardware. Balancing efficiency, agility and sustainability at scale, hybrid production workflows have become essential to modern broadcasting. Blending on-premise infrastructure with remote operations and cloud resources, these flexible workflows enable broadcasters to meet today’s complex economic pressures while maintaining creative ambition and technical excellence.

This broadcast evolution signals a more collaborative and location-independent future for live production. Hybrid workflows can expand or adapt to meet the scale of any project; thousands of audio channels can be orchestrated across multiple DSP environments, with workflows dynamically adapted to production needs, no matter where teams are located.

Today, hybrid and remote workflows are not just run in parallel for backup; they are at the very heart of live broadcast operations. Remote Operation Centres (ROCs) and hybrid trucks are heralding flexible paradigms where IP cores can sit onsite at a venue, on the edge, or in the cloud, and control surfaces can exist anywhere. These trucks can work autonomously or be completely remote-controlled, while smaller units that pair powerful processing cores with more compact control surfaces for disaster recovery are also emerging.

It is exactly for these reasons that investment in new broadcast trucks is on the up. The ability to tap into a truck’s full potential from a remote location means that these remote units are no longer solely responsible for the success or failure of a live broadcast, and they no longer have to be fully staffed. They are simply an additional resource on a broadcaster’s extended network.

Having control from any location gives users ultimate flexibility, and the growth in the number of ROCs is a perfect illustration of how committed broadcasters are to this new paradigm.

Agility without compromise

The industry is not done yet. Hybrid infrastructures continue to evolve in the cloud. The growing acceptance of virtual DSP resources is encouraging more broadcasters to spin up cost-efficient processing for one-off productions, without additional capex investment in hardware. It is a shift that delivers multiple benefits, allowing teams to deploy agile technologies and adopt business models that precisely match the scale and needs of every production.

Virtualised processing engines can absorb the additional load of larger presentations. They can be adopted as standalone processors or blended with existing hardware DSP cores, while control of these resources is just as flexible, with the ability to use virtual or physical control surfaces, wherever they are located.

Room to manoeuvre

This ultra-flexible approach enables broadcasters to dynamically scale up and down to meet specific project demands. The ability to lean into temporary, virtualised processing resources instead of investing heavily in capex infrastructure gives broadcasters much more room to manoeuvre, especially for one-off productions that need a temporary boost of additional processing power. It has led to an increase in more ambitious large-scale orchestration systems and distributed DSP environments that enable large mixers with thousands of channels of audio to be replaced or enhanced by multiple interconnected DSP cores located anywhere in the world.

Ultimately, accessing remote mix engines in either a virtual or physical environment underscores the growing efficiency of modern broadcast workflows. Processing audio and video content from geographically diverse locations is now second nature to broadcasters who are accustomed to aligning independent audio, video and data flows.

And while virtual resources are seldom used exclusively in live broadcast scenarios right now, many broadcasters and content providers are routinely augmenting their hybrid setups to gain extra capacity and flexibility.

Calrec has been at the forefront of this transformation. The company’s True Control 2.0 platform anticipated the move toward distributed DSP and control, giving users the freedom to manage audio systems seamlessly across multiple locations and operational models.

The expansion of these production ecosystems is where the real value lies, and effective management and control remain essential to making them work. Looking ahead, the priority for broadcasters will be selecting the right acquisition and deployment models to align with the commercial realities of each production.

SAVING time

Theresa Vondran, category market manager, pro audio at Sennheiser, explores the differences between narrowband and wideband transmission

In recent years, broadcasting has seen much innovation in the way that content is created, produced, stored, and delivered. In content capture for productions and ENG, wireless audio has played an important role ever since the wireless microphone rose to popularity in the 1950s. Since that time, the underlying transmission scheme has remained unchanged: narrowband transmission with a bandwidth of 200 kHz per RF channel, with ‘fixed’ transmitter/receiver pairs replacing the cable and freeing the talent, the host, the reporter.

For the past year, the audio industry has been discussing wideband/ broadband wireless audio transmission for multichannel applications. So what is this new technology about? What are its benefits? And how can it contribute to a modern broadcast workflow?

How does narrowband compare to wideband?

Let me start by saying that wideband transmission, which came to be known as Wireless Multichannel Audio Systems (WMAS), can employ different technologies and have different features, depending on the manufacturer of the wireless system. For this article, I will look at a fully-fledged wideband system that ticks all the boxes of the ETSI TR 103 450 system reference document.

Narrowband RF wireless is frequency-based, meaning each audio channel operates on its own RF frequency, and you need to add channel after channel, plus guard bands between mics and IEMs/IFBs, for a multichannel system. Wideband, however, takes us into the time domain. Through techniques like OFDM, TDMA and TDD, the entire hardware, plus the control data stream, shares a single RF channel the width of a TV channel (8 or 6 MHz). Within that RF channel, each component, such as a beltpack, only transmits or receives when its turn comes up (see graphic below). A so-called Base Station is at the heart of this system, which can manage up to 64 audio channels depending on the audio mode selected.
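A back-of-envelope sketch shows how the time-division idea works. Only the 8 MHz channel width and the 64-channel ceiling come from the description above; the frame length is an assumption for illustration:

```python
# Illustrative arithmetic for a TDMA air interface: one TV channel is
# divided into repeating frames, and each device transmits or receives
# only in its own slot. The 2 ms frame length is assumed.
CHANNEL_BANDWIDTH_MHZ = 8   # width of one TV channel
FRAME_MS = 2.0              # assumed frame length
DEVICES = 64                # the quoted maximum audio channel count

slot_ms = FRAME_MS / DEVICES
print(f"Each of {DEVICES} devices gets a {slot_ms:.3f} ms slot per "
      f"{FRAME_MS} ms frame inside one {CHANNEL_BANDWIDTH_MHZ} MHz channel.")
# 2 ms / 64 = 0.031 ms: every device is serviced once per frame, which
# is how mics and IEMs can share a single RF channel.
```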

Workflow improvements

The ‘time-based’ wideband approach with OFDM, TDMA and TDD results in various workflow improvements for broadcasters and live audio operators. Microphones and IEMs/IFBs can be accommodated in the same TV channel for the very first time. The Base Station auto-arranges its audio channels within the RF carrier, which saves the RF manager or audio engineer tedious frequency calculations. Also, control and monitoring of all devices are handled at all times within the same RF channel; there is no need to set up a separate control network.

The footprint of such a wideband system is much smaller than that of a comparable narrowband system, as the amount of gear required is reduced massively. Splitters, combiners and boosters are a thing of the past; a 1U Base Station can handle many audio channels, and the beltpack handles both the mic and the IEM channel. Extending the range of an IEM system—something which required extensive power calculations before—becomes as easy as adding an additional transceiving antenna. Antennas are connected via Cat5e cables; BNC cables are no longer needed. Another workflow benefit for larger studios is easier reuse of frequencies, as a ‘time-based’ wideband system has a considerably reduced power spectral density compared to a narrowband multichannel system.

Large bandwidth operation also mitigates fast fading. This destructive combination of reflected signals is especially pronounced in indoor venues, particularly those with many metal surfaces. Because each audio channel gets to transmit on the entire wideband RF channel, fading notches that can kill a narrowband audio signal are no issue for a wideband system. In a wideband system, resources can be allocated flexibly, not only as a spontaneous addition of, for example, another microphone or IEM, but also with regard to audio quality, range and latency. Via various audio modes with different codecs, the engineer can decide per channel (and for IEM and mic separately) who gets what quality—from the engineering crew to the talent. In a nutshell, broadcasters will achieve faster setup times, more efficiency and flexibility, as well as a less complex RF wireless architecture in general.

Wideband systems have proven that they work alongside narrowband systems without any issues, and with a much simpler RF configuration.

Wideband plays out its full potential in multichannel applications, whereas narrowband will remain the technology of choice for setups with a low channel count. With these two transmission types now available, a new world of possibilities has been opened up to broadcast engineers.

BACK TO THE ‘30s

Over the last 15 years, audiences around the world have followed the ups and downs of the Crawley family and their servants at Downton Abbey. Following the conclusion of its run on ITV, the story moved to the big screen with its final instalment, Downton Abbey: The Grand Finale, released in cinemas in September. TVBEurope talks to some of the behind-the-scenes creatives who worked on the third part of the big screen trilogy to hear how thoroughly modern technology helped create a timeless classic

Visual effects company BlueBolt delivered 138 VFX shots to help recreate the 1930s pre-war era of Downton Abbey: The Grand Finale, including the opening sequence of a trip through London’s Piccadilly Circus.

Henry Badgett, VFX supervisor and creative director at BlueBolt, explains more.

Had BlueBolt worked on any of the Downton Abbey projects before?

No, but we were delighted to be asked to work on the finale! Downton has been such a long-running series, first broadcast on ITV back in 2010. I started watching it in lockdown and raced through all the TV series, before enjoying the films when they came out. So it was very exciting to be on set and to be involved.

How did you get involved with The Grand Finale?

We have a long relationship with Carnival Films, having collaborated on The Last Kingdom and many other projects with them and producer Mark Hubbard. We’ve also worked with Focus Features on projects such as Nosferatu and The Northman.

How long did the process take?

Our first contact and initial brief was in March 2024, filming was over that summer and we delivered in March 2025.

How big was the team at BlueBolt that worked on the project?

Approximately 20 2D artists and 10 3D artists, including CG supervisor Dave Cook and 2D supervisor Graham Day, in addition to our production team, led by VFX producer Theo Burley.

Did the team work at BlueBolt’s offices, or remotely?

We have a hybrid set-up. Most of the senior team are in the office regularly, along with the London-based crew. A small number of our team are remote—some of whom worked on the project.

How did you draw inspiration for the Piccadilly Circus sequence?

We worked with the art department to find a bunch of period photographs, although they were mostly black and white. We had to make some compromises on the scale and layout; at that time, the ‘Eros’ statue (known as Eros but actually Anteros) was in a different position, and our layout was based on that, but also made concessions to the timing of the camera move to get through the doors of the theatre on cue.

Can you talk us through the work you did on the Ascot Races sequence?

We were tasked with recreating elements of Royal Ascot, filmed at Ripon Racecourse, including a CG grandstand and surrounding stands, as well as seamless 2D and 3D crowd replication, adding to the jubilant atmosphere in which more of the Crawley family story unfolds.

For the Ascot crowd, we had multiple cameras shooting each setup at the same time. Our VFX crew on set helped coordinate these passes, sticking to the general rule of having the hero pass contain the crowd nearest the camera, and then doing crowd rep until it was small enough to get away with using CG crowd for the masses in the distance. DoP Ben Smithard was very helpful and patient, and gave us some excellent material to work with.

We needed to ensure the crowd felt natural and alive, and adding parasols really helped. Although we had been told that the women at that time wouldn’t raise their arms over their heads when cheering the horses on, we ignored that for the deepest background to get a bit more movement.

And the other shots the team worked on, how did you produce them?

There was a lot of cleanup for period London, and some other nice supporting DMP work for environments.

What technology did you use?

The software we used consisted of Maya, Nuke, Houdini, Golaem Crowd, Substance Painter and Photoshop.

What were the biggest challenges you faced throughout the production?

The stunning opening shot!

What were your biggest achievements?

Without a doubt, I’m most proud of our work on the opening shot of Piccadilly Circus. I love it in the film with the credits over it. A huge team worked on it at BlueBolt; it was a nice thing to be a part of.

There were all sorts of challenges involved with the sequence, chiefly combining the two filming locations. Before shooting, we had to tech-viz the camera positions to ensure a smooth transition from the blue screen set to the Richmond Theatre location. The shot has a wipe designed into it when the man runs for the bus in front of camera, but it didn’t end up covering the whole frame, so the remaining street was momentarily fully CG with the foreground crowd elements tracked in.

On the day of filming on the blue screen set, we only had a four-hour window between sunset and wrap, and it was pouring with rain with no improvement forecast. We had to just go ahead and film in the rain. We thought of it as a free wet-down, which we would have done anyway for the reflections of the Piccadilly lights stand-ins.

PICTURED ABOVE: BlueBolt created the film's opening scene, set in 1930s Piccadilly Circus
Henry Badgett

It then turned out in the edit that the amount of live action vehicles we had still looked relatively quiet for Piccadilly Circus, so adding more CG vehicles in between existing ones was a major unanticipated challenge.

The lighting of the blue screen set was probably the single thing that did the most heavy lifting creatively. It gave us really solid reflections on the wet ground for us to match with Piccadilly adverts, but in a very forgiving way that we could add to as well.

THE LADY IN RED

Gareth Spensley, senior colourist at Company 3, has graded all three of the Downton Abbey films after being brought on board by long-time collaborator Ben Smithard, director of photography on both the first and third films. He discusses the process of colour grading The Grand Finale as well as his collaboration with Smithard.

How early did you get involved with The Grand Finale?

I became involved in the film at the pre-production stage, when we had grade sessions for both the camera tests and the hair, make-up and costume tests. Ben and I have worked together on well over 20 films and have a well-established working relationship. He shoots specifically knowing what we can achieve in the grade, based on the tricks we’ve used in the past.

We use the first stage camera tests to build a LUT that Ben uses in the next tests. Ben likes to shoot quite elaborate tests with the principal actors in full costume, hair and make-up on a fully lit set. While this is traditionally something every film would have done a few decades ago, it’s a rarity now.

What was the process of creating the look for the film?

For the first Downton Abbey film, we developed the grade look on the principle of grading the upstairs scenes in the house using a Kodak film emulation and the downstairs scenes using a Fuji emulation. That particular film’s narrative played heavily on the juxtaposition between upstairs and downstairs. While experimenting with ideas, we found the Kodak emulation accentuated the golds and warm colours in the upstairs world, highlighting its gilded paintings and luxurious fabrics, whilst the Fuji emulation created a visual contrast by accentuating the earthier greens and copper tones of the servants’ quarters and kitchens in the downstairs world.

We blended these looks when the characters left the Downton Abbey house and leaned on the Fuji look more in the exterior garden scenes, as it brought a richness to the greens of the gardens and grounds. For The Grand Finale, we adopted the same general approach we had laid out in the first film; however, we had the added element that it had several scenes set in London. For these scenes, Ben wanted to explore an early Technicolor look. We eventually settled on a subtle 2-strip Technicolor look that skewed the foliage greens cooler, desaturated the highlights and thinned down the overall palette to create a crisper, more stylised look for the London exteriors and society scenes. For the first film, we used Baselight’s Look Tool to help develop the feel of the grade, but by the third film, the Chromogen feature had been released and this revolutionised the whole process.

Talk us through the process of grading The Grand Finale

The Grand Finale was graded on a Baselight X grading system. Additional Baselight One systems were used by assistant colourists for support work like paint-outs and cosmetic fixes. The majority of the grading schedule was spent in one of our grading theatres working to projection. For the home entertainment deliverables pass, we moved to a monitor-based suite and graded on a Sony X300 and a Sony 77-inch A95L.

Are there any particular tools within Baselight that make your life easier?

For this third film, Baselight’s X Grade tool proved invaluable. It allowed me to intricately manipulate the individual colours in every scene to create more complementary palettes, whether subtly shifting hues to make objects less distracting or popping out colours to draw the eye. In the ballroom scenes, I could precisely deepen the red of Lady Mary’s dress to focus the audience’s attention. The X Grade feature is a really tactile tool that allows you to manipulate colours without the need for keying or time-consuming shape work.

Lady Mary's red dress causes a stir

Gareth Spensley

How long did the grade take?

We had three weeks for the main grade of the film and another week or so for the deliverables. This can sometimes sound like a lot, but it’s always useful to not get complacent, as reviews can easily demand several of those days from the schedule. In terms of passes, I tend to build a cutdown of the film (10 minutes or so) that we use to build a look guide. I then generally do a first pass, with the DoP popping in and out, where we get the grade in shape and carry out a full balance of the film. I try to do this with as little secondary work as possible. Then we get into a much more detailed pass with secondaries, shapes, vignettes and now, more frequently, several X Grade tweaks for each scene or shot.

We delivered the film in traditional SDR projection, Dolby Cinema and Dolby Vision Home Entertainment.

Did you grade at Company 3’s offices, or remotely?

We graded at Company 3 London, predominantly in Theatre 4, which is equipped with both traditional DCI Barco projection and Dolby HDR Cinema projection on the Christie lasers.

This is an incredible facility to have in-house and often allows for a hybrid workflow where we switch between SDR projection grading and HDR projection grading all within the same session.

What was the biggest challenge?

The biggest challenge in the grade was dealing with the weather variations. We did around 80 sky replacements or sky enhancements in the Baselight software. Ben shot sunny, idyllic plates of the skies he ideally wanted in the exterior scenes, and when the weather in the actual scene was too grey, we selected a plate from this library and composited it into the scene. We’re very careful to work out whether this work is best done in the grade schedule, so when we find a more time-consuming composite, we send it to VFX. The benefits of having this live in the grade have proven to be invaluable in getting a consistent look by being able to quickly increase the strength of a sky or subtly reduce it.

Another technique we used to add sunshine to some of the overcast exteriors was to add shadows to the ground. For the scene where the principal cast take tea under a tree in the grounds, we added a shadow shape around their feet and in the backgrounds of the shots to gently introduce the idea that the sunshine was simply not hitting the actors. We took a similar approach to another scene where we used more custom shapes to add shadow lines of the house itself to the lawn in an effort to suggest the actors were again sitting in the shade.

What are you most proud of achieving?

I am proud of having graded all three of the films. It’s been a privilege to deliver the world of Downton Abbey to the big screen. The show and the films are an institution that I think instantly conjures up a distinctive visual style.

PICTURED ABOVE: Ripon Racecourse played the role of Royal Ascot thanks to a CG grandstand and 2D and 3D crowd replication
PICTURED BELOW: The Crawleys enjoy a day at the races

A tale of TWO CITIES

In the first of a series looking at production in the regions, Matthew Corrigan turns the spotlight on Greater Manchester

For longer than many in the industry care to remember, there has been a desire for a more even spread of production across the UK. Ever since the idea of regional television first took hold, with the introduction of the Television Act in 1954, successive governments have expressed their support for devolution, implementing a range of strategies aimed at extending the reach of UK plc’s vitally important media and entertainment business beyond its traditional London home.

There have, of course, been some high-profile successes. Last year, Channel 4 confirmed plans to expand its footprint across the UK with a commitment to support 600 roles outside the capital. One initiative offered opportunities for London-based employees to relocate to offices in Leeds, Bristol, Manchester or Glasgow, and all of the company's roles are now being advertised across all locations.

The BBC has also set out a comprehensive five-year strategy, BBC Across the UK (ATUK), which aims to transfer power and decision making to the regions with a series of projects including investment in local creative economies and moving jobs across the country. In 2011, the corporation famously moved a substantial part of its operations to Salford, in the county of Greater Manchester, establishing a large presence at what has grown to become MediaCityUK, on the site of the former Manchester Ship Canal docks.

MediaCityUK has not so much repurposed the area as transformed it. The ships that once kept the heart of the Industrial Revolution beating may no longer line up along the quaysides, but the decaying industrial hinterland has been reborn to help meet the demands of the burgeoning broadcast revolution, its 81 hectares providing a home for more than 250 enterprises, from established parts of the UK’s national fabric to brand new ventures at the cutting edge of technological change.

The location’s rich history was the inspiration behind the name of one of MediaCityUK’s best-known residents, TV production facility dock10. Officially opened by Her Late Majesty Queen Elizabeth II in 2012, the company has built an enviable roster of productions and provides a home for some of the biggest and best-loved shows in the country, including Who Wants to be a Millionaire, Countdown and The 1% Club. The facility’s ten studios combine to offer a staggering 43,395 square feet of floor space, making dock10 the largest purpose-built TV studio complex in the UK.

Impressive figures form a continuous thread throughout the dock10 story, as visitors to the facility quickly begin to understand. There are ten studios with ten sets of galleries, with everything networked across the entirety of the site. Each studio has its own Master Control Room (MCR), with each one able to control any of the studios as required. A single control room - almost a Master Master Control Room - provides a complete operational overview.

“It’s Granada on acid”
EDWARD HARVEY, VERSA STUDIOS

At 12,540 square feet, HQ1 is the largest soundstage in the complex and the largest multi-camera studio anywhere in the UK. The space is big enough to fit Shakespeare’s Globe Theatre entirely within its cavernous walls. In sheer size terms, HQ1 bears an immediate comparison with an aircraft hangar - indeed, it would be possible to park a Boeing 737-100 airliner inside.

Slightly smaller—this one only able to accommodate a World War Two-era B-17 Flying Fortress bomber (the aircraft featured in Masters of the Air)—HQ2 has recently undergone an extensive modification. All of the lights in its 11.5 metre ceiling have been converted to LED units, a programme that will eventually be extended throughout the whole of the facility. The project marks the first time a studio has been completely refitted with LEDs and all of the accompanying infrastructure, and one of the many benefits it will bring is a significant impact on dock10’s electricity costs, which currently stand at £1.3 million every year. In a win/win for dock10 and the wider environment, the transition will also deliver in sustainability terms.

Across the site, all studios have been designed to enable both traditional and virtual production to take place, and the boundaries of what is possible are frequently pushed. Indeed, dock10 seems to relish a challenge. Filming episodes of ITV’s game show The 1% Club requires mics for 100 competitors as well as the programme’s host, Lee Mack, a complex logistical feat that entails highly intricate orchestration to ensure everything runs as it should.

Studio 8 at Versa Manchester

Designed with sports production in mind, dock10’s remote gallery solution provides complete control for production teams, whether live from stadiums around the country or at major international events. The innovative facility enables simultaneous management of concurrent feeds from multiple locations, as well as playing-in content.

Last year, dock10 launched an industry-first real-time virtual lighting solution to enable simultaneous control of both virtual and physical lighting via a single DMX console in the gallery. Virtual lighting can be fully integrated directly into sets either before, after or even during filming, handing productions complete creative control over a vast array of lighting options.

Testing is also currently underway on a multi-camera UHD HDR project for a major UK broadcaster, and the company is constantly looking for new and innovative ways to drive efficiencies in filmmaking. Inevitably, AI is making inroads into dock10’s workflows and some remarkable new capabilities are currently being evaluated.

Each year, more than 3,000 shows are made at dock10, and the site plays host to 200,000 audience visitors annually. In its decade and a half at MediaCityUK, dock10 has developed a reputation as one of the leading lights in television production, not only in the region, but on the national and international stage.

However, as everyone at dock10 understands, studios cannot exist in a vacuum. In order for them to succeed, there needs to be an established network of support services to fuel the creative process. MediaCityUK, with its open spaces and enviable transport links, was designed to enable the necessary infrastructure to grow. The idea of creating a similar facility in the middle of a major city might seem almost unthinkable, but the team at Versa Studios dared to dream.

A sleeping giant awakens

A few miles away from MediaCityUK, the River Irwell marks the historic boundary between the cities of Salford and Manchester. For decades, a giant of British television occupied a huge concrete edifice by its eastern bank, its broadcasting tower and giant red sign becoming almost as famous as the creations for which it was responsible. Between 1956 and 2013, Granada Studios served as the headquarters of Granada Television. Predating the BBC’s Television Centre by five years, the 27-acre Quay Street facility housed the oldest purpose-built TV studios in the UK and played a major role in shaping the city’s media identity.

The complex was steeped in history. In 1962, it hosted the first ever televised performance by The Beatles, and the UK’s first general election TV debate was held there in 2010. However, by 2013 the studios were past their sell-by date. In part a victim of MediaCityUK’s success, the studios saw production transferred there, and the site was sold and earmarked for residential development. The Mancunian real estate boom was well underway, the city’s skyline changing rapidly and beyond recognition. All expectations pointed towards another giant glass and steel tower.

Versa, however, had a very different vision. With remarkable foresight, the company saw the potential in refreshing the wider St John's district. Working in collaboration with Allied London, which is responsible for the regeneration of Manchester's Spinningfields district, the company has created an entire tech and media-focused neighbourhood in Manchester city centre. Reopened as a film and TV production campus in St John's, the site is uniquely attractive as a one-stop centre: a self-contained network of amenities and services situated within metres of hotels, restaurants, outdoor amenities, heritage buildings renovated with flexible workspaces and lounges, and numerous onward transport links.

Originally established in London before expanding to international operations in Los Angeles and Dubai, Versa opened its Manchester facility in February this year. Blending the latest cutting-edge technology with some of the studios responsible for such iconic programmes as Brideshead Revisited, Coronation Street, University Challenge and The Jewel in the Crown, and offering more than 200,000 square feet of production space with galleries fibre-linked to Aviva Studios and the ABC Studios next door, the St John's Media Hub is "designed to handle any size and type of TV production," says Versa's head of studios, Edward Harvey.

dock10's studios combine to equal a staggering 43,395 square feet of floor space
Once due for demolition, Versa Manchester has taken over the former Granada site

The sleeping giant has not just been awakened; it has been revitalised and repurposed to meet the rapidly evolving demands of a changing industry. As Harvey says, with a smile, "it's Granada on acid."

The company has revitalised studios 4, 5, 8 and 12 to use alongside new capabilities in a project that is still ongoing. “All of the galleries were gutted and updated for 4K,” explains Harvey, adding that fibre links are being extended to various sites across St John’s.

An eclectic mix of new and old is evident throughout. Studios 12 and 8 retain monopole lighting, with 220 and 140 monopoles respectively, and trusses can be added as required. Traditional resin floors allow cameras to track seamlessly, although as Harvey says, "our focus is on new technology and a more diverse production slate."

Evidence of the contrast can be found in the 4,300 square foot Studio 6, which handles voice and motion capture as one of the largest mocap studios in Europe, and in the 14m x 5m x 1.9m LED volume stage in Studio 12 for virtual production.

In the galleries, production control is managed by Sony 7000 X vision mixers at multi-function positions, with windows through to sound and lighting control. A combination of Calrec Artemis 48-channel and Brio mixers handles sound.

Versa Manchester is already creating breakthroughs. Earlier this year, the facility enabled the first virtual production for CBeebies. In collaboration with BBC Studios Kids & Family Productions and VP provider Immersion Science, The Great Ice Cream Hunt blended physical filmmaking with digital environments. By synchronising camera movements with the on-screen environment, the production allowed performers to interact directly with the world on set, creating dynamic shots in real time. The complex now provides a home for productions including Dragons' Den, Bullseye and more. Across the campus, the BBC's Morning Live programme is based in Versa's ABC Building, while its Campfield facility provides a new home for Blue Peter in a studio created to enable natural lighting.

The recent Netflix series Building The Band demonstrated the scale and integration of the campus. Versa provided three in-house studios, converted nine floors of a nearby apartment building into 261 rooms for cast and crew, fibre-linked filming in the accommodation directly into its galleries, and activated an additional 12,000 sq ft of external studio space that was fully integrated into the production and also fibre-linked to Versa's technical facilities.

Explaining the holistic approach, Paul Greer, VP of marketing at Versa, says: “This end-to-end, campus-wide setup highlights Versa’s broader offering: production services, virtual production support, technical equipment, accommodation, hospitality, secure locations, transport and consultancy, making it a complete home for production rather than simply a filming location.”

Sitting at the heart of the St John's redevelopment, Versa Manchester's campus serves as a model for what can be achieved. Once forlorn, derelict and due for demolition, the Granada site has returned to the forefront of media and entertainment, with Manchester once again at the centre of a technological revolution.

All of Versa Manchester's galleries have been updated for 4K

Taking the brakes off audio production

Roland Heap of Sound Disposition explains to Kevin Emmott why spatial audio represents the future of sound design and how investing in a 23-channel Dolby Atmos setup has positioned the company to meet evolving demands across film, broadcast, and installation work

Spatial audio is everywhere, across a wide range of experiences: it's on the TV, in the cinema and at the theatre; it's adding value at live events and drawing in more people at art galleries; it's across multiple streaming services, and companies are investing millions in R&D to find more creative ways to bring it into people's homes and pockets. It's why Roland Heap of audio post specialist Sound Disposition is so invested. He believes you can do amazing things with spatial audio; indeed, that when you have full control of the playback environment, you can do things that are simply impossible in other formats.

Roland Heap

Evolving needs

A self-confessed audio nerd, Heap has a long history in sound design, graduating as a Tonmeister in 2003 before spending several years as an assistant engineer at London's legendary Abbey Road Studios. He founded Sound Disposition in 2008 as a way to draw his freelance projects together. Today, Sound Disposition is an audio post production facility spanning a range of specialities. With spaces in Tottenham and on Great Titchfield Street in Fitzrovia, it covers mixing, Foley, ADR and sound design for film, TV and advertising and, increasingly, immersive audio presentations for museums and art spaces.

“Spatial audio gives the sound designer the ability to really place the audience inside the story,” Heap explains. “You can do so much more with spatial audio when you’re in full control of the environment that you’ll be playing it in, which is why I find the immersive experience so exciting. It’s like audio with the brakes off.”

Earlier this year, the company doubled down on its commitment to spatial audio with the culmination of a two-year development project: a 13.4.6 Dolby Atmos install designed to adapt to Sound Disposition's evolving client needs.

“We’ve taken on a wide range of projects over the years, from VR projects to a host of different immersive experiences, and we found ourselves expanding into more broadcast work,” Heap says. “Our bread and butter has always been feature films, but the one thing that had always held us back was having a sizeable, flexible mixing space able to accommodate all of our many strands of work.”

Levelling up

"We've been involved in spatial audio since the very beginning," he continues. "Whether it's been experimenting with Ambisonics for 360 VR, creating multi-screen video installations in art galleries, or delivering Dolby Atmos mixes, immersive audio has always really excited me, and there has been an enormous amount of investment into R&D for spatial audio over the last few years.

"One of the upsides of Dolby Atmos in particular is that it's got everyone thinking more about sound, whether they are getting the advantages of full immersion or not. Despite the limitations, in terms of immersion, of some supposedly Atmos replay systems, I think consumers are getting the benefit simply through the level of investment from companies focusing on the sound portion of whatever device they're listening on."

And that is just as well. Not every client can afford to invest in full Atmos presentations; Heap estimates a minority of his film projects, and an even smaller percentage of his broadcast projects, are going fully immersive, so the fact that Sound Disposition's system is so comprehensive enables the company to service a much broader range of clients and protect itself against third-party budget constraints.

Needing a system flexible enough to sit happily with a range of project types, Heap settled on the new Dynaudio Acoustics M-Series monitors for the studio's 13.4.6 configuration: 13 ear-level channels, four subwoofers and six overheads, 23 channels in all, and considerably more detailed than the standard 7.1.4 Dolby Atmos home entertainment config. The install also includes a Barco 5k laser projector, a Severtson screen, an Avid S6 console and a Trinnov monitoring system to ensure that mixes translate between rooms.

“Our aim was to make the space as flexible as possible while not being unfamiliar to people doing conventional film work. We work extensively with SPAT, Ircam’s real-time spatial processor, as well as Ambisonics, which gives us the ability to work across all of those formats,” says Heap.

"I still plan to get some wedge versions of the MF15 wall speakers so that we can have below-height information for working in Ambisonics, because as Apple headsets start to become more prominent, the notion of being able to monitor in actual Ambisonics in real-world spaces is going to become more relevant. I feel like we're only a software version or two away from that, and there are already creators making content that very few people are getting to hear because it is still very niche. But I don't think it's going to stay that way forever."

PICTURED ABOVE: Sound Disposition's 13.4.6 Dolby Atmos install is designed to adapt to the company's evolving client needs

“You can do so much more with spatial audio when you’re in full control of the environment that you’ll be playing it in, which is why I find the immersive experience so exciting. It’s like audio with the brakes off”
ROLAND HEAP

On the map

The scale of the install was soon justified by an immersive experience premix that mapped every single speaker in the room directly to a counterpart at an external venue.

“Although they were in a completely different configuration, because we had mixed it in SPAT, we were able to very quickly remap one to the other,” says Heap. “The mapping worked astonishingly well. It meant that we were able to be up and running almost immediately in the new space, and we’re now working on remapping some of that same content in a completely new configuration for a project in the new year.”

In the end, the investment is about more than safeguarding against budgetary fluctuations or even about the ability to cater to a wider client base. It’s about having the ability to create more emotive, personal and uniquely immersive experiences with complete confidence in the process.

"Immersive experiences aren't going anywhere, and the process of mixing has just as much of an impact on the audience as the process of creating the sounds," says Heap. "I am a great believer that the term sound designer should be applied equally to sound designers, supervisors and mixers. From a technical perspective, working with spatial audio in controlled environments like this allows for much greater creative freedom."

Building cloud resilience

A failure at an AWS data centre, triggered by a minor DNS update on 20th October 2025, affected services around the world, with at least 2,000 companies having to cope with degraded performance or downtime. While the outage didn't seem to significantly impact broadcasters as such, the industry is well on the way to migrating operations to the cloud, so resilience remains a hot topic.

The industry’s transition to the cloud has matured from a lift and shift approach to one where broadcasters deploy cloud-native, microservices-based architectures to ensure reliability, agility, and scalability. The recent outage has served as a stark reminder that when transitioning to the cloud, it’s critical that workflows are built to provide high resilience. If there’s one thing that we should have learnt in the last year, it’s that disruption is inevitable. What really matters is how well we’re able to recover from it.

The recent AWS incident was not the first of its kind, and it won't be the last. The hyperscale cloud providers have all experienced major outages at one time or another. Earlier this year, Google Cloud experienced an outage that lasted several hours and impacted Google services such as Chat, Gmail, Google Drive and Google Docs, as well as external services.

Last year, an error in a CrowdStrike security software update triggered disruption for Microsoft users, causing outages on a massive scale that affected everything from broadcast, finance, and healthcare to shipping and logistics.

According to Statista, AWS, Microsoft and Google account for more than 60 per cent of the cloud market, so it’s easy to see why these incidents have such a huge impact on global services. So, does this mean all the cloud naysayers were right and broadcasters should all move back to safe on-prem facilities?

Absolutely not, because no system is 100 per cent failproof, and the benefits offered by the cloud are too great to pass up. What's important is how we prepare for and manage these incidents.

Backups and disaster recovery still key

The age-old advice of making sure critical data is backed up still applies in the cloud. The difference now is that backups should be maintained in a separate cloud location from the primary data, rather than at a separate physical site.
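
As a minimal sketch of that principle, assuming an AWS deployment and the boto3 SDK, the hypothetical example below copies objects from a primary bucket into a backup bucket homed in a different region. The bucket names and regions are invented, and in practice a managed feature such as S3 Cross-Region Replication would usually do this job.

```python
# Hedged sketch: copy objects from a primary bucket to a backup bucket
# in a *different* region. Bucket names and regions are hypothetical.
import boto3

PRIMARY_BUCKET = "acme-media-primary"  # assumed to live in us-east-1
BACKUP_BUCKET = "acme-media-backup"    # assumed to live in eu-west-2

# One client per region: list with the source-region client and copy
# with the destination-region client, which pulls the data across.
src = boto3.client("s3", region_name="us-east-1")
dst = boto3.client("s3", region_name="eu-west-2")

paginator = src.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=PRIMARY_BUCKET):
    for obj in page.get("Contents", []):
        dst.copy_object(
            Bucket=BACKUP_BUCKET,
            Key=obj["Key"],
            CopySource={"Bucket": PRIMARY_BUCKET, "Key": obj["Key"]},
        )
        print(f"backed up {obj['Key']} to {BACKUP_BUCKET}")
```

However it is implemented, the point is the same: the copy must live somewhere a failure of the primary location cannot reach.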

Disaster recovery is another critical aspect of resilience. Just as traditional on-prem facilities would have a disaster recovery playout system at a different location from the main broadcast facility, disaster recovery systems are equally important in the cloud, and should be set up in a different zone or region, ready to kick in if necessary.

However, it’s no good having backup and disaster recovery systems in place if teams are unsure how to access and deploy them quickly and effectively. Therefore, processes should be well tested, and teams well trained so they know precisely what to do in the event of an incident.

Workloads need to be built to withstand failure. One way to build in resilience is to distribute the load across zones within a single region. The data centres in each zone are built to operate independently of those in other zones in terms of power, networking and cooling, so each zone can fail in isolation. If one zone goes down, the workload can quickly move to another as needed. But is multi-zone architecture enough for high resilience and business continuity?

It really depends on your attitude to risk. The recent AWS outage originated in the US-East-1 region in northern Virginia and affected multiple availability zones operating in that region. So, in this case, a multi-zone approach would not have prevented system failure. It’s worth remembering, however, that failures of this scale are rare. If it is deemed necessary to mitigate this risk, then workflows can be built across multiple regions. While more complex than operating across multiple zones within a single region, a multi-region set-up provides even higher levels of resilience than a multi-zone approach.
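
By way of illustration only, the hypothetical Python sketch below reduces that failover logic to its simplest form: health-check the primary zone, fall back to a second zone in the same region, and only then to a different region. The endpoint URLs are invented, and a real deployment would normally rely on load balancers and DNS failover rather than a client-side loop.

```python
# Simplified failover sketch: try same-region zones first, then another
# region. All endpoint URLs are hypothetical.
import urllib.request

ENDPOINTS = [
    "https://playout-eu-west-1a.example.com/health",     # primary zone
    "https://playout-eu-west-1b.example.com/health",     # second zone, same region
    "https://playout-eu-central-1a.example.com/health",  # different region
]

def first_healthy(endpoints, timeout=2.0):
    """Return the first endpoint that answers its health check."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except OSError:
            continue  # zone (or region) unreachable; try the next one
    raise RuntimeError("no healthy endpoint: time for the disaster recovery plan")

active = first_healthy(ENDPOINTS)
print(f"routing traffic to {active}")
```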

What about multi-cloud?

While multi-region cloud architectures provide increased resilience over a multi-zone approach, going multi-cloud with services running across more than one cloud provider offers an even higher level of resilience. While this may be appealing for those who want increased service redundancy and flexibility, it is extremely complex and costly to implement, so may not be the best option for everyone.

It may well be better to focus instead on building robust architectures within a single cloud by distributing workloads across zones as a minimum, and across regions if deemed necessary. This is effective, costs less, and is easier to set up and manage than a multi-cloud strategy. The goal isn’t to avoid disruptions entirely but to build systems that can adapt and recover quickly.
