
TVB Europe 110 December 2024



In this issue: media asset management and storage; is broadcast now an IT industry?

Cutting to the chase

TVBEurope meets the editors of the multi-award-winning show Only Murders in the Building

System T

The Ultimate Solution For Any Broadcast Application

S400 Flagship Control in a Compact Surface

• Premium fader tiles with dedicated OLED displays for every path, advanced level metering and status LEDs

• Available in 16+1, 32+1 and 48+1 formats

• Perfect for studio, OB, event space and music applications

Tempest Control App

Full Feature-Set of System T in a Software App

• An ideal solution for broadcast environments where a powerful broadcast audio mixer is required, but a traditional console is not

• Perfect for automated newsroom production, remote broadcasts and flypacks

• Available in a number of pre-configured packages or custom configurations

I’d like to thank…

I hope you’ll forgive me for patting myself on the back, but I am writing this on the morning after the Rise Awards 2024, where I was honoured with the Influencer award. It was a real shock to win and wonderful to be honoured by my peers within the industry. I never think of myself as an influencer, but if I help to get the word out about what a fantastic industry this is to be a part of, then I’m happy to be called one.

As always, it was inspiring to be in a room full of colleagues and friends, and I came away feeling that the future of media technology is in excellent hands. My congratulations to all my fellow nominees and winners. It felt like the judges had a tough time choosing just one winner in each category.

I also want to say thanks to the whole Rise team, not just for the awards but all the work that they do in reaching out to young students and supporting women already on their career journeys. In particular, I’ve always been impressed by Rise’s work in mentoring.

I’ve written before about my own mentor. Howard Hughes was my first boss when I joined Capital FM back in 1996. He taught me how to be a journalist (there was no such thing as an influencer back then). Unfortunately, the weekend prior to the awards I received the shocking news that Howie had passed away after a short illness and so I want to dedicate my award to him, and everyone who acts as a mentor or guide to colleagues. The work you do helping both new entrants into the industry and those who have been here a while is invaluable.

I can’t quite believe this is our last issue of 2024! What a year it has been. Three major elections (European, UK and United States), an Olympic Games, Euro 2024, floods, hurricanes and Earth’s hottest day on record; this year has certainly kept everyone working in television busy.

It would be easy to say 2024 has been the year we’ve all been talking about artificial intelligence. It has certainly been the topic of much discussion at trade shows, but in terms of news within the industry, it’s still finding its feet. There has been a lot of talk about AI and creativity, with many on the production side worrying about their jobs. But actually, on the media tech side, we’re hearing how AI can be a partner, assisting end users rather than replacing them.

What will be our big talking point in 2025? I suspect AI might still be up there. I hope 2025 will see a resurgence in the UK’s TV and film industry after the so-called ‘survive to ‘25’ theme of this year. That can only be a good thing for everyone. Whatever happens, I wish you all a very happy festive season. See you in 2025!

www.tvbeurope.com

FOLLOW US

Twitter.com/TVBEUROPE

CONTENT

Content Director: Jenny Priestley jenny.priestley@futurenet.com

Content Writer: Matthew Corrigan matthew.corrigan@futurenet.com

Graphic Designers: Cliff Newman, Steve Mumby

Production Manager: Chris Blake

Contributors: David Davies, Kevin Hilton, Graham Lovelace, Neil Maycock

ADVERTISING SALES

Publisher TVBEurope/TV Tech, B2B Tech: Joseph Palombo joseph.palombo@futurenet.com

Account Director: Hayley Brailey-Woolfson hayley.braileywoolfson@futurenet.com

SUBSCRIBER

CUSTOMER SERVICE

To subscribe, change your address, or check on your current account status, go to www.tvbeurope.com/subscribe

ARCHIVES

Digital editions of the magazine are available to view on ISSUU.com. Recent back issues of the printed edition may be available; please contact customerservice@futurenet.com for more information.

LICENSING/REPRINTS/PERMISSIONS

TVBE is available for licensing. Contact the Licensing team to discuss partnership opportunities. Head of Print Licensing: Rachel Shaw, licensing@futurenet.com

MANAGEMENT

SVP, MD, B2B: Amanda Darman-Allen
VP, Global Head of Content, B2B: Carmel King
MD, Content, Broadcast Tech: Paul McLane
VP, Head of US Sales, B2B: Tom Sikes
Managing VP of Sales, B2B Tech: Adam Goldstein
VP, Global Head of Strategy & Ops, B2B: Allison Markert
VP, Product & Marketing, B2B: Andrew Buchholz
Head of Production US & UK: Mark Constance
Head of Design, B2B: Nicole Cobban

In this issue

DECEMBER 2024

08 From fragmentation to focus

12 3 is the magic number

Reflecting the three stars on screen, Only Murders in the Building relies on a trio of editors to help bring the comedy and intrigue of the show to the screen

18 Stop-motion triumph

Colourist Deidre (Dee) McClelland details how she helped shape the distinct look and feel of whimsical Australian stop-motion film, Memoir of a Snail

24 An inevitable evolution?

At IBC2024, Grass Valley CTO Ian Fletcher told attendees at GV Forum that broadcast is now an IT industry. Industry stakeholders share their thoughts on the subject

28 The power of connection

Announcing its latest audio solution, Sennheiser aims to start a revolution in audio production. TVBEurope’s Matthew Corrigan hears how and why from company co-owner and CEO, Dr Andreas Sennheiser

34 Exploring AI-enhanced TV sound

A mainstay of the pro-audio calendar, the latest edition of the Audio Collaborative conference, organised by Futuresource Consulting, once again took an unflinching look at a handful of issues presently defining the industry, writes David Davies

42 Improving asset security and governance with AI

44 Sharing the love

Following their work on series such as The Jury and Love is Blind, Matthew Corrigan finds out the secrets behind rental and services provider HOTCAM’s success

48 Racing into opportunity

MotoGP partnered with Tata Comms Media to move from an onsite approach to a hybrid model that incorporates remote production, streamlining processes and security

52 Goal: Liverpool (assist: Wasabi)

Liverpool Football Club produces over 100 pieces of content every week across its linear TV channel, OTT platform and social media channels. LFC’s Drew Crisp tells Jenny Priestley how a cloud storage partnership with Wasabi is helping the club achieve its goals

Content is King!

‘Content is King’ has been a mantra for many years in the broadcast and media technology world, although the original reference was by Bill Gates in 1996 regarding online content.

It certainly seems to have been reinforced recently by the streaming giants battling for subscribers with some Disney+ and Apple TV+ series estimated to cost $15 million per episode. While the post-pandemic dip in subscriber acquisition has caused some level of curtailing of content budgets, investment from the streaming giants remains at incredible levels.

One type of content that has proved particularly effective in retaining subscribers is live programming, predominantly sport. If proof were needed of the audience power of live sports, Netflix’s entry into the market tells us it is very real. It was only in 2018 that its CEO stated the company had “no plans to start streaming live sport events,” but today the company is active in the live sports market.

Whether it’s creating content or buying rights, new content can be expensive, potentially prohibitively so. However, there are many companies with large archives which can potentially be leveraged.

For many years, the industry faced the challenge of having good data on what was in an archive. Early MAM systems attempted to solve the problem, and although searching a database for that critical video or image was straightforward, this was only as effective as the metadata populated in the database. The example often given was the Clinton/Lewinsky scandal which, to date it for everyone, culminated in 1998, coincidentally the year that Google was founded. Fast forward to today and the ability to catalogue, index and search content is not only solved but is getting more and more advanced, with AI providing completely new ways to analyse content in a library.

If we can take it as read that the problem of understanding what’s in our media archive is solved, the next thing we need is innovation in how we get the content to users. This is an area that is seeing a lot of change. To reference Netflix again, in the early days of streaming many content owners licensed content to the company as a simple way to generate revenue from their archives. The more recent trend is for companies to have their own streaming platform and leverage content under their brands and, with that, own the customer relationship.

In addition to on-demand platforms, companies are also looking to monetise their archives through linear streaming channels. Certainly, there has been tremendous growth in FAST channels over the last couple of years, and while they are a cheap way to get content to consumers, the commercial performance of these early services has been very mixed. Companies are talking about FAST 2.0 or 3.0: basically, a more sophisticated experience for consumers, with scheduling and advertising planning closer to the levels of primary broadcast channels.

Switching focus, any analysis of media content wouldn’t be complete without considering social media. The growth metrics and sheer volume of ‘free’ content on social platforms is hard to comprehend. Referring to the title of the article, I think it’s fair to say there is a lot of content on social media that doesn’t deserve the royal title of ‘King’; however, there are serious revenues being created, and outside of the social media companies there are influencers and content creators earning significant income.

So far, social content has felt quite distinct from that produced by the major media companies, especially content with multimillion dollar budgets, but there is a definite trend of improving production values. Top influencer brands demand quality that is appropriate to the revenues they are generating; just consider the Kardashian dynasty. Another example of high production values is the vast volume of content created by political parties for the recent elections in the UK and United States.

That leads me to a final content topic, fake news. So many people now consume news and current affairs on social media ahead of any other platforms, and the danger is that any message can be delivered unauthenticated and potentially deliberately biased. Even mainstream news organisations that we might turn to as trusted sources of information face the problem of validating information when the source is online. There are some interesting industry initiatives between news organisations that are collaborating and sharing information to establish provenance for news stories.

Outside of news, AI is making the creation of fake content so easy that it feels like the only limiting factor is one’s imagination. Coupled with the largely unregulated ability to reach an audience directly through social platforms, this creates a concerning threat of content misuse. Let’s hope technology and some regulation are able to provide enough protection so that content remains King and doesn’t descend to criminal.

Blackmagic URSA Broadcast G2 is an incredibly powerful camera designed for both traditional and online broadcasters. The 3 cameras in 1 design allows it to work as a 4K production camera, a 4K studio camera or a 6K digital film camera! Now with support for live sync to Blackmagic Cloud and DaVinci Resolve media bins, you can get breaking news to air within seconds!

Get Digital Film Quality for Broadcast

The large 6K sensor combined with Blackmagic generation 5 color science gives you the same imaging technology used in digital film cameras. The 6K sensor features a resolution of 6144 x 3456 so it’s flexible enough for broadcast and digital film work. With 13 stops of dynamic range, you get darker blacks and brighter whites, so it’s perfect for color correction.

Compatible with B4 Broadcast Lenses

The URSA Broadcast G2 features a B4 broadcast lens mount that includes optics specifically designed to match the camera’s sensor. B4 lenses are fantastic because they are parfocal, so the image stays in focus as you zoom in and out, meaning you don’t need to constantly chase focus as you shoot. You also get full electronic lens control to adjust focus, iris and zoom using the camera’s controls, or remotely!

Add Viewfinders, SMPTE Fiber and Lens Mounts

There’s a wide range of accessories that are specifically designed to work perfectly with URSA Broadcast G2. However, a shoulder mount kit, V-Lock battery plate and top handle are included so you don’t need to purchase anything extra! Plus, you get a spare EF lens mount if you don’t own a B4 lens. There’s also an optional fiber converter that can power the camera from 2 km away via the single SMPTE fiber!

Live Sync and Edit Media while Recording

Blackmagic URSA Broadcast G2 now supports creating a small H.264 proxy file in addition to the camera original media when recording. The small proxy file can upload to Blackmagic Cloud in seconds, so your media is available back at the studio in real time. If you have multiple cameras, then the new multi source feature in DaVinci Resolve’s Cut page will show each camera angle in a multiview.

Blackmagic URSA Broadcast G2 ¤3825

From fragmentation to focus

Market fragmentation in the media industry has led to significant challenges in both content distribution and audience engagement. Unlike the consolidated viewing experience from just a few years ago, the current landscape is marked by a proliferation of niche services and specialised content.

Media organisations must now reach a complex network of smaller audiences instead of a large, concentrated viewer base. Ensuring that viewers can easily find relevant content is crucial to sustaining revenue. But what about content discovery behind the scenes?

Monetisation matters

A significant challenge for entertainment providers has been managing the diversity and complexity of their metadata, stemming from a mix of original productions and acquisitions. But AI-driven search and discovery in media libraries is changing the way content reaches wider audiences, making retrieval and recommendation more intuitive. AI search tools offer media companies a pathway to both better audience engagement and enhanced monetisation.

AI-powered search allows providers to create richer descriptions that improve searchability and enable accurate, tailored content recommendations. Users can locate content based on highly specific attributes, from genres and actors to particular themes. Content owners need targeted content discovery, so they can maximise the visibility of their assets and leverage the right content from their archives at the right time. Rather than being buried in extensive libraries, valuable content surfaces effectively for the right audiences, increasing its impact and revenue potential.

AI-powered learning can also profile individuals’ viewing habits, ensuring, for instance, that the preview of the asset a person sees is focused on action, a favourite actor or a favourite team. There is an obvious barrier to new monetisation opportunities: the volume, size, and diversity of media content. Traditional search functions lack context and so often struggle with large media libraries, making it challenging for teams to quickly locate specific assets. AI-driven search platforms address this by automating categorisation, using advanced AI models for audio transcription, video content recognition, and optical character recognition (OCR). This allows content to be searched and retrieved with greater accuracy and speed. It goes beyond the restrictive nature of a fixed tag, or manual search term that can vary based on human interpretation, opening up a more flexible and contextually driven search functionality.
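As a rough illustration of the principle (the sketch below is hypothetical and vendor-neutral; the `Asset` fields and extractor outputs are invented for the example), indexing machine-extracted transcript and OCR text alongside manual tags is what lets a query match content that was never tagged by hand:

```python
# Illustrative sketch: merge metadata from several hypothetical extractors
# and rank assets by term overlap, so a query can match transcript or
# on-screen (OCR) text rather than only fixed, manually assigned tags.
import re
from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str
    tags: list = field(default_factory=list)  # manually assigned tags
    transcript: str = ""                      # from audio transcription
    ocr_text: str = ""                        # from on-screen text (OCR)

def searchable_text(asset: Asset) -> set:
    """Flatten every metadata source into one bag of lowercase terms."""
    words = " ".join(asset.tags + [asset.transcript, asset.ocr_text])
    return set(re.findall(r"[a-z0-9]+", words.lower()))

def search(assets, query: str):
    """Rank assets by how many query terms appear in any metadata field."""
    terms = set(re.findall(r"[a-z0-9]+", query.lower()))
    scored = [(len(terms & searchable_text(a)), a.asset_id) for a in assets]
    return [aid for score, aid in sorted(scored, reverse=True) if score > 0]

archive = [
    Asset("clip-001", tags=["news"],
          transcript="the prime minister arrives in paris"),
    Asset("clip-002", tags=["sport"], ocr_text="Full time: Liverpool 3 - 0"),
]
print(search(archive, "liverpool full time"))  # found via OCR text, not tags
```

A production system would replace the term-overlap scoring with embedding-based semantic search, but the underlying idea is the same: machine-generated metadata becomes part of the searchable surface of every asset.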

Enhanced metadata not only makes content more accessible but also adds value to older assets by uncovering and tagging details that might otherwise remain obscure. For example, AI can identify and tag objects, locations, or specific keywords in archived material, bringing long-tail content back into circulation. For media professionals, it means finding hidden connections in content, enabling teams to deploy and repurpose valuable assets with greater ease across departments and regions.

Optimising media workflows

Many of the time-consuming aspects of content management, such as indexing, localisation, and compliance verification, are ideal candidates for automation. Predictive analytics and machine learning models further streamline these processes, allowing media teams to focus on the creative aspects of their work rather than the administrative. If you free your team up to do more of what they care about doing, it has a huge impact on productivity.

By eliminating routine tasks, AI-powered automation contributes to better morale among media teams. Editors, quality control professionals, and localisation specialists benefit from uninterrupted time for high-impact work, leading to a more satisfying and efficient environment. With fewer repetitive tasks, media professionals can remain focused on areas that require human judgment.

Collaborative functionality is another valuable aspect of AI-driven media workflows. Conversational AI interfaces allow teams to query data directly, simulating a “conversation” with their content library for real-time insights or specific information. This interactive, natural access to content information encourages cross-departmental collaboration, as each team member can retrieve relevant content independently. The result is a more synchronised work environment where collaboration flows smoothly.

“By simplifying metadata ingestion and tagging, AI tools can guide companies through this transition, helping them prepare for more advanced content management capabilities in the future”

Preparing for integration

The advancements in AI bring their own set of challenges for media companies. Rapid developments in model quality and performance demand ongoing testing to ensure accuracy and efficiency. Generative components in AI models can occasionally produce incorrect information, known as “hallucinations,” and Perifery has mitigated these risks through rigorous validation processes. These safeguards help ensure the reliability of AI search outputs, minimising the chance of errors in content management.

There is also an industry-wide concern about AI’s impact on job security. But rather than replacing human roles, AI’s value lies in complementing media professionals: automating repetitive tasks and allowing experts to focus on the judgement-based aspects of their work. Open communication about AI’s role in the workplace can help ease anxieties. From a personal perspective, and having seen a huge number of technical changes in the media industry over the years, I view AI integration as the first step in a new wave of innovation: much more of an opportunity than a threat.

A foundation for content management

For decades, metadata consistency has posed a significant challenge in the media industry. With content coming from such diverse sources, we have struggled with standardisation. This has been compounded by vast legacy archives, stored on outdated formats. Digitising these assets is often a necessary first step before implementing AI-powered search solutions. By simplifying metadata ingestion and tagging, AI tools can guide companies through this transition, helping them prepare for more advanced content management capabilities in the future.

AI platforms with open architectures offer comprehensive media analysis that integrates video, audio, image, and text data into a cohesive system. This adaptability allows organisations to continuously update and refine their content management systems without being locked into a single proprietary solution. By avoiding high recurring costs and offering control over both data and processing, AI platforms provide better resource management and enhanced data security.

As companies adopt these systems, they are equipped with the flexibility to grow with new advancements while maintaining a firm grasp over their costs and operations. These tools don’t just address the complexities of today’s media environment — they allow companies to refocus on delivering valuable content with speed and precision.

Why IP video distribution is all about control

European broadcasters are realising it’s time to look beyond satellite and fibre to explore IP-first alternatives. Costs, complexities and inefficiencies mean legacy distribution mechanisms can no longer support a modern, multi-platform media landscape. Companies across the content distribution chain are adopting a range of IP-based video transport protocols to facilitate more flexible, cost-efficient ways of working. That’s good news. The even better news is that intelligent, interconnected IP ecosystems are helping manage the complexity of diverse protocols at scale to simplify a fast-growing, global IP video distribution environment.

Finding the right fit

While many media organisations are comfortable with an IP-first approach, an ever-expanding ecosystem of protocols, platforms, and connected devices creates significant complexity for content providers.

On their own, protocol-based IP transport solutions like Secure Reliable Transport (SRT), RIST, or Zixi don’t have inherent multicast capabilities. Content distributors today need the ability to take one feed and automatically deliver it to thousands of destinations. Take SRT for example — a point-to-point protocol with no ability to adapt or route based on network conditions or video workflow requirements. Opting for a managed, multicast-native, network-based IP solution enables media businesses to acquire content once and manage complex normalisation, versioning, and distribution workflows at scale, all within one interconnected ecosystem.
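To make the fan-out gap concrete, here is a deliberately naive sketch (hypothetical, not LTN’s or any vendor’s implementation) of the relay layer a point-to-point contribution feed needs before it can reach many destinations; real managed networks add the routing, error recovery and monitoring the article describes on top of this basic copy-and-forward step:

```python
# Naive fan-out relay sketch: one incoming UDP stream is copied to a
# list of destinations. A point-to-point protocol such as SRT carries
# one sender to one receiver; a relay layer like this is what turns a
# single acquired feed into N outgoing feeds.
import socket

class FanOutRelay:
    def __init__(self, listen_addr, destinations):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(listen_addr)
        self.sock.settimeout(5)  # avoid blocking forever in this sketch
        self.out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.destinations = list(destinations)

    def relay_once(self, bufsize=2048):
        """Receive one datagram and copy it to every destination."""
        packet, _ = self.sock.recvfrom(bufsize)
        for dest in self.destinations:
            self.out.sendto(packet, dest)
        return packet
```

Scaling this to thousands of destinations, with per-leg retransmission and health monitoring, is precisely the “connective layer” work that a managed, multicast-native IP network takes off the broadcaster’s hands.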

IP innovation has come a long way, but let’s not forget that the middle of the internet is a chaotic environment that wasn’t originally designed to handle live video at scale. Media companies harnessing any number of common IP transport protocols, such as SRT or RIST, at either side of a transmission require a connective layer to manage and control the middle of the internet and achieve guaranteed reliability through powerful routing and error recovery systems.

Gaining control with next-generation IP video distribution

Recent advances in IP networking and more granular, deeper levels of monitoring and visibility mean that media companies favouring an all-IP approach benefit from more comprehensive control over content and data compared with legacy ways of working — particularly a satellite distribution model where control over live feeds is lost the moment the video leaves the satellite antenna.

Alongside table-stakes assurances around reliability and latency, media companies exploring IP video distribution want to know they can access more intuitive, user-friendly customisation options that simplify complex distribution workflows, unify siloed ecosystems, and deliver operational efficiencies. Managed IP video distribution solutions with closely interconnected production, versioning and playout workflows enable operators to easily automate and modify video workflows while using their own booking and management, conditional access and ad insertion systems.

Control is a question of cost too. Engineering leaders should consider the benefits of a managed IP approach that allows for greater cost predictability compared with a hybrid system comprising any mix of protocol-only solutions, in-house tech, and a reliance on cloud workflows which can incur hard-to-predict charges.

Simplifying regionalisation at scale

Europe is home to 24 official languages, with over 200 languages spoken regularly across the continent. Media companies looking to expand and diversify their audience in Europe need to be prepared to deliver customised versions of their content for many diverse markets. The adoption of IP marked a paradigm shift in moving far beyond the ‘world feed’ approach, enabling more scalable and hyper-regionalised distribution. The next phase in IP-powered regionalisation centres on a more streamlined approach to creating content once and tailoring it cost-effectively for multiple regions, languages and audience demographics.

Automated versioning for live events and full-time channels enables sports, media, entertainment and technology companies to create and deliver limitless versions of content feeds — with captioning, custom graphics, audio, ad triggers and local language commentary — all within the same IP ecosystem. A managed IP video distribution approach is the only way to enable this level of customisation with SLA-backed reliability and availability.
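The ‘create once, tailor many’ pattern can be sketched as a simple merge of regional overrides onto a single master feed description; the field names below are illustrative, not a real schema:

```python
# Hypothetical sketch of automated versioning: one master feed config is
# expanded into per-region variants by overlaying regional overrides.
BASE_FEED = {"video": "master_programme", "audio": "international_mix",
             "graphics": "neutral", "captions": None}

REGION_OVERRIDES = {
    "fr": {"audio": "french_commentary", "captions": "fr"},
    "de": {"audio": "german_commentary", "captions": "de",
           "graphics": "de_pack"},
    "nl": {"captions": "nl"},
}

def build_versions(base, overrides):
    """Produce one tailored feed config per region from a single master."""
    return {region: {**base, **changes} for region, changes in overrides.items()}

versions = build_versions(BASE_FEED, REGION_OVERRIDES)
print(versions["de"]["audio"])  # german_commentary
```

Anything not overridden falls back to the master feed, which is why the Dutch variant above still carries the international audio mix; the hard part in a live system is doing this merge per feed, per second, with frame accuracy.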

LTN works with European media companies that are interested in delivering customised language feeds across neighbouring countries like France, Germany, and the Netherlands — as well as broadcasters that want to spin up new versions of live news channels for global audiences. Today, a managed IP video distribution approach makes this type of large project far simpler than it would be with traditional distribution mechanisms or less sophisticated IP solutions. Now is a great time to explore IP for the first time — or ask if your existing IP video distribution system is really up to scratch.

HOW BBC STUDIOWORKS BUILT A FLYAWAY KIT FOR IT TAKES TWO

TVBEurope talks to the team at BBC Studioworks about the technology included in a new flyaway kit for BBC Two's Strictly Come Dancing spin-off, It Takes Two.

MAKING THE DRAGONS FLY

TVBEurope’s website is a hive of exclusive news, features and information about our industry. Here are some featured articles from the last month…

UK CREATIVE INDUSTRIES NAMED AS ‘GROWTH DRIVER’ IN GOVERNMENT’S INDUSTRIAL STRATEGY

The strategy aims to create a pro-business environment and play to the country’s strengths, with business secretary Jonathan Reynolds stating it “will hardwire stability for investors and give industry the confidence to plan for the next 10 years and beyond”.

ANALYSTS: THE RATE OF DECLINE IN BROADCASTER VIEWING IS SLOWING

Research from Enders Analysis found that by 2030 broadcasters' share of total video viewing will be 52 per cent, down from 58 per cent in 2023.

FANTAS-AI? DISNEY ‘SET TO ANNOUNCE MAJOR AI INITIATIVE’

Focusing on post production and VFX, the move is expected to mark a 'sea change' in the industry.

Martin Pelletier, visual effects supervisor at Rodeo FX, explains how House of the Dragon’s fantastic flying beasts are able to take to the air.

3 IS THE MAGIC NUMBER

Reflecting the three stars on screen, Only Murders in the Building relies on a trio of editors to help bring the comedy and intrigue of the show to the screen

What happens when you take two baby boomers, a millennial, lashings of comedy and a murder mystery?

You get 49 Emmy nominations, and the most-watched comedy premiere in streaming service Hulu's history.

Only Murders in the Building launched on Hulu in the United States and Disney Plus internationally in August 2021, and has just wrapped up its fourth season, with season five already commissioned.

The show was created by Steve Martin and John Hoffman, and reunites Martin with his Three Amigos and Father of the Bride co-star Martin Short, with actress/singer Selena Gomez appealing to the younger generation.

Each season follows the trio as they investigate suspicious murders in their Upper West Side apartment building, the Arconia, and produce a podcast about each case.

Three editors

Due to the pandemic, the show’s second season began filming later than planned, which meant all of the editors involved in season one were no longer available. Shelly Westerman, Payton Koch and Peggy Tachdjian joined the show in season two, and all three have received Emmy nominations for their work on season three.

“A mutual friend recommended me,” explains Tachdjian. “The producers reached out to interview me, but at the time, the schedule didn't work, and I wasn't going to be available. I recommended Shelly and she got the job right away. A few weeks later, their schedule was pushed and she called me and told me that the third editor had dropped out. So I reached back out and asked if they were still looking, and it just worked out perfectly."

“I’ve spent 28 years editing on Avid. I like the ability to customise it. My layout looks pretty simple. I'll see my assistant's layout, and I'll say, are you running a spaceship here?”
SHELLY WESTERMAN

Westerman, Koch and Tachdjian had all worked together previously on a number of projects for Ryan Murphy Productions, with Koch working as an edit assistant to Westerman. “Shelly got the job on Only Murders and she called me up and said, come and be my assistant on season two, let's edit everything together, and we'll try and get you the bump up,” says Koch. “I was like, done! Signed, sealed, delivered, I'm on my way.”

The pair collaborated on season two, editing every scene of their allotted episodes together. At the end of the season, showrunner John Hoffman told Koch he would be making the move up to editor when the show returned. “Two seasons later, and here we are. It's just been the most amazing experience, and I'm eternally grateful,” he says.

For all three editors, joining Only Murders in the Building gave them the opportunity to work with two comedy icons. “I grew up watching Steve and Marty,” says Westerman, “and then when I met John Hoffman he was just lovely. It was an easy decision and I'm so grateful to be given the opportunity.”

“When we were first reached out to about the show, season one hadn't even aired yet. There was only a teaser trailer. I saw that it was Steve and Marty and Selena Gomez, who I didn't know that much about at the time but I've grown to love now,” adds Tachdjian. “I was just really kind of fascinated by the trio, and it just looked really fun. We'd all come from working on Ryan Murphy shows where there's a lot of horror or kind of dark stuff, and this was just a very light comedy so it felt really exciting. Also, I'm from New York. I loved the idea of working on something that was taking place in New York, where the city feels like a character, so that was kind of exciting for me.”

“We'd all come from working on Ryan Murphy shows where there's a lot of horror or kind of dark stuff, and this was just a very light comedy so it felt really exciting”
PEGGY TACHDJIAN

Three meetings a week

Each editor takes responsibility for a set number of episodes per season, with the three working on a rotation. “On season three, Peggy did number one, and Shelly was on two, and I was number three, and then we rotated through the episodes,” explains Koch.

The trio would meet two or three times a week with all the assistant editors, the entire post team, and visual effects to discuss deliveries and any issues. It also helped them know what was happening with the other episodes. Discussions would often involve what should and shouldn’t be included in each episode, with all three editors keen to not give anything away. “You want the viewers to feel like they're learning new information,” Tachdjian continues. “Sometimes scenes would have pieces of information that we would withhold because we knew that in the next episode, the audience were going to get that piece of information.”

“There’s constant communication between the three of us. We’ll discuss if we need to hold an extra beat on a character to plant an idea for a later episode,” agrees Koch. “We might lock the first few episodes, but then we're always going back and relooking at stuff and putting more emphasis on certain things that we find out down the road. Because of the shorthand that the three of us have with each other, there's that comfortability where I can call Shelly or Peggy and ask them to watch a scene. We're artists, and when you play something, you're vulnerable, so you want to feel that comfortability to know that you're in a safe space and you can share your work and feel that you're going to get good feedback.”

Martin Short, Selena Gomez and Steve Martin shoot a scene (Images courtesy Hulu)

A big part of the attraction of working on Only Murders in the Building is raising the suspense across each season. Tachdjian starts not knowing who the murderer is and tries to stay spoiler-free as long as she can, while her colleagues like to know right from the start. “We'll have meetings where they'll say, ‘Peggy, it's time to go, we're going to talk about the murderer.’ At some point, I have to know, because I need to know where the next episode is going, but I try to hold out as long as possible. Just for the fun of it, I like to try to figure it out.

“Even once I know who it is, it's really fun to figure out how we're going to get there,” she continues. “In season two, I knew who the murderer was by episode three, but I could not figure out how it was going to get there. I definitely didn't see the big twist, but by episode seven, I had to know, because there were certain images that were framed specifically to be like an Easter egg for later if you go back to that episode.”

Throughout their time on the show, all three editors have worked remotely. That began with season one and has continued ever since. “We make it work,” states Koch. “It would have been lovely for all of us to be in an office together. There are difficulties in communication when you're all not in the room or an office together, but it works.”

“It helps immensely that we all know each other,” says Westerman. “We have a shorthand, it’s easy.”

Three takes

Working with two hugely experienced comics in Martin and Short means that the editors receive, on average, two to three takes of each scene, thanks to how well rehearsed the cast are before the cameras begin rolling. “There are gags that Steve will do and the director will tell me he’s been rehearsing for three weeks,” says Westerman. “I'll be like, what? He's Steve Martin! The director will say no, he's very rehearsed, very ready to go and there's very little ad-lib.”

Of course, each director works differently. Koch cites season two's finale where director Jamie Babbit shot 52 hours of footage for a 30-minute episode. “[There] was a huge scene with all the characters,” he explains, “it was the biggest scene I've ever had to do, and it was the most amount of footage that we've had on the show. Normally it’s 20-25 hours of footage per show.”

All of the editors get involved with the show during production, cutting dailies as soon as they’re available. The first assembly takes around two weeks to put together before the editors work with the episode director on the final cut. The show is edited in Avid Media Composer.

“I've used Avid my whole career so I don't even know how to use anything else,” states Tachdjian. “I really love the workflow. I love the ease of it. I can do everything I want to do. There are so many tools that help us day to day to be faster, more organised, more efficient.”

“Because of the shorthand that the three of us have with each other, there's that comfortability where I can call Shelly or Peggy and ask them to watch a scene”
PAYTON KOCH

“I’ve spent 28 years editing on Avid,” adds Westerman. “I like the ability to customise it. My layout looks pretty simple. I'll see my assistant's layout, and I'll say, are you running a spaceship here?”

Each editor has a specific tool within Media Composer that helps with their work. For Koch, it’s fluid morphs. “Those are helpful when the producers are telling us to make it faster and pace it up. I also like the Animatte tool, we do a lot of split screens, and those are very easy for us to slap on. The edit assistants will help clean it up, but we can get a rough idea of what we're trying to do.”

Only Murders in the Building is shot using multiple cameras, which provides the editors with different options for each scene. Using Media Composer enables them to group takes, which they say is particularly useful in scenes with a lot of actors.

All three editors have an idea of what they want to achieve with each episode they work on. Westerman says that for her it’s “emotion, emotion, emotion”.

“I agree with Shelly,” adds Tachdjian. “The heart of the show is the emotional connection between the characters, and so I think it's really important to always make sure you're hitting those moments as well as the comedy. It might be adding a look between Marty and Steve, or Selena rolling her eyes. Those things are really important in helping the audience connect to the characters."

Koch explains that for him, the joy of working on Only Murders is the different genres the show plays with each episode. “Each season has its own theme. In season four, we've been talking about movies and old Hollywood, and I kind of try to lean into whatever theme it is per episode.

“My first episode in season four was Gates of Heaven, so I tried to lean into documentary storytelling, and how we could use that. I try to just play with it and have fun with the elements and try to make them feel different or stand out. I just think that's the joy of the show. Not every episode is the same, there's a different format implemented in each one that is fascinating.”

Asked which episode is their favourite, all three editors cite a different one for a different reason. Westerman says hers is Sitzprobe (season three, episode eight), when she was able to work with directors Robert Pulcini and Shari Springer Berman. “I was their assistant editor in New York back in the day, so to reunite with them now as their editor was very special. It gave us a shorthand. I gave them the lowdown on set, so they felt more comfortable going to shoot. That episode also includes Steve Martin performing The Pickwick Triplets song, so that was my favourite episode.”

For Koch, it’s Grab Your Hankies (season three, episode three):

“Meryl Streep sings Look for the Light. It's special because it was my first solo venture, and it was intimidating having her do that big song. I just remember playing it for both Shelly and Peggy, and they were both so sweet and said how great it was. That's my special moment from the show, it was just a beautiful memory, and I'll always remember it.”

“The first episode that I cut is my favourite, The Last Day of Bunny Folger in season two,” says Tachdjian. “We were alone with Bunny, who we didn't know very much from season one, except that she was kind of a brat and that she got murdered. It felt like a big challenge. I was very worried about what everyone would say, especially the director and the producers. At each step, I would get so nervous about what the reaction to it was going to be, and everybody ended up really loving it. That episode is very special to me.”

With season four now at an end, pre-production is already underway for season five; however, only Westerman will be back. Tachdjian is again working with Ryan Murphy Productions, while Koch is working on season two of Netflix’s The Last Airbender.

“It's so weird because having been part of the show for three years, you feel part of the family,” he says. “It's sad to think that Only Murders is going to keep going on without me. I will be very excited to tune in and watch Shelly's beautiful work on season five.”

Martin, Gomez and Short with co-star Da'Vine Joy Randolph

WHY AN EFFECTIVE POC IS ESSENTIAL TO UNLOCKING AI SUCCESS

According to AI and video search specialist Moments Lab, setting well-defined objectives during a proof of concept is essential for any media company planning to invest in AI

It’s no secret that we are in the midst of an AI revolution. Every day now there seems to be a news story about another groundbreaking solution or a whole other area of R&D opening up to AI. And with research firm Gartner recently predicting that AI software spending will hit a colossal annual total of $297.9 billion by 2027 – up from $124 billion in 2022 – it’s a trend that is only set to accelerate.

Because of its ability to improve productivity in various areas of news and broadcast, Generative AI-powered media indexing, content discovery and creation technology is sure to be one of the main beneficiaries. According to a recent study by Caretta Research conducted on behalf of TVBEurope, 73 per cent of broadcasters surveyed said they are looking to existing media tech vendors to embed AI functions in their products and workflows.

Over the last few years, Moments Lab (formerly known as Newsbridge) has cultivated a growing profile in AI-powered media production workflows with a product range powered by its patented generative and multimodal AI indexing engine, MXT.

One of the characteristics that has seen it resonate strongly with specific areas of broadcast is that it is trained on news, entertainment and sports data – and includes capabilities specifically designed for those industries’ use cases.

Along the way, the company has gained valuable insights into the developmental process surrounding AI – some of which are now shared in an AI Buyer’s Guide for Media Organizations, which can be downloaded as a PDF at the Moments Lab website. In particular, it found that an AI application can only stand a chance of realising its full potential if it is preceded by an effective PoC (proof of concept).

Above all, a company needs to approach a PoC with a sense of realism; for instance, it would be foolish to expect 100 per cent accuracy from an AI-powered media analysis technology during the PoC phase.

This is because an AI is trained on a specific company’s content, and improves its relevance to that company, over a longer period of time than a PoC typically covers – a form of ‘continuous learning’, it might be said.

Similarly, it’s important to have robust metrics for determining the success of your PoC. Moments Lab in its AI Buyer’s Guide states that a rewarding trial is one which showcases a reduced workload for your team; this could involve, for example, proven automation of repetitive functions like media logging and scheduling, freeing up staff to work on other, more ‘creative’ tasks.

It’s also wise for a company to think very carefully about its own business needs when it comes to finding and using content. In this context it can be helpful to ask four basic questions:

1. Who is looking for content? Creative services teams and journalists, for instance, could well be looking for very different media files.

2. What are they looking for? This refers to the subject matter as well as objects and places that might be especially relevant.

3. Where are they looking? This relates to what existing tools or platforms are being used to search within, such as asset management or content management systems.

4. How are they searching? This refers to how users are searching for content, whether by simple plain text search or a more detailed query using specific labels or filters.

Moments Lab highlights three potential PoC methodologies that it has recently explored with leading media organisations:

Methodology 1: Content Discoverability

This approach involves companies using AI indexing tools to process significant volumes of media, with predefined search criteria helping them to focus on the discoverability of content. The methodology – which can employ different AI modalities such as facial recognition and transcription – compares the ability to search for content in the Moments Lab Cloud Media Hub with searching in the company's existing content or asset management system, allowing companies to test the effectiveness of their content metadata via search processes identical to those employed in everyday production.

Methodology 2: Generative AI Usability

A recent project involving French news organisation Le Parisien exemplifies the second methodology. Having made use of AI for several years to archive its online video content, the company wanted to see if it could use multimodal and GenAI tools to help create the SEO-optimised titles, chapters, descriptions and hashtags required for publishing to platforms such as YouTube and Dailymotion.

The main aim here was to complement the everyday work of the journalists and reduce the amount of time they spent writing video content titles, descriptions and chapters. Essentially, they wanted to automate most of this work but allow for a final review and editing stage by the journalists prior to distribution.

Le Parisien prepared rigorously for the PoC and established some clear success parameters. These included enhanced SEO optimisation so the videos could be found more easily on the different content platforms; the ability of MXT to generate titles and descriptions with an appropriate tone; and support for easy editing of the AI output by the news team during the final part of the workflow.

After a period of fine-tuning, MXT is now part of the everyday workflow and is said to have brought significant time savings. Whereas it previously took Le Parisien’s journalists 15 to 30 minutes to manually summarise and upload a video, the process now takes an average of just three minutes.

Methodology 3: Indexing Accuracy

This approach takes as its starting point the idea that AI can bring significant improvements in accuracy to the indexing of content. A company can opt to process a set number of files – a few hundred would be a good sample – with various AI indexing tools that analyse the media using computer vision and/or speech processing such as transcription, facial recognition, object detection and optical character recognition. A committee of users can then assess the accuracy of the metadata generated, at which point it will be possible to decide which AI tools hold the most promise for their requirements.
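As a rough illustration of how such a committee review might be tallied (the tool names, verdicts and scoring scheme below are hypothetical, not taken from Moments Lab), each reviewer could mark every sampled file's AI-generated metadata as correct or incorrect, and a per-tool accuracy score could then be aggregated:

```python
# Hypothetical tally of a PoC review committee's verdicts on AI-generated
# metadata. Each verdict is (tool_name, is_correct); a tool's score is the
# share of "correct" verdicts across all reviewers and sampled files.
from collections import defaultdict

def accuracy_by_tool(verdicts):
    """verdicts: list of (tool_name, is_correct) pairs from all reviewers."""
    totals = defaultdict(lambda: [0, 0])  # tool -> [correct count, total seen]
    for tool, ok in verdicts:
        totals[tool][0] += ok
        totals[tool][1] += 1
    return {tool: correct / seen for tool, (correct, seen) in totals.items()}

# Example with two hypothetical indexing modalities over a tiny sample
verdicts = [
    ("transcription", True), ("transcription", True), ("transcription", False),
    ("face_recognition", True), ("face_recognition", False),
]
scores = accuracy_by_tool(verdicts)
# transcription scores 2/3, face_recognition 1/2
```

In practice the sample would be the few hundred files the article suggests, and the committee might weight some metadata fields (names, places) more heavily than others; the point is simply that a shared, quantified scorecard makes it possible to compare tools objectively.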

Meanwhile, Moments Lab continues to enhance its range of products, with MXT-1.5 combining multiple AI modalities including vision and speech with natural language models. This allows video to be analysed quickly and divided into sequences and sound bites, accompanied by concise, human-like descriptions of the content. (As an aside, the company’s AI indexing tech was recently found to outperform state-of-the-art AI models GPT-4o and Gemini 1.5 Pro on long video understanding, based on the VideoMME benchmark.)

Forthcoming developments include a new AI prompting solution designed to enable the quicker production of a rough cut, and a groundbreaking new AI approach – in development with a leading broadcaster – to identify what’s inside physical tapes pre-digitisation, in order to discover and select only the material of greatest potential value for digitising and monetisation.

Please visit momentslab.com to find out more about its AI-powered products, recent research, and notable work with customers spanning media and entertainment, sports and brands.

Contact: hello@momentslab.com

STOP-MOTION TRIUMPH

Colourist Deidre (Dee) McClelland details how she helped shape the distinct look and feel of whimsical Australian stop-motion film, Memoir of a Snail

Working on Memoir of a Snail felt like coming home. Director Adam Elliot and I had collaborated on several projects before, so when he contacted me again, I felt honoured. Adam has a way of laying out his vision with clarity and precision. He’s a professional in every sense, his planning meticulous, his colour palette carefully chosen, and every scene’s emotional intent mapped out in detail. Collaborating with someone as thorough as Adam means you don’t just learn the aesthetic they aim for — you almost start to anticipate it. In fact, I often found myself applying looks to the footage even before he voiced his ideas.

Adam’s vision was incredibly specific, with each location crafted to reflect [lead character] Grace’s memories and emotions as the story moves through Melbourne, Perth, Canberra, and even Paris. Each city carried its own unique palette, mood, and meaning. Paris, for instance, was portrayed as warm and nostalgic, a memory from Grace’s parents’ past as she imagines it. To create this, we used a soft-edged vignette, giving Paris an almost sepia-toned warmth, contrasting with Melbourne’s muted, grey tones that reflect Grace’s more subdued emotional state. Canberra, where Grace spent much of her childhood, fluctuated visually depending on her mood. When she felt overwhelmed, her bedroom was a dark, cocoon-like space; but in happier times, it brightened, capturing her more hopeful moments.

PICTURED ABOVE: Watching the colours come alive on screen, from Pinky’s vibrant glasses to the nostalgic hues of Paris, reminded McClelland of the immense power of visual storytelling

One scene in particular that stands out to me is Gilbert’s fire-lit music box sequence. Gilbert’s home in Perth is dry, barren and symbolically tied to his isolation and loneliness. In the grade, we enhanced the flames to make the scene crackle with heightened saturation while preserving the music box’s deep, original red as it melts. To accomplish this, I used colour keys, the colour warper, and layering techniques to balance the bright, intense flames against Gilbert’s solitude. The goal was to amplify the raw, emotional weight of the moment without overwhelming it visually.

Another element I loved working on was Pinky, a touch of colour and brightness in Grace’s life. We wanted her red glasses and her presence in Grace’s scenes to be a bit more vibrant, cheerful, and full of life. For this, I used selective colour keys and curves to ensure her glasses and any props around her retained a rich pop of colour. It was subtle work, but the result gave Pinky’s character a distinct visual voice, adding warmth to Grace’s life story.

This delicate dance of colour changes was largely created in the grade, where I dialled up or down the warmth and saturation to reflect the ever-changing emotional landscape. It was about balancing reality with emotional perspective, leaning heavily into tones that echoed her journey.

Pushing the emotion

Collaborating with Adam and DoP Gerald Thompson throughout the grading process was incredibly fulfilling. Gerald’s input and Adam’s vision allowed us to push particular looks to the extreme, especially in emotionally charged scenes. For instance, in Grace’s most isolated moments, we desaturated the colour nearly to black and white, capturing the sense of emptiness in her world. When Grace contemplates taking the snail bait, we darken the scene and emphasise the storm clouds gathering behind her, heightening the gravity of her internal conflict. Here, I leaned into darker tones, keeping small details like the snail visible, using a subtle yet powerful visual language to underscore her struggles.

Deidre (Dee) McClelland

Romantic moments, too, were given a unique look. We used soft vignettes and blur tools to heighten the sense of intimacy, emphasising saturated reds to make objects like hearts stand out. The in-camera aesthetics inspired this look. However, I added contrast and carefully tracked key points to achieve a softer, dream-like effect, allowing these scenes to flow seamlessly within the film’s overall warm palette.


Consistency was key to managing the look and feel across departments. The Soundfirm team played a significant role here, making sure the colour space remained consistent as we sent and received shots from VFX vendors. With constant updates and close coordination, we kept everything on schedule despite the inevitable last-minute changes.

Some challenges were particularly satisfying to resolve. In one scene, two boys are connected to an electrical device meant to “cleanse” them of their demons, but the intended in-camera electrical flashes didn’t all register clearly. In the grade, I replicated the effect by layering overexposed nodes and using random, controlled transparency, guided by the soundtrack. This workaround saved us the cost and time of redoing the VFX and added a realistic, electric charge to the scene.

DaVinci Resolve was invaluable throughout this project. As updates rolled in from other departments, Resolve allowed me to keep working on the grade without interruption, which helped us stay on schedule. From our P3 DCP final to HDR and SDR Dolby tone-mapped versions, Resolve facilitated every deliverable, making the entire process efficient.

Memoir of a Snail was a rewarding experience, working with an inspiring director and a talented team. Watching the colours come alive on screen, from Pinky’s vibrant glasses to the nostalgic hues of Paris, reminded me of the immense power of visual storytelling. It’s an honour to have helped bring this beautifully crafted world to life, enhancing each scene to resonate with the story’s emotional depth and complexity.

Memoir of a Snail will be released in the UK on 14th February 2025

Pinky brings a touch of colour and brightness into Grace's life

ABOVE: DaVinci Resolve facilitated every deliverable of the project

KEEPING MEDIA HERITAGE ALIVE, ENGAGING, AND INTERACTIVE

Through its history, tools, and activities, the Netherlands Institute for Sound and Vision brings media-savviness to visitors of all ages and backgrounds

The Netherlands has its own media city, but it’s likely not the one you’re thinking of. Hilversum is nestled 24 km southeast of Amsterdam, surrounded by picturesque meadows, nature reserves, and tiny lakes. With a history of being the main centre for radio and television broadcasting in the country, it’s also home to the Netherlands Institute for Sound and Vision (NISV).

A Dutch cultural institution responsible for the archiving, preservation, and accessibility of audiovisual heritage, the NISV holds one of the largest audiovisual collections in the world. Consisting of radio and television broadcasts, films, music, web content, and other forms of media, the NISV plays a key role in preserving the media history of the Netherlands, making its collection available for research, education, and public use.

“Our mission is to keep the media heritage alive. We do this in various ways: for instance, with our interactive Media Museum or with a workshop for young people focused on fake news,” says Phillip Maher, head of IT and development at the NISV. “[We count] on many partners, including academic researchers and creative media makers, with a shared value of making it easier and more accessible for people to be media-savvy.”

The exterior of the Netherlands Institute for Sound and Vision in Hilversum (Image courtesy Jorrit Lousberg)

In the Media Museum, visitors can create their own piece of news, explore the impact of advertising, design and play games, and even tell stories like a filmmaker. Bridging the journey from media consumer to media maker, NISV visitors can learn, experience, and practise different levels of media creation – for instance, by creating a news website, a sports summary, or a news piece about a current event.

Storing content

The NISV has a growing library of 1.5 million hours of content. Acting as the media archive for the Netherlands’ public TV and radio broadcasters, it receives content from a variety of sources, from the national news and the Dutch parliament to heritage institutions and even the football league.

Managing this vast content archive is Viz One, Vizrt’s enterprise asset management system. Sitting at the heart of the operation and backed by a powerful suite of APIs, Viz One integrates across the NISV’s technology stack, making the archive accessible and searchable, as well as securely stored.

More importantly, Viz One can place the content where it is needed, when it is needed, so different user demands are catered for quickly: media professionals negotiating rights access to a show, for instance, or general public users downloading the content they require for research or production.

“Archiving content is important for various reasons, but a key one is that it keeps history accessible and alive through documentation,” says Jochen Bergdolt, global head of Vizrt’s MAM business unit. “Especially nowadays, having a rich archive of video is extremely valuable. Most of the stories we learn about come to us through video form, whether that is a documentary, television special, or a developing news story.”

Making the most of the archive

The NISV has dedicated web portals that enable external access to a wealth of content, allowing thousands of users – including broadcasters, education institutions, and NISV’s Media Museum – to search the archive.

Vizrt’s partner, Mayam, worked with NISV to build dedicated licensing workflows, so broadcast users can make their content available to other broadcast and professional organisations. Viz One’s integration with Mayam’s Tasks orchestration application means different content ingest routes and media licensing workflows can be managed, reviewed, and updated through a centralised tool.

“Collaboration is part of our work. We want those who visit us to understand that they are also producers and influencers, not just consumers of media. The growing archive is often used for different productions by creators and companies. Events are also held to encourage engagement and thoughtful discussions,” says Maher.

Media in the digital age

Now more than ever, it’s clear that keeping a record is valuable not only for the owner, but also for the public. The NISV states on its website: “Our starting point in everything we do is the importance of free media for our democracy.” For the NISV, media literacy is indispensable for a well-adjusted society.

Various public activities, debates, and lectures take place in the NISV spaces for all ages. As they interact with the digital possibilities in the museum, questions are asked of visitors: Who or what determines your worldview, the media or yourself? How good are you at recognising fake news? Or, after learning media techniques to tell stories, which hero type suits you best?

With these activities, the NISV uses the stored content to take a step beyond the usual exhibition-visitor relationship – visitors become part of what can be produced with the archive, understanding its fundamental role in storytelling and how to become storytellers themselves. This way, visitors leave the museum with deeper knowledge of media in all its formats, and of their own role in shaping it.

“We want visitors to come out of their experience at the Netherlands Institute for Sound and Vision knowing that they are active participants in the media developments of our age. The cultural heritage preserved in our archive is used to show visitors how we live in media, how it affects our lives, and how it is affected by us,” concludes Maher.

A section of the Wall of Fame at the NISV

AN INEVITABLE EVOLUTION?

At IBC2024, Grass Valley CTO Ian Fletcher told attendees at GV Forum that broadcast is now an IT industry. During our discussions at the show, TVBEurope heard varying degrees of agreement with that statement. Here, industry stakeholders share their thoughts on the subject

The shift from traditional broadcasting to an IT-driven operation hasn’t happened overnight, but it’s been in the making for years. The broadcasters who have successfully adapted to this shift are those who recognised early on that the worlds of broadcast engineering and IT were not separate entities but interdependent ones. Historically, these departments operated under very different rules and conditions, which sometimes led to conflicts — like the infamous Friday afternoon firewall changes by IT, leaving broadcasters scrambling over the weekend.

However, as we move towards software-based processing and cloud-based solutions, it’s become clear that these two fields need to work more closely together. Some broadcasters are essentially transforming their on-prem facilities into data centres, which confronts them with a critical decision: should they continue owning these infrastructures or outsource them to the cloud? It’s a balancing act, with pros and cons to both.

ROBERT

But while the technological evolution towards Kubernetes and microservices is inevitable, it’s crucial to acknowledge that these systems require much more attention than the old hardware setups. I’ve seen broadcasters who haven’t touched their systems in a decade because they simply didn’t need to. With modern tech, though, it’s a different story — you need to “feed and water” your Kubernetes clusters, applying security patches and keeping everything up to date.

The reality is that not every broadcaster has the resources or technical expertise to manage these complex systems. That’s why it’s important to use the right technology at the right place in the chain. For enterprise broadcasters with in-house expertise, a deep dive into advanced IT systems makes sense. For others, a more simplified, broadcast-friendly solution is essential to ensure they’re not bogged down by complexity, allowing them to focus on what they do best: broadcasting.

The broadcast and media industry has adopted software and IT solutions at scale for 20+ years, and the rise of public cloud providers means more and more workflows are running efficiently and reliably on standard IT infrastructure – from live production to linear playout and streaming. That’s helping to drive down costs and increase innovation and agility.

What makes us different as an industry is scale – or rather the lack of it. Some vendors are adopting a standardised SaaS model, especially where there’s scale across other industries. But for most products, a “one size fits all” approach is too much of a challenge.

What a major public broadcaster needs is different from a telco pay-TV operator or a small content distributor. They can all use Google Workspace or Microsoft 365 to run their business, but they each have quite different needs and budgets when it comes to MAM, rights management or streaming platforms.

But, and it’s a big but, we need to get away from the “we’re different” mindset. There’s still far too much reinventing of the wheel by buyers when an off-the-shelf product will meet 80 per cent of the requirements much faster. When it comes to return on investment, time-to-market wins every time.

At the same time, some technology vendors need to do much more to invest in keeping their products up to date. The IT industry delivers multiple releases a day. Some broadcast products are lucky to see a few updates a year, and frankly, too many vendors have underinvested in their products for too long, leaving them to wither on the vine and banking on a captive audience.

What’s still to play for is who, if anyone, will control the platforms on which our industry’s content supply chain runs – in the way that, say, Google has achieved in pay-TV with Android TV Operator Tier. Scale and economics suggest there’s value in a common industry operating system. But technology buyers consistently tell us that they don’t want their core operations in the hands of any one vendor.

ANDY BELL CHIEF ENGINEER, CHANNEL 4

There are many reasons why we could label the broadcast industry as another part of the IT industry. If you look at the last decade and our industry’s transition away from traditional bespoke broadcast infrastructure to everything having an IP address and being deployed in the cloud, you can see why the argument is made.

Having experienced the transition from analogue technology to digital in the 1990s, where video and audio signals took a giant leap from continuously varying waveforms to ‘zeros and ones’, I accept that switching to a digital world was quite straightforward to understand. The switch from broadcast to IP could be argued in the same way, as that transition has been relatively fast. Only ten years ago, nobody would have dreamed of trying to deliver broadcast signals via anything other than broadcast-grade kit from a specific manufacturer. Fast forward to 2024 and the first decision is how you architect your platform in the cloud, not whether you should use the cloud at all.

With all of this said, and with nothing but respect for the IT industry, there remains quite a difference between running an IT operation and running a broadcast operation. In broadcasting you require specific craft skills to create, editorialise, prepare, convert, localise, add accessibility and market content. The differences between broadcast and IT are closing but are still vast.

CHRIS BAILEY HEAD OF INNOVATION, JIGSAW24 MEDIA

While the Sony hack in 2014 may have kickstarted the slow blending of IT and media engineering, it was probably during the pandemic that the dynamic changed most dramatically. Before Covid, media teams would provide the IT department with a prerequisite sheet of what we needed from them to enable our systems. Now IT is the first port of call, and they provide the specifications that solutions need to be designed around. Security is paramount, and if a media system doesn’t fit into IT’s network architecture and security policy then it’s a non-starter.

As a result, media and IT engineering roles have become more blended, especially in small to medium-sized organisations. But it’s not necessarily a case of meeting in the middle. In addition to specialist knowledge around media systems, the new breed of broadcast engineer has a much better understanding of how networks are put together, but IT engineers (particularly in enterprise organisations) don’t always have as much in the way of media chops. They may know what they need to achieve from a networking, security and architecture standpoint to integrate media technologies, but they generally would not need to understand the workflows or intricacies of the media systems.

So, media engineers may understand networking and IT infrastructure, but does that mean they’re the same as a corporate IT engineer? Absolutely not. That’s why we have two sides to this business – because while broadcasting and IT have become increasingly blended, they are still very different specialisms. And there’s probably not enough bandwidth in one person’s brain to do it all.

STEVE REYNOLDS PRESIDENT, IMAGINE COMMUNICATIONS

The evolution of the broadcast industry is increasingly intertwined with IT – a shift that we anticipated following the successful transition of various applications to commercial off-the-shelf (COTS) servers. Over a decade ago, automation, media asset management, and video servers were among the first to move to software, paving the way for further innovations in virtualisation and cloud technologies.

At Imagine, we recognised that infrastructure would also inevitably migrate to standard IT topologies, which is why we invested early in SMPTE ST 2110, leading development of the standard and championing the adoption of open, multi-vendor solutions. This strategic move stemmed from our belief that embracing IT would transform the broadcast market: accelerating the adoption of next-gen innovation, reducing total cost of ownership, and delivering the R&D benefits of working with a broader base of suppliers, technologies and development skillsets.

Transitioning to IT infrastructure has also proven to be a necessary “on-ramp” to cloud integration. While not every aspect of broadcasting will migrate to the cloud, we are confident about the industry’s move toward IP-based systems. This shift is not just a trend; it’s an inevitable evolution that will affect every facet of media production and distribution.

To navigate this transformational change, Imagine worked to expand our IT skillsets and knowledge base, and it’s essential for our broadcast customers to do the same. Organisations like SMPTE and AIMS have prioritised training and certification programmes to bridge IT best practices with the unique demands of broadcast workflows. This focus on education will equip our customers to adapt to new technologies and remain competitive in an increasingly digital world.

Opinions may vary, but Imagine’s take is that the convergence of broadcast and IT is inevitable, and we are ready for a future where IT expertise will be critical for success in the media landscape.

PAOLO PESCATORE FOUNDER PP FORESIGHT

I’ve been following this space for a few years now and have seen the growing dominance of big tech in telco. A few years on, the big tech takeover of media and broadcast is almost complete. The likes of AWS, Google Cloud and Microsoft, as well as others, have been steadily increasing their presence and continue to evolve by adding more features and functions to their suite of capabilities. They have played, and continue to play, a crucial role in transforming legacy platforms into this new software-driven, cloud-based, virtualised world. All of this plays to the strengths of big tech. It is surprising not to see more traditional media and broadcast players take the leap of faith into this brave, bold new world. Some are holding on to old SLA cash-cow contracts but will soon follow others in facing revenue decline. The need to pivot is in the air due to numerous critical factors: ageing systems, reducing physical footprint, more sustainable and remote operations, and cost savings.

New ways of content creation, production, distribution and consumption are changing very fast, driven by next-generation connectivity such as fibre and 5G. Combined with AI (which some folks at IBC believed will be the saving grace of the industry), they are forcing a change in existing culture towards embracing new ways of working.

We are now seeing a different phase, with big tech collaborating with small niche solution providers to further complement their existing offerings.

Therefore, as I predicted many years ago (at least six, maybe more), it is unsurprising to see these IT companies move in and take over the media and broadcast industries. Arguably, it is somewhat surprising not to see more organisations like IBM and Oracle showcase at key events. Overall, all paths lead to big tech, with connectivity powered by telcos!

The line between broadcast and IT infrastructure is becoming increasingly blurred, but to say broadcast is now entirely an IT industry would be an oversimplification. While the use of IP technology, cloud computing and commercial off-the-shelf (COTS) hardware in live broadcasting has accelerated in recent years, the transition is not complete. Broadcasting has relied on IT for a long time, especially in file-based workflows. Live production, however, presents unique challenges that have been (and to some extent remain) difficult for IT to solve fully, most notably the need for ultra-low latency, very high reliability and the handling of massive data volumes.

When it comes to media processing (including the video encoding used in media transport), software is making ever greater inroads into live production, providing much greater equipment flexibility and indeed sustainability through re-use.

While COTS and cloud platforms can now provide acceptable performance for some productions, software running on optimised ASIC-based hardware platforms remains the only option to deliver the processing power needed for many high-quality productions.

This shift to software is also changing the way broadcast functionality is developed and delivered, with IT approaches now being adopted. The most notable difference from the traditional broadcast approach is that products can now be improved and extended continuously through software upgrades. There is no longer a need to create and deliver “the final product”; instead, the IT approach of continuous integration/continuous delivery can be used. At the same time, broadcast is beginning to adopt the IT industry’s open standards approach, further signalling the growing overlap between the two industries.

Sony’s own offering reflects all these concepts, with an increasing use of IP and software in products, a commitment to standards, and the adoption of IT development methodologies.

That said, aside from the performance challenges, there is also a major issue holding back the full adoption of IT in live production: the scarcity of combined broadcast and IT knowledge in our industry. The good news here is that companies like Sony can help bridge that gap, with expertise developed over the course of many years.

In conclusion, one should not become fixated on whether broadcasting is now an IT industry or not. There is plenty to be learned and embraced from the IT world, but ultimately the best tools are required for production, and that may sometimes mean taking a non-IT route.

THE POWER OF CONNECTION

Announcing its latest audio solution, Sennheiser aims to start a revolution in audio production. Matthew Corrigan hears how and why from co-owner and CEO, Dr Andreas Sennheiser

At the launch of the latest addition to its product suite, the Sennheiser Group left nobody in doubt that it has bold ambitions for Spectera, the world’s first bi-directional wideband digital wireless ecosystem. The product was unveiled at three separate locations across the globe, Hong Kong, Nashville and the main event, Amsterdam, at the opening of IBC2024. The choice of venue was notable. A former tram depot, de Hallen, in the city’s Oud-West district, has been repurposed as a communications hub for the 21st century — connections are still made there, but are no longer limited by the constraints of yesterday’s technology. A fitting place to debut a groundbreaking development which aims to build on more than six decades of innovation.

Simplicity is at the heart of the solution which, says Dr Andreas Sennheiser, joint-CEO and co-owner (with brother Daniel) of the group that bears his name, aims to usher in “a new era for how people work with audio gear.”

Explaining what he believes represents nothing less than the future of pro wireless audio, Sennheiser says the ethos behind its design is rooted in the simplicity and reliability demanded by today’s audio professionals.

Spectera is, he says, “both easy to set up and easy to use, integrating within and streamlining existing digital workflows”. The solution, he continues, is “based on a simple principle: it allows connection of audio hardware and software devices”. Throughout its operational life, Spectera will benefit from continuous updates of both hardware and software. “We don’t have to worry about what is coming in the future,” he adds.

The technology aims to change the way people think about the pro-audio landscape, overcoming the limitations that have shaped wireless innovation for over six decades. By taking a new approach, the company has designed an ecosystem that is both interoperable with existing infrastructure and evolves to meet the changing needs of audio production. Its bidirectionality enables both audio signals and control data to be handled simultaneously via a single RF carrier. Spectera’s modulation and multiplexing capabilities provide robust protection from RF dropouts and interference, and full system redundancy ensures continuity and quality is maintained.

In what the company claims is a first of its kind, Spectera offers SEK bidirectional bodypacks which handle both mic and IEM in a single pack, able to transmit and receive audio signals simultaneously with latency as low as 0.7ms. Sennheiser lithium-ion batteries provide up to seven hours of operating time, depending on the audio link selected.

LinkDesk, a brand new software application, enables full monitoring and remote control of all connected system devices. Latency, channel count and audio settings can be managed in addition to battery charge status and RF health. An intuitive desktop solution designed with ease-of-use at its heart, it offers continuous RF spectrum scanning while providing assistive behaviours and smart notifications, empowering users with full system oversight and control of the audio.

Enabling the system to operate is the company’s transceiving DAD antenna, transmitting and receiving bidirectional audio and data simultaneously on a single RF channel. Integrated RF components and a single Cat5e connection eliminate the need for any additional components or cabling, effectively building in simplicity. The base station provides 64 wireless audio channels (32 input and 32 output) in a single-rack unit. Four antenna ports offer either extended zone coverage, redundancy, or additional spectrum capacity by deploying two RF channels.

Sennheiser is quick to point out that Spectera represents a starting point, a redefinition of the possible upon which tomorrow’s audio production professionals can build. “The future will bring greater integration and automation, to simplify technology even further,” he believes.

In creating Spectera, Sennheiser sees a natural overlap between simplicity and sustainability, with one having an inevitable impact on the other. “Sustainability was not the primary driver behind its development,” he says, “but it is a commercial benefit.” By minimising the complexity of the infrastructure, companies are able to drive efficiencies, which have an impact on both the balance sheet of a business and its carbon footprint. “This reduces every piece of technology flown around the world,” he explains. “And by simplifying, we also reduce the need for technicians to travel.”

As is the case throughout countless other industries which rely on technological innovation, there is an overarching need for simplification. With IP adoption and wider digitalisation advancing at an exponential rate, the requirement for interoperability and connectivity grows ever greater. Describing the company as having “one foot in the creative world and one foot in the high technology world”, Sennheiser recognises that by embracing developments such as Spectera, the media and entertainment industry will be further enabled. “It is allowing the person to concentrate on the creative side, rather than mastery of technology,” he adds.

The launch of Spectera created a palpable buzz in the pro-audio environment, just as WMAS (wireless multi-channel audio systems) is beginning to reshape the technological future. Offering a concluding thought, Sennheiser relates what he sees as he peers over the horizon.

“Collaboration,” he says, with clear conviction. “Collaboration at company level is something the industry greatly benefits from.”

CloudFlow Hub - from Ingest to Archive with Cloud and AI

Get ready for the future of production. From live video feed ingest to final distribution, CloudFlow Hub covers all aspects of the media production workflow.

Discover the power of seamless workflow automation and elevate your media production:

• Automated Workflows.

• Scalable Performance.

• Live AI Captions.

• Fast-Turnaround Translations.

• Lower Operational Costs.

Spectera SEK 1G4

SUCCESSFULLY CROSSING UNCHARTED WATERS

VR production company Light Sail VR made a big splash with a unique focus on creative storytelling, backed by the performance of OpenDrives’ data storage platform

This is the story of how two good friends set about changing the way immersive storytelling is done in the emerging, highly competitive world of virtual reality (VR) with their company, Light Sail VR.

Founded by managing partners Matthew Celia and Robert Watts and based in Los Angeles, Light Sail VR specialises in “story-first” content that prioritises compelling narratives, dynamic characters, and entrancing visuals. Since 2015, the company has been pushing the boundaries of immersive content, working with Lionsgate, Amazon MGM Studios, Paramount, and more. Their projects have immersed audiences in both 180- and 360-degree experiences, spanning genres that include music and concerts such as Sabrina Carpenter: A VR Concert, and live-action TV series like Eli Roth’s The Faceless Lady. But before they found their niche in VR, the two college friends really just wanted to tell better stories together.

The genesis of Light Sail VR

Celia and Watts met as students and shared a passion for film, production, and emerging technology. Their college years laid the foundation for a lasting friendship and professional partnership. After graduation, they pursued different paths in production — Celia in technical and narrative and Watts in content development and business operations — but reunited over a shared vision of telling great stories through technology. Watts, who is also the company’s executive producer, was the driving force behind starting Light Sail, and it was he who convinced Celia, the company’s creative director, to look into VR as a storytelling medium. But despite being impressed by VR’s action, 3D effects, and gaming experiences, the two agreed that there was still something missing: the story itself.

“Everything to me lacked the kind of stories I wanted to see,” Celia reveals. “There wasn't a lot of character. There wasn't a lot of emotional resonance, the reason why we should care about content. And since then we’ve been putting this story-first focus on all of our content. It's not just a job to us. You can teach anybody how to make VR content. It's a lot harder to teach someone how to tell a good story.”

Pioneers in ‘story-first’ immersive narratives

For nine years now Celia and Watts have continued to explore all the ways that VR could change the immersive storytelling experience. “We are both very passionate about this as a medium,” Celia says. “We are struck with how powerful it can be, the kinds of stories we could tell, and the challenge of birthing a new medium and writing all the rules. For every project that we touch, we are always asking, ‘What is this adding to the canon of immersive media?’”

One of Light Sail’s first success stories centred on Paramount’s Paranormal Activity movie. After going viral and amassing 10 million views, Watts felt this was proof Light Sail could be a successful commercial production company if they continued to focus on telling immersive, story-first narratives. From there, Light Sail went on to work with Google, slowly building and expanding the company brick by brick.

Light Sail is also eager to teach the world more about VR, along with helping their partners be more successful in using it. According to Watts, “Basically, we've continued that story-first mantra through our company's history. We’re always story-driven and always asking, what is the end user getting out of VR?”

“That kind of ethos is really powerful in VR because it's very much an observer medium… you're transporting audiences into this world where there's no screen, nothing separating them from the story,” Celia agrees. “VR unlocks a lot of storytelling techniques and different kinds of stories that you can tell,” he continues. “The most exciting thing about working in this medium is using technology in service of the story. But I will say this technology is also the single thing that can destroy the story.”

A good story needs great characters, great camera design, great writing, and great acting, not a reliance on fancy effects. It’s easier for a bad 2D story to hide behind fancy effects, but with VR there’s no hiding if there’s no story.

Watts explains that VR storytelling involves educating viewers on how they interact with the cameras, why they interact with the cameras, and how the cameras are placed.

“We want our talent to look at the camera and play to the camera because in VR, the camera is the audience. It’s like having a personal one-on-one connection to their fans. So how do we balance creativity with the technological limitations that exist in VR? I would say we actually flip that. We take the technological limitations, and we use that to foster our creativity.”

Navigating uncharted waters in technology

With the pair's combined storytelling chops and VR technology expertise, you would think they had everything needed for smooth sailing ahead. But it wasn’t so simple: a centralised storage platform to keep Light Sail VR's disparate team efficient, productive, and organised was missing. Their growing success also meant they needed a way to cost-effectively scale and evolve, to prepare for choppy, uncharted waters. “The reality is you can't derive a good story if you can't work. We want to spend less time fighting technology and more time being creative,” Celia says. “Because when your technology breaks, when it doesn't work, when it's frustrating, it takes you out of that creative zone.”

The company set out to find the fastest, most reliable data storage platform available. “And that’s what I found in OpenDrives,” Celia explains. “I'm so impressed with how fast the system is, and how they continue to invest in the software that makes it grow. I felt like this was a product that was going to scale with us and grow with us and help us reach the next level.”

Unleashing

the creative storm

Light Sail was first introduced to data storage and workflow solutions provider OpenDrives a few years before putting them to the test for The Faceless Lady. With an ultra-tight deadline and a widely distributed team working out of multiple cities and countries, a fast, reliable centralised storage platform was necessary to help the team accomplish the seemingly impossible: deliver more than 300 visual effects shots that were massive in size (8K at 60 frames per second) in less than two months.

Celia’s decision to host everything on a central server made it possible to deliver the project in a short amount of time. The team also achieved speed and efficiency by inventing a new process where they could send the VR signal from headquarters back to any remote location and view it in a VR headset without having to plug back into the box.

“Centralising everything is the fastest way to work, and combining it with our NDI remote preview allows artists to utilise their VR headsets to QC their work anywhere in the world,” Celia says. “OpenDrives helped us to plug more artists into scale and not worry about copying files, shipping drives, or organising updated versions.

In OpenDrives, Light Sail VR found a centralised storage platform to keep its team efficient, productive, and organised

“It was kind of a miracle, to be quite honest. It was really, really awesome to have that kind of stability, to maintain that real-time playback, to maintain all the exports working, and witness nothing crashing. We had our render farm cooking out shots 24x7 for six weeks. And I don't think we could have done that with any other system today.”

Watts adds, “We have tens of terabytes worth of data that we are shipping back and forth daily, and what's nice is OpenDrives enables us to have that all worked into our technology in a very seamless way so that my teams can operate between editing and finishing and post and distribution all in the same server, all, whenever they need.”

Light Sail now stores all of its critical, active files on OpenDrives with the system’s centralisation fundamental to the company’s success.

Light Sail ahead: what’s next?

The future of VR is promising, and as content continues to get bigger and heavier, and technology grows more complex, Watts believes storage is going to play an increasingly larger role in the space. VR companies like Light Sail will need data storage and management solutions that help drive efficiency and interoperability as workflows are ever-changing and never the same.

As demand for VR content increases, as evidenced by key players like Meta and Apple heavily investing in the business, Light Sail is already targeting how to support more projects in narrative and episodic television.

According to Celia, “There's very few certainties in life, but one of the most certain things is we will always need more hard drive space because VR projects are enormous. So as we continue to film longer shows and create longer content with more cameras, it means that we're going to have more data to work with. That's going to require us to scale, and one of the greatest things that I love about the OpenDrives system is that it is very easy to scale.

“How we got here, and where we’ll end up going is down to our ability to iterate extraordinarily quickly. We couldn't do that without OpenDrives, who understand not only how to build a system for today, but to build a system for the future. I like that, because I'm not going to close up shop in a year. I'm looking forward to the next 10 years of Light Sail VR.”

Exploring AI-ENHANCED TV SOUND

A mainstay of the pro-audio calendar, the latest edition of the Audio Collaborative conference – organised by Futuresource Consulting – once again took an unflinching look at a handful of issues presently defining the industry, writes David Davies

Marking its 11th year in 2024, Audio Collaborative has never shied away from addressing the latest technology and business trends, or exploring what they mean for the future of audio. This year’s event was no exception, with several of the topics – notably AI and sustainability – currently preoccupying the entire broadcast and professional AV industries.

Taking place at The Soho Hotel in London on 4th November, Audio Collaborative 2024 found time for sessions as disparate as The Role of True Wireless Earbuds in Hearing Health, Maintaining Brand Relevance in a Digital World and The Driving Force Behind Auto Audio. For the purposes of this review, we’ll focus on two key sessions: AI-Enhanced TV Audio: Clearer Dialogue, Better Sound, presented by DTS vice-president of audio research and development Martin Walsh, and Sustaining the Future, presented by Lisa Stafford, CEO of TAZAAR.

In his presentation, Walsh examined the appetite for an enhanced experience of dialogue in the home TV environment. It is now very common for viewers across age groups to report intelligibility problems when watching TV, with 84 per cent of those surveyed by DTS noting that they had at least “occasional difficulties” understanding what people were saying. The response to this might be to turn on subtitles or simply turn up the volume – the latter an unsatisfactory solution, as it involves “turning up the ‘bad sound’ as well to make it louder.”

Walsh went on to contrast the positive aspects of TV design development over the last 25 years – including HDR and 8K resolution – with the fact that generally, “the audio experience in TVs has often gone backwards. You’ll have two back-firing speakers and they’re not going to send the audio in your direction; fix the screen to the wall and it will fire the speakers into the wall”.

All of this is somewhat at odds with a creative community that “hopes you have a big surround system at home and can recreate the immersive soundtracks” that have been carefully finessed in high-end post production studios. With the inevitable downmixing to two channels, it’s not surprising that “dialogue can be masked” and viewers are left unhappy, with complaints about audio intelligibility frequently polling highly in the list of criticisms logged by regulators such as Ofcom.

Noting that it’s a “multidimensional problem”, with environmental conditions such as housing materials and design – “there are so many more open-plan houses now” – also contributing to the challenging audio environment, Walsh said that ultimately “it always has to come back to one thing: fix the mix.

“This is usually associated with the content creator, and how they do that is one of the biggest challenges. The more preferable way of doing it is to send the ingredients of the mix […] to the home, have it mixed there, and do it in a more bespoke manner according to the needs of the listener and their environment.”

Enter DTS Clear Dialogue, an AI-based audio processing solution that identifies, separates and enhances dialogue to increase intelligibility and deliver a customisable audio experience. “The whole premise is that you solve the TV hearing issue at home, but through personalising it to everything we know about you as a customer – rather than putting it all on the content creator,” explains Walsh.

Sustaining the Future was presented by Lisa Stafford from TAZAAR, which works with electronic and electrical hardware manufacturers to trace their products from point of purchase to end of life – the overarching aim being to promote the circular economy, including resale, repair and recycling.

Stafford highlighted some generational changes affecting customer choices in AV; for example, those born in the ‘90s and beyond are more likely to “gravitate towards brands that have good sustainability credentials” and are also tending to buy a lot of their equipment on resale marketplaces. The concept of a Digital Product Passport – which is expected to become mandatory in the EU between 2026 and 2030 – is set to give the customer more confidence about a whole array of factors, from basic details to information about environmental impact, as the record of the product will be embedded into the physical item itself.

In a follow-up conversation with TVBEurope, Stafford noted that pro-AV brands are under pressure to align themselves with environmental standards and reporting – not only from an increasing regulatory perspective, but also from end-users and event organisers. Event organisers, for example, are under pressure to report their environmental impact, yet only 6 per cent of event carbon emissions come from the AV hardware used at events. AV is nevertheless the area over which organisers have the greatest control, so they actively assess the most sustainable products to use in their productions and frequently request lifecycle assessments that estimate a product’s Scope 1-3 emissions.

She also observed that “hardware companies are cost-driven and will make their operations more sustainable if it means reducing costs. However, the upfront cost and coordinating new/existing stakeholders are barriers that prevent many brands from taking action.” But there are steps that can help mitigate these expenses: “In Europe, manufacturers are investing in on-site renewable energy to reduce energy costs; purchasing energy, time and materialefficient machines; and re-shoring manufacturing partnerships where there has been a significant rise and growth in electronic manufacturing service partners.”

With regard to areas that manufacturers should be prioritising, Stafford says: “Looking at energy consumption and product packaging are easier wins for a manufacturer because they typically involve few people in the business to instigate and manage, and have a short-term cost benefit. Every hardware company is different, which is why we take a partner approach to every manufacturer we engage with. However, there are some common aspects hardware manufacturers could consider now, such as appointing a project manager to assess, adopt and implement sustainability-driven business initiatives, digitising product manuals, and regularly training dealers/distributors on product repair and maintenance to support end-users and make spare parts more readily available.”

Lisa Stafford discussed the idea of a Digital Product Passport

Protecting THE FUTURE WITH DNA DATA STORAGE

Karl Paulsen explores the intersection of information technology and molecular biology

In our current information age, the storage and protection of data is an extremely important part of any business venture. The volume of digital data being produced globally has long been outpacing the amount of storage available, irrespective of the medium on which that data is stored.

Cloud currently takes first place on the storage agenda given its flexibility and user expectations. However, a relatively new means of data storage arrived several years ago and is now taking on some very different perspectives.

We’re speaking of DNA data storage — yes, DNA, more formally known as deoxyribonucleic acid — that tongue-twisting, nearly impossible to write term we learned back in high-school biology. This storage concept encodes data at the molecular level into DNA molecules, leveraging biotechnology advances in synthesising, manipulating and sequencing DNA to develop archival storage.

The next few paragraphs give a very basic overview of the chemical make-up of elements in molecular science. You don’t need to be a biochemist to follow along, and you will quickly see how this leads into the LOCO coding structure and the error-detection properties that are essential for DNA data storage.

The term 'LOCO', or in this case D-LOCO (for DNA-LOCO), stands for Lexicographically-Ordered Constrained Codes: line codes that "make it possible to mitigate interference, prevent short pulses, and generate streams of bipolar signals with no direct-current (DC) content" through the "employment of balancing". These principles are found in magnetic-recording (MR) devices, in Flash devices, in optical recording, and in certain computer standards.

Ten terabytes on a pinhead?

Exploring the intersection of information technology and molecular biology has created a scientific means of storing over 10 terabytes of data in the space of a faint smear smaller than 0.25 x 0.25 inches (0.0625 sq-inches). For comparison, in his December 1959 lecture, American theoretical physicist Richard P. Feynman theorised how to manipulate, manufacture, and control things at a micro (small) scale.

From a coding perspective, this micro/nano technology practice dates back to work in 1948, whereby the density of data is increased through the use of such "constrained codes", which resulted in increased storage density in MR (i.e., magnetic recording). Such practices are still widely used today to mitigate interference in two-dimensional MR systems.

Functional artificial objects

The science of DNA storage remains somewhat experimental and is very much an ongoing research effort by well-recognised coding and biochemistry experts throughout the world.

For example, using in-silico (i.e., experimentation performed by computer) and wet-lab experiments, the Molecular Information Systems Lab (MISL) at the University of Washington (UW), in partnership with UW Computer Science, Electrical Engineering, and Microsoft Research, has brought together faculty, students and research scientists with expertise in computer architecture, programming languages, synthetic biology, and biochemistry to enable the use of DNA as a high-density, durable and easy-to-manipulate storage medium.

Historically, the idea for DNA digital data storage began around 1959, when Feynman explored the prospects of artificial objects of the microcosm, and biological microcosms having similar or even more extensive capabilities, in his paper There’s Plenty of Room at the Bottom. Another book worthy of further reading is Nano: The Emerging Science of Nanotechnology by Ed Regis (April 1996), which tells the gripping story of how K. Eric Drexler and other scientists pioneered this emerging science. It explores what molecular nanotechnology could mean for our future, as presented to scientists and congressional representatives on June 26th, 1992, at hearings over which then-Senator (and future Vice President) Al Gore presided.

Fig. 1: Diagrammatic differences in RNA (left) and DNA (middle) with the CG-content pair on the right inset (portions courtesy of Technology Networks)

Biochemical DNA & RNA

Functionality-wise, DNA digital data storage is a process that encodes and decodes binary data to and from synthesised strands of DNA. Arguably, there is enormous potential resulting from its high storage density, but the practical use of DNA data storage is (currently) "severely limited" due to its high cost and very slow read and write times.

In biochemistry, the depth of this topic is enormous and well beyond the details presented in this article. However, the basics of what composes the DNA elements include the following micro-biological concepts and principles: a nucleoside comprises a nucleobase (which functions as a fundamental unit of the genetic code) and a five-carbon sugar (ribose or 2-deoxyribose). Nucleobases are nitrogen-containing biological compounds that form nucleosides, which, in turn, are components of nucleotides; all of these monomers constitute the basic building blocks of nucleic acids.


Cold data

DNA is part of the next-generation technology that can support storing mass "cold data" (i.e., archival information that does not require regular or continuous access). Pools of synthetic DNA are being proposed as a potential medium for archival (long-term) storage. Through the use of coding and data processing, errors are prevented during the biochemical processing that produces the DNA strands. For long-term storage, all data sequences must contain limited runs of identical symbols and a balanced ratio (percentage) of A-to-T (Adenine to Thymine) and G-to-C (Guanine to Cytosine) nucleotides. These compositions are referred to as "constrained codes": a class of nonlinear codes that, through proper processing, eliminate a chosen set of "forbidden patterns" from the codeword sets.
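As a toy sketch of those constraints (the run-length limit of 3 and the 30-80 per cent GC window used here are illustrative values, not published D-LOCO parameters), a codeword check might look like:

```python
def max_homopolymer_run(strand: str) -> int:
    """Length of the longest run of identical consecutive bases."""
    longest = run = 1
    for prev, cur in zip(strand, strand[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def gc_content(strand: str) -> float:
    """Percentage of bases that are guanine (G) or cytosine (C)."""
    return 100.0 * sum(base in "GC" for base in strand) / len(strand)

def satisfies_constraints(strand: str, max_run: int = 3,
                          gc_lo: float = 30.0, gc_hi: float = 80.0) -> bool:
    """True if the sequence avoids the 'forbidden patterns':
    long homopolymer runs and unbalanced GC-content."""
    return (max_homopolymer_run(strand) <= max_run
            and gc_lo <= gc_content(strand) <= gc_hi)

print(satisfies_constraints("ACGTACGT"))   # True: no runs, 50% GC
print(satisfies_constraints("GGGGGACGT"))  # False: a run of five Gs
```

A real constrained encoder would not merely reject bad sequences; it would map input data only onto codewords that satisfy these rules in the first place.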

A nucleotide is "a compound consisting of a nucleoside linked to a phosphate group". The nucleotide is the molecular building block of nucleic acids, RNA and DNA, both of which are essential biomolecules within all life-forms on Earth. The four bases used in DNA are adenine (A), cytosine (C), guanine (G) and thymine (T). In RNA, the base uracil (U) takes the place of thymine (T).

Ribonucleic acid (RNA) is a molecule present in the majority of living organisms and viruses. Like DNA, it too is made up of nucleotides — ribose sugars attached to nitrogenous bases and phosphate groups. It is a nucleic acid found in all living cells and has structural similarities to deoxyribonucleic acid (DNA). Fig. 1 shows the make-up of RNA (left) and DNA (middle) present in most living organisms on our planet, referenced to the 4-ary set (i.e., a group of four [quad/quadary] data elements — in this case the "nucleobases") used in the coding of DNA/RNA for storage. The CG-content is depicted in the smaller inset diagram.

From the information-theoretic perspective, strands of DNA serve as a storage medium for 4-ary data over the alphabet {A, T, G, C}. The "alphabet" referenced here aligns with the four components of the DNA nucleobases mentioned in the biochemical descriptions above and shown in Fig. 1.

DNA data storage promises formidable information density, long-term durability, and ease of replicability. However, information in this intriguing storage technology might also become corrupted. Experiments have revealed that DNA sequences with long homopolymers and/or with lower guanine-cytosine content (i.e., "GC-content", see inset of Fig. 1) are notably more subject to errors when moved into DNA storage.

Guanine-cytosine content is the percentage of nitrogenous bases in DNA or RNA molecules that are either guanine (G) or cytosine (C). A higher GC-content indicates a higher melting temperature. GC-content should be in the 30-80 per cent range, with 50-55 per cent being ideal; GC-content also influences the evolution of proteins because of energy cost.

One probably never considered the complexities of storage beyond magnetic areal density, or how the fluctuation of magnetic energy relates to the pickup head of a spinning magnetic (hard) disk drive; but storage density will always need to increase if we’re going to sustain this ever-growing thirst for digital data.

When or if DNA data storage becomes prominent remains a vision of the future, but it is being promoted by companies including Illumina, Microsoft, Iridia, Twist Bioscience, Catalog and Thermo Fisher Scientific.

According to analysts Markets and Markets, the DNA data storage market is projected to increase from $76 million in 2024 to $3.3 billion — growing at a CAGR of 87.7 per cent — by 2030.
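Those two figures and the quoted growth rate are mutually consistent, as a quick compound-growth check shows:

```python
# Sanity check of the Markets and Markets projection: $76M in 2024
# compounding at 87.7 per cent a year for the six years to 2030.
start, cagr, years = 76e6, 0.877, 6
end = start * (1 + cagr) ** years
print(f"${end / 1e9:.1f}bn")  # $3.3bn
```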

Fig. 2: A brief summary comparison of RNA vs. DNA

CREATING A COHESIVE visual language

Claudio Del Bravo is a senior colourist and head of long format at Frame by Frame in Rome, Italy. He has years of experience grading a variety of formats, including commercials, indie film and TV. Del Bravo has an ongoing collaboration with Italian film director, Luca Guadagnino, having worked together on Bones and All, Challengers, and latest project, Queer.

Tell us about your journey to becoming a successful colourist

My journey as a colourist began with a deep passion for the visual arts and cinema. I studied arts and entertainment at university, which gave me a solid foundation in the theory and history of cinema.

After finishing my studies, I started working at a post production lab in Rome, initially as a conformer. It was during this time that I discovered my passion for colour grading. I experienced the industry transition from analogue to digital, which was a fascinating period, working with some of the earliest DI software. This period really shaped my understanding of both analogue and digital workflows, which has been invaluable throughout my career.

About 12 years ago, I joined Frame by Frame when the post production department was newly formed. It was an exciting challenge to help build something from the ground up. My goal was to make it grow into a major player in the Italian film industry and compete on an international level. Over time, by focusing on quality and surrounding myself with talented professionals, I helped the company progress from working on commercials and smaller films to handling major international projects.

What’s your career highlight to date?

My career has grown alongside Italian cinema. I’ve had the privilege to work on some fantastic projects, which have received significant recognition here, like films by Edoardo De Angelis, Mario Martone and Riccardo Milani. I have also had the opportunity to work on international projects like My Brilliant Friend, an HBO series, which allowed me to challenge myself with global standards and audiences. The series was also one of the first projects in Italy to be graded in HDR, which made it even more special.

That said, the highlight of my career has been my ongoing collaboration with Luca Guadagnino. Starting with Bones and All, then Challengers, and now Queer, these projects have taken me to new creative heights. Collaborating with Guadagnino has been a dream come true, and Challengers is probably the most important project I’ve worked on so far.

PICTURED BELOW: Challengers was a mix of formats — 35mm (3 and 4 perf), ALEXA, and lots of VFX

How would you describe your grading style?

I’d say my style is deeply respectful of the cinematographer’s work. I usually choose a naturalistic and gentle style, gradually creating the desired result. I approach each project with the belief that everything in the frame — the lighting, set design, costumes — is part of the cinematography. My job as a colourist is to bring all these elements together and help them serve the narrative. My goal is to serve the film’s story and communicate its message through elegant colour choices.

How do you use colour to communicate with an audience?

Colour plays a vital role in enhancing the audience’s emotional experience. For instance, in Bones and All, I had the pleasure of working with the talented DoP, Arseni Khachaturan, to create a contrast between the expansive, warm American landscapes and the darker, more intimate moments. Despite the gruesome theme of cannibalism, it’s ultimately a love story, so the challenge was to balance horror with a sense of tenderness and poetry. Colour played a huge role in achieving that balance. The result was a mix of unsettling, beautiful visuals.

How long have you been grading on Baselight?

I’ve been using Baselight for about nine years now. In Italy, Baselight isn’t as widespread as in other countries, but I knew from the beginning that it was the right tool for us. Baselight has allowed us to distinguish ourselves with complex workflows and complete creative freedom.

The Base Grade feature is particularly brilliant as it provides incredible control, especially when touching highlights. I am also a huge fan of new tools such as Texture Equaliser (combined with paint, for example) and Texture Highlight.

PICTURED ABOVE:

The look of Queer was inspired by the Technicolor ‘three-strip’ process that evokes the rich colours of early ‘50s films

I used Baselight 6.0 on Queer, Luca Guadagnino’s latest film. The new Chromogen tool has already become one of my favourites. It offers a new way to control the image and create unique looks. Face Warp within the new Face Track feature is also remarkable – it’s incredible how fast and accurate it is! What used to take hours can now be done in minutes – tasks like making shapes around the eyes for hundreds of shots. Both these tools are impressive leaps forward in efficiency and creativity.

Do you prefer to get involved with look development before/during the shoot of a movie?

I always prefer to get involved as early as possible. Establishing a visual tone at the start helps guide the project in the right direction from the very beginning. For instance, I worked with Sayombhu Mukdeeprom again on Queer, and we developed a look inspired by the Technicolor ‘three-strip’ process that evokes the rich colours of early ‘50s films. References to old films like Black Narcissus, Amarcord and Edward Hopper’s paintings helped us create a distinct aesthetic. It also saves time in post production, as everyone is already aligned on the visual goals. This kind of collaboration helps to ensure that the final result matches the original vision. We started by creating a LUT for the dailies, which then evolved into an LMT for ACES in the final post workflow.

How and at what point did you get involved in Challengers?

I got involved quite early in the process, working closely with Sayombhu during pre-production to define the look. Challengers was a mix of formats — 35mm (3 and 4 perf), ALEXA, and lots of VFX. We used an ACES workflow to ensure that everything blended seamlessly, and Baselight was the natural choice to handle the complex workflow. It was an exciting challenge, especially with the mix of analogue and digital elements.

Was that the key challenge for you on the project?

Yes — blending different formats, sets and periods of the narrative into a cohesive visual language, and maintaining a consistent look across different environments (technical ones too) and emotional tones. The film explores a range of intense emotions, and we wanted to reflect that through the colour palette without being too overt. We wanted to give the film a visual ‘pop’, in the best sense of the word. To us, ‘pop’ doesn’t mean simplistic or superficial, but rather enjoyable and fun. All this combined with that incredible electronic and techno soundtrack creates a ‘match’ that makes you jump out of your seat.

What was the desired look and how did you achieve it?

PICTURED ABOVE: Del Bravo says the challenge with Bones and All was to balance horror with a sense of tenderness and poetry

We aimed for a modern, clean look while still preserving the warmth and texture of 35mm film. I started by grading the HDR version to establish the overall tone, then refined it in the cinema room to ensure the transition between formats felt natural. The result is a film that feels simultaneously classic and contemporary – shot on film, but with a modern edge. After the main P3 DCI version graded in theatre with Sayombhu and Luca, I moved again to the HDR Dolby Vision workflow to obtain all the other versions. Baselight allows me to have multiple trims (in multiple layers) for different targets (including Dolby Cinema) and everything merged in the same timeline. Lovely!

What are you working on now/next?

I recently worked on Queer, Luca Guadagnino’s latest film, showcased at Venice Film Festival and Toronto International Film Festival, as well as a couple of new high-end TV series and Italian movies. I’m really excited about what’s coming next!

The HDR version of Challengers was graded first to establish the overall tone

IMPROVING ASSET SECURITY AND governance with AI

The need to manage media assets efficiently and keep them secure between raw footage capture and final post production edits is nothing new for content producers. Naturally, security is a critical priority when managing media assets, particularly when working with high-value content, and media companies must protect content from unauthorised access and data breaches, as well as prevent accidental loss through mismanagement.

All of this is becoming more challenging as media workflows get increasingly complex and as the volume of content being produced soars. These factors are making data governance, the process of making sure that content is properly managed, protected, and accessible, all the more important. AI has been touted as a miracle fix-all tool for a whole host of industry challenges, so it’s hardly surprising that many media companies are looking to AI technology to see if it can also help to improve content security.

Automating access controls

After raw footage is captured at the production stage, content moves through the post workflow, typically involving multiple teams, platforms and cloud services, before it gets to the point of final post production edits. If not properly managed, this can create vulnerabilities, leaving assets at risk of security breaches and loss. To prevent unauthorised people from accessing content, whether access is attempted with malicious intent or just by accident, it’s vital that access controls are set so that only authorised individuals can access specific content.

However, even with the best processes in place, if you’re manually setting access controls, human error is always a possibility, particularly when you have thousands upon thousands of assets moving through the workflow. This is where AI can make a difference because after being trained on your assets, AI content identification tools can analyse and classify assets using defined criteria, automatically setting access permissions. There’s potential that this can significantly reduce the risk of unauthorised access because of human error when setting permissions.
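As a purely hypothetical sketch (the labels and group names are invented, and no specific MAM product’s API is implied), a classifier’s output might drive permissions like this:

```python
# Hypothetical mapping from an AI-assigned content label to the groups
# allowed to access the asset, replacing per-asset manual permissioning.
CLASSIFICATION_ACL = {
    "unreleased_episode": {"post_leads", "exec_producers"},
    "dailies":            {"editorial", "post_leads"},
    "marketing_clip":     {"editorial", "marketing", "post_leads"},
}

def permissions_for(label: str) -> set[str]:
    """Fail closed: an unrecognised label grants access to nobody
    until a human reviews the asset."""
    return CLASSIFICATION_ACL.get(label, set())

print(sorted(permissions_for("dailies")))  # ['editorial', 'post_leads']
print(permissions_for("mystery_label"))    # set()
```

The fail-closed default matters here: a mislabelled or unclassified asset should lock everyone out, not fall back to broad access.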

Boosting organisation and efficiency

For content to be managed securely, it obviously needs to be properly labelled and organised. Imagine the fallout if scenes from a much-awaited new season were accidentally shared with the wrong people because incorrect metadata was attached or a file was mislabelled. Already, AI is proving invaluable at making sure critical metadata is automatically generated in real time so it can be front-loaded with the content at the production stage rather than being applied later during post. As well as improving security by reducing the chance of mislabelling or data loss, this approach ensures that media is organised from the outset, streamlining the entire content workflow and improving content location and retrieval. After all, it doesn’t matter how much content you have: without really good metadata, the content is kind of meaningless. If you have thousands of hours of footage and you’re trying to make great content with it, and it’s not organised well, you’re up the proverbial creek without a paddle.

But media companies don’t just want to be well organised, they want to be super-efficient too, which is also an important part of good governance. More and more media companies and content producers are looking towards AI for automation help, with AI transcriptions and captioning being of particular interest. It’s no coincidence that these processes just happen to be pretty much the biggest time gobblers for workflows and for localisation where a vast amount of content often needs to be adapted for different markets quickly.

AI is also helping to improve the efficiency of dailies workflows, where raw footage shot that day is reviewed. Content makers working at studios don’t want to have to wait until the end of the day to see what’s been shot, they want to see it straight away. With AI, on-set or in-cloud captioning can be enabled so that they can review the content right away, with no need to wait. Having the ability to automate those processes and harness AI power for those workflows is a real win for media companies.

Enhancing security

Many organisations across different industries are also starting to apply AI tools to detect operational anomalies by identifying unusual patterns or behaviours. This approach can help to prevent potential issues like unauthorised access, security system failures and even help to detect vulnerabilities. There’s every reason to expect that in time media companies will be able to use AI in this way to improve governance and enhance the overall security of content.

While AI might not yet be the cure-all solution that it has been heralded as, it offers powerful tools that can help media companies enhance security, streamline workflows, and safeguard high-value content throughout the post production process. By automating access controls, improving asset organisation, and detecting possible security threats, AI has the potential to become an indispensable part of content management. And as AI technology continues to advance, its role in protecting and managing media assets will very likely grow.

Sharing THE LOVE

In more than a quarter of a century at the heart of UK TV production, equipment rental and services provider HOTCAM has witnessed the industry change beyond recognition. Following an MBO, Henry Coulam and Pete Green have led the company through the last two years, working on high-profile shows including The Jury and Love is Blind. Matthew Corrigan caught up with them to find the secrets behind HOTCAM’s success

HOTCAM has successfully navigated turbulent times since the management buyout in 2022. What is your secret?

Our growth has really been about providing good people and expertise at HOTCAM. When we took over through the MBO (Management Buyout), we had a clear vision of what TV needed, and we were fortunate to receive an incredible amount of goodwill from the production community. That support was crucial in building our initial momentum, and it has continued to drive our success to date. A big part of this has been bringing in the right people – those who share our values and dedication.

The TV landscape has changed dramatically over the past two years, creating opportunities to adapt and pivot. While some genres have seen a significant decline, the market has also shown resilience. We've also witnessed a resurgence in areas where the complexity of productions makes them difficult for influencers or smaller social media creators to synthesise. These types of projects play directly to our strengths and expertise at HOTCAM.

Our team has worked tirelessly to seize every opportunity, navigating market shifts and the unspoken challenge of rising interest rates. It hasn’t been easy – there have been a lot of personal sacrifices

to keep the business stable and our staff secure. Through it all, we’ve built a loyal client base, and we’re immensely grateful for their trust in us. Our commitment to always being there for our clients has been a cornerstone of our resilience during tougher times.

In what ways has evolving technology impacted the company over the period?

The defining commercial and technical trend of the past decade has been the gradual reduction in barriers to entry. Today, anyone has the potential to become a content creator. However, while technology has become more accessible, the expertise required to deliver complex multi-camera productions remains unchanged.

There’s been a lot of attention on evolving technologies, but in the areas where we operate, meaningful advancements in camera technology have plateaued. New ‘high-end’ cameras like the Alexa 35 and Venice 2 now function more as post production tools than as acquisition tools. In the non-scripted space – and for most multi-cam projects we handle – there’s often no need to fully utilise these advanced features, as the rental price of the asset and the step up in post production costs are hard to justify, particularly at scale when budgets are tight.

However, the same market forces that have made video creation more accessible have also opened up previously exclusive sectors like outside broadcasting (OB) to smaller facilities companies. This shift has allowed us to benefit from the same dynamics that have empowered individual creators and smaller operators, enabling us to compete more effectively in spaces that were once the preserve of much larger players.

What other technologies are your customers looking at currently, and how are you helping them meet that demand?

We put a huge amount of effort into understanding our clients' needs with a view to introducing technology that streamlines their productions. Our philosophy is that the workings of a solution should be almost invisible — we want our clients to be able to focus entirely on the creative and editorial unhindered. Anything we can do to eliminate friction points is a step in the right direction. Mike Ransome, our head of audio, has done some phenomenal work with the freelance sound community, developing wireless multichannel IEMs. These innovations have significantly reduced restrictions on how story producers and loggers operate on location, giving them more freedom and flexibility to focus on their work.

AI is a current industry hot topic - are you finding customers are looking for AI solutions?

Artificial intelligence hasn't impacted the services we offer our clients directly, yet. This may not age well, but we are still a long way from AI being able to create a convincing or compelling portrayal of the human experience. While there's no doubt that AI will have a profound effect on the creative industries, it's hard to imagine viewers tuning in to watch AI-generated models in a reality TV setting.

In terms of the physical act of camera operation, we anticipate that sports coverage, and perhaps fixed rig, might be the first genres we deal in to be affected – however, these changes are likely still five to ten years away. We’ve seen some interesting AI advancements in post production, like tools for logging, cleaning up noise profiles or rescaling resolution, but the part of the production pipeline we focus on remains largely untouched.

We can see AI making significant strides in the scripted and commercial spaces in a shorter time frame, but those aren't areas we operate in, nor do we plan to. Ultimately, AI struggles to synthesise the subtle, inherently human moments that are crucial in storytelling. AI can’t feel, and that emotional depth is essential to well-crafted non-scripted TV.

PICTURED ABOVE:
Matt Willis and Emma Willis, hosts of Love is Blind UK
Inside the Love is Blind pods

Do customers always understand exactly what they need to complete their productions?

Customers always have a clear vision of what they want to achieve, but they may not always know the best way to get there from a technical point of view. We frequently see a lot of waste on budget-conscious productions because the conversation around technology and crewing didn’t start early enough, or even worse, they have relied on unqualified advice at early stages.

Without a doubt, our most successful projects are those where we have long-standing relationships with clients and are brought in during the pre-budget phase. HOTCAM has been on location, specialising in non-scripted and multi-cam productions for 25 years. There’s not much we haven’t encountered, and we bring a wealth of experience to every project we support.

HOTCAM recently worked on the UK version of Love Is Blind - can you expand on your role?

We managed key technical aspects of the production by utilising all our departments: camera, audio, comms, engineering, and logistics. We assembled their PSC and PD/DV kits and managed the carnets from a production management perspective. Our project managers ensured that third-party suppliers were seamlessly integrated into the pipeline, creating a smooth transition between acquisition and post production.

Additionally, we hosted internal HoD technical meetings, ensuring that when the client arrived on location, the technical framework was already in place. By combining our in-house technicians and assistants with experienced freelancers, the client benefited from an established, proactive, and friendly culture of problem-solving at every step.

What sort of equipment did the production need?

We implemented digital Riedel Bolero comms alongside our bespoke wireless IEMs package. Additionally, we deployed multi-channel digital Shure Axient systems for audio capture, which have impressed us with their range and quality, making them highly versatile for non-scripted shoots that require flexibility.

Our camera packages were primarily centred around the Sony FX range, including FX9s, FX6s, and FX3s,

PICTURED ABOVE:

The HOTCAM team have previously worked on Britain's Got Talent

paired with premium Canon PL zoom lenses like the CN10s and CN7s. And of course, a full complement of accessories from manufacturers such as EasyRig, Sachtler, ARRI and Bright Tangerine. From an engineering point of view, we’ve found the Stage Racer 2 from Ereca can significantly reduce our technical footprint and expedite rigging.

What were the biggest challenges you faced?

Most of the difficulties in TV production aren’t technical; they arise from communication issues or siloed decision-making. By offering a complete solution, we ensure that contributions from each department are harmonised and aligned, all while remaining mindful of the budget. This approach allows us to streamline the production process and deliver a higher-quality, better-value product, resulting in fewer friction points, allowing everyone to focus on making great TV.

What other projects are you working on at the moment - and how is HOTCAM’s business being impacted by the slowdown in production in the UK over the last 12 months?

Covid and the writers' strike, while significant, were ultimately temporary disruptions rather than broader market shifts. To understand the current changes in commissioning, attention needs to be focused on how evolving audience trends are impacting ad revenue distribution. Our observation is that if a genre can be replicated by influencers, user-generated content (UGC), or other online creators, it’s unlikely to make a comeback on ‘traditional’ platforms.

Working on a live performance by Chase and Status

While the last two years have been rocky, we’re starting to see a resurgence in high-barrier-to-entry content that can only be produced by experienced professionals. We believe advertisers are beginning to recognise that and despite the shift in viewership, the Instagram generation still has a strong interest in reality, dating, and competition content. The question going forward is not whether these formats will be made, but what platforms they will be shown on.

Do you work with companies outside of the UK, and if so how?

We collaborate with clients from all over the world - the UK remains an attractive market for television production. The UK boasts some of the best producers and most capable crews, and we have been successfully developing formats for global audiences for decades. If you're looking to produce non-scripted TV, partnering with a UK production company is the way to go; they excel in this field.

HOTCAM has sought to promote gender diversity in the industry. How have you achieved this and how has it helped you?

I wouldn’t say we have achieved this - there’s always more work to be done - but we are consciously striving to make progress. When any section of an industry is disproportionately weighted toward one segment of the demographic, it becomes intrinsically more difficult for those outside that demographic to feel included and comfortable participating.

Our focus has been on opening up and professionalising rental by investing in management training, rolling out proper recruitment processes, and launching welfare initiatives. To us, professionalism means creating a leadership framework where everyone feels comfortable contributing. For us the outcome has been balanced crews and better outcomes in terms of on-location culture and problem solving.

We feel it is the responsibility of business leaders to consciously challenge existing models and foster an environment where anyone can raise their hand and say, ‘I want to have a go.’ We would love to see an ‘albert-style’ initiative for inclusivity and welfare in broadcast television. This would go some way to tackling many of the grassroots training issues in film and TV which are not currently being adequately addressed by industry leaders.

Looking over the horizon, what developments can you see coming, both for HOTCAM and the wider industry?

The market will continue to evolve, but this change isn’t primarily driven by capture technology; it’s fundamentally about how media is consumed. There’s considerable discussion around how 16-35-year-olds are moving away from traditional TV in favour of social media. However, this demographic still has a strong affinity for dating and competition-based shows. Complex formats and major human-interest stories require large, skilled teams – often numbering in the hundreds – to execute successfully, regardless of technological advancements. While influencers and UGC may encroach on the single-camera space, few can produce content that matches the creative or editorial production values of major entertainment shows.

As the international streaming market competes for audience share by producing shows of increasing spectacle, we believe that global commissioning is looking to the UK as a leader in delivering high-production-value, large-scale non-scripted productions. With this in mind, we are focused on consolidating our market position. Our goal has always been to achieve the scale and breadth needed to meet the demands of the international streaming market.

We will continue to double down on non-scripted and multi-camera content — there is no other production community in the world that rivals the UK in this regard, and we are committed to investing in the next generation to ensure this remains the case.

PICTURED ABOVE:
The crew behind the Red Bull Tyne Ride

Racing into opportunity

MotoGP is the pinnacle of Grand Prix motorcycle racing, known for its breakneck speeds, elite riders, and cutting-edge technology. With a calendar spanning 22 races across five continents, each season delivers high-adrenaline action to hundreds of millions of fans worldwide. As the sport’s global popularity has surged, Dorna Sports, MotoGP’s exclusive commercial rights holder, has partnered with Tata Comms Media to help meet increasing demands for top-tier media infrastructure and live broadcasting expertise.

By harnessing Tata Comms Media’s experience in live sports delivery along with its global network reach, MotoGP has redefined its approach to production and distribution of race coverage worldwide. Together, Tata Comms Media and MotoGP have overcome the technical challenges of broadcasting one of the world’s fastest sports across 200 markets – all while streamlining production and laying the groundwork for innovations that enhance the viewer experience such as low-latency Ultra-High Definition (UHD) and 360-degree video.

Delivering high-quality broadcasts for a global audience

MotoGP is broadcast and streamed globally from a wide variety of venues, including some challenging locations – ranging from the Pertamina Mandalika International Circuit in Lombok, Indonesia and the Circuito de Jerez-Angel Nieto in Spain to Phillip Island Grand Prix Circuit in Australia and the Circuit of the Americas in Austin, Texas in the United States.

The sport's far-flung and often iconic venues present unique hurdles for connectivity, bandwidth, and real-time video transmission. As the sport has grown, so has the complexity of delivering world-class broadcasts worldwide.

Amid all these moving parts, MotoGP has transformed its approach to producing the world feed for its events, moving from onsite mobile production to a hybrid model with much of the production handled remotely. Tata Comms Media provides robust connectivity between each racetrack and Dorna’s central production facility in Barcelona to ensure that live, low-latency feeds are delivered to broadcasters worldwide. Key requirements include:

• Support for hybrid production capabilities, as MotoGP has retained some on-site production while transitioning many operations to remote production in Barcelona

• High-bandwidth, resilient data transfer, both to support massive video data loads without interruption and service MotoGP’s real-time on-site communications

• Secure, piracy-proof distribution to ensure secure delivery of content, given the value of MotoGP’s broadcast rights.

“MotoGP goes to some of the same racetracks almost every year, so we can establish connectivity at these venues by adapting existing communications networks, but others are on new sites – often remote locations where we need to build infrastructure literally from the ground up,” says Dhaval Ponda, VP and global media head at Tata Communications.

The right media and connectivity services

PICTURED ABOVE: MotoGP has transformed its approach to producing the world feed for its events

To meet these requirements, Tata Comms Media deploys a bespoke solution that provides end-to-end connectivity, along with key live video compression services – supporting MotoGP’s hybrid production model. At the core of the solution is the deployment of two dark fibre circuits at every race venue, offering unparalleled bandwidth to enable the transmission of massive video and data feeds in real-time. These high-capacity circuits connect each MotoGP track directly to the production centre in Barcelona. They also support the live presence of major broadcast and streaming rightsholders capturing all the action from the paddock at each racetrack, enabling on-the-ground coverage while connecting onsite crews to their broadcast centres, whether in London, Milan, or Miami. “Dark fibre delivers one Gigabit per second (Gbps) of bandwidth for each circuit, giving MotoGP and its broadcast partners the capability to handle as many as 200 video feeds in total,” Ponda notes.
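The quoted figures support a quick back-of-envelope check. The sketch below uses only the numbers from the article; real per-feed bitrates vary widely with codec, resolution and frame rate, so the average is indicative rather than an engineering specification.

```python
# Rough sanity check of the quoted figures: two dark fibre circuits at
# 1 Gbps each, carrying as many as 200 video feeds in total.
circuits = 2
gbps_per_circuit = 1
total_feeds = 200

total_mbps = circuits * gbps_per_circuit * 1000  # 1 Gbps = 1000 Mbps
avg_mbps_per_feed = total_mbps / total_feeds

print(total_mbps)         # aggregate capacity in Mbps
print(avg_mbps_per_feed)  # average Mbps available per feed
```

In practice feeds are not evenly sized, but the headroom illustrates why high-capacity fibre, rather than satellite, makes a 200-feed load feasible.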

In addition to dark fibre connectivity, Tata Comms Media introduced SRT (Secure Reliable Transport) technology in place of satellite transport for distributing video feeds to regions in Asia and the Americas. SRT provides end-to-end encryption, making it more secure than satellite – significantly curbing piracy at roughly the same cost.

For added redundancy, Tata Comms Media continues to provide satellite disaster recovery (DR) services. In the event that both dark fibre circuits fail, live broadcasts can switch to satellite feeds, ensuring uninterrupted coverage.

Tata Comms Media also manages all video encoding and decoding for MotoGP, both at the track and in Barcelona, ensuring that video is transmitted in pristine quality regardless of where the race is taking place. “This set-up gives Tata Comms Media full visibility of the content flow,” says Ponda. “We can troubleshoot issues faster and maintain the high standards that Dorna expects for MotoGP broadcasts.”

At the track, the two dark fibre circuits connect into a climate-controlled pod housing all of the encoders and other equipment, alongside another nearby pod where crew managing the entire process can monitor the last-mile connectivity of the race and make any necessary adjustments.

Empowering MotoGP’s global broadcast strategy

The partnership between MotoGP and Tata Comms Media enables seamless delivery of live race content across the globe. Key benefits include:

• Superior global distribution – Tata Comms Media enables MotoGP to seamlessly deliver live race broadcasts to over 200 million households worldwide. With direct connectivity to key rightsholders such as TNT Sports on Discovery+, Sky Italia and DAZN, the feeds are delivered in real-time, maintaining high quality even during peak demand.

PICTURED ABOVE: At the track, the two dark fibre circuits connect into a climate-controlled pod housing all of the encoders and other equipment

• Enhanced viewer experience – MotoGP viewers benefit from ultra-low latency video feeds, bringing them closer to the action. Tata Comms Media’s infrastructure and encoding/decoding solutions also allow fans around the world to enjoy races through some rightsholders in HD and UHD.

• Robust security – The use of SRT technology significantly reduces the risk of piracy. This technology also allows MotoGP to sell rights and connect a new broadcaster even shortly before a specific race, maximising revenue opportunities.

• Unmatched redundancy – Tata Comms Media’s dual dark fibre circuits and satellite DR backup ensure that MotoGP broadcasts are highly resilient, maintaining uninterrupted coverage across all race weekends, even in the event of connectivity disruptions.

Through its strategic collaboration with Tata Comms Media, MotoGP has been able to deliver an even more immersive and secure broadcast experience. The infrastructure and services have provided the flexibility, speed, and reliability needed to keep pace with MotoGP’s global growth, ensuring that fans can enjoy every twist and turn in real-time – no matter where they are in the world.

Goal: Liverpool (assist: Wasabi)

Liverpool Football Club produces over 100 pieces of content every week across its linear TV channel, OTT platform and social media channels. LFC’s Drew Crisp tells Jenny Priestley how a cloud storage partnership with Wasabi is helping the club achieve its goals

Football fans are ferocious in their appetite for the latest news and videos about their favourite club.

With the rise of social media, digital streaming and club-branded linear TV channels, the need to keep up a constant flow of content has only increased. Feeding that monster means that clubs need a reliable media storage facility to store their latest assets as well as archive material.

Liverpool Football Club (LFC) began partnering with Wasabi three years ago. The most successful football club in England utilises the company’s cloud technology to identify, organise, and categorise files, as well as tag and tailor content.

LFC was already using Wasabi technology, but the club’s digital team wanted to find a better way of solving some of their archive storage problems. “We wanted to prevent the spinning disk environment in an office,” explains Drew Crisp, SVP of digital at LFC. “Covid, in that sense, helped us accelerate a few things and therefore using Wasabi and starting on that journey was a very obvious answer.”

Crisp’s role encompasses the media side of LFC, including linear channel LFCTV and OTT platform LFCTV Go, as well as partnerships and marketing, and all of the club’s product development, technology and infrastructure. “We use Wasabi for the majority, if not all, of our content storage in the media space,” he explains. “We also ingest a lot of the games that we record into Wasabi so that we can work on them after the final whistle. We use Wasabi as a corporate backup solution so that should anything happen, we have our organisation in one place.”

On average, the club produces 100 pieces of content across its social media channels a week. Depending on the first team’s schedule, the linear channel and OTT platform could broadcast two matches a week with both a pre- and post-match show, plus highlights and interviews. “It's in the hundreds of pieces of content that go out in varying different guises each week,” Crisp explains. “Also, a lot of our archive is stored in Wasabi. There's a continual feed of archive content that goes into Wasabi that we can access and edit to create pieces. I think we're currently at 1.2 petabytes of content storage.”

LFC opted for cloud storage over on-prem because of the speed of access and accessibility. Crisp says that was always the plan, but the pandemic helped accelerate the move. “It wasn't sustainable for editors to come into the office and download content from physical servers onto big remote hard drives so they could work. The move to Wasabi cloud absolutely transformed how quickly people could access content and what they could do remotely.

“Now we're very much in a flexible model. I like people to be in the office three days a week. I think that human interaction from a content point of view creates good content because you challenge each other and come up with ideas, and you can't do that alone. I think that kind of exploratory, creative thinking is much better in person, but we recognise that everyone has got used to flexibility.”

Salah scores!

Before partnering with Liverpool, Wasabi had worked with a number of sports organisations in the United States. But according to Whit Jackson, VP of media and entertainment at Wasabi, LFC is unique in its scale of content creation. “That has forced our hand a bit,” he admits.

To keep up with demand, in April 2024 the company launched Wasabi AiR, object storage powered by artificial intelligence and machine learning. The solution enables LFC to create a “laundry list” of metadata, covering the players included, keywords, tags, and logos, as well as a speech-to-text rundown of every word spoken in the broadcast.

“That index then becomes a really powerful search tool, so if Mo Salah slots in a goal and there might have been a moment where the Wasabi logo was visible in the stadium, the user can easily find the footage they’re looking for,” explains Jackson.
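Wasabi has not published AiR’s internals, but the search Jackson describes rests on a familiar pattern: building an inverted index from per-clip metadata so that tags, logo detections and transcript words all resolve to clip IDs. The sketch below is purely illustrative; every clip ID and data value in it is hypothetical.

```python
# Hypothetical sketch of metadata-driven clip search. This does not
# reflect Wasabi AiR's actual design; it only shows the general pattern.
from collections import defaultdict

clips = {
    "match_001_t5412": {"players": ["Salah"], "logos": ["Wasabi"],
                        "transcript": "salah slots in the goal"},
    "match_001_t0030": {"players": [], "logos": [],
                        "transcript": "teams walking out at anfield"},
}

# Build an inverted index: term -> set of clip IDs containing that term.
index = defaultdict(set)
for clip_id, meta in clips.items():
    terms = [p.lower() for p in meta["players"]]
    terms += [l.lower() for l in meta["logos"]]
    terms += meta["transcript"].split()
    for term in terms:
        index[term].add(clip_id)

def search(*terms):
    """Return clip IDs matching all terms (AND semantics)."""
    results = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*results) if results else set()

print(search("Salah", "Wasabi"))  # the Salah goal with the logo in shot
```

The intersection is what makes Jackson’s example work: “Salah” and “Wasabi” each match many clips, but only the goal with the logo in shot matches both.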

LFC is currently using Wasabi AiR across ingested match content as a starting point and will gradually apply it to their archive. “To give you an example, before we started using the solution, after every match, somebody would rewatch the game and then write down minute by minute when a certain partner appeared on the LED and send this long spreadsheet to someone else who would get the game, and then create all of the different clips for us to be able to say to a partner, this is when you appeared.”

“It was honestly three days of Excel and editing pain. Now we can push a button, and it's done literally within probably 30 seconds, but the reality is, it's more like three," he continues. "The power of it is really, really impressive. It's allowing our editors to spend more time on being creative rather than some of the operational tasks."

Crisp can already see ways in which Wasabi can help LFC in the future, describing a scenario where the club could use the company's technology to let fans create their own content in the same way the club’s editors do now. “How do you build a front-end that would actually create a football archive that's accessible?” he muses. “There are lots of licensing rights challenges in that but you could overcome it. From my perspective, that's really exciting.”

He is also keen to integrate Wasabi AiR into LFC’s media asset management system. “To have it integrated into our MAM, which is then integrated into our editing suite, is really powerful because then we're applying AiR at the source. We don't have to ingest the match into the MAM before we use it, it can immediately go into our editing suite and we can make some tweaks, add some overlays, and bang, we can publish it. In an ideal world, that's where we'd be.”

Wasabi is keen to continue working with the club to reach its goals, says Jackson, and is aiming to deliver integrations with certain MAMs in a timely manner.

“Some other things that have come out of this engagement include new feature sets, like speech-to-text, but the natural language description is also very helpful as well. In a football match that could be: several men on the field with a football, the grass is green, it appears to be Anfield, all of that. But it can also find moments such as the players getting on a bus, walking across the street, a pub scene, things of that nature.

“If you're looking for those kinds of clips, you're not searching again just for the people that you want to see, but now trying to find them within some context. We'll continue to expand on that as we go forward.”

Wasabi AiR can tag the LFC players, making it easier for the media team to find what they want
Drew Crisp
Whit Jackson
Official television and film archives exist to preserve the history and creativity of media. But even these formal institutions do not contain everything that has been produced, due to material being lost or destroyed. However, as Kevin Hilton explains, the efforts of a dedicated band of collectors are now filling in the gaps

Saving the past of TV and film for the future

Collecting is one of the great hobbies and can have cultural and historical significance in preserving important artefacts. But it can also tip over into being an obsession, sometimes leading to hoarding. Very often the contents of a collection are known only to the collector, which means that when they die and someone unfamiliar with the material has to deal with it, valuable items could be unknowingly disposed of and lost to posterity.

This is now a reality in the world of vintage film and TV programme collecting. Several prominent collectors have died in recent years and there are fears important material could have been just thrown away as their homes were cleared. This realisation led to the founding of an initiative called Film is Fabulous (FiF) with the aim of retrieving such collections, cataloguing them, restoring any damaged material and returning them to the relevant archives.

FiF came about after an open day for the film collecting community at a late collector's home in 2022, where people were confronted by film cans and other material filling the bedrooms and garage of a three-bedroom house. "It was rammed," comments Paul Vanezis, who went along on behalf of the BFI. "From what I could see there were some interesting items but nothing groundbreaking. However, you can't go through an uncatalogued collection in a day and the house needed to be cleared, so there was a danger everything would end up in landfill."

Film is Fabulous works with other enthusiasts and film historians at the Cinema and Television History Institute of De Montfort University

Vanezis is a freelance producer/director who has worked on a number of significant restoration projects, including the entire Monty Python's Flying Circus series, Morecambe & Wise and Doctor Who. He is also a collector but is more focused on finding lost programmes or elements to restore existing material. While at the open day, Vanezis met John Franklin, whose interest is in feature films. "John is purely into film and didn't know about archive TV," Vanezis explains, "so I brought him up to speed on that side."

Realising that rare features and vintage TV shows could be at risk as more collections became vulnerable to disposal, Vanezis and Franklin began working with other enthusiasts and film historians at the Cinema and Television History Institute (CATHI) of De Montfort University (DMU) to secure and identify movies and old TV programmes held on film stock.

FiF was formalised as a project with DMU in the summer of 2023 and collections started arriving at the university towards the end of last year. Case studies began earlier in 2024 to catalogue and secure specific items. Vanezis emphasises that neither FiF nor DMU is able to act as an archive; the aim is to recover and log material, after which it can be transferred onto digital media, which may also involve cleaning or more extensive restoration work. The physical film is then either returned to the original broadcaster (if it still exists) or other archives, or sold at auction.

The first case study was of a collection that had been built up over 30 years (Vanezis preferred not to name individual deceased collectors because, in some cases, the estates are still in probate). This did throw up some missing items of old UK TV, including programmes produced by former broadcasters ATV (ITC Entertainment), Associated-Rediffusion, Southern and Thames, as well as the BBC. Material that already existed was sent for auction.

"I did know this collector personally," says Vanezis. "He was a bit of a magpie and, like other collectors, never turned down anything he was offered. We never spoke about any missing programmes he had and although I knew he had an extensive collection I didn't know how extensive it was."

Vanezis observes that "most collectors like to think they know what they've got" and this knowledge has seen some missing episodes, for example of Doctor Who, returned over the last 20 years. In many cases the film cans are labelled but often only with the name of a programme, not any details that might reveal buried treasure, which has meant going through almost every individual can and film.

Because the intention of the project is not to retain material long-term it does not require an involved media asset management system. Instead, Vanezis explains, the films being brought in are catalogued according to criteria laid down by FIAF (International Federation of Film Archives) and the BFI. "The acquired information is simply inputted to a spreadsheet," he says. "The kind of information we gather includes title and subtitle, gauge and length/duration, stock date if available, condition and the owner."

Dr Peter Lester, lead archivist at DMU

The items are also numbered so material can be tracked. "For the purposes of the pilot study this is [on] a simple label but in the future, items will be barcoded," Vanezis says. "The bulk of the material will go to auction, so information regarding condition is important for the auction houses but also for any of the copyright holders that may take the original material."

Several collections have already been received at DMU and were part of the pilot case studies that ran this year. The painstaking investigation and cataloguing of the contents has unearthed some unexpected treasures. On the film side the 1919 silent movie Sealed Hearts, thought lost for nearly a century, was discovered on tinted 35mm nitrate stock. Directed by Ralph Ince for Selznick Pictures, the print is being returned to the George Eastman Museum in Rochester, New York State for restoration.

The FiF scheme has been as successful, if perhaps more so, in bringing to light TV productions that were missing from the archives. These have included complete programmes - such as three episodes of the 1970 series of BBC children's show Basil Brush and a 1967 ATV music special starring singer Tom Jones - and programme fragments like the soundless insert rushes from the second instalment of the 1968 two-part episode Take it with a Pinch of Salt from influential BBC crime car drama Z-Cars.

Among the most significant finds have been two episodes of The Third Man, a 1959-62 series loosely based on the 1949 film noir of the same name, one of which was missing entirely and one that did exist but not in the BBC Archive; and two episodes of The Vise (sic) and one of its sequel, Saber of London. These were produced by the American-born Danziger brothers, who made mystery melodramas in Britain during the late 1950s and early ‘60s for the American market as well as the UK.

"The various Danziger productions came from the fourth case study and are 35mm negatives," Vanezis says. "That catalogue of filmed programmes is incomplete in official archives. If something turns out to be unique, or a master as opposed to a duplicate print, we'll give that much closer attention than a general print that's been in circulation for many years, which does not mean some of the 16mm dupes we've come across aren't important either.

"An example of this is The Third Man, a series of half-hours starring Michael Rennie. It was co-produced by the BBC, which retained the UK rights but doesn't have copies of everything, and we don't know of anyone else with 35mm originals or anything else of it. So the BBC is interested in episodes of The Third Man on any broadcastable format that might help complete its collection."

As for the condition of films being brought in, Vanezis describes it as "very variable." He continues that every collection has some material suffering from vinegar syndrome, which affects cellulose acetate-based film. This is caused by improper storage in conditions of high humidity and temperature, which leads to buckling and shrinkage. Another serious problem arises if the wrong glue was used to make edits when commercials were spliced into a programme. This can dry out and cause the film to come apart when passed through equipment such as ultrasonic cleaners or film scanners.

The Danziger shows have been so affected by vinegar syndrome that many are now solid blocks of film after liquefying at some point in the past and then drying out. Vanezis fears some might be beyond retrieval right now but thanks to a crowd-funded scheme, there is some hope. Tests carried out on a missing episode of Saber of London at restoration facility R3store Studios retrieved the visuals from the 35mm negatives, which will be scanned at 4K after the prints have been cleaned. Unfortunately, the situation with the separate optical soundtrack is described as "more problematic" and further tests are being carried out to see what can be done with it.

The broadcast world is now more aware of the need to preserve programmes after the wiping and junking of many shows from the ‘60s and early ‘70s left big gaps in the archives. Those missing pieces could return at some time - and some already have - thanks to the enthusiasm of a very particular breed of collector.

Work being carried out at R3store Studios on an episode of Saber of London

Breaking the hardware cycle in post production

Media technology buyers within broadcast and post production companies are all too familiar with the “hardware cycle.”

Every 2-5 years, these companies must source and manage physical hardware like servers and storage systems. This process is lengthy, demanding, and costly, often stretching over several years. When upgrades are complete, they are nearly obsolete, leading to continuous operational expenditures (opex) related to hardware costs. This cycle saps IT resources needed for more critical tasks.

The hardware burden

The media and entertainment industry has been notably slow in transitioning away from hardware infrastructure, which can hinder business success for several reasons:

1. Intensive Resource Use: Rolling out new hardware demands significant time and effort, involving platform evaluation, sourcing, price negotiations, infrastructure redesign, and the migration of applications, data, backups, and archives.

2. Ongoing Maintenance: Physical hardware requires constant monitoring, updates, and troubleshooting, which can pull your IT team away from more strategic projects.

3. Significant Costs: Regular hardware upgrades drive up capital expenditures (capex) and operational expenditures (opex), affecting financial outcomes.

4. Environmental Concerns: Operating and cooling physical servers lead to high energy consumption and carbon emissions, which can conflict with many companies’ sustainability objectives.

Fortunately, there is a more effective way to escape this relentless cycle of purchasing and deployment.

Adopting a cloud-based methodology can be the answer to disrupting the traditional hardware cycle; however, its full potential is unlocked only when it enhances work practices. For post production workflows, a number of tools have been available to assist with very specific tasks, such as review and approval, file sharing, and digital asset management. However, creative project workflows are still left to their own devices — both figuratively and literally. Seeing end-users ship hard drives to deliver projects to their teams in 2024 highlights the immaturity of current cloud technology implementations. Merely having cloud storage or platforms on the edges of post production workflows fails to address their core: the projects themselves.

The solution lies in tools that integrate key post production features, such as intelligent project management, sharing, permissions, comprehensive search capabilities, and automation with remote collaboration, thus empowering users to break free from hardware dependencies. A cloud-hosted, true collaboration platform must include secure cloud access, remote flexibility, and scalability designed for post production environments, as well as a creative framework that enables teams to collaborate, share resources, and manage projects efficiently.

(Truly) cloud-based post production

Shifting from traditional on-site hardware to cloud-based solutions streamlines operations and provides numerous workflow advantages for post production teams:

• Global Collaboration: Teams can work together effortlessly from any location, minimising travel requirements and reducing costs.

• Security and compliance: Improved access controls and automated project management enhance data protection, with customisable permissions and quotas.

• Flexibility and scalability: Cloud-based post production platforms enable businesses to easily scale infrastructure according to demand, transforming from capex-heavy investments to manageable opex models, reducing the need for continuous hardware upgrades and allowing IT teams to focus on innovation.

• Boosted Efficiency: Cloud-hosted tools decrease administrative burdens, enabling media and entertainment teams to focus on producing and distributing engaging content.

• Improved sustainability: Transitioning to cloud infrastructure cuts down on the environmental impact by reducing energy consumption from physical servers, supporting corporate eco-friendly initiatives.

Breaking away from traditional hardware is an essential shift for media companies seeking to boost efficiency, cut costs, and champion sustainability. Transitioning to cloud-based post production collaboration platforms allows businesses to break away from the inefficiencies of old hardware cycles and concentrate on their core business — producing engaging content. Looking forward, new trends in cloud technology are poised to revolutionise post production even further. AI-powered editing tools and enhanced collaborative features will enable teams to work more efficiently and with greater creativity. The future of content creation is promising, with the cloud leading the charge.

Hey ChatGPT, what’s on TV?

Someone once said it’s difficult to make predictions, especially about the future. Much of my career has been spent trying to foretell TV’s evolution. Nearly 20 years ago I co-wrote a book with William Cooper on broadcast television’s marriage with the internet. In the preface we envisioned a world in which a TV guide would select from “tens of thousands of live streams, hundreds of thousands of on-demand programmes, and virtually every movie ever made”, offering “endless thoughtful suggestions and playlists to suit the mood and your personal preferences”.

Much of what we wrote in IPTV: Broadband Meets Broadcast, the Network Television Revolution has since come to fruition. But not that intelligent guide. Yes, TV guides have become smarter thanks to personalised recommendations, and easier to use as visually exciting interfaces replaced grids that had all the charm of a spreadsheet. Voice-based interaction then took the friction away from search and navigation. But as slick as it is, my TV guide doesn’t really know me or anyone else sitting in front of it; it’s not building a relationship; it can’t detect my frame of mind. I turn on the device and my TV operator offers a homepage that literally doesn’t talk to me.

The guide I had in mind two decades ago would be a companion more akin to what we now call an AI assistant such as Siri or Alexa. Interaction would feel natural. It would understand contexts such as time of day, day of week, the weather, different seasons – all the things that influence TV viewing and selections. It would recognise me, know my viewing history and preferences, and through our interactions be able to work out – as humans do, using verbal and non-verbal cues – whether I’m tired or wide awake, in a hurry or relaxed, bored or engaged.

AI has long been used to power personalised recommendations. Machine learning can determine that if you watched this then you’ll love that; if you liked a show featuring a certain actor then you’ll almost certainly be interested in this one. That form of AI is now given several prefixes – predictive, classic, traditional – to distinguish it from its content-creating sibling: generative AI. The guide I have in mind would blend both forms, using predictive AI to suggest programming and generative AI to power more human-like interaction – just like the advanced voice interaction now being rolled out by OpenAI’s ChatGPT. No more wake words each time you speak; it can tolerate interruptions and ignore irrelevant details when you tell it to discard what you’ve just said; it knows when you’re shouting and being impatient from the tone of your voice.

Siri and Alexa are currently being given a generative AI reboot. So how about we do the same to the TV guide? The advantages to operators could be immense: increasing time spent with their services and reducing churn by making the relationship even stickier. More conversational voice interactions would further improve accessibility. A generative AI model trained on TV descriptions, metadata and long-form information from broadcasters and producers would be able to offer much more detail about a show. Generative selections could also include serendipitous ‘wild card’ options – shows that you might not think you’re interested in, but give them a go and suddenly you’re hooked.

I asked ChatGPT what was on TV tonight and it gave me a list of primetime shows across three US networks. Problem: I’m in London. I then asked what was on TV in the UK tonight, and it confessed: “I can’t check real-time TV listings” before saying the BBC (without giving a channel name) might have a “popular drama or comedy” while Channel 4 might have “a gripping documentary”. No mention of ITV or any other broadcaster.
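For readers curious what the “predictive” half of such a guide looks like under the bonnet, here is a minimal sketch of item-based collaborative filtering over viewing histories: score the shows a viewer hasn’t seen by how often they co-occur with shows they have. All viewer names, show titles and the scoring weight here are invented for illustration; production recommenders are far more sophisticated.

```python
# A toy "if you watched this, you'll like that" recommender.
# Viewer names and show titles are hypothetical.
from collections import defaultdict
from math import sqrt

# Hypothetical viewing histories: viewer -> set of shows watched.
histories = {
    "ann":   {"Drama A", "Comedy B", "Documentary C"},
    "bill":  {"Drama A", "Comedy B"},
    "carol": {"Drama A", "Documentary C", "Quiz D"},
}

def recommend(viewer, histories, top_n=3):
    """Score unseen shows by overlap with similar viewers' histories."""
    seen = histories[viewer]
    scores = defaultdict(float)
    for other, shows in histories.items():
        if other == viewer:
            continue
        overlap = len(seen & shows)
        if overlap == 0:
            continue
        # Weight the other viewer's shows by taste similarity
        # (cosine-style normalisation of the overlap).
        weight = overlap / sqrt(len(seen) * len(shows))
        for show in shows - seen:
            scores[show] += weight
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("bill", histories))  # shows bill hasn't seen, best first
```

The generative half of the guide would then wrap results like these in natural conversation, which is exactly the layer today’s grid-based guides lack.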

That suggests OpenAI has yet to realise there’s an opportunity to be had in helping humans make sense of the vast number of viewing options across broadcast, online and app-based TV. I believe that at some point it and the other AI developers – Google and Amazon are TV operating system developers after all – will turn their attention to this challenge. It’s an opportunity for TV operators and broadcasters to collaborate.

A final thought: there’s an emerging new art form in which generative AI is being used to create TV animations. If the tech improves at the same recent clip, and all the rights can be sorted, then at some point you’ll be able to include favourite characters and locations from a TV series – Inspector Morse and his Oxford, say – in a prompt outlining a plot, and gen AI could create a new episode. It’ll be an entirely new genre and form of TV, and the guide I envisage will be an integral part of it.

Graham charts the global impacts of generative AI on human-made media at grahamlovelace.substack.com

Quote from Adrian Poole, Senior Systems Engineer at Picture Shop in the UK:


Time to roll VT at ISE 2025

The world-renowned annual audiovisual solutions show is back. Get hands-on with tomorrow’s technology today, from big-name brands, boutique specialists and startups. Say hello to the boundaries of possibility and beyond. Reconnect with state-of-the-art tech in Barcelona.