TM Broadcast International #143, July 2025



A new chapter for Italy’s Rai:

Exclusive interview with the executive technical leadership to explore its technological horizon

STREAMING Rakuten TV goes all-in on FAST

INDUSTRY VOICES

5TH EDITION

EDITORIAL

Rai is entering a new chapter. The retirement of its CTO, Stefano Ciccotti—a story first reported exclusively by TM Broadcast—opens a new scenario for the Italian public broadcaster. As of the publication date of this issue, Rai has not officially confirmed how its final organizational structure will look, although internal sources consulted by this magazine indicate that the corporation’s various business areas already operate with a high degree of autonomy. As such, the CTO role has gradually adopted a more institutional profile.

In other words, Rai’s renewal strategy remains firmly in place. The Italian broadcaster is currently undergoing an ambitious technological transformation, with the goal of evolving from a traditional broadcasting model to a media company—mirroring the direction taken by other key players in the industry.

The roadmap, which spans several years and involves a multi-million-euro investment, aims to establish a fully IP-based infrastructure. The transition is expected to be completed within the next four to five years.

Editor in chief

Javier de Martín editor@tmbroadcast.com

Creative Direction

Mercedes González mercedes.gonzalez@tmbroadcast.com

Chief Editor

Daniel Esparza press@tmbroadcast.com

Editorial Staff

Bárbara Ausín

Carlos Serrano

This is therefore an ideal moment to take a close look at Rai’s current status and future technological horizon. To that end, we sat down for an exclusive interview with Rai’s top technical leadership, led by Director of Technology Ubaldo Toni, to whom we extend our sincere thanks for coordinating this in-depth conversation with his team.

Also in this issue, we present the third installment of our Industry Voices section. This time, we speak with Simen K. Frostad, President of Bridge Technologies, an industry veteran and pioneer in the implementation of IP workflows.

It’s worth highlighting how the generous participation of each guest in this section—openly discussing key market trends and strategic directions—is turning Industry Voices into a privileged space for reflection on the present and future of the broadcast industry. Our sincere thanks to everyone making this possible.

All this and much more… only in TM Broadcast.

Key account manager

Patricia Pérez ppt@tmbroadcast.com

Administration

Laura de Diego administration@tmbroadcast.com

TM Broadcast International #143 July 2025

Published in Spain ISSN: 2659-5966

TM Broadcast International is a magazine published by Daró Media Group SL Centro Empresarial Tartessos Calle Pollensa 2, oficina 14 28290 Las Rozas (Madrid), Spain Phone +34 91 640 46 43

BROADCASTERS

A new chapter for Italy’s Rai:

Exclusive interview with the executive technical leadership to explore its technological horizon

Rai’s Director of Technology, Ubaldo Toni, together with his team, answers all our questions. We review the current technological status of Italy’s public broadcaster—now undergoing a deep transformation—and its main investment priorities. The interview also offers broader insights into the future of the broadcast industry.

Rakuten TV goes all-in on FAST

We interview Rakuten TV’s Chief Product Officer, Sidharth Jayant, to explore the company’s strategy and most notable technological implementations.

INDUSTRY VOICES

“We need today tools that help non-experts understand what’s going on”

EXPLAINER

Cinematic Broadcast:

Much more than a buzzword

This new trend in the sector is here to stay. The various players in cinema and broadcast, including camera and optics manufacturers, must understand one another in order to build professional partnerships with a solid chance of success.

TM BROADCAST AWARDS

5th Edition of the TM Broadcast

Awards: Prestige and Innovation

TM BROADCAST SPAIN has held the fifth edition of these awards, now an undisputed benchmark in the country’s broadcast industry due to their rigor and independence.

BBC to launch new generative AI pilots to support production processes

Rhodri Talfan Davies, BBC Executive Sponsor of Gen AI, has announced that the BBC will launch new Generative AI pilots with the objective of supporting its production processes. The corporation is now looking to test two pilots publicly: ‘At a glance’ summaries and BBC Style Assist, according to a statement.

The BBC has been working on this project over the last 18 months. It has also published updates on its official website outlining its approach to AI and what it is doing with it.

‘At a glance’ summaries

The objective is to make journalism more accessible, using GenAI to help journalists create new ‘At a glance’ summaries of longer news articles.

“We’re going to look at whether adding an AI-assisted bullet point summary box on selected articles helps us engage readers and make complex stories more accessible to users”, explains Talfan Davies.

The project aims to give journalists full control: they use a single, approved prompt to generate the summary, then review and edit the output before publication.

“Journalists will continue to review and edit every summary before it’s published, ensuring editorial standards are maintained. We will also make clear to the audience where AI has been used as part of our commitment to transparency”, he adds.

BBC Style Assist

‘Style Assist’ is designed to explore how GenAI could support BBC news journalists in adapting and reformatting stories so that they match the BBC house style for online news.

Every day, the BBC receives hundreds of stories from the Local Democracy Reporting Service (LDRS), a public service news partnership funded by the BBC, but provided by the local news sector in the UK.

The intention is to help journalists reformat LDRS stories quickly and efficiently, reducing the production time required to publish them.

How it works

“BBC Style Assist uses a BBC-trained Large Language Model (LLM) which has been developed by our Research and Development team. It’s a smart AI system that has ‘read’ thousands of BBC articles so it can help amend text quickly to match our own house style”, details Talfan Davies. 

BBC’s CTO to step down at the end of the year

Changes are coming to the BBC’s technical leadership. Its CTO, Peter O’Kane, has announced that he will step down from the role at the end of the year, after 15 years at the public broadcaster. “This wasn’t an easy decision — working with you has been one of the most rewarding and inspiring parts of my career,” he said in a message shared on his LinkedIn profile.

In the post, Peter O’Kane reflected on some of the milestones achieved during his time at the BBC. “I’m incredibly proud of what we’ve achieved together: from modernising our technology landscape and embracing the cloud, to keeping the BBC on-air and online through some of the most challenging moments in recent history.”

He also highlighted the professionalism of his team. “What I’ll remember most, though, is the people. Your dedication, ingenuity, and resilience have inspired me every day.”

O’Kane stated that he will remain in the role until the completion of a key project focused on technological resilience and expressed confidence in ensuring a smooth transition. “Thank you for everything — for your trust, your talent, and for making this such a meaningful chapter in my life,” he concluded. 

EXCLUSIVE | RAI’s CTO, Stefano Ciccotti, steps down as he retires

The Italian broadcaster is currently undergoing an ambitious technological transformation

End of an era at RAI. Italy’s public broadcaster—one of the most prominent in Europe—bids farewell to its Chief Technology Officer, Stefano Ciccotti, who is retiring as of July 1. The news has been confirmed exclusively to TM BROADCAST by internal sources within the organization.

So far, RAI has not issued any official statement nor clarified who will succeed Ciccotti in the role. It’s worth noting, however, that due to the vast scope of its operations, the public corporation’s various business units already enjoy significant operational autonomy.

RAI is currently immersed in an ambitious technological transformation plan, aiming to shift from a traditional broadcaster to a media company—echoing a trend followed by other media players. The core goal of this transition is to establish a fully IP-based infrastructure.

Born in Rome in 1960, Stefano Ciccotti has been closely tied to RAI for nearly four decades. He was appointed CTO in October 2017. Upon taking on the role, he gave an exclusive interview to TM BROADCAST, where he shared his initial thoughts and outlined the challenges ahead.

Ciccotti holds a degree in Electronic Engineering and joined RAI’s Technical Department in 1987, where he held several positions in the following years. In 1995, he moved to Omnitel Pronto Italia, and a year later, joined Telecom Italia Mobile. In 1997, he relocated to Vienna, where he was appointed Deputy Technical Director of Mobilkom Austria AG.

He returned to RAI in 1998 as Director of the newly created Broadcasting and Transmission Division. From this position, he led the transformation of RAI’s plant engineering area into a joint-stock company, founding RAI Way S.p.A. This company, part of the RAI Group, is responsible for planning, implementing, and managing telecommunications and broadcasting networks and services for both RAI and external clients.

Ciccotti served as CEO of RAI Way from 2000 to 2017 and was also its President between 2001 and 2004. During his tenure, he oversaw the company’s IPO, which was completed in 2014 with its listing on the Mercato Telematico Azionario. In October 2017, he returned to RAI to take up the role of Chief Technology Officer. 

Warner Bros Discovery acquires exclusive European rights for the Legends Team Cup tennis tournaments

Warner Bros. Discovery has announced a new partnership with the Legends Team Cup, an international tennis tour in which former ATP Tour players will compete in three teams across the US and Europe, according to a statement.

WBD Sports will offer live coverage of all seven tournaments taking place across a host of locations in 2025 on its streaming platforms, including HBO Max and discovery+. The inaugural Legends Team Cup event will kick off in Europe on 14–16 August 2025. Throughout the 2025 calendar, the tour will also take place across Asia, the Americas and the Middle East.

A Legends Team Cup highlights show will also be shown on Eurosport across Europe, and TNT Sports in the UK and Ireland following each event.

Across the 2025 calendar, the tour will visit locations in North America, Europe and Asia, with dates and venues to be revealed later.

Each of the seven tournaments spans three days, with two teams competing in a series of two singles matches and one doubles match per day. The teams, who have a combined 503 ATP Tour titles across singles and doubles, are competing to lift the Björn Borg Trophy, named in honour of the tournament patron, and with a total prize pool of $12 million on offer.

The competition is made up of three teams, each spearheaded by one of its captains: Carlos Moya, James Blake and Mark Philippoussis.

Players competing on the tour include Dominic Thiem, Sam Querrey, Jo-Wilfried Tsonga, Feliciano Lopez and Diego Schwartzman. The tour also welcomes players transitioning from the ATP Tour, such as Richard Gasquet, who last month played his final professional match at Roland-Garros, losing to Jannik Sinner in the second round.

Scott Young, EVP at WBD Sports Europe, affirmed: “We’re thrilled to bring the Legends Team Cup to millions of viewers across Europe. This innovative tournament brings together some of the most iconic names in tennis, which will deliver unforgettable moments for fans in some stunning locations across the globe. The fast-paced, unique format of two sets of four games in the competition makes it a unique proposition for fans to watch”.

“Adding the Legends Team Cup demonstrates our commitment to showcasing innovative sporting formats across the world and we can’t wait to see how some of these tennis greats extend their legacy”.

Marten Hedlund, president of Legends Team Cup, added: “Partnering with Warner Bros. Discovery marks a major milestone for the Legends Team Cup, as we’re now able to showcase our tour to millions of people around the world”.

“I’ve long had the belief that there should be more opportunities for tennis players to extend their careers once they retire from the ATP Tour, and that is what we are providing with the Legends Team Cup. This is a new chapter in the history of tennis, one where legends can continue to inspire, compete and connect with fans worldwide”. 

RTL Group acquires Sky Deutschland in bid to strengthen its streaming strategy

RTL Group has announced that it has signed a definitive agreement to acquire Sky Deutschland (DACH). The transaction brings together two of the most recognisable media brands in the DACH region, creating an entertainment business with around 11.5 million paying subscribers.

Together, the business is well-positioned to meet evolving consumer demands and compete with global streamers. The transaction combines Sky’s premium sports rights – including Bundesliga, DFB-Pokal, Premier League and Formula 1 – with RTL’s leading entertainment and news brands across RTL+, free-to-air and pay TV. It also unites the fastest-growing streaming offers in the German market, RTL+ and WOW. The transaction, which has been approved by the Board of Directors of RTL Group, is subject to regulatory approvals.

€150 million deal

According to the agreement, RTL Group will fully acquire Sky’s businesses in Germany, Austria and Switzerland, including customer relationships in Luxembourg, Liechtenstein and South Tyrol, on a cash-free and debt-free basis. The purchase price consists of €150 million in cash and a variable consideration linked to RTL Group’s share price performance. The variable consideration can be triggered by Comcast, Sky’s parent company, at any time within five years after closing, provided that RTL Group’s share price exceeds €41. The variable consideration is capped at €70 per share, or €377 million. RTL Group has the right to settle the variable consideration in RTL Group shares, cash, or a combination of both, and is considering buying treasury shares to be in a position to settle the variable consideration fully or partly in shares.

The combined business will offer a broader and more compelling German-language content portfolio for consumers across the DACH region. Viewers will benefit from expanded access to premium live sports, entertainment and news across RTL+, Sky, WOW and RTL’s free-to-air channels. By bringing together the strengths of RTL and Sky, the combined company will be able to compete against global streaming platforms. The transaction is expected to generate €250 million in annual synergies within three years, mostly cost synergies across all categories.

Under a separate trademark license agreement, RTL will have the right to use the Sky brand in the DACH region (Germany, Austria, Switzerland), Luxembourg, Liechtenstein and South Tyrol. RTL will acquire Sky Deutschland’s streaming brand “WOW” as part of the transaction.

Barny Mills, Sky Deutschland CEO, will continue to lead the Sky Deutschland business until the transaction is completed. Stephan Schmitter will stay in his current role as CEO of RTL Deutschland until closing of the transaction and then lead the combined company. RTL Deutschland will remain headquartered in Cologne and Sky Deutschland in Munich.

The two businesses will operate independently until 2026

The combined company’s pro-forma revenue for 2024 was €4.6 billion, with approximately 45 per cent of total revenue coming from subscription-based revenue. RTL Group’s pro-forma revenue for 2024 was €8.2 billion, more than 30 per cent higher than RTL Group’s reported consolidated revenue for 2024 (€6.25 billion). The acquisition of Sky Deutschland is the largest transaction for RTL Group since its inception in 2000. The two businesses will continue to operate independently until regulatory approvals are obtained, which are expected in 2026.

Thomas Rabe, CEO of RTL Group, said: “The combination of RTL and Sky is transformational for RTL Group. It will bring together two of the most powerful entertainment and sports brands in Europe and create a unique video proposition across free TV, pay TV and streaming. It will boost our streaming business, with a total of around 11.5 million paying subscribers, further diversify our revenue streams and make us even more attractive for creative talent, rights holders and business partners.”

TV audiences are more fragmented than ever: How viewing habits differ across generations

America’s youngest and oldest generations spend less time watching TV than any other age group. While Gen X (1965-1980) and Millennials (1981-1996) prefer ad-supported subscription streaming (AVOD) over FAST channels, Gen Z (1997-2012) is more likely to fragment its viewing across smaller screens, according to the latest release published by DASH in partnership with NORC at the University of Chicago.

The ARF DASH TV Universe Study shows how Americans connect to TV, how they consume it today and how that is changing across generations. According to the study, the habits of the different age brackets are shaped by the technologies they grew up with and the values they share.

Although Boomers (those born in the late 1940s through late 1950s) remain loyal to more familiar viewing patterns (54%), they are also starting to experiment with new platforms (49%). The report suggests that this could be

because FAST more closely resembles the channel-surfing experience they are used to. Also, many of those channels feature programming that was current when Boomers were growing up and it’s free, like TV used to be when they were younger.

The opposite happens with other generations.

The “parent” generations, Gen X and Millennials, are the ones who consume the most TV, especially AVOD. The study’s data shows that Gen X has a 69% preference for streaming services and 61% for FAST channels.

Millennials are the main consumers of paid content (73%) and the second most frequent users of traditional TV (58%), after Gen X. 

Radio 47 (Kenya) becomes Africa’s first fully IP-based broadcast facility using Lawo

Radio 47, a Kenyan Swahili-language radio station operated by Cape Media Ltd., has unveiled Africa’s first fully IP-based, automated hybrid broadcast facility. Located in Nairobi, the installation aims to set a new benchmark for the modernization of media infrastructure on the continent. The project was planned and executed by Kigali-based Mediacity Ads Ltd., a specialist in IP broadcast systems and Lawo’s regional partner, according to a statement.

The new facility integrates radio, television, live streaming, and remote production into a unified, flexible system, using broadcast technology from manufacturers including Lawo (Germany), MultiCam Systems (France), AVT Audio Video Technologies (Germany), Voceware, Cleanfeed, Genelec, Telos Alliance, PTZ Optics, OptiSign, and others.

IP Technology

Central to the installation is a fully IP-native broadcast infrastructure built on Lawo’s crystal broadcast consoles, Power Core DSP engine, and Ravenna/AES67-based audio-over-IP networking. The integrated Lawo VisTool GUI software provides customizable touchscreen interfaces for control and visualization of sources, meters, and routing. The new workflow allows remote control and monitoring of all broadcast operations.

Each of the three new on-air studios at Radio 47 features a Lawo crystal console, chosen for its form factor, operation, and integration with IP workflows. The consoles are networked to a centralized Lawo Power Core, a modular IP I/O and mixing engine designed with the intention of managing extensive audio routing, DSP processing, and multiple studio operations from a single location.

The Power Core is equipped with modular I/O cards that handle analog, AES3, Dante, and MADI formats. Cape Media needed flexibility to future-proof its investment while still accommodating external sources and OB workflows. The system’s modular design also allows for easy expansion and remote integration, whether for regional studio feeds, mobile devices, or external contributors.

All studios, edit rooms, and control points are connected via an AES67-compliant IP backbone, with support for Ravenna networking. This infrastructure is intended to enable real-time, lossless audio transmission and complete operational freedom — feeds, phone calls, or streams can be routed anywhere in the facility.
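For readers less familiar with AES67/Ravenna, the sketch below is a minimal, generic illustration (not Radio 47’s or Lawo’s actual configuration; the multicast address is hypothetical) of what such an audio flow looks like on the wire: linear PCM carried as RTP packets over UDP multicast, which any node on the backbone can subscribe to.

```python
# Minimal illustrative sketch: subscribe to an AES67/Ravenna-style audio flow
# (linear PCM as RTP over UDP multicast) and inspect each packet's RTP header.
import socket
import struct

MCAST_GRP = "239.69.0.1"   # hypothetical multicast address for one audio flow
MCAST_PORT = 5004          # RTP port commonly used for AES67 streams

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# Join the multicast group on any local interface
mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet, _ = sock.recvfrom(2048)
    # 12-byte fixed RTP header: version/flags, payload type, sequence, timestamp, SSRC
    flags, pt, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    payload = packet[12:]  # raw L16/L24 PCM samples, channel-interleaved
    print(f"seq={seq} ts={ts} ssrc={ssrc:#x} payload={len(payload)} bytes")
```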

“We wanted more than another radio station. We wanted to build the future of African media—an infrastructure that removes traditional barriers, empowers creativity in the young generation, and embraces tomorrow’s technology today”, explained Simon Gisharu, Chairman, Cape Media Ltd.

IP Workflows

“This wasn’t just an upgrade”, added Fred Martin Kiwalabye, Project Manager and Head Technical Engineer at Mediacity Ads Ltd. “It was a reinvention of how media can work—cleaner, smarter, faster, and future-proof. Powered by Lawo and supported by visionary leadership, we’ve redefined the possibilities”. 

Riedel launches RefSuite Ecosystem for sports officiating, coaching, and production

Riedel Communications has announced the launch of RefSuite, a new Managed Technology solution tailored to professional sports workflows. RefSuite combines hardware, software, cloud services, and 24/7 remote operations into one ecosystem intended to transform refereeing, coaching, and broadcast operations across live sports environments, according to a statement.

Developed in close partnership with referees and industry stakeholders, and built by Riedel’s engineering team, RefSuite brings together five tightly integrated modules: RefCam, RefBox, RefComms, CoachComms, and RefCloud. From head-mounted referee cameras and FIFA Quality-certified Video Assistant Referee (VAR) systems to Bolero S-based wireless comms and cloud-based media management, RefSuite aims to enable coordination, performance, and decision-making while delivering previously unseen, immersive broadcast perspectives.

“RefSuite is the culmination of years of field experience, product innovation, and direct collaboration with the world’s leading sports organizations”, explained Lutz Rathmann, CEO Managed Technology, Riedel Communications. “With each module already proven in high-profile deployments worldwide, RefSuite represents a decisive move away from fragmented solutions — delivering a cohesive, future-ready platform that empowers officiating, coaching, and broadcast teams alike”.

RefSuite’s modular design is intended to offer scalability and flexibility, adapting to each client’s requirements and scale. It delivers stabilized, broadcast-grade referee POV video, while the RefBox VAR review system offers intuitive, synchronous control of all video sources for instant analysis, playback, and clip export. RefComms and CoachComms provide encrypted, low-latency referee and team communication, complemented by the RefCloud media cloud as a central hub for secure media storage, review, and collaboration.

The ecosystem can be extended with Riedel’s Easy5G Private 5G network solution, offering data transmission for full-pitch RefCam video streaming or remote RefBox review workflows. Combined with optional centralized monitoring and support via Riedel’s Remote Operations Center (ROC), this Managed Technology offering aims to deliver turnkey deployment and operational peace of mind.

“RefSuite represents our vision of seamless delivery — a unified technology and service model designed to meet the demanding needs of international federations, leagues, broadcasters, and production partners”, added Marc Schneider, Executive Director Global Events, Riedel Communications. “This is just the beginning: RefSuite is the first step in a new generation of integrated ecosystems that will unlock new capabilities for remote production, performance analysis, and immersive fan experiences — shaping the future of motorsport, maritime, and live production environments alike”. 

IBC2025 introduces Future Tech, a new hub of emerging technologies

IBC2025 is poised to lead the global conversation on media and entertainment (M&E) innovation when it brings the industry’s creative, technology, and business communities to the RAI Amsterdam on 12-15 September. Industry players from 170 countries will converge on this year’s show to connect, collaborate and unlock business opportunities – with a sharpened focus on transformative technology and real-world application. IBC2025 introduces Future Tech, a dynamic hub of emerging technologies, collaborative projects and next-gen talent taking up all of Hall 14.

“Shaping the future of media and entertainment worldwide is more than our theme – it’s our mission,” said Michael Crimp, IBC’s CEO. “IBC2025 is where the brightest minds from the international M&E community explore the ideas and technologies reshaping our industry. Future Tech brings that innovation into focus – from AI-powered personalisation and cloud-native workflows to content provenance and private 5G networks, visitors can see the future being defined and built in real time.”

Future Tech: Keeping innovation at the heart of IBC2025

Pioneering advances in areas such as AI, virtual production, interactive media, sustainable technology, and immersive experiences will drive visitors’ show journeys throughout IBC2025, but these technologies take centre stage in Future Tech.

Those exploring Hall 14 will see how AI is moving from the conceptual to the practical, spanning live production automation, generative content tools, and new uses just being introduced. They will also get the opportunity to delve into the virtual world of LED backdrops, augmented reality and real-time VFX, with exhibitors including Microsoft, Mo-Sys, 3Play, CaptionHub, Deepdub, Files.com, Monks (formerly Media Monks), Tata Communications, Veritone, and Ultra HD Forum.

Offering a curated blend of visionary showcases, hands-on demos, and thought leadership, Future Tech will also feature:

› Accelerator Innovation Zone: Where nine cutting-edge Proof of Concept (PoC) projects fast-track solutions to some of the industry’s toughest practical challenges through media owner and tech vendor collaboration.

› Future Tech Stage: Packed with keynotes, panels and live demos – including the Accelerator PoCs – and sponsored by Microsoft, this stage will feature industry pioneers previewing the technologies set to transform content creation and delivery.

› IBC Hackfest x Google Cloud: A high-intensity, two-day hackathon with digital innovators, tech entrepreneurs, software developers, creatives and engineers tackling real M&E challenges using Gemini AI and more.

› Google AI Penalty Challenge: This immersive interactive football-fan experience employs more than 15 integrated technologies to showcase AI-driven decision-making in sports performance.

› 100 Years of Television: Celebrating a century of innovation since the first TV picture was demonstrated by John Logie Baird at Strathclyde University in 1925, this exhibit not only looks back but offers a glimpse into what the next 100 years might bring.

› Networking Hub: From stand-up breakfasts and informal lunches to curated meetups and evening DJ sets, this buzzing hub offers a relaxed space for media and tech professionals to connect.

Future Tech also plays host to the growing IBC Talent Programme, supported by partners such as Rise, Rise Academy, Host Broadcasting Services (HBS), Gals N Gear, and Media Entertainment Talent Manifesto. Running on Friday 12 September, the programme presents a wide range of sessions and speakers covering everything from mentoring and career pathways to practical skills, diversity and inclusion.

Exhibitor innovation everywhere

Innovation is by no means restricted to Hall 14. Groundbreaking advances in everything from production and transmission to cameras and lighting to streaming and cloud services will be spotlighted on stands across the show – including exhibitor-driven presentations, demos and panels on the two Content Everywhere stages and the Showcase Theatre.

Companies returning to the show to meet, collaborate and showcase innovations include Adobe, Amazon Web Services, Ateliere, Avid, Blackmagic, Canon, EVS, Grass Valley, Panasonic, Riedel, Ross Video, Samsung, Sony and Zattoo. New exhibitors include Baron Weather, Cachefly, Files.com, Momento, NewBlue, OTT Solutions, Plain X, Raysync, and Remotly. The amount of space booked by exhibitors for IBC2025 has now passed 44,000 square metres, with over 1,100 exhibitors booked so far.

Steve Connolly, Director at IBC, notes: “IBC2025 is set to be our most transformative edition yet – a show where innovation is embedded across the entire experience. From world-first product launches to pioneering conference insights, we’re bringing together the technologies, people, and ideas that are redefining what’s possible in M&E.”

Deep dive at the IBC Conference

This year’s IBC Conference, the exclusive knowledge and networking experience, is running from 12-14 September. It brings together a broad mix of industry leaders and innovators to address topics such as: the business of TV and the search for sustainable growth across new platforms; live sports and real-time experiences; personalised advertising and the future of commercial models; and discovery and prominence – how to ensure content is surfaced in a crowded landscape.

This year’s conference will once again deliver visionary speakers who will cover business-critical challenges around technology and advertising models. AI also features heavily across the programme, examining not just its creative potential but how it can drive efficiency, enhance personalisation, and support real-world editorial workflows.

Daktronics and Grass Valley team up to streamline venue production workflows

Grass Valley and Daktronics have announced a technology partnership that unites Grass Valley’s live production solutions with Daktronics’ LED displays. Together, they aim to deliver fully integrated solutions that enhance content creation, production efficiency, and real-time presentation for sports and entertainment venues, according to a statement from Grass Valley.

“This partnership with Grass Valley represents a powerful alignment of complementary technologies”, affirmed Brad Wiemann, Interim CEO and President of Daktronics. “Together, we’re helping venues deliver fully immersive, end-to-end experiences that transform how fans engage with live events”.

“Joining forces with Daktronics allows us to push the boundaries of what’s possible in live production and venue presentation”, added Jon Wilson, CEO of Grass Valley. “With two of the industry’s leading brands, we’re excited about what this means for the future of integrated, scalable solutions in the world’s top venues”.

Several joint projects are already underway, integrating Daktronics’ display and control technology, including Camino, with Grass Valley’s IP-based live production tools.

Daktronics and Grass Valley will be showcasing this partnership at the upcoming IDEA Conference, taking place in Boston July 13 to 16, 2025. 

Telos announces Axia Altus SE is now available

Telos Alliance has announced that its Axia Altus SE virtual mixing console is now shipping. Like the original software implementation of Altus, Altus SE aims to offer a full-function, browser-based mixer for remote events and contributors, allow easy deployment of temporary studios, and serve as a new option for disaster recovery sites, according to a statement.

As part of the company’s newly introduced Studio Essentials family of products, Altus SE is built on the same fanless hardware as the Telos VX Duo broadcast phone system. It is smaller and designed to operate more quietly. It includes 8 faders and can be expanded to up to 24 faders in 4-fader increments via buyout-style licenses.

“The reaction to Altus SE in our booth at the 2025 NAB Show was overwhelmingly positive”, affirmed Cam Eicher, Telos Alliance VP of Worldwide Sales. “The idea of having the same features and flexibility as our software-based Altus virtual mixing console, but with the convenience of an easy-to-deploy dedicated hardware appliance, really resonated with everyone at the show. We’re happy to share that Altus SE is in stock and ready to ship”. 

LiveU concludes FIDAL’s ‘Field Trials Beyond 5G’ research project

LiveU has announced the completion of the European FIDAL/B5VideoNet (B5GVN) project, co-funded by the European 6G Smart Networks and Services programme. The project aimed to demonstrate advancements in remote video production workflows using multi-link, multi-slice 5G bonding for single- and multi-camera transmission and return feeds to the receiver. LiveU conducted tests and trials using its multi-camera LU800 PRO units and Xtend connectivity solution with beyond-5G (B5G) capabilities, according to a statement.

As part of this project, LiveU focused on three remote production use cases:

› Cloud Remote Production – leveraging slicing configurations to try to guarantee bandwidth and latency for cloud-based broadcast workflows.

› Edge-Based Production – integrating mobile edge computing within an operator infrastructure (like a private cloud).

› On-Site Remote Production with Cloud-based Solutions – testing uplink/downlink slicing configurations for production workflows using LiveU’s Mobile Receiver, Xtend and LU-Link together with the LiveU Ingest automatic recording and story metadata tagging solution.

The trials took place at the University of Patras (UoP) B5G testbed in Greece. All three scenarios had the objective of exploring the benefits of multi-link, multi-slice bonding for broadcast-grade video transmission – a mechanism to guarantee specific bandwidth, latency, or error-rate parameters for individual SIMs across the 5G infrastructure. This proved to be important under network load and congestion conditions.

The trials included various Guaranteed Bit Rate (GBR) and Non-Guaranteed Bit Rate (Non-GBR) slice and multi-slice configurations. Upon deployment in commercial networks, this technology would enable broadcasters and content creators to match service levels to their production requirements, resources and budgets. For example, higher-bandwidth GBR slices could be reserved where necessary, while more economical GBR slices, or even Non-GBR, could be relied upon for less critical traffic or for supplementing bandwidth where needed (and per budget).

Another ‘B5G-towards 6G’ technology explored was Network Exposure APIs for network resource allocation. The project laid the foundation for real-time video contribution service ordering, allowing LiveU and similar applications to request slices based on event timing and location. This technology (and related use cases) allows cellular operators to provide services on an ‘as needed’ basis rather than ‘everywhere all the time’. 

Canal+ produces MotoGP documentary for Apple Vision Pro with Blackmagic URSA Cine Immersive

Canal+ has used the new Blackmagic URSA Cine Immersive camera and DaVinci Resolve Studio to film and finish its latest sports documentary about MotoGP for viewing on Apple Vision Pro. The project is part of a new generation of immersive workflows for capture, postproduction and viewing on Apple Vision Pro, and will be available exclusively on the Canal+ app in September 2025, according to a Blackmagic statement.

Produced in collaboration with MotoGP and Apple, the documentary follows world champion Johann Zarco and his team during their dramatic home victory at the French Grand Prix in Le Mans.

It was captured using the URSA Cine Immersive camera with dual 8160 x 7200 (58.7 megapixel) sensors at 90fps, delivering 3D immersive cinema content to a single file mixed with Apple Spatial Audio. The MotoGP sports experience places viewers in the heart of the action, from the pit lane and paddock to the podium.

“MotoGP is made for this format”, said Etienne Pidoux at Canal+. “You feel the raw speed, and you see details you’d otherwise miss on a flat screen. It puts you closer to the machines and the team than ever before”.

To place the viewer at the center of the action, Canal+ deployed multiple URSA Cine Immersive cameras. “We had two cameras on pedestals and one on a Steadicam”, explained Pierre Maillat of Canal+. “The idea was to be able to swap quickly between Steadicam and fixed setups depending on what was happening in the moment. The Steadicam setup was extremely valuable”, noted Pidoux. “It made us more reactive in a fast-changing environment and gave us more agility while filming”.

“Immersive video changes how you shoot”, added Pidoux. “You plan more, shoot less, and you rethink composition because of the 180 degree view, especially in tight or crowded spaces like the pit lane”. Lighting was also a consideration inside the team garages. “We added some extra light to compensate for the 90 frames per second stereoscopic capture”.

Each camera was paired with an ambisonic microphone to capture first-order spatial audio; the ambisonic mics were supplemented by discrete microphones for interviews and other critical sound sources. “We recorded in ambisonic Format A for the immersive mix and channel-based for other sources,” Maillat noted. “Everything was timecoded wirelessly and synced on both the cameras and the external recorders”.

A portable production cart with a Mac Studio running DaVinci Resolve Studio, alongside an Apple Vision Pro, was set up trackside to monitor and test shots in context. “This approach allowed us to check the content right after shooting and helped us verify framing while still on location”, said Maillat.

Canal+ had a second Mac Studio running DaVinci Resolve Studio and an Apple Vision Pro set up at the hotel in Le Mans to handle media offload and backups. With 8TB of internal storage, recording directly to the Media Module, the crew could film more than two hours of 8K stereoscopic 3D immersive footage on the track without needing to change cards.
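As a rough, editorial back-of-envelope check (not figures quoted by Canal+ or Blackmagic), that capacity and record time imply a sustained write rate on the order of one gigabyte per second:

```python
# Illustrative estimate only: sustained data rate implied by ~8 TB of storage
# holding a bit more than two hours of stereoscopic immersive footage.

CAPACITY_TB = 8        # internal Media Module capacity quoted in the article
RECORD_HOURS = 2       # "more than two hours" -> use 2 h as an upper bound on rate

capacity_bytes = CAPACITY_TB * 1e12     # treating TB as decimal terabytes
seconds = RECORD_HOURS * 3600

rate_gb_s = capacity_bytes / seconds / 1e9   # gigabytes per second
rate_gbit_s = rate_gb_s * 8                  # gigabits per second

print(f"~{rate_gb_s:.2f} GB/s, ~{rate_gbit_s:.1f} Gbit/s sustained (upper bound)")
# -> ~1.11 GB/s, ~8.9 Gbit/s
```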

Postproduction took place in Paris, where Canal+ used a Mac Studio running DaVinci Resolve Studio for editing, color grading, and audio mixing. “We could even preview the stereoscopic timeline directly in Apple Vision Pro, crucial for immersive grading”, explained Maillat.

Spatial Audio was mixed using DaVinci Resolve Studio’s Fairlight. “Initially, we planned to use a different digital audio workstation (DAW), but DaVinci Resolve Studio and Fairlight was the platform that gave us both creative flexibility and the high-quality deliverables for Apple Vision Pro”, explained Maillat.

“Filming with the URSA Cine Immersive camera and viewing it in Apple Vision Pro, we found incredible moments we’d normally treat as background”, Pidoux concluded. “Cleaning the track, helmet close-ups, the crowd, they all become part of the experience”. 

Lynx announces new key appointments

LYNX Technik has announced an executive leadership transition, effective July 1, 2025. Markus Motzko has been appointed Chief Executive Officer and Vincent Noyer will take on the role of Chief Technology Officer, according to a statement.

Founder and longtime CEO Winfried Deckelmann will transition out of daily operations and continue to support the company as a member of the Supervisory Board, where he will serve as Chairman.

“Markus and Vincent represent the perfect balance of operational excellence and visionary thinking”, said Winfried Deckelmann. “They have both demonstrated a deep understanding of our technology and our customers’ needs. I have full confidence in their ability to guide LYNX Technik into its next chapter while honoring the principles that have shaped our company since the beginning”.

Dr. Markus Motzko brings experience in production processes, manufacturing optimization, and technical leadership. Since joining LYNX Technik, he has led production and administration, focusing on operational efficiency and long-term growth. Prior to joining the company, Markus held roles in the industrial and medical semiconductor sectors, managing product development, supply chain operations, and customer engagement.

“I’m honored to help lead LYNX Technik through its next phase”, affirmed Markus Motzko. “The company has a strong reputation for engineering excellence and customer-focused innovation. I look forward to building on that legacy with Vincent and our talented global team”.

Vincent Noyer, previously Director of Product Marketing, assumes the CTO role with over two decades of experience in the broadcast and live sports industries. Before LYNX Technik, Vincent held senior positions at Ross Video, where he helped drive the PIERO Sports Graphics solution.

“LYNX Technik has always been about delivering smart, dependable tools that broadcast and AV professionals can rely on”, concluded Vincent Noyer.

“As CTO, my focus is on continuing that tradition, developing technology that addresses real-world challenges and evolves with our customers’ needs. We will keep refining our solutions to make them even more efficient and aligned with the way people work with content.” 

A new chapter for Italy’s Rai:

Exclusive interview with the executive technical leadership to explore its technological horizon

Rai’s Director of Technology, Ubaldo Toni, together with his team, answers all our questions. We review the current technological status of Italy’s public broadcaster—now undergoing a deep transformation—and its main investment priorities. The interview also offers broader insights into the future of the broadcast industry.

A change of cycle at Rai. The retirement of its CTO, Stefano Ciccotti—a story exclusively reported by TM BROADCAST—opens a new chapter for the Italian public broadcaster. As of the publication date of this issue, Rai has not officially confirmed how its final organizational structure will look. It’s worth noting, in any case, that the various business units within the public corporation—due to the vast scope of its activities—already enjoy a considerable degree of operational autonomy. Over the years, the CTO position has taken on a more institutional character within the company.

This makes it an ideal moment to take stock of Rai’s current situation and to explore the key elements of its technological outlook for the years ahead. Helping us in this task is Ubaldo Toni, Director of Technology, who has brought together his senior technical leadership team to exclusively answer all our questions.

This article is part of a series exploring the main public broadcasters across Europe. In recent issues, we have focused on the Nordic countries, featuring interviews with the technical heads of Sweden’s SVT, Denmark’s TV 2, Iceland’s RÚV, and Norway’s NRK (you can access the articles through the links). As a side note, Ubaldo Toni mentioned that he had read these pieces and that the ambitious upgrades carried out by SVT and TV 2—which he personally visited—once served as a source of inspiration for Rai.

A large-scale shift to UHD would involve significantly higher costs — for example, storage requirements can increase fourfold — so we enable full UHD functionality only where and when it is truly needed

In addition to Ubaldo Toni, the interview also includes contributions from Nino Garaio, Head of Digital Production Systems; Fabio DiGiglio, Deputy Head of Digital Production Systems; Stefano Marchetti, Head of Platform Engineering; Guglielmo Trentalange, Head of Studio Networking Systems; Andrea Menozzi, Head of Infrastructure Engineering; and Riccardo Rombaldoni, Head of Content Production Engineering.

RAI’S DIRECTOR OF TECHNOLOGY, UBALDO TONI

As for Rai’s general organizational structure, it is important to highlight that the “Direzione Tecnologie” led by Toni is primarily focused on television production systems and media-related IT infrastructure. The scope of this interview has therefore been limited to that area. Other domains are managed by separate business units: “Reti e Piattaforme” oversees OTT services, non-linear media, and DVB-S/DVB-T distribution; “ICT” handles corporate-wide IT and networking outside the media domain; CRITS (“Centro Ricerche Innovazione Tecnologica e Sperimentazione”) is responsible for R&D activities; and “Direzione Produzione TV” manages the operational aspects tied to major events such as the Sanremo Festival.

With four production centers, 17 regional offices, around 2,000 journalists, and dozens of linear channels, Rai stands as one of Europe’s leading broadcasters. It also holds an extensive content archive, dating back to 1924 for radio and 1954 for television. Currently, the corporation is undergoing a major transformation. “We’re shifting from broadcaster to media company, so we are developing a multi-year, multi-million-euro innovation plan to support this transition,” explains Ubaldo Toni. “We are steadily moving toward a fully IT- and IP-based architecture, though completing the full transition will take another four to five years.”

Have you implemented any recent technological innovations or upgrades that you would like to highlight?

In recent years, we completed the renewal of the production infrastructure at one of our major Rai Production Centres in Rome (Studi Nomentano). This facility includes six medium- to large-sized studios (ranging from 380 to 1,500 sqm) dedicated to entertainment programs, five control rooms equipped with HD/UHD-12G-ready technology, a central SMPTE 2110 IP-based routing system, post-production facilities with centralized storage, and new LED-based lighting systems that have reduced energy consumption by 80%.

We also recently completed a full overhaul of our national news production systems, transitioning from a traditional IT environment to a private cloud. This cloud infrastructure features state-of-the-art technology, both at the application level — using AVID MediaCentral Cloud UX — and at the hardware layer, which is built on Dell and VMware hyperconverged virtualization. As a result, we’ve reduced energy consumption by 60%, while continuing to serve over 800 journalists, who now benefit from fully effective remote working capabilities.

What is your current technological landscape when it comes to the transition from SDI to IP on the production side?

We began introducing SMPTE 2110 IP technology where we saw clear benefits, not simply to follow industry trends. In the live production area, we didn’t find compelling technical or economic reasons to be early adopters of the 2110 standard. As a result, most of our newer live production systems are based on 12G SDI UHD/HDR, while SMPTE 2110 is used primarily to optimize signal distribution and enable shared use of resources across different studios and control rooms. We believe this hybrid approach has been the best choice in terms of reliability, sustainability, investment efficiency, maintenance costs, and operational effectiveness.

Rai has been a pioneer in adopting UHD. Could you summarize your current position and the main challenges you foresee in this area?

On the production side, we have primarily invested in OB vans, as they are typically used for premium events that may be sold or distributed internationally, where UHD quality is often required. Our long-term infrastructure projects are also designed with UHD support in mind.

However, a large-scale shift to UHD would involve significantly higher costs — for example, storage requirements can increase fourfold — so we enable full UHD functionality only where and when it is truly needed. We particularly value flexible licensing models (e.g., for high-end cameras) that allow UHD capabilities to be activated on a usage basis.
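As a rough, editorial back-of-envelope illustration of that fourfold figure (assumed bitrates, not Rai’s actual numbers): UHD carries four times the pixels of HD, so at a comparable codec, frame rate and bit depth, bitrate — and therefore storage per hour — scales roughly in proportion.

```python
# Illustrative pixel-count comparison behind the "storage can increase fourfold"
# remark. The 100 Mbit/s HD mezzanine bitrate is an assumed example value.

HD_W, HD_H = 1920, 1080
UHD_W, UHD_H = 3840, 2160

pixel_ratio = (UHD_W * UHD_H) / (HD_W * HD_H)   # = 4.0

hd_mbit_s = 100                     # assumed HD production bitrate
uhd_mbit_s = hd_mbit_s * pixel_ratio  # first-order scaling with pixel count

to_gb_per_hour = lambda mbit_s: mbit_s * 1e6 * 3600 / 8 / 1e9

print(f"Pixel ratio UHD/HD: {pixel_ratio:.0f}x")
print(f"HD:  ~{to_gb_per_hour(hd_mbit_s):.0f} GB/hour")
print(f"UHD: ~{to_gb_per_hour(uhd_mbit_s):.0f} GB/hour")
# -> 4x the pixels; ~45 GB/hour vs ~180 GB/hour at these assumed bitrates
```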

On the distribution side, Rai launched its 4K channel in 2016, broadcasting free-to-air on the Tivùsat digital satellite platform. The channel is used to air UHD versions of major live events (such as the Opening of the Holy Door) and other premium content. These broadcasts are simulcast with HD versions on our premium HD channels and are also available via streaming on HbbTV-compatible devices.

In relation to this, could you provide an overview of the current state of the transition to UHD in the Italian television landscape?

UHD is steadily gaining ground in high-end production, particularly for premium events and some scripted content.

AI has recently been adopted to enhance processes related to media accessibility. It is currently used to assist with batch preparation of subtitles and is being tested for automatic transcription of live events

However, its broader adoption is still limited by distribution challenges. In Italy, DTT remains the primary delivery platform, and even with T2-HEVC, there isn’t sufficient bandwidth to support UHD broadcasts terrestrially. Satellite is also used, but its reach is limited. Broadband distribution remains costly due to high CDN expenses, and fiber-to-the-home coverage is still not widespread across the country. At this stage, large-scale investments in UHD are not justified by the potential for additional revenue.

This year, we plan to sign several framework contracts for equipment such as UHD video switchers, studio cameras and lenses, PTZ cameras, ENG cameras, and broadcast monitors

Are there any interesting projects or upgrades in the near future that you can share with us?

We’ve just entered the executive phase of the Rai Sport digitization project. The goal is to connect our sports channels and newsrooms in Rome and Milan to the existing Avid infrastructure, while integrating EVS equipment to create a fully streamlined workflow for event coverage and sports news production. The project is expected to be fully implemented within two years.

In addition, we are about to launch a proof of concept (POC) for studio automation in our regional news operations.

In news or sports production, graphics are a key element. In this regard, how do you approach this area, and to what extent are you using immersive technologies or augmented reality?

We are currently using immersive technologies and augmented reality in several studios in Milan, primarily for sports programming, as well as in Rome for other popular TV programs and documentaries. We have also planned further investments in this area to expand and enhance our capabilities.

I’d like to know a bit more about the key elements of your studios and facilities. Have you recently acquired any significant equipment? Which manufacturers do you usually rely on?

We have recently carried out significant upgrades in our TV studios through public tenders for key technologies. This year, we plan to sign several framework contracts for equipment such as UHD video switchers, studio cameras and lenses, PTZ cameras, ENG cameras, and broadcast monitors.

On the infrastructure and control side, we typically rely on both Utah and Riedel’s MediorNet distributed video routing systems, as well as the Lawo VSM broadcast control system. We are also planning investments in technologies aimed at significantly reducing operational and energy costs — including LED walls, LED floor systems, LED lighting, and, last but not least, virtual set systems.

Among the events Rai has recently covered, could you share a specific success story you’re particularly proud of — one that stood out due to the challenges it involved?

I guess “Sanremo Festival 4K” and “Eurovision Song Contest” in Turin, but as this question relates to operational events managed by a different business unit, I may not be able to answer.

What would you say is the current state of 5G adoption in the Italian broadcast industry and within your workflows?

We are running extensive tests, using either public networks or private “5G bubbles” at some events. This is still an R&D matter for us, so Rai CRITS (Centro Ricerche ed Innovazione Tecnologica) should be contacted for further details.

How are you adopting artificial intelligence and process automation in your workflows? How do you see its influence in the near future?

AI has recently been adopted to enhance processes related to media accessibility. It is currently used to assist with batch preparation of subtitles and is being tested for automatic transcription of live events. More broadly, we see AI as increasingly valuable in improving the performance of production tools, which in turn will help boost overall productivity.

What is your level of confidence in and adoption of cloud technologies compared to on-premise systems?

On-premises systems, while requiring a significant upfront investment, offer predictable costs and greater control — which is essential for a public broadcaster like Rai, given our strict compliance and security requirements.

That said, it’s clear that cloud technologies offer unique advantages, particularly in terms of flexibility and scalability.

We see a hybrid approach as the most effective strategy — one that leverages the strengths of both cloud and on-premises solutions to address specific business needs.

For example, we see clear value in beginning to support Rai’s program delivery partners with ingest and QC file services configured in the cloud. This allows us to reduce feedback latency regarding the technical compliance of media assets and to scale delivery operations during peak periods throughout the year.

However, we have no plans to move our program archive to the cloud. Too many questions remain open around content rights, security, and content management — particularly when relying, even partially, on major providers such as AWS or Azure.

The future of TV in two takeaways

“A change of mindset is needed, from the old way of “creating” television to the new one, involving not just engineering matters but also operations, professional training and human resources”

“That old “niche” made up of companies and technology suppliers is disappearing, and we can already see on the horizon, even in the media/broadcast world, big IT market players such as Amazon, Microsoft, Huawei, Nvidia and Meta, with their impressive economic power and resources”

This also ties into the broader issue of the digital divide. A cloud-based platform requires stable and high-speed network connectivity, especially for post-production workflows. Unfortunately, that level of connectivity is not always guaranteed — particularly in scenarios where cloud processing would be most beneficial, such as special outdoor productions or major sporting events like the Giro d’Italia cycling race.

What is your vision for the transition of the broadcast industry to a software-based infrastructure?

The transition to software-based infrastructures is progressing rapidly, driven by advances in artificial intelligence, cloud computing, and IP-based workflows. Our objective is to harness the potential of software to achieve more flexible, scalable, and efficient operations.

At the same time, it’s essential to maintain the reliability and security that have traditionally characterized broadcast environments.

An equally important focus for us is the training of engineers. Their skillsets must evolve to meet the growing complexity involved in managing and administering software-based systems.

Would you like to add anything else?

RAI ENGINEERING TEAM

An equally important focus for us is comprehensive training across all levels — from project and system design to operations and maintenance. As we adopt more complex, software-driven infrastructures, it’s critical that all teams involved are well-prepared before systems go live. Engineers must understand not only the underlying architecture, but also how to configure, operate, and troubleshoot new technologies effectively.

Likewise, operators need to adapt to evolving workflows, and maintenance personnel must be equipped to handle updates, integration issues, and long-term support. In our view, investing in training ahead of final release is not optional — it’s a key success factor for ensuring reliability, minimizing downtime, and securing a smooth transition to next-generation broadcast systems.

Technology is rapidly and relentlessly transforming professions and duties. A change of mind is needed from the old way of “creating” television to the new one, involving not just engineering but also operations, professional training and human resources.

This applies not only to media companies but also to suppliers, who face the same challenges, including the recruitment of new talent and of established professionals in the sector.

An important focus for us is the training of engineers. Their skillsets must evolve to meet the growing complexity involved in managing and administering software-based systems

That old “niche” of broadcast companies and technology suppliers is disappearing, and we can already see on the horizon, even in the media/broadcast world, the big IT market players such as Amazon, Microsoft, Huawei, Nvidia and Meta, with their impressive economic power and resources.

Rai, as a public service broadcaster, faces an additional layer of complexity in this context: it is subject to the Italian public procurement code, under which it must plan its technological investments with extreme caution and precision, in a complex and highly competitive landscape in Italy and Europe.

CPTV NOMENTANO

We recently completed a full overhaul of our national news production systems, transitioning from a traditional IT environment to a private cloud

Rakuten TV goes all-in on FAST

We interview Rakuten TV’s Chief Product Officer, Sidharth Jayant, to explore the company’s strategy and most notable technological implementations

The FAST (Free Ad-supported Streaming Television) phenomenon has become one of the hottest trends in the industry—and it seems we’re only at the beginning. “What we’re seeing now is that a large part of the industry still hasn’t fully embraced FAST,” Sidharth Jayant, Chief Product Officer at Rakuten TV, explains to TM BROADCAST.

This streaming platform played a pioneering role in this vertical, launching its first FAST channels back in 2020. “We were one of the first European companies to do so at scale, across multiple countries,” emphasizes Sidharth Jayant.

“Over the past five years, these channels have gained incredible traction for us. They’ve become a key growth driver—so much so that continuing to invest in them was a no-brainer.”

Today, Rakuten TV owns and operates hundreds of FAST channels.

“Some TV manufacturers were ahead of the curve and adopted it early on, but now more and more are jumping on the FAST bandwagon,” continues Rakuten TV’s CPO. “On the telecom side, especially in Europe, there’s still a lot of room for growth. Not enough telcos have adopted FAST yet.”

“Most of our markets are localized, and every time we roll out a new feature, it needs to come with the right copy in each language. We’re looking at how AI can help us improve that process”

Sidharth Jayant has helped expand the company’s service from two to more than 40 markets across Europe during his ten years at Rakuten TV. Among the most notable recent technological implementations, Jayant is particularly proud of Rakuten TV Enterprise, a newly created business line that is already fully operational. This solution is designed to enable content owners and distributors to easily launch and monetize their FAST channels and video-centric apps.

In this interview, we explore all the key aspects of this new unit and also review Rakuten TV’s overall strategy, its use of artificial intelligence, its vision of the FAST universe, and its future plans, among other topics.

What would you say are the main technical challenges Rakuten TV is currently facing?

If we take a step back, we’ve been running this service for over 15 years now, across Europe. The platform operates under two business models: a transactional one—where users can rent or buy movies—and a free one, supported by advertising. This includes both on-demand content and FAST (Free Ad-Supported TV) channels.

One of the key technical challenges is managing the diversity of devices our service runs on. Smart TVs or CTVs, as we call them, are especially important to us. They offer the best environment for watching long-form content like movies. People sit down in their living rooms, in front of high-quality screens, and engage for longer sessions.

From a technical perspective, it’s crucial to understand that each TV manufacturer is different. The operating system may vary, the video player might be different, and even the specs required to build the application can change.

So the first challenge is embracing that diversity and building a service that not only supports it, but actually thrives in it—delivering the best possible experience for each type of device. And the same applies to smaller screens, web platforms, and consoles. Each platform involves a different usage context, but the goal is always the same: delivering a great playback experience.

The second challenge I’d highlight is the development of a new business line, which we call Rakuten TV Enterprise. Over the years, we’ve developed a lot of great technology for our own app, and now we’re at a point where we can offer those same tools to partners and clients.

These might be content owners wanting to launch a channel or an app, or telcos looking for a transactional store or FAST channels. We can build the full service for them. We’ve been developing this for the past couple of years, and it’s now ready. In fact, many clients are already using it.

So this has been another major challenge: running a successful B2C service while also creating and scaling a new B2B line. Thankfully, we’ve been able to leverage the tools we originally built for ourselves. Still, it’s been demanding from a technical point of view— especially when it comes to prioritizing developments and maintaining high quality for two very different customer bases. One is B2C, the other B2B, and their needs are completely different. Those are the two main challenges that come to mind—though of course, there are more.

Before we go into more detail about that new business line, I’d like to ask: what strategies do you generally follow to monetize your services?

Let’s start with the ongoing B2C service. As I mentioned earlier, we offer movies to rent or to buy. This allows us to provide newer content from the biggest Hollywood studios—something that’s not always available on other streaming platforms, because they work under different models. So, the first part of the strategy is offering premium, up-to-date content.

The second part is simplifying the rental and purchase experience as much as possible. That includes integrating local payment methods, which are very important in each market. For example, in Poland, Blik is widely used. In Sweden, Swish is very popular. In some countries, PayPal is more common. And of course, credit cards are standard. So we try to make the process as smooth and user-friendly as we can, tailored to each region.

“If you’re watching a movie or a TV show, the ad should appear at just the right moment—so we can monetize your viewing, but without disrupting the experience”

The other key pillar is advertising. We aim to build an ad experience that isn’t intrusive and that respects user preferences. Ads need to blend in and out of the content seamlessly. If you’re watching a movie or a TV show, the ad should appear at just the right moment—so we can monetize your viewing, but without disrupting the experience.

Then, when it comes to the enterprise side, the monetization model changes completely. Depending on the kind of service we’re offering, we might use a fixed-fee approach, or explore more innovative models—like helping our partners monetize their inventory, rather than charging them a large upfront cost. It really depends on the use case, but overall, it’s a very exciting area for us right now.

How do you handle content storage from a technical standpoint? How much do you rely on the cloud, and in which cases do you make use of it?

We definitely take advantage of cloud storage solutions. It’s pretty basic for us at this point—we rely on it heavily. We use different storage classes so we can optimize both performance and cost. And we also apply retention policies to remove content that’s no longer in use. So broadly speaking, yes, it’s a cloud-based approach.
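As an illustration of what storage classes plus retention policies look like in practice, here is a minimal sketch against an S3-compatible object store using boto3. The bucket name, prefixes and day counts are hypothetical; the interview does not disclose Rakuten TV’s actual provider or configuration.

```python
import boto3

# Assumption: an S3-compatible object store; the actual provider and bucket
# layout are not disclosed in the interview.
s3 = boto3.client("s3")

lifecycle = {
    "Rules": [
        {
            # Move older catalogue assets to a cheaper storage class.
            "ID": "tier-down-catalogue",
            "Status": "Enabled",
            "Filter": {"Prefix": "vod/"},
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
        },
        {
            # Retention policy: delete recordings that are no longer in use.
            "ID": "expire-fast-recordings",
            "Status": "Enabled",
            "Filter": {"Prefix": "fast/recordings/"},
            "Expiration": {"Days": 90},
        },
    ]
}

s3.put_bucket_lifecycle_configuration(
    Bucket="example-media-bucket",          # hypothetical bucket name
    LifecycleConfiguration=lifecycle,
)
```

Rules like these let the cost optimization and the clean-up of unused content run automatically, rather than as manual housekeeping.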

I’d also like to speak about artificial intelligence. To what extent have you implemented AI in your workflows—like subtitling, personalized recommendations, and so on?

There’s a major focus—not just from us, but from Rakuten as a group—on making AI our ally. Let’s put it that way.

So more than just at a local level, it’s a strategic priority at the group level, both in Japan and across Rakuten International. We’ve been provided with internal AI tools, but it goes beyond just having access to them. There’s constant encouragement and training to actually use them. The idea is that we all need to become comfortable with AI and use it to boost our productivity. At the engineering level, for example, our team uses GitHub Copilot, and we’re already seeing improvements in productivity—not just here in Europe, but across the company.

Beyond that, several AI-driven solutions are being developed with the goal of increasing efficiency. One area is localization. You mentioned subtitling, but localization goes further than that. Since our service runs across Europe, we need to adapt everything to different languages. Most of our markets are localized, and every time we roll out a new feature, it needs to come with the right copy in each language. So we’re looking at how AI can help us improve that process.
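As a minimal sketch of the kind of pipeline being described — the per-market JSON copy decks and market list below are hypothetical — the first step is simply finding which strings are missing in each locale and queueing them for an AI translation pass followed by human review:

```python
import json
from pathlib import Path

LOCALES_DIR = Path("locales")      # hypothetical layout: locales/en.json, locales/pl.json, ...
SOURCE_LOCALE = "en"

def load(locale: str) -> dict:
    return json.loads((LOCALES_DIR / f"{locale}.json").read_text(encoding="utf-8"))

def missing_keys(target_locale: str) -> list[str]:
    """Keys present in the source copy deck but absent from a target market's file."""
    source, target = load(SOURCE_LOCALE), load(target_locale)
    return sorted(set(source) - set(target))

def build_translation_jobs(markets: list[str]) -> list[dict]:
    """One job per missing string, ready to hand to an AI translation step."""
    source = load(SOURCE_LOCALE)
    return [
        {"locale": locale, "key": key, "source_text": source[key]}
        for locale in markets
        for key in missing_keys(locale)
    ]

if __name__ == "__main__":
    jobs = build_translation_jobs(["pl", "sv", "es", "it"])   # hypothetical markets
    print(f"{len(jobs)} strings queued for machine translation and human review")
```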

Of course, we’re also exploring how to enhance subtitling itself. And because we license a lot of content, we’re in constant dialogue with studios to understand the possibilities in this space. That will take some time, but it’s part of the roadmap.

In addition, there are initiatives at the group level focused on improving customer service—for example, by equipping support teams with AI agents that can assist them in their daily work. So there are several fronts where AI is already in use, and many more where it’s being actively developed.

Let’s go back to the new business line you mentioned earlier —Rakuten TV Enterprise. Could you go into more detail about it—and the technical challenges it involved?

Yes, definitely—this is one of our main technological innovations. Until last year, when we built our own FAST channels, we relied on external technology providers. That meant using third-party solutions for CDN, playout technology, scheduling and ad insertion engines. But we have a large number of owned-and-operated channels across Europe—100+, actually—and they’re distributed on many platforms.

Over the past year and a half or so, we’ve developed our own technology stack to support all of this internally. Today, all of our FAST channels are powered 100% by Rakuten TV’s own technology.

“We’ve built our own scheduler, so our in-house programming team can manage content without relying on any third-party tools”

We’ve built our own scheduler, so our in-house programming team can manage content without relying on any third-party tools. Our playback experts developed the playout system as well. For ad insertion, we still use external components, but we adapt and integrate them into our own system. And we continue to use our existing CDN network, which now also supports the enterprise offering.

What’s most important is that everything we’ve built is designed so that third-party partners—content owners or service providers—can adopt and use it easily. So it’s not just for us internally; it’s become the foundation for an entirely new line of business. This is the core of what we call Rakuten TV Enterprise. And it’s gaining a lot of traction. We’re seeing strong interest and onboarding a growing number of clients.

In terms of trends, how do you assess the evolution of FAST models in today’s media landscape—and how do you expect it will evolve in the future?

To talk about the future, I think it’s worth taking a quick look back as well. We launched our first FAST channels in 2020— March, if I’m not mistaken. We were one of the first European companies to do so at scale, across multiple countries.

Over the past five years, these channels have gained incredible traction for us. They’ve become a key growth driver—so much so that continuing to invest in them was a no-brainer.

What we’re seeing now is that a large part of the industry still hasn’t fully embraced FAST. Some TV manufacturers were ahead of the curve and adopted it early on, but now more and more are jumping on the FAST bandwagon. On the telecom side, especially in Europe, there’s still a lot of room for growth. Not enough telcos have adopted FAST yet.

That said, many of our current and potential partners are talking to us about launching their own FAST services. We believe content owners are increasingly seeing the value—it helps them reach incremental audiences. And we expect more broadcasters will step into the FAST space as well.

So, yes, we anticipate continued growth—not just in adoption and distribution, but also in the evolution of FAST formats. We believe in it strongly.

And to finish, what can you tell us about your future plans?

“There are initiatives at the group level focused on improving customer service—for example, by equipping support teams with AI agents that can assist them in their daily work”

The most recent development is Rakuten TV Enterprise, as I mentioned earlier. But like with any product, it’s never really “finished”—it keeps evolving.

What we’re already seeing is that new formats are being requested for FAST playback. Not many yet, but depending on the market, there are new or additional formats being asked for. At the same time, more manufacturers and telcos are joining the distribution side. And when you connect your channels to different platforms, it’s not just a simple copy-paste—it requires tailored integration every time. So we expect more of that.

Within the FAST space—and beyond it—our tools will continue to be adopted by more and more clients. And in this context, our “users” are actually other companies. That’s a different approach to product development, one where feedback from the clients plays a key role.

So, on a quarterly basis, we expect to roll out new features and improvements to the tools we’re offering in the market. Advertising will be at the core of all this. You’ll see new ad formats coming from us, and much greater signal transparency—meaning advertisers will clearly understand the kind of inventory they’re targeting.

For end users, we’ll offer more control over preferences. Recognized CMPs (Consent Management Platforms) will be supported across all platforms, so users feel confident that their privacy is respected.

And finally, we’ll keep working on making advertising feel even more integrated. Ads will blend more smoothly into the content stream, helping users stay engaged. We believe even the ad experience itself will become more immersive and enjoyable over time.

“You’ll see new ad formats coming from us, and much greater signal transparency— meaning advertisers will clearly understand the kind of inventory they’re targeting”

Simen K. Frostad, Chairman of Bridge Technologies:

“We need today tools that help non-experts understand what’s going on”

SIMEN

We interview this industry veteran to gain his market perspective. IP, the dominance of OTT platforms, the new production models emerging in sports broadcasting, and the expansion of software over hardware are among the key topics addressed in this conversation

Simen K. Frostad has a rare talent for explaining complex matters in a way that makes them seem simple. Perhaps that’s why he’s been able to clearly identify a growing market trend in today’s fast-paced technological landscape: the need to create accessible tools. As he puts it: “What we need today are tools that help non-experts understand what’s going on. That’s absolutely critical. Across IT, telecom, and the broader IP space, we’re seeing fewer people with in-depth knowledge. That means our tools must do more—providing deeper insights and even suggestions for how to fix problems.”

Co-founder of Bridge Technologies in 2004 and currently the company’s Chairman, Simen K. Frostad joins the Industry Voices section to share his seasoned market vision with TM BROADCAST readers.

A pioneer in recognizing the advantages of IP—a technology that has transformed the industry from top to bottom—he is convinced that the sector must fully embrace the shift to IT environments. Doing so, he stresses, requires a joint effort from network and video departments, which often struggle to communicate due to their different technical languages.

“A core part of our work today is enabling teams to immediately determine if a problem is transport-related or production-related, so the appropriate experts can step in and resolve it”

The dominance of OTT platforms, the new production models emerging in sports broadcasting, and the expansion of software over hardware are also key topics in this conversation.

How did Bridge Technologies come about?

We founded the company in 2004, driven by a strong interest in IP-based technologies. But the story actually begins earlier—back in 1999, when we created a Scandinavian IP-based contribution network. At the time, IP technology was still in its infancy, so we were definitely early adopters.

“IT teams are incredibly vigilant and have powerful orchestration tools that make operations much more efficient than traditional broadcast workflows,” says Simen K. Frostad. At the same time, he sends a message to the IT world: “It needs to be more time-sensitive and better understand the critical nature of real-time broadcast.”

We had a great start thanks to substantial support from Cisco, and we developed a network that transported TS-based MPEG-2 at what was then considered high bandwidth—50 megabits— with low latency over an IP/MPLS network.

That experience gave us deep insight into IP, and we quickly became enthusiastic advocates for it.

I also received a lot of support from Tandberg Television, which was a major player in broadcast systems at that time. I got to work closely with some of their top engineers—some of whom developed Tandberg’s very first IP interfaces. So, when Tandberg decided in 2004 to relocate much of its R&D from Norway to the UK, many engineers weren’t keen on moving abroad. That situation helped spark the creation of Bridge Technologies. We saw a clear need to improve visibility and understanding of packet transport—because with IP, a lot happens inside the switch, and it’s not always obvious how packets are actually flowing or how minor configuration changes can have a major impact.

HQ

“The remote aspect is transforming the way production is done. We’ve already seen it happening in major sports events over the past two years, and we’re going to see even more of it in the next two”

Broadcast media, in particular, is unforgiving. If you start losing packets, the result is visible artifacts in the picture. The problem is that an artifact caused by an encoder issue looks exactly the same as one caused by packet loss in transport. So our first mission was to create tools that could help distinguish between those two root causes—whether the issue was in the production chain or in the transport layer.

That remains a core part of our work today: enabling teams to immediately determine if a problem is transport-related or production-related, so the appropriate experts can step in and resolve it.
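One concrete signal that separates the two failure classes — shown here as a simplified sketch, not Bridge Technologies’ product logic — is RTP continuity: gaps in the sequence numbers mean packets were lost in transport, while an unbroken sequence with visible artifacts points upstream to the encoder or production chain. The multicast address below is hypothetical.

```python
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004        # hypothetical multicast group carrying RTP

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

expected, lost, received = None, 0, 0
while received < 100_000:                      # sample window
    data, _ = sock.recvfrom(2048)
    seq = struct.unpack("!H", data[2:4])[0]    # RTP sequence number lives in bytes 2-3
    if expected is not None and seq != expected:
        # Count the gap; packet reordering is ignored in this simplified sketch.
        lost += (seq - expected) % 65536
    expected = (seq + 1) % 65536
    received += 1

print(f"received={received} lost={lost} ({100 * lost / (received + lost):.3f}%)")
```

A clean counter here with a broken picture tells the operator to call the production or encoding team rather than the network team.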

What are your main strategic goals?

One of our main strategic goals is to create tools that are accessible to non-experts—people who may not fully understand how these complex systems work, but who need enough clarity to identify issues and bring in the right teams to fix them. Even in production environments with incredibly talented engineers, many don’t have deep IP expertise. Conversely, IT departments that support parts of production often don’t understand media transport. So, we aim to bridge that gap—translating advanced metrics into insights that can be understood and acted upon by non-specialists.

That brings me to one of my questions—the convergence between video and IT departments. What would you say are the main challenges in making these two areas work together?

Well, first of all, these two groups have grown up quite differently. The IT department is full of clever people who understand a lot about scaling, processing, and systems. But traditionally, they’ve focused on supplying their own set of tools and capabilities within a service-based framework. Telecommunications people, on the other hand, have tended to embed a lot of intelligence directly into the network.

So even though they share common goals, their approaches—and their language—are different. That creates a bit of friction. The world is moving more and more towards IT because it’s such a vast and well-developed industry, and that means broadcasters have to learn how to communicate better with IT professionals.

The challenge is that the IT world can be slow to adapt to the specific needs of broadcast. In high-end broadcasting, there’s no room for packet loss. If you’re delivering low-latency services, there’s no error correction fast enough to compensate for lost packets. So, while I have a lot of respect for the IT industry, I do think it needs to be more time-sensitive and better understand the critical nature of real-time broadcast.

That said, the broadcast industry can also learn a great deal from IT. IT teams are incredibly vigilant and have powerful orchestration tools that make operations much more efficient than traditional broadcast workflows. So yes, we’re heading toward a more IT-centric world, and IT will continue to dominate—but both sides have something valuable to offer.

IP: Present and future

“IP is embedded in every aspect of broadcast. There’s no avoiding it. Even in SDI environments, you’ll find devices controlled via IP, so a solid understanding of IP is essential”

“What’s coming next, in my view, is the expansion of networks not just to transport data, but also to support things like memory sharing and server clustering”

You mentioned founding Bridge Technologies back in 2004. How would you assess how IP technology has evolved since then?

I co-founded Bridge with three brilliant colleagues from Tandberg Television. One of the strategic missions that has remained constant is addressing the decline in deep expertise across the industry.

What we need today are tools that help non-experts understand what’s going on. That’s absolutely critical. Across IT, telecom, and the broader IP space, we’re seeing fewer people with in-depth knowledge. That means our tools must do more—providing deeper insights and even suggestions for how to fix problems.

Right now, IP is embedded in every aspect of broadcast. There’s no avoiding it. Even in SDI environments, you’ll find devices controlled via IP, so a solid understanding of IP is essential.

Interestingly, while bandwidth has exploded—moving from 155-megabit networks in the late ’90s (which seemed huge at the time), to gigabit, then 10 GbE, and now up to 100, 400, and even 800 GbE—the core challenges haven’t changed. You still need to trace packets with absolute precision.

What’s coming next, in my view, is the expansion of networks not just to transport data, but also to support things like memory sharing and server clustering. That’s where IP is headed—towards enabling powerful, distributed computing environments. And as vendors, we’re genuinely excited about what’s possible.

I’d also like to hear your insights about OTT platforms and streaming services. How do you see these platforms evolving, and how do you think they will coexist with traditional broadcast in the future?

We’ve always believed that OTT would become dominant—mainly because it makes excellent use of standard IT technologies. As soon as you introduce web caching, you’ve got the basic framework for an OTT system.

Of course, it’s not without its challenges. When you implement caching across multiple layers, maintaining precise timing becomes essential—especially if you want to reduce latency. It’s relatively easy to manage OTT systems with 120-second delays using standard servers and web infrastructure. But if you want to bring that down to just a few seconds, the timing between caching layers needs to be extremely accurate.
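A rough back-of-envelope model — our own illustration, not Frostad’s figures — shows why segment length and player buffering dominate OTT latency, and why shaving it down forces the caching layers to be so precisely timed:

```python
def ott_latency(segment_s: float, buffered_segments: int,
                encode_s: float = 2.0, cdn_hops: int = 2, hop_s: float = 0.5) -> float:
    """Rough glass-to-glass latency estimate for segmented OTT delivery.

    All figures are illustrative defaults, not measured values.
    """
    return encode_s + cdn_hops * hop_s + segment_s * buffered_segments

# Classic configuration: 6 s segments, player buffers three of them.
print(f"classic segmented OTT: ~{ott_latency(6.0, 3):.0f} s")
# Low-latency configuration: 1 s segments (or chunked parts), two buffered.
print(f"low-latency style:     ~{ott_latency(1.0, 2):.0f} s")
```

With long segments the estimate lands in the tens of seconds regardless of how good the servers are; only shorter segments and tighter buffers, and therefore tighter timing between caches, bring it down to a few seconds.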

“While I have a lot of respect for the IT industry, I do think it needs to be more time-sensitive and better understand the critical nature of real-time broadcast”

You could argue that multicast is a better solution in terms of latency and distribution efficiency. And technically, that’s true. But building large-scale multicast networks is complex and demands highly reliable, often expensive, switching hardware. OTT, on the other hand, is a much more cost-effective solution.

Look at companies like Netflix—they’ve taken OTT to its limits. One of their major advantages is owning their own CDN. They have distribution nodes located very close to the end user, which helps a lot in terms of performance and latency. They’re now pushing further into real-time OTT solutions. And I think as latency requirements become stricter and more players start building their own OTT CDNs, we’ll see OTT become the dominant model.

I don’t think there’s much growth left in traditional linear distribution.

Even what we still call “linear” is mostly IP-based today. Although terrestrial transmission in Europe hasn’t transitioned to IP yet, it’s clearly moving in that direction.

The beauty of OTT is that once you switch to IP-based delivery—sending content in packet blocks or segments—you can transport virtually anything. I’m a big fan of OTT because it’s incredibly powerful to be able to cache content on standard HTTP servers.

VB440 IN THE MCR
“I think as latency requirements become stricter and more players start building their own OTT CDNs, we’ll see OTT become the dominant model”

In line with what you’ve just mentioned, broadcasters today operate across a wide range of delivery formats. How is Bridge Technologies addressing this technological diversity in its product portfolio?

Well, we aim to support all aspects of broadcast because most operators— despite moving in a specific direction—still use a mix of technologies. A typical broadcaster will use satellite, OTT, contribution links, terrestrial delivery, and cable systems.

That’s why we’ve built a complete portfolio capable of monitoring all these delivery methods from a single central point.

It’s crucial for broadcast operations to maintain visibility over legacy systems too.

Interestingly, legacy systems are actually seeing a kind of resurgence. With more advanced monitoring and analysis tools, broadcasters are able to cut costs— something that’s a top priority right now. Running multiple technologies in parallel drives up expenses. So we’re proud that our tools are helping reduce those costs by making it viable to continue operating legacy networks alongside new distribution models.

Live production is another topic I’d like to discuss—especially in sports, where it’s evolving rapidly. How do you assess this transformation, and which trends do you believe will define its future direction?

About seven or eight years ago, we took our first steps into live production. That was when we could begin using commoditized 100 GbE interfaces with servers and standard CPUs.

VB440 METRICS VIEW

Just as a side note, all of our platforms are software-based. They’re built on CPU architecture, and we spend a lot of time developing software— even for our embedded systems. So when we started working with larger servers using interfaces like 200 GbE (which we now support), it became clear we could do some pretty spectacular things.

One major advantage of this software-based approach is the ability to use HTML—especially HTML5—as a flexible user interface. Today, we’re delivering full-motion video over HTML with extremely low latency. That means you can decouple the operator from the physical hardware.

Typically, you’ll have servers in a data center or on-prem production facility, but the creatives and operators can be located anywhere. That’s the biggest shift we’ve seen, particularly in high-end sports production. You no longer need your entire team on-site at the venue.

Instead, teams are being centralized in hubs or remote production facilities.

That saves a huge amount of money and logistics— and also improves working conditions. Rather than cramming people into OB vans, you can provide a more relaxed and efficient environment for delivering high-quality content.

We supported the World Cup in Qatar a few years ago, where everything was done centrally. One especially interesting aspect was that even camera shading was handled remotely. And that’s traditionally been a very tough nut to crack—because camera shaders require zero latency. Any delay makes their job nearly impossible. If there’s lag, they’ll overshoot or undershoot when adjusting the iris.

But in Qatar, we had our devices at the venue, while all shading and operations were carried out at the broadcast center. There were four control rooms, each managing 12 camera positions, and shading was done remotely—four cameras at a time—for every game. It was a landmark moment.

“One example we’re showcasing at IBC this year is speaker control. We now support IP-based Ethernet speakers that are powered via PoE”

After that, remote shading and centralized production became more widespread—simply because it works and saves a lot of costs.

Then came the Women’s World Cup, which was even more complex. Unlike Qatar, where everything is relatively close and connected by fiber, this time the central operation was in Sydney. Games were held in Perth (a continent away) and even in New Zealand. Those are vast distances.

Yet we managed to keep latency extremely low, making centralized production possible once again. We firmly believe this model will be used for all major sports events.

We already saw it implemented at the Olympics, and I think it will become the standard for all high-end, real-time programming.

Regarding the future, can you share any upcoming product launches or improvements, or any exciting projects you haven’t mentioned yet that you’d like to highlight?

One important goal is the ability to deliver the signal wherever the user is. That’s crucial. But another key development comes from centralizing functions into software running on COTS servers. When you do that, you suddenly have the ability to replace a lot of traditional hardware.

One example we’re showcasing at IBC this year is speaker control. We now support IP-based Ethernet speakers that are powered via PoE. Our partners in Finland are doing a great job developing PoE speakers that can receive AES67 signals directly.

That means we can now stream directly to speakers using software-based control, eliminating the need for external hardware controllers. It’s a great example of how we’re helping reduce hardware footprints and consolidate infrastructure.
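At packet level, “streaming directly to a speaker” is conceptually simple: AES67 audio is RTP carrying 24-bit linear PCM at 48 kHz, typically in 1 ms packets on a multicast group. The sketch below is purely illustrative and deliberately incomplete — it omits PTP clock synchronization, SDP/SAP announcement and precise packet pacing, all of which a real AES67 deployment requires, and the multicast address is hypothetical.

```python
import math
import socket
import struct
import time

GROUP, PORT = "239.69.1.1", 5004              # hypothetical multicast group
RATE, PTIME_SAMPLES = 48000, 48               # 48 kHz, 1 ms of mono audio per packet

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)

seq, ts, ssrc, payload_type = 0, 0, 0x12345678, 96   # dynamic RTP payload type

def l24_sine(start_sample: int, n: int, freq: float = 440.0) -> bytes:
    """n samples of a test tone as 24-bit big-endian PCM (the L24 payload format)."""
    out = bytearray()
    for i in range(n):
        v = int(0.5 * 8388607 * math.sin(2 * math.pi * freq * (start_sample + i) / RATE))
        out += v.to_bytes(3, "big", signed=True)
    return bytes(out)

for _ in range(int(5 * RATE / PTIME_SAMPLES)):        # roughly 5 seconds of audio
    # 12-byte RTP header: version 2, no padding/extension, marker 0.
    header = struct.pack("!BBHII", 0x80, payload_type, seq, ts, ssrc)
    sock.sendto(header + l24_sine(ts, PTIME_SAMPLES), (GROUP, PORT))
    seq = (seq + 1) % 65536
    ts += PTIME_SAMPLES
    time.sleep(PTIME_SAMPLES / RATE)                  # crude pacing, not PTP-locked
```

The point of the example is simply that once the speaker itself terminates the network stream, everything between the server and the driver can live in software.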

Ultimately, our goal is to continue expanding what’s possible in software—to replace more and more bespoke equipment with flexible, server-based solutions. That way, servers can deliver a wide range of services from a centralized point.

This also helps reduce the physical infrastructure, which is often the most expensive part of a deployment. If you need a dedicated fiber line to every operator workstation, you’ll also need converters, routers, and other specialized gear. But if you can replace all that with a web browser—providing access to media, control, and monitoring—you gain major cost efficiencies and flexibility.

“Interestingly, legacy systems are actually seeing a kind of resurgence. With more advanced monitoring and analysis tools, broadcasters are able to cut costs”

With this approach, an operator panel can become an audio controller, a signal generator, a multiviewer, or whatever else you need at that moment. So we strongly believe in simplifying infrastructure while making it more versatile. Instead of having high-bandwidth ST 2110 signals going directly to each operator, we’re advocating for simple 1 GbE or 10 GbE networks that still deliver everything they need.

Final question. Considering all these elements, what would you say sets Bridge Technologies apart from other competitors in the market?

The most important people in any production are the operators—the creatives. They’re the ones making decisions and telling stories that matter to the audience.

Even though traditional broadcast may be in decline, media consumption is growing—especially when it comes to high-value content. And we believe we’re truly at the cutting edge when it comes to delivering tools that support those decision-makers, especially through browser-based workflows.

The remote aspect—having everything accessible via web interfaces—is transforming the way production is done. We’ve already seen it happening in major sports events over the past two years, and we’re going to see even more of it in the next two.

We’re very fortunate to be at the forefront of this transformation, and we’re proud that so many large organizations are using our solutions to solve complex operational and technical challenges.

CAMERA SHADING IN AN OB TRUCK

Cinematic Broadcast: Much more than a buzzword

Professional partnerships with guarantees for success

This new trend in the sector is here to stay. The various players in cinema and broadcast, including camera and optical manufacturers, must understand each other in order to generate professional partnerships with guarantees for success.

Whenever someone wants to sum up what matters most about a feature film, it often seems enough to say, emphatically, “it’s cinema”. Exactly the same thing happens with TV content: by saying “it’s television”, everything becomes clearer. I have lost count of the times I have heard, among friends, in professional forums or when recommending audiovisual work, expressions such as “this is cinema”, “it’s not bad, for a TV program, it works”, “that TV series is cinema” or “the story is very good, but it looks like video/TV”, among other comments.

This link between cinema and TV has been present since the time when both media needed to coexist and work together to improve the financial results of the audiovisual content they generated. At first, it was pure commercial necessity, which had an impact on funding channels, the exploitation of audiovisual works and the timing of a film’s arrival on the various TV distribution channels (conventional TV and platforms): what is known as distribution or exploitation windows.

This first interaction, in return, allowed TV to offer another type of content, of higher aesthetic and narrative quality, while also expanding its own programming offer.

Initially, content was shot with traditional film cameras, essentially on 35 mm photosensitive stock, which after several processes was converted into electronic images suitable for TV broadcasting (the telecine process). First came feature films, then documentaries (also shot in S16 mm and 16 mm film formats) and, later on, commercials, video clips and fiction series.

Film and television have always had their own peculiarities. Until now. Both audiovisual ecosystems are closer than ever before. In this sense, we can identify several aspects that make the fusion of the two environments (TV-Cinema; Cinema-TV) real:

› The digitization of processes, phases and workflows for producing cinema/TV content, from capture to delivery. Nowadays, both cinema and TV use the same raw material for the creation of audiovisual content: it’s all zeros and ones.

Nowadays, a film shooting and a video/TV recording are not very different from each other, since they use the same image and sound capture means

› Greater efficiency and performance in the management of digital files, maintaining high levels of quality throughout the production process.

› The democratization of state-of-the-art technology, especially in cameras. Tight prices and fairly similar features facilitate the exchange between the two professional environments.

› The development of more specific optics, designed to minimize aberrations and optical defects for UHD/4K, allowing full integration in multi-camera productions and live broadcasts.

› The change in TV technology, with the move from SD to UHD/4K, the inclusion of HFR and the ability to work within a wider color gamut, all under broadcast regulations.

› The use of metadata, proxies and intermediate file formats (DPX, IMF, XML...).

› The training of technical staff (“digital natives”) in the latest technology, and an ongoing adaptation to ways of working based on multitasking, multidisciplinary and collaborative approaches.

› The visual culture of the audience or public with ever-present references from both ecosystems, together with a high consumption of highlights and video games.

› A constant search for more visual, increasingly attractive experiences and a renewed audiovisual language.

› The business drive of TV stations, VOD and online TV platforms (the latter, such as HBO and Netflix, very intent on supplying content of high visual quality).

Nowadays, in the digital age, we are entering a new stage: TV programs are adopting an aesthetic closer to the references a film provides. That is, a broadcast program with a ‘cinematic look’ or, as others call it, in ‘cinema mode’.

The most pessimistic ones believe that cinematic broadcasting can go down the same path as 3D; after several attempts it may not end up being implemented in a productive, industrial fashion

This is what has come to be known as cinematic broadcast. The current trend is to apply it to program formats involving live or as-live broadcasts with a multi-camera style of production.

In content production processes known as recordings, there is already enough experience to ensure a correct cinema-TV workflow. For example, in American and Spanish fiction series, and also in Turkish soap operas.

Nowadays, a film shooting and a video/TV recording are not very different from each other, since they use the same image and sound capture means.

It should be remembered that, in the era of shooting on photosensitive material, there were clear differences with respect to the electronic media used in TV: from the type of camera (mostly 35 mm cinema cameras) to the attainment of the final “zero copy”, via the chemical laboratory, negative cutting, color grading, the positive print and the embedding of effects, transitions and lettering.

Cinema and television are both striving for a look defined by some of the following parameters:

› Sharper images and more accurate colors.

› An aesthetic with a shallower depth of field.

› Greater detail between whites and blacks.

› The use of High Frame Rate (HFR).

› Panoramic aspect ratios.

› A more dramatic and emotional narrative.

› The beauty of blur.

› Precise bokeh effects.

› The absence of aberrations and distortions in the formation of the image.

› A wider, richer colorimetry, closer to real-world references.

Cinematic broadcasting, like cinema, requires: cameras with larger sensors (such as Super 35 or full frame); extended (wide) gamut and greater dynamic range; logarithmic images; dedicated, specialized optics; exposure measured in T-stops; lens sets with shorter fixed focal lengths; a minimum of 10-bit linear recording and processing; a minimum bit rate of 240 Mbps at 23.98 fps; color management, color correction and live grading; and the application of LUTs (Look-Up Tables) and scene-referred transfer functions (for example, S-Log3, Log C, Log3G10, V-Log).
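To illustrate just one item on that list: applying a LUT is, computationally, a simple mapping of code values. The sketch below uses a made-up 1D curve in place of a real .cube file from a colorist or camera manufacturer, purely to show the mechanics.

```python
import numpy as np

# Hypothetical 1D LUT: a smoothstep "look" standing in for a real .cube file.
lut_size = 33
x = np.linspace(0.0, 1.0, lut_size)
lut = np.clip(3 * x**2 - 2 * x**3, 0.0, 1.0)

def apply_1d_lut(frame: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Map normalized code values (0..1) through the LUT with linear interpolation."""
    grid = np.linspace(0.0, 1.0, lut.size)
    return np.interp(frame, grid, lut)

# A stand-in 2160p log-encoded luma frame, values normalized to 0..1.
log_frame = np.random.rand(2160, 3840).astype(np.float32)
display_frame = apply_1d_lut(log_frame, lut)
print(display_frame.min(), display_frame.max())
```

In live grading the same mapping runs in hardware or GPU shaders on every frame; the creative work lies in authoring the curve, not in applying it.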

Despite knowing what we want and how to achieve it, we face the difficulty of applying it to TV under very specific constraints: the format and duration of a TV program, multi-camera production, the essence of live broadcasting, and the wired and wireless connectivity typical of TV broadcasts and transmissions.

Cinematic broadcasting is, in essence, about bringing together the strengths and experience of both professional environments to make pure TV. From cinema, we take the knowledge and operations needed to obtain quality images with the much-coveted look; from TV, we take its mastery of configuring and running live multi-camera productions.

The most pessimistic believe that cinematic broadcasting could go down the same path as previous innovations (as was the case with 3D content): after several attempts, it may not end up being implemented in a productive, industrial fashion. They see only issues, such as the unavailability of camera equipment, connectivity between equipment and image signal processing, and a lack of specialized camera technicians and operators.

The most optimistic, among whom I count myself, see that cinematic broadcasting is going to be much more than a trend or a buzzword

The most optimistic, among whom I count myself, see that cinematic broadcasting is going to be much more than a trend or a buzzword. At present, several solutions and measures have already been adopted so that this television way of working live, while keeping a cinematic look, becomes established within the TV sector, across different types of productions and needs:

› Camera manufacturers are offering a very interesting range, the most representative names being Arri, Red, Sony, Canon, Blackmagic and Panasonic, with different types of camera, from modular to compact, including PTZ and studio cameras. Even Fujifilm joined this trend in 2025 (with its GFX Eterna model).

We can consider three large groups here:

– Digital cinema cameras with broadcast capabilities (e.g. Red Komodo 6K, Arri Alexa Mini LF, or Blackmagic URSA Mini Pro 12K).

– Broadcast video cameras with cinematic features (e.g. Sony PXW-FX9, Canon EOS C300 Mark III, Panasonic VariCam LT).

– Cinematic Style Live Stream Cameras (Sony HDC-3500 and/or Grass Valley LDX 86N).

One of the platforms that is clearly committed to quality audiovisual content, with that cinematic look for VOD consumption, is Netflix. Some of the cameras recommended for filming on this platform are: Arri LF, Mini LF, 65 and 35; Canon EOS C300 Mark II, Mark III, C500 Mark II, C700, C700 FF, C80, C400, R5C; Panasonic Varicam 35, LT, Pure, AU-EVA1, S1H, BGH1, BS1H, AK-UC4000; Red Weapon Monster 8K VV, Dragon 8K VV, Helium 8K S35, W-Gemini 5K S35, Komodo 6K, V-Raptor 8K W…; Panavision DXL2; Sony Venice, Venice II 6K / 8K, Burano, FX9, F55, F65, FS7 II, F5, FX6, FR7, PXW-Z450, HDC-F5500; Blackmagic URSA Mini Pro 4.6K / G2, Mini Pro 12K OLPF… (see https://partnerhelp.netflixstudios.com for potential updates).

› The duality of signal streams: physical recording in raw, alongside a live connection under broadcast standards.

With this cinematic broadcast look, we try to update some content that was initially conceived for TV towards something more visual, spectacular and of high quality in the eyes of the audience

› Mixed multi-camera setup, i.e. using the broadcast TV video signal together with a more cinematic signal. In this regard, some very interesting decisions are being made.

On the one hand, multi-camera productions that interleave cinematic images with broadcast-style images from different camera models (always trying to make the changes in image and visual aesthetics easy for viewers to accept); and, on the other hand, using cameras within a multi-camera environment that offer the possibility of switching the visual look (broadcast or cinema) within the same camera.

› Optics manufacturers and developers are facilitating the integration of high-end products into live, multi-camera television production. Fujifilm and Canon are at the forefront with real-world solutions: more compact optics prepared for larger sensors, colorimetry standards, motorized zoom lenses, grips with 16-bit encoders, and integrations with different mounts (G, BL, PL...).

– A very significant mention is the Fujinon range of lenses (Fujifilm): Duvo (showcased at the NAB Show 2024) and the LA30x7.8BRM-XB2 optics (NAB Show 2025).

› The training of specialized technicians, based on deeper knowledge of the digital environment, workflows and fundamental technical parameters, together with vocational training and the corresponding professional accreditations.

› Broadcast regulatory approval and compliance (SMPTE ST 2067-21:2016, SMPTE ST 2067-21:2020 or SMPTE ST 2067-21:2023; SMPTE ST 2086; ITU-R BT.709 / D65 / ITU-R BT.1886; P3 D65 / SMPTE ST 2084 (PQ); ITU-R BT.1702-1).

Very important are:

– The SMPTE ST 2048-1 standard defines the main characteristics of an image with a 4K resolution (4096×2160 pixels).

– The SMPTE ST 2110, SMPTE ST 2082-12 and SMPTE ST 2036-1 standards for 8K UHD (7680×4320 pixels).

– Recommendation ITU-R BT.2100 proposes three levels of detail or resolution: high-definition television (1,920×1,080), 4K UHDTV (3,840×2,160) and 8K (7,680×4,320), all using the progressive image system with wide color gamut and the frame-rate range included in Recommendation ITU-R BT.2020 (UHDTV).

› Connectivity solutions based on 6G-SDI and 12G-SDI, the use of converters and fiber-optic transmission systems, Ethernet networks and/or 5G/Wi-Fi links (a quick bandwidth calculation after this list shows why these data rates are needed).

› Coordination and integration of responsibilities between the technical teams, especially between engineering and technical operations, production, filmmaking, camera, post-production and color grading. These new lines of work make it possible to obtain adapted solutions, share criteria and facilitate the use of LUTs.
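Here is that quick bandwidth calculation, as back-of-envelope arithmetic of our own rather than a figure from the article:

```python
# Why UHD live links need 12G-SDI or an ST 2110 network rather than 3G-SDI.
width, height, fps = 3840, 2160, 50
bits_per_pixel = 10 + 2 * 10 / 2        # 4:2:2 10-bit: luma + half-rate chroma = 20 bpp

active_video_gbps = width * height * fps * bits_per_pixel / 1e9
print(f"2160p50 4:2:2 10-bit active video: about {active_video_gbps:.1f} Gbit/s")
# Roughly 8.3 Gbit/s of active picture alone: beyond 3G- or 6G-SDI, hence 12G-SDI
# (11.88 Gbit/s nominal) or uncompressed ST 2110-20 flows over 10/25 GbE networks.
```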

From the above, it seems clear that broadcast TV is committed to cinematic content. But is the reverse also true? Of course it is, but we will leave that for another article.

The players involved in film and live broadcasting, as well as camera and optics manufacturers, are forced to understand each other and generate professional alliances with guarantees for success

To sum up, when this cinematic broadcast look is sought, the aim is to update content that was initially conceived for TV into something more visual, more spectacular and of higher quality in the eyes of the audience. There will surely still be television content with a broadcast-only aesthetic, such as news, contests, debates, interviews and talk shows, among others. However, the live broadcasting sector will gradually and naturally adopt a more cinematic look, in order to attract a greater number of viewers to TV.

Some very positive examples and experiences are:

› In Spanish TV, TVE’s Radio 3 concerts, the first live television production with 4K technology, using Sony FX9 full-frame sensor cameras to achieve a cinematic look.

› Sony and Trans Audio Video (TAV) for the filming of the Vasco Rossi concert held at the San Siro stadium in Milan. For the first time, up to 22 Sony cameras were used, most of them with full-frame sensors.

› The 2021 Super Bowl, where a Sony Venice camera equipped with Sigma Art 1.4 lenses was used.

› LaLiga (Spanish soccer premier league) and Mediapro, featuring a large sensor mirrorless camera for their broadcasts.

Precisely, TM BROADCAST hosted a breakfast called “Cinematic Broadcast” (14 May 2024) at the Eurostars Madrid Tower Hotel, where several renowned professionals from the television industry in our country had the opportunity to discuss the advantages and challenges of integrating cinema techniques into TV production.

The trend is crystal clear. The players involved in cinema and live broadcasting, as well as camera and optics manufacturers, among others, are forced to understand each other, communicate with each other and generate professional partnerships with guarantees for success.

TM Broadcast Awards: Prestige and Innovation

TM BROADCAST SPAIN has held the fifth edition of these awards, now an undisputed benchmark in the country’s broadcast industry due to their rigor and independence.

For our international readers, this is also an opportunity to discover some of the most exciting projects currently shaping Spain’s audiovisual landscape.

Organised by the Spanish edition of this magazine, the TM BROADCAST Awards have celebrated their fifth edition, cementing their position as a leading recognition platform for the most innovative projects in Spain’s broadcast industry.

Their prestige stems from the strong engagement of companies—eager to participate year after year—and the commendable work of the jury, which thoroughly reviews each submission before meeting to deliberate, ensuring a fair and well-informed decision as outlined in the official rules.

The expert panel reached its conclusions on May 20 at the Daró Media Group headquarters, publisher of TM BROADCAST, and was made up of some of the most respected and emblematic figures in Spanish broadcast. To ensure impartiality, jury members had no ties to the nominated projects. Participants included former technical directors from leading Spanish broadcasters—Vicente Alcalá, Koldo Lizarralde, and José Enrique Zamorano— alongside prominent industry experts and regular contributors to the magazine: Luis Sanz, Luis Pavía, and Carlos Medina.

After over two hours of rigorous and enriching discussion, the jury selected the most deserving success cases from the numerous entries, based on excellence in each category. Some winners were chosen unanimously, others by simple majority. In categories with diverging opinions, the jury awarded special mentions to additional projects.

Winners List

Best Infrastructure Upgrade Project

Winner: Datos Media

Project: Comprehensive renewal of news production systems in Canarias TV using cloud-based solutions

Summary:

Canarias TV (a regional Spanish broadcaster in the Canary Islands) has undergone a complete overhaul of its news production systems, integrating cloud-based solutions in its Tenerife and Gran Canaria facilities. Led by Datos Media Technologies, the project involved upgrading ingestion, transcoding, production, editing, graphics, and playout systems—all based on Avid MediaCentral Cloud UX. One of the key milestones was the simultaneous launch of the new system in both locations, enabling broadcast within the same week.

Best IP Technology Integration Project

Winner: TSA (Telefónica Servicios Audiovisuales)

Project: Continuity system upgrade in Sant Cugat and transition to IP technology

Summary:

RTVE’s production center in Sant Cugat (Catalonia) underwent a continuity system overhaul led by TSA, transitioning from an SDI-based infrastructure to SMPTE ST 2110-compliant IP technology. The project aimed to achieve more efficient, flexible, and scalable broadcasting, positioning RTVE Sant Cugat at the forefront of next-generation media production and distribution.

Best 5G Project

Winner: ISTEC

Project: Audiovisual broadcasting using 5G Broadcast in SFN environments

Summary:

ISTEC, the public operator of the Valencian Community, led a project to assess the potential of 5G Broadcast technology as an alternative or complement to DVB-T2 for audiovisual content distribution—particularly relevant in high-demand environments and mobile devices.

> Special Mention: Ges-IT

Project: Zurich Mapoma Madrid Marathon and Half Marathon

Summary: Ges-IT’s technical deployment during the Zurich Mapoma Marathon and Half Marathon marked a milestone in sports event broadcasting. The project combined 5G backpacks, drones, autonomous cameras, Girobikes, and H.265 encoding to enhance live coverage and deliver an immersive experience for viewers and media.

Best Postproduction Project

Winner: 3cat

Project: Audio Control C

Summary:

Developed at the Sant Joan Despí Production Center, the “So-C Control” by 3cat (Catalonia’s public broadcaster) is a landmark in modern sound postproduction. Located in Control C, the facility is noted for its versatility, acoustic precision, and cutting-edge technologies tailored for current and emerging content formats.

Best Innovation Project

Winner: TSA

Project: TSAMediaHUB – integrated platform for content and audiovisual services management

Summary:

TSA developed TSAMediaHUB, a platform that redefines audiovisual content management through intelligent automation, hybrid cloud infrastructure, and modular architecture.

> Special Mention: 3cat

Project: Sign Language via WebAssembly on DTT

Summary:

3cat developed a solution to improve accessibility in Digital Terrestrial Television using WebAssembly, offering customizable sign language services— an alternative to the fixed embedded interpreter video, enhancing viewer experience and flexibility.


Best AI Implementation Project

Winner: 3cat

Project: Journalistic assistant

Summary:

This project integrates generative AI into the 324.cat newsroom’s CMS. Launched in November 2024, it assists journalists by optimizing content creation and publishing processes using Azure OpenAI’s GPT-4-based generative AI.

Best Sports Technical Production

Winner: RTVE

Project: UHD production of the Paris 2024 Olympic Games

Summary:

RTVE deployed an extensive technical team for the Paris 2024 Olympics. Over 100 professionals were sent to France to deliver full broadcast, radio, and digital platform coverage. The TV output included 400+ hours on La1 UHD via DTT, simulcast on La1 HD and Teledeporte. RNE aired 150+ hours of programming from Paris, and RTVE Play offered six simultaneous live feeds, reaching 6.2 million users and growing 61% from Tokyo 2020. Social media views surpassed 469 million.

Best Entertainment Technical Production

Winner: Webedia

Project: El Partidazo de YouTubers 5

Summary:

This large-scale entertainment-sports event, held on October 12, 2024, at the Metropolitano stadium (Madrid), combined high-end production with influencer-driven content. Organised by Webedia and DjMaRiiO, it attracted over 30,000 in-person attendees and 1.5 million online live viewers, standing out as one of the year’s top transmedia events.

Best News Technical Production

Winner: Grup Mediapro

Project: DANA coverage

Summary:

Overon (part of Grup Mediapro) led a high-impact technical news deployment during the severe flooding (DANA) in Valencia in October 2024. They were first on the scene and operated under extreme conditions for over a month, ensuring national and international coverage.

Best Live Technical Production

Winner: Grup Mediapro

Project: America’s Cup

Summary:

The 37th America’s Cup, held in Barcelona in summer 2024, became the first in its 173-year history to be broadcast in 4K UHD HDR with 5.1 surround sound. Grup Mediapro, selected by America’s Cup Media as host broadcaster, delivered the complete broadcast infrastructure enabling global distribution.

Best Audiovisual Production/Installation

Winner: TSA

Project: Virtual production with holography for the ERT plenary session

Summary:

TSA led an innovative virtual production using real-time holography for the European Round Table for Industry (ERT) Plenary in Madrid. The session included a live interview between Chema Alonso in Madrid’s Gran Vía Movistar store and a remote speaker at Distrito Telefónica’s La Cabina studio.


Best Archive Integration Project

Winner: TSA

Project: Canal Extremadura

Summary:

TSA renewed and ensured the long-term sustainability of Canal Extremadura’s digital archive platform using Quantum’s StorNext 6. The project will guarantee operational continuity for at least five more years, maintaining the audiovisual heritage management in place since 2018.

Best OTT Project

Winner: TSA

Project: OTT headend for O2 TV

Summary:

TSA led the design and deployment of the headend for O2 TV’s OTT service in Germany. The project ensures high-quality, scalable, multi-device-compatible delivery, addressing growing demand for digital content.

Best Live Broadcast Project by a Training Center

Winner: IES Puerta Bonita (Madrid)

Project: EOS – The End of Thought

Summary:

“EOS – The End of Thought” is a live broadcast project entirely created by second-year vocational students of Video DJ and Sound Technician studies at IES Puerta Bonita. It explores the interaction between humans and AI through an audiovisual performance that blends technology, social reflection, and audience participation.

Other

*This category includes projects that didn’t fit neatly into other sections but were considered deserving of recognition due to their excellence.

Winner: RTVE

Project: First Spanish DTT channel with UHD technology

Summary:

In February 2024, RTVE launched Spain’s first UHD Digital Terrestrial Television channel, coinciding with the nationwide shutdown of standard definition signals. This move positions RTVE as a European pioneer—alongside France Télévisions—in free-to-air UHD DTT broadcasting.

5th Edition of the TM Broadcast Awards (2025)

Note: This article is a shortened version of a more extensive report published in the Spanish edition of TM BROADCAST.
