TM Broadcast International #127, March 2024



Welcome to Issue #127 of TM Broadcast International, which promises a rich collage of insights and innovations from the world of broadcast and audiovisual production. This issue features exclusive interviews with industry frontrunners such as TVMonaco, a new player in the broadcasting field, and an insightful conversation with Will Cohen, the VFX maestro behind Doctor Who’s mesmerizing visuals at Bad Wolf, in which he shares his experiences and sheds light on the intersection of storytelling and technology.

In this issue, readers will also find an analysis of the broadcasting industry’s gradual transition to Internet Protocol (IP) technology, showcasing its role in enhancing the flexibility, efficiency, and quality of content delivery. The article features

Editor in chief

Javier de Martín

Key account manager

Patricia Pérez

Editorial staff

case studies from leading organizations like the Canadian Broadcasting Corporation (CBC), ITN, Foxtel, and Fubo, highlighting their journeys in adopting IP technology to improve content production, distribution, and collaboration across different locations.

This issue also features specialized articles on “Color in Broadcast”, delving into the technical and aesthetic aspects of color in TV production, and “AI in Content Creation”, highlighting the cutting-edge use of artificial intelligence in crafting compelling content.

Join us as we embark on this enlightening journey through the latest trends, challenges, and breakthroughs that define the dynamic world of broadcast and audiovisual production.

Creative Direction

Mercedes González


Laura de Diego

Published in Spain

ISSN: 2659-5966


#127 March 2024


TM Broadcast International is a magazine published by Daró Media Group SL, Centro Empresarial Tartessos, Calle Pollensa 2, oficina 14, 28290 Las Rozas (Madrid), Spain. Phone: +34 91 640 46 43

News

Inside the TARDIS: A Conversation with Will Cohen, the Vision Behind Doctor Who’s Latest VFX Wonders

In an exclusive interview with TM Broadcast International Magazine, we delve into the creative mind behind the visual spectacles of one of Britain’s most beloved shows, Doctor Who. Will Cohen, the mastermind VFX producer responsible for bringing the Time Lord’s adventures to life, shares his experiences and insights from working on the series, especially during its landmark 60th anniversary specials. Cohen’s journey with Doctor Who is a fascinating blend of nostalgia, innovation, and creative camaraderie, highlighting the challenges and triumphs of modern visual effects storytelling.

Inside TV Monaco’s journey to broadcast the essence of Monaco worldwide

In the fast-growing landscape of global broadcasting, the tiny yet prestigious principality of Monaco has embarked on an ambitious journey to establish its voice on the international stage through the launch of its public broadcaster within the TV5Monde network.






Media: The future of content creation and distribution

From legacy to leading edge: The IP migration stories of Foxtel, ITN, Fubo, and CBC

Agile Content introduces solution for fighting against piracy on content platforms

Agile Content has rolled out its Agile CDN Director solution at Telenor Sweden, a major telecommunications operator in the country. This deployment marks a continuation of the longstanding partnership between the two companies, aimed at enhancing efficiencies and optimizing the utilization of Agile Content subsidiary Edgeware’s content delivery network.

As piracy continues to surge in the over-the-top (OTT) sector, the significance of technological solutions like Agile CDN Director is becoming increasingly apparent. Television piracy alone constituted nearly half (48%) of all access to infringing sites in the EU in 2022, according to a report from the European Union Intellectual Property Office (EUIPO).

Agile CDN Director builds upon Edgeware’s existing CDN architecture, introducing advanced features such as real-time cache, network switching, and integrated server quality-of-experience monitoring. This evolution enables Telenor Sweden to enhance the efficiency of its next-generation television services by seamlessly integrating its CDN with that of its sister company, Telenor Norway, facilitating resource utilization across networks and ensuring uninterrupted service even in the event of site loss.

Marielle Lind, Telenor Sweden’s technical architect for TV backend systems, said of the Agile CDN Director: “As responsible for Telenor Sweden’s CDN and ensuring our customers get the quality of experience they expect on their devices, I have a long list of requirements I want from my network, some of which we couldn’t meet. But with Agile CDN Director now up and running, suddenly a lot more is possible. So far, I haven’t found anything that’s impossible.”

Another key advantage of the solution, which Telenor Sweden has found surprisingly effective, is its ability to keep hackers out and transmit only to authorised users, the company stated. Telenor also notes that Agile CDN Director provides greater visibility into everything happening in the CDN at any given time, including the use of common access tokens that prove that customer requests are coming from Telenor’s own back-end and not from external agents, such as hackers trying to access the network.

This additional layer of security protects a CDN like Telenor’s Edgeware network and ensures that only valid customers have access, which in turn frees up capacity and ultimately provides better network speeds for the TV viewing public, according to the information released. “Our records show that we have blocked between 50,000 and 60,000 invalid requests each day that were trying to access the CDN and reach a streaming server,” said Lind. “With Agile CDN Director, it’s easy to identify those invalid requests and deny them access. If we didn’t have it, the network would be much busier with traffic that shouldn’t be there.”


Nevion extends Virtuoso audio and video processing functionality

Nevion has announced that the signal processing capabilities of its software-defined media node Virtuoso have been extended with a new audio interface and additional up/down/cross conversion (UDC) functionality. These capabilities increase the versatility of Virtuoso for live production applications, such as outside broadcast or centralized processing (“glue”) infrastructure for studio, production control room or master control room operations.

A key component of Sony’s Networked Live offering, Nevion Virtuoso has been widely deployed across the globe to transport, process, and monitor signals in real-time. The video and audio processing capabilities of Virtuoso in particular are being used daily by many broadcasters in their facilities to support their national and regional radio and TV productions.

Virtuoso already offers comprehensive audio capabilities, including bidirectional AES3, MADI and SMPTE ST 2110 / AES67 IP audio interfacing. Processing features include monitoring, routing, embedding, mixing, shuffling and per-channel control of polarity, gain, and delay. With the new RPRO interface aimed at remote production applications, Virtuoso can now also interface and transport mixed digital and analogue audio signals, along with GPIO and sync distribution.

For video processing, Virtuoso’s UDC capability offers a variety of high-quality format conversions for HD and UHD with native SMPTE ST 2110-20 uncompressed video input/output. The existing functionality includes de-interlacing, scaling, HDR/SDR conversion, legalization, frame synchronization and delay. The new additions to Virtuoso’s UDC capabilities include frame rate conversion and configurable 3D LUTs for color space conversion (with a subset of the BBC’s 3D LUTs pre-loaded).
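A 3D LUT of the kind mentioned above is conceptually a cubic lattice of output colors indexed by input RGB, with trilinear interpolation between lattice points. The sketch below illustrates the general technique only; it is not Nevion’s implementation, and the function name and LUT size are hypothetical.

```python
import numpy as np

def apply_3d_lut(rgb, lut):
    """Map an RGB triple (components in [0, 1]) through a cubic 3D LUT of
    shape (N, N, N, 3) using trilinear interpolation, the standard approach
    for LUT-based color space conversion."""
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0) * (n - 1)
    lo = np.floor(pos).astype(int)          # lower lattice corner
    hi = np.minimum(lo + 1, n - 1)          # upper lattice corner
    f = pos - lo                            # fractional position in the cell
    out = np.zeros(3)
    # Blend the 8 lattice points surrounding the input color.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((f[0] if dr else 1 - f[0]) *
                     (f[1] if dg else 1 - f[1]) *
                     (f[2] if db else 1 - f[2]))
                idx = (hi[0] if dr else lo[0],
                       hi[1] if dg else lo[1],
                       hi[2] if db else lo[2])
                out += w * lut[idx]
    return out

# Sanity check with an identity LUT: output should equal input.
N = 17  # 17- or 33-point lattices are common LUT sizes
grid = np.linspace(0.0, 1.0, N)
identity = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
print(apply_3d_lut([0.25, 0.5, 0.75], identity))  # ~[0.25 0.5 0.75]
```

In practice the lattice would hold a measured mapping (for example BT.709 to BT.2020) rather than the identity, but the lookup-and-interpolate step is the same.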

The latest Virtuoso media function release also includes a further enhancement to the JPEG-XS in TS (TR-07) capability announced at NAB 2023, adding the ability to handle IP-in/IP-out workflows.

Steve Hard, VP of Product Management, Virtuoso at Nevion, says: “Nevion Virtuoso is obviously well-known for its media transport functionality, but it also offers exceptional media processing capabilities. This makes Virtuoso an extremely versatile media node – something that our customers recognize and appreciate. These latest extensions to the media processing capabilities of Virtuoso cement its position as one of the highest performers for in-facility and outside broadcast applications.”


Vitec launches the MGW Diamond-H compact 4K HDMI encoder


Vitec is launching the MGW Diamond-H 4K HDMI encoder to further enhance its broad portfolio of HEVC encode and decode products.

“Vitec is proud to introduce the MGW Diamond-H, a powerful compact 4K HDMI Encoder demonstrating our commitment to innovation and meeting the evolving needs of our customers,” says Richard Bernard, Head of Product Management at Vitec. “With its impressive encoding capabilities, seamless integration features, and Power over Ethernet support, the MGW Diamond-H opens up new possibilities for IPTV distribution and video contribution across various industries.”

The MGW Diamond-H, a portable 4K HDMI encoder, is a significant addition to Vitec’s portable HEVC encoder product line, setting a new standard for quality, efficiency, and integration. With the ability to encode up to four channels from two HDMI inputs, the MGW Diamond-H empowers users to capture and stream content with unparalleled quality and minimal latency.

The unit is designed to facilitate

integration into existing setups. Featuring HDMI loop-through, the MGW Diamond-H ensures a smooth workflow and enhanced connectivity with existing video equipment. For optimal reliability and power efficiency, the MGW Diamond-H can be powered via Power over Ethernet, streamlining installations and reducing the need for additional power sources.


Videosys assists Timeline TV in enhancing basketball coverage

Timeline, a broadcast facilities company headquartered in the UK, has teamed up with Videosys Broadcast to enhance coverage of the British Basketball League championships, currently broadcast on Sky Sports, the League’s UK partner. By implementing Videosys camera control at each basketball venue, Timeline’s production staff can manage multiple cameras remotely from a central location in London.

The British Basketball League championships feature 10 clubs playing three to four games every weekend across various UK venues. To optimize resources, Timeline has devised a remote production solution, minimizing expenses on personnel and equipment.

The solution entails Timeline providing studio space at its Ealing Broadcast Centre in West London for the League’s production crew and presenters. Additionally, technical personnel and identical flyaway equipment kits are deployed at each venue. On match days, feeds from four Sony cameras at the venue are transmitted back to Ealing via ethernet connections, allowing the production crew to handle aspects like color balance, camera angles, and vision mixing from the gallery.

John Hayhurst, Senior RF Project Manager at Timeline TV, highlights the transition to data transmission via ethernet cables and IP addresses: “These days, we don’t move anything around in baseband video – it’s all just data sent via ethernet cables and IP addresses.” The production is centralized at Ealing, where camera control and tally information are managed, with the latter sent over an IP system to control radio cameras at the venue.

Timeline utilizes Videosys’ camera control system, comprising an Indoor Unit (IDU) at Ealing, an Outdoor Unit (ODU) at the venue, and a compact Camera Receiver (RXSM-E) mounted on each camera. The system allows seamless control of camera settings regardless of location.

While currently unicasting camera control data between Ealing and each venue, Timeline’s system has multicasting capabilities, enabled by identical equipment at all venues. Videosys’ CCU system, fully IP-based, supports TCP and UDP delivery, ensuring data transmission over wide area networks with guaranteed latency, according to the company.

The overarching aim of remote production is to streamline operations across multiple events on the same day. “This is what lies at the heart of the technical setup we have put into place for the League’s basketball coverage,” says John Hayhurst. “Years ago, covering multiple games would involve sending an entire crew to each venue, which was obviously expensive and labor-intensive.” 


Clear-Com empowers seamless communication experience at Snapdragon Stadium in San Diego

Snapdragon Stadium, located within SDSU Mission Valley, serves as a year-round sports and entertainment destination and a hub of community engagement. Home to SDSU Aztec Football, San Diego Wave FC of the National Women’s Soccer League, San Diego Legion of Major League Rugby and the newly announced MLS team, San Diego FC, the venue hosts a myriad of events, including concerts, festivals, motocross shows, international sporting events, championships, community events, private and corporate events, and more. Notably, Snapdragon Stadium has emerged as a favored venue for the US Women’s National Soccer Team. Since its opening, the facility has hosted multiple ‘friendlies’ and is currently a key venue for the CONCACAF Women’s Gold Cup. Clear-Com’s communication solutions contribute to the success of these events, enhancing the overall experience for athletes, performers, and fans alike.

Snapdragon Stadium’s cutting-edge communication infrastructure, featuring Clear-Com’s advanced equipment, plays a pivotal role in addressing the diverse needs associated with its status as a multi-use facility for both sports and entertainment. The heart of Snapdragon Stadium’s communication infrastructure lies in the deployment of Clear-Com’s FreeSpeak II® digital wireless system combined with Encore® analog partyline elements, providing a robust and reliable communication platform that ensures seamless coordination during the various events hosted at the 35,000-seat multipurpose venue.

Ceci Bacerra, Manager, AV & Broadcast at Snapdragon Stadium, expressed their satisfaction with Clear-Com’s solutions, stating, “Many sporting teams, shows, and concerts travel with Clear-Com equipment that can easily integrate into our house system. Their online portal makes it user-friendly and easy to configure for each event and the specific needs that may arise.”

In the control room, the deployment includes the Encore Main Station, a mix of four-channel and two-channel Remote Stations, and Encore Speaker Stations in U-Box. For remote beltpacks and interfacing needs, the venue relies on five Encore RS-701 Single Channel beltpacks and the Encore MT701 for interfacing with external systems. The FreeSpeak II digital wireless system, comprising the FreeSpeak II-BASE-II Base Station, Transceivers, Splitters, and beltpacks, offers unparalleled flexibility. The 1.9GHz product provides the necessary range and flexibility for beltpacks to seamlessly roam anywhere in the facility, covering sidelines, the bowl, control room, locker rooms, and dock areas. Clear-Com’s advanced technology ensures a robust and reliable communication network, underscoring their commitment to technical excellence in sports and entertainment venues.

The system’s ability to seamlessly integrate into Snapdragon Stadium’s infrastructure and adapt to the diverse requirements of sporting events, concerts, and other entertainment productions showcases Clear-Com’s leadership in communication systems for sports and entertainment. The company’s reputation for delivering reliable and innovative solutions played a crucial role in the selection process, ensuring that the venue is equipped with the latest technology to meet the communication demands of its diverse events. 


Bainet Group enhances live news content capabilities with LiveU EcoSystem field units

Bainet Group, a Spain-based production powerhouse with a rich history spanning over three decades in TV, cinema, advertising, and online content creation, has embraced LiveU to revolutionize its live content acquisition process. Deploying 20 portable units from LiveU’s IP-video EcoSystem, Bainet Group now delivers breaking news and related stories to broadcasters and audiences with unprecedented speed and dynamism.

Founded in 1992, Bainet Group has earned a sterling reputation for pushing technological boundaries to enhance service quality for its diverse clientele.

From regional broadcasters in Spain like RTVCE, EITB, and RTCM to national networks such as A3 and La Sexta, Bainet Group caters to a wide array of clients, including international production companies and corporate entities.

Sebastian Diaz, General Manager of Bainet Group in Madrid, remarked, “LiveU has been a game-changer for us. Previously, live content production posed significant challenges, especially in terms of agility and timeliness. With LiveU’s field units, we’ve significantly ramped up our live broadcasting, particularly in news coverage. We’ve transitioned from managing one live event per day to handling up to 20, equating to over 7,000 live events annually.”

Diaz underscores the reliability and robustness of LiveU’s technology, crucial for the company’s news production services, whether for live broadcasts or content acquisition and transmission to broadcasters. “The biggest boon for us has been the mobility offered by LiveU’s backpack technology. We now have unfettered access to locations and live stories that were previously out of reach. Gone are the days of grappling with DSNG trucks, satellite connectivity woes, and logistical hassles. LiveU has eliminated these barriers, allowing us to position ourselves anywhere and transmit live signals from any location instantly.”

He also emphasizes the technology’s ability to capture stories even in extreme conditions, such as conflict zones and natural disasters, enabling Bainet Group to deliver compelling content to global audiences. “The signal stability, coupled with robust MCR monitoring, is exceptional. Combined with the cost-effectiveness compared to DSNGs and exceptional service from LiveU staff, I am thoroughly satisfied both on a technical level, ensuring seamless content delivery to broadcasters, and with the personalized technical assistance and support from the local team,” Diaz added.

Laura Llames, Country Manager South Europe, LiveU, commented, “Spain represents a burgeoning market for LiveU, with a growing number of production companies, broadcasters, and rights holders embracing the transformative potential of our technology. Bainet Group’s expansion of live content acquisition in news is testament to the impactful capabilities of the LiveU EcoSystem in content creation.” 


Prime Vision Studio upgrades production fleet with Ikegami UHK-X700 4K cameras

Prime Vision Studio, based in Dubai and specializing in high-quality broadcast studio and mobile television content production services, has made a strategic investment in the latest generation Ikegami UHK-X700 4K studio/EFP cameras to enhance its production capabilities. With over two decades of experience in systems design and operation, Prime Vision Studio is well-equipped to cater to both studio-based and location events.

According to Mr. Mansoor Meghani, Managing Director of Prime Vision Studio, the decision to invest in Ikegami UHK-X700 cameras was driven by the increasing demand for 4K-UHD HDR capability, ensuring that productions remain relevant and commercially viable in the long term. The cameras were chosen for their versatility, high signal quality, and robust build, making them suitable for various applications, including studio-based projects, outside broadcasts, and mobile content creation, in Meghani’s words.

The support received from Ikegami Middle East further solidified the decision to opt for the UHK-X700 cameras. Prime Vision Studio has added eight

UHK-X700 camera systems, including a high-frame-rate version, to its rental fleet. These cameras will be available for dry hire, crewed hire, or as part of a comprehensive production management service.

Abdul Ghani, General Manager of Ikegami Middle East, expressed confidence that Prime Vision Studio’s production team will find the UHK-X700 cameras to be robust and feature-rich solutions for a wide range of applications.

The Ikegami UHK-X700 is a recent addition to the Ikegami Unicam XE 4K series, designed for both studio and outdoor use. It features three 2/3-inch CMOS 4K sensors with a global shutter, providing freedom from rolling-shutter distortion and flash-banding artifacts, according to the company. The camera supports full HDR/SDR and offers a choice between BT.709 and BT.2020 color spaces. High frame-rate shooting options make it suitable for capturing fast motion in sports and stage events.

When combined with the Ikegami BSX-100 base station, the UHK-X700 allows for cable lengths of up to 3 km, supporting simultaneous output in HD SDR and UHD HDR video formats. Peripheral options for the UHK-X700 include the OCP-300 control panel, MCP-300 master control panel, and SE-U430 system expander. Ikegami also offers a choice of two types of viewfinders for this model: the VFL201D (2-inch LCD) and VFLP700AD (7-inch full HD LCD). 


StreamGuys elevates San Jose Sharks’ broadcasting beyond traditional radio

StreamGuys’ SGrecast technology is amplifying the NHL juggernaut’s global presence through its 24/7 audio streaming platform, Sharks Audio Network, while also facilitating advertisement insertion and monitoring services.

Founded in 1991, the San Jose Sharks, a prominent NHL team stationed at San Jose’s SAP Center in the San Francisco Bay Area, transitioned from a 20-year tenure on The South Bay’s 98.5 KFOX station to establish their own 24/7 audio network in 2021. StreamGuys’ SGrecast has been instrumental in managing podcast automation, rebroadcasting, and live streaming across various platforms.

The Sharks Audio Network serves as a strategic move to engage fans and deliver continuous audio content to San Jose Sharks enthusiasts worldwide.


Offering a blend of live regular season and Stanley Cup playoff matches alongside a diverse array of on-demand content, the network features interviews, player profiles, replays, pre-game coverage, highlight reels, lifestyle programming, and live news updates.

Dan Rusanowsky, a seasoned play-by-play announcer with a background in radio, spearheads the team’s audio production

efforts. With over three decades of experience with the Sharks, Rusanowsky oversees all aspects of audio content, ensuring fans are kept abreast of the action from the renowned “Shark Tank”.

“I am primarily recognized as the play-by-play commentator for the San Jose Sharks, but I am also responsible for managing the Sharks’ audio network and coordinating all audio production tasks alongside our team,” remarks Rusanowsky. “Having been with the Sharks since their inception in 1991, setting up radio network coverage was part of my initial responsibilities. However, transitioning to a 24-hour format expanded our operations significantly.”


Partnering with StreamGuys since 2019, Rusanowsky leveraged the SGrecast SaaS platform to convert and redistribute live content, broaden the network’s reach, and capitalize on the Sharks’ online presence to engage their global fan base. While most listeners access the stream through the Sharks Plus SAP Center app, the San Jose Sharks website employs the SGrecast player to facilitate seamless connectivity.

“Although we maintain a terrestrial radio network in Northern California, we’ve shifted focus towards our 24-hour programming on the app,” explains Rusanowsky. “This approach enables us to reach audiences in previously untapped regions, and SGrecast simplifies content delivery to both NHL platforms and our terrestrial affiliates through Skyview Satellite.”

Rusanowsky emphasizes SGrecast’s role as a central repository for managing programming and automatically archiving content. The platform’s recording capability enables prompt access to game airchecks, facilitating swift editing, repackaging, and re-upload processes. With dedicated studio space at the SAP Center, the Sharks maintain full control over content, driving increased fan engagement and enhanced monetization opportunities, according to the company.

“Traditional radio excels in live programming, but we’re witnessing a shift, particularly among younger demographics,” observes Rusanowsky. “Operating across multiple channels allows us to leverage programming for both client acquisition and retention, with the Sharks Audio Network serving as a pivotal tool in maintaining fan engagement.”

Neil Carducci, Quality Assurance Tester at StreamGuys, highlights Rusanowsky’s innovative approach to content repurposing, maximizing SGrecast’s potential through dynamic workflows. By blending recordings, live broadcasts, scheduled programming, and podcasting, the Sharks have optimized content utilization.

Rusanowsky further underscores the utilization of StreamGuys’ ad insertion technology to integrate commercials into non-live programming. Analytics provided by SGreports offer valuable insights into audience demographics and listening patterns, empowering targeted marketing strategies.

“Expanding transmitter coverage digitally has immense value, particularly for organizations with niche audiences,” concludes Rusanowsky. “StreamGuys’ solutions offer cost-effective means to reach and engage audiences globally, making it an invaluable asset for any broadcasting endeavor.” 

evision partners with Red Bull: launching the Red Bull TV channel on Starz On for free

Evision, a player in the media and entertainment realm, is gearing up to introduce the Red Bull TV channel to Starz On, promising an exhilarating experience for thrill-seeking viewers.

In a groundbreaking move, Starz On becomes the inaugural streaming platform in the MENA region to feature the Red Bull TV channel.

Olivier Bramly, CEO of evision by e& life, expressed excitement about the collaboration, stating: “We’re delighted to partner with Red Bull, a brand renowned for its boundarypushing ethos and ability to surpass expectations. Together, we’re poised to elevate entertainment to new heights for Starz On viewers across the region. This alliance signifies a pivotal moment for our MENA audience, as we bring them live events, compelling narratives, and pulse-pounding experiences.”

Red Bull TV isn’t just about adrenaline-fueled action sports; it offers a rich array of live events, shows, and documentaries spanning extreme sports, music, lifestyle, and culture.

Starting February 2024, Starz On users will have free access to the Red Bull TV FAST channel. As the pioneer streaming platform in the region to host the channel, the service is making this content available through its ad-supported streaming platform. 


Reliance and Disney forge strategic alliance to transform Indian entertainment landscape

Reliance Industries Limited (RIL), Viacom 18 Media Private Limited (Viacom18), and The Walt Disney Company (NYSE: DIS) have announced a significant partnership aimed at consolidating their digital streaming and television assets in India. The joint venture (JV) will integrate the operations of Viacom18 and Star India, with Reliance committing ₹11,500 crore to support its growth strategy.

Under the agreements, Viacom18’s media business will merge into Star India Private Limited (SIPL) through a court-approved arrangement. Reliance’s substantial investment at closing, approximately ₹11,500 crore or US$ 1.4 billion, underscores its commitment to the JV’s success. The total value of the JV stands at ₹70,352 crore or US$ 8.5 billion post-transaction, excluding synergies. Following completion, Reliance will control

the JV, holding a 16.34% stake, while Viacom18 and Disney will own 46.82% and 36.84%, respectively.

Disney may contribute additional media assets to the JV, pending regulatory and third-party approvals. Mrs. Nita M. Ambani will serve as Chairperson, with Mr. Uday Shankar appointed as Vice Chairperson, providing strategic direction.

The JV aims to become a prominent player in TV and digital streaming for entertainment and sports content in India, leveraging iconic brands such as Colors, StarPlus, StarGOLD, Star Sports, and Sports18, alongside JioCinema and Hotstar. With a combined audience of over 750 million in India and beyond, the JV intends to lead the digital transformation of India’s media and entertainment industry, with the aim of offering diverse and high-quality content accessible anytime, anywhere.

The partnership seeks to enhance the digital entertainment experience by combining Viacom18 and Star India’s expertise, technology, and content libraries. This includes Disney’s acclaimed films and shows, promising a rich and accessible entertainment offering for Indian audiences worldwide.

Furthermore, the JV will have exclusive rights to distribute Disney content in India, boasting a license to over 30,000 Disney assets. Mukesh D. Ambani, Chairman & Managing Director of Reliance Industries, expressed enthusiasm about the agreement, emphasizing the joint venture’s potential to deliver unparalleled content at affordable prices.

Bob Iger, CEO of The Walt Disney Company, highlighted the immense opportunities in India, foreseeing long-term value creation through the joint venture. Uday Shankar, Cofounder of Bodhi Tree Systems, emphasized the commitment to delivering exceptional value to audiences, advertisers, and partners, while shaping the future of entertainment in India.

The transaction is subject to regulatory and shareholder approvals and is expected to be completed by the last quarter of Calendar Year 2024 or the first quarter of Calendar Year 2025. 


SMPTE and IMF User Group forge alliance to advance media standards

SMPTE and the IMF User Group have announced a collaboration aimed at advancing the Interoperable Master Format (IMF) family of SMPTE Standards. The IMF User Group, established in 2016 under the Hollywood Professional Association (HPA), serves as a global forum for end-users and implementors of IMF standards.

IMF, outlined in SMPTE ST 2067, streamlines the storage of audio-visual content necessary for creating various distribution versions across multiple territories and platforms. This format facilitates business-to-business content exchange among content owners, post facilities, and distribution platforms, playing a crucial role in modern content fulfillment. It has significantly contributed to transitioning from tape-based to file-based workflows in television and streaming, emerging as the preferred UHD media delivery format for numerous content providers.

The IMF UG’s mission is to promote IMF adoption by bringing together content owners, service providers, retailers, and equipment/software vendors. Through member meetings, workshops, plugfests, and best practice publications, the group fosters collaboration and knowledge sharing. Members actively contribute to shaping the long-term roadmap of IMF standards.

“We are honored to become the home for the IMF User Group and thankful to our colleagues at HPA for all they’ve done to administer the user group to this point,” said SMPTE Executive Director David Grindle. “Community organizations, like IMF UG, are vital to keeping our standards updated with the feedback from those using the systems.”

“IMF is a true example of how a standard was developed in one organization, deployed into the

industry and then gathered a community of users via the HPA and now SMPTE,” said SMPTE President Renard T. Jenkins. “As a longtime member of the UG and SMPTE, I am excited that the user group and the standards community are coming together to continue its growth and further its development.”

According to both organizations’ statements, the collaboration between SMPTE and the IMF User Group signifies a strategic partnership aimed at furthering the adoption and development of IMF standards, ultimately benefiting media professionals, technologists, engineers, and stakeholders across the industry.


BBC Studios takes full control of BritBox International in a £255M deal

In a landmark move, BBC Studios has announced its acquisition of full ownership of BritBox International by purchasing ITV’s 50% stake for £255 million. This strategic acquisition positions the BBC’s commercial arm to further expand and scale BritBox International’s reach.

BritBox International, established by BBC Studios and ITV in 2017, has rapidly become the premier streaming service for top-tier British television, particularly known for its compelling mystery dramas. The platform has seen substantial growth, with subscriber numbers increasing by over 300% in the last four years; it now boasts over 3.75 million subscribers and is valued at approximately £500 million.

This acquisition not only signifies a pivotal moment for BBC Studios but also ensures the continued delivery of a broad spectrum of British content to BritBox International, thanks to extended licensing agreements with ITV.

Following BritBox International’s move into BBC Studios’ Global Media & Streaming division, its global CEO Reemah Sakaan is stepping down. Sakaan has been an important part of the company since the start, and for the past three years in the role of CEO she has overseen the venture’s accelerated growth and creative success. Tom Fussell, BBC Studios CEO, said: “I’d like to thank Reemah for her outstanding contribution to BritBox International, which under her stewardship has seen remarkable year-on-year growth. Her passion and dedication has helped create a great culture and build a business that is loved by audiences and that has real momentum.”

Carolyn McCall, ITV CEO, said: “I would like to thank the BritBox International team for making the company such a success and particularly CEO Reemah Sakaan for her leadership, drive and vision.”

BritBox International will be integrated into BBC Studios’ Global Media and Streaming division, enhancing its digital and direct-to-consumer service offerings. Rebecca Glashow, CEO of BBC Studios Global Media & Streaming, expressed enthusiasm about the acquisition, seeing it as a profitable venture with significant growth potential, backed by the full support of the BBC.


beIN Media Group secures exclusive 10-year broadcast rights for Formula One in MENA and Türkiye

beIN Media Group has recently secured exclusive rights to broadcast the FIA Formula One World Championship™ in 25 countries, spanning the Middle East and North Africa (MENA) as well as Türkiye, for the next decade. The agreement, extending until the end of the 2033 season, marks a significant milestone in the realm of sports broadcasting for the region.

Under this 10-year deal, viewers across MENA and Türkiye will have access to live coverage of Practice, Qualifying, F1 Sprint, and Grand Prix races through beIN’s flagship platforms – beIN Sports and beIN’s OTT platform TOD. The coverage will include commentary in Arabic, Turkish, and English, along with exclusive analysis from prominent presenters and pundits. Additionally, viewers will have the opportunity to watch each race in 4K/UHD quality on beIN Sports and TOD. F2 and F3 races will also be available live and exclusively in Arabic, Turkish, and English.

Moreover, the agreement entails an exclusive content partnership tailored for the MENA region, aiming to cater to the diverse and passionate audience in the area. This collaboration will see the creation of region-specific content, with Doha serving as a dedicated production hub, leveraging beIN’s renowned production capabilities.

The 2024 Formula One season, set to feature a record 24-race calendar, will kick off and conclude in MENA. This season includes four major regional Grands Prix, commencing in Bahrain in February and culminating in Abu Dhabi in December.

Yousef Al-Obaidly, CEO of beIN Media Group, expressed enthusiasm about the return of Formula 1 to the beIN platform, highlighting the significance of the sport within their portfolio and their commitment to delivering captivating experiences to fans across the region.

Stefano Domenicali, President and CEO of Formula 1, emphasized the partnership’s role in enhancing the broadcast experience for fans at home, acknowledging the growing fanbase in the Middle East and Türkiye.

Ian Holmes, Director of Media Rights and Content Creation at Formula 1, echoed the sentiment, acknowledging the high demand for Formula 1 in the region and expressing confidence in beIN’s ability to elevate the broadcast programming.

This long-term partnership not only expands beIN’s global footprint as a sports broadcaster but also signifies a strategic move to enhance the multi-sport offering for viewers in MENA and Türkiye, according to the company’s statements. 


Nikon to expand presence in professional digital cinema with acquisition of, LLC (RED)

In a strategic move aimed at bolstering its position in the professional digital cinema camera market, Nikon Corporation (Nikon) has announced its agreement to acquire, LLC (RED), a leading US cinema camera manufacturer. Under this agreement, Nikon will acquire 100% ownership of RED, making it a wholly-owned subsidiary, pending the fulfillment of certain closing conditions outlined in the Membership Interest Purchase Agreement with RED’s founder, Mr. James Jannard, and its current President, Mr. Jarred Land.

RED, established in 2005, has been a trailblazer in the realm of digital cinema cameras, introducing products such as the original RED ONE 4K and the cutting-edge V-RAPTOR [X] equipped with its proprietary RAW compression technology.

Recognized with an Academy Award, RED cameras have become the preferred choice for numerous Hollywood productions, acclaimed by directors and cinematographers worldwide for their innovative features and unparalleled image quality tailored for top-tier filmmaking and video production.

The acquisition stems from the shared vision of Nikon and RED to meet customer demands and deliver great user experiences by amalgamating the strengths of both entities. As both companies stated, Nikon’s renowned expertise in product development, reliability, and proficiency in image processing, optical technology, and user interfaces, combined with RED’s expertise in cinema cameras, including distinctive image compression technology and color science, will pave the way for the creation of unique products in the professional digital cinema camera market.

In the Japanese technological group’s own words, this acquisition marks Nikon’s strategic move to capitalize on the burgeoning professional digital cinema camera market, leveraging the solid business foundations and networks of both companies. 

Dalet and Veritone forge partnership to enhance media asset management and monetization

Dalet and Veritone have announced a collaboration to integrate the Dalet Flex media workflow ecosystem with Veritone’s AI-powered Digital Media Hub (DMH), featuring commerce and monetization capabilities. This partnership aims to streamline workflows from content creation to distribution, offering media, sports, and entertainment customers the means to monetize their digital media archives effectively.

“With content consumption being at an all-time high and media-rich organizations seeking new ways to bring in additional revenue streams, monetization of media archives and assets is key,” states Carl Farrell, CEO and Board Member of Dalet. “The combined power of Dalet and Veritone enables customers to overhaul their monetization initiatives, exposing and licensing their assets quickly, securely and with the highest level of control.”

Through the Dalet and Veritone referral partnership, media and entertainment companies can maximize the return on investment of their content assets and generate new revenue streams. The solution ensures secure and scalable content delivery to partners while allowing organizations to retain control over their content catalog.

“Veritone’s AI-enabled technology has long been the tool of choice for some of the world’s most recognized brands because of its ability to more efficiently and effectively organize, manage and monetize content,” said Sean King, SVP and GM at Veritone.


ROXi partners with Frequency to introduce new music video channels on FAST

ROXi, a music streaming company, is set to debut a series of linear music video channels on FAST (Free Ad-Supported Streaming Television). Frequency, known for powering several renowned streaming television channels and connected TV platforms worldwide, has been chosen by ROXi to deliver these channels to FAST providers. LG has become the first to introduce ROXi’s music video channels to UK audiences, with 10 channels already accessible on LG Channels.

The array of ROXi music video channels, expertly delivered by Frequency to LG Channels’ FAST users on LG Smart TVs across the UK, includes “Hot Right Now,” showcasing the latest music videos from global icons like Ed Sheeran, Taylor Swift, and Calvin Harris, along with “Music Video Karaoke,” presenting official music videos of beloved karaoke tracks with scrolling lyrics. Additionally, viewers can enjoy “Greatest Music Videos of All Time,” featuring a curated selection of timeless classics from artists such as Beyoncé, Prince, and Eminem.

ROXi CEO Rob Lewis expressed his commitment to providing consumers with an unparalleled music experience on television, stating, “We believe consumers deserve the best made-for-TV music experience, which is why we’re making ROXi’s curated music video channels available to millions of FAST users for free.” Lewis emphasized the data-driven approach of ROXi, leveraging insights from years of ROXi TV app usage to craft highly optimized linear music video channels for FAST platforms.

Utilizing Frequency Studio 5.0, ROXi is swiftly deploying its music video channels across various platforms, including LG Channels. Frequency CEO Blair Harrison highlighted the synergy between ROXi’s curated content and the FAST landscape, emphasizing the need for efficient and cost-effective channel launches.

Harrison noted that Frequency Studio 5.0 offers ROXi an efficient suite of tools to ensure seamless deployment.

ROXi’s expansion into FAST channels complements its existing TV app business, which provides on-demand interactive music video streaming and curated channels. With the meteoric rise of FAST services globally, ROXi sees a significant opportunity to engage new audiences through curated music content.

In addition to its FAST channels, ROXi’s free TV app is already available on a wide range of Smart and Pay TV platforms, including Sky Q, Stream and Glass, LG, Samsung, Fire TV, Android TV, and Google TV. Moreover, ROXi’s entry into FAST follows its recent collaboration with Sinclair Broadcast Group, announced at CES, which saw the launch of interactive music channels on the new digital standard for US TV, ATSC 3.0.


In an exclusive interview with TM Broadcast International Magazine, we delve into the creative mind behind the visual spectacles of one of Britain’s most beloved shows, Doctor Who. Will Cohen, the mastermind VFX producer responsible for bringing the Time Lord’s adventures to life, shares his experiences and insights from working on the series, especially during its landmark 60th anniversary specials. Cohen’s journey with Doctor Who is a fascinating blend of nostalgia, innovation, and creative camaraderie, highlighting the challenges and triumphs of modern visual effects storytelling.

This interview offers a rare glimpse into the intricate world of VFX production in television, through the lens of a seasoned professional who has helped shape the visual landscape of a series that has captivated audiences for decades. Cohen’s journey with Doctor Who is not just a testament to his personal achievements but also a beacon for the future of visual storytelling in the ever-evolving world of television and cinema.


Can you share insights into your role as VFX producer for Doctor Who, particularly in the 60th anniversary special episodes?

Reflecting on my journey as the VFX producer for Doctor Who, particularly during the momentous 60th anniversary special episodes, it evokes a tapestry of emotions and memories. My adventure with this beloved series commenced in the early 2000s, a period that now seems both a distant memory and as vivid as yesterday. The serendipity of life’s timing played a pivotal role in my return to this universe. As I was transitioning away from leading a visual effects company, a chance encounter with Phil, a producer and an old acquaintance, at the theatre post-pandemic, rekindled connections to my past work on Doctor Who. Our subsequent conversation in January 2022 was a watershed moment. I shared insights into how the visual effects industry had evolved dramatically, especially in the wake of the pandemic’s disruptions and the subsequent surge in content production. This era of transformation presented both challenges and opportunities, marking a distinct departure from the landscape Phil was accustomed to.

Motivated by a desire to contribute to the show’s enduring legacy, I transitioned into a consulting role to navigate the evolving VFX landscape and strategize for the show’s ambitious visual effects. This consultancy soon blossomed into a full-fledged role as the VFX producer, thanks to Phil’s proposition and the collaborative spirit of Joel Collins and the executive team.

Embarking on this project felt like a harmonious blend of nostalgia and innovation. The unveiling of the plan for the specials, coupled with the compelling scripts, filled me with anticipation and excitement. Reuniting with Russell (T Davies), David (Tennant), Catherine (Tate), Julie (Gardner), and Jane (Tranter) was not just a professional engagement but a heartfelt reunion of creative minds.



Fundamentally, my journey has come full circle, returning me to the origins of my adventure with Doctor Who. It says a lot about the enduring impact of the series and the ever-evolving art of visual effects storytelling. The opportunity to contribute to such a landmark occasion in the series’ history has been both a privilege and a thrilling challenge, embodying the spirit of innovation and creative camaraderie that defines the series.

How did Bad Wolf Productions approach the monumental task of celebrating Doctor Who’s 60th anniversary, considering its rich history?

Navigating the grand celebration of Doctor Who’s 60th anniversary was akin to orchestrating a symphony with numerous moving parts, each requiring meticulous attention and coordination. The landscape of 2022 presented a unique set of challenges, underscored by the bustling nature of the industry.

Reaching out to esteemed colleagues across the globe, I was met with the stark reality of the times: lengthy waiting periods and substantial financial commitments were the new norm, a testament to the industry’s unprecedented demand.

Amidst this bustling backdrop, our objective was crystal clear: to craft a series of specials that not only honoured the rich tapestry of Doctor Who’s history but also reignited the passion of its global fanbase.

Collaborating closely with Dan May, Joel (Collins), Phil (Sims) and the creative pillars of Russell, Julie, David, and Jane, we devised a strategic plan. Our approach championed the engagement of boutique-style visual effects companies, each possessing unique talents and an eagerness to contribute to the Doctor Who legacy.

The ethos behind our strategy was to elevate the series to a cinematic echelon, aspiring to meet the lofty standards of global entertainment giants. This ambition was not just about enhancing the visual spectacle; it was about rekindling the essence of Doctor Who in a contemporary context, ensuring it resonated with both longstanding fans and new audiences alike.

Our tactical approach involved diversifying our visual effects partnerships rather than consolidating our resources with a singular entity. This decision was driven by a desire to infuse the project with a sense of zeal and innovation reminiscent of our initial forays into the Doctor Who universe. We sought partners who would not only relish the global visibility but also potentially embark on new creative trajectories because of their involvement.

The logistical timeline was tight, with a mere three months to transition from conceptualization to execution, a period during which we tirelessly sought and engaged with companies and individuals whose visions aligned with ours. This process was deeply personal to me, prioritizing collaborations with professionals who shared a direct line of communication, ensuring that decisions were made swiftly and effectively without delays.

My role transcended the mere assembly of a plan; it was about stewarding this plan through the inevitable ebbs and flows of production, extinguishing unforeseen fires, and continuously adapting our strategy to the evolving narrative and technical demands of the series. This journey was not just about revisiting the past; it was about defining what Doctor Who means in the contemporary era and how it can continue to captivate and inspire. The excitement of projecting the series into the future, while rooted in its illustrious past, made this venture not only a professional commitment but a personal passion.

What was the vision behind the new opening sequence for Doctor Who on BBC iPlayer, showcasing every era of the series?

To delve into the creative process behind the new opening sequence for Doctor Who on BBC iPlayer, an endeavour that artfully encapsulates the series’ rich tapestry of eras, is an intriguing exercise. Although this aspect fell more squarely within the purview of Joel (Collins), one of the executive minds, and the branding and marketing team, their vision was nothing short of brilliant, drawing evident inspiration from the grand storytelling traditions of industry giants like Marvel and Disney.

I must admit, my involvement in this specific facet of the project was minimal, and I came across the final output somewhat later in its development. The selection process for the content, given the voluminous archives spanning decades of Doctor Who history, must have been a Herculean task. The final product, vibrant and resonant with the series’ legacy, struck me profoundly when I first laid eyes on it. It was a masterful blend of nostalgia and contemporary flair, designed to appeal to a broad spectrum of audiences, from die-hard fans familiar with every Doctor’s quirk and adversary to newcomers embarking on their first journey through time and space.

The sequence, with its rapid-fire montage of iconic moments, characters, and settings, was meticulously crafted to capture the essence of Doctor Who’s enduring appeal. Each frame, each transition was a nod to the series’ heritage while also serving as a beacon for its future direction. From my perspective, even as an observer in this instance, the opening sequence stood as a bold declaration of the series’ ongoing evolution, inviting viewers of all ages to partake in the timeless adventure that is Doctor Who. Witnessing this fresh yet respectful homage to the series’ history was both humbling and exhilarating, underscoring the boundless creativity and reverence for the Doctor Who legacy that continues to fuel its journey through time and space.


Russell T Davies aims to make all Doctor Who content available in one place. How did this impact your role in coordinating VFX for the expanded Whoniverse?

My focus as the VFX producer remained steadfastly on the monumental task at hand. The intricacies of coordinating visual effects for such an expansive narrative landscape are, by nature, immersive and demanding, often transcending broader strategic initiatives.

Our mission was clear-cut yet complex: to bring to life the myriad stories within the Whoniverse with the resources at our disposal. This endeavour often involved meticulous deliberation on every visual element we intended to create, balancing creative aspirations with budgetary constraints. The art of visual storytelling, particularly in a universe as rich and varied as Doctor Who’s, necessitates a thoughtful approach to each scene, each effect, considering its narrative impact and feasibility.

The evolution of our processes, especially with advancements in previsualization techniques, has undoubtedly enhanced our ability to craft epic, cinematic experiences that resonate with the series’ legacy. Yet, the essence of our work remains unchanged. The quest to achieve narrative depth and visual grandeur within the parameters set before us continues to be our guiding principle.

Navigating through thousands of visual effects shots across the specials and the series, coordinating with multiple vendors, and managing an array of meetings and communications consumed our daily operations. The initial phase of the specials was particularly pivotal, setting the tone and expectations for our collaborative efforts. Our reliance on the expertise and direct involvement of our early vendor partners was instrumental in laying the groundwork for the visual effects department.

While the broader discussions about unifying the Doctor Who content and expanding its universe buzzed around us, they served more as a backdrop to our immediate priorities. However, the prospect of contributing to a larger, more interconnected Whoniverse did imbue our work with an added layer of excitement and anticipation. The idea of Doctor Who evolving into an even more expansive franchise, with spin-offs and new narratives, while not directly impacting our day-to-day responsibilities, certainly fuelled our enthusiasm for the project. It underscored the significance of our work in shaping the future of this beloved series, reminding us of the vast, imaginative canvas we were helping to bring to life.

The introduction of the spin-off series “Tales of the TARDIS” is an exciting addition. How did VFX contribute to this new series, and what can viewers expect?

Embarking on the “Tales of the TARDIS” series was a remarkable journey that underscored the unique stature of Doctor Who, not just as a television series but as a cultural phenomenon akin to the likes of James Bond. This project, particularly in the wake of the 60th anniversary celebrations, brought to light the immense responsibility and honour of contributing to such an esteemed legacy. Doctor Who has always transcended the typical confines of a sci-fi show, capturing the imagination of audiences and the media alike, proof of its significant place in British heritage.

The introduction of “Tales of the TARDIS” amidst this landmark celebration added a new layer of complexity and excitement to our work. The series, akin to a vibrant tapestry of the Whoniverse, brought forth a multitude of opportunities and challenges, from engaging with the colour restoration of classic episodes to navigating the intricacies of new narratives. These endeavours were not just tasks but a tribute to the storied history of Doctor Who, an opportunity to delve deeper into its rich mythology and bring forth new stories that resonate with both longtime fans and newcomers.

In terms of visual effects, our role in “Tales of the TARDIS” was nuanced and collaborative. Given the series’ scope and the budgetary considerations, our involvement centered around conceptual discussions on how to craft compelling opening and closing sequences that were both visually engaging and financially viable. This collaborative effort speaks volumes about the creative synergy that defines the production of Doctor Who, where every decision is a delicate balance between ambition and feasibility.



Moreover, the presence of iconic actors from the Doctor Who legacy wandering the corridors during production served as a constant reminder of the series’ storied past and the legacy we were contributing to. It was a surreal experience, one that bridged generations of storytelling and brought the history of the TARDIS to life in a new light.

While the primary visual effects responsibilities for “Tales of the TARDIS” were adeptly handled by the talented team at Painting Practice, our involvement in the discussions and planning stages was crucial. It underscored the collaborative spirit that is the hallmark of Doctor Who’s production, where every department brings its expertise to the fore to collectively elevate the narrative.

How do you divide the VFX post-production work among the different departments or professionals?

The orchestration of VFX post-production work within the Doctor Who series is a symphony of collaboration, dialogue, and meticulous planning that begins right from the initial stages of script development and pre-production. My experiences working alongside talents like Phil Sims, our production designer, and Joel (Collins) have been instrumental in shaping the visual narrative of the series.

From the outset, we immerse ourselves in extensive discussions and visual explorations to map out the conceptual framework for key sequences, debating the nature of creatures and elements we aim to bring to life, be they prosthetic, digital, or a hybrid. These early deliberations are crucial, as the decision to construct elements practically demands significant lead time well before filming commences, encompassing real-time preparations and other logistical considerations.

As the directorial vision takes shape, our team engages closely with the experts at Painting Practice to delve into pre-visualization processes. This phase is not just about laying out scenes; it’s an expansive creative exercise where directors, alongside our VFX team, scout locations, refine the narrative, and envision the story’s visual representation. This collaborative journey ensures that every department, from scriptwriters to set designers, shares a unified vision, fostering a cohesive and harmonious production environment.

One of the unique aspects of our workflow is the seamless integration of Painting Practice within the art department, bridging the gap that often exists between production design and visual effects. This integration allows for a more holistic approach to visual storytelling, where ideas and feedback flow freely, ensuring that every visual element, from the grandest set piece to the smallest detail, contributes to the narrative’s overall impact.

This collaborative ethos extends to our interactions with the entire production team, where early discussions, shared visualizations, and collective brainstorming sessions set the foundation for a synchronized effort. By involving all heads of departments from the earliest stages, we not only align our creative visions but also streamline the logistical aspects of production, from budgeting to scheduling.


The division of VFX post-production work is less about segmenting tasks among departments and more about fostering a culture of open communication, shared responsibility, and creative partnership. This approach not only enhances the efficiency and coherence of our work but also ensures that the magical world of Doctor Who is brought to life with the utmost fidelity to our collective vision.

Can you elaborate on the collaboration with Painting Practice in designing VFX for the 60th-anniversary special episodes, including the unique challenges faced?

Collaborating with Painting Practice on the visual effects for the 60th-anniversary specials of Doctor Who was an extraordinary journey, enriched by the collective genius of incredibly talented individuals. Phil Sims, our esteemed production designer, has a longstanding rapport with Painting Practice, which set a strong foundation for our collaboration. The depth of experience in visual effects and design within their team was pivotal in bringing to life sequences that demanded grandeur and finesse.

One memorable instance was the meticulously pre-visualized sequence involving helicopters approaching UNIT tower in the third special. This sequence, reminiscent of high-octane cinematic experiences, truly elevated the visual storytelling, embodying the essence of Doctor Who in its most majestic form. The realization of such scenes required a harmonious blend of design, narrative integration, and visual effects artistry. Phil’s vision for UNIT tower and its surroundings, coupled with the adept supervision of Dan May and the creative courage of Painting Practice’s team, translated into visuals that resonated with cinematic quality, seamlessly blending with the narrative fabric of Doctor Who.

The key to achieving such impactful visuals lies in the early stages of planning. It’s not about retrofitting grandeur into existing footage; it’s about embedding scale and scope from the conceptual phase. This foresight in planning allows for a more cohesive visual narrative, where each element is purposefully designed to contribute to the overarching story.

Our approach to visual effects is deeply rooted in a comprehensive understanding of the filmmaking process. We constantly evaluate the feasibility of achieving certain effects in-camera versus in post-production, considering factors such as time, budget, practicality, and overall visual impact. This evaluation is a collaborative effort, involving discussions with directors, cinematographers, and producers to determine the most effective approach to bring our shared vision to life.

The collaboration with Painting Practice, Dan, Phil, and the various visual effects studios was emblematic of the creative synergy that drives this industry. It’s a testament to the magic that unfolds when experts from diverse fields unite, each contributing their unique perspective and expertise. This partnership was not just about working alongside one another; it was a deeply integrated process where ideas, challenges, and solutions were shared openly, fostering an environment of creativity and innovation.

In essence, the journey of designing VFX for the 60th-anniversary specials was a celebration of collaborative artistry, where the fusion of design, narrative, and technical expertise culminated in visuals that not only honoured the legacy of Doctor Who but also pushed the boundaries of what’s possible in visual storytelling. It was an endeavour that, through the challenges and triumphs, reminded us of the exhilarating possibilities when we come together to create something truly extraordinary.

In the special episodes, there are futuristic and sci-fi scenes. How did you use virtual production and CGI to bring these elements to life, especially in the episode “Wild Blue Yonder”?


The making of “Wild Blue Yonder” for Doctor Who’s special episodes presented an exhilarating blend of challenges and opportunities, particularly in the realms of virtual production and CGI. From the early conceptual discussions in March-April 2022 to the delivery in August-September 2023, the journey was a testament to the ingenuity and collaborative spirit that defines our work.

Russell’s vision for this episode, with its expansive 40-kilometer spaceship corridor, demanded a creative approach that balanced ambitious storytelling with practical execution. The options before us ranged from filming on massive locations to constructing extensive sets or leveraging green screens for chroma keying. Each potential solution came with its own set of complexities, particularly given the time constraints and budget considerations.

The pivotal decision to harness virtual effects to realize the corridor was born out of necessity and innovation. Working with Tom Kingsley, a director whose fresh perspective and openness to digital technologies were invaluable, we embarked on a meticulous pre-visualization process. This phase was crucial, as it involved storyboarding every moment to ensure clarity in narrative progression and visual coherence.

Our solution was a hybrid form of virtual production, where only the floor was physically built for the actors, with the rest of the

environment rendered digitally. This approach required Tom to storyboard extensively, ensuring every scene was meticulously planned in relation to the virtual space.

Collaboration with Real Time, a company with extensive experience in gaming and virtual technologies, was key. They helped us transition the pre-visualization assets into shoot-ready digital environments using Unreal Engine, allowing us to maintain visual consistency and adapt in real-time to the dynamic needs of filming.

The use of Mo-Sys technology was another cornerstone of our strategy, enabling us to track camera movements and render backgrounds in near real-time, a process that significantly enhanced the actors’ ability to interact with their surroundings, despite the prevalence of green screens.
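The idea behind this kind of tracked green-screen work can be illustrated with a minimal sketch. The code below is not Bad Wolf’s actual pipeline (which is not public); it only shows the basic chroma-key step in which pixels dominated by green are replaced with the corresponding pixels of a pre-rendered background frame, the frame itself being the one selected by the tracked camera pose.

```python
# Illustrative sketch only: a minimal chroma-key composite.
# Pixel values are (R, G, B) tuples in the 0-255 range.

def is_green_screen(pixel, dominance=60):
    """A pixel is keyed out when green clearly dominates red and blue."""
    r, g, b = pixel
    return g - max(r, b) > dominance

def composite(foreground, background):
    """Replace green-screen pixels in the foreground with the background."""
    return [
        [bg if is_green_screen(fg) else fg
         for fg, bg in zip(fg_row, bg_row)]
        for fg_row, bg_row in zip(foreground, background)
    ]

# A tiny 1x3 "frame": actor pixel, green-screen pixel, actor pixel.
fg = [[(200, 50, 40), (10, 240, 20), (90, 80, 70)]]
bg = [[(0, 0, 128), (0, 0, 128), (0, 0, 128)]]  # rendered corridor frame

out = composite(fg, bg)
# Only the middle (green) pixel is keyed and replaced by the background.
```

In a real production system this runs per frame at sensor resolution, with far more sophisticated edge handling and spill suppression, but the keying decision is conceptually the same.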

This venture was not just about employing new


technologies; it was about reimagining the workflow of visual effects to suit the narrative and technical demands of “Wild Blue Yonder.” The editing process, led by the talented Tim (Hodges), was streamlined thanks to the pre-rendered backgrounds, allowing for a more intuitive and efficient post-production phase.

The economic efficiency of rendering 4K backgrounds for significant portions of the episode underscored the value of this hybrid approach. Yet, traditional VFX techniques were still vital, especially for sequences involving spaceships and the dramatic implosion of the corridor, showcasing the diverse skill set required to bring such a complex episode to life.

It’s clear that innovation, while inherently risky, is crucial for pushing the boundaries of what’s possible in television production. “Wild Blue Yonder” not only honoured the legacy of Doctor Who by embracing new techniques but also set a precedent for future explorations in virtual production, highlighting the ever-evolving nature of storytelling in the digital age. The experience was a profound reminder of the joys and challenges ingrained in Doctor Who, where no two projects are ever the same, and the potential for creativity is boundless.

We were going to ask about the corridor and spaceship environment and how you created them using virtual production and new technologies, but you have already given us an idea.

Absolutely. The evolution of these technologies, particularly the unique hybrid workflow we’ve adopted for Doctor Who, is something I eagerly anticipate. This approach notably alleviates some of the conventional pressures of production, specifically the need to retroactively address issues that arise with virtual production environments.

Traditionally, production teams are accustomed to a certain flow: scouting and selecting locations, finalizing cast, constructing sets, and making real-time adjustments to props and set designs. This tangible, hands-on methodology is deeply ingrained in the production


psyche, offering a level of flexibility and immediacy that’s challenging to replicate in a purely digital environment.

The shift towards virtual production, particularly with LED screens and comprehensive previsualization, demands a significant paradigm shift. Every element, from the backdrop to the minutiae of a scene, must be meticulously planned and integrated into the virtual environment well before the actual shoot. This level of pre-production detail can be daunting, as it seemingly locks in creative decisions far earlier than traditional methods, potentially constraining the spontaneity and fluidity that come with on-the-spot direction and production design adjustments.

However, the trade-offs come with their own set of advantages, such as the ability to create and manipulate vast and complex environments that would be impractical, if not impossible, to construct physically. The challenge lies in adapting to this new rhythm, embracing the opportunities it presents while navigating the constraints.

As the industry begins to acclimate to these virtual production tools, it’s fascinating to witness the

growing understanding and appreciation of their potential. The learning curve is steep, but the creative possibilities are expansive, promising a future where the lines between physical and digital production blur, offering unprecedented storytelling capabilities. This transition period is a crucible of innovation, and I’m eager to see how these methodologies evolve, enhancing our ability to bring the fantastical worlds of Doctor Who to life with even greater authenticity and immersion.

Which technology has boosted VFX the most in recent years? If you had to choose among all the technical advances made up to now, which one would you pick?

Navigating through the myriad of technological advancements that have revolutionized the VFX landscape is no small feat. The continuous improvements across the board are staggering, but if I were to highlight one transformative development, it would be the advent of Universal Scene Description (USD) pipelines. This innovation has fundamentally altered the collaborative dynamics

within visual effects studios, allowing for a more integrated and cohesive approach to scene creation. This departure from the sequential, compartmentalized workflows of the past has been nothing short of revolutionary, enabling teams to work concurrently on complex scenes, thereby enhancing both efficiency and output quality.
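The concurrency USD enables comes from its layered composition model: each department authors its "opinions" in a separate layer, and layers are composed with stronger layers overriding weaker ones. The sketch below illustrates that idea in plain Python; it deliberately does not use the real pxr/USD API, and the prim paths and attribute names are invented for illustration.

```python
# Conceptual sketch of USD-style layer composition (plain Python,
# NOT the actual pxr/USD API): each department authors opinions in
# its own layer; on composition, stronger layers win over weaker ones.

def compose(layers):
    """Merge per-attribute opinions; the strongest layer's opinion wins.

    `layers` is ordered strongest-first, so we apply weakest-first and
    let later (stronger) updates override.
    """
    scene = {}
    for layer in reversed(layers):
        for prim_path, attrs in layer.items():
            scene.setdefault(prim_path, {}).update(attrs)
    return scene

# Hypothetical layers for one corridor asset:
layout_layer   = {"/Ship/Corridor": {"length_m": 40000, "material": "grey"}}
lighting_layer = {"/Ship/Corridor": {"exposure": 1.2}}
shot_layer     = {"/Ship/Corridor": {"material": "rusted"}}  # shot override

# Strongest first: the shot override wins on "material", while layout's
# length and lighting's exposure survive untouched.
scene = compose([shot_layer, lighting_layer, layout_layer])
```

Because each department only writes to its own layer, layout, lighting, and shot teams can work on the same scene simultaneously without stepping on each other's files — which is exactly the workflow shift described above.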

However, the emergence of Unreal Engine as a tool in the VFX arsenal marks a significant milestone in our industry. Beyond the initial buzz surrounding LED walls, Unreal Engine offers a level of immediacy and flexibility that was previously unattainable. This tool embodies the convergence of real-time rendering capabilities and high-quality visual effects, a combination that has long been the holy grail for VFX artists, particularly those of us drawing inspiration from the gaming industry.

The pandemic underscored the potential of virtual production, with promises of remote shooting locations and reduced logistical hurdles. While these benefits are tangible, the reality is nuanced, encompassing both advantages and limitations, from cost implications to


aesthetic considerations. Yet, the true value of Unreal Engine lies in its capacity to dramatically accelerate the visualization process. The ability to rapidly prototype and iterate on environments, down to the details like foliage density or water placement, without leaving the digital domain, is a game-changer. This efficiency not only saves time but also opens up new realms of creative exploration, enabling us to refine our visions with unprecedented speed and flexibility.

Looking ahead, the integration of artificial intelligence into the VFX workflow looms on the horizon, promising further innovations in how we conceive and execute visual effects. The potential for automating certain tasks within the VFX pipeline could empower smaller teams to accomplish more, amplifying both creativity and productivity.

Tools like Unreal Engine are not just about doing more with less; they’re about expanding the boundaries of what’s possible in visual storytelling. They offer us the ability to bring our wildest imaginations to life with a speed and fidelity that was previously unimaginable,

marking an exciting era for creators and audiences alike.

The regeneration of the Doctor is always a crucial moment. How did you approach the VFX for the iconic regeneration sequence into the Fifteenth Doctor?

Approaching the regeneration sequence into the Fifteenth Doctor was a venture that required a blend of meticulous planning, creative innovation, and technical prowess. Russell T. Davies, with his distinctively visual storytelling style, crafts narratives that are rich in detail and vivid in imagination. His scripts serve not just as textual narratives but as blueprints brimming with visual cues, guiding the visual effects team towards his envisioned spectacle.

For the regeneration sequence, while Russell’s script laid the groundwork, it left room for visual interpretation and innovation, particularly concerning the design of the VFX. This iconic moment in Doctor Who’s lore necessitated a blend of creativity and technical ingenuity to bring to life. The preparation involved a thorough previsualization of

the action, but the specific design of the regeneration effect itself was something we decided to evolve post-filming, providing us with the flexibility to refine and adapt the visuals.

The execution of this sequence was a testament to the collaborative spirit of our team. Utilizing a techno crane for a motion control pass, we were able to craft a seamless transition between the Doctors, a process that was both ambitious and fraught with logistical challenges. Concerns about the feasibility of this approach, given the weight restrictions and the complexity of the setup on the back lot, nearly led us to reconsider. I must confess, there was a moment when I contemplated a more conventional approach, questioning the necessity of such an elaborate setup for a moment already imbued with inherent significance.

However, the determination and ingenuity of our team, particularly Sean Varney, our shoot VFX Supervisor, and Richard Widgery, a motion control expert, were instrumental in overcoming these hurdles. Sean’s practical problem-solving on set, combined with


Richard’s custom software for technical visualization, allowed us to transform Dan’s dynamic previsualized shot into a feasible reality. Their perseverance and technical acumen were pivotal in realizing this sequence, ensuring that the ambitious vision could be actualized without compromise.

The post-production phase was equally critical. Seb (Sebastian Barker), our VFX supervisor from Automatik, played a crucial role in defining the final aesthetic of the regeneration effect. His initial designs and subsequent refinements encapsulated the essence of the moment, blending seamlessly with the narrative and visual fabric of the series. This period of post-production, enriched by the luxury of time, allowed for a thoughtful consideration of

the show’s evolving tone and visual language, ensuring that the regeneration sequence was not only a spectacle but a coherent part of the series’ broader aesthetic.

I must insist that while technology serves as an enabler, amplifying our creative capabilities, the heart of our industry lies in the people. The collaborative synergy between writers, directors, VFX artists, and technical experts underscores the human element at the core of storytelling. Just as the essence of editing remains centered around the editor’s vision, irrespective of the tools at their disposal, the magic of visual effects is a product of human creativity and ingenuity. In the end, the realization of such iconic moments in Doctor Who is a celebration of collective

creativity, a harmonious blend of vision, talent, and technology.

Sound carries great importance in the film industry. How do you see the intersection of audio and visual elements in creating a seamless viewer experience, especially in a series like Doctor Who?

In the realm of film and television production, particularly in a series as immersive as Doctor Who, the symbiosis between sound and visual elements is paramount. Recently, I navigated through a complex segment of work, the intricacies of which I’m eager to share in the coming months. This experience reinforced a fundamental truth: sound constitutes more than half of the storytelling impact. It’s the unsung hero that, when misaligned, can unravel even the most visually stunning sequences.

The visual effects domain often bears the brunt of misconceptions, with CGI sometimes unfairly maligned. Yet, it’s crucial to recognize that many challenges perceived as visual can, in fact, find their resolution within the auditory landscape. There was a particular instance where


no visual adjustment could rectify an issue — the solution lay entirely in the auditory realm. This underscored a vital lesson: the potency of sound in shaping narrative perception and emotional engagement is unparalleled. Consider cinema’s most iconic moments; their resonance is as much about the auditory experience as the visual. The synergy of music and sound design plays a crucial role in enveloping the audience, transporting them into the narrative’s heart. A prime example is the film “The Insider,” where Russell Crowe’s character’s escalating paranoia is masterfully amplified through sound design — a testament to audio’s capacity to manipulate emotion and tension.

In the context of virtual production, the principles of sound design and integration remain unchanged. The process of capturing performances on stage may be consistent, but the post-production phase is where sound truly sculpts the viewer’s experience. Collaborating with our executive team has highlighted the exhilaration that accompanies the final mix — a pivotal juncture where the visual and auditory

elements coalesce to realize the narrative’s full potential.

The grading process, too, plays a critical role in this alchemy. Our colourist, an integral member of the visual effects team, leverages advanced software capabilities to elevate the visual narrative, marrying it seamlessly with the sound design to enhance the story’s emotional and thematic depth.

This confluence of sound and visuals in the final stages of production is a moment of culmination, where months of iterative edits and versions transform into the cohesive and polished entity that reaches the audience. It’s a period marked by anticipation and excitement, as we witness the disparate elements of our creative endeavour meld into a singular, compelling narrative. In Doctor Who, where the canvas of storytelling spans the cosmos and traverses time, this harmony between sound and visuals is not just desirable — it’s essential to capturing the imagination of viewers and faithfully conveying the grandeur and nuance of the Whoniverse.

Finally, we were wondering about your future projects as a VFX producer and Bad Wolf’s future developments; is there any interesting project that you can share with us?

Currently, my focus is entirely dedicated to Doctor Who, with my tenure on this iconic series drawing to a close in the upcoming months. It’s been an immersive journey, one that has deeply engaged me, leaving little room to venture into other projects at this juncture. However, I’m keenly aware of the vibrant developments within Bad Wolf, buoyed by the insights from our executive team who are involved in a slew of other ventures.

Bad Wolf is at the forefront of embracing cutting-edge technology, with a steadfast commitment to producing content that stands out for its quality and innovation. The pipeline is brimming with exciting projects, both within and beyond the Doctor Who universe, some of which you’ve hinted at in your inquiry. The future looks promising for Bad Wolf, poised to elevate their repertoire with even more compelling narratives and groundbreaking visual storytelling. So, I encourage everyone to keep an eye on their upcoming productions; there’s much to anticipate and celebrate in what lies ahead. 


In the fast-growing landscape of global broadcasting, the tiny yet prestigious principality of Monaco has embarked on an ambitious journey to establish its voice on the international stage through the launch of its public broadcaster within the TV5Monde network. Spearheaded by the visionary foresight of the Prince of Monaco, this initiative seeks not only to enrich the Francophone media landscape but also to champion the critical cause of environmental stewardship, a testament to Monaco’s longstanding commitment to global conservation efforts.


Monaco’s public broadcaster emerges as a beacon of innovative storytelling and conscientious journalism, dedicated to shedding light on the principality’s significant contributions to marine and environmental conservation. Through a carefully curated blend of documentaries, news segments, and prime-time specials, the channel aims to inspire action and awareness, resonating with a global audience deeply concerned with the fate of our natural world.

Behind the scenes, the technical orchestration of Monaco’s broadcasting endeavour is a tale of rapid innovation and strategic collaborations. With no preexisting infrastructure to build upon, the team behind TV Monaco faced the daunting task of creating a state-of-the-art broadcast and digital ecosystem from the ground up. Leveraging cutting-edge IP-based production technologies and a robust digital strategy, TV Monaco is set to redefine the standards of modern broadcasting, seamlessly integrating linear and digital content delivery to meet the evolving demands of today’s media consumers. This lead-in sets the stage for an in-depth interview with Sylvain Bottari, Directeur des Technologies et de l’Antenne / CTO - CDO, a key figure behind Monaco’s broadcasting revolution, offering insights into the challenges, technologies, and aspirations driving this unique media venture.

First question: how did this initiative to create a public broadcaster for Monaco within the French network TV5Monde come about?

The story started with the Prince of Monaco, who wanted to be part of the TV5 Francophone network, in 2021. Of course, to be part of this network, each country needs to have a public TV channel. And Monaco didn’t have a channel that fulfilled all the requirements. For instance, the news team, the editorial news desk, had to be independent from the executive power.

That’s why the idea arose to create the public channel of the Principality of Monaco and join this international network, focusing on Monaco, the French Riviera, and the Italian Riviera, with the aim of addressing viewers not only locally but globally. It means that we must talk to the Monegasque people, of course, but also explain what is done in Monaco to the whole world, so we go step by step. We have different roadmaps for that, but the idea is to carry the programs and the tone of voice of the Principality of Monaco throughout the world.

To put Monaco on the global stage.

Exactly. One of the biggest topics will be the environment, because it’s also why we exist. The Prince of Monaco wanted to reflect the fact that a lot of things are being done in the environmental field that are not well known. The idea was to explain that and to show what’s going on here; it’s a very important topic for the Prince and the government.

We want to have a lot of documentaries and magazines focusing on this topic. We will have a prime-time program about the oceans in 2024, and we will have more news programs, but with a different perspective from what we see on networks in other countries. That’s part of the reason why this project was undertaken, and of course we still have work to do because we are very young. All these things are planned for the short or mid term.

Yes, because you are only four months old…

Yes. Four months. We started September 1st, which was a very challenging deadline for me as Technical and Technology Officer because it started very late; I would say the operational part of the project started very late. I joined the small team in September 2022.


It’s been one year.

Less than a year ago, upon my arrival, the foundation for our broadcast endeavor was yet to be established. We were faced with the challenge of creating a broadcast facility from the ground up, as we had no dedicated space, no infrastructure, no building, and a core team comprising

only four or five individuals operating from a provisional workspace. Despite these challenges, the prospect of designing an entire system from scratch was exhilarating. It presented an opportunity to set objectives across a spectrum of technological domains, including computer science, broadcasting,

infrastructure development, and spatial design. Every decision, from the location of the studios to the arrangement of editing suites, was an exercise in creation.

Our vision for this project was holistic from the outset. We envisioned a 360-degree approach, which meant broadcasting linearly on local digital TV and through French IPTV networks, reaching audiences across France. This includes distribution via Canal Sat and Canal Plus on satellite networks, with the current offering free-to-air over satellite in Europe. Looking forward, we aim to extend our reach to other territories, thus expanding our broadcast and digital footprint.

In addition to building a traditional channel, we have also developed a comprehensive digital platform. Although not previously mentioned, our website encompasses a digital platform, complete with applications for Android and iOS that provide video on demand (VOD) content. We are embarking on supplying content to IPTV platforms in France, starting with Orange and other IPTV distributors. International expansion is a prospective goal, contingent on the rights we secure for our programming.

Asia, Africa, India…

Indeed, addressing all territories, including the United States, is part of our strategy, but we are currently awaiting clearances related to the rights of the programs. We have set ourselves the goal of significantly increasing our self-produced content. As of now, we have acquired certain programs, which is necessary to meet the demands of a 24/7 broadcast channel. However, as we progress, the production of our own content will take precedence. We are looking forward to introducing ‘TVMonaco


Originals’—our unique lineup of programming. This will enable us to offer our content globally, which remains our ultimate objective.

Actually, you already answered my second question. How long did it take to design, equip, and set up the installation?

It took a year from the very start. To be more precise, the building works began on January 2nd, a year ago from now.

That day, January 2nd, 2023, the construction people came and started to work on the place where I am now.

From zero.

From zero. It was a former restaurant for companies. It had nothing to do with our business.

Ok. No broadcast equipment, then.

Exactly. It was also a very big challenge because in seven months we had to create a complete infrastructure of offices, technical rooms, data center, TV studio, MCR, galleries, news editing, and news journalists’ teams. That was also very challenging.

It was a lot of work, congratulations.

Thank you. Actually, people moved into the new building on August 7th. For the second part, we had only three weeks to prepare ourselves: to do the trainings, to rehearse mock news programs, and also to get to know each other. A lot of people arrived at the last minute, all the operations team that I had to hire on the technical side, but also all the journalists. It was quite late.

Suddenly, everybody discovered the building, but they also met each other, and they prepared and tried to work together. It was something memorable.

What production technologies are deployed in TV Monaco?

We have all the technologies that we need for a complete generalist channel. We have everything from the beginning to the end. It means we have contribution feeds that we get on fiber, to receive the files and send them…

TV Monaco’s broadcast system is over IP?

Absolutely, our operations are IP-based, which is crucial for feeding the necessary content to both generalist channels and live external broadcasts. This infrastructure underpins our daily work, utilizing a Media Asset Management System that’s integrated across our editorial teams for general programming, which runs 24/7, and specifically tailored for news and sports content.

We’ve adopted a system called Nxt Edition, which boasts a remarkably innovative user interface. It’s not only future-proof but also incorporates cutting-edge database systems. The platform is flexible enough to integrate AI features and is designed to be story-centric, representing a significant shift in the way we approach content creation. It’s user-friendly, which was a critical concern for us; we needed a system that wouldn’t require extensive training for our team. Fortunately, deployment was swift and user adoption was smooth.
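To make "story-centric" concrete, here is a generic illustration of the idea (not Nxt Edition's actual data model, which is proprietary, and with invented field names): each story bundles its script, media, and graphics, so reordering the rundown moves every element of a story together rather than managing clips and graphics in separate lists.

```python
# Generic sketch of a story-centric rundown: the story, not the clip,
# is the unit of organization. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Story:
    slug: str
    script: str = ""
    media_ids: list = field(default_factory=list)
    graphics: list = field(default_factory=list)

@dataclass
class Rundown:
    stories: list = field(default_factory=list)

    def move(self, slug: str, new_index: int) -> None:
        """Reorder one story; its media and graphics travel with it."""
        story = next(s for s in self.stories if s.slug == slug)
        self.stories.remove(story)
        self.stories.insert(new_index, story)

rundown = Rundown([Story("weather", media_ids=["gfx-01"]),
                   Story("sports"),
                   Story("headline", media_ids=["clip-07"])])

# A late-breaking story is promoted to the top of the bulletin;
# its clip and graphics move with it automatically.
rundown.move("headline", 0)
```

The practical benefit for a small newsroom is that a last-minute running-order change is one operation, with no risk of the script and its media falling out of sync.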

The production and playout capabilities we’ve developed are robust, allowing us to manage with a lean team or scale up for larger productions. Our daily live news programs are

distributed globally, including to TV5 Monde. Additionally, every weekend we produce two live sports shows, meaning our live production is a constant, daily effort. The Nxt Edition ecosystem supports this rigorously, encompassing graphics and production playout systems.

In terms of hardware, we’ve invested in a vision mixer from Sony, known for its power and reliability. Our studio is equipped with Panasonic remote PTZ cameras, and for audio, we rely on a Calrec system. This suite of technology not only supports our current production demands but also positions us to embrace future trends and innovations in broadcasting.

Does TV Monaco use remote production? What kind of tech is deployed within your OB Trucks?

Our operations currently do not include proprietary Outside Broadcast (OB) vans. Instead, we utilize compact yet powerful systems such as Haivision backpacks for heavy-duty remote productions. Our technology allows us to handle up to four concurrent feeds, which enhances our broadcasting capabilities significantly.


In TV Monaco, we leverage the extensive fiber-optic network provided by Monaco Telecom, the nation’s public telecommunications provider. This infrastructure offers us the capability to broadcast events directly via fiber, eliminating the need for satellite dishes in local productions.

For international broadcasts or those outside our fiber network’s reach, we partner

with industry leaders like Globcast or EasyTools for IP-based feed acquisition. Additionally, beyond our main production gallery, we have invested in a versatile mobile production unit. This compact setup is equipped with three cameras and a small mixer, making it ideal for covering external events. It’s a flexible solution that can be deployed not only for traditional broadcasting purposes but

also for digital platforms and social media live streams, such as Facebook Live or Twitch. This mobility allows us to adapt swiftly to the dynamic nature of live events, making our production both efficient and responsive to the needs of modern audiences.

What kind of productions does TV Monaco develop?

Our production capabilities are bifurcated into two distinct


segments. On one hand, we have the capacity for large-scale in-house productions that utilize our expansive gallery. On the other, we are equipped with a compact, deployable mobile system that caters to our flexible production needs. For example, during events like the Formula 1 Grand Prix in Monaco, we can effortlessly rent a terrace and set up a live show using this mobile system, which aligns perfectly with our current operational requirements.

In terms of technology, our infrastructure includes a continuous, 24-hour playout system. Additionally, we maintain a Master Control Room (MCR), which serves as the central hub for overseeing and managing all production flows. This MCR is the nerve center of our operations, ensuring seamless integration and coordination across various production elements.

Earlier, you told us that TV Monaco is planning to create original productions. To create these original productions, do you have a studio with VR, with virtual sets for virtual production?

It was something that we had in mind. But, currently, we are not using VR.

So you are thinking about developing and implementing it.

It could be. Our current slate of original productions primarily consists of external productions such as magazines, feature stories, news segments, and documentaries focusing on environmental issues, lifestyle, and other nonfiction content. We have not ventured into fiction as of yet. In our traditional studio setup, we are fully equipped with a Chroma Key System, utilizing green screens that allow for creative flexibility on special occasions.

Regarding virtual production, it was considered in our initial planning stages and included as an optional element in the tender I prepared. As time progressed, we determined that it was not immediately essential for launch. However,


we are structured in such a way that integrating virtual production into our workflow in the future is entirely feasible. Our infrastructure is ‘virtual production ready,’ so to speak, but implementing it would necessitate hiring specialists with expertise in 3D design and other related skills. It’s a significant undertaking both in terms of workload and investment.

Nonetheless, it’s a fascinating area that we continue to monitor closely. We are actively contemplating its potential applications. Additionally, I’d like to note that we have collaborated with a partner to develop a virtual weather program, which features a distinctive mascot – a unique character that has become a signature element of the program.

Like a pet.

Yes, exactly. Our distinctive approach to presenting the weather forecast involves a unique character — a bird known locally as a ‘gabion,’ which is native to Monaco. This concept was envisioned by our top management as a novel method to engage our audience with weather updates.

In our daily broadcasts, we feature a 3D virtual gabion

that delivers weather information. This innovative presentation has generated considerable buzz and has been well-received for its originality.

The creation of the virtual gabion was not an in-house development; it was facilitated through a technical partnership. From the audience’s viewpoint, this character adds a touch of levity to our weather forecast segments, which are broadcast three times daily: in the morning, at midday, and in the evening, every day of the year. It’s a whimsical element that has become a beloved part of our daily programming schedule.

Which broadcast controller does TV Monaco use?

In the realm of broadcast control, we have integrated Cerebrum by EVS into our workflow. Implementing a broadcast controller was a critical step for us, as it allowed centralized control over the various devices in our arsenal. We have equipped our facility with a cutting-edge router from Ross, the Ultrix, and we had the distinction of being the first worldwide to broadcast using the latest version of this equipment. This pioneering move was not without its challenges, especially considering our aggressive timeline, but we’ve successfully navigated the initial hurdles. Any early-stage difficulties were mitigated by the exceptional support from Ross’s team in Canada, who were instrumental in addressing the initial bugs. Now, the system operates flawlessly.

The Cerebrum System is pivotal for the seamless triggering and coordination of our devices. It has proven to be an invaluable tool in our operations.

We also initiated our studio operations with Cerebrum and have developed scripts to streamline its usage, making it more operator-friendly. One particular challenge I encountered was staffing; the broadcasting industry in Monaco isn’t vast, making it difficult to find the right talent. However, we’ve assembled a team that blends highly experienced professionals with eager novices. It’s a dynamic environment where seasoned experts mentor those just beginning their careers. This mix creates a fertile learning ground and is particularly exciting as it fosters growth and innovation from the ground up.


Regarding workflow management, do you work with cloud solutions?

Initially, we contemplated establishing a 100% cloud-based infrastructure. However, accurately projecting the day-to-day costs of such a setup proved to be a complex challenge. Additionally, we encountered obstacles with managing live feeds using a fully cloud-based system.

To address these challenges, we adopted a hybrid approach. Our primary playout system is based on-premises for robust, real-time broadcast management, but we’ve implemented a cloud-based disaster recovery site for high availability and resilience. This site can be activated swiftly in case of any on-site failures.
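
A primary/DR arrangement like this can be reduced to a simple supervisory loop: health-check the on-prem playout, and trigger the cloud disaster recovery site only after repeated consecutive failures. A minimal Python sketch of that logic (the check and activation callbacks are hypothetical stand-ins, not TV Monaco’s actual tooling):

```python
import time
from typing import Callable, Optional

def supervise(check_primary: Callable[[], bool],
              activate_dr: Callable[[], None],
              max_failures: int = 3,
              interval_s: float = 5.0,
              max_cycles: Optional[int] = None) -> bool:
    """Poll the on-prem playout; fail over to the cloud DR site
    after `max_failures` consecutive failed health checks.
    Returns True if failover was triggered."""
    failures = 0
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        cycles += 1
        if check_primary():
            failures = 0          # healthy again: reset the counter
        else:
            failures += 1
            if failures >= max_failures:
                activate_dr()     # e.g. start cloud playout, repoint delivery
                return True
        time.sleep(interval_s)
    return False
```

Requiring several consecutive failures before switching avoids flapping between sites on a single transient glitch.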

Furthermore, our media assets are securely backed up in the cloud, ensuring additional data protection. For traffic management and ERP tasks related to program scheduling and database management, which trigger the play-out rundown, we employ a Software as a Service (SaaS) platform. This is done in collaboration with MediaGeniX, which operates within the cloud environment.

We also utilize a cloud-based solution for our play-out graphics. Thus, while our infrastructure is primarily on-premises, we strategically leverage cloud technology for certain aspects, resulting in a versatile and efficient hybrid model.

TV Monaco’s approach to infrastructure security and cloud usage is sophisticated and multifaceted, particularly in light of Monaco’s digital advancements. The broadcaster operates within a high-security paradigm, partly due to the principality’s investment in Monaco Cloud, a sovereign cloud infrastructure run as a joint venture with the Government of Monaco and established to offer digital services to the government and businesses both within and beyond Monaco’s borders. The sovereign cloud assures that data is protected, compliant, and kept within the national territory, as per Monegasque law, and is exempt from international data-sharing regulations due to Monaco’s non-EU status.


This setup underscores Monaco’s commitment to data security and digital sovereignty, aligning perfectly with the principality’s ambition to establish itself as a digital hub through initiatives such as the Extended Monaco programme.

In the context of TV Monaco’s operations, this robust cloud infrastructure provides an added layer of security for sensitive data. Monaco Cloud, in partnership with VMware, offers a sovereign cloud service that aligns with the principality’s digital transformation goals, ensuring that data processing occurs within the country’s borders and adheres to the highest data compliance regulations.

Moreover, the broadcaster has a private on-prem data center to manage certain workflow components, harnessing the cloud intelligently where it is most beneficial and feasible. The on-prem system is complemented by disaster recovery sites in the cloud, media backup, and cloud-based traffic management systems, with SaaS platforms used for program and database management. This hybrid model benefits from the cloud’s flexibility and security while maintaining core play-out systems on-premises for stability and control.

Additionally, the IT infrastructure for non-broadcast functions, such as email, operates on Microsoft Azure, reflecting a blend of cloud services from both AWS and Microsoft, which is a common practice in the industry.

The strategic use of a sovereign cloud, alongside on-prem data centers and cloud services for non-critical IT functions, ensures a balance between agility, cost-efficiency, and security, which is crucial for a public broadcaster like TV Monaco. This model exemplifies how TV Monaco is embracing the latest in digital infrastructure to meet the specific needs of broadcast security and efficiency.

Which solution does TV Monaco manage audio with?

Our audio infrastructure is fully integrated with Dante audio networking. Dante is renowned for delivering high-fidelity, uncompressed, multi-channel digital audio over a standard Ethernet network. In addition to our primary audio production system, we also have specialized spaces designed for audio work. This includes dedicated commentary booths, which are used for live broadcasts, as well as separate voiceover rooms for promotional content and narrative segments in our productions. These facilities are equipped with the latest technology to ensure professional-grade audio for all our broadcasting needs.

And which intercoms does TV Monaco use?

For intercom, we have a good partnership with Riedel. It was only at the last minute that we were able to choose Riedel. They made a great effort, and we are quite happy to work with this technology.

Riedel is a well-recognized brand in the broadcast industry, known for its sophisticated intercom systems. TV Monaco has likely integrated Riedel’s Artist, a digital matrix intercom system that provides flexible and scalable communication solutions. This system is highly customizable, ranging from 8x8 to 1024x1024 non-blocking ports, enabling TV Monaco to adapt to evolving project requirements and technologies.

TV Monaco has linear TV and VOD. How do you manage both forms of transmission?

Our approach to content delivery is seamlessly integrated, with the same team managing both digital and linear programming. This unified strategy ensures a cohesive content strategy across all platforms. The programming is dynamic, with digital platforms often featuring complete seasons of magazines or documentaries available at once, contrasting with the weekly schedule on linear TV.

The backbone of this integrated programming strategy is the WHAT’SON system by MediaGeniX, a sophisticated traffic system that centralizes content, media, and rights management on a single platform for linear, on-demand, and digital distribution. This system facilitates content lifecycle management across all broadcast mediums, providing users with detailed overviews that flag potential issues and keeping content information accessible at all stages.

Our team leverages the WHAT’SON system to automate workflows, which are triggered by its Media Asset Management (MAM) system. Content is distributed to either the linear or VOD platforms by tagging media assets, streamlining the publication process. Additionally, we recognize the unique nature of social media content, which is why we dedicate teams to curate content specifically for these platforms, tailoring the format to suit mediums such as Instagram, with considerations for aspect ratio and content length, ensuring an optimized viewer experience across all digital touchpoints.
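
The tag-driven distribution described here can be pictured as a small dispatch step: the MAM attaches tags to each asset, and the traffic layer sorts assets into per-platform publication queues accordingly. A simplified Python illustration (the tag names and asset structure are invented for the example, not WHAT’SON’s actual API):

```python
from collections import defaultdict

def route_assets(assets):
    """Sort media assets into delivery queues based on their tags.
    Each asset is a dict like {"id": "ep01", "tags": {"linear", "vod"}}.
    An asset tagged for several platforms is published to each of them."""
    queues = defaultdict(list)
    for asset in assets:
        for tag in ("linear", "vod", "social"):
            if tag in asset.get("tags", set()):
                queues[tag].append(asset["id"])
    return dict(queues)
```

The point of the pattern is that editorial staff only ever set tags; the routing to linear playout, VOD, or social pipelines falls out automatically.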

Which solution does TV Monaco use for managing live graphics? Does TV Monaco integrate VR or augmented reality into these graphics?

At the current stage, virtual reality (VR) and augmented reality (AR) are not implemented in the initial phase of our infrastructure. However, we place a significant emphasis on the use of graphics, especially in sports broadcasting, where they are critical for enhancing viewer engagement. We have a dedicated post-production team, including two graphic designers, who work closely with the sports editorial team and journalists to handle all the graphics related to sports results and other content. This work is demanding but brings substantial value to our programs by enriching the viewing experience with high-quality graphics.

Our on-air and on-set graphics are sophisticated, with large SMD screens in the studio acting as dynamic backdrops that assist in visually conveying the subject matter to the audience. These graphics are part of an integrated production technology ecosystem that includes the Nxt Edition system, which is central to our production operations.

While I have previously mentioned playout graphics—like adding logos and other elements managed by the Master Control Room (MCR)—production-level graphics integration is handled through Nxt Edition. The expertise of Nxt Edition’s team in graphics is well established; they are the creators of CasparCG, which leverages HTML5 technology for graphic rendering. This integration underscores our commitment to delivering high-caliber broadcast content that is both informative and visually compelling.
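
For context on how such a pipeline is driven: a CasparCG server is controlled over a plain-text TCP protocol called AMCP, in which a `CG ADD` command loads and plays an HTML template with a payload of field data. The sketch below only builds the command string (the template name and data fields are illustrative, not TV Monaco’s actual templates):

```python
import json

def amcp_cg_add(channel: int, layer: int, template: str,
                data: dict, cg_layer: int = 1, play: bool = True) -> str:
    """Build an AMCP 'CG ADD' command for a CasparCG server.
    AMCP commands are plain text terminated by CRLF; template data is
    passed as an escaped JSON string that the HTML template consumes."""
    payload = json.dumps(data).replace('"', '\\"')
    return (f'CG {channel}-{layer} ADD {cg_layer} '
            f'"{template}" {1 if play else 0} "{payload}"\r\n')
```

In use, a controller would open a socket to the server (port 5250 by default) and send the resulting string; here we stop at command construction.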

We have already spoken about the documentary and fiction productions that TV Monaco is planning to make in the near future. Is it planning to distribute or monetize this content in any way?

In our pursuit of continuously enhancing the quality of our programming, it’s clear that financial investment is key. Our primary funding comes from public allocations provided by the government.

However, we are actively exploring alternative revenue streams to increase our investment in content creation, which is crucial for our growth and sustainability.

One of the strategic methods we are employing to boost our budget is the sale of our own high-quality programming. This initiative is more than just a short-term objective; it forms a critical part of our financial strategy for the current and coming years. By monetizing our content, we aim to reinvest in our programming, thereby enriching our viewers’ experience and expanding our reach. This approach not only supports our financial goals but also fosters the production of compelling content that resonates with a broader audience.

What developments are you planning for the near future? Is TV Monaco thinking about expansion?

As we approach full capacity within our offices, our focus is not merely on spatial expansion but on amplifying the quality and quantity of our productions. As we head into 2024, a key objective is to authentically capture and convey all events in Monaco from a uniquely Monegasque perspective.

A significant highlight for this year is the Olympic Games in France. We are deploying at least two members of our team to comprehensively cover the event, with a special focus on Monegasque athletes. This will include dedicated programming, special magazines, and features that spotlight the Olympics. Similarly, with the Tour de France passing through Nice or Monaco, we will seize the opportunity to cover this monumental sporting event.

2024 is also an important year for environmental coverage. We are planning extensive coverage of Monaco Ocean Week and other significant environmental events in the South of France, such as those in Marseille. Through these initiatives, we aim to elevate our environmental reporting.

In the realm of news, we aspire to demonstrate that quality journalism can be achieved even with a modest team. We are committed to covering a wide array of topics and to producing special editions for various events throughout Monaco and the French Riviera. The objective for 2024 is to transition from reactive reporting to more anticipatory journalism, allowing for a calmer, more strategic approach to news coverage.

On the distribution front, we are actively working to expand our channel’s reach to other platforms and countries progressively, month after month. This continuous expansion is a thrilling prospect for us and is integral to our mission to broadcast the essence of Monaco to a global audience.

Is TV Monaco thinking about using an OTT platform?

It could be, yes. To elevate the production value and diversity of our programming, we are considering various strategies to generate revenue. One avenue is to distribute our content to other networks, particularly OTT platforms, which would expand our reach and offer additional sources of income. Not all content will be sold; some will be shared as part of a reciprocal arrangement with the TV5 Monde network, adhering to a deal that involves program exchange. Additionally, selected programs are slated to be broadcast on TV5 Monde, enhancing our international presence.

In terms of advertising, it is a potential revenue stream we may tap into in the future. Currently, as we are in the early stages of establishing our brand, our focus is on growing our audience and becoming more well-known. Once we have achieved a certain level of recognition and viewership, we will thoughtfully consider the incorporation of advertisements into our TV broadcasts. This would represent a strategic move to increase our budget, thereby giving us the financial capability to produce an even broader array of high-quality programs. 


Broadcasting in the IP era

Case studies of innovation and adaptation

The broadcast industry is at the cusp of a technological renaissance, with IP (Internet Protocol) technology at the heart of this transformation. The transition to IP-based infrastructures is enabling broadcasters to enhance the flexibility, efficiency, and quality of their content delivery systems. This shift is not just about keeping pace with technological advancements; it’s a strategic move to meet the evolving consumption habits of viewers who demand high-quality, on-demand content accessible across multiple devices.

One notable example of this transition is the Canadian Broadcasting Corporation (CBC), which has embraced IP technology to overhaul its production and distribution workflows, with impetus from Lawo. By adopting IP, Lawo helped CBC significantly increase its agility in content production and distribution, enabling seamless collaboration across different locations and facilitating the delivery of high-quality content to its audience. Similarly, ITN, a leading news and content provider, has partnered with Arista to leverage IP technology in enhancing its network infrastructure. This move has empowered ITN to manage the increasing volume of high-resolution content efficiently, ensuring that its news delivery services remain robust and responsive to the fast-paced nature of global news dissemination.

Foxtel, Australia’s largest subscription television provider, represents another significant milestone in this journey. The company embarked on a transformative project to shift its entire architecture to IP, decommissioning its legacy SDI Television Center. This ambitious move, led by Matt Carter, head of broadcast engineering and projects, involved overcoming numerous challenges to transition to a multi-format software platform capable of handling over 2000 channels. The collaboration with Magna Systems & Engineering and the adoption of TAG’s monitoring and visualization system were pivotal, enabling Foxtel to monitor multiple workflows and formats across various sites at scale.

In the sports entertainment sector, Fubo (formerly known as fuboTV) has made significant strides by partnering with Zixi to transition to an IP-based infrastructure for 4K live video delivery. This strategic shift not only underscores Fubo’s commitment to delivering premium quality content but also highlights the scalability and reliability that IP technology brings to live sports broadcasting. The adoption of Zixi’s Software-Defined Video Platform (SDVP) has enabled Fubo to streamline its content acquisition and distribution processes, ensuring high-quality live streaming across various networks and devices.

These case studies exemplify the broader trend within the broadcast industry towards IP migration. By leveraging IP technology, broadcasters are not only enhancing their operational efficiencies but are also setting new standards for content quality and viewer engagement. As the industry continues to evolve, the adoption of IP infrastructures is poised to redefine the broadcast landscape, offering broadcasters unprecedented opportunities to innovate and deliver compelling content in an increasingly digital world.


You Always Win As A Team

In the IP domain, defining success is not as easy as one might think. Does it mean that a new device is recognized and fully functional as soon as you connect it to the network? Or that HOME Apps that had yet to be officially released were already in use at a global event and delivered big time?

For Lawo, success always hinges on objective criteria for assessing how well the goals set out by the customer are met or even exceeded. For now, architecting an IP infrastructure from a standard blueprint is tricky, because no two infrastructures are the same. SMPTE ST2110 is not only an open standard; it also carries the promise that any compliant device can communicate with, and control, any other compliant device by any manufacturer. Yet the product mix tends to differ quite substantially from site to site.
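
The interoperability ST2110 promises rests in practice on every device describing its streams the same way: each essence is advertised to receivers as an SDP description (commonly handed around via NMOS IS-04/05). The sketch below builds an abridged SDP for an uncompressed ST 2110-20 video flow; real deployments also carry PTP reference-clock and ST 2022-7 grouping attributes, omitted here:

```python
def st2110_video_sdp(mcast: str, port: int, src: str,
                     width: int = 1920, height: int = 1080,
                     fps: int = 25) -> str:
    """Build a minimal SDP description for an uncompressed
    SMPTE ST 2110-20 video stream (abridged for illustration)."""
    return "\r\n".join([
        "v=0",
        f"o=- 0 0 IN IP4 {src}",          # origin: the sender's address
        "s=ST2110-20 video",
        f"c=IN IP4 {mcast}/64",           # multicast group and TTL
        "t=0 0",
        f"m=video {port} RTP/AVP 96",
        "a=rtpmap:96 raw/90000",          # uncompressed video, 90 kHz clock
        (f"a=fmtp:96 sampling=YCbCr-4:2:2; width={width}; "
         f"height={height}; depth=10; exactframerate={fps}; "
         "colorimetry=BT709; PM=2110GPM; SSN=ST2110-20:2017"),
    ]) + "\r\n"
```

Any compliant receiver parsing this description knows the multicast address to join and the exact raster it will find there, which is what makes cross-vendor routing possible at all.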

The beauty of IP is that its possibilities are almost infinite. This can also be its curse: hardly anyone considers ST2110 IP an out-of-the-box solution that merely requires users to dismantle their baseband SDI infrastructure, set up a few switches and then connect IP-compatible gear to them. That is not how it works.

For one, only a few installations are greenfield projects that start afresh with an all-new infrastructure. And quite a few devices that broadcast operators use on a daily basis are not yet IP natives. As a result, almost any setup needs to include gateways to which an existing pool of SDI devices can be connected.

Their output signals are converted to IP streams that are then sent to the desired destinations. For SDI devices that need to receive signals, the relevant streams are converted from IP back to SDI—often by another gateway in another location—and transmitted to the relevant inputs on the device.


Currently, most devices used to ingest and output audio and video signals are not IP natives. Gateways like Lawo’s .edge or A__stage series will therefore play an important part for years to come, to connect cameras, microphones, etc. Even a cloud-based operation still requires entry and exit points for sources and destinations that are unable to transmit, or receive, IP streams.

The Agony of Choice

When you enter into talks with a systems integrator or a vendor like Lawo, they usually ask questions about your envisaged workflow, the scope of the envisaged infrastructure, the number of operators that need to be able to work concurrently, their location for any given assignment, and so on.

You probably already have an idea—and the case studies on Lawo’s website may have nudged you in one direction rather than another. During these rounds of preliminary talks, several approaches or suggestions that sound equally appealing may be brought up. You and your team should discuss them and get back to the vendor’s sales representative or one of its pre-sales engineers with more specific questions.

A true partner listens carefully and happily provides feedback as to what is possible—and how this can be implemented in the most elegant and cost-effective way.

This information will allow you to compile a call for tender, which is obviously based on internal discussions regarding who needs what, and how this may translate into a workable workflow. Lists of required functionality received from your audio and video engineers need to be included in the final call for tender document.

Love me Tender

The next stage will be for you to sift through the received tender responses. Well-crafted replies not only describe a solution and list the products needed to achieve the envisaged workflows, but also detail how they deliver on their promise and what else they allow you to do. Coupled with a detailed description of a plausible vision for your project, this should give you an idea of whether the vendor is a good fit.

Again, as a customer, you have the right to ask for clarification and comments on why the vendor recommends something your team was unaware of. Your team may have imagined a workflow that may very well be delivered in a different, less costly and more flexible way. Listen to such suggestions. After all, you have the experience and people skills to separate the wheat from the chaff at this stage.

Although there may be some haggling involved, your focus should be on whether the proposed solution has all the ingredients to see you through the next five to ten years—and beyond.

If you—or your systems integrator—approach several vendors for different parts of the envisaged solution, be sure to ask them about interoperability: how well the proposed products communicate with devices or software by other brands, and so on. Never forget that you first and foremost need a solution rather than product specifications.

We’re in This Together

There are examples of broadcasters that choose not to rely on an external systems integrator. The most remarkable instance of this approach can be found in Montreal’s Nouvelle Maison de Radio-Canada (NMRC). There, the decision was to keep the planning and execution stages in-house, forcing the project team to quickly familiarize themselves with open-standards ST2110 IP, NMOS, SDN, overall control and a range of other concepts.

By the time they started contacting carefully selected vendors, the project team knew full well that their vision of maximum agility and the required capabilities of the prospective equipment required an open partnership.

The NMRC team was not just expecting advice and a helping hand, mind you—it also expected its partners to amend and refine the specifications of their products to better implement the project.

Some partners were at first hesitant to go that far, unsure of what they were committing to and whether the ambitious goals could be reached in a timely fashion to meet the deadline.

The proposed open working relationship, where temporary failure was deemed a stepping stone to the next level, eventually convinced them to commit to the project. In Lawo’s case, this has resulted in significant enhancements of its VSM broadcast control system and other products.

The NMRC team clearly deserves most of the credit for the successful delivery of one of the biggest IP broadcast infrastructures to date. Its highly flexible workflows exceed anything that has been achieved in the past. But the team knew who to rely on as partners and how to steer the collaboration to reach their ambitious goals: pooling devices to use them more efficiently, and controlling an agile infrastructure via single hardware or software button presses.

And the Winner Is…

In Radio-Canada’s case, the winner was not just the Montreal-based broadcasting arm of CBC, but the broadcast community at large: product refinements and ingenious ways of tackling certain aspects were only part of the story. The other part was the refinements to the ST2110 and NMOS specifications triggered and performed by the team.

The team’s readiness to share all insights gained in the process is a noble gesture that will benefit the broadcast community at large. The NMRC project goes to show that going it alone—without a systems integrator—indeed works, provided you work hand in hand with trusted partners and openly share qualms and light-bulb ideas. This is how you move mountains. 


ITN chooses Arista Networks for major SMPTE ST2110 media network upgrade to support evolving production and studio needs

ITN is one of the world’s leading multimedia content providers, producing news, entertainment and factual programming across a range of platforms. Following a decade of growth, the underlying SDI network at ITN’s studio and production complex in London was starting to reach its limits. For the upgrade and modernisation of the production galleries, ITN chose Arista Networks as the core of a new SMPTE ST2110-based IP media network. The successful project has delivered scalable performance, reliability, and a simple transition to meet ITN’s evolving production needs and future growth.

For sixty-five years, ITN has been one of the UK’s foremost television production and news organisations. Combining cutting-edge technology and innovation with expert storytelling flair, ITN is recognised globally for its quality, integrity and creativity. ITN

makes the award-winning daily news programmes for ITV, Channel 4 and Channel 5 in the UK, providing comprehensive and impartial news to the British public and reaching millions of viewers every day. ITN Productions, ITN’s independent production division,


produces high-quality content across seven distinct areas: television production, sports, advertising, industry-specific programming, education content, entertainment and news footage syndication, and post-production.


ITN moved into its news headquarters, a purpose-built building on Gray’s Inn Road, London, in 1991. The site has six studio spaces and hosts ITN Productions, its award-winning creative production arm that produces over 10,000 hours of content every year for broadcasters, businesses, brands, rights holders and digital channels. Over the last 30 years, ITN’s output has grown, and although its production environments have adapted using the prevailing technology, the underlying network based on SDI routing was starting to limit its ability to scale.

As Jon Roberts, Director of Technology, Production & Innovation, explains, “Our needs have changed as ITN has grown and today it is critical for us to be able to adapt to new demands and workflows with flexibility at scale. The move to an IP network is a significant change for us – and it will be with us for a long time – so our scoping project was extremely thorough.”

Over a multi-year process, ITN technical teams evaluated networking designs, suppliers and technologies before selecting dB Broadcast as its lead system integrator and Arista Networks for its core IP infrastructure.

“We already had Arista successfully deployed in our wider enterprise network,” explains Jon Roberts, “and we were impressed not only by the network’s reliability, but also the openness and adherence to standards, along with many successful deployments within similar large-scale broadcasting facilities.”

One of the main criteria for the project was to deliver a solution based around the SMPTE ST2110 standard, which provides an industry-wide framework for integrating IP-based media infrastructure. As part of this approach, ITN also chose to deploy an EVS Cerebrum Broadcast Control System (BCS) and IP-based production equipment from Grass Valley on top of a high-performance Arista network infrastructure. This ‘best of breed’ implementation has been field-proven in a number of similar environments and aligns with ITN’s longer-term strategy of delivering a software-defined approach to network orchestration.



dB Broadcast worked extensively on the design and proposed two separate networks. The first is the ITN PTP & Media Network, which uses a fully redundant architecture. The heart of the new network features an active/active pair of Arista 7280CR3-96 series switches, purpose-built around a flexible arrangement of 1G/10/25/40/50/100GbE ports to support the widest range of connected devices and workflows.
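
Fully redundant ST2110 media networks typically lean on SMPTE ST 2022-7 seamless protection: each sender emits an identical RTP stream on both network paths, and the receiver rebuilds a single flow by keeping the first copy of each packet (by sequence number) to arrive on either path. A deliberately simplified merge in Python (real receivers also buffer for path skew and handle 16-bit sequence-number wraparound):

```python
def seamless_merge(arrivals):
    """ST 2022-7-style seamless protection, simplified.

    `arrivals` is an iterable of (seq, payload) packets as they come in,
    interleaved from the two redundant network paths. The first copy of
    each sequence number wins and duplicates are dropped, so a packet
    lost on one path is transparently covered by the other.
    """
    seen = set()
    out = []
    for seq, payload in arrivals:
        if seq not in seen:          # first copy wins
            seen.add(seq)
            out.append((seq, payload))
    out.sort(key=lambda p: p[0])     # restore stream order
    return out
```

Because protection happens per packet rather than per stream, a failure on one switch pair never appears as even a single frame glitch downstream.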

Alongside the media network is the ITN Monitoring & Control Network, which uses a spine-and-leaf design that again implements a fully redundant architecture based on a pair of Arista 7050X3s – with 10G active interconnects – spanning out to multiple Arista 7010T leaf switches that connect to various locations including newsrooms, editing suites, MCRs and other critical elements. All Arista 7000 series products run the same Arista EOS® software and binary image, simplifying network administration with a single standard across all switches.

“This approach offers high resiliency and flexibility, and can scale out quickly,” explains Mike Bryan, Technical Director at dB Broadcast Ltd. “The whole solution is supported by Arista CloudVision®, which gives a detailed view of the entire network and integrates seamlessly with EVS Cerebrum, allowing us more control over the entire media workflow. It really is state of the art!”


As a 24/7 organisation, the new media and control networks have been deployed in parallel with the existing infrastructure. The first SMPTE ST2110-enabled gallery was ready to put into service in early November 2021, and over the next few months, Bryan and the team at ITN plan to migrate the core ITV production galleries, Channel 4 and Channel 5, plus ITN Productions, over to the new networks.

Although the new network design and move towards an all-IP infrastructure is part of a longer-term strategy to modernise and scale the ITN environment, there have been significant immediate improvements. “We have built more resilience into our media operations by removing single points of failure such as the old SDI routers,” says Jon Roberts. “The implementation of CloudVision and Cerebrum not only gives us a lot more visibility over our media operations, it also allows us to make significant changes and create new production workflows quickly.”

“The next stage is to start to implement more orchestration and automation capabilities to allow us to better utilise new IP based technologies. Ultimately, this is part of a longer-term strategy to utilise innovative technology to support the best news, entertainment and factual storytelling that is at the heart of the ITN brand,” Jon Roberts concludes. 


Evolution of Foxtel: The Leap to IP Infrastructure

Transitioning from SDI to IP with cutting-edge monitoring solutions

At A Glance – The Players

Foxtel is Australia’s largest subscription-based television provider with approximately 4.5 million subscribers across various platforms including cable television, direct broadcast satellite television, and IPTV streaming services. Foxtel connects customers to award-winning live channels and on-demand libraries from major international studios featuring local and international drama, entertainment, movies, sports, news, documentaries, reality, lifestyle, comedy, and kids shows together with its own Australian Foxtel Originals.

Matt Carter, head of broadcast engineering and projects at Foxtel, looks after the provider’s broadcast operations, including playout, contribution, and distribution, as well as the projects team. Additionally, he oversees all upcoming projects and ensures that content is being delivered to subscribers.

Tim Banner is solutions architect for Magna Systems & Engineering. With offices in Australia, New Zealand, Singapore, Hong Kong and Indonesia, the company provides best-of-breed solutions to the media, entertainment and telecommunications industries. As solutions architect, Banner is responsible for staying ahead of the technology curve by continuously seeking out new products and solutions that meet the needs of Magna’s customers.


The Challenge

When Foxtel made the decision to decommission its legacy SDI Television Center at Macquarie Park and transition its entire architecture to an IP-based infrastructure, it came with an abundance of challenges and complexities. Forward-thinking technologies had to be uncovered, evaluated, and implemented to drive a multi-format software platform comprising over 2000 channels, with built-in mechanisms to handle future growth in both the number of channels and emerging formats. The project also required careful balancing of compute and memory capacity to work correctly.

The project was led by Carter, who brought in Banner to collaborate on a system that serves customers with a mixed-format menu.

Beyond transitioning to all-IP, Foxtel needed to monitor multiple workflows and formats across multiple sites at high scale. A critical component of Foxtel’s success rested on a monitoring and visualization system capable of simultaneously handling all types of IP signals, including any combination of uncompressed and compressed HD and UHD, NDI, and OTT streams such as HLS and DASH with CMAF, including DRM. The system’s ability to scale with the subscription-based television company’s growth also topped the list of necessities.

A Very Fast Solution

Carter and Banner had numerous aspects to consider commercially, technically, and operationally, and decided on a TAG monitoring and visualization system following extensive market research, testing and trials. TAG’s ability to support any format, protocol, codec, or resolution added agility to Foxtel’s business strategies, and in turn its economics, as they could put a compressed feed against an uncompressed feed on the same COTS hardware and confidently plan to expand offerings to include emerging formats.

“One of the things that made TAG really attractive to us was its ability to handle multiple formats,” said Carter. “The fact that you could put a compressed feed against an uncompressed feed on the same hardware was a real selling point. It was huge.”

“We’ve had a long relationship with Foxtel and we had been talking about doing a TAG demo for a while,” added Banner. “The time was right. Foxtel was looking down the path toward IP and Magna responded with a TAG monitoring and visualization solution that was really solid.”

The project team carefully addressed the current needs of the massive migration while considering future expectations. The TAG system includes built-in access to enhancements and integrations, ensuring that Foxtel easily keeps pace with industry changes. The TAG system is equally comfortable on-prem or deployed in a cloud environment, providing Foxtel with the agility it needs to grow confidently as technology and market conditions evolve. TAG’s unique Zero Friction® flexible licensing model, coupled with its Adaptive Monitoring and Monitoring by Exception solutions, maximizes resources by allowing assets to be used when and if necessary, reducing ‘eyes-on-glass’, costs and complexity, and taking Foxtel to a higher operational level.

Under the Hood & Up on the Cloud

Foxtel was coming from a very silo-based architecture with a distinct separation of applications. For the new facilities, in addition to stream visualization on displays, Foxtel also required real-time probing of all signal formats and encoding types for a complete end-to-end view of the video workflow. The new IP facilities converge those former silos into a common platform with common support, enabling the full channel path to be viewed from contribution to distribution. Automation also played a significant role, as TAG’s Media Control System (MCS) configured all the servers automatically, adding the time-saving element Foxtel was seeking. TAG’s MCS provides a comprehensive suite of tools to manage the production and delivery of content across all platforms and is designed to automate the processes involved in creating, distributing, and managing content.

The Results

The project, which launched with 2,000 channels across 28 servers, has already expanded, with six additional servers now handling playout monitoring as well. The TAG monitoring solution is currently one of the largest of its kind in the world.

“Not having to configure 28 individual servers was a blessing,” adds Carter. “When we put out the RFP, we were asking for a controller that could handle all of our platforms. But we were receiving vague responses. And then we looked at TAG’s MCS and realized how much easier the process would be for us. The fact that it can configure all the servers automatically really saved us a great deal of time. We are so pleased that we went with TAG’s MCS right from the beginning.”

Magna handled most of the project initially, but a team from TAG joined as things began to progress. Collaboration between Foxtel, Magna and TAG was seamless and efficient, leading to a successful project outcome.

“The response from TAG has been incredible,” said Carter.


“During the implementation of the project, TAG was a key part of an extremely smooth rollout. They replied very promptly throughout the process and were always attentive to our needs. The TAG team was right there to help us with any questions we had or issues we may have encountered throughout the project.”

Carter concluded, “The flexibility you get with a software platform is great. But there are also some complexities involved as well. For instance, with a hardware device, you know exactly what the capacity is and what it does. With software, it’s a bit of a balancing act between the compute, the memory, the bandwidth and interconnects.

It all has to be worked out, but in the end it’s a no-brainer. It’s the way of the future.”

Looking Forward

Foxtel is a leading Australian pay television company that has been providing premium entertainment and sports content to its subscribers for over two decades. With a strong focus on innovation and customer satisfaction, Foxtel is now looking to expand its services by migrating its live sport production, currently on the legacy platform, to its new IP infrastructure. This move is not only a significant undertaking for Foxtel, but also a major step forward in the Australian sports broadcasting industry. With the migration, Foxtel is expected to offer an even better viewing experience to its customers, with increased UHD support and the higher bandwidth it requires. TAG’s capabilities and ability to handle live production with ease make it a natural fit for this next step, bringing Foxtel closer to an all-IP facility. Foxtel’s commitment to embracing change and adopting cutting-edge technology is a testament to its dedication to providing world-class entertainment services to its customers in Australia. 


Fubo transitions to IP infrastructure from Zixi for 4K live video delivery

The broadcasting landscape is undergoing a revolutionary transformation with the migration to Internet Protocol (IP) based infrastructures, heralding a new era in live video content delivery. This evolution offers unparalleled flexibility, efficiency, and cost-effectiveness, enabling broadcasters to leverage the internet’s vast potential.

In this dynamic scenario, Fubo (previously known as fuboTV) emerges as a frontrunner, redefining the entertainment space with its tech-centric approach and a rich repertoire of sports content. The company’s strategic pivot to IP networks for disseminating content marks a departure from traditional broadcasting methods, underscoring its commitment to innovation.

Central to Fubo’s mission to deliver broadcast-quality, live streaming content in 4K resolution is its collaboration with Zixi. The partnership

leverages Zixi’s Software-Defined Video Platform (SDVP), a comprehensive solution that acts as a Universal Video Gateway. This enables Fubo to seamlessly acquire live linear content from diverse sources, including fiber and satellite feeds, and standardize it for digital platforms.

Historically, Fubo relied on a mix of third-party solutions for encoding, transcoding, and content distribution. However, the past year has seen a strategic shift, with the company moving its transcoding operations to the cloud, significantly enhancing its operational agility and scalability.

At the heart of this transition is Zixi Broadcaster software, which plays a pivotal role in Fubo’s content processing workflow. The software manages the reception of content via MPEG-TS over UDP, facilitating secure transmission over the public internet using the Zixi Protocol to Google Cloud Platform for transcoding. The content is then prepared for distribution, reaching viewers through a variety of devices including smart TVs, mobile platforms, desktops, and set-top boxes.

A key feature of the SDVP is its extensive interoperability, supported by compatibility with 17 industry protocols such as Zixi, NDI, RIST, SRT, and WebRTC. This flexibility ensures Fubo’s seamless integration within a diverse software ecosystem and an expansive network of partners, ensuring the platform’s adaptability and future readiness.

Zixi’s cutting-edge technology, with its patented techniques for sequenced hitless and bonded hitless failover, guarantees high reliability and exceptional stream quality across heterogeneous IP networks. This capability is critical for Fubo, where the quality and dependability of live streams are paramount.

“For Fubo, the quality and reliability of live streams is of the utmost importance,” said Geir Magnusson Jr., CTO, Fubo. “Zixi’s interoperability in terms of its protocol acceptance and robust partner network meant that we could easily virtualize our content supply chain and leverage a tested and cost-effective solution that would allow us the flexibility to easily adapt to any future streaming needs.”

The expansion of Fubo’s Zixi deployment is a testament to the platform’s dedication to delivering a wide spectrum of high-quality content. This strategic alliance not only strengthens Fubo’s position in the competitive Over-The-Top (OTT) space but also presents a formidable challenge to traditional pay-TV services by offering a diverse array of live and on-demand content accessible via the internet.

Fubo’s partnership with Zixi is a prime example of how the innovative application of IP technology is revolutionizing content delivery, aligning with the contemporary viewer’s expectations for high-quality, reliable live streaming services. This collaboration paves the way for a future where digital broadcasting meets the demands of an increasingly connected and tech-savvy audience. 


The future of content creation and distribution is being shaped by AI. With the ability to analyze large amounts of data, AI can predict viewers’ trends and preferences, thus enabling the creation of audiovisual products that are more aligned with the tastes and concerns of viewers and, therefore, more likely to be successful. This not only increases viewers’ content consumption, but also opens up new revenue channels.

AI can also automate the distribution process, thus ensuring that content is delivered to the right audience at the right time. This can significantly increase the reach and impact of productions, which in turn results in a larger audience and higher revenues. In addition, this distribution process can be optimized, reducing the time and resources needed and lowering operating costs.

AI-driven media: the main goals

The primary goal of AI-driven media is to retain viewers and increase engagement, especially in this highly competitive content creation environment. This is achieved through a concept that has become very prominent in almost every aspect of the services we enjoy on a daily basis: hypercustomization.

We all want to enjoy services that feel “exclusive” and “designed for me” but, obviously, such bespoke services are difficult to pay for. That’s where AI, or more specifically Machine Learning (ML), comes in: analyzing our behavior patterns to offer us those customized services. And this applies to distribution media such as linear channels, content aggregators or OTT platforms in general, where AI is used to customize the media offering according to individual viewer preferences, thus increasing content consumption and consolidating revenue in a more robust way.

A new path opens in which companies can grow beyond the traditional vertical expansions, of distributors toward creators and vice versa, by making use of their existing catalogue, which in turn allows them to increase revenues without great risks. This includes creating new experiences, knowledge and content for consumer satisfaction and growth from their current catalogue of productions. The well-known example of FAST channels, which we have covered in TM Broadcast, is a clear case of this and of the most direct application of machine learning in our industry.

In addition, AI plays a crucial role in decreasing operating costs in the audiovisual sector in particular, and in the high-tech sector in general. By optimizing content creation, distribution, and consumption through automation, companies can ensure a sustainable cost reduction and improved efficiency. And beware: that cost reduction does not imply a smaller staff, but that people will be dedicated to creating and adding value where it is required, and will stop doing repetitive or low-value tasks that are easily automatable. It’s a double-edged sword, to be sure, but just like the steam engine in the industrial revolution, you have to learn how to use AI the way it deserves.

Increased content consumption of viewers with AI

AI-driven decisions in media produce a much stronger content delivery system, driving consumption of audiovisual products and revenue. Custom metadata improves viewer experiences, search accuracy, and real-time content updates.

If, instead of offering all viewers the same metadata, the same description of the film, series or program, we offer different descriptions depending on the demographic profile of the specific viewer, including their mood (as we will see later) as well as the time of day and time of year, the experience is greatly enriched. And this does not require an inordinate investment or extraordinary complexity; it is about applying AI as a tool that complements our current workflow.
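The idea above can be sketched in a few lines. The profile fields, synopsis variants and the time-of-day rule below are all invented for illustration; a production system would learn these mappings from behavior data rather than hard-code them:

```python
from datetime import datetime

# Hypothetical sketch: serve a different synopsis for the same title
# depending on viewer profile and time of day. All names and rules
# here are invented for the example.
SYNOPSES = {
    "action":  "A relentless chase across the city, cut to the bone.",
    "family":  "A story about second chances the whole household can share.",
    "evening": "A slow-burn thriller to unwind with after a long day.",
}

def pick_synopsis(profile: dict, now: datetime) -> str:
    """Return the synopsis variant matching viewer and context."""
    if now.hour >= 20:                        # late-evening slot
        return SYNOPSES["evening"]
    if "kids" in profile.get("household", []):
        return SYNOPSES["family"]
    return SYNOPSES["action"]

print(pick_synopsis({"household": ["kids"]}, datetime(2024, 3, 1, 15, 0)))
```

The same shape, a small decision layer in front of a catalogue, is how per-viewer metadata can bolt onto an existing workflow without rebuilding it.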

AI also enables dynamic presentation of productions, real-time feedback, and accessibility improvements, which enhance usability and inclusivity. Real-time suggestions keep viewer interest with content they really like, while predictive analytics aligns specific content with the preferences the viewer expects to be offered. And we do not just mean the typical recommendation engines based on collaborative filtering, which compare similar users and required thousands of users to work “well” (note the quotation marks), but algorithms that are able to detect and predict patterns from the mere consumption of content.
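As a toy illustration of recommending from consumption patterns alone, here is a minimal item-to-item similarity sketch over invented play counts; real systems operate on vastly larger, sparse matrices, but the principle is the same:

```python
from math import sqrt

# Minimal sketch (invented data): recommend titles from consumption
# patterns alone via item-to-item cosine similarity on play counts.
# views[viewer] = {title: play_count}
views = {
    "v1": {"drama_a": 3, "doc_b": 1},
    "v2": {"drama_a": 2, "drama_c": 2},
    "v3": {"drama_c": 4, "doc_b": 1},
}

def item_vector(title):
    """Play counts for one title across all viewers, in a fixed order."""
    return [views[v].get(title, 0) for v in sorted(views)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def similar_titles(title, catalog):
    """Titles ranked by similarity of their consumption patterns."""
    scores = [(cosine(item_vector(title), item_vector(t)), t)
              for t in catalog if t != title]
    return [t for s, t in sorted(scores, reverse=True) if s > 0]

print(similar_titles("drama_a", ["drama_a", "drama_c", "doc_b"]))
```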

In addition, AI can generate subtitles from audio to improve accessibility and user experience in real time and in any language, without the need to have them generated beforehand. Who wouldn’t want a platform that offered content in any language in the world? Unbelievable.

It is also capable of implementing an AI-powered navigation system that learns from user preferences and behavior to provide a customized graphical interface, with the menu options shown in a particular order, “the way I like it”; and all this different for each viewer, time of day or mood. Impressive.

All these concepts go in the same direction: that the viewer perceives a high customization of their services without extraordinary complexity or exorbitant costs.

New revenue streams through AI

AI-driven media segmentation enhances revenue through ultra-targeted advertising beyond premium subscriptions. And these are not just niche channels where we program content for a particular segment of the population, segmented simply with demographic data and filled with stereotyped advertising; rather, they carry dynamic advertising inserts that differ for each viewer, or group of viewers, placed alongside their favorite content at the right time of day and week. In short, FAST channels boosted to the zillionth power.

Emotional analytics for content creation and distribution can provide insights into viewer preferences, helping creators produce emotionally engaging content that appeals to a larger audience and generates increased revenue. This goes beyond offering content when we are sad, happy, tired or cheerful; it is about using this information to create such content and deliver it in a timely manner. And it is not just deciding when to release it, but at what time to offer it on my OTT platform, or when to send a notification to my mobile to watch it now that I’m sitting on the couch after a meal following a hard day’s work, or after training, which is not the same.

Emotional trends and feelings within viewer segments can also be identified, facilitating associations with brands that align with specific emotional themes, which also leads to new revenue opportunities. If whenever I am happy I see a certain product, the Pavlov’s dog we all carry inside will associate that product with my happiness.

Some of us may already sense the question: “Can we unconsciously influence the audience and persuade them without them knowing?” This is a valid concern, and we will address this ethical dilemma shortly...

Decreased operating costs with AI

AI can help reduce operating costs by automating various tasks. For example, an AI system can automatically generate content metadata, identify key scenes, generate industry-standard thumbnails, trailers, and descriptions, thus ensuring accuracy and consistency.
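To give a flavor of such automation, the sketch below flags candidate scene cuts from per-frame mean luminance, a deliberately simplified stand-in for the shot-detection stage of a thumbnail or trailer pipeline; the threshold and data are invented, and real detectors use histogram or feature comparisons:

```python
# Illustrative sketch only: flag candidate scene cuts where the mean
# luminance jumps sharply between consecutive frames.
def detect_cuts(frame_means, threshold=40.0):
    """Return frame indices where |mean[i] - mean[i-1]| exceeds threshold."""
    return [i for i in range(1, len(frame_means))
            if abs(frame_means[i] - frame_means[i - 1]) > threshold]

# Synthetic per-frame mean luminance with two hard cuts (frames 3 and 6).
luma = [120, 122, 119, 30, 32, 31, 200, 198]
print(detect_cuts(luma))
```

Once candidate cuts are known, downstream steps can pick representative frames for thumbnails or assemble trailer candidates, which is where the consistency gains of automation come from.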

AI can also be used to develop a unified search system that integrates large language models with all internal information sources as well as external ones, such as IMDb or the Internet in general, providing a single interface to consult this content.

And here we insist that we do not mean replacing people with machines, but people stopping doing repetitive and automatable tasks that do not add value and starting to do interesting things that really contribute to the creative process and to viewers, which at the end of the day is what our industry is all about.

The power of hypercustomization

As we have already mentioned, hypercustomization is one of the most significant benefits of AI in the audiovisual industry. By analyzing viewer behavior and preferences, AI can create customized content recommendations, improving viewer experience and increasing production consumption.

Hypercustomization goes beyond simply recommending content based on past viewing history. It can also take into account factors such as viewer mood, time of day, and even the weather, to recommend content that is more likely to be enjoyed. This level of customization can significantly increase content consumption and viewer loyalty, leading to increased revenue.

At this point we begin to set foot on a slippery slope: Is it ethical to use the predicted mood of viewers to tailor our offering to them? How accurate are such predictions, and can they unconsciously influence the audience? This is precisely the crux of the matter: the viewer should be made aware of why we recommend certain content or why their platform behaves in a certain way. As long as it is made explicit that the recommendation or customization offered is based on certain subjective criteria, ethics is not a problem; but when we use these data to surreptitiously modulate the behavior of our audience without them being aware of it, that is another matter.

The role of emotional analytics

Emotional analytics is another area where AI can have a significant impact within the audiovisual industry. By analyzing viewers’ reactions and emotions, AI can provide insights into what type of content is most liked by whom and at what time of the day, week, or year that happens. This can help content creators produce more emotionally engaging material.

Emotional analytics can also be used to customize content recommendations. For example, if a viewer tends to enjoy productions that bring out a certain emotion, AI can recommend similar content. This can further enhance viewer experience and increase audiovisual consumption.

The challenges that lie ahead

Although we have touched on it throughout the article, it is worth emphasizing that one must be very careful when applying this new tool, to avoid making mistakes along the way. This happens with every significant innovation or breakthrough for mankind: we need time to learn how to use it. AI is not an exception, but just one more example, and a very significant one.

Despite the numerous benefits of AI, there are also challenges that need to be addressed. One of the main challenges is the potential for bias in AI algorithms. If not managed properly, AI can reinforce existing biases, leading to unfair outcomes. When an AI model is implemented (something really complex in itself), it can produce unexpected results over time, or whenever more data is added in a massive way. Therefore, constant monitoring is necessary, and these biases must be honestly corrected.
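One concrete, if simplified, form such monitoring can take is an exposure-parity check: compare how often content is recommended to each audience segment and flag the model when the gap grows too large. The segments, counts and threshold below are invented for the sketch; real fairness audits use many complementary metrics:

```python
# Hedged sketch of one simple bias check among many: flag a recommender
# when recommendation exposure differs too much between segments.
def exposure_rates(recs):
    """recs: {segment: (times_recommended, impressions)} -> rate per segment."""
    return {seg: shown / total for seg, (shown, total) in recs.items()}

def flag_disparity(recs, max_ratio=1.5):
    """True if the best-served segment's rate exceeds the worst's by max_ratio."""
    rates = exposure_rates(recs)
    hi, lo = max(rates.values()), min(rates.values())
    return lo == 0 or hi / lo > max_ratio

recs = {"segment_a": (450, 1000), "segment_b": (200, 1000)}
print(flag_disparity(recs))
```

Running such a check on every model update, and whenever large batches of new data arrive, is exactly the kind of constant monitoring the paragraph above calls for.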

Another challenge is the potential displacement of jobs. As AI automates various tasks, there is a risk that some jobs may become obsolete. Therefore, companies need to invest in retraining and upskilling their staff to ensure they can adapt to the changing landscape.

And right here we have already outlined certain solutions. Tasks that were previously performed manually, and therefore required labor, have been automated, but new tasks have been created, such as monitoring the algorithms, correcting biases or optimizing the systems. And, just as the steam engine did not end direct labor in the industrial revolution, AI is not going to put us all on the dole overnight.


In sum, AI is transforming the audiovisual industry, offering numerous opportunities for hypercustomization, increased viewer content consumption, new revenue streams, and reduced operating costs. However, to make the most of these opportunities, companies need to invest in the necessary infrastructure and skills, as well as to address the challenges posed by AI such as task automation or the biases that such algorithms can introduce.

As AI keeps evolving, it will undoubtedly continue to shape the future of the audiovisual industry, making it more dynamic, attractive and profitable. And, as always, we will have to adapt to it. Today our industry is more exciting than it was 5, 10 or 15 years ago, and less so than it will be a decade from now. Not all industries can say the same, let’s enjoy the moment! 


Color has its broadcast space

Color on TV is one of the great allies to produce more attractive content, achieve unforgettable visual experiences and, therefore, generate more emotion and involvement among viewers. But has it always been that way? Have we reached the limits of technology when it comes to information flow with color images?

The answer to both questions is the same: NO. We are all aware that the first images we were able to capture, store, modify and disseminate were in B/W (in a broader sense, images with a grayscale from white to black), both in the field of photography and in the cinematographic field, including the television environment.

Color television arose on a timeline between the 1960s and 1980s, when most of the world’s viewers began to receive color images on their TV sets. Before those dates, what was happening were first technical decisions, tests and a real battle between TV companies to be first in the race towards color TV.

Color technology has kept adopting and applying new processes, codifications, norms and standards. In this long journey of changes and breakthroughs, two moments were key in the field of TV:

 The beginnings of color broadcasting with backward compatibility with monochromatic (B/W) signals. This is the birth of the first standards, the initial tests and broadcasts that were authorized in different territories all over the world. This was the case with NTSC (National Television System Committee) in the US from 1953, and in Europe with the PAL standard (Phase Alternating Line) and/or SECAM (Séquentiel Couleur à Mémoire). These three are what we know as TV color coding systems.

 The revolution of the digitization of the TV signal flow, leaving the problems and considerations of analog video signals well behind. This digitization is allowing us to achieve improvements in color both in capture devices and in viewing or monitoring equipment, and, above all, through better control management within the color signal broadcast/reception processes.


The international standard for digital encoding of video signals into components was ITU-R BT.601, which defined the technical specifications for the luminance component Y and the color-difference components R-Y and B-Y. This standard paved the way to full digitization in the TV broadcast environment.

Let’s try to understand what is going on with color at present in the television industry. For this, it is necessary to have some notions about color and its application in TV very clear.

A first approximation to what color is can be obtained from the definition: the sensation produced by light rays that strike the visual organs, depending on their wavelength.

This sensation is possible because of the properties of our sight (of the human eye) to capture certain wavelengths, specifically those we call the ‘visible spectrum’ (the region of the electromagnetic spectrum that the human eye is capable of perceiving). After research and studies on this matter, we can say that human beings have specialized photoreceptor cells, the rods and cones, which are found in the retina and are responsible for our vision. Specifically, cones give us the perception of color because they can detect the different wavelengths of light, classified as red (L cones), blue (S cones) and green (M cones).

Now is the time to ask how color is generated when artificial devices created by humans take the spotlight and what they have to do both in the way of producing color and when it comes to viewing or enjoying it.

In this article about color, human vision and television are closely linked. The starting point: wavelengths R (Red), G (Green) and B (Blue). This is commonly known as RGB.

It is also necessary to delve into color modes, which simply indicate how a color gamut is achieved. There are two modes: additive synthesis and subtractive synthesis. Both human vision and television work under additive synthesis, that is, obtaining a color or a color gamut from the sum of the color components, both through the primary RGB colors and by means of the secondary ones: cyan (C), magenta (M) and yellow (Y).

Once these two modes have been identified, we approach what we know as color models: the different ways of formulating colors using parameters and/or numerical values. Thus, we find the RGB model, the CMYK model, the Lab color model, and HSB or HSL, among others.
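Python’s standard library happens to include conversions between some of these models, which makes their relationship easy to see; here the same pure red is expressed in RGB and in HSL (which the colorsys module calls HLS, note the argument ordering):

```python
import colorsys

# The same color expressed in two of the models mentioned above.
# colorsys works on normalized 0..1 channels; pure red as an example.
r, g, b = 1.0, 0.0, 0.0
h, l, s = colorsys.rgb_to_hls(r, g, b)   # note the H, L, S ordering
print(f"RGB(255, 0, 0) -> H={h * 360:.0f} deg, S={s:.0%}, L={l:.0%}")
```

The round trip (`colorsys.hls_to_rgb`) recovers the original channels, which is the point of a color model: different coordinates for the same colors.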

In order to make it easier to perceive the different color models, some scholars proposed different systems of representation: Newton’s Color Circle (1704); Goethe’s Color Wheel (1810); the dodecagrams or colored stars by Charles Blanc (1867); Wilhelm Ostwald’s Double Cone (1853-1932); Albert Munsell’s Solid (1858–1918), the cube by Alfredo Hickethier, or the CIE (Commission Internationale de l’Eclairage) triangle in 1931, among others.

Worth highlighting is the CIE 1931 triangle, also known as the CIE 1931 chromaticity space, which characterizes colors by means of a luminance parameter Y and two chromaticity coordinates x and y that specify a point within the diagram. This representation system was based on a series of experiments conducted in the late 1920s by W. David Wright and John Guild; their experimental results were combined into the CIE RGB specification, from which the 1931 specification was derived.
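The mapping from tristimulus values to those chromaticity coordinates is a simple normalization, shown here for the commonly used D65 white point (standard tristimulus values assumed):

```python
# The chromaticity coordinates of the CIE 1931 diagram are derived
# from the tristimulus values X, Y, Z by normalization:
#   x = X / (X + Y + Z),  y = Y / (X + Y + Z)
def xy_from_XYZ(X, Y, Z):
    s = X + Y + Z
    return X / s, Y / s

# D65 white point tristimulus values (Y normalized to 1).
x, y = xy_from_XYZ(0.95047, 1.0, 1.08883)
print(f"D65 white point: x={x:.4f}, y={y:.4f}")
```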

Under the RGB model, which obviously works with additive synthesis and is common to all capture devices (cameras), display devices (monitors and televisions) and projection devices (projectors), we ask ourselves a simple question: is the same color range represented in all instances? Obviously NOT.

We have all noticed that a given image’s color looks different depending on which camera made the capture and where it is displayed. These differences usually stem from the color space of each particular device and piece of equipment involved in the workflow within the TV environment.

A color space is simply the range or set of colors that an image can display on a particular device. There are many different color spaces, also known as color gamuts. So, how many are there?

There can be as many color spaces as there are equipment and devices in different production environments (analog/digital); the issue here is that in order to achieve proper control in color management we have to apply different norms or standards throughout the workflow.

In the broadcast television environment, there have been and there are currently some major standards, such as:

 ITU-R BT.601 and SMPTE-C primary colors: This mode is designed for the controlled-viewing environments established by ITU-R BT.2035. SMPTE (Society of Motion Picture and Television Engineers) put these in place for standard-definition content (SDTV) in the US and in most countries where NTSC had been implemented.

 ITU-R BT.601 and EBU tech 3213 primary colors: This mode is designed for controlled-view environments laid down by EBU (European Broadcasting Union).

 Rec. 601 (ITU-R BT.601): This digital standard includes the color spaces for the NTSC, PAL and SECAM SDTV analog formats.


 Rec. 709 (ITU-R BT.709): It is the color space for HDTV with Standard Dynamic Range (SDR). Released in 1990.

 Rec. 2020 (ITU-R BT.2020): It is the color space for UHDTV (4K TV) with Standard Dynamic Range (SDR). Released in 2012.

 BT.2100: In 2016, the ITU* published this recommendation to add HDR to its BT.2020 specifications.

(*) ITU is the United Nations specialized agency for information and communication technologies (ICT). Founded in 1865, its objectives are to carry out technical studies, answer practical questions and offer technical recommendations.

As a general principle, the wider the standard’s color space, the more colors and the greater realism the resulting image will feature and, therefore, the more we will be able to enjoy the content. But to make this possible we have to perform correct color management and proper device calibration. For example, the BT.2020 color standard can represent about three quarters of the visible color spectrum, while the BT.709 standard only displays around 30% of the gamut offered by the newer BT.2020 standard.
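A rough way to compare two gamuts is to compute the areas of their primary triangles in the CIE 1931 xy diagram with the shoelace formula, using the published primaries of each standard. Note this triangle-area ratio is only a proxy: coverage figures quoted in the literature are usually measured against the visible locus (or in other chromaticity spaces), so the numbers need not coincide:

```python
# Sketch: compare gamut triangle areas in CIE 1931 xy space using the
# published chromaticity primaries for Rec.709 and Rec.2020.
REC709  = [(0.640, 0.330), (0.300, 0.600), (0.150, 0.060)]
REC2020 = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]

def triangle_area(p):
    """Shoelace formula for the triangle spanned by three (x, y) primaries."""
    (x1, y1), (x2, y2), (x3, y3) = p
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

ratio = triangle_area(REC709) / triangle_area(REC2020)
print(f"Rec.709 triangle is {ratio:.0%} the area of Rec.2020's")
```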

What about digital cinema? Does it have its own color space? That’s right. Digital cinematography is making its own decisions and designing its own standards in order to offer a quality product in many aspects. In this specific case, it is the DCI-P3 color space (part of the SMPTE RP 431-2 standard on “Digital cinema quality - Projector and reference environment”).

The DCI-P3 color space is not as broad as that of the BT.2020 standard. Therefore, in order to ensure future compatibility in terms of color, a new, wider space has been defined that offers greater flexibility for color spectrum encoding: the Academy Color Encoding System (ACES).



The trend for achieving efficient color management is to do the post-production/grading of projects in ACES. In this way, a master file in ACES is obtained, from which the necessary copies can be made in BT.2020, DCI-P3 or BT.709.

Under the CIE XYZ 1931 color representation system, the Rec.2020 color space standard covers 75.8%; the DCI-P3 color space, 53.6%; the Adobe RGB color space, 52.1%; and the Rec.709 color space, 35.9%.

Before finishing this article, it is worth keeping in mind some technical parameters that greatly define the color values we work with in the digital environment of today’s television: the type of video signal, the type of subsampling of the color components, the bit depth or color quantization (a high bit depth per pixel, High Bit Depth or HBD, is recommended), the type of codec and video format, and also the workflows under HDR (High Dynamic Range).
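Chroma subsampling and bit depth translate directly into bandwidth, which a back-of-the-envelope calculation makes concrete (uncompressed frame sizes, ignoring padding and container overhead):

```python
# Back-of-the-envelope sketch: uncompressed frame size under the chroma
# subsampling schemes and bit depths mentioned above. The factor is the
# number of samples per pixel: 4:4:4 carries full chroma, 4:2:2 half,
# 4:2:0 a quarter.
SUBSAMPLING_FACTOR = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}

def frame_bytes(width, height, scheme, bit_depth):
    bits = width * height * SUBSAMPLING_FACTOR[scheme] * bit_depth
    return int(bits // 8)

for scheme in ("4:4:4", "4:2:2", "4:2:0"):
    mb = frame_bytes(1920, 1080, scheme, 10) / 1e6
    print(f"1080p @ 10-bit {scheme}: {mb:.1f} MB per frame")
```

Multiplying by the frame rate shows why subsampling and codec choice matter long before HDR enters the picture.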

Color in the audiovisual field is formed and worked on from the very first decisions made during pre-production, such as the choice of camera type and recording format (with logarithmic curves, for example), through the processes involved in shooting and postproduction / grading, to the completion of the audiovisual product with the mastering.

Broadcast television for today's viewers is still television under the Rec.709 (ITU-R BT.709) color space. The next step takes us to what is known as an extended color space: Wide Color Gamut (WCG). But this is only the beginning, as novelties and innovations will keep arriving that greatly expand what we understand by color and, above all, allow viewers to enjoy other types of content (UHDTV / 4K TV, HDR, Rec.2020).

The success of color for broadcast TV is not completely guaranteed, so we must embrace a threefold commitment:

• Professional workflows between the different color spaces, looks, LUTs, lighting techniques... to offer attractive content at every stage of audiovisual production.

• Visual education for viewers that allows a better assessment and acceptance of the new color that will fill their TV screens; also, training as users to know how to calibrate their equipment, automate visual results and configure the "proprietary" color management systems provided by manufacturers of screens, TV sets and monitors.

• New presentation and projection devices that readily support the new color spaces, from Rec.709 (ITU-R BT.709) to DCI-P3 and also ITU-R BT.2020 / BT.2100.

Changes are taking place every day, even as we read this article: the appearance of new capture devices, improvements in video signal digitization, the rise of broadband and 5G mobile telephony, shifting trends in AV content consumption, improvements in lighting equipment, the implementation of UHD (Ultra High Definition), 4K or 8K, the expansion of IP and Cloud environments, social media, video-on-demand platforms, content generated in virtual environments and backward-compatible transcoding streams, among others.

These changes will continue to have an impact on what we call the color of an image, always following the current trend: a better and wider color range in audiovisual content.
