TM Broadcast International #126, February 2024





EDITORIAL

This issue of TM Broadcast International explores the intersection of technology and creativity, shaping the future of broadcast and audiovisual production. Our special report on VIDA Content OS, a platform implemented by industry giants BBC and Getty Images, unveils the transformative power of digital asset management in streamlining content workflows and enhancing collaboration. Through our three conversations with Simon Roue (VIDA’s CEO), Paul Davis (Vice President at Getty Images, EMEA) and Chris Hulse (Head of Motion Gallery at BBC Studios), readers will learn how the VIDA Content OS integration means a new way of delivering audiovisual archives to the public. Accompanying this, readers will find exclusive interviews with pioneers from four leading virtual production studios—80six, Ready Set, Final Pixel, and Nant Studios. These talks illuminate how virtual production is transforming content creation and the innovative use of extended reality and in-camera visual effects to redefine storytelling and audience engagement.

Furthermore, TM Broadcast International delves into the dynamic realm of AI-driven media, examining how artificial intelligence is reshaping content creation, distribution, and consumption. The emergence of intelligent algorithms promises a new era of personalized and interactive media experiences. Alongside, our in-depth analysis of the latest advancements in codecs and broadcasting technology addresses the industry’s shift towards more efficient, high-quality streaming solutions, catering to the ever-evolving demands of digital audiences. This edition is a testament to the ongoing evolution of the broadcast and audiovisual landscape, where technology and creativity converge to open new horizons. Join us as we navigate these developments, offering insights and inspiration to professionals and enthusiasts alike, charting the course for the future of media production and consumption.

Editor in chief Javier de Martín

Creative Direction Mercedes González


Key account manager Patricia Pérez

Administration Laura de Diego

Editorial staff

Published in Spain

ISSN: 2659-5966

TM Broadcast International is a magazine published by Daró Media Group SL Centro Empresarial Tartessos Calle Pollensa 2, oficina 14 28290 Las Rozas (Madrid), Spain Phone +34 91 640 46 43






Virtual Production 80six Dan Hamill, the co-founder and commercial director of 80six, offers a peek behind the curtain of the innovative virtual production studio.

Final Pixel In the burgeoning field of virtual production, Final Pixel stands as a beacon of innovation and adaptability. Spearheaded by CEO and co-founder Michael McKenna, the company has rapidly ascended as a trailblazer, especially noted for its agility during the challenging pandemic period.

Nant Studios Under the visionary leadership of Vice President of Virtual Production Gary Marshall, Nant Studios exemplifies the rapid evolution of the virtual production landscape.

Ready Set Studios Nils Pauwels, CEO of Ready Set Studios (RSS), speaks to the transformative power of virtual production in an industry increasingly defined by its adaptability and innovation.



Revolutionizing the archive: The VIDA Content OS platform’s impact on BBC and Getty Images In an era where content is king, the integration of the VIDA Content OS platform by BBC and Getty Images marks a significant leap forward in the audiovisual industry, especially in the realm of archive management and content monetization. This collaboration, underpinned by advanced AI capabilities, is set to redefine how media assets are accessed, managed, and leveraged, offering an unprecedented level of efficiency and innovation.


Codecs, and more…



Marshall CV730 PTZ cameras seamlessly integrate with Quicklink Remote Studio for remote production control

Marshall Electronics has announced that its CV730 30x UHD60 NDI PTZ camera now seamlessly integrates with the Quicklink Remote Studio and ST250 multi-camera remote studio solution. This integration empowers Quicklink users to control the CV730 PTZ camera directly from the Quicklink Remote Studio web interface.

The Quicklink Remote Studio and ST250 platform enables remote control of various studio elements, including multiple cameras, camera tally, lighting, audio, teleprompter, engineering talkback/IFB, chroma keying, and more. Offering output-independent camera feeds for efficient bandwidth optimization, the system also supports graphics, overlays, or tiling of cameras, the company states. Besides, the cameras join the more than 400 devices that can be remotely controlled via an intuitive web interface or control panels utilizing Quicklink Bridge. With Quicklink’s suite, users can effortlessly manage camera pan, tilt, zoom, and pedestals, complete with automatic tally for standby, preview, and live on-air functions, alongside a comprehensive suite of production solution controls.

The Marshall CV730-BHN (black) and CV730-WHN (white) models feature 9.2-million-pixel Sony sensors, offering a 30x optical zoom range and simultaneous 12G-SDI and HDMI outputs, alongside networkable full NDI®, NDI|HX3, NDI|HX2, IP (HEVC), SRT, and other common IP codecs. Capable of delivering clear UHD (4K) images at resolutions up to 3840x2160p and 60fps, the cameras incorporate full NDI® technology to ensure low-latency, high-quality, frame-accurate video and audio transmission in real time within IP workflows.

Richard Rees, CEO of Quicklink, expressed enthusiasm about expanding the list of approved devices for the ST250, stating, “We are continually looking to expand our list of approved devices for the ST250 and are thrilled that our users can now take advantage of Marshall PTZ cameras while using our Remote Studio solution.” Rees highlighted the close collaboration with Marshall throughout the testing phase to ensure smooth operation of the cameras with the ST250, resulting in seamless integration for users.

Bernie Keach, representing Marshall Electronics, emphasized the added value for customers through collaboration with other manufacturers, stating, “Collaborating with other manufacturers provides further value for our customers in terms of interoperability with proven industry solutions.” Keach expressed excitement about partnering with Quicklink, recognized leaders in remote production, to ensure compatibility with their multi-camera remote studio solution.





Archiware launches P5 Data Management Software 7.2: Introducing S3 Object Archive and boosting archiving capabilities

Archiware GmbH has unveiled version 7.2 of its P5 Data Management software and has announced several groundbreaking features:

S3 Object Archive: The highlight of this release is the S3 Object Archive, enabling seamless archiving to LTO and LTFS storage via the S3 interface. With compatibility for Amazon S3 Glacier Flexible and Deep Archive storage classes, this feature provides full management of tapes, drives, and libraries, allowing external products to leverage the P5 Archive module.

Server-wide Incremental Archiving: Version 7.2 enhances incremental archiving capabilities with the INC+ archive, facilitating the tracking of changing projects. This optimization allows large data sets to be relocated to different storage volumes without necessitating re-saving by the archive, streamlining the archival process.

Filter-based Query and Job History: The latest version introduces filter-based query functionalities in the P5 Data Mover, enabling the configuration of data migration based on specific directories, contents, or metadata fields. Additionally, the new job history feature allows for the monitoring of large and lengthy migration operations, enhancing visibility and control. Moreover, the release boasts expanded archive index functionalities in the P5 Data Mover, offering improved configuration options for data migration and streamlined search functionality.

Archiware P5 version 7.2 is now available for upgrade or as a free, fully featured 30-day trial on the Archiware website.

Archiware GmbH, headquartered in Munich, Germany, boasts over two decades of experience in data management software for backup, synchronization, and archiving. With more than 20,000 licenses sold worldwide, Archiware’s software is trusted by hundreds of media companies globally. Their product line includes P5 Synchronize, P5 Backup, P5 Archive, and P5 Data Mover, catering to diverse data management needs across various industries.
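Because the new Object Archive speaks the S3 protocol, any standard S3 client should in principle be able to address a P5 archive target the same way it addresses Amazon S3. A minimal sketch with the AWS CLI might look as follows; note that the endpoint URL, port, bucket name, and file are hypothetical placeholders, not Archiware defaults, and that credentials would need to be configured for the P5 server rather than for AWS.

```shell
# Hedged sketch: uploading a file to an S3-compatible archive endpoint.
# Host, port, bucket, and object path below are illustrative placeholders.
aws s3 cp final_master.mov s3://p5-archive-bucket/project-x/ \
    --endpoint-url https://p5-host.example.com:9000 \
    --storage-class DEEP_ARCHIVE
```

The `--storage-class` flag corresponds to the Glacier Flexible Retrieval and Deep Archive classes mentioned in the release; how P5 maps those classes onto LTO/LTFS tape pools is an Archiware implementation detail.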



Ateme to introduce video streaming solutions for monetization and engagement at MWC 2024

Ateme is set to make its presence felt at Mobile World Congress 2024. The company will be highlighting its latest video streaming solutions, designed to help operators streamline video delivery costs. You can find Ateme in Hall 5, Booth 5G18. Ateme’s showcase will revolve around cutting-edge technologies aimed at achieving the following objectives:

Engage: The company provides exceptional experiences for sports enthusiasts, both inside and outside venues. This includes offering viewers an immersive perspective of the game on their smartphones and XR headsets, along with real-time access to statistics and multi-angle live and replay coverage. Ateme’s solutions are designed to ensure fans never miss a moment, thanks to AI-generated highlight creation.

Immerse: Ateme offers viewers the highest quality experiences, incorporating support for Spatial Computing to provide immersive experiences with AR overlays and High Dynamic Range (HDR) for vibrant and vivid colors.

Monetize: Ateme delivers flexible solutions for video content delivery and monetization. This includes SaaS solutions for FAST (Free Ad-Supported TV) and Dynamic Ad Insertion, helping operators optimize revenue streams.



Planetcast launches its NexC cloud-first infrastructure to help companies navigate complexity

Planetcast Media Services, a global provider of broadcast technology and media services, has introduced its NexC architecture aimed at simplifying content management, distribution, and monetization for media companies. NexC offers a unified content supply chain management system, addressing the complexity challenge in today’s media and entertainment landscape. Key benefits of NexC include:
– Streamlined movement of content assets between commonly used services for improved efficiency and reduced costs.
– Collaboration tools within NexC facilitate global teamwork without leaving the platform.
– Integration of major functionalities in a single interface, including content supply chain, playout, distribution, FAST, OTT distribution, and post-production, allowing Planetcast customers to


add services effortlessly without dealing with multiple vendors.
– Cloud-based infrastructure and content-repurposing capabilities in NexC enable users to monetize content quickly and cost-effectively across various formats and regions.

Sanjay Duda, CEO at Planetcast, explains, “Media organizations are facing challenges in navigating post-production, content delivery, and monetization efficiently. Our NexC unified service layer architecture addresses this complexity challenge through flexibility and a cloud-first approach, helping customers maximize efficiency and cost-effectiveness.” The NexC architecture comprises various elements accessible through Planetcast’s unified customer user interface (UI) and dashboard, including Contido for content supply chain and management, Content Preparation & Localisation

Services, CloudX for hybrid cloud playout and Recaster for digital delivery, and Planetcast OTT for non-linear content distribution. Additionally, NexC offers partner-integrated value-added services such as FAST Playout, Cloud-based Creative Post Production, and more. Venugopal Iyengar, COO, Digital at Planetcast, emphasizes their ability to adapt to changing distribution and monetization trends over the years. He states, “We believe that the ‘complexity challenge’ is the greatest issue facing media companies today. To meet this challenge, we have designed the NexC architecture to provide all the power and functionality media companies need, combined with the simplicity of a single sign-on collaborative user interface.” To stay at the forefront of the industry, NexC will support API integration for Generative AI solutions to enhance workflows and functionality.


Three Denmark chooses Net Insight’s GPS/GNSS-independent time synchronization for 5G expansion

Net Insight has secured its first order from Three Denmark (Three) for GPS/GNSS-free time synchronization, supporting Three Denmark’s ambitious 5G rollout. The order, set for delivery in Q1 2024, builds upon the successful deployment of Net Insight’s solution in Sweden. This deployment introduces Net Insight’s advanced Zyntai nodes to Three Denmark’s network, enhancing synchronization capabilities while eliminating the reliance on GPS/GNSS. The collaboration aims to streamline synchronization costs and ensure independence from GPS/GNSS, marking a significant step in Three Denmark’s 5G expansion plans.

“We are convinced that Net Insight’s synchronization solution meets our stringent requirements for a GPS-independent network,” says Kim Christensen, CTO at Three Denmark. “It has proven to be the most adaptable, dependable, and cost-effective solution, aligning perfectly with Three’s high standards for 5G.”

“We are delighted that another strong and progressive operator such as Three Denmark has chosen our solution. The high number of GNSS jamming incidents in the last months has increased the interest in our GPS/GNSS-independent synchronization solution,” says Crister Fritzson, CEO of Net Insight. “Beyond its GPS independence, our Zyntai solution reduces overall costs and accelerates 5G rollouts. The increased security and availability of the solution is key in meeting the requirements of new advanced 5G services, aligning seamlessly with the vision of a forward-thinking mobile operator like Three.”

RAI to complete transition to DVB-T2 in September 2024

Starting from September 1, 2024, RAI will transition to the DVB-T2 standard to complete the shift to digital TV. Here’s what it means for viewers. Unless there are any unexpected surprises, as of September 1, 2024, RAI will begin broadcasting its programs using the new DVB-T2 standard, which stands for Digital Video Broadcasting – 2nd Generation Terrestrial. With this move, the public broadcaster will have completed the transition to digital terrestrial television. This confirmation comes from RAI executive Stefano Ciccotti, who stated, “I confirm the transition of one Rai multiplex to DVB-T2 in September. The new service contract has been signed.”

For those wondering, there’s no need to purchase a new television to access RAI channels, as long as the existing TV is not so outdated that it requires replacement. To delve deeper, all TV sets introduced to the market since 2017 – the year when manufacturers were obligated to exclusively market models compatible with the new DVB-T2 standard – are indeed suitable for accommodating this change. It’s important to remember that the DVB-T2 standard will enable viewers to watch digital terrestrial channels in high definition using the current MPEG-H H.265/HEVC encoding system (and potentially other systems in the future).

According to some reports, the new television multiplex (which is the technique for transmitting and distributing digital terrestrial TV signals, allowing multiple TV channels to be broadcast on the same frequency band through a combination of data compression and multiplexing techniques) should initially involve the first three channels of the “Viale Mazzini” broadcaster, namely RAI 1, RAI 2, and RAI 3, enhancing their reception and signal quality. This transition will eventually extend to all other RAI channels.



Goolight cable TV station in Japan uses Blackmagic Design workflow for its 4K 60p studio

Goolight, a cable TV operator in Nagano Prefecture, has incorporated several Blackmagic Design products, including the ATEM 4 M/E Broadcast Studio 4K live production switcher and HyperDeck Extreme 8K HDR broadcast deck, to operate a 4K 60p capable studio. Goolight is a cable TV station covering the vicinity of Suzaka City, Obuse City, and Takayama-mura in Nagano Prefecture. In addition to TV services, it provides internet, telecommunications, and electrical services supporting regional infrastructures. Last year, the company extensively updated its studio equipment for cable TV operations, making it the first cable TV station in Nagano to operate a 4K compatible studio.

Shinji Yamagishi, director of the Media Promotion Division of Goolight, explained: “The old studio equipment was over 10 years old, and it was the perfect time for an update. Concurrently, through collaboration between the local government and the private sector, a new facility for creating vitality was planned in front of Suzaka Station, where our headquarters is located. We decided to incorporate a studio in this facility to engage in information dissemination collaboratively with the local community. This led to the creation of a new open style studio.”

An ATEM 4 M/E Broadcast Studio 4K was introduced as the main switcher, along with the ATEM 1 M/E Advanced Panel 10 for control and Smart Videohub 12G 40×40 with Videohub Master Control Pro for routing. For recording and playback, HyperDeck Studio 4K Pro, HyperDeck Extreme 8K HDR, and HyperDeck Extreme Control are being used. Multiple SmartView 4K monitors are also installed, with Teranex AV standards converters employed for signal conversion.

“When updating, we needed to create a studio that was 4K 60p compatible, considering future needs. It was also essential to make it cost-effective. With these factors in mind, we decided to introduce Blackmagic Design products as our main equipment. We wanted to create a versatile studio that can be used for everything, from broadcasting to streaming,” said Yamagishi.

The building where Goolight’s studio is located is positioned in front of Suzaka Station, serving as a gathering place for local residents. With the aim of creating a facility that fosters new vibrancy in collaboration with local administration and private sectors, the decision was made to construct a new facility incorporating a studio in the building.

With the goal to engage the local community and disseminate information collaboratively, various Blackmagic Design products designed for streaming were introduced, including Blackmagic Pocket Cinema Camera 4K digital film camera, Blackmagic Video Assist 7” 12G HDR monitor/recorder, Blackmagic Micro Studio Camera 4K Pro, ATEM Television Studio HD live production switcher, HyperDeck Studio HD Mini broadcast deck, and Web Presenter HD streaming solution. Goolight manages equipment and operations in this setup.

“The building where this studio is located is managed by Suzaka City and has facilities which citizens can use, such as parenting support centers, kitchen studios, and coworking spaces. Equipment is installed in a portable rack for streaming from these places, making it available anytime. In the future, we want to contribute to the education of local students in collaboration with Suzaka City,” said Yamagishi. “While we had been using ATEM Mini switchers before, the ATEM 4 M/E Broadcast Studio 4K introduced in this new studio is easy to use without having to consult the manual, as long as you have a basic knowledge of switchers. With the ATEM software, the flexibility increases further, and it’s a switcher that allows you to do what you want very easily. As we have a small staff, it’s convenient for one-person operation. Moreover, when connected to computers, multiple people can work, making it easy to handle operations from solo to large scale,” said Hayato Muraishi of Goolight, who oversees the technical aspect of the studio. The company utilizes the studio for various purposes, including the recording of election programs, live broadcasts of local festivals, and the weekly news program. Muraishi said: “HyperDeck recorders are used for video playback and recording backups during live broadcasts. HyperDeck Extreme 8K HDR is great because it has built-in scopes, allowing us to check waveforms directly from the deck. HyperDeck Extreme Control is designed for broadcast deck specifications, making it easy to use”.



Trade4Sports partners with Eintracht Frankfurt to boost advertising revenue with digitalization and automation

Trade4Sports and Eintracht Frankfurt, a prominent European football club, have embarked on a multi-year partnership aimed at optimizing advertising processes through digitization and automation. Over the next 5.5 years, Trade4Sports will play a pivotal role in enhancing the digital capabilities of the club. This partnership, facilitated by Qvest’s ownership stake in Trade4Sports since 2022, will introduce innovative Software-as-a-Service (SaaS) products designed to elevate perimeter advertising revenue for sports clubs through digitalization, professionalization, and expanded marketing opportunities.

At the heart of this long-term collaboration is the T4S Marketing Cloud, a robust platform tailored to streamline crucial digital processes such as advertising management, media asset organization, and playlist scheduling for Eintracht Frankfurt across various venues including the Deutsche Bank Park, Stadium on Brentanobad, and the Ahorn Camp Sportpark in Dreieich. The adaptable nature of the platform allows clubs and leagues to seamlessly integrate it into their existing workflows. Ahead of the upcoming 2024/2025 football season, additional features such as remote operation and AI-generated playlists will be integrated into the solution. Coupled with a comprehensive playout solution, this enables dynamic advertisement playback on LED perimeter boards and ribbons, with the flexibility to adjust playlists in real time, even during live matches.

Arnfried Lemmle, Director of Sales and Marketing at Eintracht Frankfurt Fußball AG, expressed enthusiasm for the digital transformation facilitated by the T4S Marketing Cloud: “The future is driven by digital processes. With the T4S Marketing Cloud, Trade4Sports has successfully digitized and automated many manual tasks within the club and in our interactions with customers. This streamlines our operations significantly and represents a valuable development for our club.”

Frederic Komp, Co-Founder and Managing Director of Trade4Sports GmbH, echoed these sentiments, stating, “We are honored by the trust Eintracht Frankfurt has placed in us. When a top European club like Eintracht Frankfurt recognizes the benefits of the T4S Marketing Cloud in their daily operations, it underscores the value of our platform.”

The collaborative efforts between Trade4Sports and Eintracht Frankfurt signify a forward-thinking approach to the intersection of digitalization, sports, business, and media technology. This strategic alliance is a testament to the close partnership between Trade4Sports and Qvest, combining their expertise to redefine real-time advertising in sports and entertainment.


Rohde & Schwarz to showcase the maturity of 5G Broadcast at Mobile World Congress 2024

Rohde & Schwarz, a key player in the audiovisual production and broadcast industry, is set to demonstrate the readiness of 5G Broadcast as a revenue-generating technology at the Mobile World Congress 2024. Visitors can experience this complete ecosystem at the Rohde & Schwarz booth (Stand 5A80) in Fira Gran Via Barcelona from February 26th to 29th.

5G Broadcast, a one-to-many transmission standard within the 3GPP specifications, presents new avenues for both media and data transmission. It excels in broadcasting live content to a multitude of mobile devices simultaneously, offering the flexibility for pop-up channels at events like sports and music festivals. One notable feature is its ability to deliver media broadcasts to compatible mobile devices without the need for a SIM card. Additionally, 5G Broadcast supports data broadcasting, making it a suitable choice for updating IoT devices and automotive applications.

Rohde & Schwarz has played a pivotal role in standardizing this technology, collaborating with organizations such as ETSI JTC Broadcast and 5G-MAG, thus securing the ITU’s endorsement of 5G Broadcast as a next-generation DTT standard on a global scale. This endorsement leads to more efficient utilization of the UHF band, contributing to a sustainable and environmentally conscious future for the industry. After extensive real-world demonstrations and proofs of concept, the end-to-end solution is now ready for commercialization, attracting interest from network operators and content providers worldwide.

Thomas Janner, Director of R&D Broadcast Applications at Rohde & Schwarz, emphasized the simplicity of integrating 5G Broadcast into existing networks, stating, “For operators of networks, adding 5G Broadcast is a simple upgrade to Rohde & Schwarz transmitters. It offers a practical and cost-effective solution for achieving true mobile television without increasing network congestion. Moreover, the technology opens up vast opportunities for data delivery services, especially in parallel with television broadcasting, addressing the growing needs of automotive and IoT sectors.”

Visitors to the Mobile World Congress can expect informative presentations at the Rohde & Schwarz booth, as well as the opportunity to witness a live demonstration at the company’s headquarters in Munich. These demonstrations will highlight the comprehensive ecosystem in place, thanks to collaborations with key industry players and standardization bodies, confirming that 5G Broadcast is ready for immediate commercial deployment.



Whip Media implements AI-powered solutions to address reporting challenges and predict content trends Whip Media, a player in the world of entertainment technology, has unveiled new enhancements to its Software as a Service (SaaS) solutions at CES. These additions harness the power of AI and automation for data aggregation, bringing significant improvements to the collection, analysis, and updating of viewership and content insights.

Addressing FAST reporting challenges: Whip Media has introduced innovative features aimed at revolutionizing reporting for AVOD (Ad-Supported Video on Demand) and FAST (Free Ad-Supported Television) platforms and channels, bolstering their content performance tracking and revenue monitoring capabilities through AI:

1. Automated Data Acquisition Reporting: Whip Media’s AI-enhanced Robotic Process Automation (RPA) components can seamlessly retrieve data from any source, dynamically adapting to changes.

2. Advanced Automated Title Matching: Leveraging AI, the updated title matching feature reduces manual labor while improving accuracy. It empowers FAST channels to gain a deeper understanding of cross-platform content reporting, allowing them to grasp differences in audience engagement and performance across various platforms and channels.

Predicting content trends: Whip Media has also introduced AI tools to enhance its content and consumer insights solutions, drawing on millions of real-time data points generated by TV and film viewers worldwide. These tools enable more accurate predictions of content outcomes:

1. Real-time Sentiment at Scale: AI algorithms analyze vast amounts of unstructured consumer-generated data, including emotions and anticipation, to swiftly identify patterns, trends, and insights that may elude human research analysts.

2. Advanced Predictive Content Analytics: By training AI models using a combination of quantitative and qualitative data, customers can gain valuable foresight and intelligence, predicting future viewing behaviors, preferences, and engagement events to improve content outcomes.


Grass Valley’s cutting-edge technology empowered G20 summit

When the world turned its attention to New Delhi for the pivotal G20 Summit on September 9 and 10, 2023, Grass Valley played a crucial role in delivering a seamless, cutting-edge media experience.

Empowering the International Media Centre (IMC): Grass Valley’s transformative Media Universe took center stage as it powered the IMC across multiple G20 locations, capturing live and recorded moments from the Bharat Mandapam, Raj Ghat, Palam Airport, and Hindon Airbase.

AMPP – The game changer: One of the standout elements of Grass Valley’s contribution was the AMPP platform, a cloud-based SaaS solution that integrated Grass Valley cameras, switchers, elastic recorders, and more. AMPP provided a scalable, centralized hub for the event’s vast media requirements, ensuring on-demand capacity and eliminating post-summit costs, a significant advantage for broadcasters covering the event.

From ingest to distribution: With 99 Grass Valley cameras strategically deployed, Elastic Recorder X and Isilon Storage ensured smooth ingest and secure storage of vast amounts of footage. Mync Software streamlined the process of cataloging and converting ingested video files, while FrameLight X Asset Management provided real-time accessibility and collaboration for global broadcasters. Grass Valley’s comprehensive solutions covered every aspect of media content management during the summit.

Broadcast quality: The summit’s content was showcased in its finest form, thanks to Grass Valley’s K-Frame XP Switchers and seamless Adobe Premiere integration. The ability to transition seamlessly between High Dynamic Range and Standard Dynamic Range, facilitated by KudosPro UHD1200, ensured an optimal viewing experience for audiences worldwide.

The GV Media Universe advantage: Handling the complexity of routing feeds across multiple venues, Grass Valley’s Densité Modular signal processors and Sirius Routers played a pivotal role, ensuring seamless transitions and consistent signal quality. With over 105 major pieces of Grass Valley equipment strategically deployed, the GV Media Universe delivered strength and reliability at its core, reaffirming Grass Valley’s reputation as a leader in technical solutions for broadcast and audiovisual production.

Grass Valley’s role at the G20 Summit exemplifies their commitment to pushing the boundaries of technology and delivering excellence in media production, making them an indispensable partner for such high-profile events.



Broadcast Solutions builds a second IP truck for SuperSport

SUPERSPORT'S IP2 WAS CONSTRUCTED AT BROADCAST SOLUTIONS' FACILITIES IN BINGEN AM RHEIN, GERMANY

Broadcast Solutions has completed the construction of a second all-IP outside broadcast truck for South African broadcaster SuperSport. This new unit, named IP2, serves as a slightly smaller counterpart to the previously delivered giant IP1, which arrived in mid-2023.

IP1, also constructed by Broadcast Solutions, pushed the limits of size for South African roads, with a trailer measuring 16 meters in length, 4.3 meters in height, and 2.6 meters in width when traveling. In contrast, IP2 is 4.5 meters shorter, making it suitable for venues where IP1 would be physically too large. The Sunshine Golf Tour is a prime example, as the larger truck would struggle to access golf estates.

IP2 closely mirrors the specifications of its predecessor, including comprehensive production facilities such as a Sony XVS 9000 4 M/E switcher, a 64-fader Calrec Artemis audio mixer, and onboard EVS replay servers. Sony cameras, including the 3500 Ultra HD, 5500 super-motion, and 4800 ultra-motion cameras, are also part of the setup. Connectivity is facilitated by the SMPTE ST 2110 family of standards, with Imagine Communications SNPs handling SDI/IP conversion, video processing, and multiviewing.

Prishen Govender, Senior Manager Technical Broadcasting – Outside Broadcast Services at SuperSport, explained, "IP1, known as 'The Queen,' is designed for our most prestigious sports events. However, we quickly realized the need for a slightly scaled-down version. IP2, referred to as 'The Princess,' functions similarly but in a slightly smaller form."

SuperSport has opted to standardize its control system with the hi human interface developed by Broadcast Solutions. This powerful control layer simplifies system configuration and provides a user-friendly interface. It also enables seamless interconnection between IP1 and IP2, allowing the two trucks to work together for massive productions.

Govender noted, "After conducting extensive research and consulting industry professionals worldwide, we selected Broadcast Solutions to build IP1, which had an immediate positive impact on our operations. Consequently, we made the bold decision to commission a second truck promptly, and once again, Broadcast Solutions has met our very specific requirements. We can't wait to put IP2 into service in South Africa."

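As context for the SDI-to-ST 2110 migration described above, the bandwidth of an uncompressed IP video flow follows directly from raster size, frame rate and bit depth. A minimal sketch (gross active-video figures only, ignoring blanking and RTP/UDP overhead; the function name is illustrative):

```python
def st2110_20_gbps(width, height, fps, bit_depth=10):
    """Rough gross bit rate of an uncompressed 4:2:2 video flow (ST 2110-20 style)."""
    samples_per_pixel = 2  # 4:2:2 sampling: one luma plus alternating chroma sample per pixel
    bits_per_pixel = bit_depth * samples_per_pixel
    return width * height * fps * bits_per_pixel / 1e9

# A 1080p50, 10-bit 4:2:2 feed carries roughly 2.07 Gb/s of active video,
# which is why ST 2110 facilities are built on 10/25/100 GbE switching fabrics.
print(round(st2110_20_gbps(1920, 1080, 50), 2))
```

Numbers like these explain why a truck such as IP2 routes everything over a high-capacity IP fabric rather than discrete SDI cabling.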

Eurovision Sport launches as EBU's inaugural direct-to-consumer streaming service

The European Broadcasting Union (EBU) has introduced a groundbreaking digital streaming platform as part of its commitment to increase public access to sports content across Europe. Dubbed Eurovision Sport, this marks the EBU's debut in the direct-to-consumer service realm and represents a significant milestone in live sports broadcasting. Thousands of hours of content will stream through this unified digital destination, complementing existing coverage by public service media and showcasing every moment of a diverse range of sporting events. Eurovision Sport will collaborate with the EBU's network of public service Members, ensuring comprehensive coverage of numerous Olympic sports, including athletics, gymnastics, skiing, swimming, and more. It will feature events ranging from World to European Championships, multi-sport competitions, and national championships.

Notably, Eurovision Sport is the first sports streaming service to achieve true gender equality across all its live sports content. Among the initial competitions to be featured on the digital platform, Eurovision Sport will provide live coverage of this month's World Aquatics Championships in Doha (Feb 2-18), the upcoming International Biathlon Union World Championships (Feb 7-18) in Czechia, and next month's World Athletics Indoor Championships in Glasgow (March 1-3). The EBU partnered with Nagravision to develop and operate the Eurovision Sport platform, ensuring it meets the needs of fans, Members, Federations, and sponsors through in-depth market research. Noel Curran, the EBU's Director General, welcomed the launch, stating: "Eurovision Sport is a game-changer for sports fans across Europe and right around the world. We firmly believe that sport should be for all."

Currently, only a third of sports fans have access to premium sports channels. Through free streaming, Eurovision Sport aims to democratize access to live sports coverage, encouraging greater participation and fostering a sense of unity through sports. The EBU currently manages media rights for 14 sports on behalf of public service media, delivering over 43,000 hours of sport annually through agreements with 28 international sports federations. Glen Killane, Executive Director for Sport at the EBU, commented: "With the support of public service media, we'll be able to provide sports federations with an unrivaled shop window for their sports around the world." In this fragmented digital world, Eurovision Sport offers a solution to the challenges faced by sports fans in finding and accessing the sports they love, while also assisting sports federations in attracting new audiences. Eurovision Sport is accessible now via desktop and mobile website – eurovisionsport.com – as well as Android and iOS mobile and tablet apps. In the future, it will be available via Connected TVs and selected free ad-supported streaming television channels.



ESPN, Fox, and Warner Bros. Discovery join forces to launch streaming sports service in the U.S.

ESPN, Fox, and Warner Bros. Discovery have unveiled plans to establish a collaborative Joint Venture (JV) aimed at launching an innovative streaming sports service in the United States. This venture will merge their extensive sports linear networks and direct-to-consumer (DTC) ESPN+ offerings into a standalone app, catering to the fervent sports aficionado. The service will offer an array of premium sports content, spanning major professional leagues such as the NFL, NBA, WNBA, MLB, NHL, NASCAR, College Sports, UFC, PGA TOUR Golf, Grand Slam Tennis, the FIFA World Cup, Cycling, and much more. Scheduled to debut in fall 2024, the new platform seeks to revolutionize the sports streaming landscape, delivering an unparalleled experience directly to consumers.

Bob Iger, Chief Executive Officer of The Walt Disney Company, said: "The launch of this new streaming sports service is a significant moment for Disney and ESPN, a major win for sports fans, and an important step forward for the media business. This means the full suite of ESPN channels will be available to consumers alongside the sports programming of other industry leaders as part of a differentiated sports-centric service. I'm grateful to Jimmy Pitaro and the team at ESPN, who are at the forefront of innovating on behalf of consumers to create new offerings with more choice and greater value."

David Zaslav, Chief Executive Officer of Warner Bros. Discovery, said: "At WBD, our ambition is always to connect our leading content and brands with as many viewers as possible, and this exciting joint venture and the unparalleled combination of marquee sports rights and access to the greatest sporting events in the world allows us to do just that. This new sports service exemplifies our ability as an industry to drive innovation and provide consumers with more choice, enjoyment and value, and we're thrilled to deliver it to sports fans."

Key highlights of the venture include:

– Establishment of a new joint venture comprising ESPN, Fox, and Warner Bros. Discovery to develop, launch, and operate the streaming sports bundle.

– Equal ownership and board representation for each entity, with non-exclusive licensing of sports content to the joint venture.

– Introduction of a new brand with an independent management team.

Further details, including pricing, will be disclosed at a later date as the venture progresses towards its anticipated launch.


SPI International and NXTDigital India partner to launch Dizi channel and FilmBox+ for Indian viewers

SPI International, a subsidiary of Canal+ with a wealth of expertise in channel management and content curation, has recently shared that they have joined forces with NXTDigital India to introduce the Dizi channel and the FilmBox+ streaming platform to subscribers in India.

This partnership opens up an opportunity for viewers to immerse themselves in Turkish drama series dubbed in Hindi. The Dizi channel can be accessed through the VAS Bouquet package as a linear channel, and it's also available on-demand via the FilmBox+ platform, ensuring that viewers can enjoy captivating content at their convenience. Additionally, FilmBox Arthouse on the FilmBox+ streaming service offers access to renowned classics of world cinema.

Subscribers can expect a wide array of Dizi channel series and FilmBox Arthouse content available on-demand through the FilmBox+ platform, allowing them to indulge in their favorite shows and films at their convenience.

Khalid Khan, CEO of India Spark and a key figure in facilitating the partnership between SPI International and NXTDigital India, expressed his enthusiasm for this collaboration. He commented, "We have observed the increasing preference of Indian viewers for Turkish content over the past few years. Recognizing the immense potential here, we partnered with SPI International to dub their content in Hindi. As a viewer myself, I believe this is a groundbreaking moment – potentially the first Turkish channel entirely dubbed in Hindi. Moreover, FilmBox Arthouse promises to be a cinephile's paradise, offering a collection of timeless classics from legendary directors like Hitchcock, Kurosawa, Fellini, and more. It's undoubtedly a treasure trove of on-demand entertainment options for our audience. It may have been a long time coming, but it's finally here!"

Murat Muratoglu, Head of Distribution at SPI International, expressed his satisfaction with this collaboration, stating, "We are delighted to work alongside NXTDigital India in bringing the Dizi channel and FilmBox Arthouse content to their audience. This partnership marks a significant step forward in delivering exceptional entertainment to Indian viewers, promising a diverse range of content to cater to various tastes."

About NXTDigital

NXTDigital serves as the digital media arm of Hinduja Global Solutions Ltd. (HGS), a player in technology-driven customer experience, business process management, and digital media services, supported by the global conglomerate Hinduja Group. With a nationwide reach, NXTDigital delivers television services through digital cable and the country's sole Headend-In-The-Sky (HITS) satellite platform, operating under the brand names INDigital and NXTDigital, respectively. The HITS service spans over 1,500 cities and towns, covering more than 4,500 PIN codes, with a significant presence in rapidly growing semi-urban, semi-rural, and rural India. NXTDigital has a well-established national presence through a network of 10,000 digital services partners who serve millions of customers across the country.



At the forefront of virtual innovation




In this interview with TM Broadcast International, Dan Hamill, the cofounder and commercial director of 80six, offers a peek behind the curtain of the innovative virtual production studio. 80six, an independent company celebrated for integrating high-end equipment with world-class video solutions, stands out in the entertainment technology landscape. With a passion for pushing the boundaries of live entertainment, corporate events, virtual production, and broadcast, 80six has carved out a name for itself as a leader in delivering cutting-edge in-camera visual effects and extended reality experiences.



The studio’s journey began nine years ago, fuelled by a vision to craft a multi-disciplinary hub for creativity and technological innovation. 80six’s studio in Slough epitomizes this vision, reflecting an impressive growth that doubles down on their commitment to revolutionize the industry. Their studio’s expansion signifies a stride forward in specialized vehicle processing using in-camera VFX—a testament to their ambition and foresight.



At the heart of 80six’s philosophy is a deep-seated belief in the power of collaboration and adaptability. Whether it’s handling the pressure of simultaneous productions or flexing their technical muscles to customize LED volumes for heavy-hitters like Netflix and BBC, 80six’s dynamic approach is a beacon for the future of virtual production. With a team that thrives under pressure and a relentless drive for innovation, 80six is not just keeping pace with the evolving landscape of virtual production—they are defining it.


When and how was 80six born? What are 80six's key strengths?

Reflecting on the inception of 80six, it was the shared aspirations and experiences with Jack James, my friend and business partner, that led to the founding of our company nearly a decade ago. Our extensive work with preeminent artists and iconic festivals fuelled a desire to venture into our own business, emphasizing stellar entertainment technology and exceptional client service as our core offerings.

Our studio in Slough, launched three years back, symbolizes our commitment to spearheading advancements in video technology, particularly for in-camera VFX and extended reality. The 2022 expansion of our facility to over 11,000 square feet has been a significant milestone, propelling us to specialize in vehicle processing for dynamic in-camera VFX.

The cornerstone of 80six's identity is our team's spirit. Our collective strength shines brightest when meeting the challenges of simultaneous productions, like the recent car shoot for a premier streaming service and a rapid-turnaround extended reality project. It's the synergy, resilience, and relentless pursuit of excellence within our team that sets 80six apart in the ever-evolving landscape of virtual production.

Can you tell us about your more recent works and projects?

At 80six, flexibility and bespoke solutions are the keystones of our operations, both in-studio and on location. When we launched, our business model was unprecedented in the market. We were pioneers, offering custom-built stages and technology stacks tailored specifically for each production's needs. This foresight put us at the forefront three years ago, and it's gratifying to see the industry now catching up with our early vision of mobile and flexible virtual production.

This adaptability has drawn high-profile projects from across the industry, including major players like Netflix, DNEG, BBC, CBS, and Paramount+. Our work on Netflix's 'Heartstopper' exemplifies our success, with its driving scenes shot on a custom LED vehicle processing stage, creating in-camera VFX that contributed to the show's global acclaim. Looking ahead to 2024, our ambition is to push the envelope further by enhancing our in-house production capabilities. Our goal is to offer comprehensive production solutions that align with our clients' diverse and evolving needs. We're not just responding to the market — we're shaping it.

What are the key pieces of equipment that you rely on for your virtual production setups? How do you ensure that your equipment stays at the forefront of technology?


Both virtual production techniques we employ, in-camera VFX for film & TV shoots and extended reality for broadcast, rely on the same technology in different configurations. Our virtual production set-up hinges on cutting-edge technology, including ROE Visual LED screens with fine pixel pitches of 2.6mm, Brompton Technology image processing, disguise media servers, and Mo-Sys camera tracking. Staying ahead, we regularly upgrade our inventory and collaborate closely with manufacturers to test and access new technologies, ensuring our solutions lead the market. In 2024, we will upgrade our offering by moving some of our screens to market (it will be hard to say goodbye, as they still do us great service) and invest in a new screen of 300sqm+ to enable us to expand our capability for studio work and corporate events, supporting the next stage in our strategy.

In what ways do you incorporate state-of-the-art technologies, such as real-time rendering or motion capture, into your production processes?

We use advanced real-time rendering to run complex real-time scenes via RX nodes, with our ultra-powerful disguise media servers integrated with a video engine such as Unreal Engine. We also display content at scale on multiple LED screens and virtual sets. The reliable RX range is ideal for such demanding real-time content.
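As a back-of-the-envelope complement to the pixel-pitch figures above: an LED wall's native resolution is simply its physical size divided by the pitch. A toy sketch with illustrative dimensions (not 80six's actual stage specs):

```python
def wall_resolution(width_m, height_m, pitch_mm):
    """Native pixel resolution of an LED wall given physical size and pixel pitch."""
    px_per_m = 1000 / pitch_mm  # pixels per meter of wall
    return int(width_m * px_per_m), int(height_m * px_per_m)

# A hypothetical 10 m x 5 m wall built from 2.6 mm pitch tiles:
w, h = wall_resolution(10, 5, 2.6)  # roughly 3846 x 1923 pixels
```

Finer pitches multiply the pixel count quickly, which is why tabletop and close-up work pushes studios toward sub-2mm tiles and more render power.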

Are 80six using AI in any way?

Currently, we do not use AI, but we are exploring the possibility of integrating Cuebric for content tests and demos. Cuebric is an innovative AI generation studio that offers a plug-and-play 3D solution, which enables us to quickly turn 2D assets into near-3D (specifically 2.5D and 2.75D backgrounds for ICVFX). Through integration with disguise, we can easily export the scenes we create to media servers, where the scene's layers are automatically mapped to an LED environment and made ready for shooting. This development is a significant game-changer because it promises to substantially reduce pre-production time and costs.

How do you manage the workflow in a virtual production environment to ensure efficiency and quality?

With more than four years of experience in the field, we've honed a powerful, efficient workflow for virtual production, emphasizing a strong culture of pre-production, where we can coordinate all technical elements to meet the unique challenges of each project. Our flexibility and technological mastery ensure top-quality outcomes, even in this fast-developing early-adopter field. Production teams for streaming platforms ask us for certain tools for the colour pipeline, and we work alongside them to make sure we match their requirements exactly.
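The 2.5D idea mentioned above, flat layers at different depths that shift at different rates as the camera moves, reduces to a simple parallax relation. A toy illustration (the numbers and function name are hypothetical, not Cuebric's or disguise's implementation):

```python
def parallax_shift_px(camera_dx_m, layer_depth_m, focal_length_px):
    """Horizontal image shift of a flat background layer for a lateral camera move."""
    return focal_length_px * camera_dx_m / layer_depth_m

# For a 0.5 m lateral dolly with a 2000 px focal length,
# a layer 5 m away shifts 200 px while one 50 m away shifts only 20 px.
near = parallax_shift_px(0.5, 5.0, 2000)
far = parallax_shift_px(0.5, 50.0, 2000)
```

That depth-dependent shift is what sells a layered 2.5D background as a three-dimensional space on the LED wall.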


How do you tailor your services to meet the specific needs of different clients in the broadcast and audio-visual industry?

Every project we work on has unique requirements and complexities, so there's no one-size-fits-all solution - this is our strength! While our current inventory mainly consists of premium video solutions, we are expanding our offerings to meet the differing budget needs of clients as virtual studios become more popular in the broadcast world. Our goal is to ensure that every client's vision is fully realized within their budget.

How does 80six ensure that its team stays skilled and knowledgeable in the rapidly evolving field of virtual production? Are there specific training programs or learning opportunities you provide to your staff?

Committed to staying at the forefront of virtual production, 80six has been a part of SMPTE's On-Set Virtual Production Initiative since the beginning, and its members are some of the leading global drivers in standardising virtual production and bringing rapid solutions based on meaningful industry insights. You will find our technical team at the major AV tradeshows worldwide, constantly learning about the latest trends and innovative technologies. Virtual production is still a relatively new field, with only a handful of 'experts'. However, we are pleased to see that there is a lot of knowledge-sharing happening organically, which highlights the global collaboration that this technique has inspired.

What have been some of the biggest challenges you've faced in virtual production, and what lessons have you learned from them? How have these challenges influenced your approach to future projects?

Working with extended reality in a broadcast environment can be challenging, especially when aiming for a cinematic output. Cinematic lighting doesn't always align with the requirements for lighting an xR stage. Achieving the illusion of a perfect blend between the back wall and the LED floor is driven by light coordination. There's still a lot to learn, as each project comes with specific creative challenges and you work with different production teams every time. It's not uncommon to encounter light-spill issues, which is why DOPs and gaffers are learning together on the stage.

Looking forward, what do you see as the next big developments in virtual production? How is 80six preparing for these future trends and technologies?

Our main goal at 80six is to encourage wider adoption of virtual production. To achieve this, we have always provided flexible and sustainable solutions to our clients. With the advent of VP 2.0, we believe the industry is moving in the same direction as us. Instead of huge permanent LED volumes, the trend is now towards customized LED volumes of different configurations, tailored to the specific needs of each scene. We envision a simpler future for virtual production, with less technological complexity and more mobile, plug-and-play systems - exactly what 80six is delivering.

ABOUT VIRTUAL PRODUCTION STUDIOS BY 80SIX

Located just outside London, Virtual Production Studios by 80six is an 11,743 sq ft purpose-built virtual production & mixed reality studio with bespoke LED stages and the latest real-time technologies for delivering in-camera VFX (ICVFX) or xR (extended reality). Through a combination of high-resolution LED screens, game engines, camera tracking technology and award-winning media servers, the studio is geared to lead visual innovation in film, TV & advertising by integrating existing and emerging technologies in new ways that push entertainment production to a new level.



Virtual sets, real impact

Final Pixel



In the burgeoning field of virtual production, Final Pixel stands as a beacon of innovation and adaptability. Spearheaded by CEO and co-founder Michael McKenna, the company has rapidly ascended as a trailblazer, especially noted for its agility during the challenging pandemic period. Their commitment to making production accessible, sustainable, and diversified has not only reshaped the creative landscape but also demonstrated the transformative power of virtual production.



The company's global footprint is marked by versatile studios and a focus on cutting-edge research and development, and its approach prioritizes workflow and talent over mere equipment, enabling the creation of tailored virtual environments for any production need. This bespoke strategy has attracted a diverse clientele, including major studios and streaming giants, all seeking to leverage Final Pixel's unique capability to bring stories to life through immersive virtual worlds. In recent projects, Final Pixel has showcased its prowess in virtual production, from taking an embargoed Formula One car on a virtual U.S. road trip to crafting immersive experiences for movie premieres. These ventures highlight the company's end-to-end production capabilities and its adeptness at managing complex, technologically demanding projects. With a future-focused outlook, Final Pixel is poised to further democratize virtual production, making it an integral part of storytelling and content creation for creatives worldwide. TM Broadcast International presents this interview with Michael McKenna, CEO and Co-Founder of Final Pixel, to explore the key contributions the studio is set to make in advancing the content creation industry.


How and when did Final Pixel start its path?

Final Pixel was founded during the pandemic and found early success helping clients who were unable to do live-action production during lockdown. As a creative production company specialising in virtual production, we were able to create high-end commercials and short-form content for clients across the globe who were otherwise unable to film. Our ethos is all about making production more accessible and sustainable, with enhanced diversity and social mobility. We see virtual production as a way of filmmaking which is not only more powerful for creatives and producers alike, but can also improve the nature of production and make it a better place to be. Having become experts in workflow and producing through multiple shoots like this, we have honed our expertise in applied production technology and worldbuilding to bring virtual production to a broad range of clients, including Warner Bros Discovery, BBC, Sky and Netflix.

Final Pixel has three studios in three different locations. What characteristics and technological equipment does each one have? Are any of them specialized in any way?

Final Pixel has production teams in the US and UK, with labs for R&D and smaller shoots. We decided early on to invest more in workflows and talent than in kit, as our clients want




to shoot all over the world in various sizes of studio with differing needs per production. We have lots of experience now in building pop-up LED volumes, tailored to the script and creative, in any location to bring the virtual production technology to the project. We also have a network of trusted stage partners in select locations. This allows us to find the perfect fit of technology, equipment and resources for each client’s specific needs (and budget).



A common studio setup will have a 2.3-to-2.8-pixel-pitch LED wall, 12m to 18m wide, 5m to 6m tall, on a shallow curve or sometimes flat. We often opt for ROE tiles on Brompton processing, but are now seeing some strong challenges in the VP LED market from Absen, Unilumin and notably Sony. Megapixel is also making waves, in particular with the possibilities of GhostFrame. For smaller tabletop shoots we would prefer a finer pixel pitch. Our stages run mainly on nDisplay native or Disguise. More recently we have been adopting specific playback tools for the job, such as Assimilate or even DaVinci Resolve. We are also increasingly integrating the studio elements; for example, we have spec'd the Kino Flo MIMIKs on a few shoots now.

What are the main projects and works that Final Pixel has been involved in, and how have they showcased your capabilities in virtual production?

Recent projects for the team have included the launch of the RB19 for Oracle Red Bull's Formula One Racing team, which allowed us to shoot with an embargoed car without having to take it out on public roads. We were able to take the super-secret new F1 car on a US road trip without ever leaving the studio in Wakefield, UK. This project started at the concepting stage and we handled all the



way to final post and delivery. This showcased how we can maximize production value when taking an end-to-end approach with production technologies. We weave real-time throughout the whole process, and this helps put creative control back in the hands of filmmakers at incredible speed and efficiency for production. Using the technology, we were able to deliver this piece as a carbon-net-zero production, reinforcing the sustainable benefits of virtual production. Also, in the UK we recently created a live activation event around the premiere of the Netflix movie Rebel Moon, where influencers and VIPs were able to immerse themselves in the locations from the movie - by the magic of virtual production. This shows how we can extend a brand's IP into in-person events using the same environments from the main show or commercial. This is an exciting new area of transmedia which brands are incredibly interested in, and with our workflows and the efficiency of real-time and virtual production it has become more accessible than ever.


We also produced the launch of Warner Bros. Discovery's Max streaming service, live from the historic WB studios in Los Angeles. This involved popping up a large LED wall with a bespoke design and running a live show with all the same underpinning technologies we use in our VP shoots for commercials or film/TV. This shows that we are able to put together the biggest projects anywhere in the world and have teams that excel to the highest standards under pressure. Producing a VP shoot often feels like running a live event!

Could you name one project that has pushed Final Pixel to improve or to have a breakthrough in terms of workflows or even tech utilized?

The project that gave us the biggest breakthrough was the Red Bull Formula One shoot. On this shoot we had such a collection of talented colorists, Unreal Engine artists and traditional film crew, along with VP technicians, that we were able to make real-time creative decisions and execute them in a way that would never have been possible with traditional film production. To decide on set that you are going to shoot at night instead of daytime —and then within the hour be ready to shoot— brings a completely liberating approach to filmmaking. Things that were impossible before have now become extremely doable. With most of our shoots, it's the combination of technologies, the incremental improvements across the component hardware, software and workflows, which build to be greater than the sum of their parts. That's the very nature of filmmaking! Because of the development of Unreal Engine, we found ourselves able to render photorealistic clips directly from UE and straight into our edit, further supporting the narrative.

According to your point of view, what would be the main technological development that boosted virtual production capabilities?

The greatest advance, as we see it, has been the development of photorealistic real-time rendering in game engines. This recent development kick-started the virtual production industry and has been a paradigm shift for filmmakers - both those working with traditional VFX, and those more traditional filmmakers who have suddenly realised the incredible potential that virtual production brings to the creative process and the realisation of their vision.

Sound and illumination are often underappreciated, but these techniques play


an important role in filming. What are the special features that virtual production requires regarding lighting and audio production?

Virtual production requires very close attention to cinematography, lighting and particularly the resolution and color space of the LED tiles used to create the live, in-camera composite. The light shed by the LED tiles is different in nature from the reflected light we would see from a real-world physical location, so we need skilled technicians who can correctly calibrate the lighting and cameras to create a believable, photorealistic, immersive world in the studio. Sound plays just as important a role in virtual production as in any other filmmaking approach, carrying the narrative; in VP it plays the role of further reinforcing reality.

What is Simtrav and how does it work? What applications could this development have in film production, beyond automotive advertising?

Simtrav (or simulated travel), otherwise termed car process, allows us to shoot car process work that is leaps and bounds beyond traditional green screen and back projection techniques. We can bring a vehicle into a studio and shoot it from all angles, with a completely convincing scene playing on the LED wall in the background - making for a totally believable composite. The technique works with all sorts of vehicles, from cars to trains to planes. We showed how this can be utilised to good effect on a recent project for Apple TV+, where our team worked on the LED volume doing car shots for the new series Criminal Record. This workflow has become a bit of a no-brainer for vehicle process work, as it cuts out long days on a low-loader, brings much more control to creative and production, and also allows the directors greater focus on



performances with the talent. We are often asked to provide virtual production technology and workflows for this type of shoot for TV and film, and can provide them in most studios. We can provide the plates and even create bespoke plates in Unreal to exactly match the desired background and movement, as we did for a 'Dancing with the Stars x Hyundai' project with SPS.

Can you explain what the Final Pixel Academy is and its role in the broader industry?

When we started Final Pixel we found it almost impossible to hire the kinds of technicians we needed, so the only choice we had was to start training people en masse. Over the last three years the Academy has become one of the biggest virtual production training organizations in the world. The Academy has helped clients get their staff up to speed, helped educational institutions kick-start their virtual production teaching programs, and helped thousands of students develop their virtual production skills, opening new doors for countless people.

What is the 'Content Factory', and what was the inspiration behind its creation?

The Content Factory was designed to help brands get started in virtual production quickly, without having to invest large sums of money in LED wall equipment and technology. The idea is that brands have access to a streamlined setup of a small LED studio and a set of 3D environments. In this space they can produce countless hours of content without the long-winded set-up of a traditional shoot. There is no need to travel, yet content can be shot in locations all over the world —all with the magic of virtual production.


How does 'Worldbuilders' enhance the storytelling and creative process in virtual production?

Worldbuilders allows creatives to have a vision for the place they want to shoot in, and see that vision fully realized, even if there is no physical location which matches what they imagine. Additionally, Worldbuilders can create a fully 3D environment that is specially optimized for virtual production, ensuring that once it is projected on the LED wall the environment looks just as good through the film camera as it does on the PC.

What is the process of creating a virtual environment using Worldbuilders?


Like most projects, the process usually begins with storyboards and a script. After an extensive process of collecting references with the client, an environment sketch is drawn to show how the environment should look when it is finished. The client is then involved at every stage as the sketch is transformed first into a 'white card' or 'grey box' outline of the final environment, and then through each successive stage of adding texture, landscape, architecture, furniture and so on. Once the environment is fully designed, it is handed over to the cinematographer to light and prepare for the on-set shooting days. During the pre-light days on set the environment is further tweaked, the lighting adjusted, and even furniture moved around until it is just right for the shoot.

How do you see the field of virtual production evolving in the next few years, and what role will Final Pixel play in this evolution?

Once filmmakers see the creative possibilities, they are hooked. Final Pixel is committed to democratising the technology and getting it in the hands of as many filmmakers as possible. As the technology and tools improve, the bar to entry will become lower and lower. Our mission is to accelerate this process as much as we can.

What are some emerging trends in virtual production that Final Pixel is particularly excited about?

Manufacturers are designing specific hardware and software for VP uses, for example anti-glare coatings for LEDs that help DoPs light without spill on the screen, which is particularly exciting. Sony looks to have developed a very interesting product with its Verona. Also, the rise of RGBW and RGBWW LED for use in VP will enhance the representation of skin tone and really open up more integration options with Unreal environments and moving scenes. Gen AI for 3D will also be an area to watch…

What future developments can we expect to see from Final Pixel in the realm of virtual production?

As one of the first companies to emerge in this marketplace, Final Pixel has a reputation for innovation and for being at the leading edge of the industry. We pioneered real-time mocap performance and the use of AI, and have developed countless new ways of working. Our goal is to keep pushing the boundaries of the technology and allow filmmakers to expand their creative capabilities beyond what they ever thought possible.

About Final Pixel

With headquarters in the US and UK, Final Pixel is an award-winning global Creative Virtual Production and Innovation Works with expertise in creative producing, world-building, technology development and education. Final Pixel is defining the immersive, engaging and sustainable future for film, episodic and advertising through its own virtual production workflows, techniques and real-time virtual art world-building technologies, and is leading a global virtual production education, training and career-building initiative led by its own team of experts and delivered through the Final Pixel Academy.



Innovating storytelling

Nant Studios



Under the visionary leadership of Vice President of Virtual Production Gary Marshall, Nant Studios exemplifies the rapid evolution of the virtual production landscape. Born in 2014 in Culver City, Los Angeles, as a traditional studio rental, Nant Studios has since transitioned into a vanguard of virtual production, integrating cutting-edge technologies like LED volumes and motion capture systems. This transformation was catalyzed by strategic collaborations with industry pioneers such as Epic Games and Animatrix, propelling Nant Studios into high-profile projects including “Avengers: Endgame” and “Gears of War.”



The studio's foray into virtual production was significantly influenced by the innovative work on "The Mandalorian" at Manhattan Beach Studios, prompting Nant Studios to delve into LED technology and Unreal Engine capabilities. This led to the establishment of a state-of-the-art facility in El Segundo, equipped with a large LED volume, setting new standards for immersive content creation. Nant Studios' journey from its inception to becoming a hub for virtual production excellence is marked by continuous adaptation and the embrace of new technological frontiers. This ethos is reflected in their recent projects and ongoing advancements in virtual production techniques, promising a future where the boundaries of storytelling and content creation are endlessly expanded.

Interview with Gary Marshall, Vice President of Virtual Production at NantStudios

Can you tell us how and when Nant Studios was born, and what the ride has been like until now?

Nant Studios was conceived in 2014 in Culver City, Los Angeles, initially operating as a conventional studio rental facility. Our first venue offered a black box studio space along with production offices, catering primarily to local productions seeking


high-quality, well-appointed facilities. Our evolution began two years later when we formed a partnership with Animatrix, a Los Angeles-based performance motion capture company known for using the same advanced motion capture systems as seen in major productions like “Avatar” and “Planet of the Apes”. This collaboration transformed our Culver City location into a hub for cutting-edge motion capture projects, contributing to high-profile works such as “Avengers: Endgame” and the “Gears of War” video game series.

The pivotal moment for Nant Studios came in 2019, following our exposure to the virtual production techniques being tested for “The Mandalorian” at Manhattan Beach Studios. Recognizing the transformative potential of virtual production, we were eager to explore this technology further. Our ambition led to a collaboration with Epic Games, aimed at creating a space in Los Angeles to demonstrate and develop Unreal Engine capabilities. Thanks to the support of Dr. Patrick Soon-Shiong and Michelle Soon-Shiong, who are


deeply invested in healthcare and media respectively, we identified a former shoe factory in El Segundo, near Los Angeles International Airport, as the ideal site for our expansion. This new location was envisioned not just as a studio but as a pioneering facility equipped with a large LED wall and space for Epic Games to establish their Los Angeles lab. By the summer of 2020, we formalized our plans and I joined Nant Studios, becoming one of the initial team members tasked with constructing our state-of-the-art LED volume amidst the

challenges of the COVID-19 pandemic. Our El Segundo volume, comparable in size to the original “Mandalorian” set, features a dynamic, 360-degree environment with an LED ceiling, setting a new standard for virtual production. Despite the uncertainties brought by the pandemic, our venture proved successful. In our inaugural year, we hosted a diverse range of projects, from commercials to music videos, and episodic content, culminating in the production of the “Westworld” season finale. This project, notably shot on film, added a layer of

complexity and showcased the versatility and appeal of LED virtual production across various budget levels and formats. In essence, Nant Studios evolved from a traditional studio rental business to the forefront of virtual production innovation, driven by a vision to redefine content creation and a commitment to embracing and developing new technologies.

Nant Studios' collaborations with Epic Games and Animatrix have played pivotal roles in shaping the studio's



direction and capabilities. How have these partnerships influenced the evolution and technological advancements at Nant Studios?

The strategic collaborations of Nant Studios with Epic Games and Animatrix have been instrumental in shaping the studio's evolution. These alliances have facilitated the development and real-world testing of virtual production features, enhancing the capabilities of the Unreal Engine used in professional production environments. The proximity of Epic's lab to our studio enables a symbiotic relationship, allowing for the iteration of new features on their smaller LED wall before scaling tests on our larger production stage. This collaboration not only refines the software for practical application but also fosters industry education, with workshops for guilds and associations, demystifying LED and in-camera visual effects for industry professionals. This knowledge exchange is pivotal in cultivating talent within the niche virtual production field, a challenging but vital endeavour in the current competitive landscape.

Can you elaborate on a recent achievement for Nant Studios?

Reflecting on NantStudios' journey up to the present day, particularly from 2020 to 2022, we reached a significant milestone around mid-April or May when NBC Universal approached us. Impressed by our work on stages in California and our strong partnership with Epic Games, they entrusted us with the ambitious project of constructing two massive LED volumes in Australia for an upcoming episodic show set to be filmed in Melbourne. Stage one in Melbourne has since become the world's largest LED volume, a colossal structure standing 40 feet tall, 100 feet wide, and 160 feet deep. This venture posed a complex technical challenge, requiring a sophisticated design to power the system and handle the immense computational and logistical demands. We completed construction in early 2023, followed by extensive testing.
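To get a feel for the "immense computational demands" of a volume this size, here is a rough, purely illustrative calculation. The interview does not state the pixel pitch of the Melbourne main volume, so the 2.3mm figure quoted later for its Stage 3 panels is assumed here, and the curved volume is simplified to a single flat back wall:

```python
# Back-of-the-envelope pixel count for a large LED wall.
# Assumptions (not from the interview): a flat-wall approximation and a
# 2.3 mm pixel pitch; the real Melbourne volume is curved and its exact
# pitch is not given in the text.

FT_TO_M = 0.3048  # feet to metres

def wall_pixels(width_ft: float, height_ft: float, pitch_mm: float) -> int:
    """Approximate total pixel count of a flat LED wall at a given pitch."""
    width_px = (width_ft * FT_TO_M * 1000) / pitch_mm
    height_px = (height_ft * FT_TO_M * 1000) / pitch_mm
    return round(width_px) * round(height_px)

# A 100 ft wide x 40 ft tall wall section at 2.3 mm pitch:
px = wall_pixels(100, 40, 2.3)
print(f"{px / 1e6:.0f} megapixels")  # tens of megapixels, rendered every frame
```

Even under these simplified assumptions the back wall alone works out to roughly 70 megapixels that the render cluster must fill in real time, before the ceiling and wings are counted.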






The initial production slated to inaugurate these volumes was 'Metropolis', the Sam Esmail Apple TV NBCU show. However, due to unforeseen strikes, the production went on hiatus, and the studio ultimately cancelled the project. Despite this setback, by late 2023 the Australian stages began attracting a variety of projects, bolstered by a skilled local team trained on our LA stages. Looking ahead to 2024, we're excited about constructing two new LED volumes in Los Angeles. One will be dedicated to automotive projects, featuring an innovative design with configurable modular wall pieces, allowing us to tailor the volume to the specific needs of each production. This flexibility represents the future of virtual production, adapting creatively to the demands of diverse projects.

What technologies are currently implemented in your studios for virtual production? Can you describe what services Nant Studios offers, and whether there are any differences among the El Segundo, Culver City (CA) and Melbourne facilities?

Every installation is similar, but El Segundo, our pioneering stage, features a horseshoe-shaped LED wall with a pixel pitch of 2.8mm, and a static ceiling that can lead to a color shift effect due to the arrangement of the LED diodes. In contrast, our Melbourne stage boasts advanced ceiling tiles with a revised LED array to minimize this issue. Additionally, the Melbourne ceiling is modular and motorized, allowing for dynamic movement and ease of maintenance, significantly enhancing the flexibility and functionality of the space for various production needs.

For your productions, do you designate specific studios for certain types of projects, or are all your studios equipped to handle a variety of productions?

Our versatility across global studios allows us to tailor


each space for specialized productions. In Melbourne, car commercials are often allocated to Stage 3, a U-shaped venue with 2.3mm pixel pitch panels that enhance the fine details on reflective surfaces like vehicles. Meanwhile, motion capture projects are centralized in Culver City, Los Angeles, benefiting from our dedicated mocap facilities. In El Segundo, we handle a variety of virtual production projects, utilizing our disguise system for 2D media playback when a full 3D environment isn’t necessary. As we progress, our new El Segundo stage is being custom-built to focus on vehicle processing,

ensuring we meet the specific demands of each production with precision. Could you provide a detailed overview of the Real-Time Art Department’s functions and its significance within the context of virtual production? Our Real-Time Art Department, composed of six multi-skilled artists, focuses on real-time interactive content creation, post-visual effects, and traditional offline rendering. Originally, the team was established with a single specialist responsible for validating 3D content’s compatibility with our LED

walls, ensuring frame rate consistency, color space accuracy, and animation sequencing. Recognizing the value in this, we expanded the department to offer content creation as a full-service solution, streamlining the process for clients. As we evolve, we're developing a post-visual effects team to initiate asset creation, leveraging USD and open-source standards for seamless integration across all stages of production. This collaborative approach allows for an efficient pipeline where assets are created, shared, and refined by our Real-Time Art Department, then potentially re-integrated with post VFX, culminating in a versatile and streamlined content development process that addresses both virtual production and post-production needs. That's precisely our objective. We're on the brink of initiating our first commercial project that will be produced entirely through this innovative workflow. This project will integrate aspects of virtual production alongside fully CG-rendered shots, utilizing V-Ray for offline rendering. It's a comprehensive approach that will blend various techniques into a cohesive hybrid workflow.



What are, in your view, the key distinctions between virtual production's applications in advertising, narrative films, TV series and possibly video games?

Certainly, the distinctions between virtual production in advertising, narrative films, TV series, and even video games primarily revolve around timelines and budgets. In advertising, there's a noticeable agility in adopting new technologies. Creatives, agencies, and directors here have been among the early adopters, likely due to a mix of factors. Advertising spans a broad spectrum of budget sizes, from high-budget commercials to more constrained projects like music videos. This diversity has facilitated a rapid embrace of virtual production, somewhat akin to the earlier shift from chemical film to digital video. There was significant resistance back then, especially within the traditional realms of feature films and episodic TV, rooted in a reluctance to deviate from established practices and the perceived threat to conventional roles and techniques. Virtual production, in my view, mirrors this scenario.


A portion of the narrative film industry views it as an additional layer of complexity. However, in advertising, the response is markedly different. Here, virtual production is seen as a revolutionary tool that offers unprecedented versatility: imagine shooting a car commercial in multiple global locations in a single day without leaving the studio. This level of efficiency and creative freedom is particularly appealing in advertising, where turnaround times are much shorter than for films or TV series. The mindset in advertising is inherently more experimental and forward-looking, compared to the cautious and tradition-bound approach often seen in film and TV production. Advertisements are typically produced over a span of six to eight weeks, demanding a fast-paced and flexible workflow that virtual production can adeptly support. In essence, the adoption of virtual production technologies has been warmly welcomed in the advertising sector, driven by the need for efficiency, innovation, and the ability to rapidly iterate creative concepts.

This contrasts with the more measured and hesitant reception in narrative film and television, where the weight of tradition and concerns over the implications of new technologies on established practices and employment loom larger.

How do you incorporate In-Camera VFX (ICVFX) technology into your production pipeline, and what benefits does this integration offer for high-profile projects such as "Avengers," "Game of Thrones," or "Star Wars Jedi"?

Integrating In-Camera VFX (ICVFX) into our production


pipeline fundamentally revolves around transforming our approach to asset and content creation, emphasizing extensive preplanning and preparation. The essence of employing LED technology and virtual production techniques lies in having all necessary digital assets prepared and optimized for this environment well in advance. Our engagement with clients starts from the ground up, guiding them meticulously through each phase, from initial conceptual discussions to pinpointing precisely what elements of their project are suited for virtual production and which might not benefit as much. This discernment is crucial, as it's as important to recognize what might not work as it is to identify what will. For content creation, we offer our expertise to either take the helm or, if the client already has preferred content creators, we ensure they're quickly assimilated into our specialized workflow. This involves a comprehensive set of guidelines and best practices developed by our real-time art department, tailored to ensure seamless integration of ICVFX and preparation for any post-production needs.

The transformative aspect of adopting this approach is not just in the immediate benefits to the production process itself, such as increased efficiency and flexibility, but also in the broader implications for asset utilization. Once developed, these assets can be repurposed across a variety of platforms, from print media to immersive AR/VR experiences, enhancing brand engagement and extending the lifecycle of the content far beyond its initial use. A prime example of this is the digital showroom we developed for Toyota. Traditionally, each new commercial required



constructing or re-dressing a physical showroom, a process both costly and time-consuming. By creating a digitally reconstructed version of their showroom, complete with interchangeable 'skins' for different campaigns, we demonstrated a significant shift towards more sustainable, efficient production practices. This not only streamlined their commercial production process but also opened their eyes to the potential for asset reuse in creative and cost-effective ways.

Our presentation to Toyota and their agency, Saatchi, was a pivotal moment, showcasing the tangible benefits of virtual production. By leveraging a pre-existing asset, in this case a photogrammetric scan of their showroom, and adapting it for virtual production, we illustrated how to achieve greater efficiency and cost savings in commercial production. This approach, we believe, is a testament to the transformative power of ICVFX technology in not just enhancing production workflows but in redefining the potential for creative and efficient content creation across the board.

When working on “Avatar,” what were some of the challenges you encountered, and how did you address them? Working on the original “Avatar” in 2009 was a pioneering experience in virtual production for me. At that time, the concept of virtual production was in its nascent stages, and “Avatar” served as a groundbreaking project that leveraged technologies such as virtual cameras, Simulcam, and an extensive use of performance capture. My primary focus was on the motion capture aspect,


ensuring the seamless transition of captured data into the animation pipeline. This involved developing methodologies to manage and sanitize the influx of scene files from the motion capture stage. It was crucial to maintain order amidst the hectic pace of production, where file naming and scene management could easily become chaotic. The challenge lay in untangling the complex web of virtual production scene files and ensuring they were properly formatted for Weta Digital’s animation pipeline. This task required a blend of technical

acumen and creative problem-solving to ensure the integrity of the data being funneled into the subsequent stages of production. Following my work on "Avatar," I returned to London and contributed to "Gravity" at Framestore. This project was another significant milestone in my career, particularly in the realm of LED virtual production. For "Gravity," we constructed a light box that utilized LED panels to project real-time lighting and reflections onto the actors. This early adoption of LED technology was primarily for lighting purposes, as the

panels at the time weren't advanced enough to be used as direct backdrops for in-camera capture, a technique that has become a staple in today's ICVFX practices. These experiences laid the groundwork for the evolution of ICVFX technology. The journey from the pioneering days on "Avatar" to the sophisticated use of LED in "Gravity" and beyond reflects a decade-long evolution of virtual production techniques. It was a gradual but inevitable progression towards the immersive, versatile ICVFX capabilities we utilize today. Each project posed its unique



challenges, but overcoming them contributed to the rich tapestry of innovation that defines our industry's current state.

Regarding CGI technology, control systems and robotics, could you share insights into which of them have significantly influenced your work?

Of course, the integration of game engines into virtual production and the blurring lines between real-time and offline rendering have been pivotal in shaping our current workflows. A decade ago, tools like MotionBuilder were at the forefront due to their ability to offer real-time playback, which was revolutionary for visualizing performances captured in motion suits. However, the visual quality, particularly in terms of lighting, shading, and texturing, was rather rudimentary compared to the detailed output achieved through offline rendering, as exemplified by the original "Avatar" film. Fast forward to today, the evolution in real-time rendering technologies, notably with Unreal Engine 5, has significantly narrowed


the gap between what we see in real-time on set and the final rendered output. Innovations like Nanite and Lumen within Unreal Engine have pushed the boundaries of visual fidelity, making real-time rendered frames nearly indistinguishable from their offline rendered counterparts. This leap in technology enables us to produce photorealistic visuals in real-time, a feat that was unimaginable just a few years ago. Moreover, the advancements in virtual reality, spearheaded by platforms like Oculus Rift, have further propelled the capabilities of real-time graphics, enhancing the immersive experience and overall quality of virtual production. Another critical component in this evolution is motion capture technology, not only for tracking human performances but also for the precise tracking of cameras

within LED volumes. The accuracy and low latency of these tracking systems are crucial for maintaining the illusion of reality within the virtual environment. These technological advancements, each significant in its own right, have converged to create a synergistic effect that has transformed the landscape of virtual production. It’s a testament to how far we’ve come in the field, where the tools and techniques at our disposal now allow for an unprecedented level of realism and efficiency in content creation. Could you discuss any current limitations in performance capture or related technologies that you wish could be overcome, and how would you address them if given the opportunity?



Yes, definitely; one aspect I'd highlight as a current challenge within the realm of performance capture and virtual production is the considerable time and effort required to craft high-quality content for LED walls. It's not so much a limitation as it is an area ripe for innovation. We're actively exploring the potential of generative AI and machine learning to streamline this process. Interestingly, the healthcare arm of our company is making significant strides in AI for medical imaging, which presents a unique opportunity for cross-disciplinary collaboration to enhance virtual production. The ultimate vision, or "holy grail," if you will, is to enable creatives to interact with LED stages in a more intuitive, real-time manner, akin to the concept of a holodeck. Imagine being able to articulate a scene—say, a

grassy field with a river, or a snow-covered landscape with mountains—and having it rendered in high fidelity on demand. While real-time rendering has advanced significantly, content creation remains a premeditated process, and that’s the gap we’re looking to bridge. However, I must emphasize the irreplaceable value of human creativity in this equation. The integration of AI and procedural generation tools like Houdini aims not to supplant artists but to augment their capabilities, allowing them to achieve a substantial portion of the work efficiently while reserving their expertise for the crucial final touches that imbue scenes with life and authenticity. On the technical side, reducing system latency is another priority. Despite the strides in GPU performance

and motion capture technology, we still face a latency of about seven or eight frames in an ICVFX LED volume. Optimizing this to achieve a latency of merely one or two frames would significantly enhance the immediacy and responsiveness of virtual environments, making the virtual production process even more seamless and intuitive for all involved.

What can you tell us about "Viva Las Vengeance"?

"Viva Las Vengeance" was indeed one of our earlier ventures into utilizing LED volume technology in a creative project. Zack Snyder, known for directing "Army of the Dead," embarked on this journey after following a traditional filmmaking process on that film. The transition from this conventional approach to exploring the possibilities of LED volumes began when Epic Games, in collaboration with us at Nant Studios, proposed an innovative experiment to Snyder. They suggested taking a CGI asset from "Army of the Dead," rendered by Framestore, and adapting it for real-time use in Unreal Engine on our LED stage in El Segundo.
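As an aside, the frame-latency figures Marshall quotes can be converted to milliseconds with a quick sketch. The frame rate is an assumption here (24 fps; actual production frame rates vary between 24, 25, 30, 50 and 60 fps), so treat the numbers as illustrative only:

```python
# Converting ICVFX latency quoted in frames to milliseconds.
# Assumption (not from the interview): a 24 fps shoot.

def latency_ms(frames: float, fps: float = 24.0) -> float:
    """One frame of latency lasts 1/fps seconds; return it in milliseconds."""
    return frames * 1000.0 / fps

for frames in (8, 7, 2, 1):
    print(f"{frames} frame(s) at 24 fps = {latency_ms(frames):.1f} ms")
```

At 24 fps, seven to eight frames is roughly a third of a second between camera move and wall response, while the one-to-two-frame target would bring that under a tenth of a second.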



Snyder, intrigued by the potential, embraced the opportunity, which led to a day of creative exploration on our stage. This experiment sparked the idea to incorporate a taco truck from the movie into a novel context. The concept evolved into creating content for “Viva Las Vengeance,” using the LED volume to craft a unique commercial that tied into the broader “Army of the Dead” universe, including its VR experience component. Collaboration with Framestore was key in this process, as we worked closely to adapt their assets for real-time rendering. My personal history with Framestore, having been a part of their team for seven years, facilitated this collaboration, reinforcing the project’s creative synergy. This venture into using LED volumes for “Viva Las Vengeance” not only delivered engaging content but also served as a precursor to our subsequent project for the Resorts World Hotel in Las Vegas. This commercial featured A-list celebrities like Katy Perry and Celine Dion and showcased the efficiency and flexibility of virtual production. By integrating practical set pieces


with digital backdrops, we managed to accommodate the tight schedules of multiple celebrities in a single day, highlighting the importance of thorough pre-production and the seamless integration of virtual production techniques with traditional filmmaking practices. These experiences underscored the transformative potential of LED volume technology in the film and advertising industries, offering a glimpse into the future of content creation and production efficiency. As we conclude this interview, could you share insights into any upcoming projects or groundbreaking technologies you are currently developing or planning to introduce? I’m excited to share some of the forward-thinking developments we’re undertaking at Nant Studios. One of the groundbreaking shifts we’re embracing involves reimagining the construction of LED walls. We’re moving towards a modular and flexible design philosophy, which stands in stark contrast to the traditional approach of installing static, immovable LED volumes within

soundstages. This innovation allows us to tailor the LED setup to the specific needs of each project, offering unparalleled versatility and efficiency in our use of studio space. Parallel to this, we’re pioneering a new vehicle motion base technology, an advancement from the conventional gimbal systems. This development is geared towards accommodating a wide range of vehicles by adjusting the wheelbase to suit the specific requirements of any given production. The integration of this hardware with real-time


game engines like Unreal Engine will enable a seamless interaction between the motion base and the virtual environments, significantly enhancing the realism and dynamic possibilities for automotive commercials and action-packed narratives with intricate car chase sequences. This initiative is supported by our collaboration with General Lift, a company with a four-decade legacy in producing motion control equipment. Since acquiring General Lift, we've been leveraging their expertise to advance our motion base technology, ensuring it meets the high standards of today's film and commercial productions. Moreover, we're dedicated to refining our content creation pipelines, focusing on establishing a next-generation hybrid workflow with Universal Scene Description (USD) at its core. This endeavor is part of our broader commitment to continuous research and development in AI, machine learning, and generative AI technologies.

At Nant Studios, innovation is the cornerstone of our philosophy. We're deeply invested in research, development, and experimentation, constantly exploring new frontiers to push the boundaries of what's possible in virtual production and beyond.

About Nant Studios
NantStudios is a state-of-the-art, full-service production ecosystem comprising traditional, broadcast and virtual production stages. The company is based in Los Angeles, with two campuses in Culver City and El Segundo. Its virtual production stages are serviced by an expert team with decades of virtual production, visual effects and engineering experience. NantStudios’ goal is to democratize the virtual production workflow and make it accessible to projects of any scale, while innovating with R&D in technologies that streamline the process.



Overcoming filming barriers with virtual tech

Ready Set Studios



Nils Pauwels, CEO of Ready Set Studios (RSS), speaks to the transformative power of virtual production in an industry increasingly defined by its adaptability and innovation. RSS was conceived amid the pandemic, a time when traditional filming was challenged by lockdowns and travel restrictions. It became apparent that a new approach was necessary, one that could transcend the limitations imposed by the global crisis. Ready Set Studios emerged as a response to these industry upheavals, offering a virtual production space that marries technological advancement with creative expression.



RSS, under Pauwels’ leadership, has been a part of a diverse array of projects, from major streaming service productions to independent films, commercial campaigns, and music videos. The studio’s expertise in in-camera VFX (ICVFX) has positioned them as a unique and pioneering force in the realm of virtual production. This approach, which blends 3D scenes with live-action footage on set, demands a symphony of planning, communication, and tech prowess, enabling a seamless integration of digital and physical realms. Looking forward, Pauwels discusses the evolving landscape of virtual production and its broader impact on the audiovisual industry. With LED technology improving, integration with camera and lighting systems becoming more sophisticated, and the potential application of AI on the horizon, the next five years look to be a period of significant transformation. As virtual production becomes more accessible and cost-effective, Pauwels and Ready Set Studios are at the forefront, not just adapting to changes but actively shaping the future of filmmaking.

When was Ready Set Studios born and why? How was the kick-off for this virtual production studio?

The conception of RSS lies right in the middle of the corona period. Two of the founders were complaining about the current state of the (Dutch) film market: how everyone was always shooting at the same, scarce locations, how it was impossible to travel, and how much waste we were producing as a whole. At the same time two others were discussing virtual production options and exploring ideas. When it all finally came together, RSS was born!

In which works has Ready Set Studios participated until now?

We have worked for large streamers, independent (indie) filmmakers, creative agencies, commercial production companies and music video clips. Even a few live events! Even though our core passion lies in feature films and series, we just love it when “it all comes together” and we have helped to create something beautiful from scratch. Whether that’s longform or a commercial doesn’t matter anymore at that point.

How has virtual production evolved at Ready Set Studios since its inception, and what key technological advancements have been the most influential?

VP is such a crazy fast-moving industry that everything you do can be out of fashion or even obsolete the next day, or even faster. We are proud that we have not only kept standing through difficult times, but have also done continuous R&D in our own volume. We are beta testing for several large companies and software products, trailblazing for studios all over the world and allowing workflows and pipelines to standardise.

Can you explain the core principles of in-camera VFX technology and how it’s implemented in your productions?

First of all, it’s important to understand that there is a big distinction between VP and ICVFX. Where VP is the process where the physical and the digital world meet, ICVFX is only one of the parts described as VP. In-camera VFX (ICVFX) means adding effects while filming, not afterwards. In today’s production process, it usually involves creating 3D scenes on big screens and filming actors and props in front of them on a real set. Making this all happen smoothly requires meticulous planning and a lot of communication, as well as technological ingenuity. Whilst RSS is a studio that only does ICVFX, we tend to be involved from the earliest stages, sometimes even at the conception of an idea, to implement the workflow into the whole production. This can be from a production design standpoint, or a VFX standpoint, for example.

Which are ICVFX’s benefits and how does it impact production pipelines and/or the film industry, from your point of view?

There are many benefits to using ICVFX: IBL, re-usability of backgrounds, shortened post-production time, guaranteed consistency, and on-set collaboration and creativity, to name a few. The biggest change in production pipelines is that everyone needs to make decisions beforehand that would traditionally happen later in the process. At the same time there is a similar shift in cashflow and the spending process. Either of these can be quite scary, but when we are all on set and everyone can see the results in-camera and make (creative) adjustments whilst collaborating with other HODs, it’s all worth it!


What are the most significant challenges the studio faces in virtual production, and how does Ready Set Studios address these challenges?

I think the biggest challenge is managing expectations. VP is not the holy grail. It is a tool in your toolbox. A very powerful tool, that is, but it should be treated as such. It will be tough to film a whole film or series in VP, but it can offer a solution to something that would otherwise have been very difficult to produce.

In your opinion and experience, has the introduction of virtual production techniques changed the broader landscape of audiovisual production?

Absolutely. On a technical level first of all. The companies traditionally offering AV solutions are now submerged in technical filmmaking terms and had to do a crash course in understanding our lingo. At the same time, all manufacturers in the chain have upped their game in their product delivery, allowing for smoother transitions into the VP landscape. But it has also changed how anyone involved in a production looks at challenges in a script or treatment. With an extra tool in your toolbox and in the back of your mind, it is easier to think of solutions, and the money shot can be achievable for everyone now.

How does virtual production affect the collaboration between directors, cinematographers, and visual effects teams?

The collaboration between all HODs is intensified in pre-production as well as on set. The ability to work closely together

with all teams means that the creativity sits with all the right people to ensure the best possible outcome. Anyone on set can now see the final result, so production design can make adjustments, but so can the VAD and make-up, all in conjunction with the DoP and director.

What emerging technologies do you believe will have the most significant impact on virtual production in the next five years?

LED will become better, adding more colours to the traditional RGB. Integration of LED, lighting and camera with each other and in software like Unreal Engine will only get better. And dare I mention AI?

What skills and knowledge are essential for professionals looking to specialise in virtual production?

There are a lot of new skills now on set, from game engines to LED technicians. Most important is to understand how everything works together and find your own niche in that chain, or have the helicopter overview that’s needed to be a supervisor, for example.



What projects is Ready Set Studios preparing for the near future?

We are lucky to be involved in several large projects at the moment, ranging from a large pop-up stage in Holland to productions abroad, as well as several bigger and smaller shoots in our permanent studio in Amsterdam. Really exciting times to be VP pioneers in the early adoption phase of this filmmaking technique.

About Ready Set Studios
Launched in March 2022 by award-winning Dutch filmmakers, visual effects professionals and creative technologists, ReadySet Studios is considered the most advanced, full-service LED virtual production facility in the Netherlands.

How has the client’s understanding and expectations of virtual production evolved, and how does Ready Set Studios manage these expectations?

When we first started, VP as a whole and ICVFX in particular were greatly unknown amongst Dutch industry professionals. We have done countless demonstrations and educational tours, and see that now filmmakers in the broadest sense of the word know who to call and when to call us. It is not uncommon that we are involved from the scriptwriting stage onwards, or sometimes even earlier in a project. Also, for commercial projects we are often involved in the conception of an idea, co-working on treatments and so on. With understanding of the technology comes understanding of expectations, so all in all, we are going in the right direction!

Is virtual production becoming more accessible and cost-effective for smaller productions? If so, how?

Yes, it is democratising more and more. The days when LED volumes were only massive ones used for the 1% of filmmaking are behind us. More and more smaller volumes are popping up, allowing smaller-budget productions to leverage the technology for their projects.

From your experience, what has been the most groundbreaking project or application of virtual production technology at Ready Set Studios?

We just finished a project where VP was the perfect solution: a longform period piece taking place in the same set at various times. The entire studio was a practical set with reflective glass and smoke everywhere. The digital backdrop provided the necessary changes between the time periods and it all blended perfectly together. Really a project to keep an eye on.





Revolutionizing the archive

The VIDA Content OS platform’s impact on BBC and Getty Images

In an era where content is king, the integration of the VIDA Content OS platform by the BBC and Getty Images marks a significant leap forward in the audiovisual industry, especially in the realm of archive management and content monetization. This collaboration, underpinned by advanced AI capabilities, is set to redefine how media assets are accessed, managed, and leveraged, offering an unprecedented level of efficiency and innovation. The upcoming interviews in TM Broadcast International delve deep into the technical intricacies and visionary approaches behind this monumental project, shedding light on the challenges overcome and the breakthroughs achieved. TM Broadcast’s readers can anticipate a comprehensive analysis of the seamless integration of cloud technologies, AI-driven transcription services, and user-centric platform design that collectively enhance the accessibility and utility of vast media archives. The firsthand accounts from key figures behind the initiative, including Simon Roue of VIDA, Chris Hulse from the BBC, and Paul Davis, Vice President at Getty Images, EMEA, will provide valuable insights into the strategic decisions and technological advancements that have propelled this development. This feature article aims not only to inform but also to inspire industry professionals about the potential of AI in transforming content management practices. The discussions will explore how the VIDA Content OS platform’s implementation facilitates a more intuitive search, curation, and distribution process, thus setting a new standard for content platforms in the digital age. Through this analytical lens, we unveil how this pioneering project stands as a testament to the power of collaboration and innovation in harnessing the full potential of AI in the audiovisual sector.



Recently we learned that VIDA developed a MAM system to manage 57,000 BBC documents and make them accessible to the public. These files were previously only searchable by hand; now AI capabilities support and manage these documents in an efficient way, for the public’s benefit. First, TM Broadcast International speaks with Chris Hulse, Head of Motion Gallery at BBC Studios.

Regarding the recently implemented VIDA Content OS, can you provide more details on the collaboration between Getty Images and BBC Studios in launching this new platform for accessing BBC archive video content?

BBC Motion Gallery and Getty Images combine our expertise in content and sales to commercially represent the BBC’s TV and radio archive across the globe. We pride ourselves on our excellent customer service and openness to feedback. When asked to make our content more readily accessible we took this on board, investigating the specific requirements of our clients. The resulting project was time-consuming but well worth the investment to deliver a highly regarded product that is having a positive impact on our businesses and our clients.


How does this platform enhance the customer experience in terms of searching, purchasing, and downloading BBC archive video content?

Our previous library platform only gave our registered clients the opportunity to view our content. If they wished to select clips for licensing they needed to be sent a secure link for each programme, watch it again and clip up the sequences they wished to license. With VIDA they are able to research, browse and clip up content 24/7. They only need to contact the Sales team to obtain additional research, or to discuss master clips and licensing, which saves them both time and money.

The platform allows customers to securely search the entire digitized library. Can you tell us about the security

Chris Hulse has been the Head of BBC Studios Motion Gallery since 2014 when they partnered with Getty Images to represent the BBC’s TV and radio archive for commercial clips sales. He has over 35 years’ experience in the media industry, having been an Archive Researcher prior to leading teams, digitisation programmes and technical projects within the BBC. His varied roles have given him practical experience of working with film, videotape, audio, music, stills and file-based media. Chris is very proud to be facilitating the re-use of the BBC’s exceptional programming globally whilst providing valuable support to their public service broadcasting role.


measures implemented to protect the BBC archive content?

The VIDA platform contains all the content ingested by Motion Gallery and is only accessible to our registered users. Registration is limited to our commercial clients and managed internally. The site is not open to web crawlers or search engines. The high-resolution master content is not directly accessible to our clients; their clips are manually released only after licensing.

How do the curated collections enhance the creative process for producers and programme makers?

The collections are not intended to be exhaustive but offer programme makers a shortcut to content curated by our expert researchers, covering commissioning trends and upcoming events. This may provide valuable insights for current projects or inspire new ones.

We have read that post-clearance high-resolution masters are available immediately. How does this benefit projects with fast turnaround times, and what impact does it have on production timelines?

When clients select clips from our preview files the platform

generates high-resolution versions at the same time. This means that there are no delays in master fulfilment. Once our client has signed the licence the clips can be downloaded straight into their edit.

How do the latest speech-to-text tools integrated into the platform facilitate transcript search for all assets, and how does this contribute to uncovering hidden content from the BBC Motion Gallery’s vast archive?

The platform runs speech-to-text transcription at the point of transcode. This provides us and our clients with valuable additional search options for data that is not available via the BBC Archive’s metadata set.

Can you provide insights into the significance of making iconic BBC content, such as The Office, Absolutely Fabulous, BBC Sports Personality of the Year, and Planet Earth, available for download for the first time ever?

Our aim is to make the BBC’s amazing archive more visible to the global production market. Highlighting iconic programmes and brands



brings footfall to the site, where clients are then exposed to our fantastic editorial content, which they may previously have been unaware of.

How does this new platform contribute to the licensing and distribution of BBC programmes globally?

Motion Gallery’s business is licensing clips from the BBC’s vast TV and radio archive. The VIDA platform actively supports this by making the BBC’s programmes more visible to our global audience, who may not be aware of the breadth of the BBC’s output. This may then lead to sales leads for the BBC Studios programme distribution team.

How does VIDA Content OS address the growing demands in the archive footage licensing industry, providing an integrated and efficient supply chain for servicing the catalogue,


particularly with the use of cloud and AI technology?

Behind the scenes we have moved away from local spinning disc and tape robotics to a managed cloud solution. This delivers not only financial savings and greater sustainability but also a much more robust and effective service to our clients. Wherever they are in the world, they will have the same user experience. With the BBC being over 100 years old, it has a vast archive. This has been managed via many different systems and teams over the years, which impacted the level and accessibility of metadata. Utilising AI tools to deliver reliable transcripts for our content at the point of transcode gives programme makers a much richer source to draw upon, identifying content such as personalities and speeches that were not previously identified.

How does the partnership between Getty Images and BBC Studios aim to assist programme makers around the world, and in what ways does the VIDA platform play a crucial role in achieving this objective?

Our partnership aims to make the BBC’s quality programming and editorial content readily accessible to programme makers globally. We leverage Getty Images’ worldwide sales team and provide clients with a trusted archive platform that is available to them 24/7, helping them to locate specific shots or ideas for commissioning pitches.

Is the BBC planning new developments related to its content and its management?

Motion Gallery is focused on enhancing our offering to clients. Our aim for the year ahead is to develop our VIDA platform to deliver more content and further enhance the user experience.


As part of this collaborative enterprise, Getty Images shares its experience of working with VIDA and the BBC throughout the VIDA Content OS implementation process. Paul Davis, Vice President at Getty Images, EMEA, answered TM Broadcast’s questions about the latest advances in the field of archive management.

How did the company realize that it required a new MAM solution?

The key realization was driven by the need to improve our overall customer experience and efficiency, as well as to enhance access to the incredible BBC Motion Gallery library. The previous MAM solution only provided registered users with a view-only platform experience and created barriers between our customers and our content. Our goal was to break down those barriers and allow customers the ability to research and partially self-serve in real time, 24/7, which massively benefits our customers dealing with fast turnaround times. Our goal is to help producers get their productions made: VIDA allows us to collaborate creatively with our clients at the development stage to build programme ideas and seamlessly gain access to over 57,000 digitised programmes.

How long did it take to create and implement the new MAM solution?

Conversations around the launch of the new VIDA platform began in 2018, but it took a significant amount of time to bring this to life, with BBC Motion Gallery, Getty Images and VDMS developers working tirelessly together to build the best solution possible for our clients.

What kind of archives (AV, only audio, only video) are managed by this new MAM platform?

The platform currently supports the BBC Motion Gallery’s AV content offering, but we are working towards expanding to also include the BBC Radio Archive on the platform.

What kind of issues has it resolved?

The VIDA platform allows customers to research content from over 57,000

digitized assets from the BBC library and go directly to master selection on any of these featured items; this digitized collection is also growing every day. Content is being added both proactively and as part of the continued offline research access available to our customers. Any BBC programme that has library sales rights, and has not previously been digitized, is now added to the VIDA platform library as soon as the new preview is completed. We have also been able to integrate the VIDA Content OS platform with Getty Images’ internal processing systems to allow for a more streamlined content ordering and delivery process; this has reduced turnaround times and administrative load. This integration also allows for the real-time release



of reproduction rights and copyright information with master content delivery, which was not possible on older platform solutions.

Does the VIDA Content OS platform offer any new features that make it easier to manage and monetize the content?

The VIDA Content OS platform allows customers to view and research watchable content freely, view metadata, pull transcripts, and store and share collections of content, as well as download previews and order master clips against their projects. All of this leads directly to more content consumption and licensing, whilst making the collection easier to support, reducing turnaround times and administrative load. The platform also provides curated collections; this allows our clients to view content that is matched to what is selling in the market, allowing for deeper creative collaboration in the development stages. Another very exciting feature for industry professionals is the ability to utilise the new speech-to-text transcriptions, which will allow programme makers to uncover content


that may have previously been difficult to unearth.

This new MAM solution, could it be used to manage all Getty content?

This MAM solution offers our clients a bespoke, improved and enhanced solution for the BBC Motion Gallery offline collection. Getty Images represents more than 551,000 contributors, of which 80,000 are exclusive to Getty Images. We will continue to explore all solutions to meet the needs of any customer, no matter their size, around the globe. However, Getty Images’ website already offers a growing library of over 551 million visual assets (video and stills) that delivers unmatched depth, breadth, and quality: an extensive and comprehensive archive (historic and full range) with over 135 million images dating back to the beginning of photography, and a video collection containing over 26 million clips and growing. The BBC Studios online collection is also available online with over 200,000 licence-ready clips.

What were the main difficulties or challenges when implementing the new VIDA Content OS solution?

Our number one priority was putting the customer at the heart of everything we do. The constant consideration was ensuring this solution resulted in the most positive outcome for all parties engaging with the platform: balancing the needs of the BBC Motion Gallery archive and Getty Images’ internal processing systems, and ensuring that this was a legitimate upgrade to the previous customer experience.

Until now, what MAM solution did Getty use for its database?

Previously the BBC library offered clients a view-only experience. This platform was completely independent of the client ordering process. The new platform offers clients a full end-to-end journey, allowing clients to research and partially self-serve.

What role did royalties and reproduction rights / copyright play when implementing the VIDA MAM platform?

We had to consider how reproduction rights and copyright information was provided to customers using all content on the platform. There is a dedicated team of experts working at both BBC Motion Gallery and Getty Images to ensure that all content released onto the platform is diligently checked and approved before release. The VIDA platform allows for master content to be released to customers with all clearance information attached, in real time and ready to be ingested into an edit.

How did the VIDA platform integrate with your former CRM? Are both systems interrelated now?

Before September 2023, VIDA did not integrate with any system at Getty Images. Our CRM is now fully integrated with VIDA, which allows

customers to preview, clip up, order, track requests and download master content directly via the VIDA platform. This has revolutionised the customer experience, providing self-serve abilities, improving time to access content and enhancing the accessibility of our content.

Recently, Getty developed, together with NVIDIA, a generative AI tool for content ideation and creation. How does it work?

We are excited to offer our customers the Generative AI by Getty Images tool, which pairs the company’s best-in-class creative content with the latest AI technology for a commercially safe generative AI tool. Generative AI by Getty Images is trained on the state-of-the-art Edify model architecture, which is part of NVIDIA Picasso, a foundry for generative AI models for visual design. The tool is trained solely on Getty Images’ vast creative library, including exclusive premium content, with full indemnification for commercial use. Customers simply type in the image prompt, use the offered filters around content type, aspect ratio and colours/



mood, and click ‘generate’. Four generations are returned for every prompt, and from there customers can go on to refine what the generator created and see what pre-shot imagery exists around the prompt, in case that gets them to the desired image faster.

Has it learnt from your archive or database?

The tool was trained on Getty Images’ vast creative library, including exclusive premium content, driven by our global creative insights research and curated by our internal experts, who elevate the best authentic and representative imagery.


What is the main goal to achieve with this new tool offering?

To offer our customers a high-quality tool which won’t put them at legal risk. Not all other AI generator services and tools are automatically safe to use for commercial purposes. Uniquely, we can say ours is, as it is exclusively trained on our premium creative content, so you won’t run into legal issues around using images which contain likenesses of real people, trademarks, logos, works of art or architecture, or other elements protected by third-party intellectual property rights that you do not have the right to use. We

also believe that a better-quality training set gives you a better-quality output.

What kind of developments can we expect from Getty Images in the near future?

We’ve just announced advanced inpainting and outpainting features, now available via API. Developers can seamlessly integrate the new APIs with creative applications to add people and objects to images, replace specific elements and expand images in a wide range of aspect ratios. These features will soon launch on and


TM Broadcast spoke with Simon Roue, VIDA’s CEO, to learn about the main challenges faced and the breakthroughs that come with this project’s development.

What were the requirements for this project, and how did you adapt the VIDA Content solution to it?

Access to the BBC’s extensive library is not direct due to the nature of the broadcaster being a public service entity. To facilitate library searches, Getty Images, through its commercial relationship with the BBC, is contractually permitted to sell content from the BBC library. The complexity arises with the library being only partially digitized and the challenge of surfacing content from the undigitized portion. As a researcher, initial access through Getty might reveal a selection of clips from their public website. To delve deeper, an account in VIDA is created by Getty Images, which unlocks further levels of access. A significant endeavor was the migration of 57,000 assets into VIDA, necessitating transcoding and the creation of proxies for each asset. Furthermore, to enhance searchability within the platform, we undertook

the transcription of over two million minutes of content. Integration with Getty Images’ CRM, specifically Salesforce, was also critical. This allows users to search the VIDA library, select clips, and add them to their basket, which in turn generates an opportunity in Salesforce. However, not all clips are immediately licensable due to potential third-party content or rights clearance issues, such as those involving the Royal Family. Once the rights checks are complete and commercial negotiations finalized, users can utilize watermarked proxies for their edits, and upon clearance, download the high-resolution asset from VIDA. The platform also caters to instances where content is not yet digitized in VIDA but exists within the BBC’s library, including various departments like BBC archives, news, or sports. These departments can upload files into VIDA, enabling them to be surfaced for licensing. This sophisticated system offers

an easy-to-use environment for researchers to surface previously inaccessible content and for Getty to fulfill its commercial obligations, ultimately generating revenue for BBC Studios. This journey to commercialize the BBC’s rich archive and provide access to over 5,000 researchers through VIDA has been successful, though it was a complex process not to be underestimated. It’s a lot of work.

How long did it take to develop the platform?

It was about six months. We had been working with the BBC for a little while prior to that, but for the integration with the rights management system, the Getty Images development team also had to be involved, because they were opening up their CRM system for us to integrate with. It wasn’t without its challenges, but we overcame all of them.



How does the platform work from the point of view of the user?

VIDA guides users into a frictionless environment via passwordless entry, simplifying access with a secure link sent to their email, or through various single sign-on services. Upon entry, users are greeted with a homepage curated by BBC and Getty, highlighting thematic collections that span diverse categories, including natural history. These collections are a cornerstone feature of VIDA, enabling both our hosts and users to assemble and personalize content compilations that can be shared with collaborators, all within the platform’s secure confines. This curation is complemented by a robust search engine that penetrates beyond surface-level data, utilizing transcription-based metadata to unearth previously inaccessible assets. This capability allows footage researchers to conduct granular searches with keywords extracted via sophisticated speech-to-text tools, revealing valuable clips for their projects.

Our integration extends to the shopping experience within VIDA. Users can select clips, tailor them to their project’s scope, and seamlessly add them to a basket. This initiates a process within Getty Images’ CRM system, ensuring that rights and commercial considerations are validated. Once the requisite checks and negotiations are completed, users are notified that high-resolution assets are ready for download, streamlining their creative process. It’s important to note that VIDA’s scope extends to content not yet digitized within the BBC’s archives. We’ve established integrations with various BBC departments, ensuring that any requested content can be promptly uploaded and made available on our platform. This sophisticated yet user-friendly system underscores


our commitment to providing a state-of-the-art content repository and distribution service, benefiting both the BBC's extensive archives and our commercial partners at Getty Images.

Which were the main challenges for this development?

Certainly, reflecting on the main challenges we encountered while developing VIDA, it becomes clear that our cloud-native approach was a decisive factor in mitigating potential obstacles. The sheer volume of content, exceeding a petabyte, could have presented a significant hurdle if we had been constrained by traditional on-premise methods. Instead, leveraging the expansive compute power of AWS and utilizing Dolby's advanced transcoding engine allowed us to efficiently process and transcode a vast archive of media. Synchronization among stakeholders was paramount. Ensuring alignment across the teams at VIDA, BBC, and Getty Images, while maintaining business continuity, was a complex dance deftly managed by our adept project managers and customer success team. The adoption

phase post-launch brought its own set of user-centric challenges, from login issues to email discrepancies, the typical teething problems of any large-scale digital transformation. Yet it's the people aspect, the change management, that truly tests the mettle of technology deployment. Witnessing users move past the initial adjustment to embrace the full potential of what VIDA offers, and receiving their enthusiastic feedback, has been particularly gratifying. It underscores the success of the platform not just in technical terms, but in its ability to revolutionize user experience and content accessibility.

We suppose that, at least at the beginning, the work had to involve BBC and Getty, but which departments were involved? Did you set up a special team for this work?

The development of VIDA was a collaborative effort, drawing on the strengths of multiple departments across Getty, BBC, and our VIDA team. At Getty, the synergy between the commercial, development, and marketing teams was

pivotal. Their efforts in user education through webinars and training were crucial for a smooth transition to VIDA. At the BBC, the rights management and commercial teams faced a paradigm shift in their daily operations, managing the risks associated with technology change and ensuring customer satisfaction. Within the VIDA team, the customer success, development, project management, and marketing departments played integral roles. Communicating internal changes and fostering a collaborative environment were key responsibilities, especially for our new team members like Philippe, who joined us to strengthen our internal communication. The development was a testament to teamwork, with each department contributing to the collective goal and ensuring a seamless introduction of VIDA to our users.

What are the main differences between managing video / audio files and pictures? How did you overcome these differences and integrate



all documents across the same system?

VIDA is proficient in handling a diverse array of documents and images, yet this specific collaboration was focused on the BBC's rich trove of audiovisual content, which includes video, audio, and associated metadata. Despite VIDA's capacity to manage both still and moving images, our primary objective here was to facilitate the licensing of the BBC's audiovisual assets. We successfully integrated content across various resolutions, from standard definition to 4K, and even 5K for certain unedited raw footage, particularly from natural history archives. The versatility of VIDA ensured a seamless user experience regardless of file size, which is integral when dealing with such a wide spectrum of content.

What compression protocols did the team work with?

All of the source assets are stored predominantly in the ProRes format. Those are the mezzanine assets. Then the proxy files, which are the viewable, streamable content, all use H.264. There are various bitrate ladders, so depending upon the internet


connection that you are working with, the system will dynamically decide which bit rate is optimal to be served to you, to give you the best searching and screening experience. By keeping it relatively simple and not going wide on the different formats, you are able to build a much more easy-to-integrate-with kind of system.
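The dynamic bit-rate decision described above can be sketched as a simple ladder lookup. The rungs and the headroom factor here are illustrative values, not VIDA's actual ladder.

```python
# Illustrative H.264 proxy ladder: (frame height, video kbit/s),
# sorted from highest to lowest rung.
LADDER = [(1080, 6000), (720, 3000), (480, 1200), (360, 700), (240, 400)]

def pick_rung(measured_kbps: float, headroom: float = 0.8):
    """Pick the highest rung whose bitrate fits the measured connection,
    keeping some headroom so playback survives throughput dips."""
    budget = measured_kbps * headroom
    for height, kbps in LADDER:
        if kbps <= budget:
            return height, kbps
    return LADDER[-1]  # never refuse to play: serve the lowest rung
```

For example, a connection measured at 4000 kbit/s leaves a 3200 kbit/s budget after headroom, so the 720p / 3000 kbit/s rung is served.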

This MAM platform uses AI features and the latest speech-to-text technology. Could you tell us a little more about this state-of-the-art tech and what capabilities and possibilities it offers?

VIDA harnesses cutting-edge AI models from industry leaders like AWS and OpenAI, specifically the Whisper technology, to offer unparalleled speech-to-text capabilities. These models have advanced remarkably over the past few years, now adeptly handling multiple languages and regional dialects with greater sophistication. Our platform leverages these tools to produce high-fidelity transcripts, providing options to balance speed and accuracy depending on project requirements. We can even isolate speech from background noise, thanks to the sophisticated preprocessing these models facilitate. By continuously integrating the latest AI advancements, VIDA ensures that content is future-proofed, empowering users to reprocess assets with new models as they evolve, extracting even more value over time.

What can you tell us about working with the cloud? What are your providers for storage?

In the realm of cloud-based operations, AWS stands as our principal storage solution, providing a robust and reliable foundation. To ensure maximum resilience and disaster recovery, we also engage with additional providers such as Wasabi, which allows us to maintain triple redundancy for our clips across a diverse array of cloud services. This multi-tiered approach assures long-term security for our stored content. Additionally, we utilize Amazon CloudFront for its global reach, ensuring users anywhere can experience swift and smooth playback. Our infrastructure also includes Aspera's high-speed transfer capabilities, ensuring rapid content delivery that can fully utilize available bandwidth. The elasticity of our system is another hallmark, capable of scaling seamlessly to accommodate any number of users instantly, thanks to AWS Lambda functions. This sophisticated infrastructure represents the pinnacle of modern cloud technology, offering a virtually limitless operational scale.

Getty has a different platform for its clients, with a larger data base. Are you involved in any way in its platform's creation and / or management?

Getty Images operates its own platform, designed for immediate licensing and e-commerce capabilities, which provides clients with readily available content. While I am not directly involved in the management or creation of Getty's platform, our collaboration becomes pivotal when clients require deeper, more specialized content. That's where VIDA comes into play, granting access to those who seek beyond the surface, such as documentary filmmakers needing unique and not widely accessible footage.

What future developments are you preparing?

In our roadmap for 2024 and 2025, we are focusing on deepening AI integration to enhance content context analysis within shows. This means moving beyond keyword searches to more intuitive queries, like identifying images without explicit mentions in the script. AI will also aid in summarizing show content, providing concise synopses that streamline search efforts. Additionally, we're developing an advanced player to facilitate navigation through enriched metadata markers. Expect to see more dynamic, user-centric search functionalities, akin to e-commerce experiences, bringing efficiency to content discovery and tracking. Our goal is to refine user experience, ensuring time spent on VIDA is both productive and enjoyable.



The Magic of Broadcast: Codecs, and more…

A few years ago we published an article on codecs in which we left the question of what the scenario would be like a few years down the road unanswered. Let’s see what the current situation is. By Luis Pavía



It has been more than four years since we put in your hands a similar piece in which we mainly contemplated the aspects related to the methods and workflows that cover everything from capture to broadcast, barely mentioning then the protocols used when said broadcast is made through streaming. Interestingly, that is where we now consider that the most significant line of evolution has occurred throughout all this time. It is not something new, since it already existed at the time, but it is true that it has unmistakably established itself as the form of consumption preferred by a large number of users. And this preference increases as the age of viewers decreases. It seems an undeniable fact that distribution through traditional broadcasting methods stopped growing quite some time ago. And while it is true that it remains steady, because there is a large number of end users who by geographical location or lifestyle continue to prefer this method, most of the growth is occurring through the different streaming modalities available: from OTT (Over The Top, with dedicated receivers and decoders) services to online platforms in their different variants, both free and subscription-based. And what does all this have to do with our headline? Everything. Because, in short, the means of transmission and the arrival to the end customers are what determine the needs that must be sorted out to get the content to the viewers and keep all the pieces of our gear running satisfactorily.



The first feeling we have when facing the review of this content is that, rather than major changes in codecs or containers, evolution is taking place in response to new forms of consumption. Let’s not lose sight of the fact that this change has not been abrupt at any rate. It is simply an evolution that occurs slowly and gradually, but we would also say relentlessly. A very large part of content consumers are now on the move, on screens of all sizes and formats, they are not content to go by the schedule of a programming grid, and


they tend to choose what they want to see, when they choose, and from wherever they please. They are immediacy consumers. But at the same time, many of those consumers have also become content creators through social media, because technology allows for it. There is an idea we return to on a recurring basis so as never to lose sight of the reality we live in: nowadays, nearly anyone with a mobile phone and a data connection has at their fingertips what a few years ago was only available

to the largest broadcasting corporations. An international broadcast a few years ago was only possible with very specific resources and means, which were extremely expensive and not even within reach of all broadcasters. Whereas, today, a simple group video call, or even a broadcast between continents, is a simple touch of a button away on most of the mobiles that any of us carry in our pockets every day. In these cases, codecs are no longer inside files, but traveling through


networks using protocols. And having reached this point, let us return, briefly this time, to a review of the meaning of each and every one of the different elements that we must manage. When we work with files, our decision-making capacity remains much more open, as we have many more elements that we will be responsible for managing. The first of these elements to consider is the container. The container is the physical file, the collection of “zeros” and “ones” that is recorded in some type of media: memory card, disk, pen drive, etc. Its file extension can be the telling sign but, watch out: even if extensions do not change or the type of file is the same, the content can be very different. Thus, we find WAV, AC3, AAC, PCM, WMA, MP3, etc. files in the particular case of audio-only files. Or other types such as AVI, MP4, MOV, MXF, M2TS, FLV… for video files.
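The container/codec distinction can be verified on any real file with `ffprobe` (part of FFmpeg), which reports the container format separately from the codec of each stream. The helper below parses ffprobe's JSON output; the command line shown in the comment is standard, but the sample data here is invented for illustration.

```python
import json

# In practice the JSON would come from running:
#   ffprobe -v error -show_format -show_streams -of json input.mp4
SAMPLE = json.loads("""
{"format": {"format_name": "mov,mp4,m4a,3gp,3g2,mj2"},
 "streams": [{"codec_type": "video", "codec_name": "h264"},
             {"codec_type": "audio", "codec_name": "aac"}]}
""")

def describe(probe: dict) -> str:
    """Summarize container vs codecs: the 'shoe box' vs its contents."""
    container = probe["format"]["format_name"]
    codecs = ", ".join(f"{s['codec_type']}={s['codec_name']}"
                       for s in probe["streams"])
    return f"container: {container} | codecs: {codecs}"
```

Run against a real MP4 file, this makes the point of the article concrete: the same `.mp4` extension can wrap H.264, HEVC or other codecs, and the same codec can live in different containers.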

The important thing to bear in mind is that the container is still comparable to a shoe box. Why this comparison? Because a shoe box can hold our collection of concert tickets, watches, cables, chargers and even shoes. But the most characteristic thing is that it can simultaneously contain different types of objects.

Having seen the container, let's go with the codec. This term comes from co-dec, an abbreviation for enCOder-DECoder. Actually, it is about the language, the method that has been used to compress, sort and write the data that were originally images and sounds, in order to have them stored and then be able to reconstruct those same images and sounds when using them.

In the case of audio files, we could say that practically all the content is only audio, distributed in one or more tracks or channels. Whereas in the case of video files, the content is always multiple, because the same file always contains the data collection of the images, the video sequence itself. But it normally also contains one or more audio tracks associated with that stream, in the same way that it will contain different amounts of metadata. These metadata can be convenient, such as those related to recording references such as date, time, camera, etc. While others are essential, such as the ones that are necessary to properly synchronize audio and video. And there may even be some optional metadata, such as subtitles.

There is a wide range of codecs, each one created for a different purpose: some to reduce the size of the files to the minimum possible, others to maintain the maximum possible quality at the expense of a greater volume of information, and a whole range of different options that occupy the space between both extremes. So if container and codec are such distinct concepts, why is there so much confusion around them? There are three main causes, and we think it is important to recognize and distinguish them. One: on many occasions the container and the codec share the same name. Thus, an MP4 container will frequently encapsulate in the MPEG4



codec. Two: different files of the same container type, such as AVI, can host different types of codecs, such as DivX, Xvid and MJPEG. And three: different containers can use the same type of codec, as is the case with the MOV container, which also uses the MPEG4 codec. As if this were not enough, each type of codec has its own variants, such as the well-known H.264 and H.265, both of which are successive developments within the MPEG family. So as time goes on, why is the trend to get more scattered instead of aiming for unification? Well, as almost always, for both technological and economic reasons. The technological reasons are related to making the most of the evolution of the available technology and everything it entails from the point of view of processing power and efficiency of results, since ultimately


each new codec appears as a response to a specific need when the technology provides the solution, such as storage capacity, data transfer, processing, etc. The economic reasons respond to the need to get a return on investments in research, to attract followers for an ecosystem, or to the generosity of groups that offer the result of their work in an altruistic way, just to give some examples. Without questioning any of them, we understand that they are all legitimate, and each and every one of the possible combinations in the different scenarios responds to different needs. Two examples close to the extremes would be, on the one hand, the need to maintain the highest quality in capture, recording an enormous amount of data on a local storage medium, so that production can be carried out in the best conditions. And, on

the other hand, the need to package in a single file a movie together with its soundtrack in several languages and an even wider collection of subtitles so that its total volume is the minimum possible without detriment to the perceived final quality. And here all the elements that are beyond mere mathematics and data compression algorithms come into play. Because one of the lines of development that are followed in the new compression techniques values the intrinsic abilities of our perception to apply the lowest compression rates in the parts of the image that we perceive in greater detail, and vice versa. The goal of compression is usually aimed at achieving the optimal balance between the highest perceived image quality and the lowest possible data volume. With different levels depending on the use: production or broadcasting.
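The quality-versus-size trade-off described above can be illustrated with a toy quantizer: a coarser step size yields smaller level values to encode, at the cost of a larger reconstruction error. Perceptual codecs exploit exactly this by using coarser steps where the eye is least sensitive. This is a didactic sketch, not a real codec.

```python
def quantize(samples, step):
    """Map samples to integer levels (what would actually get coded)."""
    return [round(s / step) for s in samples]

def dequantize(levels, step):
    """Reconstruct approximate sample values from the stored levels."""
    return [l * step for l in levels]

def max_error(samples, step):
    """Worst-case reconstruction error for a given step size."""
    rec = dequantize(quantize(samples, step), step)
    return max(abs(a - b) for a, b in zip(samples, rec))

data = [12.3, -4.7, 0.2, 88.1, -33.3]
# quantize(data, 2) produces larger level values but small error;
# quantize(data, 16) produces tiny levels but a larger error.
```

The design choice every codec makes is simply where on this curve to sit, per region of the image, per frame, and per use case (production or broadcasting).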


Hence the enormous amount of “flavours and nuances” that we find in our data “salad”. In short, in this sense, the objectives pursued when researching, developing and launching the different formats and codecs that remain among us have hardly changed. We refer those who are curious about this to our issue on the topic four years ago, because all the concepts explained back then in detail and depth, relating to algorithms, compression, sampling, etc., remain completely valid and up to date. We move on now to cover specific containers and codecs, which surely a good part of our readership expects us to do and which, as you will see, have hardly changed since that previous issue. As for containers, this time we are not going to dedicate space to those suitable for capture, such as ProRes


or RAW in their different variants, generally associated with the different camera manufacturers, or others in that field of action, since there are no significant changes with respect to our old content. Little else to add also to our old content regarding containers for distribution, the most common remaining:

MP4 (MPEG-4): maintains its popularity thanks to its versatility and quality, presenting limitations only if we want to manage content of the highest quality.

AVI (Audio Video Interleave): still valid despite being one of the oldest ones, thanks to its compatibility with an extensive range of codecs, but increasingly falling into oblivion because of the large file size it generates.

MOV (Apple's QuickTime Movie): ideal for complex multimedia content with multiple data streams, but with relatively limited support outside the Apple environment.

WebM (from Google): an open standard featuring mainly small file size in order to facilitate rapid transfer over networks.

MKV: stands out for its ability to handle multiple audio, video, menu and subtitle tracks. Always with a distribution mindset, it offers high image quality, but at the expense of a large file size. It is notable for having been chosen at the time for Blu-ray media.



WMV (from Microsoft): being somewhat old and seldom used today, it is another excellent example of confusion, since both the codec and the container use the same name. The container may also contain other codecs and the codec may also be encapsulated in other containers.

We end the container section simply by citing one that is already practically obsolete, FLV, despite its ability to transmit video through old networks with fairly limited bandwidth.

As for codecs, let's see the most outstanding characteristics of the four most widely used at present (2024), both in systems oriented to file storage and in distribution through streaming:

AVC (Advanced Video Coding) or H.264: widely compatible with both devices and browsers. It offers a good balance between data size and quality and is very popular due to its compatibility with a wide variety of devices. Limited by the maximum resolution it is capable of offering.

HEVC (High Efficiency Video Codec) or H.265: a successor to H.264 and highly rated for broadcast, as it offers very good image quality with reduced data volume compared to its predecessor. It supports HDR and resolutions up to 8K. Parallel processing reduces latency, facilitating live broadcasts. The downside: it requires greater processing power.

VP9: an open-source codec, competitor of H.265 and developed by Google with its YouTube platform in mind. It shares with the former its efficiency and compatibility features along with high resolutions up to 8K, offering higher visual quality, adding support for VR (Virtual Reality) in 360º videos and contemplating specific metadata for YouTube information and subtitles.


AV1: an open-source codec developed by the Alliance for Open Media, a consortium made up of Google, Netflix, Amazon, Microsoft, Facebook and Apple. Engineered as a successor to VP9, it improves upon all its features while also incorporating support for HDR. But a time has come in which, while the codec continues to be a fundamental part of our development efforts,


the container is losing its purpose, because it will no longer exist. And how is that? Very simple, as always. In cases of broadcasting through streaming, we do not need a physical file that brings together the content and is played back from a certain physical medium; the content simply flows, like a (water) stream, through our internet connection and arrives directly from the provider or repository to our screen. Here a new term appears: “protocol”. A protocol is the


“language” in which our player, usually software running on a machine, liaises with the content distributor so that this information reaches us and we can enjoy it. Working with network protocols, our options will be restricted by the requirements of the platform that hosts and distributes our content. Unlike content encapsulated in a file, where it is necessary to have the complete file before playback becomes viable, streaming is characterized by the fact that

the file can be reproduced practically as it is being downloaded and does not need to be stored locally.

Actually, on some occasions there is a tendency to temporarily store small fragments that are downloaded in advance and played back with a certain delay. The purpose of this technique is to avoid any potential interruptions that an unstable download speed could bring about. This temporary memory is called a “buffer” and is emptied as the content is played back, to make room for the content that is being downloaded. This whole process is totally seamless to the user.

In this sense, it is important to distinguish different scenarios, among which we can highlight three large groups: on the one hand, the VOD (Video On Demand) platforms, on the other the OTT (Over The Top, set-top systems) services, and finally a group that includes social media, telemedicine, virtual events and a good part of the new emerging industries in the field of creation.
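The buffering behaviour described above (download ahead, drain during playback, refill to absorb unstable throughput) can be sketched as a toy simulation. All the numbers are arbitrary illustrations.

```python
def simulate(download_kbps, content_kbps=2000, start_buffer_s=4.0):
    """Play in one-second ticks; playback stalls whenever the buffer
    holds less than one second of content. Returns stalled seconds."""
    buffered = start_buffer_s  # seconds of content pre-buffered
    stalls = 0
    for rate in download_kbps:            # measured throughput per tick
        buffered += rate / content_kbps   # seconds of content downloaded
        if buffered >= 1.0:
            buffered -= 1.0               # one second played back
        else:
            stalls += 1                   # buffer empty: playback pauses
    return stalls
```

A connection that steadily matches the content bitrate never stalls, while a sustained throughput dip eventually drains the pre-buffer, which is exactly why players download a few seconds ahead.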




Whereas the first two have big names behind them (Google, Amazon, Netflix, etc.), social media have a huge number of content generators behind them and, although quality and purpose are far behind the former, we must recognize that nowadays they are a reality. In the first instance, VOD, we find platforms in which we usually connect to a YouTube- or Vimeo-type content server, using our own computer, tablet or mobile phone, and select the content that we want to watch from the catalog. In this scenario we will find free and subscription content, with or without advertising. There are also large numbers of individual content creators here. In the other instance, OTT, we will be subscribed to a platform that provides us with the device, the “decoder” that, connected to our screen, offers us the contents of our service provider. Although it is also possible to do without the device by directly accessing the contents through an internet browser. In these cases, most of the content comes from well-established creators and is almost always produced with the highest level of quality. In one way or another, in both cases we are dealing with streaming video systems that will use the various protocols available for IP (Internet Protocol) video. This is the method that we



mentioned at the beginning of our article as the form of distribution and consumption that has been gaining followers steadily and unstoppably for years. While, in the case of files, the codecs and containers developed are usually built around conditions regarding data volume and quality, the codecs and protocols developed for streaming contemplate other priorities: always balancing the volume of data to be transmitted against the perceived final quality, they must be able to reach customers without interruptions. In both cases, as technology has been offering increased storage capacity, processing power and transmission speeds, codecs have been able to make the most of these features to provide the best performance. Such as, for example, the ability to offer content with better resolution such as 4K and 8K, better color reproduction as with the BT.2020 or BT.2100 spaces, or better quality perception as with HDR (High Dynamic Range) or HFR (High Frame Rate). Interestingly enough, and it could not be otherwise given that the market is moving towards streaming consumption, we find the most advanced developments in this area. Although it is true that as creators we will always be constrained by the requirements that each broadcaster sets as necessary to disseminate our content through their platforms.
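To see why these codec generations matter, a quick back-of-the-envelope calculation of uncompressed data rates helps. The compressed figure in the comment is a typical order of magnitude for a 4K HEVC stream, not a specification value.

```python
def raw_mbps(width, height, fps, bit_depth=10, samples_per_pixel=1.5):
    """Uncompressed video data rate in Mbit/s.
    samples_per_pixel: 3.0 for 4:4:4 sampling, 1.5 for 4:2:0."""
    return width * height * fps * bit_depth * samples_per_pixel / 1e6

uhd = raw_mbps(3840, 2160, 50)  # 4K at 50 fps, 10-bit, 4:2:0
# ≈ 6220 Mbit/s uncompressed, versus roughly 15-25 Mbit/s for a
# typical HEVC 4K delivery stream: a ratio of several hundred to one.
```

The same arithmetic explains why each jump in resolution, bit depth or frame rate demands a new codec generation rather than just a fatter pipe.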



Let us then go on to list the key features that define the properties of our protocols, as well as a list of the most common ones in order to make an informed choice of the most appropriate one according to our needs.

The first purpose of a protocol will be to break up the content created with a codec into smaller packets, so that they can travel over an IP network such as the internet. These packets are pieced together at the destination to reconstruct the audiovisual content's dataset.

The distinct feature of streaming distribution is that we do not need to have the complete content before starting to play it, as in the case of containers; it can be played as it becomes available. We begin by defining some important features that will determine the usability of a protocol based on need. Starting with latency, this feature tells us the delay time between the broadcast of the original content and the viewing on the destination display. A latency of one millisecond between players in e-games can be a real problem, while a latency of several seconds in a live broadcast of a concert from the other side of the planet is perfectly valid. No less important is security, which is related to the ability to keep the connection between sender and receiver protected at all levels, without allowing the alteration of their own data or the input they provide to devices to access any type of information outside the transmission itself. Stability is also important, in the sense of the ability to recover possible lost packets so that quality losses do not occur, within the parameters of each connection. Adaptability also impacts the usability of the protocol, understood as a quality that allows the features of each connection to be assessed in real time and the data rate to be adapted so that there is no loss of continuity in transmission. And finally, it is necessary to take into account compatibility with different devices, operating systems and browsers, since if this is restricted for any reason, our number of potential viewers can be drastically limited.
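These features can be turned into a simple requirements filter for choosing among streaming protocols. The coarse ratings below are illustrative readings of the protocol descriptions in this article, not benchmark data.

```python
# Coarse ratings (1 = weak, 3 = strong) for some protocols
# discussed in this article; purely illustrative values.
PROTOCOLS = {
    "HLS":    {"latency": 1, "security": 3, "compatibility": 3},
    "WebRTC": {"latency": 3, "security": 2, "compatibility": 2},
    "SRT":    {"latency": 3, "security": 3, "compatibility": 2},
    "RTMP":   {"latency": 3, "security": 2, "compatibility": 2},
}

def candidates(requirements: dict) -> list:
    """Protocols meeting every minimum requirement, best total first."""
    ok = [name for name, feats in PROTOCOLS.items()
          if all(feats[k] >= v for k, v in requirements.items())]
    return sorted(ok, key=lambda n: -sum(PROTOCOLS[n].values()))
```

For instance, demanding maximum browser compatibility leaves HLS, while demanding even moderate latency performance rules it out, which matches the trade-off described in the text.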

As for the most common protocols, together with their main features, we have:

HLS (HTTP Live Streaming): developed by Apple, it is secure and compatible with devices and browsers, high latency being its main drawback.

WebRTC (Web Real Time Communication): aimed at video chat platforms due to its low latency.

SRT (Secure Reliable Transport): an open-source protocol developed by Haivision; it is secure, compatible and features low latency, but is seldom used by members outside the SRT Alliance.

RTMP (Real Time Messaging Protocol): developed by Adobe, it is flexible, adaptable and has low latency, but can suffer interruptions if bandwidth is unstable.

RTSP (Real Time Streaming Protocol): developed to establish and control connections between endpoints, it requires working in combination with other protocols. Low latency, but also poor compatibility for broadcasting with devices and browsers. On the other hand, its great ease in segmenting content and its ability to be combined with transports such as TCP and UDP make it suitable for production environments.

TCP (Transmission Control Protocol): as it prioritizes accuracy and error correction, it is ideal for ensuring the quality of content, at the expense of greater latency.

UDP (User Datagram Protocol): ideal for transmitting data to a large number of clients with minimal latency due to its multicast characteristics. It is very fast and lightweight, although its error control is very basic.

SIP (Session Initiation Protocol): a modern open-standard signaling protocol; it allows distribution to customers of different capabilities, with high customization features that will facilitate its development in different aspects for the future.

In short, and returning to our headline, there have indeed been changes in recent years and, reviewing the panorama with a little perspective, we can be sure that everything will continue to change, not always along the lines we foresee, but surely more quickly than we imagine. We have to continue studying, and meet again after some time to review the situation…


