


One of TM BROADCAST’s defining features has always been its commitment to offering readers an accurate snapshot of the technological landscape of the world’s leading broadcasters, through interviews with their technical decision-makers.
In this context, the ambitious initiative launched by Swedish public broadcaster SVT recently caught our attention. The broadcaster has adopted a software-based audiovisual infrastructure to handle its communication, distribution, and production workflows.
Coinciding with the launch of this new architecture, we published an interview a few months ago with its CTO, Adde Granberg, to explore the project’s key elements and gather his unique market perspective. “As long as broadcast remains separate from IT, it will eventually become obsolete and die—like a dinosaur in the modern world,” he noted at the time.
From there, we decided to dig deeper and take a closer look at the new strategy adopted by SVT’s neighbor, Danish public broadcaster TV 2, which shares several similarities. “It doesn’t matter much whether the servers are in the cloud or on-premises. What really matters is that production is software-defined,” explained Morten Brandstrup, Head of News Technology at TV 2.
Editor in chief
Javier de Martín editor@tmbroadcast.com
Creative Direction
Mercedes González mercedes.gonzalez@tmbroadcast.com
Chief Editor
Daniel Esparza press@tmbroadcast.com
What began as a one-off feature quickly evolved into a special series of reports exploring the Nordic broadcasting ecosystem. This month, we present the third installment: an interview with Iceland’s public broadcaster, RÚV, which is currently undergoing a major technological transformation. Key strategic initiatives include its strong commitment to virtual production and a complete overhaul of its sports graphics package.
Finally, we’re pleased to announce the launch of a new section: Industry Voices. In this space, we will speak with senior leaders from the world’s top manufacturers and companies in the broadcast and media space, taking a unique approach to explore their strategic vision and the trends shaping the industry. We kick off the section with Jon Wilson, recently appointed as the new CEO of Grass Valley.
Key account manager
Patricia Pérez ppt@tmbroadcast.com
Administration
Laura de Diego administration@tmbroadcast.com
Published in Spain ISSN: 2659-5966
TM Broadcast International #141 May 2025
TM Broadcast International is a magazine published by Daró Media Group SL Centro Empresarial Tartessos Calle Pollensa 2, oficina 14 28290 Las Rozas (Madrid), Spain Phone +34 91 640 46 43
INDUSTRY VOICES
Interview with Jon Wilson, new CEO of Grass Valley.
“Customers don’t want to be locked into one vendor. We’re delivering on our promise of an open ecosystem”
BROADCASTERS
Special Nordic Reports: The technological transformation of Iceland’s RÚV
We interview Gísli Berg, Head of Production and Marketing, and Hrefna Lind Ásgeirsdóttir, Director of Digital Strategy, to explore the current technological state of Iceland’s public broadcaster.
Virtual production, from the inside: How leading studios are implementing this tool
We bring together insights from Quite Brilliant, dock10, and Dimension to highlight key takeaways and provide a clearer understanding of how this increasingly popular technology is being used across advertising, television, and cinema.
Generative Artificial Intelligence in the Audiovisual Industry: Towards an Era without Limits
Gen AI, unlike its predecessors, has the ability to create something new. It is an analytical, creative, innovative and expansive technology. When applied to the scope of our market, it is revolutionizing the way we create content... and this is just the beginning.
TECHNOLOGY
Efficiency, flexibility, planning, security and innovation: keys to remote production
A guide to understanding the origins and the main pillars supporting these new models that are revolutionizing audiovisual production.
TEST ZONE
Standalone or in a team:
Development and innovation in professional video and film cameras is in constant motion toward the state of the art. This gives all kinds of users access to capture equipment of a high technical and creative level within a very short period of time.
NEP Europe has announced that it is delivering broadcast solutions for the Eurovision Song Contest 2025. This marks the sixth consecutive year that NEP is the official media services provider for Eurovision, the company said in a statement.
Hosted this year in Basel, Switzerland, from May 13-17, the 2025 Eurovision Song Contest is produced by Swiss national broadcaster SRG SSR in partnership with the European Broadcasting Union (EBU). For the occasion, NEP has provided OB facilities, capture solutions, technical crew and its TFC broadcast orchestration platform, an ST 2110 IP management system.
To meet the demands of this large-scale production, NEP Europe has also mobilized equipment and crew from across its European network, including from Switzerland, Sweden, the Netherlands, Germany, Finland, Belgium, and the UK.
“We’re incredibly proud to continue supporting the Eurovision Song Contest 2025 with our technical expertise and innovation”, said Lise Heidal, President of NEP Europe. “This event is not only a celebration of music and culture, but also a testament to what can be achieved when the best of European broadcast talent comes together. Every year, we push the boundaries of what’s technically
possible in live production and this year is no exception”.
“The deployment of our TFC platform has been a gamechanger, allowing us to streamline signal management and reduce rigging and setup time dramatically. It’s this kind of software-defined flexibility that allows us to deliver complex live events with greater efficiency and reliability than ever before”.
NEP’s Eurovision 2025 media services delivery
› Main outside broadcast (OB) unit: UHD2
› Backup OB unit: UHD24
› Music mix truck: Music One with dual audio
› Technical Operations Center: Built on NEP’s TFC broadcast orchestration platform for EBU services and signal distribution
› Augmented reality services
› 4 EVS servers
› 27 cameras, including six wireless RF cameras
› 40 NEP crew, with 780 working days onsite
› Over 320 monitors around the venue
› Over 60 kilometers of cable
NEP’s TFC broadcast platform
For this year’s production, NEP has used its TFC broadcast orchestration platform,
designed to empower engineering and production teams by making IP technology fast and easy to use. TFC is a software-defined infrastructure management solution that streamlines control and signal routing across complex live environments.
By using the platform, the company aims to combine its traditional baseband infrastructure with the benefits of ST 2110, making the system more scalable, dynamic, and easier to monitor. The platform also enables NEP to continuously monitor the contribution and distribution of all signals involved in the Eurovision production on a dashboard, with notifications whenever parameters fall outside their nominal ranges.
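The dashboard-and-notifications pattern described here can be illustrated with a minimal sketch of threshold-based signal monitoring. All names, parameters, and limits below are hypothetical, not NEP's actual TFC code:

```python
# Minimal sketch of threshold-based signal monitoring, in the spirit of a
# dashboard that alerts when parameters fall out of range. Illustrative only.

# Nominal operating ranges for a few per-signal parameters (hypothetical).
LIMITS = {
    "video_bitrate_mbps": (80, 120),
    "audio_level_dbfs": (-40, -10),
    "packet_loss_pct": (0.0, 0.1),
}

def check_signal(name, readings):
    """Return alert strings for any reading outside its nominal range."""
    alerts = []
    for param, value in readings.items():
        lo, hi = LIMITS[param]
        if not (lo <= value <= hi):
            alerts.append(f"{name}: {param}={value} outside [{lo}, {hi}]")
    return alerts

alerts = check_signal("CAM-07", {
    "video_bitrate_mbps": 95,
    "audio_level_dbfs": -8,   # too hot: should trigger a notification
    "packet_loss_pct": 0.02,
})
print(alerts)
```

In a real orchestration platform the readings would stream in continuously and alerts would feed a dashboard, but the core check is the same per-parameter range comparison.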
Axel Engström, Project Manager Eurovision 2025 for NEP, concluded: “Delivering a production of this scale is only possible thanks to the enormous team effort from everyone involved. From logistics and engineering to onsite operations, it’s been a true cross-border collaboration across our NEP teams in Europe. We also have the privilege of working closely with SRG SSR and the EBU, whose professionalism and partnership are outstanding. Together, we share the passion for delivering world-class television”.
Blackmagic Design has announced that gaming firm Loaded used a workflow built on Blackmagic Design equipment for streamer shroud’s recent “Fragathon”, the gamer’s version of a subathon. With an average concurrent viewership of 11,879, the event amassed more than 27 million views across YouTube, TikTok and X and raised more than $1 million for St. Jude Children’s Research Hospital through a mixture of donations, channel revenue and in-game challenges, Blackmagic Design said in a statement.
“The ‘Fragathon’ is shroud’s version of the popular subathon style event where a creator goes live for 30 days straight”, details Loaded’s VP of Content and executive producer on the project, Ricky Gonzalez. “Shroud is one of the most storied and successful content creators in the gaming space, so we wanted to create an event that married his love for gaming with raising money for an incredible cause”.
A multitude of Blackmagic Design studio cameras and an ATEM Mini Extreme live production switcher were used to help produce the 30-day live stream across multiple sets constructed in shroud’s home.
“We worked closely with shroud to first develop the strategy of the event and then turn his home into a multi set studio for creating content”, Gonzalez explained. “We needed the flexibility of being able
to swap to different sets at any moment, and this meant heaps of cameras, cables and control stations for each studio. The entire event was live, so we had to be prepared for anything to happen”.
The streaming setups included a LAN center, which was effectively a PC cafe in shroud’s living room, with six PC gaming stations each outfitted with a Blackmagic Micro Studio Camera 4K G2 as a webcam and one also mounted high up in the living room to capture a wide angle of the LAN center. “This was a camera/scene that we used a ton throughout the event as the interstitial element between scene changes”, he added.
Additional setups included a board game/podcast studio that used three Blackmagic Studio Camera 6K Pros, a Blackmagic Studio Camera 4K Pro G2 capturing a wide shot of shroud’s stream setup in his personal studio, and the ATEM
Mini Extreme switching between gameplay and live shots in the kitchen and living room studios.
“Blackmagic Design cameras became our top choice for several reasons. We were working in tight spaces but couldn’t sacrifice quality in the end product, so we went with the best option on the market. Having beautiful, accurate monitors on the back of each Studio Camera made for a small footprint with an excellent image”, explained Gonzalez.
“Having cameras that were reliable and that we could flip on and immediately jump into broadcast were essential, and I can’t stress enough how awesome the Studio Cameras were at that. We could frictionlessly get our studios up and running in a matter of seconds”, he added. “Also, the overall ecosystem and having control via PoE or USB-C from across the entire studio was amazing. That goes for both the Studio and Micro Studio Camera setups. We were able to stash
cameras in hard-to-reach places without worrying about running a ton of cables across the house, which was important since we wanted to have a studio-like setup while maintaining the livability of the home”.
Expanding on the Blackmagic Design ecosystem, Gonzalez praised the ATEM Mini Extreme’s functionality, especially when used in conjunction with the cameras.
“Using Blackmagic cameras alongside the ATEM is stunning. You have so much remote control without having to jump onto a camera physically, dialing in your
image from anywhere in the room. ATEMs are extremely user friendly, and with a small crew that needed to wear many hats all at once, we could hand over switching to anyone on the team, knowing the ATEM was simple to use and super reliable”.
“Raising more than $1 million for St. Jude was the biggest highlight for the production and talent working on the project.
Accomplishing that while doing the work you love is really humbling and feels fulfilling to the soul”, concluded Gonzalez. “The beauty of a project like this,
especially in a live environment, is that there are always technical puzzles to solve on the fly, and from a production standpoint, that’s always a fun challenge”.
EVS has announced that it has received an order from Finepoint Broadcast Ltd. to deliver 25 XT-VIA live production servers. With this investment, Finepoint Broadcast is replacing the last of its XT3 servers and standardizing its fleet on EVS’s flagship XT-VIA platform, the company said in a statement.
The XT-VIA server is designed to offer speed, reliability, and scalability for live productions. It integrates with the LSM-VIA replay system and with added tools like XtraMotion, which uses generative AI to deliver visual effects. The EVS servers also deliver high processing power and channel density. In addition, hybrid SDI/IP connectivity enables deployment in any environment, with compliance with SMPTE ST 2110 standards for interoperability.
Giles Bendig, Managing Director at Finepoint Broadcast Ltd., commented: “The EVS XT-VIA remains the most powerful and versatile live production server on the market. With this investment, we’re guaranteeing our customers access to the highest quality and most flexible solutions available— helping them meet the industry’s highest standards and, quite simply, do more”.
Nicolas Bourdon, Chief Commercial Officer at EVS, added:
“We’re proud of our long-term relationship with Finepoint Broadcast and deeply appreciate the continued trust they place in EVS. Their decision to expand their XT-VIA fleet reflects their forward-thinking approach and commitment to providing premium services to broadcasters worldwide”.
Radio, an ally in Spain’s massive blackout: How public broadcaster RTVE reacted to an unprecedented crisis
The director of its technical division, Jesús García, confirms that both television and radio were able to continue operating normally thanks to their backup generators.
Spain and Portugal experienced an unprecedented day this Monday, with a massive blackout that affected the entire country unevenly. In addition to the chaos caused by spending several hours without electricity or internet, there was also the uncertainty of not knowing what was happening, as people could not turn to their usual sources of information. In this context, radio regained a special prominence, becoming, for many, the only connection to the outside world.
The day left scenes of long queues outside bazaars and electronics stores as people searched for transistor radios and batteries to power them, as well as groups of people gathered in the middle of the street around a radio device, taking us back to images from another era we thought were long forgotten. “In these cases, the essential medium, as has been proven, is the radio, which is what everyone quickly turns to for information,” explains Jesús García Romero, head of the technical division at public broadcaster RTVE, to TM BROADCAST.
Jesús García confirms to this magazine that both television and radio were able to operate technically with complete
normality thanks to their backup generators. “It didn’t affect us; we are prepared to act in this kind of adversity. All essential services react immediately, and the news services start looking for sources of information and pass on the authorities’ recommendations to the public, which is our main mission in these cases.”
“We have tanks with nearly 15,000 liters of diesel fuel, which would have allowed us to last two or three days without a problem,” details RTVE’s technical director, who explains that they did shut down non-essential equipment, in order not to push the generators to their limits, given that it was unclear when power would be restored. The corporation has three generators to supply each of its production centers in Madrid.
Another issue, of course, was the clear disruption the blackout caused among staff, with employees arriving late for the afternoon shift and others unable to return home for several hours due to the massive traffic jams that paralyzed Madrid for much of the day.
“Some staff members, especially those in support units, were asked to stay until someone from the afternoon shift could arrive, provided they didn’t have any family issues, in order to carry out shift changes and ensure continuity,” García explains.
“We tried to let those who had family issues leave as soon as possible, and for the others, we suggested they stay here too to avoid unnecessary travel and not to clog up the roads, which was what the authorities were recommending,” continues Jesús García, noting that the public broadcaster had electricity at all times and could stay fully informed.
They also made a large number of emergency rations available to staff, enough to provide up to 500 meals in case the kitchen services failed.
Program suspensions
Programs were suspended, mainly in the Prado del Rey studios (Madrid), because many professionals had to return home to take care of children, family members, or other emergencies triggered by the crisis. And also because most viewers were unable to watch. Instead, all broadcasting was shifted to Torrespaña [its production center in Madrid dedicated to news broadcasting], with a special news service that ran throughout the afternoon and merged at night with the program ‘La noche en 24 horas’ hosted by Xavier Fortes.
“All programming that was not essential was canceled,” summarizes RTVE’s technical director, noting that, as mentioned, people could not
watch those programs anyway, and those who were gradually regaining electricity turned to the public broadcaster in search of updated information about what was happening.
Coincidentally, the day also coincided with a visit to RTVE from Swiss television, with planned meetings and a tour of Torrespaña, according to Jesús García. Naturally, the entire agenda had to be canceled. As a side note, the RTVE executive recounts that they had to lend cash to the visiting delegation because they only carried credit cards. “An engineering colleague was able to give them a lift to the hotel,” he explains.
Ultimately, the disruption caused by the massive blackout that shook Spain was, for RTVE, more human than technical.
“We managed to survive this time — now we just need an alien attack,” jokes Jesús García as a closing remark.
Fox Corporation has introduced FOX One, its wholly-owned direct-to-consumer streaming service. FOX One is designed to bring all of FOX’s News, Sports and Entertainment branded content together in one streaming platform, the corporation said in a statement.
For the first time, cord-cutters and cord-nevers will have live streaming and on-demand access to the full portfolio of FOX brands including FOX News, FOX Business, FOX Weather, FOX Sports, FS1, FS2, BTN, FOX Deportes, FOX Local Stations and the FOX network as well as the option to bundle FOX Nation within one platform.
“We know that FOX has the most loyal and engaged audiences in the industry, and FOX One is designed to reach outside of the pay-TV bundle and deliver all the best FOX branded content directly to viewers wherever they are”, explained Pete Distad, CEO, FOX One. “We have built this platform from the ground up to allow consumers to enjoy and engage with our programming in new and exciting ways, leveraging cutting edge technology to enhance the user experience across the platform”.
Additionally, FOX One will feature personalization technology designed to adapt to viewing preferences while integrating live and video-on-demand content in a cohesive experience.
FOX One is on track to launch in the fall ahead of the NFL and College Football seasons.
3 Screen Solutions (3SS) has announced that One Hungary has launched an upgrade to its OneTV service, with the objective of delivering a new viewing experience on existing Linux set-top boxes (STBs) from Sagemcom to households across the country. As part of the initiative, users won’t need to replace their hardware, the company said in a statement.
“Our priority is to deliver real value to customers while reducing environmental impact”, explained Tamás Bányai, CEO of One Hungary. “This upgrade shows that innovation doesn’t
have to come at the expense of sustainability. By revitalizing existing devices, we’re helping customers enjoy the best of both worlds – modern TV services and a reduced carbon footprint”.
In collaboration with 3SS, One Hungary introduced a new user interface built on the 3Ready product platform, aiming to offer viewers enhanced navigation, personalized recommendations, and streamlined content discovery.
Additionally, the upgraded OneTV platform integrates with streaming services including Netflix, Prime Video, and YouTube, and supports deep-linking to regional content providers such as RTL+ and HBO Max.
Kai-Christian Borchers, Managing Director of 3SS, said: “We are delighted to have migrated One Hungary’s customers onto the new OneTV service platform. We offer huge congratulations to One Hungary for accomplishing this project, which demonstrates commitment to innovation and to providing the best possible service to customers, new and old”.
3 Screen Solutions (3SS) has also announced that Vodafone Group has selected 3SS and its 3Ready product platform to power the next-generation Vodafone TV user experience (UX) across all Vodafone TV markets, the company said in a statement. Following the RFP process, Vodafone Group chose 3SS to lead the new programme with the objective of delivering a world-class TV UX based on 3Ready, including the Product Framework and Control Center. Vodafone’s new TV platform will be deployed across multiple countries and devices.
Vodafone TV is available in Germany, Portugal, Romania, Albania, Czechia, Greece and Ireland.
Vodafone Group will use 3Ready’s user experience and technology foundation to deliver first-class targeted experiences for Vodafone TV customers.
3Ready is designed to give aggregators like Vodafone the ability to curate content across all sources, including third-party providers, and to deliver a personalized experience targeted to different user segments across all screens.
The media landscape is undergoing a major transformation. Traditional mass media, including television, continues to evolve toward new roles. Amid this shift, creators are thriving, and content from Japan is breaking through media boundaries and national borders, making a powerful leap onto the global stage.
As one of the largest events in Asia for the media and entertainment industry, Inter BEE will be held over three days from November 19 to 21 at Makuhari Messe in Japan. While it focuses on attracting broadcasting professionals, it also brings together people involved in content creation across a wide range of industries, the organizers said in a statement.
The event aims to keep pace with the evolution of technology. In addition to broadcasters, people involved in content production from a wide variety of industries, and in recent years young creators active on YouTube and other media, have taken part to discuss the unseen future of content.
The call for exhibitors at Inter BEE 2025 has just opened.
Inter BEE 2025 is comprised of four categories covering all the different fields of the media and entertainment industry: Entertainment/Lighting, Video Production/Broadcasting Equipment, Professional Audio, and Media Solutions. The diverse fields of video, broadcasting, film, sound, lighting, live performance, internet, and facilities have evolved in an intertwined manner, influencing each other while developing individually.
All kinds of technologies, products, and solutions for creating, connecting, and experiencing content will be gathered in Makuhari.
Another highlight of Inter BEE is its lineup of Special Events, each focused on a specific theme, with the intention of offering deeper engagement with emerging topics and trends. In addition to the exhibition, Inter BEE also serves as a conference platform where practitioners, researchers, and journalists from across the industry share practical insights and cutting-edge knowledge.
The conference sessions are designed to provide space to form partnerships and to serve as opportunities to exchange knowledge, tackle shared challenges, and collectively envision the future of content together with the audience.
7fivefive is enabling the BBC Studios Global Media & Streaming team to scale its virtual infrastructure. As part of this collaboration, 7fivefive is delivering the remote editing backbone that supports postproduction and content-based workflows. To help realise the global media company’s cloud-first vision, 7fivefive is also working in tandem with Amazon Web Services (AWS), the company said in a statement.
The partnership between 7fivefive and BBC Studios has guided a digital transformation that began in 2020. 7fivefive’s cloud now underpins the Global Media & Streaming team’s post-production process, enabling creative teams to manage resources efficiently whilst collaborating across different time zones.
The system is designed to scale active resources, with automated controls that enhance productivity. Virtual workstations can be provisioned and optimised based on realtime requirements, empowering users to self-serve without compromising on performance. Different teams receive resources that are suited to specific creative tasks, across editing, graphics, colour grading, compliance, and branding.
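The provisioning model described above, where different teams receive resources suited to specific creative tasks and capacity scales with real-time demand, can be sketched roughly as follows. The profiles, specs, and function names are illustrative assumptions, not 7fivefive's actual implementation:

```python
# Illustrative sketch of task-based virtual workstation provisioning.
# All profile names and resource figures are hypothetical.

# Resource profiles tailored to specific creative tasks.
PROFILES = {
    "editing":    {"vcpus": 16, "ram_gb": 64,  "gpu": "mid"},
    "grading":    {"vcpus": 32, "ram_gb": 128, "gpu": "high"},
    "compliance": {"vcpus": 8,  "ram_gb": 32,  "gpu": "none"},
}

def provision(task, active_users):
    """Scale a task's profile to current demand; idle capacity is released."""
    if active_users == 0:
        return None  # deprovision when nobody is working
    spec = dict(PROFILES[task])
    spec["instances"] = active_users
    return spec

print(provision("grading", 3))
print(provision("editing", 0))
```

The design point is that spend tracks demand: a colour-grading team gets GPU-heavy workstations only while graders are active, and the same control loop can feed the predictive cost metrics mentioned above.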
The latest phase of the project harnesses the capabilities of Adobe and AWS within a unified cloud environment. Regional departments can transition
between projects, access secure content from anywhere, manage allocation, and act on data-driven insights. Predictive cost metrics and automated budget management are based on detailed analytics.
Tim Burton, Managing Director of 7fivefive, said: “We are extremely proud to continue our longstanding partnership with BBC Studios, providing the technology, insights, and strategic support that helps to power its cloud-first approach to post-production infrastructure. These workflows enable creative teams to focus on the task at hand, whilst the complexity of integrated cloud resources is handled via an intuitive interface. Our shared commitment to innovation ensures that they are wellequipped for the challenges of tomorrow’s media landscape”.
Emma Ellis, Creative Services Technology Manager, Global Technology at BBC Studios, added: “7fivefive’s cloud expertise and practical insight has been key in helping us modernise
our creative technology stack and post-production workflows, allowing us to implement meaningful optimisations with confidence and efficiency. These enhancements have increased flexibility across our creative teams, allowing seamless access to remote editing resources and global media content from anywhere. This shift has not only introduced additional operational efficiencies, but has ensured we can continue delivering high-quality, engaging content to our audiences worldwide”.
Adam Jakubowski, VP of Technical Operations, Global Technology at BBC Studios, concluded: “The collaboration with 7fivefive has played a key role in our cloud trajectory. We have scaled virtual post-production workflows over the past 5 years, and adaptive resource allocation continues to provide significant cost and workflow efficiency gains. The flexibility of this solution means we can respond to evolving business and industry needs and enhance the autonomy and agility of our creative teams”.
Arri has announced that it has entered into a definitive agreement to sell its subsidiary Claypaky to new owner EK Inc. The objective of the change is to provide a new foundation for Claypaky, the company said in a statement.
“This decision is part of our strategic realignment as we focus more strongly on our core business”, explains Chris Richter, Managing Director of Arri. “Clearly recognizing Claypaky’s potential, it was important to us to find a new owner who pursues next level growth based on a deep understanding of the market and a long-term strategic vision—for both Claypaky and its customers”.
“Our collaboration with ARRI has been a valuable and rewarding journey”, remarks Marcus Graser, CEO of Claypaky. “We part ways with sincere appreciation—especially for the insights gained from Arri’s deep expertise in the motion picture industry, which will continue to influence our path forward. At the same time, we look forward to the opportunities new ownership brings. We are certain to benefit from EK’s strong development, manufacturing, and supply chain capabilities, which could open new doors in terms of new product development, scalability, and global market access”.
Raymond Chen, CEO of EK Inc., adds: “The acquisition of Claypaky, along with its theatrical brand ADB, is a strategic investment that significantly enriches our portfolio. Claypaky, a nearly 50-year-old brand rooted in Italian design and globally recognized for
its innovation, therefore strengthens our global presence, especially in the European market. This acquisition marks the next step in our evolution. With our combined manufacturing expertise, innovative spirit, and expanded industrial backbone, we are well-positioned to continue shaping the future of entertainment lighting worldwide”.
The acquisition highlights all three parties’ commitment to their primary markets. While details of the integration between Claypaky and EK Inc. are still to be defined, Claypaky will retain its headquarters and core competences in R&D and Operations in Italy. The transaction is expected to close in the coming months, subject to customary regulatory approvals.
LiveU has signed a definitive agreement to acquire Actus Digital’s business and technology, with the objective of enhancing its recording, monitoring and Artificial Intelligence (AI) capabilities and helping customers improve operational effectiveness and create more value from video, LiveU said in a statement.
Actus Digital’s platform, used by broadcasters and media agencies worldwide, brings content analysis, Quality of Experience (QoE) monitoring solutions, and AI-enabled tools that complement the LiveU EcoSystem, delivering workflow simplicity and operational efficiency. The LiveU EcoSystem is a set of IP-video solutions spanning workflows across the entire video production chain, including multi-cam 5G contribution encoders and cloud-based ingest, production, and IP-video distribution solutions.
“We are delighted to be welcoming Actus Digital to LiveU”, confirms Samuel Wasserman, CEO and Co-founder of LiveU.
“Their exceptional, market-leading monitoring platform and AI expertise, combined with our shared commitment to reliability, quality, and customer service, will significantly strengthen our entire offering in line with our overall strategy”.
“The LiveU EcoSystem is IP-Positive, meaning it accepts all the major video protocols for maximum interoperability and ease of use for our customers.
With Actus Digital, we can now give our customers the valuable ability to monitor and gain insights from all these different feeds in a single solution, so that they can always be sure of their LiveU experience”, he continues.
Sima Levy, President and Founder of Actus Digital, adds: “Joining LiveU provides our customers with easy access to a comprehensive, high-reliability IP-video EcoSystem, backed by world-class support. Combining the LiveU EcoSystem with the Actus Intelligent Monitoring Platform delivers powerful new capabilities to our customers across radio, TV, OTT and the internet. In fact, more and more of our customers have asked for integration with LiveU and we’re excited to be able to combine our resources and expertise to serve them better”.
Riedel Communications has announced the opening of a new office in Hong Kong. The expansion aims to enhance the company’s presence in the Asia-Pacific region and provide support to its customer base. To mark the occasion, Riedel hosted an All-Asia Partner Conference in March at the new office. The event brought together regional partners to discuss market trends, product developments, and partnership strategies – fostering collaboration across the region, while providing hands-on access to Riedel’s latest solutions, the company said in a statement.
“We are excited to further solidify our commitment to the Asia-Pacific market with the opening of our new office in Hong Kong”, says Guillaume Mauffrey, Director of Riedel Communications Asia.
“This strategic move will allow us to better serve our customers, provide localized support, and drive growth through innovative solutions that meet the evolving demands of the market”.
Located in the heart of Hong Kong, Riedel’s new office is intended to serve as a pivotal hub for sales, training, and technical support, catering to the diverse needs of the media, sports, and entertainment industries across the region.
The new Hong Kong office will also provide training programs designed to help customers maximize the potential of the company’s solutions, including the Artist matrix intercom system, Bolero wireless intercom, SmartPanels, MediorNet real-time media network, and SimplyLive video production platform.
NVP has announced the appointment of Ismael Marcer as General Manager of the company in Spain, its new subsidiary based in Madrid, the company said in a statement.
Marcer has over 20 years of experience in broadcast operations and technology, including leadership roles at Mediapro and Unitecnic. The appointment coincides with the recent announcement from HBS naming NVP as its new partner to handle the media production and distribution of LaLiga for the next five years, following the Spanish football league’s decision to drop Mediapro.
The new General Manager will apply his expertise to advance the company’s growth strategy and deliver broadcast solutions across new markets.
We’re launching a new section: Industry Voices. This series will feature interviews with key executives from leading manufacturers and companies across the global audiovisual market. Our aim is to explore their strategic vision for both their businesses and the industry.
We’ll also take the opportunity to discuss the latest trends and the most impactful news shaping the sector.
“Customers don’t want to be locked into one vendor. We’re delivering on our promise of an open ecosystem”
We speak with the company’s new CEO to explore the main strategic priorities for this new chapter and to get his take on some of the key trends and decisions currently shaping the industry—such as the shift to software, U.S. trade policy, and the recent decision by Spain’s LaLiga to end its collaboration with Mediapro
By Daniel Esparza
A leadership change in a company always draws close attention. Grass Valley officially announced the appointment of Jon Wilson as the new CEO of the Canadian company on April 5th, making headlines across the industry at the very start of NAB.
The timing of the announcement clearly reveals that this move had been in the works for some time. Louis Hernandez Jr., owner and Executive Chairman of Grass Valley, has played a key role in the process and will continue to be a central figure in shaping the company’s long-term strategy.
Jon Wilson joined the company in late 2023, initiating a transition process that culminated in his appointment as CEO. Until now, he had been serving as President and Chief Operating Officer. “Unlike many CEO transitions that happen under pressure, this one comes from a position of strength — and it’s been a very smooth process,” Jon Wilson tells TM BROADCAST. “We actually announced it internally last September, so our team was already well aware, and we had also shared it with many of our customers. NAB was more of a formal introduction to the broader industry.”
With this new chapter beginning at Grass Valley, we speak with Jon Wilson about the core strategic lines that will define his leadership. We also took the opportunity to get his perspective on some of the major forces shaping the future of the sector, including the industry’s shift toward software, U.S. commercial policy, and the recent decision by Spain’s LaLiga to cut ties with Mediapro.
First, I’d like to hear your first impressions as the new CEO of Grass Valley. How are you feeling in your new role?
It’s been really great. Stepping into the role formally has been fantastic. One of the advantages I have is that, unlike many CEO transitions that happen under pressure, this one comes from a position of strength — and it’s been a very smooth process. We actually announced it internally last September, so our team was already well aware, and we had also shared it with many of our customers. NAB was more of a formal introduction to the broader industry.
Now that it’s official and I’m fully in the role, I feel
re-energized. I’m excited to drive the business forward. What’s also encouraging is being able to reassure both customers and employees that there’s no major shake-up on the horizon. It’s really about building on the momentum of the past 12 to 18 months and continuing to expand the initiatives already in motion.
“If we’re talking about industry-leading innovation but can’t help our customers deploy it successfully, then even the best tech won’t matter”
What are these initiatives and strategic goals for the year ahead?
There are a few areas we’re focusing on. First, industry-leading innovation — which has been a pillar for Grass Valley for over 65 years. Second, operational excellence. Now that we’ve emerged from a transformation, we’re really focusing again on how we drive the business the right way.
Strategic partnerships are also very important. We’re looking at how to create win-win scenarios with our partners around the world — both through AMPP, by bringing more partners into the ecosystem, and also by reconnecting with longtime Grass Valley partners to ensure we’re aligned on how to win together.
And finally, it all revolves around customer success. Again, if we’re talking about industry-leading innovation but can’t help our customers deploy it successfully, then even the best tech won’t matter. That’s why we’re doubling down on customer success this year.
I also want to talk about the main challenges you’re currently facing as a company. One that comes to mind is the global uncertainty surrounding the new U.S. tariffs. How are these trade decisions affecting you, and what other major challenges are you dealing with?
First, I tend to view most challenges as opportunities — that’s just how I operate. That said, there are certainly challenges out there. For example, tariffs create more macro uncertainty. At the same time, we benefit from the fact that all our manufacturing today is based in Montreal, Canada, so our products currently fall outside the U.S.-Canada tariffs. That gives us a potential advantage.
We’re also being very transparent with customers about where we stand and what’s changing, which builds trust and appreciation. More broadly, when I look at the markets we serve, I see a real opportunity for Grass Valley to step back into a leadership position. We’ve always been a leader, but during our transformation over the past few years, we may have unintentionally stepped away from certain markets. Despite the
uncertainty and variable growth rates, I believe there’s still significant room for us to grow.
“Sports rights fees keep increasing, talent fees are going up, and yet advertising dollars are shifting across platforms. That puts enormous pressure on our customers to become more efficient and more effective”
I’d also like to dive into your approach to global sports production. What’s your vision on how this segment is evolving, and how are you helping to drive that change?
It’s evolving very dynamically, which is incredibly exciting. Across the board, our customers are under increasing pressure to reduce costs and do more with less — and that’s not unique to our industry. But in media and entertainment, that pressure is intensifying. Sports rights fees keep increasing, talent fees are going up, and yet advertising dollars are shifting across platforms. That puts enormous pressure on our customers to become more efficient and more effective.
Technology is one of the ways to achieve that. Remote production, for example, is accelerating — COVID was a big accelerator in testing what’s possible, and that trend has only continued. Now, we’re seeing 5G come into
play, and customers are exploring how to leverage it. They’re also looking to simplify workflows so they can scale efficiently and operate profitably.
Grass Valley is very well positioned on these themes. Our product roadmap is closely aligned with where our customers are headed, given the time our senior team, myself included, spends directly in the market with customers.
“We benefit from the fact that all our manufacturing today is based in Montreal, Canada, so our products currently fall outside the U.S.-Canada tariffs. That gives us a potential advantage”
Would you say cloud technology is also contributing to this increased efficiency?
Absolutely — whether it’s cloud-based, on-prem, or hybrid. Everyone’s talking about cloud, hybrid, remote production, but the difference for us is
that we’ve been investing in cloud-based technology — specifically AMPP — for over eight years now. It’s a commercially hardened and widely deployed product, and we’re now seeing customers use it across Tier 1 deployments. That momentum is only going to grow as more clients recognize that we have a proven, reliable solution that drives meaningful commercial benefits at scale.
Sticking with sports: there have been shifts in Spain recently, with the surprising decision of the national football league to choose HBS over Mediapro as a provider. How are you dealing with these kinds of changes?
First, I’ll say that Mediapro is an incredible long-term partner for Grass Valley, and we have a tremendous amount of respect for that relationship.
As a technology vendor, it’s our responsibility to support all of our customers globally.
We hope to have the opportunity to work with HBS in their new deployment — we’ll see if that happens. These changes occur over time; different providers win different contracts.
At Grass Valley, we’re an enabling technology company — we’re not here to compete with managed or broadcast service providers. We’re here to help them succeed. And as such we want to continue supporting Mediapro, just as we do HBS, NEP, and others.
On the changes in Spain’s LaLiga: “As a technology vendor, it’s our responsibility to support all of our customers globally. We hope to have the opportunity to work with HBS in their new deployment”
Related to this, we’ve recently learned that HBS has partnered with NVP. What’s the scope of your partnership with this company?
We have a strong partnership with NVP. We’ve collaborated on some very interesting projects in Italy, alongside our partner Video Progetti, particularly around the Italian soccer leagues — across cameras, switchers and infrastructure. More recently, we’ve also worked together on AMPP-based production projects for Lega Pro. These are exciting collaborations that we’ve built together over time.
Staying with sports production, what do you see as the key areas of improvement your broadcast clients are requesting?
The recurring theme is cost savings and operational efficiency. Historically, many customers have tried to pick the best product for every individual application or workflow — but they’re realizing that’s not scalable. Now, they’re looking for partners who can help them consolidate their technology stacks and drive end-to-end efficiency. That often means a platform approach. For example, AMPP positions us strongly to deliver on that.
Customers aren’t just asking for 5–10% savings. They want, and need, meaningful reductions. We can deliver those, not only through AMPP, but also through our broader ecosystem — from cameras and switchers to infrastructure and interoperability with third-party products. That last point is critical — customers don’t want to be locked into a single vendor. Flexibility is key. We’re committed to that at Grass Valley, through initiatives like MXL with the EBU and others. We’re delivering on our promise of an open ecosystem.
I’d also like to hear your thoughts on some of the key trends shaping the industry. First, the convergence between the broadcast and IT worlds is a big talking point. What’s your view on this?
I think it’s great for our industry. There’s a lot to learn from IT — both in terms of mindset and technical expertise. At the same time, it’s important to recognize that broadcast
is fundamentally different from traditional IT infrastructure.
What I’m hopeful for — and what I see happening more and more — is a convergence of the best minds from both industries, working together to figure out what the broadcast industry really needs. That’s our focus at Grass Valley. I don’t believe in swinging entirely to one side. We need to embrace the best of both worlds to move forward.
“Now is a really exciting time at Grass Valley as we’re operating with a renewed pace and purpose”
Along those lines, the industry is shifting from hardware to software-based infrastructure. How do you see the future of software-based workflows in audiovisual production?
It’s only accelerating — especially at the high end of the market, where customers have the resources to invest
in transforming their infrastructure. Our conversations with these clients are becoming more frequent and more focused. Today, even for top-tier productions, some workflows aren’t quite ready yet to handle the complexity and scalability entirely through software. But in the next one to three years, we expect most of those gaps will be addressed. Computing power continues to grow rapidly, and that will enable much more flexible production models.
Another hot topic is artificial intelligence. How are you integrating AI into your product portfolio?
We’re integrating it thoughtfully. It’s becoming an increasingly important conversation within Grass Valley — everyone wants to know what we’re doing with AI. We’re focusing especially on our asset management products, like Framelight X on AMPP. There’s a huge opportunity to use AI in ways that are practical and truly beneficial for our customers.
A good example is metadata tagging — identifying objects, people, and so on. That’s one of the early applications we’re already implementing, and I expect that use of AI will only expand moving forward.
To finish, I’d like to ask about the future. Are there any innovations, developments, or projects you’re currently working on that you’d like to share?
Yes — lots of exciting things. At NAB, we showcased several new developments. One of the highlights is our new filmic-look camera, the LDX 180, which will be shipping later this year. The market response has been overwhelmingly positive. We’ve also introduced a new mid-tier switcher, the VXP. What’s great about both the LDX 180 and the VXP is that they integrate seamlessly into the Grass Valley ecosystem. They’re not standalone products with different interfaces — they’re part of our unified approach to technology development.
On the AMPP side, we continue to refine our applications for commercial deployment, and some of the world’s largest customers are already adopting them. That’s where our focus will be in the coming months.
Looking ahead, what excites me most is our ability to move quickly with innovation while staying laser-focused on delivering the highest quality products. At the end of the day, customer success is what everything revolves around.
“Some workflows aren’t quite ready yet to handle the complexity and scalability entirely through software. But in the next one to three years, we expect most of those gaps will be addressed”
Is there anything else you’d like to add to close the interview?
Just over a month into the new role, and I remain incredibly energized about the opportunity in front of us. Grass Valley is in a
fantastic position. We’ve been an industry leader for over 65 years and now is a really exciting time at GV as we’re operating with a renewed pace and purpose to lead the media revolution.
Personally, I’m focused — and we as a team are focused — on building
the culture we want at GV, aligning everyone on where we are and where we’re headed. We’re already seeing the impact of that, and I expect the momentum to keep growing. I hope our customers are seeing it too — a new Grass Valley they maybe haven’t seen in recent years.
We interview Gísli Berg, Head of Production and Marketing, and Hrefna Lind Ásgeirsdóttir, Director of Digital Strategy, to explore the current technological state of Iceland’s public broadcaster
It’s interesting to see how the specific social and geographical circumstances of each country influence the technological developments of their television networks. In Iceland’s case, for example, the population has a particular interest in weather-related news, which has led the public broadcaster, RÚV, to improve its coverage in this area through a new graphics tool.
After exploring the technological landscape of TV 2 Denmark and Sweden’s SVT, it’s now time to take a closer look at Iceland’s public broadcaster, as part of our goal to offer a comprehensive overview of the current television ecosystem in the Nordic countries.
Helping us in this task are Gísli Berg, Head of Production and Marketing, and Hrefna Lind Ásgeirsdóttir, Director of Digital Strategy. As they explain, RÚV is undergoing a transformation that particularly affects its distribution strategy:
“As we move our focus and investment toward channels that show stronger audience engagement, several important changes are already underway.”
Throughout the interview, we explore their main projects and innovations, and we analyse how the broadcaster is adopting some of the industry’s key technologies.
To start off, what has been RÚV’s recent evolution from a technical standpoint?
RÚV is going through a significant shift in how we approach distribution. As we move our focus and investment toward channels that show stronger audience engagement, several important changes are already underway.
We’ve switched off longwave distribution, and we have plans in place to turn off satellite distribution this year. We’re also developing a clear plan for transitioning from terrestrial television to digital distribution.
Our core focus is now on digital platforms, which offer greater flexibility and better alignment with how audiences consume content today. FM will remain part of our strategy, especially in supporting national security and emergency communications.
Have you implemented any recent technological innovations or upgrades that you would like to highlight?
We are always open to adopting new technologies that support our digital journey, and we’ve made some exciting advancements recently. One of the highlights has been the adoption of Pixotope’s virtual production technology. It’s allowed us to streamline our broadcasting process and make it more eco-friendly by reducing the need for physical sets. So far, we’ve produced six new series with this technology, and our users have really enjoyed the experience. We’re excited to continue exploring what’s next.
We’ve also focused on enhancing our weather news coverage, which is especially important in Iceland due to the rapidly changing weather. Icelanders are very weather-conscious, and we want to make sure they have access to reliable, up-to-date information. To that end, we’ve recently deployed a new weather section on our website where users can view both short-term and long-term forecasts, as well as weather news, all in one place.
On top of that, we integrated the Raiden graphic weather system from Ross, giving us greater flexibility in how we deliver weather updates on television.
“One of the highlights has been the adoption of Pixotope’s virtual production technology. It’s allowed us to streamline our broadcasting process and make it more eco-friendly”
Are there any interesting projects or upgrades in the near future that you can share with us?
One exciting project we’re focused on right now is replacing our archive system and associated media storage. It’s a major upgrade that will help us better manage and preserve our media assets.
A central goal of this project is to make the archive searchable and accessible to the public, so that Icelanders can engage with the media we’ve produced as part of our shared cultural heritage. It’s an important step in opening that history and ensuring it remains available for future generations.
In parallel, we’re mapping out our legacy systems and planning their replacement. This is part of a broader effort to modernize our technical infrastructure, enabling us to deliver faster, more reliable digital services.
What is your current technological landscape when it comes to the transition from SDI to IP on the production side?
Currently, we’re evaluating the best approach for transitioning from SDI to IP in our production workflows. While we’re not yet ready to implement the upgrade, we’re carefully assessing the available options to ensure we choose a solution that aligns with our needs and long-term goals. We are now looking at how other similar broadcasters are transitioning to IP; for the near future, production will stay in SDI.
In which resolution are you currently broadcasting? In this sense, do you have in mind any plans to upgrade the quality of your productions?
We currently broadcast in HD, though some of our content is already produced in 4K.
AI: “In production we have been using image and video generators; we have been taking small steps and setting guidelines for our staff”
Cloud: “We use cloud-based tools for our web infrastructure and office systems, and they’ve proven essential in handling peak traffic moments, such as during volcanic eruptions”
OTT: “Big streaming services have raised the bar for performance and usability, and we want to meet those expectations as we put more focus on digital distribution”
5G: “We’ve started using 5G for some of our news reporting, mainly through portable IP-based tech that lets us go live quickly from the field”
IP: “We are now looking at how other similar broadcasters are transitioning to IP; for the near future, production will stay in SDI”
Resolution: “While we’re not planning a full resolution upgrade in the near future, our next step is moving from interlaced to progressive format within HD”
While we’re not planning a full resolution upgrade in the near future, our next step is moving from interlaced to progressive format within HD.
In news or sports production, graphics are a key element. In this regard, how do you approach this area, and to what extent have you tested immersive technologies or augmented reality?
Since the beginning of last year, we have completely changed how we produce and use graphics in our sports production. We have upgraded our graphics systems from CasparCG to Xpression from Ross Video, and all our studios, as well as our OB production, now run on Xpression graphics systems. With these upgrades we have improved our production workflow with more automated solutions for sports coverage. We have also started using Piero, also from Ross Video, for live sports analysis, which is more engaging for our viewers and has been very successful in our productions. And as mentioned, we started using a Virtual Reality solution from Pixotope, which has been used for sports and other productions to deliver more entertaining and engaging features for our viewers. Together, these three graphics solutions have really raised our production value, and the audience loves it.
I’d like to know a bit more about the key elements of your studios and facilities. Have you recently acquired any significant equipment? Which manufacturers do you usually rely on?
The VR production has been a learning curve for us, but we now feel that we have found the right balance, and these changes are part of our workflow. We went live with our VR studio in January last year, which was no easy feat, since that coverage turned out to be last year’s highest-rated sports coverage. We had a lot of help and support from Pixotope, and the process has been very valuable: we have been improving the quality, we are gaining confidence with this new technology, and the audience is more engaged.
“One exciting project we’re focused on right now is replacing our archive system and associated media storage. It’s a major upgrade that will help us better manage and preserve our media assets”
Among the events RÚV has recently covered, could you share a specific success story you’re particularly proud of — one that stood out due to the challenges it involved?
One success story we’re really proud of is how we handled coverage of the eight volcanic eruptions in Iceland since 2023. These events really put our systems to the test, and we’ve had to make some serious upgrades to our infrastructure and processes to stay agile and get accurate news out quickly—across all our channels, whether that’s for digital radio and TV, our website, or FM broadcasts.
During these eruptions, we’ve seen our web traffic spike tenfold in just a few minutes. Thankfully, our cloud-based infrastructure has been key in helping us scale up fast, making sure our users have uninterrupted access to critical information.
We have live camera feeds running at active volcanic sites around the clock. When an eruption hits, we can instantly switch to those cameras and broadcast live footage within minutes. Plus, our reporters are equipped with portable IP-based transmission technology, so they can report directly from the eruption sites, even if they’re in remote or difficult locations.
Along the way, we’ve strengthened our disaster recovery plans to reflect
this new reality, making sure we’re always ready for fast, flexible coverage whenever something happens. These eruptions have really shown how important resilience and flexibility are in what we do, and we’re proud of how our team and technology came together to serve the public in those critical moments.
What would you say is the current state of 5G adoption in your workflows?
We’ve started using 5G for some of our news reporting, mainly through portable IP-based tech that lets us go live quickly from the field. It’s been super helpful when setting up a full OB van isn’t really practical. That said, we still rely on OB vans for bigger broadcasts. Overall, 5G is a great addition to our toolkit, giving us more flexibility and speed when we need it most.
To what extent are you adopting artificial intelligence (AI) and process automation in your workflows? How do you see their influence in the near future?
We’re making steady progress with AI to enhance both our internal workflows and the user experience. In close collaboration with software companies specialising in language technology and artificial intelligence, we’ve added a speech-to-text service for live events, which improves accessibility for our audiences. We’re also using AI to generate simplified news articles, making important information more understandable and inclusive. In addition, AI supports us with proofreading and translation services and helps accelerate content production across radio and television. In production we have been using image and video generators; we have been taking small steps and setting guidelines for our staff. We’re actively exploring further opportunities to improve our services and increase efficiency with the use of AI.
What is your level of confidence in and adoption of cloud technologies compared to on-premise systems?
We’re confident in both cloud and on-premises technologies, and for us, it’s not about choosing one over the other. Instead, we take a hybrid approach that allows us to stay flexible, resilient, and cost-effective. The choice depends on the system’s purpose, how critical it is, and what level of control or scalability is needed.
Being in Iceland adds a unique dimension, since there’s limited presence from global cloud providers here. That makes us think carefully about what should be hosted locally and what works better in the cloud, especially when considering latency, availability, and disaster scenarios.
Most of our production workflows are still on-prem, where we value the stability and control—particularly when it comes to managing software updates. On the other hand, we use cloud-based tools for our web infrastructure and office systems, and they’ve proven essential in handling peak traffic moments, such as during volcanic eruptions.
Cybersecurity is also a key part of our thinking. Whether systems are cloud-based or on-premises, we evaluate each one carefully and ensure safeguards are in place to protect both our operations and user data.
What is your position regarding OTT services?
We’ve managed our own OTT platform for years, but we’re now exploring more standardised solutions. Supporting all major smart TV platforms is a key priority, and managing multiple in-house platforms is complex. Big streaming services have raised the bar for performance and usability, and we want to meet those expectations as we put more focus on digital distribution.
“We have live camera feeds running at active volcanic sites around the clock. When an eruption hits, we can instantly switch to those cameras and broadcast live footage within minutes”
In previous issues of TM BROADCAST, we had the chance to speak with the technical leads of three internationally renowned studios specializing in virtual production—Quite Brilliant, dock10, and Dimension (you can access the full interviews via the embedded links).
This article brings their insights together to identify key takeaways and offer a clearer understanding of how the market is actually using this increasingly popular technology. We also explore the different ways it is being applied across advertising, television, and cinema.
Virtual production is enjoying a moment in the spotlight and has emerged as one of the hottest topics of the year so far. Although it isn’t a brand-new technology—it has evolved steadily over the past decade—recent advancements in LED displays, real-time engines, immersive experiences, and AI have propelled it to new heights.
These innovations don’t just fire the imagination of creative teams by enabling the creation of unprecedented hybrid environments; they also open the door to a broader range of projects by improving efficiency and lowering costs.
As with any technological breakthrough, the industry initially raced to explore its possibilities—often making missteps or facing misunderstandings along the way. But once that initial wave passes, there’s room for more measured reflection. That’s when a more accurate analysis of the benefits and challenges begins to take shape.
With that in mind, we spoke in recent months with technical directors at Quite Brilliant, dock10, and Dimension. They helped us understand how virtual production is truly being applied in the field today, and how studios are tailoring it to the specific needs of advertising, television, and cinema.
“Studios are now using the technology for the right reasons, focusing on scenes where it truly adds value,” says Russ Shaw, Head of Virtual Production at Quite Brilliant, when assessing how this technology is currently being used—a view shared by all the studios we interviewed.
In this respect, it’s important to note that virtual production isn’t a one-size-fits-all solution.
“In the early days, there was considerable hype surrounding the technology, and many productions rushed to explore its potential. However, there was often a misconception that projects needed to be entirely location-based or entirely virtual,” Shaw points out.
Another key trend is the increasing accessibility of virtual production for projects that previously couldn’t afford it. “As prices have decreased, the accessibility of virtual production has improved dramatically. Studios are now using the technology for the right reasons, focusing on scenes where it truly adds value. Lower-budget projects are increasingly adopting virtual production because of its efficiencies,” Shaw adds.
Callum Macmillan, CTO at Dimension Studio, agrees: “There’s absolutely more demand from a wider range of projects for virtual production support. A few years ago, it was predominantly reserved for the biggest projects with significant ‘blockbuster’ budgets, but now we’re working with more independent projects thanks to the way virtual production technologies have become more efficient, and therefore more cost effective.”
One of the key advantages all three studios agree on is the ability of virtual
production to streamline workflows—sometimes even improving the final quality in the process. A major factor here is the leap forward in real-time rendering. “The most recent advancements in real-time rendering have been transformative. Right now, we can achieve photorealistic ICVFX results in real time that used to need hours of post-processing,” says Callum Macmillan.
Paul Clennell, CTO of dock10, echoes that: “Using the latest real-time rendering, you can see how each shot will look as it happens either live in the studio or via secure remote viewing.”
Still, this remains a work in progress, and the technical directors we interviewed agree that rendering will be a crucial area of innovation moving forward.
As Russ Shaw explains: “Rendering quality will continue to improve, driven by faster GPUs and better connectivity standards like SMPTE 2110. This will enable low-latency, synchronized, HDR, and high-frame-rate streaming, reducing technical challenges on large LED walls.”
This new landscape disrupts the traditional linear nature of content creation. It also reshapes workflows, creating new points of coordination
between departments and particularly impacting post-production.
“This doesn’t eliminate post-production but rather transforms it,” Callum Macmillan notes. He illustrates the shift with Dimension’s experience on the film Here:
“We were able to capture final-pixel quality imagery in-camera while maintaining flexibility for creative decisions. The integration of real-time depth information and sophisticated environmental controls means that many traditional post-production tasks are now being handled during principal photography. The result is a more efficient, creative, and collaborative process that maintains the highest quality standards while enhancing efficiency and reducing costs in post-production.”
Another major factor in virtual production’s evolution has been the improvement in LED screen technology.
“Looking back to 2020, the quality of Unreal Engine environments was just passable for real-time production, and LED screen resolutions were far from optimal. Directors of photography often relied on shallow depth of field to obscure background imperfections,” explains Shaw.
“Today,” he adds, “advancements such as micro-LED technology and finer pixel pitch screens have significantly enhanced the realism of virtual environments. Unreal Engine has also improved, with innovations like Nanite for high-detail meshes and Lumen for enhanced lighting and shadow quality, resulting in sharper and more lifelike content.”
Callum Macmillan adds: “LED wall technology has matured significantly, enabling better color accuracy and higher resolution. We’re now on our third iteration of panel technologies and
second generation of image processing hardware, which is having a huge impact.”
Virtual production is a broad term, and studios are applying it in different ways depending on their resources, goals, and creative direction.
Quite Brilliant, known for its work in advertising, has adopted a modular approach and built a strategic alliance with London’s historic Twickenham Film Studios, one of the UK’s oldest facilities. The partnership has even opened doors to projects in the film sector.
Russ Shaw (Quite Brilliant):
“Studios are now using the technology for the right reasons, focusing on scenes where it truly adds value”
“Operating primarily out of Twickenham Film Studios gives us access to multiple sound stages, some permanently set up for virtual production and others readily adaptable for bespoke solutions,” says Shaw. “Being partially modular allows us to customise the size and shape of the LED walls to meet specific production
needs. This flexibility is crucial, as directors often require tailored setups to bring their creative visions to life and don’t want restrictions or extra work in post-production unnecessarily.”
And he adds: “We are equally comfortable with building solutions off-site when required, as we prioritise the financial and logistical needs of our clients.”
Dock10, which is more focused on TV production, has opted for a different route. Instead of creating a single dedicated virtual set, the company has integrated
virtual production capabilities into all ten of its studios. “These solutions use the latest real-time games-engine technology from Epic Games’ Unreal Engine with Zero Density, together with the Mo-Sys StarTracker camera tracking system. By centralising the systems in our CTA we give our customers the flexibility to choose the right size studio for their production – anything from 1,000 sq ft to 12,500 sq ft.”
Paul Clennell highlights the studio’s philosophy: “Our next-generation virtual studio capability is a powerful creative toolset for delivering even greater on-screen value—enabling the creation of even more content-rich sets. Our proven technology seamlessly combines physical, virtual and augmented reality in real time, together with live data.”
“The results are entirely realistic, including the addition of shadows and reflections as well as the perfect integration of even single strands of hair. Cameras can be pointed in absolutely any direction across the whole of the studio to deliver a continuous on-screen set.”
Dimension, which has a strong focus on fiction, stands out for its work on virtual humans. “Perhaps one of our most exciting innovations has been combining Virtual Humans (either puppeteered metahumans or volumetric video captures), with ICVFX virtual production environments, creating hybrid experiences that weren’t possible even a few years ago,” says Callum Macmillan.
“Our recent project Evergreen demonstrates really well how we can create directable mid-ground characters that
are fully virtual. Populating real-time VP environments with virtual humans to bring life to the mid and background elements of a scene is also something we’ve recently done for the Colosseum shots that Dimension worked on for the Amazon Prime series Those About To Die.”
Callum Macmillan (Dimension): “The future of virtual production will be an increasingly seamless integration between physical and virtual elements”
One of the major challenges with virtual production is training staff—given the need for expertise across multiple disciplines. “While we typically hire studio staff with strong 3D skills due to our use of Unreal Engine, internal training is essential to support live shoots effectively,” explains Shaw.
“Virtual production studios vary significantly, as they incorporate equipment from different manufacturers, each with unique software and setup requirements. For example, setting up LED walls, processors, camera tracking devices, media playback servers, and lighting controllers demands a deep understanding of many disciplines from frame synchronisation, down to types of networks, video and display cabling.”
In general, all studios interviewed emphasized the value of having a high-caliber human team—engineers, designers, and technical specialists—beyond just
the hardware or software.
“Our production crew are also highly experienced in camera setup and calibration,” says Shaw. “For clients bringing their own crew, we offer free technical recces, enabling them to experiment with both studio and personal equipment before the shoot. This hands-on preparation is invaluable for ensuring a smooth production process.”
Dock10 has taken this one step further by creating an in-house Innovation Team, which has become a leading R&D reference in the UK television industry.
“Its objective is exploring ways for the industry to adopt more interactive, immersive and game-like forms of entertainment at a time when traditional linear, 2D video is in decline, particularly amongst younger audiences,” Clennell explains.
“Having delivered hundreds of hours of innovative content, the Innovation Team has earned a strong reputation for developing the bespoke tools and
workflow required for cutting edge productions. It also plays an important role in educating and informing the sector about new ways to create content.”
Clennell points to an example of their work: “The team’s first development within the R&D studio was DMX-VL, a real-time software solution that allows for the control of both physical and virtual moving lighting from a single industry-standard lighting console. This helps to deliver infinite lighting possibilities for productions without additional kit, crew or cost.”
At Dimension, the approach to talent development is described as comprehensive and based on three key pillars: “We have strong ties with academic institutions; we operate a structured work placement program, partnering with training institutions, and we’re constantly developing our onboarding process, where senior team members provide hands-on guidance across various disciplines,” says Macmillan.
“This makes sure that we can transfer knowledge and get teams up to speed quickly with our workflows and best practices. AI is helping here too.”
AI has become an extraordinarily useful tool throughout the virtual production process. As Russ Shaw illustrates: “An AI-generated background can be created and approved in less than a day, compared to the three or more days required to build a comparable Unreal Engine scene.”
That said, the technology still has its limitations. “AI-generated content often lacks control and adaptability, meaning we still rely on traditional 2D visual effects software for subtle adjustments or artifact corrections,” Shaw adds. “Additionally, camera movement can be restrictive, as AI content often uses 2.5D layering to create a sense of depth and fails if pushed too far or held on for too long.”
Even so, AI helps overcome challenges that would otherwise be far more difficult to solve. Paul Clennell offers one example:
“The Innovation Team is developing techniques for creating realistic shadows and reflections in multi-camera green screen virtual production. For years, this has proven to be a huge challenge for production.”
“The solution is the development of a bespoke AI model.
Called AI Composure, this proves that green screen virtual production can look better and be delivered more cheaply than LED volume stages; and it isn’t limited to the size of the LED wall.”
Callum Macmillan explains that AI is integrated into Dimension’s entire operation:
“We use large language models to optimize our production workflows and project tracking, while also maintaining comprehensive departmental knowledge bases. Our artistic teams
use text-to-3D and mesh-to-3D tools, and generative AI accelerates our pre-visualization processes through fast 2D imagery generation.”
“One of our most significant implementations is in procedural content generation, which has revolutionized our environment and layout building processes. This multifaceted AI integration demonstrates how machine learning can enhance both technical and creative aspects of virtual production.”
Virtual production is not yet a fully mature technology, and all the technical directors we spoke with expect continued progress that will unlock new capabilities and greater efficiency. Beyond what has already been mentioned, Russ Shaw highlights another trend to watch: “We also anticipate growing
interest from post-production companies through tools like Chaos Arena, which streamline the integration of V-Ray pipelines with LED walls without the need for pre-processing. These advancements will further enhance the efficiency and accessibility of virtual production workflows.”
Callum Macmillan adds: “The future of virtual production will be an increasingly seamless integration between physical and virtual elements. We’re seeing a convergence of real-time technologies, AI, and traditional filmmaking—systems will become more modular, volumes will become more flexible and dynamic… be that for LED production, performance capture, or volumetric capture, they all exist in a volume—that’s revolutionising how stories can be told. The viewer experience will become more immersive as the line between practical and virtual elements continues to blur.”
Paul Clennell (dock10):
“Using the latest real-time rendering, you can see how each shot will look as it happens either live in the studio or via secure remote viewing”
Finally, we asked the three technical directors to share a standout virtual production project they feel especially proud of. Here are their responses:
Russ Shaw (Quite Brilliant):
While every project we’ve undertaken has unique and interesting elements, a recent campaign for Nissan with City Studios and The Gate Films stands out. Over 60% of the TV commercial was shot virtually in the studio, featuring Manchester City’s manager Pep Guardiola. Despite his limited availability—just 90 minutes on set—he appeared in 20 shots.
The biggest challenge was the tight timeline, leaving no room for errors. To ensure success, we captured all the necessary plates a day prior to the VP shoot, creating a rough animatic for everyone to visualise the car’s position, camera placements, and background locations for continuity. Thanks to meticulous planning, an excellent director, DOP and experienced crew, the project was executed flawlessly, earning unanimous praise.
Paul Clennell (dock10):
The Innovation Team is working to perfect the integration of technology
that enables animated content to be rendered in real time within a live, multi-camera studio setting. This builds on dock10’s work on BBC Bitesize Daily, which made significant strides in animating motion-captured performers in live virtual studio pipelines.
This technique was developed further in dock10’s work on Channel 5’s Dinosaur with Stephen Fry, where pre-animated dinosaur actions were manipulated in the gallery to allow for new playback sequences to be created live in the studio.
Off the back of these successes the Innovation Team created a dedicated R&D TV studio, working closely with dock10’s engineering department and key external suppliers to incorporate numerous innovative technologies into one space. The UHD / HDR multi-camera studio features a three-wall infinity curve with multitalent motion capture, face capture, optical talent tracking and video / audio recording capabilities.
Callum Macmillan (Dimension):
Our work on Robert Zemeckis’ Here represents a particularly significant milestone in virtual production innovation and was a project on which we really pushed the boundaries of what’s
possible with real-time technologies.
The project required us to create an evolving virtual world viewed through a single window across different time periods. We implemented sophisticated vehicle simulation systems with 55 physics-enabled
vehicles, developed complex weather systems including rain, snow, and seasonal changes, and pioneered new approaches to interactive lighting. The use of real-time depth cameras based on laser systems (nLight’s HD3D) provided unprecedented depth information
integration, enabling seamless blending between physical and virtual elements.
This project exemplifies how far virtual production has evolved, from simple LED volumes to sophisticated real-time production tools.
Gen AI, unlike its predecessors, has the ability to create something new. It is an analytical, creative, innovative and expansive technology. When applied to the scope of our market, it is revolutionizing the way we create content... and this is just the beginning.
By Carlos Medina
AI is the trendy acronym nowadays. On March 18, 2025, an internet search for “AI” returned about 6,340,000,000 results in 0.20 seconds.
AI is short for Artificial Intelligence. The Royal Spanish Academy (RAE) chose “artificial intelligence” as its word of the year for 2022, and the Collins Dictionary selected “AI” as its word of the year for 2023.
We find ourselves in a time in which AI is no longer just a hidden thread running through process optimization, management and data analysis; its use is now plain to see in the making of audiovisual works.
On this occasion, we are going to use an AI, specifically ChatGPT, to conceptualize what it is:
AI or Artificial Intelligence is a branch of computing that is dedicated to
creating systems capable of performing tasks that normally require human intelligence. These tasks may include learning, reasoning, problem solving, perceiving (e.g., computer vision), understanding natural language, and making decisions.
There are different types of AI, and some examples include:
› Weak AI: focuses on specific tasks, such as virtual assistants or customized recommendations (e.g. Siri or Netflix).
› Strong AI: refers to a form of artificial intelligence that could perform any human cognitive task, even autonomously, although it is still a theoretical concept.
We would have to go back to 1956 to find the origin of the term “Artificial Intelligence”. Thanks to the computer scientist John McCarthy, the possibility of creating a machine that could think like a human being was already being argued at the conference at Dartmouth College, in Hanover, New Hampshire (United States).
The “Dartmouth Summer Research Conference on Artificial Intelligence” was organized by John McCarthy (Dartmouth College, New Hampshire) and proposed by McCarthy himself, Marvin L. Minsky (Harvard University), Nathaniel Rochester (I.B.M. Corporation) and Claude E. Shannon (Bell Telephone Laboratories):
› John McCarthy (Boston, Massachusetts, 04/09/1927 – Stanford, California, 24/10/2011). A prominent American computer scientist, he received the Turing Award in 1971 for his important contributions to the field of artificial intelligence. He drove the development of the first AI programming language, LISP, in the late 1950s. He worked on the mathematical nature of the thought process, including the theory of Turing machines, the speed of computers, the relationship of a model of the brain to its environment, and the use of languages by machines.
› Marvin Minsky, (New York, 09/08/1927-Boston, 24/01/2016). American scientist considered one of the fathers of artificial intelligence, co-founder of the artificial intelligence laboratory at the Massachusetts Institute of Technology and author of the first neural network capable of learning, SNARC, in 1951.
› Nathaniel Rochester (Buffalo, New York, 14/01/1919 – Newport, Vermont, 06/08/2001). He wrote the first assembler and was involved in founding the field of artificial intelligence. Research Information Manager at IBM Corporation, New York, he worked on some of the automatic programming techniques that are widespread today and was concerned with how to get machines to perform tasks that previously could only be done by people.
› Claude E. Shannon (Petoskey, Michigan, 30/04/1916 – Medford, Massachusetts, 24/02/2001). He was an American electronic engineer and mathematician, remembered as “the father of information theory”. His 1948 work ‘A Mathematical Theory of Communication’ was a landmark. He spent fifteen years at Bell Telephone Laboratories (computer and artificial intelligence area). In 1950 he published a work describing the programming of a computer to play chess, which became the basis for subsequent developments.
Other prominent participants in the Conference at Dartmouth College were:
› Arthur Samuel, a pioneer in the field of computer games and artificial intelligence and the creator of one of the first self-learning game programs, a very early demonstration of the concept of artificial intelligence.
› Allen Newell, a researcher in computer science and cognitive psychology. He contributed to the Information Processing Language -IPL- (1956) and two of the earliest AI programs, the Logic Theory Machine (1956) and the General
Problem Solver (1957) (alongside Herbert A. Simon).
› Herbert Simon, an American political scientist whose research spanned the fields of cognitive psychology, computer science, public administration, economics, management, philosophy of science, and sociology.
› Oliver Selfridge, who wrote important works on neural networks, pattern recognition and machine learning. His ‘Pandemonium’ (1959) is recognized as an AI classic, and he has been called the “father of machine perception”.
› Ray Solomonoff, one of the founders of the branch of artificial intelligence based on machine learning, prediction and probability. He was the inventor of algorithmic probability.
› Trenchard More, a mathematician and computer scientist.
So far this year, the mass media has released a huge amount of AI-related news. Perhaps the most significant piece is the fierce competition that is taking place between the different AI models:
› ChatGPT, a neural network model developed by OpenAI, a company founded in 2015 in the United States by Sam Altman and Elon Musk (who left the company in 2018). It is essentially a chatbot.
› DeepSeek (V3 and R1). This Chinese artificial intelligence company was founded in 2023 by Liang Wenfeng in Hangzhou, Zhejiang. It develops large open-source language models (LLMs). DeepSeek-R1 stands out for its ability to generate more extensive chains of thought (CoTs), representing a remarkable advance in AI; its V3 model was launched in December 2024, followed by R1 in January 2025.
› Manus AI is an artificial intelligence agent developed by the Chinese company Butterfly Effect. Unlike other AI models that require specific instructions, the beta version of Manus AI (2025) can initiate tasks, analyze information in real time, and adapt its response strategies independently. Its functionalities range from autonomous web browsing to data analysis.
Gen AI is an added value in itself. You just have to know how to use it and let perfectly prepared professionals make the right decisions.
› Gemini: In December 2023, Google DeepMind introduced Gemini, its language model designed to integrate into a wide range of the company’s products and services as a personal AI assistant and become a successor to LaMDA and PaLM.
› Grok, developed by Elon Musk’s AI company xAI, presents itself as an AI chatbot with a distinctive touch of humor, accessible directly from the X platform (formerly known as Twitter). It was launched in November 2023.
Specifically, this article covers the unprecedented emergence of what is known as Generative Artificial Intelligence (GenAI or GAI) for audiovisual content: scripts, photos, videos, dubbing, production documents, design, music, illustrations, graphics...
We once again turn to our particular collaborator, ChatGPT, this time to see its take on GenAI/GAI:
Generative AI is a branch of artificial intelligence that focuses on creating new content from existing data. Unlike other forms of AI that are limited to recognizing patterns or making predictions, generative AI has the ability to generate text, images, music, videos, and other types of content that did not previously exist.
This technology uses machine learning models, such as deep neural networks, to analyze large amounts of data and then generate new and original outputs based on that information. Some of the best-known examples of generative AI include models such as GPT-3 (for generating text), DALL·E (for creating images from textual descriptions), and Jukedeck or OpenAI’s MuseNet (for music creation).
In short, generative AI not only understands and processes information, but also creates new things from that information, thus opening up a wide range of possibilities in areas such as art, writing, music, programming, design, and more.
Primitive generative models have been used for decades in statistics in order to assist in the analysis of numerical data. Neural networks and deep learning were the recent predecessors of modern generative AI. Variational Autoencoders (VAEs), developed in 2013, were the first deep generative models that could generate realistic images and speech.
Deep Learning is a Machine Learning technique that enables GAI and is used to analyze and understand large amounts of data. This process, also known as deep neural learning or deep neural networks, involves computers learning through observation, in a similar way to people. It is central to the use of computers for the difficult task of understanding human language, known as Natural Language Processing (NLP).
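The “learning through observation” described above comes down to a simple loop: make a prediction on observed data, measure the error, and nudge the model’s parameters to reduce it. Here is a minimal sketch in Python—a two-parameter linear model rather than a deep network, purely for illustration:

```python
import numpy as np

# Purely illustrative: the core loop behind machine/deep learning.
# A model observes examples, measures its error, and repeatedly
# adjusts its parameters (here w and b) to reduce that error.

rng = np.random.default_rng(42)
x = rng.uniform(-1, 1, 200)       # observed inputs
y = 3.0 * x + 0.5                 # the relationship the model must discover

w, b = 0.0, 0.0                   # model: y_hat = w * x + b
lr = 0.1                          # learning rate (step size)

for _ in range(500):
    y_hat = w * x + b
    err = y_hat - y
    # gradients of the mean squared error with respect to w and b
    w -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)

print(round(w, 2), round(b, 2))   # converges close to the true 3.0 and 0.5
```

A deep network replaces these two parameters with millions and the straight line with stacked nonlinear layers, but the observe–measure–adjust cycle is the same.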
Gen AI is achieving very significant results, thus becoming the perfect ally for this industry
Therefore, Gen AI works with different models such as:
› Foundational Models (FM) that use learned relationships and patterns to predict the next item in a sequence.
› Large Language Models (LLMs) specialized in tasks based on text generation, open conversation and information extraction.
› Diffusion models, which work by first adding noise to training data until it becomes random and unrecognizable, and then training the algorithm to iteratively remove that noise until a desired result is revealed.
› GAN (Generative Adversarial Network) models, which work by training two neural networks (a generator and a discriminator) against each other, continuously improving their ability to create data.
› VAE (Variational AutoEncoders) that use two neural networks (encoder and decoder) mapping input data and reconstructing new data that looks like the original.
› Transformer-based models, which allow adding more layers within the VAE, improving performance, and implement contextualized embeddings, which yields more complex and meaningful results.
› Retrieval-Augmented Generation (RAG), which serves to complement and refine the parameters or representations in the original model and ensure that a generative AI application always has access to the most up-to-date information.
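Of these families, the diffusion idea is perhaps the easiest to see in a few lines of code. Below is a minimal, purely illustrative sketch of the forward “noising” half of the process (function and parameter names are our own); a real diffusion model is then trained to reverse it, recovering an image step by step from pure noise:

```python
import numpy as np

# Purely illustrative: the forward "noising" process of a diffusion model.
# At each step a little Gaussian noise is blended in; after enough steps
# the original data is statistically indistinguishable from pure noise.

def forward_noising(x0, num_steps=1000, beta=0.02, seed=0):
    """Progressively add Gaussian noise to the data x0."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(num_steps):
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.standard_normal(x.shape)
    return x

data = np.ones(10_000)            # stand-in for an image's pixel values
noised = forward_noising(data)

# The signal is gone: the result has mean ~0 and standard deviation ~1,
# i.e. pure Gaussian noise. Generation runs this film in reverse.
print(float(noised.mean()), float(noised.std()))
```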
The audiovisual industry has always sought creative talent and the specialists who make it tangible (a film, a piece of music, a photograph, an illustration, a design...). To this end, technology has always created the conditions for everything, or almost everything, to become a reality. Today we can say that Gen AI is achieving very significant results, thus becoming the perfect ally for this industry.
Therefore, it is time to learn about some of the most outstanding applications for Gen AI:
› Texts and presentations
Copilot (Microsoft 365: Word, Excel, PowerPoint, Outlook and OneNote); Sendsteps.ai; TextCortex; Jasper; Copy.ai; Rytr; Writesonic; Google Workspace AI (offers similar collaboration and productivity tools, such as Google Docs, Sheets and Slides); Notion; Zoho Office Suite; Slack; Trello; Asana; Beautiful.ai; Canva.ai; Gamma; Crello…
› Screenwriting and production
Musely; YesChat; Ahrefs; Largo.AI; BIGVU; Invideo; Flexclip; Monica AI; MyMap.AI; GitMind; Plakit; CamPlan AI; ShotDesigner App; Relux; Numerous.ai; SkyReels; Appypie; Miro; Kapwing AI
› Photography & Images
PhotoDirector; Pixlr; Adobe Photoshop AI (Content-Aware Fill and Select Subject); Lumen5; DeepArt; AI ease; Magnific; Midjourney; Bluewillow; Dreamlike; Leonardo AI; Copilot Dall-E 3; Craiyon; Scribble Diffusion; Stable Diffusion; Picfinder; FreeImage.AI; Dreamstudio; Tinkercad; Fotor…
› Video
Adobe Firefly and Adobe Sensei (Generative Extension for Premiere Pro and Adobe After Effects); Pictory.ai; Synthesys; Frame.io; Magisto; Lumen5; Filmora; Pinnacle Studio.ai; PowerDirector; Runway; DaVinci Resolve AI; TopazLabs; Elai; Designs AI; Raw Shorts; InVideo; Opulus; StoryKit; BHuman; Veed.io; DeepBrain AI; Clipchamp; OpenAI Sora; YouCam Video…
› Music
Loudly; SongGenerator.io; Media.io; AmperMusic; AIVA; Jukedeck; Soundraw; Ecrett Music; MusicLM Google; MusicGen Meta; MusicFX Google; Muzix; Music Muse; Song.do; Suno AI; Riffusion; Hydra II (Rightsify); SoundDraw.ai; Rumbling; Amadeus Code; Mubert; Soundful Music; Beatoven.ai; Spline AI; X-Generate; AIMusic.so; Brev AI...
› Sound and dubbing
ElevenLabs; iZotope Neutron; Respeecher; WavTool; Vidnoz AI; Rask AI; Smartcat; Sonix; Descript; Auphonic; IBM Watson…
› Visual effects, posters, design and loops
EndLess AI Video Loops; Veo 2; CapCut; D-ID Creative Reality Studio; FlexClip; Unity Weta; Gamma; LivePortrait AI; Monica AI; Fotor; LogoAI.ai; GoEnhance.ai; PosterMyWall; Canva.ai; Crello; Designify; Snappa; Designs AI; Image FX; Microsoft (AI image generator); X (Grok); Kling; Pika...
› Comics, 2D and 3D
Krikey AI; Renderforest; FlexClip; Animaker; Toon Boom Harmony.ai; Adobe Character Animator; Celsys Clip Studio Paint; Drawpad Graphic Design; Adobe Substance 3D Viewer; Project Neo; AI Cartoon Generator; Midjourney; Comic Life; YouCam AI Pro; SkyReels; AI Comic Factory; AI 3D Image Creator; Meshy; Appy Pie; Meshcapade & CLO; Luma Labs AI; Spline AI; 3DFY AI; RODIN AI; Avaturn; Alpha3D; Deep Brain AI; Colossyan; Kaiber…
Let me recall some lines from Yeray Alfageme’s article “AI, Machine Learning and the Make Movie button”, published in TM BROADCAST: “Why should a person spend hours and hours in a monotonous and repetitive environment performing the same task over and over again with similar content in order to edit and cut the news or any given piece following the given parameters? Why can’t our friend the algorithm do it, and this person devote their time to more human tasks such as thinking of the next story, format or emotion to convey to viewers? That’s the added value of AI, not the mere cost savings.”
These statements, considered over time, show us that Gen AI is an added value in its own right. You just have to know how to use it and let well-prepared professionals make the right decisions. More and more feature film and TV series productions use Gen AI in some processes to achieve an optimal end result. Some examples: ‘Emilia Pérez’, directed by Jacques Audiard; ‘Alien: Romulus’, directed by Fede Álvarez; ‘Rogue One’, directed by Gareth Edwards; and ‘The Brutalist’, by Brady Corbet.
But in the audiovisual industry there are many other fields and sectors that, in one way or another, are being affected by the use and development of specific Gen AI applications: advertising, videos for the internet and customer websites, covers and posters, online and press photographs, songs, visual loops, etc.
The various applications of Gen AI in 2025 have mushroomed compared to previous years, so we can make out five very important aspects to take into account:
› We are at the beginning of Gen AI as applied to audiovisual content, with very surprising results being achieved in some cases.
› Technical staff specialized in generating more traditional content will be cut down due to the lower demand for necessary jobs.
› Specialized training on Gen AI in the various areas of AV that allows growth within this sector is an absolute must.
› The use of Gen AI requires guaranteeing the rights and intellectual property of the audiovisual work, since using other nonproprietary resources may involve legal and technical problems in the resulting work.
› It is necessary to work responsibly with Gen AI and to pass laws, appropriate to our society, that make it possible to tell the difference between the real, the false (deepfakes) and fiction.
On August 1, 2024, the European Union’s Artificial Intelligence Act came into force, the first comprehensive attempt to regulate AI in Europe.
The Conference at Dartmouth College, in Hanover (United States), marked a starting point regarding AI: “Any aspect of learning or other characteristic of intelligence can in principle be accurately described in such a way that a machine can be built to simulate it.”
That was in 1956. We are in 2025, and we can now see how prescient those early debates and approaches to AI were. We have no doubt that society, companies, telecommunications, entertainment and each of the professional fields around us are being transformed by, and benefiting from, the use of AI.
Klaus Martin Schwab, a German economist and entrepreneur, in his work ‘The Fourth Industrial Revolution’ (Debate, 2016) already presents us with “a world in which virtual and physical manufacturing systems cooperate with each other in a flexible way on a global level”.
Gen AI, unlike Descriptive AI or Predictive AI, has the ability to create something new. It is an analytical, creative, innovative and expansive technology. Specifically, in the field of the audiovisual industry, Gen AI is surprising us all with what it can originate in any type of content: Gen AI is heading towards an era without limits.
A guide to getting right the origin and the main pillars supporting the new models that are revolutionizing audiovisual production
By Carlos Medina, Audiovisual Technology Advisor
The main mission of the audiovisual sector is to offer audiovisual content to be marketed according to specific goals: entertaining, informing, educating or developing a cultural asset. It is a sector highly conditioned by business interests, economic factors, social context, talent, creativity and technological development.
This list of constraints has shaped each of the decisions taken in the different areas/departments of AV in order to deliver creative and technical quality. We are talking about what it means to make a film, a music concert, a sports broadcast or a visual/mapping show, among others.
Therefore, to better understand what we will be talking about in this article, we have to understand the word "production". We are referring to a broader concept, not just a department made up of professionals specialized in production and management (producer, co-producer, executive producer, delegated producer, production assistant...); it is a more complex process than a mere sequence of phases in the preparation of an audiovisual product.
We refer to "production", from an economic-productive point of view, as the process in which available resources and raw materials are used to create goods and services with added value, by combining factors such as labor, capital and resources. "Production" as synonymous with creation, elaboration and/or manufacture.
"Current audiovisual production has been directed towards remote production or what is understood as REMI (Remote Integration Model)"
In recent years there has been talk of different audiovisual production modes, which need to be clarified and which coexist in full competition. These are:
1- On-site production: this type of production requires a physical (real) headquarters, a place that brings together all personnel and technical equipment. It is the most traditional form of production, the model in which everyone has the most experience. Both the artistic staff (presenters, singers, speakers, lecturers...) and the technical staff (production, organizers, image, lighting and sound) have to be physically present, both to develop the content and to handle the necessary equipment and run the technologies. It involves working in a facility perfectly set up for making audiovisual content: a TV studio (control room + set) and a central control room for broadcast.
2- Remote production: this type of production means that we move specialized teams to the place where the content is developed,
outside the environment of a traditional TV studio. This implies an integration of signals and communication between both environments: travel of OB Trucks, Vans & Fly Packs and the central broadcast control that is typical of a TV station. This is also known as distributed production.
3- Automated/assisted production: this means that some aspects or processes are programmed to run autonomously by machines, through learned protocols and artificial intelligence (AI). It is therefore an improvement in workflows that affects only the technical processes, simplifying the management of audiovisual content.
4- Remote/off-site production: this mode of production results in audiovisual content from work and
decision-making in other locations that are distant and independent of the place where the content to be produced is located. It is the most modern production mode and the one that is undergoing the greatest development.
The highest level of this mode is a total remote production - REMI (Remote Integration Model), also known as a centralized production.
5- Hybrid production: it is the combination of an on-site production and an off-site production to generate audiovisual content. That is, it is necessary to deploy a human team and a limited amount of technical equipment to the place where the audiovisual content is, as well as the existence of headquarters where decisions are made and remote technical operations can be carried out.
6- Virtual/online production: perhaps more for the future than at present, it is a production mode
where there is nothing physical. We rely on a completely created and generated environment (realistic or imagined), with tools, applications and technical devices controlled over local and global networks. It is also driving changes and innovations in the way viewers/users/audiences consume and participate in content.
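The six modes above can be summarized, purely as an illustrative sketch, in a small data structure; the short labels and one-line summaries below are ours, not an industry standard:

```python
# Illustrative taxonomy of the six production modes described above.
# Labels and summaries are editorial shorthand, not formal definitions.
PRODUCTION_MODES = {
    1: ("On-site", "All staff and equipment together at a physical headquarters"),
    2: ("Remote/distributed", "Specialized teams travel to the content location (OB trucks, fly packs)"),
    3: ("Automated/assisted", "Some processes run autonomously via learned protocols and AI"),
    4: ("Remote/off-site (REMI)", "Decisions and operations happen far from the content location"),
    5: ("Hybrid", "Minimal crew on location plus a remote decision-making headquarters"),
    6: ("Virtual/online", "A fully generated environment controlled over local and global networks"),
}

for number, (name, summary) in PRODUCTION_MODES.items():
    print(f"{number}. {name}: {summary}")
```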
"At present, a combination of REMI production and hybrid production is what best adapts to reality for generation of audiovisual content"
But, being more realistic and considering the present situation, virtual production (VP) is the way of producing audiovisual material and content under just two premises: digital technology as a dynamic element of the process in all existing phases of AV and carrying out the work in real time. I refer to the article published on TM BROADCAST "Virtual Production in Broadcast" (No. 183 - January 2025).
This variety and diversification of production modes within the overall production process arises from three main causes of various origins:
› COVID-19. The COVID-19 health crisis immobilized people in their daily lives and significantly affected the dynamics of the work environment, forcing the global adoption of working from home and remote technical solutions. In the audiovisual sector, it meant working with very small technical teams in the broadcast field, or even technical staff carrying out their functions from their own homes.
In June 2020, TM BROADCAST magazine organized an informative breakfast entitled "Remote Production over IP", with the presence of some of the most significant players in the field, who were visionaries of what would happen in the following years.
› Live content (strictly live). Undoubtedly, live shows have become the current driver of the audiovisual industry, offering new experiences to viewers; it is here that the audiovisual machinery reaches its highest level of spectacle in images, video, projection, sound and lighting.
› Technological innovations. R&D companies investing in new technologies, together with the decisions taken by different governments, mean that new communication and audiovisual content production protocols are being implemented, such as DTT, the development of IP or the novelties around 5G.
Based on this broad approach, current audiovisual production has moved towards remote production, or what is understood as REMI (Remote Integration Model): new production methods aimed at obtaining what is called a broadcast master or final master in audiovisual slang.
At present, a combination of REMI production and hybrid production is what best adapts to the realities of audiovisual content generation. Therefore, some work is done on-site and a great deal remotely. For example,
the coverage of the Tokyo 2020 Olympic Games (held in 2021) by Olympic Broadcasting Services (OBS).
This global sporting event has become one of the most important REMI productions in recent history, being a milestone in the scale and sophistication of remote television production:
› OBS implemented a massive REMI infrastructure that produced more than 9,500 hours of content.
› Much of the production, signal mixing, editing, subtitling and quality control was carried out from the International Broadcast Centre (IBC) and from remote centers located in other countries.
In Spain, in 2022, a highlight was the remote production of a Real Madrid UEFA Champions League match, produced by Telefónica Servicios Audiovisuales (TSA). In this case, the football match was played at the Santiago Bernabéu stadium, where the cameras and capture equipment were installed, while the multi-camera production was carried out remotely from Telefónica's facilities in Tres Cantos, more than 30 km from the stadium.
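For distances like the 30 km in this example, the signal propagation delay itself is negligible. As a back-of-envelope check (assuming light travels through optical fiber at roughly 200,000 km/s, about two-thirds of its speed in vacuum):

```python
# Approximate signal speed in optical fiber: ~200,000 km/s
# (roughly 2/3 of the speed of light in vacuum).
FIBER_KM_PER_S = 200_000

def one_way_delay_ms(distance_km: float) -> float:
    """One-way propagation delay over fiber, in milliseconds."""
    return distance_km / FIBER_KM_PER_S * 1000

print(f"30 km one-way: {one_way_delay_ms(30):.2f} ms")  # → 0.15 ms
```

Processing, encoding and buffering dominate total latency in practice; distance alone is rarely the limiting factor at this scale.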
"The coverage of the 2021 Olympic Games became one of the most important REMI productions in recent history, a milestone in its scale and sophistication"
How is this possible? What should be taken into account in a REMI from a technical point of view?
› Image and sound capture technology and devices: this involves having cameras in place (operated on-site or remotely, for example PTZ and/or robotic cameras) and microphones in place, without forgetting lighting equipment.
In specific circumstances, and with a high budget, a Mobile Production Unit (MPU) equipped with the precise technology for live multi-camera production may also be available.
It is essential that the equipment operates under professional connections (SDI, IP), standard protocols (DANTE, NDI, DMX512, ART-NET...) and secure encoders/decoders (LiveU, TVU, Haivision, Kiloview).
› Interconnection and intercommunication systems. Secure videoconferencing and streaming platforms can be used, thus allowing effective real-time coordination without the need for physical presence.
Depending on the requirements and budget constraints, connection options vary:
– Point-to-point fiber: Ideal for ensuring the highest quality, although at a higher cost.
– FTTH: Conventional fiber optic connections with or without traffic prioritization, offering good value for the money.
– 4G/5G: Mobile equipment such as wireless transmission backpacks that allow flexibility and lower costs, and compatible with satellite, WiFi or internet.
– DSNG (Digital Satellite News Gathering).
It is essential to establish communication and exchange networks that support SRT, RTMP/RTSP, NDI/NDI Bridge and Free-D, among others.
A vital element in any REMI production is the internal communication and coordination between audiovisual technicians (Intercom IP), for example: Clear-Com, Unity Intercom, Riedel, among others.
› Remotely controlled multi-camera production systems, such as: video mixers, audio consoles, monitoring systems with multiviewers, camera control units (CCU) and replay applications for recordings and playbacks (EVS XT, Slomo.TV).
It is very common to work with remote production software such as vMix, OBS, Vizrt, etc.
› New computational paradigms, such as Fog Computing, Edge Computing, Cloud Computing and Blockchain, enable remote access to production and monitoring applications, reliable and scalable data sharing, increased personalization, improved streaming quality, and reduced latency for large-scale live streaming on social media and online video platforms.
Some examples of cloud or data center infrastructure: AWS MediaLive/ MediaConnect; Google Cloud Video AI/Transcoder; Microsoft Azure Media Services; cloud collaboration platforms (Sony Ci, LucidLink).
› AI automation and Deep Learning. In some projects these are being implemented in simple ways: automatic production decisions based on predefined rules and templates, position and focus tracking, intelligent camera movement, automatic subtitling and/or dubbing in several languages, or real-time graphics generation, among others.
› Distribution and Output: CDN (Content Delivery Network): Akamai, CloudFront, Fastly; Streaming platforms: YouTube, Twitch, Facebook Live; Broadcast Signals (HLS, MPEG-TS, DVB).
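Planning a REMI link usually starts with a bandwidth budget for the contribution feeds. As a rough, illustrative sketch (the figures below are our own assumptions, not values from this article):

```python
# Hypothetical REMI uplink budget: N camera feeds plus a safety margin.
# All numbers are illustrative assumptions, not recommendations.
def uplink_budget_mbps(cameras: int, mbps_per_feed: float, overhead: float = 0.2) -> float:
    """Total uplink bandwidth in Mb/s, with a fractional safety overhead."""
    return cameras * mbps_per_feed * (1 + overhead)

# e.g. six HD feeds at 20 Mb/s each, with 20% headroom
total = uplink_budget_mbps(6, 20.0)
print(f"Required uplink: {total:.0f} Mb/s")  # → Required uplink: 144 Mb/s
```

A budget like this is what drives the choice among the connection options listed above: point-to-point fiber for the largest totals, FTTH or bonded 4G/5G for lighter ones.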
"The greatest success that remote production is having is in the generation of live audiovisual content"
It is easy to see that a REMI allows audiovisual content to be produced from a place other than the one where it physically takes place. To the experience of on-site production and/or remote production, which involves transporting audiovisual equipment and relaying content, a novelty is thus added: the way content is produced in a remote production (REMI).
The greatest success of remote production, and the focus of the attention it is receiving, is the generation of live audiovisual content: audio and video are transferred to a central location, from which the video cameras, equipment, different signals and on-site communication are controlled remotely.
A REMI production is a solution that uses new techniques, allowing a series of benefits and adaptations in the work dynamics of the audiovisual sector, such as:
› A reduction in costs and travel: it minimizes the need to deploy personnel and equipment to the event site.
› Operational efficiency: it allows a more agile and flexible implementation, thus reducing times.
› Personnel optimization: it requires fewer technical staff in the field.
› Centralized operations and streamlining of resources.
› Sustainability and reduction of the carbon footprint in the audiovisual sector.
› Greater adaptability and scalability towards different audiovisual content.
› Flexibility and diversity in workflows: from traditional physical environments to cloud resources.
› Ideal solution for content with virtual reality (VR), augmented reality (AR) and mixed reality (MR).
The development and innovation coming from large companies and providers of REMI technology for TV are reassuring the sector and driving a commitment to live and remote production. For example:
› NEP Group: one of the world leaders in remote production solutions. It has implemented large REMI operations for events such as the NFL, NBA, and the Olympics. It offers production from its global centers or in the cloud.
› Grass Valley: supplier of filmmaking equipment and software for REMI. Its GV AMPP platform (Agile Media Processing Platform) allows 100% cloud productions.
Widely used by networks such as CBS Sports, Eurosport and Sky.
› Sony: it developed IP Live Production solutions. Cameras, mixers, servers and integrated systems with capacity for remote production. Worth highlighting are its XVS mixers, which use the available virtual toolkits, such as Virtual Menu, Virtual Panel and Virtual ShotBox. It has collaborated with OBS and networks such as NHK and the BBC.
› Panasonic: its KAIROS solution is fully compatible with baseband and IP signals, such as SDI, ST 2110 and NDI, in any combination without the need for external conversion. As a native IP system, KAIROS also supports PTP (Precision Time Protocol) synchronization and is ideal for remote video production in a fully IP-based environment.
› Vizrt: Known for its real-time graphics software, it has also developed remote control
platforms for the cloud. Viz Vectar and Viz Engine integrate into REMI workflows.
› LiveU: specialists in live video transmission over mobile (4G/5G) networks. Widely used in sports and news productions in REMI. A popular solution for its portability and ease of use.
› TVU Networks: Offers tools for remote production such as cameras, virtual switches, graphics control and IP communication. TVU Producer and TVU RPS (Remote Production System) are widely used by media in Latin America and Asia.
› AWS (Amazon Web Services) - its cloud services platform is the foundation for many modern REMI productions. It provides cloud processing, storage, editing, and distribution capabilities.
› Ross: with its Ross Production Cloud and Remote Production (REMI) technology.
› TSA (Telefónica Servicios Audiovisuales): in Spain, has been a pioneer in
applying REMI with fiber and IP networks, including in Champions League matches.
› vMix: a complete software solution for live video production and streaming. It creates, mixes, switches, records and live-streams professional productions on a Windows PC or laptop, supporting inputs including cameras, IP cameras, video files, images, NDI, SRT, virtual sets, titles, audio, instant replay, video calls, Zoom meetings and more.
› Nevion Virtuoso: It is a software-defined, virtualization-ready, standards-based media
node that can perform a variety of real-time functions for a wide range of applications, including IP contribution and wide-area media transport, IP production facilities, and converged LAN/WAN networks for remote and distributed production. It is a key component of Networked Live.
› Matrox Video: Monarch EDGE decoders offer new 4K remote production workflows on public networks.
AIMS (Alliance for IP Media Solutions), AMWA (Advanced Media Workflow Association), EBU (European Broadcasting Union), JTNM (Joint Task Force on Networked Media), SMPTE (Society of Motion Picture and Television Engineers) and VSF (Video Services Forum) are essential agents ensuring that technological innovations and workflows remain professional, complying with global agreements and standards such as IP, SMPTE ST 2110, JPEG XS, AES67, MADI, SMPTE ST 2022-2, ST 2022-6, ST 2022-7, PTP, JPEG 2000, TICO, FEC...
Each of these solutions and innovations contributes efficiency, flexibility, strategic planning, security and technological innovation within the specific workflow of a REMI production.
The development and innovation of professional video/film cameras is in constant pursuit of the newest technology. This allows all kinds of users to access capture equipment of a high technical and creative level within a very short period of time.
The different camera manufacturers have entered into intense competition in the search for high-level solutions at very affordable prices. Canon Inc., a Japanese company founded in 1937 and based in Tokyo, has long been a world reference for the products it offers to both the amateur and professional audiovisual sectors (broadcast/cinema).
We must highlight its product series called EOS CINEMA, a range in which the maker has been presenting a camera model to the audiovisual market year after year: EOS C300 (2011); EOS C500 (2012); EOS C700 (2017); EOS C300 Mark III (2020); EOS R5C (2022), among others. Surely some of them are already well known by our readers and camera operators who work in the sector, since they offer very successful solutions to the needs of their customers.
The newest of these models is the EOS C80, which has been available since November 2024.
It has taken us some time to get our hands on this model, but we must say it has given us a very good feeling since we took it out of its carrying case. It is compact and fully self-contained (i.e. it allows media recording and playback without any extra accessories).
At first glance it resembles a stills camera but, as you examine it in more detail and get acquainted with its performance, it gradually wins you over for broadcast video work (single- and multi-camera), filmmaking or cinema.
Lab test performed by Carlos Medina, Audiovisual Technology and Camera Advisor
At present it could not be otherwise, and Canon continues in this vein by presenting yet another new model:
The Canon EOS C80 has a fairly ergonomic camera body thanks to its integrated handle and strap, the possibility of placing a top handle (with shoe and microphone holder) and also to a very successful design in the placement of the menu buttons, control dials and/or the addition of a new navigation mini-joystick (on the back).
With dimensions of approximately 6.3 x 5.4 x 4.6 inches (160 x 138 x 116 mm) and a main body weight of approximately 1.3 kg, we have been pleasantly surprised by the good use of every area and part of the camera body, so its size is fully justified.
Let's take a closer look at some of the camera's external options:
› POWER switch. Located on top; it provides the CAMERA (on), OFF and lock positions.
› REC/SHOT button: Also at the top (above the handle).
› CAMERA/MEDIA button: When the camera is on, pressing this button toggles the camera between CAMERA mode and MEDIA mode.
› SLOT/SELECT button: allows us to choose between this camera's two card slots (A/B). We can work with several types of cards: SD, SDHC and SDXC; UHS (Ultra High Speed) speed class U3; and Video Speed Class V30, V60 and V90.
› Measuring tape hook and focal plane mark.
› Shortcut buttons (+/-) for the ND filter: one of three ND density levels can be selected and, if the extended ND range is activated, one of five. In short, this model has a wide range of ND filters, allowing better exposure control adaptable to different lighting conditions: from no ND up to 10 stops of ND.
› Customizable access buttons: a total of 13 buttons to which 73 functions can be assigned. From the most basic: WB, customizable white balance, peaking, waveform monitor (WFM), zebra, DISP, AF lock, AUTO IRIS, AUDIO STATUS...; to the most precise: autofocus frame, tracking modes, ISO/Gain mode, optical IS stabilization, IP transmission, on-screen markers, card slot selection, REC button...
› Control dials. The camera body incorporates two: one on the front and one on the back. Also a new navigation mini-joystick on the back.
› Audio Area: opening the LCD monitor uncovers this area: Input 1/Input 2 switch (LINE/MIC/MIC+48V), audio input level selectors for CH1 and CH2, and a MANUAL or AUTO audio selector.
› Connectivity Area: Located mainly on the left side of the camera body we can find a complete range of connections:
– BNC connector with SDI video output. Video: HD: SMPTE ST 292; 3G: SMPTE ST 424/425; 6G: SMPTE ST 2081; 12G: SMPTE ST 2082. Audio: SMPTE ST 299-1, SMPTE ST 299-2.
– Two 3-pin mini-XLR connectors for audio inputs 1 and 2.
– 3.5mm stereo minijack connector for mic (condenser microphones)/LINE (line devices) input.
– USB Type-C™ terminal equivalent to SuperSpeed USB (USB 3.1 Gen 1) for UVC video output.
– 2.5mm sub-mini stereo minijack remote terminal (for connecting the RC-V100 remote controller or other available remote controls).
– HDMI connector video/audio/TC output.
– Headset terminal under a 3.5mm stereo minijack.
Elsewhere in the camera body are: A BNC connector for Time Code input/output, an Ethernet terminal (RJ45 connector, compatible with 1000BASE-T with CAT5e cables, shielded twisted pair -STP-) and the DC IN power terminal (24 V DC).
› Menu Area: Placed in the central rear part and comprising a button giving direct access to the menu, a navigation dial (SET) and a cancel button.
› Other: Power battery compartment (models BP-A30N and BP-A60N, BP-A30 and BP-A60), mounting ring for RF lenses (with their respective lens contacts), holes for tripod screws (1/4") -on the side- and tripod screws 1/4" and 3/8" together with tripod anti-rotation pin -on the base of the camera body-, double slot for storage/recording cards, tally (green/red), top mount with 1/4" screw, multifunction shoe and air ventilation inlet (side)/ outlet (on the base).
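On the ND filters mentioned above: each stop of neutral density halves the incoming light, so the 10-stop maximum quoted is a substantial cut. The specific ladder below (2/4/6 standard, 8/10 extended) is our assumption based on Canon's usual practice in this series; the article itself quotes only the 10-stop maximum:

```python
# An ND filter of N stops transmits 1 / 2**N of the incoming light.
# The 2/4/6/8/10-stop ladder is an assumption, not from the review text.
for stops in (2, 4, 6, 8, 10):
    fraction = 1 / 2 ** stops
    print(f"ND {stops} stops: transmits 1/{2 ** stops} of the light ({fraction:.4%})")
```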
And we must highlight the 3.5-inch LCD touch monitor, with an approximate resolution of 2.76 million dots (1280 x RGB x 720), touch selection of the focus point and a touch user interface for shooting settings.
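The "2.76 million dots" figure follows directly from the panel geometry, since each pixel counts as three dots (R, G, B sub-pixels):

```python
# A display "dot" is one sub-pixel, so dots = width x height x 3 (RGB).
width, height, subpixels = 1280, 720, 3
dots = width * height * subpixels
print(f"{dots:,} dots ≈ {dots / 1e6:.2f} million")  # → 2,764,800 dots ≈ 2.76 million
```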
But it is not only the outside that lets us adapt to different working, handling and/or setup methods; inside the Canon EOS C80 we discover great features and technological innovations that make it a very successful solution for different environments (filmmaking, broadcast video, TV and multi-camera live events, advertising, cinema...).
First of all, we have a CMOS sensor (rolling shutter) with a stacked BSI (Back Side Illuminated) full-frame architecture (36.0 x 19.0 mm, 40.7 mm diagonal), in 6K (actual: 26.67 megapixels, 6202 x 4300; effective: 19.05 megapixels, 6008 x 3170). The decision to include a stacked BSI sensor offers several advantages: better light capture, less noise at high ISOs, better low-light performance, sharper images and better colors, much faster data readout, improved autofocus, better rolling-shutter behavior and fewer errors in slow-motion recordings. Quite a success, positioning this model in the audiovisual field with broadcast and cinematographic quality.
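The megapixel figures quoted can be checked directly from the pixel dimensions:

```python
# Megapixels = horizontal pixels x vertical pixels / 1e6
actual_mp = 6202 * 4300 / 1e6      # total photosites on the sensor
effective_mp = 6008 * 3170 / 1e6   # photosites actually used for the image
print(f"Actual: {actual_mp:.2f} MP, effective: {effective_mp:.2f} MP")
# → Actual: 26.67 MP, effective: 19.05 MP
```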
Secondly, the EOS C80 has a triple base ISO (800 / 3,200 / 12,800), which allows optimization in very different lighting conditions at capture time. Our checks gave good results with the 800 and 3,200 bases, producing very clean images and allowing a greater dynamic range and greater colour fidelity.
In this sense, the combination of the sensor type, the triple-base ISO response and the possibility of working with a logarithmic tonal curve, together with signal-processing parameters, means that the dynamic range of this camera reaches 16 stops (Canon Log 2: 1600% at ISO 800) and 14 stops (Canon Log 3: 1600% at ISO 800).
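Each step (stop) of dynamic range doubles the ratio between the brightest and darkest detail the sensor can record, so the quoted figures translate into contrast ratios as follows:

```python
# One stop = a factor of 2 in luminance, so N stops = 2**N contrast ratio.
for log_curve, stops in [("Canon Log 2", 16), ("Canon Log 3", 14)]:
    ratio = 2 ** stops
    print(f"{log_curve}: {stops} stops ≈ {ratio:,}:1 contrast ratio")
# → Canon Log 2: 16 stops ≈ 65,536:1
# → Canon Log 3: 14 stops ≈ 16,384:1
```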
On the other hand, Canon does not neglect optics when it comes to obtaining high-quality images, so the EOS C80 comes with an RF mount, which makes outstanding optical performance possible, working with real-time lens metadata and highly responsive AF/IS systems. In addition, by using a Canon mount adapter we can add EF, EF Cinema and PL lenses (with compatibility for Cooke /i Technology™ and for anamorphic lenses (x2.0/x1.8/x1.3)).
Another of this camera's innovations is the second-generation Dual Pixel CMOS AF II with AI, an autofocus technology that uses every pixel of the sensor to focus and capture images at the same time. Together with the EOS iTR AF X algorithm, which uses deep learning, we get faster and more up-to-date autofocus, smooth focus transitions when switching from one subject to another, and extremely accurate, responsive face, eye, head, body and animal tracking across the entire sensor.
All these novelties and good ideas from Canon materialize in a resulting 12-bit Cinema RAW Light image in 6K (6,000 x 3,164 pixels, from 1 to 29.97 fps, 553 to 639 Mb/s VBR) and 4,368 x 2,304 pixels (from 1 to 29.97 fps, 678 Mb/s VBR).
Worth highlighting is the fact that this camera can offer: 6K at 30 fps in Cinema RAW Light LT; 4K DCI and UHD up to 120 fps in Super35 mode and 2K up to 180 fps for slow motion effects.
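At these bitrates, storage fills quickly. A rough conversion from the quoted Mb/s figures to gigabytes per minute of recording (ignoring container overhead, and using decimal gigabytes):

```python
# Mb/s (megabits per second) -> GB per minute:
# divide by 8 (bits -> bytes), multiply by 60 (s -> min), divide by 1000 (MB -> GB).
def gb_per_minute(mbps: float) -> float:
    return mbps * 60 / 8 / 1000

for mbps in (553, 639, 678):
    print(f"{mbps} Mb/s ≈ {gb_per_minute(mbps):.1f} GB/min")
# → 553 Mb/s ≈ 4.1 GB/min, 639 Mb/s ≈ 4.8 GB/min, 678 Mb/s ≈ 5.1 GB/min
```

This is why the V60/V90 Video Speed Class cards mentioned earlier matter: sustained write speed, not just capacity, constrains these RAW modes.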
Not to forget that the combination of the image captured by its full-frame 6K sensor, together with a kit of professional optics and signal processing under exposure/colorimetry parameters, benefits greatly from the recording modes offered by the EOS C80. The result is excellent camera "RAWs" for broadcast, transmission and/or postproduction processes:
› RAW LT: Full Frame, 12-bit (6,000x3,164); Super 35mm (crop), 12-bit (4,368x2,304)
› RAW ST: Super 35mm (crop), 12-bit (4,368x2,304)
› XF-AVC YCC4:2:2 10-bit, intra-frame (4096x2160, 3840x2160, 2048x1080, 1920x1080)
› XF-AVC YCC4:2:2 10-bit, Long GOP (4096x2160,
3840x2160, 2048x1080, 1920x1080)
› XF-HEVC S YCC4:2:2 10-bit, Long GOP (4096x2160, 3840x2160, 2048x1080, 1920x1080)
› XF-HEVC S YCC4:2:0 10-bit, Long GOP (4096x2160, 3840x2160, 2048x1080, 1920x1080)
› XF-AVC S YCC4:2:2 10-bit, intra-frame (4096x2160, 3840x2160, 2048x1080, 1920x1080)
› XF-AVC S YCC4:2:2 10-bit, Long GOP (4096x2160, 3840x2160, 2048x1080, 1920x1080)
› XF-AVC S YCC4:2:0 8-bit, Long GOP (4096x2160, 3840x2160, 2048x1080, 1920x1080)
› Proxy: 4:2:0 10-bit, Long GOP (2048x1080, 1920x1080)
› Proxy: 4:2:0 8-bit, Long GOP (2048x1080, 1920x1080, 1280x720)
› JPEG still photo
In summary, we have professional recording formats with high recording speeds and with their respective metadata for different VFX and virtual production (VP) workflows.
These innovations and features of the EOS C80 allow us to fully enter a streamlined workflow prepared for professional environments that require 6K/4K. Thus, recording to card we can film in 6K/4.3K (RAW - Cinema RAW Light) or 4K (XF-AVC / XF-HEVC S / XF-AVC S); in 6K with an external recorder using HDMI RAW; or record in 4K from the camera's SDI OUT or HDMI OUT.
Also, we simultaneously get 2K proxy clips (XF-AVC /
XF-HEVC S / XF-AVC S) on an SD slot B card.
The EOS C80 enables a workflow using ACES, the color encoding system defined by the Academy of Motion Picture Arts and Sciences, with RAW data either on an external recorder (HDMI RAW) or on the internal SD card. We must highlight this camera's strength when working with logarithmic gamma curves and with the color spaces: Canon Log 2/C.Gamut, Canon Log 3/C.Gamut, Canon Log 3/BT.2020, Canon Log 3/BT.709, Canon 709/BT.709, BT.709 Wide DR/BT.709, BT.709 Standard/BT.709, PQ/BT.2020/ITU-R BT.2100, HLG/BT.2020/ITU-R BT.2100. Additionally, user LUTs and Look Files (compatible with 17-point or 33-point .cube files) can be applied in recording and, in CAMERA mode, monitored on the LCD with the display assistance function.
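The "17" and "33" figures for .cube files refer to 3D lookup tables: each color axis (R, G, B) is sampled at that many grid points, so the table size grows cubically:

```python
# A 3D LUT with N grid points per channel stores N**3 RGB entries;
# colors between grid points are interpolated.
for n in (17, 33):
    entries = n ** 3
    print(f"{n}-point 3D LUT: {entries:,} entries")
# → 17-point 3D LUT: 4,913 entries
# → 33-point 3D LUT: 35,937 entries
```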
In regard to the menus of the EOS C80, all users accustomed to previous Canon models will find each of the tabs/submenus and adjustable parameters very familiar. We now explain the menu system for those who will have an EOS series camera in their hands for the first time:
› Camera configuration: for example, iris, shutter (1/1 to 1/2000), ISO (100, 160, 200, 400, 640, 800, 1,600, 2,500, 3,200, 6,400, 12,800, 25,600, 51,200, 102,400), gain (-6 dB, -3 dB, -2 dB to 42 dB, 45 dB, 48 dB, 51 dB, 54 dB), shutter angle (11.25° to 360°, depending on frame rate), light metering, AWB, continuous AF, AF tracking subject (people/animals), eye/face/head/body detection, color bars (SMPTE, EBU, ARIB), optical stabilization, among others.
› Custom Picture: allows choosing between C1: Canon 709, C2: Canon Log 2, C3: Canon Log 3, C4: BT.709 Wide DR, C5: BT.709 Standard, C6: PQ, C7: HLG, C8: EOS Standard, C9: EOS Neutral and C10: User10 to C20: User20.
› Media/Recording
Configuration: The choice of card slots (A/B),
sensor mode (Full Frame/ Super 35mm cut), system frequency (59.94 Hz, 50.00 Hz, 24.00 Hz), recording format, recording mode (Normal Recording, Slow and Fast Camera, S&F Clip/Audio -WAV-, Pre-recording -3 seconds-, Continuous Recording on Main A card and uninterrupted B card, Frame Recording, Recording Interval), Proxy Recording, Secondary Recording, Relay Recording, Double Slot Recording (A+B). Also, the metadata settings and HDMI settings (RAW and Time Code).
› Audio Configuration: Selection of the audio input in CH1/CH2 and CH3/CH4, recording level, Input limiter, configuration of the multifunctional shoe (shoe/wireless), tone (1 kHz: –12 dB, –18 dB, –20 dB, Off), configuration of headphones, speaker, monitor channels and HDMI OUT channels, among other parameters.
› Monitor Configuration: Brightness, Contrast, Color, Sharpness, LCD Embedded Display
Luminance, B/W Image Output, and Detailed DISP Display/Setup on LCD, SDI, and HDMI, including the highlight function of the Display Assist, for recordings that have a Custom Picture applied.
› Assistance functions: very important for enhanced control of the recording, such as: focus guide, peaking (LCD, SDI and HDMI), false color, zebra, WFM (LCD, SDI and HDMI), aspect ratio markers (4:3, 13:9, 14:9, 16:9, 1.375:1, 1.66:1, 1.75:1, 1.85:1, 1.90:1, 2.35:1, 2.39:1, 9:16, 4:5, 2:1, 1:1, Custom) and safety area marker, among others.
› Network settings: These cover the activation and configuration of the various connection modes: FTP Transfer, IP Transmission, Remote Browser, Canon App, XC Protocol, CV Protocol, and Frame.io.
› Customizable buttons.
› System configuration: From time zone, date and time, and language, to the Remote terminal, SDI output signal (4096x2160P/3840x2160P, 2048x1080P/1920x1080P, 1920x1080P, 1920x1080i(PsF), 1280x720P), HDMI output signal (4096x2160P/3840x2160P, 1920x1080P, 1920x1080i, 1280x720P), Time Code mode (Preset/Regen), Time Code Run (Rec Run/Free Run), User Bit Type, the functionality of the front and rear control dials, button lock, the response level of the touch screen, and also the tally (indicator lamp) settings, fan mode and speed, and the DC IN (V) warning, among others.
› Custom Menus (My Menu).
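The shutter-angle range listed above depends on the frame rate because an angle is simply a fraction of the frame period: exposure time t = (angle / 360) / fps. A minimal sketch of that relationship (the function name is ours, for illustration only, not part of Canon's software):

```python
def shutter_exposure_seconds(angle_deg: float, fps: float) -> float:
    """Exposure time implied by a shutter angle at a given frame rate:
    t = (angle / 360) / fps."""
    return (angle_deg / 360.0) / fps

# A 180-degree shutter at 25 fps gives the classic 1/50 s exposure:
print(shutter_exposure_seconds(180, 25))    # 0.02
# The minimum 11.25-degree angle at 25 fps is a very short 1/800 s:
print(shutter_exposure_seconds(11.25, 25))  # 0.00125
```

This is why the selectable angle range is tied to the system frequency: the same angle yields a different exposure time at each frame rate.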
Canon remains a reference in the audiovisual sector, always pursuing the broadest compatibility so that its products fit well into professional workflows. The camera is compatible with a range of software tools: Cinema RAW Development, XF Utility, EOS VR Utility, the camera's remote app, the RAW plugin for Avid Media Access and Final Cut Pro, the XF plugin for Avid Media Access, the EOS VR plugin for Adobe Premiere Pro, the CV metadata plugin for Adobe After Effects, the Live Link plugin for Unreal Engine, the CV metadata extraction tool, MP4 Join Tool, HEVC Activator, and Frame.io Camera to Cloud.
We have not had time to test each and every one of the possibilities offered by the EOS C80, but we stress that Canon leaves a wide range of options and configurations open for camera operators and cinematographers to find the best aesthetics, texture, and color in their audiovisual productions.
Finally, we must thank the Canon Spain staff for giving us the opportunity to get to know and work with the EOS C80, together with a case of four Canon CN-R lenses, with which we were able to enjoy the process and obtain excellent visual results.
When a camera or any new piece of equipment is launched, we tend to compare it with what is already on the market, without realizing that each manufacturer's solution is designed for specific needs and environments. With the EOS C80, Canon is keenly aware of where today's trends, technologies, and types of audiovisual production (as of 2024) are heading.
In short, the Canon C80 is a tool ready to work in many professional environments (cinema, TV, advertising, documentaries, live events...); but above all, it delivers quality images with a cinematic/broadcast look, both for work done by a single person (filmmakers or low-budget productions) and within complex setups: a second camera unit on high-level productions, or a multi-camera TV or live production.
Canon makes it easier. EOS C80: standalone or in a team?