Camera special:
Mapping the current landscape
BROADCASTERS
Nordic media ecosystem: Strategies from SVT, TV 2, RÚV, and NRK
INDUSTRY VOICES
Faraz Qayyum
Zero Density
ANALYSIS
Known technologies, new momentum: 2025 so far



With summer upon us, it’s the perfect time to reflect on the key news and trends that have shaped the first half of the year — as we’ve been covering in detail issue after issue.
Broadly speaking, we’ve observed that technologies such as virtual production and IP, which have been present in the market for years, are now gaining definitive momentum. At the same time, various streaming platforms are undergoing ambitious technological upgrades, further consolidating their prominence over traditional broadcast.
One of the key areas we’ve focused on is virtual production. With the pandemic serving as a turning point — and propelled by recent advances in LED displays, graphics engines and artificial intelligence — this technology has firmly established its dominant role in the industry during the first half of the year. Cost savings and impressive creative possibilities are among its main advantages.
Beyond content production, the television industry is also undergoing significant infrastructure changes, with the shift to fully IP-based systems gathering pace. After speaking with major
broadcasters such as Italy’s RAI, Sweden’s SVT and Denmark’s TV 2, among others, we’ve observed a strong commitment to IP (although each broadcaster is at a different stage of the transition), and a more cautious attitude towards UHD.
Hybrid approaches to cloud technology and growing interest in 5G networks are among the strategic priorities shaping current roadmaps. Meanwhile, competition from global streaming platforms continues to emerge as a major challenge.
Finally, we’d like to end this brief overview with a few words on radio. While its margin for innovation is more limited — and its transition to the digital realm, especially in terms of distribution, is still far behind that of television — we’re also seeing signs of technological acceleration in this space.
In the issue you’re holding, we remain committed to delivering a comprehensive snapshot of the industry. This time, we’ve placed a special focus on the world of cameras, which takes center stage in our in-depth feature. We hope you enjoy it!
Editor in chief
Javier de Martín editor@tmbroadcast.com
Creative Direction
Mercedes González mercedes.gonzalez@tmbroadcast.com
Chief Editor
Daniel Esparza press@tmbroadcast.com
Editorial Staff
Bárbara Ausín
Carlos Serrano
Key account manager Patricia Pérez ppt@tmbroadcast.com
Administration Laura de Diego administration@tmbroadcast.com
TM Broadcast International #144 August 2025
Published in Spain ISSN: 2659-5966
TM Broadcast International is a magazine published by Daró Media Group SL Centro Empresarial Tartessos Calle Pollensa 2, oficina 14 28290 Las Rozas (Madrid), Spain Phone +34 91 640 46 43
Known technologies, new momentum: Broadcast and media trends defining 2025
A snapshot of the Nordic media ecosystem: Strategies from SVT (Sweden), TV 2 (Denmark), RÚV (Iceland), and NRK (Norway)
We compare the technological landscape of these four public broadcasters, review their most prominent upgrade initiatives, and highlight a number of shared approaches
SPECIAL
For this edition, we’ve structured the report into three main sections. First, we’ve selected flagship models from leading manufacturers. Second, we’ve focused on cabled camera systems—one of the most clearly emerging trends shaping the market. Lastly, we also explore camera supports and accessories—equally essential elements for ensuring optimal performance.
› A look at the industry’s leading models
› Overhead cable camera systems
› Evolving camera support: Market trends and buyer priorities
› Artificial intelligence in camera robotics adds value to production workflows
Faraz Qayyum, Head of Zero Density Academy and Solution Manager for Real-Time Motion Graphics.
“We merge multiple industries such as visual effects, gaming engine and broadcasting together so our users can imagine the next big thing”
From 14-17 July 2025, Media Rights-Holders (MRHs) gathered in Los Angeles to attend the OBS World Broadcaster Briefing (WBB) for the Olympic and Paralympic Games Los Angeles 2028 (scheduled to take place from 14-30 July 2028 and 15-27 August 2028, respectively). This meeting aimed to provide an opportunity for Olympic Broadcasting Services (OBS), the LA28 Organising Committee of the Olympic and Paralympic Games, and MRHs to convene in person, tour competition venues and assess their proximity to the future International Broadcast Centre (IBC) while also reviewing the progress of Games preparations and discussing the innovative broadcast coverage plan, as the Organisation has claimed in a statement.
In less than three years, Los Angeles will become the third city, after Paris and London, to host the Olympic Games for a third time (1932, 1984 and 2028) while also welcoming its first ever Paralympic Games in 2028. The LA28 Games are set to be the most expansive in history, featuring more sports, disciplines, and events than ever before.
The journey began at the 2028 Stadium in Inglewood, which will co-host the Olympic Opening Ceremony alongside the LA Memorial Coliseum, as well as host swimming events. The tour continued to the Inglewood Dome, a sleek, next-generation arena purpose-built for immersive fan experiences and set to host basketball during the Olympic Games.
Next came a drive-by of the legacy buildings under construction for the IBC, located in Hollywood Park Studios in Inglewood. The tour then moved to Exposition Park Stadium, a venue that will welcome two new Olympic sports, flag football and lacrosse, before heading to the legendary LA Memorial Coliseum, a symbol of Olympic heritage that will host athletics and para athletics (track and field), the Olympic Opening Ceremony (together with the 2028 Stadium) as well as the Olympic and Paralympic Closing Ceremonies.
At the DTLA Arena, broadcasters previewed the future gymnastics venue for the Olympics and wheelchair basketball for the Paralympics. The day concluded at the LA Convention Center located next door, the host of a mix of Olympic and Paralympic sports including boccia, fencing and wheelchair fencing, taekwondo and para taekwondo, judo and para judo, wrestling, and table tennis and para table tennis.
Together, these venues reflect Los Angeles’ bold vision: to deliver a technologically advanced, more sustainable, and unforgettable Olympic and Paralympic Games experience, entirely through the use of existing and purpose-built temporary competition venues.
On 16 July, MRHs attended the Main Session of the WBB held in Downtown LA. The session opened with remarks from OBS Chief Executive Officer (CEO) Yiannis Exarchos, joined by LA28 CEO Reynold Hoover.
Emphasising the identity of Los Angeles and the ambition of OBS to push the boundaries of broadcast innovation, CEO Yiannis Exarchos reaffirmed OBS’s commitment to delivering coverage that will focus on capturing the spirit and energy of the LA28 Olympic
and Paralympic Games: “In Los Angeles, the Olympic spirit is not just a tradition, it is part of the city’s DNA. As a global hub of technology, storytelling, and sport, LA28 offers us an unparalleled opportunity. Together with our broadcast partners, we are committed to bringing bold ideas to life and redefining what it means to experience the Games – deeper, more connected, more human”.
Building on this momentum, LA28 CEO Reynold Hoover highlighted the Organising Committee’s strong progress and readiness: “With less than three years to go, our strong and growing LA28 team is well
positioned to move forward with purpose, working closely with OBS, MRHs, and all other stakeholders to deliver an exceptional Games. We’ve made significant strides this year: finalising venue plans, securing eight major commercial partners, publishing the Olympic competition schedule, and announcing our game-changing athlete quota. For the first time in the history of the Olympics, more women will compete in the Games than men. With LA’s world-class venues and vibrant energy, we have a powerful stage to showcase the city’s spirit and the essence of the Movement”.
ROE Visual has announced its involvement in Studio Ulster, one of Northern Ireland’s most advanced virtual production studios. The studio selected ROE Visual’s LED solutions to build several LED volumes in its complex, as the company has claimed in a statement.
Officially opened in June 2025, Studio Ulster is a facility located within the Belfast Harbour Studios complex. Developed in partnership between Ulster University, Belfast Harbour, and Northern Ireland Screen, with partial funding from the Belfast Region City Deal, the 75,000-square-foot studio is designed to support a full spectrum of content production, ranging from television and feature films to immersive gaming and animation.
Studio Ulster is home to three virtual production stages, respectively named VP1, VP2, and VP3, and each is designed with in-camera VFX at its core. All three studios are equipped with ROE Visual’s Ruby-C curved 2.2 LED panels and CB3 LED ceiling panels, while the VP3 studio features a Black Marble BM4 LED floor; Brompton Technology LED processors power all LED panels.
VP1 is among the most advanced virtual production stages, featuring the Nant Studios-certified Dynamic Volume System.
This 61-meter-wide seamless LED canvas is constructed from nine 8-meter-high wall pods and five motorized ceiling pods, creating a configurable volume of 327.5 square meters.
With rapid reconfiguration options, productions can adapt the LED environment in a fraction of the time usually required. It enables real-time, immersive environments for film and television production.
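To get a rough sense of the scale of a canvas like VP1’s, the pixel dimensions implied by the quoted measurements can be estimated from the 2.2 mm pixel pitch of the Ruby-C 2.2 panels. The figures below are back-of-the-envelope approximations derived from the numbers above, not manufacturer specifications:

```python
# Approximate pixel dimensions of the VP1 LED canvas,
# assuming a uniform 2.2 mm pixel pitch (Ruby-C 2.2 panels).
pitch_mm = 2.2
wall_width_m = 61.0   # quoted width of the seamless canvas
wall_height_m = 8.0   # quoted height of the wall pods

pixels_wide = round(wall_width_m * 1000 / pitch_mm)   # ~27,700 pixels
pixels_high = round(wall_height_m * 1000 / pitch_mm)  # ~3,600 pixels
total_megapixels = pixels_wide * pixels_high / 1e6    # on the order of 100 MP

print(pixels_wide, pixels_high, round(total_megapixels, 1))
```

A canvas on the order of a hundred megapixels explains why such volumes are driven by multiple synchronized render nodes and dedicated LED processors rather than a single graphics engine.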
VP2 features a continuous LED wall-to-ceiling “cap” design, making it the perfect environment for car processing work and moderate-scale virtual productions. Like VP1, it is equipped with ROE Ruby-C 2.2 and CB3 ceiling panels, delivering color-accurate visuals across a digital surface.
VP3 – The CoSTAR Screen Lab
VP3 hosts the CoSTAR Screen Lab, led by Ulster University and part of the UK-wide CoSTAR initiative. This space forms part of Europe’s largest virtual production research and development network, fostering innovation and skills development across the creative industries.
VP3 features an integrated LED floor utilizing ROE Black Marble BM4, enabling extended reality (XR) workflows—ideal for broadcast, live performance, and experimental virtual sets.
Together, these stages aim to support everything from real-time world-building using Unreal Engine to high-quality 2D video playback.
“Virtual production has truly come of age at Studio Ulster”, affirmed Richard Williams, CEO
of Northern Ireland Screen. “This facility pushes the boundaries of what’s possible on screen and enables directors to bring any imagined world to life, right here at Belfast Harbour”.
Studio Ulster is also home to one of five national CoSTAR Screen Labs, part of a UK-wide initiative supported by the Arts and Humanities Research Council (AHRC). With this research component integrated directly into the studio, Studio Ulster looks forward to offering a unique blend of commercial production capabilities and academic innovation.
“We’ve taken the spirit of Belfast’s shipyards—precision, ambition, and pride—and
applied it to the future of storytelling,” explained Professor Declan Keeney, CEO of Studio Ulster. “ROE Visual’s LED volume plays a foundational role in helping us offer world-class tools to visionary creatives”.
One of the first productions will be a BBC Northern Ireland drama series about the Titanic, produced by Stellify Media, which will utilize the LED volume and digital production tools to bring the story to life in real-time.
“At ROE Visual, we are proud to contribute to the global evolution of virtual production”, added Olaf Sperwer, Business Development Virtual Production at ROE Visual. “Studio Ulster
showcases how technology like our BP2V2 panels can power a new era of storytelling—combining creativity, flexibility, and realism like never before”.
“We’re honored to collaborate once again with NantStudios”, concluded Frank Montero, Managing Director for ROE Visual US, “First on their flagship facility in Melbourne, Australia, and now with this impressive new studio complex in Ulster, Ireland. This partnership reflects our shared commitment to pushing the boundaries of virtual production. We look forward to continuing our work with NantStudios and supporting the industry’s ongoing innovation”.
Zero Density has introduced a new workflow that streamlines everything from virtual sets to on-air graphics in one unified platform. The latest innovation, Lino, is a template-based broadcast graphics workflow that aims to revolutionize how broadcast graphics are designed and controlled, as the company has claimed in a statement.
Using Unreal Engine 5, Lino is designed to let users create, manage, and control 2D/3D overlays, video walls, stingers, AR, XR, and virtual set graphics from a single solution. Built on the principle of “design once, use anywhere”, it eliminates redundancy and cuts design time dramatically.
In a live demonstration, Zero Density turned a compact LED stage into a full-scale virtual sports studio. Starting with what audiences see first — on-air graphics like lower thirds — Lino showcased how template-based elements built in Unreal Engine can be controlled through Reality Hub. Operators can execute all types of graphics with familiar commands such as “take in” and “take next”, but now from a modern web-based interface.
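The take-in/take-next rundown model described above can be sketched in a few lines of Python. This is a hypothetical illustration of the general concept only, not Zero Density’s actual API; all class and method names here are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Template:
    """A reusable graphic template (e.g. a lower third) with fillable fields."""
    name: str
    fields: dict = field(default_factory=dict)

class Rundown:
    """Minimal take-in / take-next controller over an ordered list of templates."""
    def __init__(self, items):
        self.items = list(items)
        self.cursor = -1   # nothing on air yet
        self.on_air = None

    def take_in(self, index=0):
        """Put a specific rundown item on air."""
        self.cursor = index
        self.on_air = self.items[index]
        return self.on_air

    def take_next(self):
        """Advance to the next item, as an operator would from the control UI."""
        return self.take_in(self.cursor + 1)

rundown = Rundown([
    Template("lower_third", {"name": "Jane Doe", "role": "Analyst"}),
    Template("score_bug", {"home": 2, "away": 1}),
])
first = rundown.take_in()     # lower third goes on air
second = rundown.take_next()  # score bug replaces it
```

The point of the pattern is that designers build templates once, while operators only ever fill fields and step through a rundown, regardless of whether the output is an overlay, a video wall, or an AR element.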
Along with overlays, Lino unifies playout for AR, XR, video walls, and virtual set graphics, all within the same control interface. It is designed to support custom-sized high-resolution video walls. Additionally, AR graphics are also template-based, come from Unreal Motion Design assets, and are controlled using the same rundown.
This approach aims to eliminate fragmentation, reduce learning curves, and empower broadcasters to deliver better visuals.
EVS has announced that it has been selected by HBS LLC to provide core broadcast technology for an international football tournament taking place in 2026 across the United States, Canada, and Mexico, as the company has claimed in a statement.
As part of this agreement, which the company affirms will contribute to its Big Event Rental (BER) segment revenues in 2026, EVS will provide a turnkey solution encompassing broadcast and media equipment, as well as services to support live replay operations, logging, asset management and file content distribution.
The deployment will leverage its live production and replay solution LiveCeption, its live production asset management solution MediaCeption, and the VIA MAP platform.
Factum Radioscape has announced the launch of an experimental DAB+ trial in collaboration with Radio Mais, part of the Mais Media Brazil group, in Florianópolis, the capital of the Brazilian state of Santa Catarina. The trial begins with the low-powered, localised transmission of three radio services (Radio More, K Radio, and P7 Nosta), all broadcast using Factum Radioscape’s MultiMuxa system. MultiMuxa is engineered with an interface designed to let community and regional radio stations operate digital radio with ease, without requiring advanced technical expertise, as the company has claimed in a statement.
Factum Radioscape is working to lower the barrier to digital broadcasting, making
tools like MultiMuxa accessible, scalable, and easy to deploy. As part of the technical trial, it is also supplying its Observa Analyser. The monitoring software offers real-time analysis and verification of the DAB+ services being transmitted, including features such as validation of service logos (SLS), dynamic radio text, and audio stream monitoring.
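For context on how a small multiplex like this is dimensioned: a DAB ensemble in Transmission Mode I carries 864 capacity units (CUs), and each subchannel consumes CUs according to its bitrate and error-protection level. The sketch below uses standard figures from the DAB specification (ETSI EN 300 401) to show the arithmetic; the bitrates and protection level chosen are illustrative assumptions, not details of the Florianópolis trial:

```python
# DAB ensemble capacity arithmetic (figures from ETSI EN 300 401).
# For EEP set-A protection, a subchannel uses a fixed number of
# capacity units per 8 kbps of audio bitrate:
CU_PER_8KBPS = {"EEP-1A": 12, "EEP-2A": 8, "EEP-3A": 6, "EEP-4A": 4}
ENSEMBLE_CUS = 864  # total CUs in Transmission Mode I

def subchannel_cus(bitrate_kbps, protection="EEP-3A"):
    """Capacity units consumed by one audio subchannel."""
    return (bitrate_kbps // 8) * CU_PER_8KBPS[protection]

# Three hypothetical 72 kbps DAB+ services at the common EEP-3A level:
used = 3 * subchannel_cus(72)    # 3 * 54 = 162 CUs
remaining = ENSEMBLE_CUS - used  # headroom for adding more services later
print(used, remaining)
```

The takeaway is that a trial of three services occupies only a small fraction of an ensemble, which is why DAB+ multiplexes are attractive for growing community and regional line-ups.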
James Waterson, Sales Manager at Factum Radioscape, commented: “As Brazil actively considers which digital radio path to follow, we’re proud to demonstrate how DAB+, and especially our MultiMuxa platform, can deliver high-quality, user-friendly solutions that are scalable for national deployment.
Supporting stations like Radio Mais shows how DAB+ radio can empower both commercial and community broadcasters across South America”.
Yan, Station Co-ordinator at Radio Mais, added: “I really appreciate the assistance from the Factum Radioscape team. The MultiMuxa and Analyser software are working well, and I’m happy with the results. The transmission is also running smoothly without any interruptions or dropouts, which is great. We’re broadcasting multiple test services to evaluate how they perform in practice. DAB+ receivers and adapters are becoming easier to find online in Brazil, and prices are now more accessible than they used to be – especially compared to DRM or HD Radio equipment, which remains more limited and costly”.
On May 3rd, 2025, the sixth Patagonia Running Festival took place inside Torres del Paine National Park in southern Chile. One of South America’s most iconic races, the event featured multiple distances—from 5K and 11K to half marathon (21K), full marathon (42K), and a 50K ultra—attracting nearly 500 participants from around the world.
The park, a UNESCO Biosphere Reserve often referred to as the “Eighth Wonder of the World,” boasts a unique natural landscape and harsh climate that create an exceptionally challenging environment for live broadcasting.
PlayMedia Chile, an audiovisual technology company specializing in remote-area live productions, was appointed by the event organizer, The Massif, as the official broadcaster. For the first time ever, they broadcast live from inside the national park.
To deliver a long-duration, full-course, multi-camera HD livestream without interruption, PlayMedia Chile partnered with DuoChile and Kiloview to deploy an IP-based encoding and transmission solution. This project posed significant technical challenges and demonstrated the reliability and flexibility of Kiloview products in extreme conditions.
Broadcasting live from Torres del Paine meant tackling some of the toughest conditions imaginable— from signal limitations and harsh weather to equipment constraints and budget restrictions. The production team faced numerous obstacles that pushed both technology and field operations to their limits:
› Unstable connectivity across large portions of the course, including complete signal loss in valleys and lowlands.
› Harsh environmental conditions, with temperatures dropping to –8°C, strong winds, and rugged terrain requiring portable and durable gear.
› Simultaneous signal handling from multiple sources, including fixed cameras, drones, and
mobile teams tracking runners across all race categories.
› Workflow complexity, demanding highly compatible and adaptive encoding solutions.
› Limited budget, requiring compact, energy-efficient, and rapidly deployable devices for on-site use.
To succeed, the team needed an efficient, flexible, and reliable solution to ensure signal stability, multi-source video support, and fast deployment—all within budget.
To overcome the challenges, PlayMedia Chile implemented Kiloview’s end-to-end IP-based solution, supported by an on-premise KiloLink Server Pro for signal aggregation, remote management, and stable transmission.
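Conceptually, bonded transmission of this kind splits the outgoing stream across whatever links are currently alive and reassembles it server-side, so a dropout on one modem does not interrupt the feed. The sketch below illustrates only the scheduling idea; it is not Kiloview’s actual protocol, and all names and figures are invented:

```python
import itertools

def schedule_packets(packets, links):
    """Assign each packet to a live link, weighted by reported bandwidth.

    `links` maps link name -> available bandwidth in Mbps (0 = link down).
    Returns {link_name: [packets]} for the sender to transmit in parallel.
    """
    live = {name: bw for name, bw in links.items() if bw > 0}
    if not live:
        raise RuntimeError("no usable links")
    total = sum(live.values())
    # Build a weighted round-robin order proportional to bandwidth.
    order = [name for name, bw in live.items()
             for _ in range(max(1, round(10 * bw / total)))]
    assignment = {name: [] for name in live}
    for packet, link in zip(packets, itertools.cycle(order)):
        assignment[link].append(packet)
    return assignment

# Two cellular modems up, a third link down (e.g. in a valley with no coverage):
links = {"modem_a": 30, "modem_b": 10, "link_c": 0}
out = schedule_packets(list(range(100)), links)
```

Real bonding systems add sequence numbers, retransmission, and continuous per-link bandwidth probing on top of this basic split, which is what keeps a stream alive when individual links fade in and out along a course like this one.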
Virtual production and IP, long-standing technologies in the market, have experienced a decisive boost. Streaming, meanwhile, is further consolidating its prominence over traditional broadcast
By Daniel Esparza
With summer upon us, it’s the perfect time to reflect on the key news and trends that have defined the first half of the year. Broadly speaking, we’ve observed that technologies such as virtual production and IP, which have
been present in the market for years, are now gaining definitive momentum. Meanwhile, various streaming services are undergoing ambitious technological upgrades, further consolidating their prominence over traditional broadcast.
One of the main topics we’ve focused on is virtual production. With the pandemic as a turning point — and propelled by recent advances in LED displays, graphics engines, and artificial intelligence — this technology has firmly established its
dominant role in the industry during the first half of the year.
Having moved past its initial hype, virtual production has reached a level of maturity that allows it to be deployed where it truly adds value — not as a one-size-fits-all solution. Among its main advantages are cost savings and vast creative possibilities.
To understand its real impact on workflows and how it’s currently being used in advertising, television, and film, we compiled a special feature including key
insights from the technical directors of three pioneering production companies — Quite Brilliant, Dock10, and Dimension. Quite Brilliant, in fact, has just launched a new virtual production facility located at the iconic Twickenham Film Studios in London.
Beyond content production, the television industry is also undergoing significant infrastructure changes, with the shift to fully IP-based systems gathering pace. After speaking with major broadcasters such as Italy’s RAI, Sweden’s SVT and Denmark’s TV 2, among others, we identified a strong commitment to IP, albeit with each broadcaster at a different stage — and greater skepticism around UHD.
Their hybrid approach to cloud technology and growing interest in 5G networks are among the key strategic directions. Competition with global streaming platforms stands out as a major challenge.
In this regard, internet-based content distribution has emerged as one of the key forces driving change, bringing both challenges and new opportunities for broadcasters in their relationship with audiences.
Within this segment, the rise of FAST (Free Ad-Supported Television) channels has particularly caught our attention. That’s why, in recent issues, we’ve explored specific case studies such as Rakuten TV or the American network NBC, speaking directly with top executives.
From a business perspective, the sustainability of OTT services is built on three main pillars: monetization, advertising, and data analytics. From a technological standpoint, artificial intelligence — which we could hardly fail to mention — is playing a key role in content personalization and advertising strategies.
More broadly, it’s worth noting that the broadcast and media
sector is taking on a pioneering role with AI, despite being a traditionally cautious industry when it comes to adopting new technologies into its workflows.
And to conclude this analysis, let’s take a look at radio. While its room for innovation is more limited and its transition to the digital world — particularly in terms of distribution — still lags far behind television, we’ve also observed signs of accelerated technological progress.
Indeed, digital radio has gained momentum in the last six months in countries such as Turkey, Greece, Ireland, Bosnia, and Thailand. That’s why we recently published a global overview of its current state, highlighting the main challenges and opportunities, with the help of WorldDAB — an organization promoting the adoption of digital radio based on the DAB standard, predominant in Europe and the Asia-Pacific region.
We compare the technological landscape of these four public broadcasters, review their most prominent upgrade initiatives, and highlight a number of shared approaches
We observe a clear and collective move toward IP-based infrastructures, although each broadcaster is at a different stage. In contrast, there’s more hesitation around UHD, a technological leap that several broadcasters are not currently prioritizing. A hybrid approach to cloud technology and growing interest in 5G networks are also among the key topics explored. Meanwhile, the competition from global streaming platforms stands out as a major challenge
By Daniel Esparza
Software-based systems, “online-first” strategies, infrastructures built on IP... The broadcast world is speeding up its innovation pace, and public broadcasters are closely watching what their regional peers are doing to draw inspiration for their own next steps. While we can identify certain shared directions, each broadcaster is defining a unique strategy based on its technical evolution, specific needs, and national context.
This means there are both parallels and divergences when analyzing the technological horizon across different territories. In this report, we focus on a specific region: the Nordics. After interviewing technical leads from the public broadcasters of Sweden (SVT), Denmark (TV 2), Iceland (RÚV), and Norway (NRK)—full interviews available via the linked articles—we now aim to provide an overarching view of the media ecosystem in this region. We also attempted to contact Finland’s public broadcaster (Yle) to include
them in this series, but did not receive a response. Readers interested in learning more about Yle can refer to our past interview with Janne Yli-Äyhö, its CTO, available through the following link.
SVT enters a new era
First, we’ve identified that all four broadcasters we spoke with have either recently completed or are currently undergoing ambitious technological transformation plans. SVT stands out with the recent implementation of NEO—a new software-based infrastructure for communication, distribution, and production.
“This transition is like moving from a gas engine to an electric one,” SVT’s CTO, Adde Granberg, told TM BROADCAST. “It is the most significant change I have witnessed in my 32 years in the industry. We still have our broadcast vehicles running, but now, the engine—the core of our production infrastructure— is completely new and built on proven communications technologies.”
The broadcaster has been working on this project for the past five years and officially launched it this spring. “I believe this is the biggest step SVT has taken towards software-based production in an industry traditionally reliant on hardware infrastructure,” Granberg explained.
According to SVT’s technical director, one of the main challenges was convincing both internal teams and industry peers that hardware could, to a large extent, be replaced by software—though a core infrastructure will always be needed. “We need to rethink how we set up production, how we operate, and how we design production control rooms. We also need to reconsider the traditional mindset around production workflows,” he added.
This belief in software-defined workflows is shared by TV 2. Morten Brandstrup, Head of News Technology at the Danish public broadcaster, told TM BROADCAST that the migration from traditional broadcast
models to software-defined infrastructures represents the industry’s main axis of transformation.
When asked about their stance on cloud adoption, he stated: “For me, it doesn’t matter much whether the servers are in the cloud or on-premises. What really matters is that production is software-defined. The physical location—whether in the cloud, on-premises, or in our own VM—is not a big deal for us. What’s more important is transitioning to software-defined production, and that’s what’s truly interesting.”
Cloud usage across the different broadcasters is another area we explored in depth. All of them agree that a hybrid approach best suits their operational needs. This was underscored by Pål Nedregotten, CTO of Norway’s NRK: “Cloud services are a crucial part of our technological setup, although we’ve experienced that the cost benefits aren’t always there, and that sometimes it makes sense to retain a service on-prem.”
“It’s very much a question of utilising cloud offerings in the right way, for the right kinds of services,” Pål Nedregotten added.
“We’re still planning to retain a significant presence in hosted data centres for our core production processes. The key issue is finding the right balance and flexibility—and making informed choices as the technology develops and opportunities arise.”
A similar perspective was shared by Icelandic broadcaster RÚV. “We’re confident in both cloud and on-premises technologies, and for us, it’s not about choosing one over the other,” said Gísli Berg, Head of Production and Marketing, and Hrefna Lind Ásgeirsdóttir, Director of Digital Strategy, in a joint interview with TM BROADCAST. “We take a hybrid approach that allows us to stay flexible, resilient, and cost-effective.
The choice depends on the system’s purpose, how critical it is, and what level of control or scalability is needed.”
National circumstances also play a role. In Iceland, the limited presence of global cloud providers shapes decision-making.
“That makes us think carefully about what should be hosted locally and what works better in the cloud, especially when considering latency, availability, and disaster
scenarios,” Gísli Berg and Hrefna Lind Ásgeirsdóttir explained.
With this context in mind, RÚV outlined the core of their hybrid strategy: “Most of our production workflows are still on-prem, where we value the stability and control—particularly when it comes to managing software updates. On the other hand, we use cloud-based tools for our web infrastructure and office systems, and they’ve proven essential
in handling peak traffic moments, such as during volcanic eruptions.”
Major upgrade initiatives at NRK, RÚV, and TV 2
While we’ve already covered SVT’s NEO platform, it’s far from the only major upgrade initiative underway in the Nordic public broadcasting sector. Norway’s NRK is currently undergoing a full transition to an IP-based infrastructure aligned
with the SMPTE 2110 standard. This new architecture is designed around highly resilient, shared data resources and conceived for remote accessibility.
“We are in the process of standardizing our systems and have developed a working Minimum Viable Product (MVP) for this new IP-based infrastructure,” NRK’s CTO told us. “This shift also necessitates a significant upgrade in our workforce’s skillset, moving from traditional broadcast engineering to software and IP-based integration skills.”
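One practical consequence of moving to ST 2110 is that network capacity must be planned around uncompressed essence flows rather than compressed contribution feeds. A rough estimate of the raw video bitrate for a single flow is simple arithmetic (this is a back-of-the-envelope sketch; real ST 2110-20 streams add RTP and packet overhead on top of the active-video figure):

```python
def st2110_video_gbps(width, height, fps, bits_per_pixel=20):
    """Approximate uncompressed active-video bitrate in Gb/s.

    20 bits/pixel corresponds to 4:2:2 sampling at 10-bit depth,
    a common choice for ST 2110-20 production flows.
    """
    return width * height * fps * bits_per_pixel / 1e9

hd_flow = st2110_video_gbps(1920, 1080, 50)   # roughly 2.1 Gb/s per camera
uhd_flow = st2110_video_gbps(3840, 2160, 50)  # four times the HD rate
```

Multiplied across dozens of cameras, multiviewers, and studios, figures like these are why ST 2110 facilities are built on 25/100 Gb/s switching fabrics, and why the workforce shift toward IP networking skills that NRK describes is unavoidable.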
“Our primary focus for the near future,” he added, “is the complete technological outfitting of our new, custom-built facilities. This is a massive undertaking with hard deadlines. The first major delivery will be for our new Trondheim location in 2026, which will feature the first production-ready version of our new IP-based platform.”
The most extensive and complex part of the project will be the relocation of NRK’s Oslo operations, scheduled for 2029.
This includes moving 2,400 staff and completely replacing the technological backbone of its numerous radio and TV studios, as well as editing suites. “While the foundational IP architecture is set, we are still analysing future production needs to determine the final scale of studios, control rooms, and editing suites required in the next decade,” said Pål Nedregotten.
As is often the case with this kind of transition, one of the central challenges lies in ensuring continuous service throughout. “The entire project is a delicate balancing act of building the future while maintaining current broadcast operations,” the CTO summarized.
RÚV, for its part, is also undergoing a shift in its distribution strategy.
“As we move our focus and investment toward channels that show stronger audience engagement, several important changes are already underway,” their technical leads told us. “We’ve switched off longwave distribution,
and we have plans in place to turn off satellite distribution this year. We’re also developing a clear plan for transitioning from terrestrial television to digital distribution.”
“Our core focus is now on digital platforms, which offer greater flexibility and better alignment with how audiences consume content today,” they noted, though FM radio will remain part of the strategy—especially to support national security and emergency communication needs.
In the case of TV 2, where we focused specifically on the news department, the broadcaster’s renovation efforts are guided by a purpose: putting the story at the center. This objective has prompted a fundamental shift in workflow. The Danish public broadcaster implemented Wolftech News, a platform that has enabled the creation of an integrated system fostering collaboration between all teams involved in news production. It also optimizes resource management and,
ultimately, enhances the quality of content delivered to audiences across platforms.
It’s worth noting that TV 2 is not just a traditional broadcaster—it also operates Denmark’s most visited news website and employs a staff of 650, underscoring its production capabilities.
“We needed to operate as a cross-media outlet while continuing with podcasts and various other content formats,” Morten Brandstrup explained. “Given the wide variety
of content we produce, it was essential to implement a story-centric tool to ensure better coordination, prioritization, and platform-agnostic storytelling across all our different outputs.”
“Typically, we make decisions as a broadcast operation. However, in this case, we prioritized giving our online colleagues a successful platform first, then integrating our broadcast colleagues into the same system,” he added. Regarding the outcome, TV 2’s technical director was unequivocal: “It has been a huge success—
far greater than we anticipated. Moving 600 staff members onto a shared news platform turned out to be less complicated than expected.”
As previously mentioned, NRK is heavily invested in a large-scale transition to IP, placing this goal at the center of its technological overhaul. Broadly speaking, it’s a direction all broadcasters in the region are moving toward—although each is at a different stage.
“On the live production side, we are in the middle of a project to transition our in-house production platform to 2110,” explained Morten Brandstrup (TV 2). “We’ve spent over half a year developing an embedded operation within the Danish Parliament, staffed by more than 20 people. There, we have an NDI island connected to the main broadcast center as part of a 2110 operation. We are gradually transitioning our studios and galleries to the IP world.”
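For readers unfamiliar with what a 2110 operation involves at the signal level: in ST 2110, each essence (video, audio, ancillary data) travels as its own multicast RTP stream, advertised to receivers via an SDP description. The sketch below shows a typical SDP for an ST 2110-20 uncompressed 1080p50 video stream; all names and addresses are hypothetical, and real deployments add further parameters.

```
v=0
o=- 123456 11 IN IP4 192.168.10.1
s=Studio1 Camera1 Video
t=0 0
m=video 20000 RTP/AVP 96
c=IN IP4 239.0.10.1/64
a=rtpmap:96 raw/90000
a=fmtp:96 sampling=YCbCr-4:2:2; width=1920; height=1080; depth=10; exactframerate=50; colorimetry=BT709; PM=2110GPM; SSN=ST2110-20:2017; TP=2110TPN
a=ts-refclk:ptp=IEEE1588-2008:traceable
a=mediaclk:direct=0
```

The `ts-refclk` line is the key difference from SDI thinking: timing comes from PTP on the network rather than from a house reference cable.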
At Iceland’s RÚV, the transition is still at the evaluation stage. The broadcaster is carefully observing how other similar organizations are handling the shift before taking further steps: “While we’re not yet ready to implement the upgrade, we’re carefully assessing the available options to ensure we choose a solution that aligns with our needs and long-term goals. We are now looking at how other similar broadcasters are transitioning to IP; for the near future, production will stay in SDI.”
In a recent interview, SVT’s Adde Granberg pointed out that IP migration goes beyond the SMPTE 2110 standard. “Our transition to IP-based infrastructure at SVT is progressing, but it’s important to clarify that we’re focusing on IP in general, not specifically on SMPTE 2110. This distinction is crucial because SMPTE 2110 is a production industry standard that isn’t necessarily optimized for end-user needs, which is a key priority for us.”
As Adde Granberg noted, the value of IP-based infrastructure lies largely in the flexibility it enables— and within this context, 5G networks are increasingly coming into play.
“We’ve started using 5G for some of our news reporting, mainly through portable IP-based tech that lets us go live quickly from the field,” RÚV told us. “It’s been super helpful when setting up a full OB van isn’t really practical. That said, we still rely on OB vans for bigger broadcasts. Overall, 5G is a great addition to our toolkit, giving us more flexibility and speed when we need it most.”
In Denmark, where the country’s geography allows for a particularly robust 5G infrastructure, Morten Brandstrup described a similar approach. “We have conducted extensive experiments with private 5G networks. Instead of deploying an OB truck, we set up our private network in the conference center
[referring to a success case he mentioned], allowing for quick deployment without the need for cabling. We handle around 40 such events annually.”
TV 2’s technical lead also encouraged broader reflection on the role broadcasters can play in pushing 5G beyond the media industry and promoting cross-sector collaboration. One of the core advantages of 5G is its low latency and capacity for high-quality video transmission— essential features for professional use cases such as police bodycams, security systems, facial recognition, drones, and other applications where real-time data delivery is vital for monitoring and response.
“This is where 5G becomes a truly valuable technology,” Brandstrup noted. “As broadcasters and media professionals, we may be a small industry, but we have extensive expertise in transmitting video from point A to point B, which many other sectors could benefit from.
We may not be adopting 5G as quickly as some would expect, but on the other hand, it’s also about building the right business cases. The key here is that, as an industry, we should engage with other sectors to ensure we get the right services and help them understand that what we do is valuable—and that they, too, can benefit from our expertise.”
NRK echoes this view, albeit with a more cautious stance. “We have an interest in 5G as a technology, but we have been slow to adopt it,” explained Pål Nedregotten. “We ran some interesting trials during the Nordic Skiing championships in Trondheim, where we collaborated with Telia on a roaming camera production using private 5G networks, while trying to use the public network for the medal plaza production in the evening. We learned that it has a lot of potential, but we are dependent on private networks. We also learned that latency quickly becomes an issue.”
Artificial Intelligence: The real gamechangers are still ahead of us
No overview of current trends would be complete without touching on artificial intelligence—a technology that Nordic broadcasters are gradually integrating into their workflows. At TV 2, AI is primarily being used in the newsroom for text-related tasks, such as speech-to-text transcription. However, the goal is to expand its application to more areas.
“I look forward to seeing AI more integrated into
live production,” Morten Brandstrup told us. “AI could help streamline operations in galleries, especially as we move to IP and network-based workflows. AI assistance in areas like routing, network control, and live production recommendations would be incredibly valuable.”
Pål Nedregotten (NRK) agrees with Brandstrup on the current uses and future potential of AI, though he also offered a caveat: “Some of the opportunities we see are tremendous, some are frankly more hype (and cost!) than real value.”
“Automation in text to speech in our user-facing products, automatic generation of metadata, and speech to text in production processes show a lot of promise –but currently the entire business is applying AI to known problems, sometimes with tangible benefits,” he added. “I think, however, that the real gamechangers are ahead of us – where we see entirely new opportunities not tied to our current thinking.”
At RÚV, AI is also being steadily integrated with the dual aim of enhancing internal workflows and improving the user
experience. “We’re making steady progress with AI to enhance both our internal workflows and the user experience. In close collaboration with software companies specialising in language technology and artificial intelligence, we’ve added a speech-totext service for live events, which improves accessibility for our audiences.”
“We’re also using AI to generate simplified news articles, making important information more understandable and inclusive,” added Gísli Berg and Hrefna Lind Ásgeirsdóttir. “In addition, AI supports us
with proofreading and translation services and helps accelerate content production across radio and television. In production, we have been using image and video generators; we have been taking small steps and setting guidelines for our staff along the way. We’re actively exploring further opportunities to improve our services and increase efficiency with the use of AI.”
SVT, like the other broadcasters, is already applying AI to support real-time transcription and subtitling. “Beyond this,” Adde Granberg
told us, “we’re exploring AI’s potential in several other areas. We’re investigating its use in content recommendation systems to enhance the viewer experience on our platforms. In postproduction, we’re testing AI-assisted editing tools, which could streamline our workflow considerably. We’re also leveraging AI for more efficient archival searches, allowing us to better utilize our vast content library. Additionally, we’re experimenting with AI for content optimization, such as article summarization and spell-checking.”
He was also careful to underline that all of these tools are subject to human supervision.
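The real-time subtitling these broadcasters describe ends in a simple, well-defined artifact: a timed cue list. As a concrete, if simplified, illustration (not any broadcaster's actual tooling, which is not public), the final step of such a pipeline typically renders timed transcript segments into a subtitle format such as SRT:

```python
# Minimal sketch of the last stage of an automated subtitling pipeline:
# timed transcript segments -> SRT cue list. The segment data is invented.

def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments):
    """segments: iterable of (start_seconds, end_seconds, text) tuples."""
    cues = []
    for i, (start, end, text) in enumerate(segments, start=1):
        cues.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n")
    return "\n".join(cues)

segments = [(0.0, 2.4, "Good evening."), (2.4, 5.0, "Here is the news.")]
print(to_srt(segments))
```

In a live pipeline the segments would arrive incrementally from the speech-to-text engine, with a human in the loop correcting the text before air, as the broadcasters above emphasize.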
One topic that generated notable consensus among the technical leads we interviewed was their cautious stance on UHD.
“We are currently broadcasting in HD, and we don’t have any plans to
go beyond that resolution,” Morten Brandstrup (TV 2) stated firmly. “The reason is simple: we don’t see a viable business case for improving resolution, as viewers wouldn’t be willing to pay more for higher quality.”
At NRK, Pål Nedregotten believes HDR offers a more practical path forward. “We are producing in 1080i25 at the moment, but are considering 1080p50 for
selected productions. Our distribution chain is prepared for going 1080p50. We also consider HDR more valuable to the viewer than UHD, which also makes sense from a cost perspective.”
“Currently, our focus is on optimizing our existing HD and 4K offering, ensuring we deliver the best possible quality within these formats,” Adde Granberg (SVT) told us.
RÚV takes a similar stance. “We currently broadcast in HD, though some of our content is already produced in 4K. While we’re not planning a full resolution upgrade in the near future, our next step is moving from interlaced to progressive format within HD,” explained Gísli Berg and Hrefna Lind Ásgeirsdóttir.
HTML-based graphics solutions
Graphics is another area where several broadcasters are carrying out ambitious renewal projects. One example is TV 2, whose news department has undertaken a series of initiatives in line with its “online-first” strategy.
“We used to produce graphics using templates, with dedicated graphics staff working on them daily. However, we’ve changed our approach. Our graphic designers now primarily work with Adobe, creating graphics that are suitable for both TV and online platforms, following an online-first strategy,” said Morten Brandstrup.
The aim is to make graphic elements reusable across platforms, which has also reshaped their production priorities. “As a result, we reuse more two-dimensional graphics for prime-time shows while reducing our use of virtual or immersive graphics. While virtual graphics may look impressive in the
studio for prime-time news, spending one or two days creating a 20-second scene isn’t practical for online content.” The Danish broadcaster is also gradually adopting HTML-based graphics as its standard.
TV 2 has also begun using virtual studios for some of its programming—a field where RÚV has made significant progress as well: “One of the highlights has been the adoption of Pixotope’s virtual production technology. It’s allowed us to streamline our broadcasting process and make it more eco-friendly by reducing the need for physical sets. So far, we’ve produced six new series with this technology,” said Gísli Berg and Hrefna Lind Ásgeirsdóttir.
In the realm of graphics, RÚV has also introduced workflow changes tailored to sports production: “We have improved our production workflow for more automated solutions for sport coverage.”
NRK, for its part, shares TV 2’s commitment to HTML-based graphics: “We have an in-house developed HTML graphics solution called NORA that basically handles our main graphics demands.”
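NORA's internals are not public, but HTML-based broadcast graphics systems generally share one principle: a designer-authored HTML fragment with placeholders, which the newsroom system fills per story item and plays out in a browser-based renderer. A minimal Python sketch of that templating step, with a hypothetical lower-third template:

```python
from string import Template

# Hypothetical lower-third markup. Real systems (CasparCG HTML templates,
# NRK's NORA, etc.) use the same principle, but with CSS animation and a
# playout control protocol on top.
LOWER_THIRD = Template("""\
<div class="lower-third">
  <span class="name">$name</span>
  <span class="role">$role</span>
</div>""")

def render_lower_third(name: str, role: str) -> str:
    # Template.substitute fills the $-placeholders; HTML escaping is the
    # caller's responsibility in this simplified sketch.
    return LOWER_THIRD.substitute(name=name, role=role)

print(render_lower_third("Jane Doe", "Reporter"))
```

Because the output is plain HTML, the same rendered fragment can be scaled for a prime-time bug or dropped into a web article, which is exactly the cross-platform reuse TV 2 describes.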
The expanding shadow of streaming platforms
The “online-first” strategy mentioned by Morten Brandstrup (TV 2) brings us to the final area of analysis: OTT and streaming services—domains that are increasingly shaping the future of television.
“I think it’s fair to say that the tipping point from linear to streaming is very close. In many user groups, it’s clearly already happened,” noted Pål Nedregotten (NRK). “Currently, roughly one third of our viewers choose streaming when accessing our content. Linear holds steady compared to last year, while streaming is growing. So our total reach is actually increasing—but it’s very much streaming that is the growth factor.”
“This is a trend that potentially has a profound impact on the way we approach our programming,” he added. “But it’s also a shift that in time has the potential to impact the entire technological value chain, from the choices we make in distribution, to internal discussions on the right technological quality. And of course it impacts the way we approach the player end.”
NRK also highlighted challenges related to compatibility with older smart TV models. “Maintaining and developing across a whole range of different operating systems with varying degree of technological support is, quite frankly, expensive.”
This concern is echoed by Iceland’s RÚV: “We’ve managed our own OTT platform for years, but we’re now exploring more standardised solutions. Supporting all major smart TV platforms is a key priority, and managing multiple in-house platforms is complex. Big streaming services have raised the
bar for performance and usability, and we want to meet those expectations as we put more focus on digital distribution.”
The overwhelming influence of global streaming platforms is also felt at NRK. “While we could never compete with the budgets of the worldwide streaming platforms, we’re nonetheless judged by our users using the same yardstick.”
Cameras are arguably the most iconic symbol of the broadcast world, and their evolution has been instrumental in shaping the industry as we know it today. That’s why we regularly take a close look at key models and trends to provide an accurate snapshot of the current landscape.
By Daniel Esparza
For this edition, we’ve structured the report into three main sections. First, we’ve selected flagship models from leading manufacturers, using a general classification as our guide: studio cameras for television sets; EFP cameras for productions outside the studio, such as sports events or concerts; ENG cameras or camcorders for news coverage and field reporting; digital cinema cameras; and robotic PTZ cameras.
In some cases, we’ve chosen to highlight product lines that manufacturers themselves have brought to TM BROADCAST’s attention within this framework; in others, we’ve selected— based on our editorial criteria—the most
representative, innovative, or iconic model in the category or categories where we believe each brand presents its strongest offering.
Second, we’ve focused on cabled camera systems—one of the most clearly emerging trends shaping the market. For this section, we asked one of our regular contributors, Carlos Medina, to provide an article detailing the evolution, key features, and main applications of these systems.
Lastly, we also explore camera supports and accessories—equally essential elements for ensuring optimal performance. To do so, we gathered key insights from two companies specialized in this area: Telemetrics and Villrich.
The LDX 100 Series from Grass Valley delivers groundbreaking advancements in broadcast camera technology. Designed for high-performance production environments, these cameras combine exceptional image quality with innovative features, enabling broadcasters to capture dynamic content with unmatched precision and creativity.
Features
› Superior Image Quality
The LDX 100 Series cameras utilize advanced imaging sensors that provide exceptional detail, vibrant colors, and superior low-light performance. With a global shutter operating at all times, they consistently deliver the highest quality in every shot.
› Flexible Operational Modes
The LDX 100 Series adapts to diverse production needs, supporting multiple formats—including HDR, 4K, and HD—simultaneously. With integrated LUT-based HDR-to-SDR conversion, it ensures seamless transitions between production standards.
› Robust and Flexible Design
Built on an adaptable architecture, these cameras allow easy expansion of functionality through software options, available on either a permanent or a daily basis. This ensures long-term versatility and readiness for future production demands.
LDX 110 and LDX C110 – Premium-quality entry-level broadcast cameras
The LDX 110 and LDX C110 are premium-quality entry-level broadcast cameras designed to deliver exceptional performance. Both models utilize a single 2/3” CMOS imager with a Bayer pattern color filter, offering high sensitivity and an excellent signal-to-noise ratio. This ensures outstanding image quality, even in challenging lighting conditions. They natively support all HDR formats, featuring exceptional dynamic range and integrated LUT (Look-Up Table) processing for seamless SDR conversion.
The LDX 110 supports UHD (Ultra High Definition) at standard frame rates and HD (High Definition) at up to 3x high-speed capture, making it ideal for a range of production needs. Its robust imaging capabilities meet the demands of modern broadcast environments while maintaining ease of use.
The compact version, LDX C110, provides the same advanced feature set and format support as the full-size LDX 110 but in a smaller, lighter form factor. This makes
it particularly well-suited for applications where space and weight are critical factors, such as cable cams, Steadicam rigs, and pan/tilt head systems. It also features flexible connectivity options, including compatibility with Grass Valley’s XCU base stations, offering seamless integration into existing production workflows.
Together, these models deliver high-end imaging in a streamlined, cost-effective package.
LDX 135/150 and LDX C135/C150 – High performance for every speed
The LDX 135 and LDX C135 are top-tier solutions for single-speed broadcast applications, delivering exceptional image quality and reliability. For productions requiring high-speed capabilities, the LDX 150 and LDX C150 are the ultimate choices, offering advanced slow-motion
performance without compromising quality. All four models are built on the same optical platform, featuring three 2/3” CMOS imagers with a precision prism beam splitter. This design ensures high sensitivity, excellent signal-to-noise ratio, and outstanding image clarity in all lighting conditions.
They natively support all major HDR formats and feature exceptional dynamic range, with integrated LUT (Look-Up Table) processing for smooth SDR conversion. This guarantees consistent, optimal results regardless of format or environment.
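To make "LUT-based conversion" concrete: a look-up table stores output values at sampled input levels, and values between samples are interpolated. The sketch below is a toy 1D example with invented node values; real HDR-to-SDR LUTs are far larger (often 3D, e.g. 33x33x33) and are derived from the actual HLG/PQ and BT.709 transfer characteristics, not guessed.

```python
from bisect import bisect_right

# Toy 1D LUT mapping normalized HDR signal values (0..1) to SDR values (0..1).
# The node values are invented purely for illustration.
NODES = [0.0, 0.25, 0.5, 0.75, 1.0]    # input sample points
VALUES = [0.0, 0.40, 0.65, 0.85, 1.0]  # corresponding SDR outputs

def apply_lut(x: float) -> float:
    """Piecewise-linear interpolation between LUT nodes."""
    x = min(max(x, 0.0), 1.0)                       # clamp to LUT range
    i = min(bisect_right(NODES, x), len(NODES) - 1) # find right-hand node
    if i == 0:
        return VALUES[0]
    x0, x1 = NODES[i - 1], NODES[i]
    y0, y1 = VALUES[i - 1], VALUES[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

print(apply_lut(0.375))  # interpolated halfway between the 0.25 and 0.5 nodes
```

The appeal of the approach for in-camera conversion is that the per-pixel work is a cheap table lookup plus interpolation, regardless of how complex the underlying transfer-function math is.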
The LDX 135 and C135 support UHD and HD in single-speed operation, while the LDX 150 and C150 extend functionality with support for up to 3x speed in UHD and up to 6x speed in HD, making them ideal for capturing fast-paced action in sports, live events, and studio productions.
All models also support NativeIP connectivity and offer optional JPEG XS compression for high-quality, low-latency video transmission—ensuring seamless integration into modern IP-based workflows. Together, they deliver versatility, performance, and future-ready production power.
LDX 180 – Bringing cinematic depth into media production workflows
The LDX 180 brings a new dimension to live storytelling, delivering cinematic depth that enhances every close-up, every expression, and every emotional beat. It draws viewers closer to the action, blending the aesthetic of film with the energy and immediacy of live broadcast. As part of the LDX 100 Series, it integrates effortlessly into your existing ecosystem, supporting shared accessories, transmission systems, and workflows for streamlined, efficient production.
At its core is Grass Valley’s in-house developed Super 35mm Xenios CMOS imager, offering true Global Shutter performance and oversampled UHD resolution. This ensures crystal-clear imagery with reduced motion artifacts, even in fast-moving scenes. Its PL mount allows compatibility with a wide range of iconic cinema lenses, giving creators the flexibility to craft a unique visual style while maintaining the speed and reliability required for live production.
With the Creative Grading system, the LDX 180 can be color-matched and controlled alongside the entire Grass Valley camera range. Operators gain intuitive, real-time visual control of all image parameters—not just through numbers, but through immediate feedback—empowering creative decisions in even the most demanding lighting and production environments.
The Panasonic AK-UCX100 is a compact, full-featured IP studio camera system designed for modern, flexible broadcast and live production workflows. Equipped with a 1-inch MOS sensor, it delivers rich 4K/60p video with HDR (HLG) and BT.2020 color support.
While it integrates seamlessly with Panasonic’s CCUs and ROPs for traditional control, the UCX100 also supports CCU-less operation — ideal for productions aiming to reduce equipment bulk or simplify remote and mobile setups. It can output full-quality video directly from the camera body via 12G-SDI, HDMI, or IP including ST2110, making it perfect for flyaway kits, compact studios, OB trucks, and remote-controlled installations.
The camera also includes a high-performance anti-moiré filter, minimizing unwanted visual artifacts and ensuring clean, crisp images even when shooting fine patterns, textures, or against LED walls, which is essential for professional broadcast quality.
Critically, the UCX100 now supports Dante for networked audio integration — a key requirement for modern IP-based
environments — enabling seamless routing of audio over standard networks with low latency. In addition, it offers full bandwidth NDI support (via optional license), expanding its IP capabilities for high-performance live production, alongside native NDI|HX, SRT, and RTMP compatibility.
Optional ST 2110 IP support enables uncompressed video, audio, and data over IP networks, with PTP synchronization, eliminating the need for SDI cabling in IP workflows.
With a native B4 lens mount and fanless design, the UCX100 balances professional image quality with versatile, quiet operation, suitable for a wide range of production environments.
The Panasonic AG-CX370 is a compact, shoulder-mounted 4K camcorder built for news, events, and live streaming. Featuring a 1-inch MOS sensor, it captures UHD 4K/60p video with 10-bit HEVC recording for outstanding image quality and efficiency.
It records in HEVC, AVC-Intra, or MOV/MP4 formats to dual SDXC card slots, allowing for relay or simultaneous recording.
Enhanced features include 5G mobile streaming via USB-C, face detection autofocus, GPS metadata tagging, and wireless timecode sync for multicam shoots. The CX370 supports HDR recording (HLG/PQ) and advanced LUTs for professional color workflows. A 12G-SDI output enables uncompressed 4K delivery to broadcast switchers.
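As an aside on what timecode sync buys a multicam shoot: once every camera stamps frames with a shared timecode, aligning clips in post is simple frame arithmetic. A minimal sketch, assuming non-drop-frame 25 fps timecode and invented values:

```python
FPS = 25  # non-drop-frame timecode at 25 fps (assumed for this sketch)

def tc_to_frames(tc: str, fps: int = FPS) -> int:
    """Convert 'HH:MM:SS:FF' timecode to an absolute frame count."""
    h, m, s, f = (int(p) for p in tc.split(":"))
    return ((h * 60 + m) * 60 + s) * fps + f

def frames_to_tc(n: int, fps: int = FPS) -> str:
    """Convert an absolute frame count back to 'HH:MM:SS:FF'."""
    f = n % fps
    s = n // fps
    return f"{s // 3600:02d}:{s // 60 % 60:02d}:{s % 60:02d}:{f:02d}"

# Offset between two cameras' first recorded frames, in frames:
offset = tc_to_frames("10:00:05:12") - tc_to_frames("10:00:03:00")
print(offset)  # 62 frames
```

Drop-frame timecode (used with 29.97/59.94 fps rates) complicates the counting, which is why editing and sync tools treat it as a distinct mode.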
Its integrated IP capabilities include RTMP/ RTSP, NDI|HX, FTP, and browser-based remote control. Built-in Wi-Fi, LAN, and USB-C tethering offer flexible connectivity.
Optics include 20x optical zoom, optical image stabilization, and dual XLR audio inputs with manual controls. Lightweight at 1.9kg, the CX370 is rugged and ready for the field.
Ideal for journalists, corporate teams, and streamers, the AG-CX370 delivers versatile recording, seamless streaming, and pro-grade performance in one reliable camcorder.
The AW-UE160 is Panasonic’s flagship 4K PTZ camera, engineered for high-end broadcast, live production, and studio
applications. It features a large 1-inch MOS sensor, delivering stunning 4K/60p video with HDR (HLG) and BT.2020 for rich, accurate color reproduction. With excellent low-light performance (F9/2000lx) and a 20x optical zoom, it excels in venues from stadiums to studios.
A key strength of the UE160 is its native ST 2110 support, enabling full broadcast-grade IP workflows without external converters. It delivers uncompressed video over IP, synchronized audio/video, and PTP timecode — ideal for complex, multi-camera productions.
Now with support for both full bandwidth NDI and NDI|HX, the UE160 offers even greater IP flexibility, allowing seamless integration into a wide range of networked production environments, from high-performance to bandwidth-constrained systems.
The camera also supports SRT, RTMP, 12G-SDI, and HDMI, providing broad compatibility for hybrid workflows. Another standout feature is the free built-in auto framing, which leverages AI-based subject tracking to frame subjects automatically in real time — ideal for corporate events, education, or remote-controlled studios.
With additional features like FreeD for AR/VR, Genlock, tally, and return video, the UE160 is designed to integrate smoothly into demanding setups. It’s a powerhouse PTZ that combines cinematic image quality with robust IP production tools.
UNICAM-XE is the family name of Ikegami’s latest studio camera range. The family is growing every year; at IBC 2025 it added the wireless camera type UHK-X700RF. The UNICAM-XE series comprises two studio camera versions (UHK-X750 and UHK-X650) plus four portable camera models: UHK-X700, UHK-X600, HDK-X500, and now UHK-X700RF.
All models share the same technology platform, i.e. the same 2/3-inch CMOS sensor technology with global shutter architecture and the same DSP (Digital Signal Processing). As such, all cameras share similar key features, such as dual filter wheel operation for ND and CC filters, HDR operation with WCG (Wide Color Gamut, BT2020), RBF (Remote Back Focus), OVC (Optical Vignetting Correction) and an integrated 16-axis color corrector.
The portable models were designed with a strong focus on operability, low weight, and a perfect center of balance. In combination with a Large Lens Adaptor (SE-U430), telephoto box-type lenses can easily be employed, transforming the portable camera head into a true studio configuration in just a few seconds.
UHK-X750 and UHK-X650 are the studio camera versions of the portable models UHK-X700 and UHK-X600, offering exactly the same specifications as their portable counterparts. They were designed as native studio camera heads, offering some operational advantages, e.g. a lower center of gravity and easier staging compared to a portable camera with a telephoto lens and large lens supporter. In addition, these full studio cameras place a greater design emphasis on ease of service and operation.
The difference between all these camera models will be visible when looking at their native resolution and supported frame rates:
› HD frame rate is supported as a standard feature (i.e. 1080@50P/i, 59.94P/i and 720@50P, 59.94P) by all UNICAM-XE camera models.
› UHK-X700, -X750, -X600, -X650 and -X700RF are all equipped with the same type of native UHD resolution sensors (3840x2160 pixels) with global shutter technology.
› UHK-X700 and -X750 support UHD frame rate (2160@50P, 59.94P) as a standard feature.
› UHK-X700 and -X750 can support UHD frame rate at up to 2x speed and HD frame rate at up to 8x speed by adding optional hardware to the base station (Type: BSX-100 + BSX-HFR).
› UHK-X600 and -X650 only support HD frame rate as a standard feature; UHD frame rate is available via an optional SW license key.
› UHK-X600 and -X650 can employ High Frame Rate (HFR) in HD by adding an additional SW license key (Type: HFR-X700) for up to 4x speed in HD.
› UHK-X700RF supports UHD frame rate (2160@50P, 59.94P) and HD frame rate.
› HDK-X500 is equipped with a newly designed CMOS-sensor with native HD resolution (1920x1080) and global shutter architecture, supporting standard HD frame rate.
All UNICAM-XE models can easily be integrated into an IP infrastructure (ST 2110), either by adding an optional MoIP board to the base station (BSX-100) or by using the new IPX-100 mini CCU. Whichever of these camera models is chosen for a specific production, the user can be 100% sure of obtaining state-of-the-art performance.
[Format-support table: per-model availability (standard vs. optional) of 720@50P/59.94P, 1080@23.98P/25P/29.97P, 1080@50i/59.94i, 1080@50P/59.94P, 1080@100i/119.88i, and 2160@50P/59.94P, plus high-frame-rate options HFR-1 (1080P/i at 2x/3x/4x), HFR-2 (1080P/i at 6x/8x), and UHD-HFR (2160 at 2x).]
The UHL-X40 is a new ultra-compact box style UHD/HD HDR camera designed for broadcast applications, where space plays a major role. With its small size and ultra-high performance, UHL-X40 meets a wide variety of PoV camera purposes, including high-end surveillance applications.
The UHL-X40 consists of two elements: the small, ultra-lightweight camera head plus the compact CCU (CCU-X40). The low weight of the camera head allows the use of a pan/tilt head with a low payload, thus saving installation space and costs.
The CCU-X40 includes the digital signal processing and all necessary connections to the traditional broadcast infrastructure. The UHL-X40 camera head connects with the compact CCU-X40 via up to 10km of duplex single-mode fiber carrying uncompressed RGB raw pixel data. Included with the fiber link is a 1G Ethernet trunk for external data such as pan/tilt/zoom control.
This special box-type camera head incorporates three high quality 2/3-inch UHD sensors with a global shutter pixel architecture and native UHD-resolution (3840x2160) capturing natural images even when shooting LED screens, clear of geometric distortion during still frame replay. The global shutter imagers also minimize artifacts when televising flash/ strobe-illuminated stage environments.
The UHL-X40 features a high sensitivity of F11 (at 2160/50P) at a signal-to-noise ratio of 62 dB. With its B4 mount, it is compatible with the majority of broadcast TV lenses.
Two integrated optical filter wheels allow separate control of color temperature and incoming light. UHL-X40 supports HDR (High Dynamic Range, HLG and optional PQ) as well as WCG (Wide Color Gamut, BT2020). Further features are RBF (Remote Back Focus), OVC (Optical Vignetting Correction), HD Cut Out Function (from CCU-X40) and an Image Reverse function.
Sony’s HDC Series comprises high-performance system cameras tailored for live broadcast, sports, studio, and OB van environments. These units deliver premium 4K/HD HDR image quality in a compact, cost-effective design.
At the flagship level, the HDC 5500 offers exceptional image clarity, portability, and creative flexibility for high-end live production setups. The HDC 3200, equipped with a highly sensitive 4K 3-CMOS global-shutter sensor, produces crisp, low-noise pictures with operational versatility. Complementing the 4K models are the HDC 3100 and HDC 2500L, which continue the series’ live-production strengths, offering refined picture performance and advanced feature sets.
Compact point-of-view variants like the HDC-P31 (HD) and HDC-P50 (4K) extend the series’ reach to mobile and tight-space applications. The HDC-P31 delivers superb HD imaging with high sensitivity, while the P50 pioneers a 2/3-inch 4K global-shutter 3-CMOS design for exceptionally compact image capture. The lineup also includes the HDC-F5500V, featuring a Super 35 4K CMOS sensor and an integrated variable ND filter for greater creative control.
Overall, Sony’s HDC Series stands out for combining state-of-the-art global-shutter sensors, 4K/HDR output, and modular scalability, supporting a wide range of broadcast scenarios, from multi-camera studio productions to nimble on-field operations.
The Sony VENICE 2 is a flagship digital cinema camera designed for high-end film productions. It features a full-frame 8.6K sensor (also available in 6K), offering up to 16 stops of dynamic range and dual base ISO at 800 and 3200, delivering exceptional performance in both low-light and high-contrast environments. The camera supports internal recording in 16-bit X-OCN and Apple ProRes formats, eliminating the need for external recorders. With built-in 8-step ND filters, support for anamorphic lenses, and a compact, modular design, the VENICE 2 combines image quality, flexibility, and reliability for the most demanding shoots.
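To put dynamic-range figures like these in perspective, a quick generic calculation (not specific to any one camera) shows what a stop count means in linear terms: each stop doubles the usable light ratio between the brightest and darkest detail a sensor can hold.

```python
import math

def dynamic_range(stops: int) -> tuple[int, float]:
    """Convert photographic stops to a linear contrast ratio and decibels."""
    ratio = 2 ** stops            # each stop doubles the light ratio
    db = 20 * math.log10(ratio)   # the same ratio expressed in dB
    return ratio, db

ratio, db = dynamic_range(16)     # e.g. a 16-stop sensor
print(f"16 stops = {ratio:,}:1 (~{db:.0f} dB)")
```

The same conversion makes it easy to compare sensors: moving from 16 to 17 stops doubles the contrast ratio the camera can capture.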
The URSA Cine platform is built for productions that need high-resolution, large-format digital cinematography with streamlined post integration.
The URSA Cine 12K LF features a full-frame 36 x 24mm RGBW sensor, capturing up to 12K 3:2 open gate at 80fps. It’s joined by the URSA Cine 17K 65, which steps up to a 65mm-format RGBW sensor delivering a massive 17520 x 8040 resolution in 2.2:1 open gate, aimed at IMAX, VFX, and virtual production.
URSA Cine captures at resolutions up to 12K and 17K, while allowing users to record lower resolution Blackmagic RAW files, including 8K and 4K, directly from the full sensor area. This delivers the benefits of large-format oversampling, improved image quality, and manageable file sizes, without compromising creative flexibility.
Both cameras offer 16 stops of dynamic range and use Blackmagic RAW with built-in H.264 proxy generation. An internal 8TB media module and 10G Ethernet enable real-time syncing to Blackmagic Cloud and DaVinci Resolve, allowing editorial to begin during acquisition. Lens mounts are interchangeable (PL, LPL, EF), with the 17K also adding support for Hasselblad.
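To get a rough sense of why internal storage and 10G offload matter at these resolutions, the raw sensor data rate can be estimated from the published geometry and frame rates. The 12-bit depth and 8:1 compression ratio used below are illustrative assumptions for this sketch, not published Blackmagic RAW figures:

```python
def raw_data_rate_gbps(width, height, bits_per_pixel, fps, compression=1.0):
    """Approximate sensor data rate in gigabytes per second."""
    bits_per_frame = width * height * bits_per_pixel
    return bits_per_frame * fps / compression / 8 / 1e9

# 12K 3:2 open gate (12288 x 8040) at 80 fps; bit depth and compression
# ratio are assumptions for illustration only.
uncompressed = raw_data_rate_gbps(12288, 8040, 12, 80)
compressed = raw_data_rate_gbps(12288, 8040, 12, 80, compression=8)
print(f"~{uncompressed:.1f} GB/s uncompressed, ~{compressed:.2f} GB/s at 8:1")
```

Even with generous compression, sustained rates in the gigabyte-per-second range help explain the internal 8TB module and the emphasis on proxy generation and network offload.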
› Recommended for: While the 12K LF is suited to commercials, music videos, narrative and documentary productions looking for a flexible, cost-efficient full-frame workflow, the 17K 65 targets premium large-format use cases where resolution, sensor size, and visual scale are critical, offering an accessible route into ultra-high-end digital cinematography.
› URSA Cine 12K LF
Sensor: Full-frame RGBW 36 x 24mm
Resolution: Up to 12K (12288 x 8040) 3:2 open gate
Dynamic Range: 16 Stops
Media: 8TB internal Blackmagic Media Module, optional CFexpress
Connectivity: 10G Ethernet, 12G-SDI, USB-C, WiFi, Lemo, Fischer, XLR
Frame Rates: 12K 3:2 open gate up to 80 fps, 12K 17:9 up to 100 fps, 12K 2.4:1 up to 120 fps, 9K 3:2 Super 35 up to 100 fps, 8K / 4K 3:2 open gate up to 144 fps, 8K / 4K 2.4:1 up to 224 fps
› URSA Cine 17K 65
Sensor: 65mm RGBW ultra large format
Resolution: Up to 17K (17520 x 8040) 2.2:1 open gate
Dynamic Range: 16 stops
Media: 8TB internal Blackmagic Media Module
Connectivity: 10G Ethernet, 12G-SDI, USB-C, WiFi, Lemo, Fischer, XLR
Frame Rates: 17K 2.2:1 open gate up to 60 fps, 12K 2.4:1 up to 90fps, 8K 2.2:1 open gate up to 100 fps, 8K 2.4:1 up to 170 fps
The URSA Cine Immersive is Blackmagic Design’s entry into stereoscopic capture for Apple Vision Pro. Built on the URSA Cine platform, it’s designed specifically for spatial video production, using a dual-lens, dualsensor configuration to capture 8K per eye at 90fps with 16 stops of dynamic range.
Unlike traditional VR rigs, the system records both eyes into a single Blackmagic RAW file, with factory-calibrated metadata for lens geometry and alignment. This helps simplify ingest and post, especially when working in DaVinci Resolve Studio, which now includes dedicated support for immersive timelines and deliverables.
An 8TB or 16TB internal media module handles high-bandwidth recording and supports real-time sync to Blackmagic Cloud, enabling teams to start reviewing or editing footage almost immediately. Connectivity includes 10G Ethernet, 12G-SDI, USB-C, Wi-Fi, and mobile tethering, mirroring the professional I/O found across the URSA Cine line-up.
› Recommended for: URSA Cine Immersive is aimed at productions working in spatial immersive formats, whether for live events, branded content, documentary or narrative.
› Sensor: 8K stereoscopic 3D immersive image capture, Dual custom lens system for shooting Apple Immersive Video for Apple Vision Pro.
› Resolution: 8160 x 7200 resolution per eye with pixel level synchronization
› Dynamic Range: 16 stops
› Format: Dual 90 fps capture to a single Blackmagic RAW file.
› Connectivity: 10G Ethernet, 12G-SDI, Wi-Fi, USB-C, mobile data
› Media: 8TB internal network storage, proxy support
The Blackmagic PYXIS 12K takes the same full-frame RGBW sensor found in the URSA Cine 12K LF and places it into a smaller, more modular form factor. With a 36 x 24mm sensor capable of 12K open gate recording and 16 stops of dynamic range, the camera is designed for flexibility, particularly where space, weight, or rigging constraints come into play.
It’s available in PL, EF, or L-mount configurations and is housed in a CNC-machined aluminium body with multiple mounting points for cages, gimbals, drones, or handheld builds. Recording is handled via dual CFexpress slots and a USB-C port, with support for simultaneous Blackmagic RAW and H.264 proxy capture.
Like its larger sibling, PYXIS integrates with Blackmagic Cloud and DaVinci Resolve Studio, allowing media to be synced and accessed in near real time. A 4-inch HDR touchscreen provides on-set monitoring, and the camera is compatible with the URSA Cine EVF for more traditional operating setups.
› Recommended for: PYXIS is aimed at productions that need high-end image quality without the bulk, whether for documentaries, commercial work, aerial cinematography, or second unit shooting where mobility and speed are critical.
› Sensor: 36 x 24mm full frame RGBW 12K 12288 x 8040 sensor.
› Resolution: Multi scale RGBW sensor for capturing 12K, 8K or 4K at the full sensor size.
› Dynamic Range: 16 stops
› Mounts: EF, PL, L-Mount
› Connectivity: 12G-SDI, 10G Ethernet, USB-C, 4G/5G mobile via tethering, mini XLR with 48 volt phantom power.
› Frame Rates: Records full resolution up to 40 fps or 112 fps at 8K.
V-RAPTOR [X] 8K VV and V-RAPTOR XL [X] 8K VV cinema cameras
RED has expanded its DSMC3 line with the introduction of the V-RAPTOR [X] 8K VV and V-RAPTOR XL [X] 8K VV cinema cameras. Both models build upon the capabilities of the original V-RAPTOR platform, now incorporating RED Global Vision, a new system enabled by an 8K full-frame global shutter sensor. This upgrade enhances low-light performance, image fidelity, and exposure control, while maintaining the compact form factor and high-speed recording features of their predecessor.
A central innovation of Global Vision is the Extended Highlights mode, which enables the sensor to preserve detail and color in extreme highlights, achieving over 20 stops of dynamic range and smoother highlight roll-off, particularly useful in uncontrolled lighting conditions. Another notable feature, Phantom Track, facilitates virtual production workflows by capturing separate R3D clips per LED wall view using technologies like GhostFrame™ or frame-remapping, while offering live monitoring via SDI outputs.
The V-RAPTOR [X] retains the compact body of its predecessor and includes upgraded in-camera audio preamps, dual 12G-SDI outputs for simultaneous monitoring views, a locking Canon RF-style mount, and CFexpress Type B media support for data rates up to 800 MB/s. Frame rate capabilities include 8K up to 120 fps, 6K up to 160 fps, and 2K up to 600 fps, even with global shutter readout.
The larger V-RAPTOR XL [X], aimed at high-end cinema and television, features an integrated electronic ND filter, adjustable in 1/4, 1/3, and full-stop increments. It supports dual-voltage batteries (14 V and 26 V V-Lock or Gold Mount), offering flexibility across production environments.
Both models capture 16-bit REDCODE RAW and are compatible with RED’s IPP2 workflow, reinforcing their role as flagship tools for demanding production settings.
ARRI has introduced a more accessible version of its flagship ALEXA 35 camera, expanding the platform with the launch of the ALEXA 35 Base model and a flexible licensing system. The existing model, now called ALEXA 35 Premium, continues to offer all high-end features by default, while the new Base version retains identical hardware but offers a reduced core feature set at a lower price point. Key functionalities can be unlocked on demand via temporary or permanent software licenses.
Both models share the same Super 35 sensor, 17 stops of dynamic range, and ARRI’s REVEAL Color Science, ensuring no compromise in core image quality. The Base model includes ProRes recording up to 60 fps in 4K 16:9, ARRI Look File support, Enhanced Sensitivity up to EI 6400, and three independent 10-bit outputs supporting SDR and HDR monitoring.
To provide flexibility for varying production needs, ARRI now offers five modular licenses: ‘120 fps’, ‘ARRIRAW’, ‘Open Gate/Anamorphic’, ‘Pre-record’, and ‘Look’ (which includes ARRI Textures and the Look Library). These can be activated for 7 days, 30 days, one year, or permanently. Users may also choose the Premium License, bundling all features into a single upgrade. If purchased outright, the total cost remains equal to that of the Premium model.
ARRI has also enhanced the ARRI Shop for easier online license purchases, supporting multiple payment platforms and currencies. Licenses can now be purchased by production companies, not just camera owners.
The various overhead cable camera systems (Cable Cam Systems) have demonstrated their success in generating content for audiovisual productions, bringing spectacle and enrichment to the AV language under very high standards of safety.
By Carlos Medina, Audiovisual Technology Advisor

A technology based on simplicity, perfect control, reliability and creativity has more than enough reasons to become a standard solution. More specifically, the various overhead cable camera systems (Cable Cam Systems) have demonstrated their success in generating content for audiovisual productions, bringing spectacle and enrichment to the AV language under very high standards of safety.
The Cable Cam System is the most generic name encompassing all existing systems that hang the camera from one or several types of cables. In this article we will not be making reference to cameras placed on a helicopter, the use of drones in AV, or any other support that allows aerial shots.
To be clear, the name Cable Cam System does not cover camera chains that are simply operated over cable links (whether fiber optic, copper or IP, via any type of Triax, Multicore, Remote Multipin, RJ45...) to enable operation and recording.
If we look at the timeline in some detail, we can see that there have been different solutions in the field of Cable Cam Systems:
› 1984: SkyCam, created by Garrett Brown. It uses a system of cables and pulleys that holds the camera in the air. There are several types:
• SkyCam 4K/FHD: a very interesting option for its great stability, supporting camera/ optics equipment for the 4K and FHD environment.
• SkyCam X: Enables greater flexibility and control with the ability to fly at greater angles and perform more complex movements.
• SkyCam Nano: a lighter model to get the job done where spaces are smaller.
› 2000: The Austrian company CCSystems Inc. developed the SpiderCam. Building on the SkyCam, its success was further boosted by innovations in its four-cable suspension and control system.
At IBC 2022, the news was that SpiderCam had been acquired by Ross Video. There are several types of SpiderCam:
• SpiderCam Light: designed to work in smaller venues and spaces, which has allowed this device to enter the multi-camera dynamics of programs made on a TV set.
• SpiderCam Field: designed to adapt to large spaces, allowing work with camera bodies and optics from the cinema and/or cine-broadcast field.
• SpiderCam Mini: the most compact version.
• SpiderCam Bow: a more precise solution, since it offers only a point-to-point system, controlling different flight altitude levels.
› 2011: flyLine Cable Cam. An American point-to-point solution with an automatic end-stop motion control system, called MoCo, which allows setting user-defined endpoints to avoid collisions with other objects or surfaces.
› 2012: Defy DactylCam. Defy is an American company that developed a fixed-cable camera system. There are several models:
• Cadence: a professional, intelligent and simple system that can be operated by a single person.
• Dactyl Pro: a system with an operating range that can exceed 8 hours, with system status monitored wirelessly through the pulse controller. Designed for the largest film applications, with payloads up to 68 kg and speeds that can reach up to 50 mph.
• Dactyl Live: A system designed for live broadcasts offering an intelligent 2D solution under IP control.
› 2015-2018: High Sight Cam, an American company that developed several models adapting to the needs of an increasingly competitive market:
• XL2: high-end model supporting a very large range of payloads. A point-to-point, “clothes-line”-style cable system, it is virtually silent and offers extremely smooth operation at any speed, even in slow downward movement.
• Pro: features innovative vibration-damping wheel technology as well as an adjustable vibration isolator for perfectly stable camera movement.
• Mini: a simplified solution adapted to lighter devices, with full support for DJI Osmo, DJI Osmo Mobile and Karma Grip.
• Wiral Lite: a simple system supporting all cameras up to 1.5 kg.
These systems can be classified by the type of movement allowed: 1D systems for vertical movement; 2D systems for horizontal and vertical shots; and 3D systems for dynamic overhead shots offering 360º views
› 2020-2025: A myriad of solutions and systems adapted to any type of production, space, content, equipment and budget: Sky Vertical, Sky Horizontal, Speed Line SPL ATC 17, Syrp Slingshot 3-Axis, RobyCam, DynamiCam, Joymech anix JM, IndustryCam, FlyingKitty Cablecam Shooting System, FM12CableCam...
Worth highlighting is the SEC Swiss Eagle Eye Wire Cam System (Ross Video),
a modular system that offers 1D, 2D and 3D assemblies for various production needs. It offers maximum flexibility as it can be used in fixed installations, studios and live events. In addition, it features a high level of control and stabilization thanks to its software developments, thus providing high quality dynamic aerial images.
The USA, United Kingdom, Germany, Italy and China are the manufacturing countries that dominate the different cable-mounted overhead camera systems, offering solutions for professional (Broadcast/DCI), semi-professional and amateur (prosumer) environments. When choosing the most suitable system, we must consider some technical and operational specifications: movement on axes, payload, maximum speed, supported cable length, and type of control and stabilization, among others.
Although there are several types of overhead camera systems suspended by cables, we can make a generic classification based on their characteristics, cable system configurations and applications:
› Linear Cable Cam (1-axis): this is the most basic system. It features a single cable between two points. The movements are simple, allowing only forward and backward (X-axis) motion. Widely used in racing events, concerts, extreme sports... The most representative models in this category are the FlyLine Cable Cam and Wiral LITE.
› Linear Cable Cam with Stabilizer (Gimbal): an improvement on the previous one, since it adds a gimbal (DJI Ronin type or similar) to control the camera’s orientation. With this improvement we can make forward/backward movements along with panning, tilting and rotating the camera.
› Cable cam in “X” or grid (2 or more axes): these have several cables (usually four) anchored in different directions. They are the most sophisticated systems, allowing greater freedom of movement along the X, Y and even Z (up/down) axes within a given space. Widely used in large stadiums, massive concerts and spectacular events. The most used models are SkyCam and SpiderCam.
› Cable cam in “X” with stabilizer (Gimbal): based on the previous one, with the possibility of adding a gimbal. This makes it possible to achieve any camera position and shot type (tilt up, panning, overhead shot, full shot, tilt down, travelling in/out, high-angle shot, foreground...), which increases the spectacle of the filmmaking/directing of the audiovisual material. A good example is the Eagle Eye 250 Cable Cam.
In short, we can classify these systems by the type of movement they allow: 1D systems for vertical movement; 2D systems for horizontal and vertical shots; and 3D systems for dynamic aerial shots offering 360º.
Every aerial camera system suspended by cables comprises several key parts, perfectly assembled to render stable, controlled and safe aerial shots. Below we present the main components:
THE CABLE

It is the key element of aerial cable systems. Its function is basic and essential: to support the equipment at height and to guide the travel of the system. It involves installing a cable line (or several lines) between two or more elevated points above the area where the recording or live performance takes place.
Motorized winches are also used to reel the cable in or out; they are placed according to the location and positioning of the equipment on the premises (one for each cable, especially in four-wire aerial systems).
The materials most commonly used are steel wire and/or Kevlar or Dyneema. It must be said that paracord-type elastic ropes and common nylon, which stretch and cause dangerous oscillations, should not be used; nor should fine braided steel, which features greater vibration and weight.
Notably, Kevlar cable is the most widely used, given that it is made of aramid (aromatic polyamide) fiber, a material well known for being around five times stronger than steel relative to its weight. It is a very resistant synthetic material developed by the company DuPont.

Kevlar cable is widely used for its high tensile strength, lightness, and resistance to heat and cutting, in addition to not being electrically conductive and tolerating coiling or bending without losing strength.
Being a braided cable, there are several types of Kevlar cable for the Cable Cam System:
› Pure braided Kevlar: typical of the professional field.
› Kevlar with polyester cover: also used in professional environments.
› Kevlar coated with Teflon or PU: semi-professional field.
› Hybrid rope (Kevlar + Dyneema or Spectra): it is typical in amateur systems.
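The strength-to-weight advantage behind these material choices can be sanity-checked with a specific-strength calculation. The tensile strengths and densities below are typical handbook values for aramid fiber and high-tensile steel wire, assumed for illustration, not measurements of any particular cable:

```python
def specific_strength(tensile_mpa: float, density_kg_m3: float) -> float:
    """Specific strength (breaking strength per unit weight) in kN*m/kg."""
    return tensile_mpa * 1e6 / density_kg_m3 / 1e3

# Typical handbook values (assumptions for this sketch):
kevlar = specific_strength(2920, 1440)   # Kevlar 29 aramid fiber
steel = specific_strength(1960, 7850)    # high-tensile steel wire
print(f"Kevlar: {kevlar:.0f} kN*m/kg, steel: {steel:.0f} kN*m/kg, "
      f"advantage ~{kevlar / steel:.1f}x")
```

Depending on the steel grade chosen for comparison, the advantage comes out between roughly five and ten times, consistent with the "stronger than steel for its weight" claim.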
SHUTTLE/TROLLEY

It is the plate or platform that moves along the cables and carries the gimbal and the camera. As an essential component, it has a set of grip wheels together with electric (brushless) motors for propulsion and a built-in battery (in portable models).
The motor provides forward/backward motion along the tensioned cable. It can be controlled remotely (RC), run stand-alone, or be driven by programming software. Automatic brakes, speed change and/or automatic return are optional, but highly recommended.
COUPLING SYSTEM, GIMBAL and/or CAMERA STABILIZER
Obviously, all these are accessories and add-ons that allow us to correctly and safely position the capture device. If image stabilization is available, sudden movements or vibrations are avoided.
A gimbal is a mechanical device that stabilizes the camera. It can be 2-axis (pan/tilt) or 3-axis (pan/tilt/roll). The most popular include DJI Ronin, Gremsy and Freefly Movi, among others.
CAMERA

Any image capture and recording device can be used: film, broadcast, action, modular, compact, DSLR or PTZ cameras... depending on the environment and type of AV production.
In these systems, the figure of the Cable Cam operator or pilot is key, as they control and position the equipment in the space (height, angles, forward/backward movements).
CONTROL SYSTEM

Its main function is to allow the operator(s) to control the movement of the shuttle/trolley, gimbal and camera. Several options are available:
› RC (radio control)
› Mobile app (WiFi/Bluetooth)
› Console with professional joysticks
VIDEO TRANSMISSION

It enables sending video in real time from the camera to the operator or the production facilities. Several technologies are used, such as wireless HDMI, SDI, WiFi and 5.8 GHz FPV.
Wired connections are also possible, taking advantage of the properties of a braided Kevlar cable.
ANCHORING AND TENSIONING

We refer to any accessory or add-on that holds the cables at secure points and allows their tension to be adjusted: pulleys, rope tensioners, snap hooks, clamps or straps, ratchets, ratchet tie-down straps...
POWER SUPPLY

It is the electrical power source that allows all the components of the Cable Cam System to work: the shuttle/trolley, the motors and the video transmitter. LiPo batteries or external packs are normally used.
SAFETY SYSTEM

Its main function is to prevent falls of, or damage to, the Cable Cam System, as well as risks to audiences, performers and the technical team itself. It is advisable to have multiple redundancy mechanisms such as safety lines, parachutes, emergency brakes, captive safety wheels, captive line lockers and holding snap hooks, among others.
The Cable Cam System has many advantages over other aerial shooting options: total control of the travel, recording over the audience, and mechanization of movements to repeat shots with absolute precision. It is a silent and discreet system that benefits from a broad range of operation and relies on legal, safe use, in addition to its recognition as a system enabling precise and stable movements.
The most complex overhead cable camera systems require qualified human staff for assembly/disassembly, motion and operation. More specifically, in these systems the figure of the Cable Cam operator or pilot is key, as they control and position the equipment in the space (height, angles, forward/backward motion). There is also the camera operator, who is dedicated to the functions and operations of the camera itself and renders the type of shot (pan, tilt, zoom, focus, diaphragm...). And finally, we find the figure of the audiovisual or auxiliary technician, who performs the functions of assembly and disassembly, camera stabilization and tuning of controls and remotes.
Both indoors and outdoors, there are many environments where cable cams are used: TV programs, filming, live broadcasts and retransmissions, and recordings of concerts and festivals
Both indoors and outdoors, there are many environments where cable cams are used: TV programs, filming of commercials/feature films/videoclips, live broadcasts of sporting events, as well as corporate events and, of course, broadcasts and recordings of concerts and festivals.
Liga Santander, the NFL (National Football League), Premier League, Champions League, Eurovision, the Olympic Games, Formula 1, American Ninja Warrior, The Voice, Benidorm Fest (RTVE), and Tele 5 News are some examples where dynamism, tracking at different speeds, maximum control of the frame, very precise and stable in/out movements, and aerial views have all been achieved through this system.
In short, using overhead cable camera systems (Cable Cam System) has brought freedom to visuals.
In the fast-paced world of broadcast, cinema, and professional video production, camera support systems are undergoing a quiet but critical transformation. While sensor innovations, lens upgrades, and software tools often capture headlines, the humble tripod, fluid head, teleprompter, jib, dolly, or slider — the foundation of visual storytelling — is being reshaped by new operational needs, gear convergence, and user expectations.
By Richard Villhaber, Managing Director, Villrich Broadcast
At Villrich Broadcast, we work closely with clients and partners across EMEA, APAC, and the Americas.
This strategic review reflects our hands-on experience and dialogue with professionals navigating today’s
evolving production environments. Here’s what’s shaping the camera support landscape in 2025:
1. Versatility & modularity are no longer optional
Operators and rental houses alike are seeking tripods and heads that can accommodate multiple payloads — from mirrorless and PTZ cameras to full-size digital cinema rigs — without sacrificing performance. The goal is a support ecosystem that grows with the user's toolkit.
Modular systems such as tripods with interchangeable bowl adapters (75mm/100mm/150mm), sliders that can be motorized or manually controlled, and remote heads that fit jibs or dollies are gaining favor. Clients increasingly ask, “How much can I do with this system before I need to upgrade?”
2. Payload matters — but so does portability
Where once the focus was purely on maximum payload, users now seek lightweight, travel-friendly solutions that don’t sacrifice strength or reliability.
Operators are now prioritizing:
› Carbon fiber tripods for lighter travel kits
› Compact fluid heads that maintain performance with lightweight cameras
› Foldable dollies and sliders that fit into cabin-sized luggage
Yet, lightweight should not compromise durability. The market continues to reward products that offer a balance: high payload capacity with low carry weight and small pack-down size.
3. Seamless integration with remote and robotic systems
Remote production, PTZ workflows, and IP-based control are pushing support gear to integrate with digital automation and robotics.
Especially in live event coverage, corporate streaming, sports, and house-of-worship environments, the rise of robotic workflows has reshaped expectations.
Camera support gear is now expected to interface smoothly with automation and IP-based control systems. Key examples:
› Sliders with motion control compatibility
› Tripods designed for PTZ or robotic heads
› Remote heads supporting joystick or pan-bar inputs
› Elevation units (e.g. vertical lift columns) for dynamic positioning
The overarching trend: mechanical support is no longer isolated from digital workflows. Buyers are looking for camera support that can integrate into a broader automation and IP control ecosystem.
4. Demand for smoother movement — at any level
In high-end cinema, broadcast, or even mid-level corporate work, camera movement quality has become a key differentiator. Whether it’s a fluid live-show tilt or a cinematic dolly move, motion quality is non-negotiable. Clients expect consistent, professional-grade movement — regardless of scale or budget.
This affects several gear categories:
› Fluid heads are increasingly judged by their fine drag tuning and counterbalance at all tilt angles
› Sliders need to minimize friction or jitter even in compact sizes
› Cranes and jibs are being requested with advanced control modules to maintain stable arcs or remote-adjustable movement.
Today’s client is more informed and critical. “Good enough” is rarely good enough — gear must enable cinematic motion even in live or fast-paced productions.
5. Sustainability and serviceability

A rising — though still niche — concern is the sustainability and long-term serviceability of camera support systems. Particularly among rental houses and budget-conscious broadcasters, there’s growing emphasis on:
› Easily replaceable parts (clamps, knobs, pads)
› Firmware updates for electronic components for remote heads or sliders
› Local or regional repair service
› Long-term product support and spare part availability
While not yet a top buying factor for everyone, these attributes are influencing procurement, especially for larger fleets or institutional clients.
6. Quiet operation

From high-end drama productions to live interviews, quiet gear is essential. Operators are increasingly scrutinizing the noise profile of every moving part.
Clients increasingly inquire about:
› Silent motors in sliders and remote heads
› Tripods and pedestals with noise-free operation
› Non-clicking locks and friction systems
This is an especially key factor in multi-camera studio environments, where multiple operators and automation must work in silence.
7. Cost-performance balance

As always, cost remains a factor — but the story is nuanced. Clients are less interested in the cheapest option and more focused on cost-performance balance. A mid-tier solution that checks the boxes for modularity, reliability, and integration will often win out over either extreme.
At the same time, budget pressures in education,
corporate, and regional broadcast sectors are pushing manufacturers to produce scaled-down versions of flagship gear, often with similar DNA but fewer bells and whistles.
Preferred choices:
› Mid-tier solutions with modularity, reliability, and integration
› Entry-level versions of flagship products, sharing core DNA
› Scalable systems that can grow with production needs
The winner? Products that deliver real-world performance at justifiable cost.
Key takeaways for buyers & integrators
When evaluating your next camera support system, consider this checklist:
› Modularity: Will it grow with your camera lineup and shooting style?
› Weight vs. stability: Is it portable without compromising performance?
› Automation ready: Can it integrate with robotic systems in an IP control workflow?
› Motion quality: Is movement smooth, precise, and controllable?
› Silent & reliable: Does the gear operate quietly and last under daily use?
› Serviceability: Are parts and repairs easily accessible?
› Workflow compatibility: Does it fit with your existing infrastructure?
Conclusion: Camera support as a workflow enabler
Camera support is no longer just a physical base — it's an integral component of today’s production ecosystem. It must be smart, modular, robust, quiet, and aligned with digital workflows.
As these systems continue evolving, the most successful vendors and buyers will be those who treat camera support not as an afterthought, but as a core enabler of visual storytelling.
At Villrich Broadcast, we don’t just supply equipment — we curate and integrate a portfolio of premium brands that work seamlessly together. From camera sliders and elevation units to teleprompters, we help you build a cohesive, future-ready production workflow.
Want to learn more or discuss your camera support strategy?
Feel free to reach out — we're here to help.
By Michael Cuomo, Vice President at Telemetrics, Inc.
In the area of robotic camera control, AI is helping us develop products that can track on-screen talent more accurately while also enabling our camera trolley systems and OmniGlide® robotic studio camera rover to start and stop more precisely. This is important as AI-assisted automated camera operation frees up camera operators to focus on other tasks.
For talent tracking, AI can also automatically adjust zoom levels and camera angles based on the scene dynamics, ensuring the subject remains well-framed even if they move closer or further away. This is a key component of our reFrame® talent and object tracking software, built into the Telemetrics RCCP-2A Studio control panel. We use AI to recognize faces and ensure optimal framing of the subject by consistently maintaining precise framing and tracking, even during live broadcasts.
We’ve also developed ultrawideband (UWB) sensors for reFrame that can be affixed to a person or furniture to protect talent and crew. UWB tracking
leverages AI and machine learning (ML) to enhance its capabilities and provide more accurate and insightful data. While UWB excels at precise location tracking, AI and ML algorithms can help interpret the data from UWB sensors to understand movement patterns, identify objects, and even predict future behavior.
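To make the idea concrete, here is a deliberately simplified sketch of what "interpreting UWB sensor data to predict future behavior" can look like. This is purely illustrative and not Telemetrics code: the sample format and the naive constant-velocity model are invented for this example, standing in for the far richer ML models a production system would use.

```python
# Illustrative only: a toy movement predictor in the spirit of UWB-based
# tracking. Sample format and model are assumptions, not Telemetrics' API.

def predict_next_position(samples, dt=1.0):
    """Extrapolate the next (x, y) position from timestamped UWB samples.

    samples: list of (t, x, y) tuples ordered by time.
    Uses the average velocity across the window, a crude stand-in for
    the machine-learning models a real system would apply.
    """
    if len(samples) < 2:
        raise ValueError("need at least two samples to estimate velocity")
    (t0, x0, y0) = samples[0]
    (t1, x1, y1) = samples[-1]
    span = t1 - t0
    vx, vy = (x1 - x0) / span, (y1 - y0) / span
    # Project the last known position forward by dt seconds.
    return (x1 + vx * dt, y1 + vy * dt)

# Talent walking steadily at 0.5 m/s along x, 2 m from the camera line:
track = [(0.0, 0.0, 2.0), (1.0, 0.5, 2.0), (2.0, 1.0, 2.0)]
print(predict_next_position(track))  # (1.5, 2.0)
```

A predicted position like this is the kind of signal a robotic controller could use to slow a rover before it nears a person, rather than reacting only after the fact.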
Another feature of the RCCP-2A controller is Pathfinding, which utilizes AI algorithms to learn patterns of movement in a studio, enabling the OmniGlide rover to safely navigate around studios, even when talent or set pieces are present.
All these features, with more yet to come, are leading broadcasters and production studios to embrace robotic camera control in ever-increasing numbers. We expect AI to continue adding value and operational functionality for our customers, and we're devoting significant resources to implementing it across our products in unique and production-friendly ways.
“We merge multiple industries such as visual effects, game engines and broadcasting together so our users can imagine the next big thing”
Born in the gaming world, Unreal Engine has become a key tool in the broadcast industry — especially at a time when broadcasters are striving to deliver more personalized and immersive content to compete with global platforms. Zero Density has played a major role in this transformation, recently launching Lino: a new workflow that enables template-based graphics playout for all broadcast graphics, powered by Unreal Engine 5.
“We merge industries such as visual effects, game engines, and broadcasting to let our users imagine the next big thing with unparalleled efficiency and quality,” says Faraz Qayyum, Head of Zero Density Academy and Solution Manager for Real-Time Motion Graphics, who joins us as the latest guest in our Industry Voices section.
“Unreal Engine’s role in broadcast and live production is rapidly evolving from a game development tool to a cornerstone of virtual studio production, on-air graphics, Augmented Reality, and Extended Reality workflows, thanks to our solutions”, he adds during the conversation.
By Daniel Esparza
The interview follows.
As an introduction, what are Zero Density’s strategic goals for this year?
2025 has started as a breakthrough year for Zero Density in many areas, including the preview and launch of the Lino workflow, following a year of development. Lino is our new workflow that provides template-based graphics playout for all broadcast graphics, using Unreal Engine 5, and it has already grabbed the attention of top-tier broadcasters even before its official release. Along with the latest release of Reality 5.6 and Reality Hub 2.0, we deliver the most significant upgrade to the broadcast graphics industry in our history. It will define the broadcast graphics domain in the next decade.
Overall, how would you assess Zero Density’s position in today’s industry landscape? What would you say sets it apart from other competitors in the market?
Zero Density is the company that introduced
Unreal Engine to broadcasters for virtual studio production a decade ago, and now elevates on-air graphics production using Unreal Engine.
The Zero Density team continues to deliver future-proof Unreal Engine-based technology to broadcasters worldwide.
“As audiences demand more interactive and personalized content, virtual studios and AR will become even more integral”
Over the last decade we have driven the industry forward with innovations that have combined to form an end-to-end graphics ecosystem. More importantly, we merge multiple industries such as visual effects, game engines and broadcasting together so our users can imagine the next big thing with unparalleled efficiency and quality.
Our versatile platform and solutions enable us to support broadcasters and their critical workflows, fostering innovation and
elevating the standard of visual storytelling.
You have an established virtual studio client base. How do you see the market in general?
As a global leader in virtual studio production, we have transformed the industry with our innovations and global footprint. With the latest release of Reality, our real-time broadcast graphics platform is expanding beyond virtual studio use cases. We are ensuring that our ecosystem contains irreplaceable solutions for everything broadcasters need now and in the future, with the best workflows.
Adoption of virtual studio (VS) and Augmented Reality (AR) tools in broadcasting is growing steadily, moving beyond niche applications to become mainstream, especially for high-profile events and news. Meanwhile, AR is increasingly integrated for enhanced storytelling, data visualization, and interactive content, making complex information more accessible and engaging for viewers.
So much so that one of our top-tier broadcaster clients demolished their physical studios and built an additional green screen after using our system for two years in a day-to-day live format, in addition to special programming.
As audiences demand more interactive and personalized content, virtual studios and AR will become even more integral, offering new opportunities for audience engagement, data visualization, and innovative storytelling across all industries, from news and sports broadcasting to entertainment and advertising.
I’d like to learn more about the Zero Density Academy. What role does this initiative play in your overall strategy, and how has it impacted your user community and adoption so far?
We deliver industry-shaping solutions, and we want to provide everyone with a learning platform to master them. That's where the Zero Density Academy comes in: our online learning platform for broadcast graphics and virtual studio production designers. It offers centralized, structured training for Reality with comprehensive, hands-on video courses. Users can turn the knowledge they accumulate from
Zero Density Academy into hands-on experience with the Open Studio License Program. Through this initiative, anyone can request self-learning licenses and turn theory into practice.
“Our top-tier broadcaster client demolished their physical studios and built an additional green screen after using our system for two years in a day-to-day live format”
Zero Density Academy currently has over a thousand users, and this number is increasing daily. With Zero Density Academy, we are moving industry expertise forward and welcoming future professionals with constantly updated courses and solutions.
Graphics are a key element in the broadcast world. What are the main challenges you observe that organizations are facing when implementing any on-air graphics solution?
When implementing on-air graphics solutions, organizations can experience several challenges, primarily due to system complexity. Some solutions fail to evolve, offering few updates or new features and limiting creative possibilities. Adding to these difficulties, organizations frequently face scenarios where solutions overpromise on flexibility, features, or integrations but fall short in real-life use. This is compounded by the steep learning curve of non-intuitive systems, which impacts onboarding and increases dependency on expert users. The lack of support for game or rendering engines, such as Unreal Engine, is a major setback as real-time rendering becomes increasingly crucial for immersive storytelling. Finally, many tools are not
designer-friendly, alienating motion designers, and weak after-sales support can leave issues unresolved, impacting broadcast reliability and team confidence.
However, as these problems have become more apparent, next-generation solutions are emerging to eliminate them.
We are addressing these challenges by offering a comprehensive and forward-thinking solution with Lino, our template-based workflow for all broadcast graphics, using Unreal Engine 5. To eliminate system complexity and maintenance issues, our new workflow features a modern, streamlined architecture designed for quick deployment, easy updates, and efficient maintenance across on-air graphics. Unlike legacy systems, the Lino workflow is a future-ready solution working with Unreal Engine's Motion Design Mode, ensuring continuous evolution
and benefiting from the latest advancements in real-time graphics and rendering performance. This approach also drives a high ROI by providing a single, template-based workflow for all broadcast graphics, including on-air, video wall, Virtual Studio, Augmented Reality and Extended Reality, consolidating operations, and reducing overhead, thus delivering long-term value.
“Lino workflow is the only broadcast graphics solution deeply integrated with Unreal Engine’s Motion Design Mode”
We are reaching out with transparent commitment, working closely with customers during onboarding to align expectations with practical demos. Lino workflow significantly reduces the learning curve for both operators and designers, offering an intuitive workflow, a clean UI, a preview environment, and a drag-and-drop playout interface, thereby minimizing the need for extensive training. As a forward-thinking industry
partner, we help our users with an open ecosystem and seamless integration, being fully API-ready and supporting MOS integration with NRCS systems, ensuring compatibility with evolving broadcast pipelines. The Lino workflow is the only broadcast graphics solution deeply integrated with Unreal Engine's Motion Design Mode, empowering motion designers to work directly within Unreal Engine with a familiar skillset. This allows users to create state-aware, dynamic templates using real-time rendering power without any third-party bridges. Finally, Zero Density ensures reliable global support through strong pre-sales guidance, onboarding, training, and 24/7 assistance, ensuring customers are never alone when launching new shows or troubleshooting issues.
In line with this, what are your clients asking for the most right now?
In the fast-paced broadcast industry, broadcasters need to act at the speed of news and maintain audience attention as competition gets fiercer and attention
spans decrease. Knowing this, we are redefining news broadcasting by providing solutions that blend creativity with technology, efficiency with reliability, and visual impact with enhanced storytelling.
We provide a unified ecosystem of best-in-class, easy-to-use solutions for designing and controlling all types of graphics using Unreal Engine 5. For the last decade, we have been developing innovations that set industry benchmarks. Now, with the Lino workflow, all content design and control can be handled with a single, unified solution. We also provide camera and talent tracking, as well as engine hardware, to elevate virtual studio productions.
How do you see the role of Unreal Engine evolving in the broadcast and live production industries?
Unreal Engine’s role in broadcast and live production is rapidly evolving from a game development tool to a cornerstone of virtual studio production, on-air graphics, Augmented Reality, and Extended Reality workflows, thanks to our solutions. Its real-time rendering capabilities enable broadcasters to create dynamic, photorealistic virtual sets and interactive graphics instantly, streamlining broadcasts and reducing reliance on traditional physical sets and extensive post-production.
Now Unreal Engine Motion Design has joined the fray, and Lino, our template-based workflow for all broadcast graphics, adds a whole new layer of graphics design and control by combining it with Reality Hub, our advanced web-based control solution.