Best of 2023 Awards ebook


A special guide to all the nominated & winning products from Future's 2023 Best of Show Awards


WINNER Accedo Accedo One Marketplace

In September 2022, Accedo officially committed to becoming one of the forerunners of the sustainability movement in the OTT space, and to contributing to the transformation towards a sustainable future. With our overarching goal of bringing sustainable choices to the customer in a more transparent way, we started an initiative to gain a closer understanding of what sustainability means for our customers, as well as for key players in our partner ecosystem.

Ahead of IBC, we launched new sustainability criteria in the Accedo One Marketplace. These enable video service providers to make buying decisions based not only on product features, but also on sustainability KPIs such as carbon impact, audience influence, and community impact. The Accedo One Marketplace showcases five key criteria for listed vendors, allowing customers to make educated decisions on who they choose to work with:

• DPP Committed to Sustainability Program score
• SBTi committed or approved targets
• CDP score
• UN Global Compact membership
• Sustainability report (availability of an annual report)

These criteria bring together a number of different aspects of sustainability, providing customers with a solid level of transparency into the sustainability status of a potential supplier. Accedo's partners Brightcove and Cleeng have both demonstrated their commitment to sustainability, as is evident from their Accedo One Sustainable Marketplace listings.

Additionally, beyond being classed as sustainable for commitments made at a corporate level, a supplier may also provide products and services that actively help to reduce the carbon footprint of an OTT service. Bitmovin's Player ECO mode and Humans Not Robots' HNR to ZERO platform, both from Accedo partners, are listed on the Accedo One Sustainable Marketplace and fall into this sustainable-products category.
With our Sustainable Marketplace, we're inviting everyone to contribute to the next generation of sustainable video services. We envision that any streaming company should be able to build a sustainable video platform, but they need to know which vendors and products have made a commitment to sustainability. Accedo One Marketplace now:

• Highlights sustainable products & features.
• Shows the level of sustainability commitment of Accedo One vendors from a corporate perspective.
• Enables filtering to surface the most sustainable integrations.
• Enables users to book a consultation with Accedo to help on their sustainability journey.

Sustainable products & features can go beyond just technology, so we split this into three tracks: Technology (showcasing how each component can contribute to an optimised video production and distribution setup), Content (working with partners who can help improve diversity and sustainable storytelling in a content strategy), and Audience (involving the audience to validate assumptions and sustainability features).

With its combined wealth of experience and sustainability commitments, Accedo One's ecosystem of partners, together with Accedo, is ideally placed to help customers progress on their sustainability journey. The Sustainable Marketplace is where it all comes together, surfacing the level of information required to support the customer in making sustainable choices.


WINNER Agile Content Agile Live

Agile Content's Agile Live is a new unified production solution that enables true remote and distributed TV production using cloud and internet technologies. Using readily available technologies, innovative GPU deployment, and a new transport protocol, it offers a transformative solution that delivers high-quality programming, significant cost savings, and a reduction of up to 90% in CO2 emissions for broadcasters and content producers.

The solution does all of this by replacing traditional broadcast stacks with cloud-based applications for remote, distributed production. It uses standard internet connections to produce and contribute content in real time from any live broadcast cameras and/or consumer devices. Agile Live is completely cloud agnostic and can be deployed on bare-metal COTS hardware, within private data centres, or in public clouds.

The GPU- and cloud-based solution features web-based proxy editing for fast-turnaround live post-production, HTML graphics (which can be burnt in or delivered separately), 3D video effects, and interactive graphics or personalised visuals. All of this comes together to allow broadcasters and content producers to widen their reach by more easily facilitating viewer-centric formats on platforms like YouTube, Twitch, Kick or TikTok.

A key innovation that helped realise Agile Live was the development of an open-source transport protocol: EFP (Elastic Frame Protocol). EFP not only adjusts quality based on bandwidth and preferences but also achieves perfect camera synchronisation. This enables the seamless integration of diverse video sources on the fly, allowing any connected device to contribute to the production process.
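EFP itself is an open-source library; as a language-agnostic illustration of what "perfect camera synchronisation" involves (this sketch is hypothetical and is not EFP's actual API or wire format), the core idea of aligning frames from multiple contribution sources by capture timestamp can be sketched as:

```python
# Hypothetical illustration of timestamp-based frame alignment, the
# kind of synchronisation a contribution transport protocol must
# provide. NOT EFP's real API; all names here are invented.
from collections import defaultdict

def align_frames(frames, tolerance_ms=10):
    """Group frames from multiple cameras whose capture timestamps
    fall within `tolerance_ms` of a common tick.

    frames: list of (camera_id, capture_ts_ms) tuples.
    Returns {tick: {camera_id: capture_ts_ms}}, keeping only ticks
    where every camera contributed a frame (a complete set).
    """
    buckets = defaultdict(dict)
    for cam, ts in frames:
        tick = round(ts / tolerance_ms) * tolerance_ms  # snap to grid
        buckets[tick][cam] = ts
    cameras = {cam for cam, _ in frames}
    return {t: g for t, g in buckets.items() if set(g) == cameras}

# Three cameras producing frames at ~40 ms intervals (25 fps) with
# small clock jitter; camera C drops its frame around the 80 ms mark,
# so that tick cannot be emitted as a synchronised set.
frames = [
    ("A", 0), ("B", 2), ("C", 1),
    ("A", 40), ("B", 41), ("C", 39),
    ("A", 80), ("B", 82),
]
synced = align_frames(frames)
```

Only complete, time-aligned sets are passed downstream, which is what lets low-bitrate sources from consumer devices be mixed frame-accurately alongside broadcast cameras.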
Where traditionally the synchronisation, mixing and curation of live content have been confined to the local domain of high-bitrate SDI, Agile Live's internet-based infrastructure allows a diverse range of sources, operating at low latency and low bitrate, to be configured for mixing, graphics, and audio, with action commands sent to the high-quality delivery feed in the cloud for seamless playout.

Agile Live makes engaging increasingly disparate audiences across multiple platforms easier by transforming today's production workflows. For example, it can create completely synchronised TikTok-curated versions of programming elements with portrait-style videos and other platform-specific adjustments.

Swedish Television (SVT), Sweden's public broadcaster and a long-term partner of Agile Content, used Agile Live to successfully broadcast its first entirely cloud-based remote live sports event in July 2023: the Royal Rally of Scandinavia, a new 16-stage, 185 km round in the 2023 season of the European Rally Championship. With an architecture designed to bridge production and distribution workflows, Agile Live's ability to lower production costs meant that SVT could cover the entire three-day event with a significantly reduced budget. Additionally, following the event, SVT concluded that its partnership with Agile Content on the rally had substantially cut the production's environmental footprint compared with rolling out standard solutions.

Agile Live, Agile Content's GPU- and cloud-based production solution, is an efficient, eco-friendly and cost-effective platform that future-proofs production for broadcasters and for current and emerging streaming platforms.


WINNER Amazon Web Services Amazon IVS Real-Time Streaming

Low-latency live streaming has quickly become vital to the audience experience, enabling content creators to connect with live audiences in a more direct, personal way and to ensure memorable viewing experiences. To this end, Amazon Web Services (AWS) recently launched an update to its fully managed live streaming solution, Amazon Interactive Video Service (Amazon IVS), which enables the delivery of real-time live streams to 10,000 viewers, with up to 12 hosts and a latency of under 300 milliseconds from host to viewer.

With the new Real-Time Streaming capability of Amazon IVS, participants can join a virtual resource called a "stage" as either viewers or hosts in a video app. Multiple hosts can collaborate on a stage, and up to 10,000 participants can watch it. Hosts can also promote an audience member "on stage", turning them from a viewer into a host. The new functionality makes it easier for Amazon IVS customers to build more dynamic, interactive video experiences for a broader range of latency-sensitive environments, like social media applications or live auctions, without having to build custom workarounds using external tools.

Ideal for user-generated content platforms, retail, education, and other applications featuring interactive live-streamed video, the new Amazon IVS Real-Time Streaming capability also includes layered encoding, or simulcast. Amazon IVS automatically sends multiple video and audio variations when

hosts publish to a stage, so viewers can enjoy the stream at the best quality possible for their respective network connections. Commenting on the new update, live stream shopping platform Whatnot shared: "Scaling live video auctions to our global community is one of our major engineering challenges. Ensuring real-time latency is fundamental to maintaining the integrity and excitement of our auction experience. By leveraging Amazon IVS Real-Time Streaming, we can confidently scale our operations worldwide, assuring a seamless and high-quality real-time video experience across our entire user base, whether on web or mobile platforms."

As audience expectations for live, high-quality, interactive streams continue to grow, Amazon IVS is helping content creators keep pace. Amazon IVS Real-Time Streaming is a game-changing new capability.
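Layered encoding works because the player can pick the highest rendition that fits the bandwidth it measures. A minimal sketch of that viewer-side selection logic (the rendition ladder and function names here are hypothetical illustrations, not Amazon IVS's actual client API):

```python
# Hypothetical simulcast rendition ladder, ordered highest first;
# bitrates in kbps. Sketches the selection logic only, not the real
# Amazon IVS client API.
RENDITIONS = [
    {"name": "1080p", "bitrate_kbps": 4500},
    {"name": "720p",  "bitrate_kbps": 2500},
    {"name": "360p",  "bitrate_kbps": 800},
    {"name": "160p",  "bitrate_kbps": 250},
]

def pick_rendition(measured_kbps, headroom=0.8):
    """Choose the highest-bitrate rendition that fits within the
    measured bandwidth, keeping `headroom` as a safety margin so a
    small dip in throughput does not immediately cause rebuffering."""
    budget = measured_kbps * headroom
    for rendition in RENDITIONS:
        if rendition["bitrate_kbps"] <= budget:
            return rendition["name"]
    return RENDITIONS[-1]["name"]  # fall back to the lowest layer
```

Because the host uploads all layers at once, each of the 10,000 viewers can make this choice independently without any transcoding step adding latency in the middle.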


WINNER Appear NEO Series

The cloud is often celebrated as a cutting-edge innovation, and Appear is a 'cloud enabler', providing ground-to-cloud and cloud-to-ground solutions that enable its customers to benefit from cloud-based workflows and media delivery. However, as the needs of the media industry evolve, a 'de-cloudification' trend has emerged. While the cloud brings exceptional benefits, it also presents limitations, particularly in terms of resource consumption, cost, and energy efficiency for processing-hungry applications like Adaptive Bitrate (ABR) transcoding. This has prompted a return to hybrid and on-premise solutions that offer a more balanced approach, catering to requirements for flexibility, reliability, and cost-effectiveness.

Appear's NEO Series is a leading on-premise solution that enables organisations to optimise their approach to media infrastructure. Appear's on-premise live transcoding solution guides broadcasters and operators through the intricate process of technological realignment. By facilitating the transition from cloud to on-premise or hybrid, Appear enables media organisations to redefine their strategies, achieving cost and performance efficiencies in the process.

The NEO Series harnesses HEVC and UHD capabilities within an on-premise solution. This approach ensures seamless scalability and future upgrades while minimising resource consumption. A crucial aspect of the NEO Series' appeal is its ability to cater to a diverse range of broadcasting scenarios. It enables running a channel line-up on-premise while leveraging the cloud for intermittent channels or for initial testing of new channels, blending on-premise and cloud strengths. This creates a flexible hybrid broadcast approach, maximising adaptability and overall infrastructure strategy.

One notable case study is Sappa, one of Sweden's largest telcos, serving 100+ TV channels.
Working alongside Magine Pro, Appear facilitated Sappa’s migration from a cloud-based workflow to the NEO20 on-premise live transcoding solution. This integration seamlessly dovetailed with Magine Pro’s Media Cloud Packaging and OTT

platform, ensuring a comprehensive and efficient transition. Marcus Sundh, Head of Business Development at Sappa, noted: "Appear has been our trusted encoding and distribution partner for over a decade, and its NEO20 solution was the perfect choice for us when looking to move our OTT transcoding from the cloud to on-premise. It is both HEVC and UHD enabled, so when we upgrade with UHD channels in the future, it is just a matter of adding hardware accelerated boards without the need for any extra rack space, and with very low power consumption."

Similarly, A1 Croatia, a major telco serving millions of customers, partnered with Appear to migrate its ABR transcoding and distribution platform to the NEO Series. Vedran Krhač, Head of TV and Video Services at A1 Croatia, noted: "Migrating to All-IP OTT distribution, Appear’s server-based NEO Series was the ideal fit for our live channel encoding and distribution. The NEO Series' ease of integration, cost efficiency, and scalability means we can more effectively manage the transcoding and delivery of our live channels to our customers, while significantly reducing shipping and energy costs."

Broadcasters and operators are increasingly seeking solutions that marry innovation with practicality, while delivering maximum efficiency, minimal power consumption, and seamless service delivery. The NEO Series strikes this delicate balance.


NOMINEE Ateliere Creative Technologies Ateliere Connect

The challenges facing content owners in the media and entertainment industry have never been more complex. The demand for content is skyrocketing, audience preferences are evolving at an unprecedented pace, and competition is fierce. Amidst this ever-changing landscape, businesses need more than mere intuition to succeed; they need data-driven insights that empower them to navigate these challenges effectively.

Enter Ateliere's data analytics feature: a solution designed so customers can easily measure media supply chain volume and performance. Powered by AWS QuickSight, the new functionality aggregates all processing events, from acquisition to distribution, into a single, accessible data warehouse.

At the heart of this solution is the "at-a-glance" dashboard: a beacon of clarity amidst the complexity. High-level Key Performance Indicators (KPIs) are presented in a user-friendly format, offering customers unprecedented visibility into titles processed, rejected, and delivered. This instant insight enables stakeholders to make calculated adjustments and projections with ease.

In addition, the system provides detailed reports that list every event by type, status, quality, technical specifications, and provider. It also tracks system events to identify peaks in content production. This gives businesses incredibly helpful information to forecast and plan for the future without needing a data scientist. Beginning with the production or acquisition of an asset, the platform

will watch each processing event, providing users with detailed information about the asset’s journey through the media supply chain. The result is a contextualized window into workflows, helping businesses isolate inefficiencies. For example, the system might reveal a higher-than-usual number of defects or failed events with a certain partner. It could also help save on storage by identifying assets that can be archived because they’re not processed as often as initially thought. Users can set email alerts to be notified of specific events or thresholds reached, a critical resource when there are preset operational parameters that users need to stay within. Results are fully exportable in formats like PDF or CSV to share further or import into other business platforms for an even broader view, and dashboards are fully customizable and shareable as well.
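The kind of at-a-glance KPI roll-up described above can be illustrated with a small aggregation over processing events (the event shape and field names below are hypothetical, chosen for illustration, and are not Ateliere's actual schema):

```python
# Hypothetical roll-up of supply-chain processing events into the
# high-level KPIs a dashboard would surface. Field names are invented.
from collections import Counter

def kpi_summary(events):
    """Aggregate processing events into dashboard-style KPIs.

    events: list of dicts, each with a 'status' field that is one of
    'processed', 'rejected', or 'delivered'.
    """
    counts = Counter(e["status"] for e in events)
    total = len(events)
    return {
        "processed": counts["processed"],
        "rejected": counts["rejected"],
        "delivered": counts["delivered"],
        # Rejection rate is the kind of figure that flags a problem
        # partner or workflow step at a glance.
        "rejection_rate": counts["rejected"] / total if total else 0.0,
    }

events = [
    {"title": "Movie A", "status": "processed"},
    {"title": "Movie B", "status": "rejected"},
    {"title": "Movie C", "status": "delivered"},
    {"title": "Movie D", "status": "delivered"},
]
summary = kpi_summary(events)
```

An elevated rejection rate for a given provider is exactly the sort of signal the detailed reports then let an operator drill into.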


NOMINEE Audinate Dante Connect

With a growing need to create more content faster without sacrificing quality, many broadcasters are turning to cloud-based production to meet demand. These platforms allow news, sports, and entertainment broadcasters to provide real-time audio and video experiences for less money. Dante Connect is a software application suite that facilitates cloud-based broadcast production. It is a powerful platform for A1s and mixing engineers, combining the familiarity, ease of use and tight synchronization of Dante audio with seamless connectivity to centralized production tools running on cloud instances.

Comprising Dante Virtual Soundcard, Dante Gateway, Dante Domain Manager, and Dante Controller, Dante Connect allows broadcasters to rethink how they approach production. By letting customers take advantage of Dante audio devices anywhere in the world, it enables them to create a cloud-based network of these devices and manage it from wherever their production staff are based. The ability to create efficient remote production workflows directly from the hundreds of thousands of on-premise Dante devices has the potential to revolutionize remote audio production.

With Dante Connect, Dante audio products can send up to 256 channels of synchronized, high-fidelity audio directly from on-premise Dante devices to best-of-breed or customer-preferred production software in the cloud, reducing the need for mobile studios and trucks. Audio can be distributed globally within the cloud, allowing different teams to use the same audio within multiple applications and locations to address different audiences, languages, and aspects of the production process. Source audio can be sent from remote sites directly to the cloud so mixing engineers can do their jobs from wherever they are located, again without the need for expensive outside broadcast truck deployments. Dante is the de facto networking standard

for the professional AV industry, and now, using Dante Connect, broadcasters can put more devices to work for more productions, on- or off-site.

Picture a week filled with overlapping races: a Formula 1 Grand Prix in Monaco, a MotoGP race in Valencia, and a World Rally Championship event in Finland. Each of these races, happening in different locations and time zones, requires a unique audio setup, and Dante Connect is the key to managing these complex requirements. On the ground at each race, Dante-enabled microphones capture the engines' roar, the crowd's cheers, and the commentary from the pit lane. These microphones are connected to Dante-enabled communication systems, creating a robust on-premise audio network that can capture and process high-quality audio in real time.

This audio is then seamlessly transmitted to a centralized production hub via Dante Connect. Here, cloud-based Dante-enabled mixers, digital signal processors (DSPs), and intercom systems take over. These devices allow technical staff to control and fine-tune the audio broadcast from a single location, regardless of where the races occur. This seamless integration of on-premise and cloud-based devices creates a flexible, efficient, and scalable audio ecosystem that can adapt to the needs of any broadcast.


NOMINEE Avid Avid | Stream IO

Avid | Stream IO is a new scalable, subscription-based ingest and playout solution that delivers next-generation media production. Software-based, it provides unparalleled speed, adaptable media format support and flexible deployment capabilities. With a flexible architecture that can be configured to ingest or play out IP streams and SDI streams, Avid | Stream IO allows content producers to migrate from legacy workflows and on-premises deployment to cloud and IP workflows at their own pace.

Avid | Stream IO offers flexible channel configurations supporting popular video formats, codecs and resolutions, enabling media companies to leverage new and emerging technologies within their production workflows. It provides complete deployment flexibility, supporting both off-the-shelf certified hardware for on-premises studio environments and standard cloud instances. The new solution also allows media companies to increase efficiency by combining different ingest sources in a single configuration. Ideal for live entertainment and multi-camera productions, Avid | Stream IO supports all common production

formats, including SDI at launch, with support for compressed IP streams (SRT/RTMP/NDI) and uncompressed IP with SMPTE ST 2110 due next year. And as a next-generation product, Avid | Stream IO offers all the best capabilities of Avid's proven hardware-based server solutions, AirSpeed® and FastServe®, while expanding support for emerging IP standards and higher-quality formats such as 4K and UHD.

Avid | Stream IO is designed to help production teams accelerate their content creation pipelines and bring their proven end-to-end Avid workflow deployments into the future. Its flexible software architecture supports on-prem, cloud and hybrid deployment models, at a lower cost per channel than traditional hardware-based systems. Avid | Stream IO also supports Avid's proven fast-turnaround workflows, including shot-listing, craft editing and logging, through its unmatched integration with Avid's MediaCentral® production platform, minimizing disruption to production and enabling staff to create and distribute their content faster than ever. With all these benefits, the speed and flexibility offered by the new Avid | Stream IO make it a sure-fire winner.


NOMINEE Backlight ftrack Studio

Backlight's ftrack Studio is an Academy Award-winning shot management, production tracking, and media review platform. It brings a powerful, customizable, and visual approach to the most complicated tasks of media production. In this real-world success story, we demonstrate how ftrack Studio transformed the workflow of the award-winning post-production studio Cine Chromatix, which was recently recognized with the award for Best VFX at Lola 2023 for its work on All Quiet on the Western Front. Cine Chromatix embarked on a journey with ftrack Studio that redefined its approach to production, helped maintain seamless collaboration across dispersed teams, and enhanced the studio's various processes.

Automating collaborative pipelines

ftrack Studio isn't just an out-of-the-box production tracking tool; its powerful API empowers studios to create custom workflows tailored to their processes. Cine Chromatix used it to implement a workflow that automatically syncs shots between the studio's Italian and German locations. When a task is assigned to an Italian colleague in ftrack, it automatically gets transferred to their ftrack workspace. And when the team in Italy needs a shot reviewed in ftrack, the footage easily pops up in front of the right person in Berlin.

The seamless integration between locations has had a transformative impact on productivity and collaboration for Cine Chromatix. By leveraging ftrack's real-time updating and automation features, the team has eliminated the delays and inefficiencies often associated with managing cross-border projects. "In this way, ftrack Studio acts to keep everybody informed about the state of their work, but also enables them to automate otherwise repetitive tasks," says Markus Frank, Head of the VFX department at Cine Chromatix.

Custom workflows with third-party integrations

Cine Chromatix also uses ftrack Studio to create integrations with popular, day-to-day DCCs. "We use Studio for things

like QC tracking between our Editing and Color Grading departments, thanks to integrations with software like Assimilate Scratch, Avid, and DaVinci Resolve, powered by ftrack's API," says Frank. "Our Leipzig colleagues also use it to organize and automate tasks for their virtual studio productions. ftrack Studio is really quite versatile!"

Media reviews in ftrack Studio

Cine Chromatix has also used the API to create custom parameters in Studio's review and collaboration tools, which are included as standard. These custom review tools allow Cine Chromatix to perform advanced creative reviews, with the ability to include LUTs, aspect ratios, and tailored burn-ins during the media review process. With these tools, Cine Chromatix can achieve a more nuanced, accurate evaluation of its media and have creative conversations tailored to meet its clients' expectations. Frank adds, "With ftrack Studio, we not only get creative reviews but also seamless interaction with directors, even when in-person meetings are not possible."

With the power of ftrack Studio's single source of truth for collaboration across shots, reviews, budgets, production plans, tasks, team management, and more, studios across Europe like Cine Chromatix are reducing admin tasks and refocusing on their craft of creating quality content.


NOMINEE Black Box Emerald® AV WALL

At IBC 2023, Black Box is previewing the Emerald® AV WALL software package, the latest addition to the established Emerald® KVM-over-IP platform. EmeraldAV WALL is designed to meet user requirements for more effective workflows and easy collaboration by incorporating video walls. Simplifying remote access, user management, and content delivery, EmeraldAV WALL for the first time unites KVM with AV video wall processing in one ecosystem.

Complementing and expanding upon the features and functionality of the Emerald DESKVUE receiver, EmeraldAV WALL enables users to display source content from their KVM system on a 2x2 video wall. Emerald DESKVUE users in the PCR, SCR, OB van, or any remote location can view sources on their desktop monitors and directly share content with their workgroup on a video wall. Content can come from any source connected to the Emerald DESKVUE receiver, such as physical servers via Emerald transmitters, virtual machines, cloud applications, or third-party systems connected via the Emerald API, such as media players or automation devices. User authentication, access rights, video compression, and more are managed and monitored via one central management system, Boxilla®.

At IBC, Black Box demonstrates a modern, simplified workflow for the broadcast, media, and entertainment industry. A desktop keyboard/mouse workspace uses Emerald DESKVUE to simultaneously display 16 sources in any arrangement on one ultra-large 5K monitor with seamless, command-free switching. With EmeraldAV WALL installed on the DESKVUE receiver hardware, users can monitor the Emerald KVM system at their personal workstation while sending important content that needs to be viewed by a group to a 2x2 video wall. When a concern arises that requires the attention of others, users can switch the video wall to display the content in question for review by the team.
Content is controlled via API commands using control solutions such as Black Box® ControlBridge® touch panels or other third-party devices for quick and convenient switching of video wall content at the touch of a screen, or via the operator's workspace using DESKVUE.

With an expected release in early 2024, EmeraldAV WALL gives DESKVUE users the ability to access all source content on the KVM network, as well as H.264/H.265 streams, including webcams. It will be available free of charge to all existing Emerald DESKVUE customers and for purchases made up to the end of 2023; after this period, all new customers will need to purchase an EmeraldAV WALL license.

EmeraldAV WALL will be upgraded with software packages released to provide enhancements and exciting new features, such as support for expanded video wall sizes, glide-and-switch access to video wall displays, PTZ (pan, tilt, zoom) control of webcams, expanded source content support, multiview video wall layouts, and more. Users will experience a new level of customization, responsiveness, reduced complexity, and efficiency in working with numerous systems, and will benefit from a lower cost of ownership while preserving existing IT investments.


WINNER Blu Digital Group BluConductor

BluConductor is an end-to-end operations and project management system designed for the M&E industry. By centralizing workflows and processes, BluConductor boosts operational efficiency by up to 150%, eliminating the need for online spreadsheets, countless emails, and disparate systems. BluConductor offers a suite of tools designed to streamline and optimize various business processes, enabling large global media organizations to achieve unparalleled levels of efficiency, productivity, and decision-making. Its intuitive interface and advanced capabilities, such as automated workflows, file management, unified collaboration, chat and comments, and alerts, make BluConductor the ultimate tool for workflow orchestration.

The platform is highly customizable, adapting to each organization's unique processes rather than requiring users to adapt to it. With robust permission-based features, BluConductor ensures a secure, role-specific interface, offering a single system where internal teams, vendors, and external collaborators can seamlessly interact. Beyond standard project management features like Gantt charts and task dependencies, it offers deep integrations into virtually any system, automating workflows and delivering real-time metrics and reporting. Its ability to streamline workflows and automate repetitive tasks revolutionizes the way media organizations handle their operations, allowing them to allocate more time and resources to strategic initiatives.

Within BluConductor are sub-applications such as BluSpot, an intuitive, automated, and frame-accurate tool that has streamlined the ad marker detection process to an unprecedented level. Gone are the days of manual frame-by-frame analysis; BluSpot employs state-of-the-art technology to efficiently identify and choose frame-accurate markers for any type of audio and visual event. With BluSpot, manual review has been dramatically reduced, offering time savings and unparalleled convenience.
BluSpot also offers Compliance and Content Moderation Functionality. Leveraging the power of Artificial Intelligence, BluSpot automatically detects and lists profanity, violence, nudity,

suggestive scenes, and more. This ensures content creators and distributors can confidently uphold industry standards and safeguard their audiences, fostering a responsible and ethical media environment.

Since its inception, BluConductor has earned the trust of major players in the global media industry, including three major motion picture studios. This widespread adoption speaks to its efficacy in enhancing operational efficiency and driving growth in the digital media sector. By empowering organizations to manage projects, analyze data, and automate workflows efficiently, BluConductor has become an indispensable asset to its users.

BluConductor does not limit itself to project management alone; it also offers a diverse range of tools catering to various media business needs. The inclusion of data analytics capabilities provides invaluable insights that inform better decision-making, while its workflow automation features reduce human intervention in repetitive tasks. With BluConductor, media organizations have access to an all-encompassing solution that transforms their entire business landscape by streamlining project management.

Organizations that implement BluConductor have experienced remarkable improvements in efficiency and productivity. By eliminating manual processes and centralizing project management, they have drastically reduced the risk of errors, saved valuable time, and increased overall output. This newfound efficiency has allowed teams to focus on strategic tasks, leading to improved competitiveness and growth opportunities.


WINNER Bolin Technology EX-Ultra

Bolin elevates its leadership in the outdoor PTZ camera market with the all-new EX-Ultra 4K60 outdoor PTZ camera. It offers three image solution options for various applications: Full HD with 30X zoom, Full HD with 48X zoom and a 1-inch 4K sensor, and 20X zoom with 4K60 ultra-high resolution.

The EX-Ultra features two FPGA imaging engines outputting simultaneous, independent video streams. There are two 12G-SDI outputs, optical SDI, HDMI 2.0, and multiple IP streams, including the FPGA hardware codec FAST HEVC. All of the simultaneous video outputs have independent resolutions and frame rates. The EX-Ultra can be powered externally or with PoE++, and sends out 12V DC for accessories. And of course, it has two channels of professional, balanced audio. This is a revolution in outdoor PTZ cameras.

FAST HEVC is based on the H.264/H.265 (AVC/HEVC) open standard platform, leveraging the power of the AMD Zynq UltraScale+ MPSoC. With FAST HEVC, the EX-Ultra delivers up to 4K60, 4:2:2, 12-bit video over IP, with only two frames of latency, in just 50 Mbps of bandwidth, maximizing existing 1Gbps network IP video environments. Since FAST HEVC is based on HEVC, it can be decoded by a standard HEVC decoder, or by our EG40F FAST HEVC decoder. FAST HEVC gives broadcasters a powerful, reliable, scalable, and creative workflow option for any situation.

The outdoor EX-Ultra can withstand winds of up to 60 m/s. A built-in heater and defroster allow for an operating temperature range of -40°C to 60°C. The entire camera is IP67 rated; the connection cover and mounting bracket system also meet the IP67 standard. It has all-metal mechanical parts and an aluminum alloy body, a nitrogen-filled image module housing, and a C5 salt-air corrosion-resistant coating. These features mean there is no need for additional outdoor housings or cases for the EX-Ultra. It is a true outdoor performer.

The pan, tilt, and zoom performance of the EX-Ultra is stunning.
The 340° pan and 210° tilt move at a variable rate from 0.01 degrees per second to 60 degrees per second. The 255 presets execute at up to 100 degrees per second, variable across five different speeds, all with Zero Deviation Positioning. The EX-Ultra also supports the Free-D protocol.

The EX-Ultra is not just for permanent stadium installations or situational awareness environments. It features a quick-release plate option and can be tripod-mounted for live, mobile production. We also designed a custom vibration absorption plate for more extreme vibration and motion environments. Bolin's new EX-Ultra is the most advanced, high-performing, and rugged PTZ camera we have ever made, and we are eager for broadcasters to experience it.
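The bandwidth claim behind FAST HEVC is easy to sanity-check with rough arithmetic. The sketch below assumes a 3840x2160 raster, 4:2:2 chroma (two samples per pixel), 12 bits per sample and 60 fps, and ignores SDI blanking and protocol overhead; it is illustrative only, not a Bolin specification.

```python
# Rough uncompressed bitrate for 4K60 4:2:2 12-bit video
# (illustrative arithmetic; ignores blanking and transport overhead).
width, height, fps = 3840, 2160, 60
samples_per_pixel = 2        # 4:2:2 -> luma plus alternating Cb/Cr
bits_per_sample = 12

uncompressed_bps = width * height * samples_per_pixel * bits_per_sample * fps
uncompressed_gbps = uncompressed_bps / 1e9

compressed_mbps = 50         # the FAST HEVC figure quoted above
ratio = uncompressed_bps / (compressed_mbps * 1e6)

print(f"uncompressed ~{uncompressed_gbps:.1f} Gbps, compression ~{ratio:.0f}:1")
```

Squeezing roughly 12 Gbps of picture into 50 Mbps (about 239:1) is what lets such a feed ride on an ordinary 1 Gbps network.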


WINNER BT Media & Broadcast Wireless IP and Low Earth Orbit Satellite TV Production Demonstration

Traditionally, a live broadcast requires cabling between camera and production truck, and fixed connectivity onwards to the broadcast centre. Making the camera wireless required several radio links to carry video, return, camera control and comms. Making the truck wireless requires a large satellite antenna and expensive satellite capacity to carry the signal to the broadcast centre.

Overview
In this demonstration, we show an entirely wireless IP end-to-end system, which simplifies the camera connection to one radio link, shrinks the satellite equipment into something that can be mounted on the roof of a small car, and can be operated without the assistance of an engineer. Pictures from a TV camera are carried over a wireless IP link within the BT marquee to the demo pod, where we can see the camera pictures and remotely control the camera. The pictures are then sent onwards over bonded OneWeb and Starlink low-Earth-orbit satellites to a cloud ingest, before being returned to the marquee in Amsterdam via BT's internet gateway in London and over BT's Vena network.

How does it work?
A professional broadcast camera was fitted with an encoder developed by BBC R&D and a radio unit provided by Neutral Wireless, operating in the 2 GHz radio band. The encoder used a low-latency HEVC codec to compress the video to 10 Mbps (4:2:2 10-bit) and Zixi to protect and monitor the video transmission. An antenna and radio receiver were mounted on a tall stand in the corner of a BT marquee. The bi-directional IP link over this wireless system enabled the camera operator to roam around within the marquee while maintaining full remote camera control. The radio receiver was connected over optical fibre to the demonstration pod.

Inside the pod, a baseband unit provided the network core and software-defined radio stack, handing off the IP signals to a Gigabit Ethernet switch. The video signal from the camera was decoded for local viewing and displayed on the pod multiviewer. A camera control panel and associated master control panel were used to control the camera remotely over the wireless IP link. Ethernet cables connected the pod to a OneWeb Kymeta terminal and a Starlink standard terminal, both mounted on the roof of a BT outside broadcast truck. The terminals were small enough to fit on the roof of a car or to be placed on the ground in a newsgathering or production situation.

Why use two satellite systems?
Throughput on low-Earth-orbit systems is not perfectly constant. Using two satellite antennas on the same constellation is a good solution, but an even better solution is to use two different constellations, as demonstrated.

What's next?
This was the start of an exciting journey. Significant performance improvements are expected over the coming months as we continue to test and optimise, and over the coming years as the satellite systems mature, such as the launch of the 'v2' constellations with in-space satellite-to-satellite communications.
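The rationale for bonding two different constellations can be illustrated with a toy availability model. The dip probability below is an invented assumption for illustration; real LEO throughput behaviour is far more complex, but the independence argument is the same.

```python
# Toy model: chance the aggregate feed stays above the needed rate when
# bonding two independent satellite links vs. relying on one.
# p_dip is an illustrative assumption, not a measured OneWeb/Starlink value.
p_dip = 0.05   # assumed chance a single link momentarily drops below the rate

single_link_ok = 1 - p_dip
# With bonding, the feed survives unless BOTH links dip at the same moment.
# Using two different constellations makes the dips closer to independent.
bonded_ok = 1 - p_dip * p_dip

print(f"single: {single_link_ok:.4f}, bonded: {bonded_ok:.4f}")
```

Two antennas on the same constellation share weather and handover timing, so their dips correlate; crossing constellations pushes the joint-failure term closer to the product of the individual ones.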


WINNER Cobalt Digital WAVE RTR-64x64 12G-SDI Router

Cobalt Digital is introducing the WAVE RTR-64x64 12G-SDI router at IBC, with first shipments available immediately after the show. The key features of this SDI router are:
• Non-blocking 64-input, 64-output SDI router.
• Supports all SDI rates: SD-SDI, HD-SDI, 3G-SDI, 6G-SDI and 12G-SDI, as well as ASI and MADI.
• Uses full-size BNCs for robustness and ease of cabling.
• 4RU unit, only about 4 inches in depth (including BNCs).
• Support for RP-168 switching.
• Support for creating, storing, and running salvos.
• Web interface control via Ethernet port.
• Support for multiple control protocols, both over Ethernet and serial port.
• Redundant power supplies.
• Supports cable lengths of up to 50 meters at 12G speeds.

With the increasing adoption of uncompressed video over IP (SMPTE ST 2110), important questions would be "why build a new SDI router?" and "what is so different about this SDI router?" Let us address these questions.

For the first question, the transition to IP is far from complete. Running ST 2110 at 12G speeds is still expensive, and many plants are still SDI. Unless one needs the ST 2110 routing and mixing features, SDI is still the cost-effective way to go, and the market still needs SDI routers. The WAVE RTR-64x64 is a compact design specifically optimized for 12G-SDI, while it also handles lower SDI rates with ease. This product meets the market need at a reasonable cost. There are larger routers available with an extensive set of additional features (and the price to go with them), and small inexpensive routers, but not much in this range. Cobalt received orders for this router months before it was available. The need is there, and Cobalt is addressing it.

For the second question, the WAVE RTR-64x64 has a set of additional features that distinguish it from other routers in this class. The first is passive cooling for ambient temperatures up to 50°C. This means no fans and absolutely no noise; the router can be used in quiet environments such as production control rooms. For environments where the temperature can exceed 50°C, Cobalt offers a field-installable fan kit. This allows the WAVE RTR-64x64 to be adapted to any environment.

The second distinguishing feature is the router control. It supports several common legacy protocols for interfacing with existing panels and control systems, but it also includes a modern, fully documented RESTful/WebSocket API, facilitating any custom integration. Cobalt also offers new types of flexible WAVE control panels to suit the customer's budget and needs. Finally, the router exposes an intuitive web GUI that facilitates operation of the unit and is a viable alternative to control panels in some situations.

The WAVE RTR-64x64 is a modern take on the traditional SDI router with the Cobalt stamp of reliability, flexibility, cost-effectiveness and a five-year factory warranty.
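The control concepts above (non-blocking crosspoints plus stored salvos) can be sketched as a small state model. This is a generic illustration of router control logic; the class and method names are invented for the example and are not Cobalt's documented API.

```python
# Minimal sketch of a 64x64 router's control state: any output can take
# any input (non-blocking), and a "salvo" applies many routes at once.
# Names are illustrative, not drawn from Cobalt's RESTful API documentation.
class SdiRouter:
    def __init__(self, size=64):
        self.size = size
        self.routes = {out: None for out in range(1, size + 1)}
        self.salvos = {}

    def take(self, output, inp):
        """Route one input to one output."""
        if not (1 <= output <= self.size and 1 <= inp <= self.size):
            raise ValueError("port out of range")
        self.routes[output] = inp

    def store_salvo(self, name, mapping):
        """Store a named set of output->input routes."""
        self.salvos[name] = dict(mapping)

    def fire_salvo(self, name):
        """Apply every route in a stored salvo at once."""
        for output, inp in self.salvos[name].items():
            self.take(output, inp)

router = SdiRouter()
router.take(1, 12)                                # input 12 to output 1
router.store_salvo("show-open", {1: 5, 2: 5, 3: 7})
router.fire_salvo("show-open")
print(router.routes[1], router.routes[2], router.routes[3])
```

A real control system would drive the same state changes over the router's documented protocol rather than in-process calls.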


WINNER CuttingRoom CuttingRoom

CuttingRoom is a powerful cloud-native video editing tool that debuts as a co-exhibitor at this year's IBC. CuttingRoom has revolutionized cloud video workflows for content owners worldwide with a tool for creating engaging video content with motion graphics from live streams and cloud sources. Customers choose CuttingRoom to increase their output and reach a wider audience with engaging video content. With its highly responsive user interface and fast ingest, upload, rendering, and publishing of videos, it is the preferred cloud video platform for anyone looking for a scalable editing platform that meets today's requirements for video editing, working from anywhere, and collaboration.

The platform has a range of smart features for capturing content directly from live streams, the CuttingRoom Reporter iPhone app, or connected cloud services. Users can collaborate in real time when editing, adjusting aspects and frames, creating or importing multi-layer graphics, and creating video clips in any format required. Videos can be shared directly to any connected social media channel, creating an extremely efficient workflow for distributing content to a broad audience. The many out-of-the-box integrations make it quick and easy to use with footage from external cloud servers and for publishing content directly to favorite media platforms. New video assets can be saved in the CuttingRoom cloud platform or any connected MAM system, such as Mimir, Iconik, or VIA; to AWS S3 cloud storage buckets, Wasabi or Backblaze; to Dropbox; or to other integrated platforms.

CuttingRoom is built for access from anywhere and requires only an Internet connection. There are no limitations on the number of users, projects, incoming streams, or simultaneous outputs. No installation, updates, or maintenance is needed, as it is a true cloud-native Software-as-a-Service (SaaS) offering.
A typical use of CuttingRoom for a broadcaster is to cut directly from live video streams during premium events, like world championships or e-games events. Users can efficiently and easily create clips from connected live video sources, add branding and multi-layer graphics, create the different video aspects required, and publish directly to MAMs, VOD platforms, websites, and any social media platform. Time to market is crucial, especially for ad-driven content.

The CuttingRoom team demonstrates the platform at IBC for the first time as a co-exhibitor and will show a range of new features, such as:
• Editing with multiple video tracks.
• Animated and keyframed pan & scan with portrait mode outputs.
• Editing straight from live sources coming from MAM partners like Mimir.
• A powerful graphics engine enabling full motion graphics support in the editor.

CuttingRoom stands out from the crowd with its unique combination of getting started in a minute, finding the footage needed, editing with professional features, and publishing to any platform from the same easy-to-use interface.
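The pan & scan with portrait outputs mentioned above comes down to geometry: a 9:16 crop window slides across a 16:9 frame. The sketch below shows that derivation; the `center_x` keyframe parameter is an illustrative stand-in, not a CuttingRoom API.

```python
# Sketch: derive a 9:16 portrait crop from a 16:9 frame, as a keyframed
# pan & scan step might. center_x is an assumed keyframe value.
def portrait_crop(src_w, src_h, center_x, out_aspect=9 / 16):
    crop_w = round(src_h * out_aspect)      # use full height, narrow width
    # clamp so the crop window never leaves the source frame
    left = min(max(center_x - crop_w // 2, 0), src_w - crop_w)
    return left, 0, crop_w, src_h

print(portrait_crop(1920, 1080, 960))   # centered subject
print(portrait_crop(1920, 1080, 100))   # subject near the edge: clamped
```

Animating `center_x` between keyframes is what produces the "pan" while the output stays locked to the portrait aspect.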


NOMINEE Dalet Dalet InStream

Recording multiple video feeds simultaneously for one or more live events is hardware-intensive, and scaling it on the fly is a challenge. Organizations tend to provision for peak usage, which is expensive. Dalet InStream, the elastic IP ingest solution, lets you scale ingest in seconds in the cloud, reducing the need for peak provisioning. Pay only for what you need. A cloud-native solution, Dalet InStream is integrated with the Dalet ecosystem, enabling users to access content streams from anywhere across the operation.

Operational Impact
Dalet InStream can be deployed in a day, with no upfront deployment investment. It is fully integrated with Dalet Flex, Dalet Pyramid, and Dalet Galaxy five. Dalet InStream supports high-quality dual encoding (proxy and high resolution) with the ability to edit growing files in the Dalet Cut web-based multiformat editor. As a SaaS solution, Dalet InStream delivers continuous updates and reduces administration time and costs. There is no large one-off payment when systems age out and must be replaced; you can spread your technology budget across much more cost-effective outlays over a longer term.

User Benefits
A web-based, centralized monitoring user interface allows users to securely monitor feeds, schedule ingests and crash-record. An extensive API allows third-party systems to integrate smoothly.

Key Functions at a Glance
• Dynamically scale ingest capabilities based on immediate needs, while maintaining broadcast-quality capture and formats (high resolution and proxy generation).
• Bring cloud-native IP stream ingest functions into the Dalet Flex media asset management and content supply chain platform, as well as the Dalet Pyramid and Galaxy five news production ecosystem, supporting growing-file workflows and enabling users to access content faster.

• Manage both streaming ingest and on-premises IP and SDI ingest in a centralized, intuitive user interface.
• Support a wide range of broadcast feed formats, including SRT, NDI and RTMP from web sources, Zoom and backpack solutions, to ingest live content from the field.

The linchpin of a multichannel ingest cloud platform like Dalet InStream is scalability and elasticity. As your ingest workload shifts and evolves, you can rightsize your operations on a minute-to-minute basis. No matter how large you grow, you only pay for the data pipeline you are using. On average, Dalet InStream users can save between 50 and 80 percent of their current on-premises costs. So if you are covering a special event like the Olympics or the World Cup, where the number of simultaneous ingest sources dramatically increases, Dalet InStream scales up the sources when you need them. There is no need to invest in additional infrastructure and hardware such as servers that will soon become outdated.

In conjunction with Dalet InStream, customers can leverage tools such as the web-based media editor Dalet Cut for live editing, fast-tracking assembly of content for highlights, social sharing, or programs, as well as distribution; all in the cloud.
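The elastic-versus-peak economics described above can be made concrete with a toy model: bill only the worker-hours demand actually requires, rather than provisioning for the worst hour all day. The schedule and one-feed-per-worker capacity below are invented for illustration and are not Dalet pricing.

```python
# Toy autoscaler economics: size ingest capacity to demand hour by hour
# instead of provisioning for the peak. Figures are illustrative only.
def hours_billed(feeds_per_hour, capacity_per_worker=1):
    # elastic: spin up just enough workers each hour (ceiling division)
    return sum(-(-f // capacity_per_worker) for f in feeds_per_hour)

schedule = [2, 2, 2, 40, 40, 2, 2]   # a big event spikes mid-day
elastic = hours_billed(schedule)
peak_provisioned = max(schedule) * len(schedule)
savings = 1 - elastic / peak_provisioned
print(f"elastic {elastic}h vs peak {peak_provisioned}h -> {savings:.0%} saved")
```

Even this crude model lands in the 50-80 percent savings band the text quotes, because peak provisioning pays for the spike all day long.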


WINNER Dina Dina

Dina fills the gap for journalists and storytellers on the move who need a digital-first newsroom system. Dina allows journalists to create and publish stories in the newsroom from their favourite web browser, and in the field from the Dina Mobile application, with a live link into the newsroom to track schedules, chat and control when a story goes live.

Dina announced a range of new features for the IBC show, including:
• An order management system empowering reporters, producers and editors to order needed resources from departments and teams.
• An extensive built-in tool for daily and long-term editorial planning. With Dina Spaces, newsrooms are empowered to thoroughly plan and manage stories and rundowns related to distinct topics. A clear overview of the day's tasks and stories ensures effective monitoring of both long-term content initiatives and day-to-day news planning.
• Multi-studio support for rundowns, allowing producers and directors to select their preferred studio for rundown dispatch.
• A collaboration with TRINT enabling users to seamlessly select text from a TRINT transcript and transform it into a news story for Dina.
• An integrated workflow with LiveU and Mimir. Reporters in the field can now get video recorded from LiveU directly into their stories. The media is immediately available in both Dina and Mimir while it is being recorded, ready for editing and publishing.
• New chat features for improved collaboration and transparency in the newsroom and with reporters in the field.

Other key features include:
• Create stories, edit, take photos and videos, and publish from anywhere.
• Push notifications for story assignments to reporters in the field.

• Monitor news rundowns to see what is on air, with a countdown and more.
• Upload media content to news stories, write, edit, add photos and videos, and publish stories from the palm of your hand.

With Dina, journalists and storytellers can swap between the newsroom web interface and the Dina Mobile app for story creation, planning and publishing to linear/live shows, LinkedIn, Twitter, Facebook, web CMS systems, and other destinations, as well as for collaboration and communication using chat. With the newest update to Dina Mobile, the communication experience reaches a new level. App users can chat one-to-one, in groups, teams, and departments. Users can also engage in chats connected to a specific news story or a newsroom rundown.

Dina changes how journalists work, with innovative features designed to revolutionise how you communicate and collaborate, making it simpler and more efficient to keep everyone connected and informed. Dina is a true Software-as-a-Service delivery, with new features added for end-users continuously and with no downtime on the system. Upgrade projects are never required. Dina can connect to on-premises devices via the MOS gateway and integrates with news feed providers, MAM systems, graphics systems, automation systems, and more. It also has an open API for integrations.


NOMINEE Edgio Uplynk

The future of digital broadcasting, especially for high-profile sporting events, hinges on a dilemma: delivering near-real-time streaming experiences while maximizing revenue through targeted digital ad engagements. A common pitfall for many broadcasters is sacrificing either viewer experience through excessive latency or advertising revenue through lower reliability. Getting the balance right is crucial, for any flaw in the stream resulting in missed moments could be hugely damaging at a time when there is more competition than ever amidst a worsening economic climate. Reducing latency for sports fans has long been a big talking point, and there have been numerous examples of ultra-low-latency streaming. However, none have been applied at scale for major sporting events, due to a) the significant risk to advertising revenues through less reliable server-side ad insertion (SSAI), and b) the marked increase in operational costs.

Solution
Edgio's Uplynk solution maintains a critical balance between reduced latency and the reliability of essential features like SSAI and personalized content delivery (essential for delivering the best stream to each viewer regardless of location or device). Uplynk enables broadcast-quality live streaming with full-featured, advanced customization and monetization capabilities. Edgio pulled apart and optimized every stage of the Uplynk streaming workflow, including Encode+, Live, Storage, and Smartplay, to achieve a reduced-latency solution that saves time and costs while maintaining the highest video quality.

Encode+ offers granular controls for 4K UHD content or live sports, adjusting resolution, frame rates, slice size, markers, multi-pass, and multi-DRM. This streamlined encoding cuts down processing time, leading to reduced latency.

Live optimizes live streaming with reduced latency, ensuring seamless integration with rights-holders' existing workflows.
Key features include preserving metadata and messaging for triggering ad breaks, blackout markers, and automatic availability of live content for on-demand playback; all contributing to a more immediate and engaging viewer experience.

Storage capitalizes on Edgio's unique position as the only streaming platform with its own content delivery network (CDN), allowing unprecedented control over latency. Built-in cloud storage also eliminates the need for third-party solutions, maximizing efficiency and flexibility.

Smartplay is the final cog in Uplynk's latency reduction machine. It allows fine-tuning of ad breaks for every viewer with targeted ads or break durations. Edgio's deep understanding of the mechanics of advanced advertising for live streaming enabled it to create a highly efficient workflow that is incredibly reliable at scale, ensuring that Uplynk's quest for reduced latency doesn't compromise business logic.

Result
By drastically simplifying the operational challenges of large-scale live broadcasts, Uplynk allows broadcasters to provide reduced latency in a cost-effective way without sacrificing ad revenues or reliability. Uplynk delivers sub-15-second latency, comparable to the long-held experience of linear TV, without compromising SSAI or content delivery. Amid economic uncertainty, this solution enables sports rights-holders to efficiently deliver high-stakes live events to millions of concurrent viewers, ensuring a better, more engaging viewer experience while continuing to maximize ROI. Edgio's reduced latency sets a new standard in the industry, meaning customers can focus on audience growth, creative decision-making, and business differentiators.
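A sub-15-second glass-to-glass figure is plausible when you tally a typical segmented-streaming latency budget. The stage values below are generic assumptions chosen for illustration, not Edgio's published measurements; they show why segment duration and player buffering dominate the total.

```python
# Illustrative glass-to-glass latency budget for a segmented live stream.
# Stage values are assumptions, not Uplynk measurements.
budget = {
    "capture + encode": 1.0,              # seconds
    "packaging (3 x 2s segments)": 6.0,   # segments must fill before publish
    "CDN propagation": 1.0,
    "player buffer (3 segments)": 6.0,    # typical safety buffer at the player
}
total = sum(budget.values())
print(f"glass-to-glass ~{total:.0f}s")
assert total < 15                         # lands under the sub-15s target
```

Shrinking segment length or the player buffer cuts latency fastest, but each reduction eats into the margin that keeps SSAI stitching and rebuffer protection reliable, which is exactly the trade-off the text describes.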


WINNER Evertz Reflektor On-Premise & Cloud Signal Processor

Reflektor, Evertz' Software-as-a-Service (SaaS) IP distribution platform, is ideal for providers of live/linear services, cloud applications or OTT. Reflektor is a microservice-based signal processor for onboarding and normalizing video transport streams, transcoding, and replicating streams. Reflektor is a key element within Evertz' remote production solutions, providing customers with a powerful, low-bandwidth cloud on-ramp for easy and convenient contribution of high-quality video with ultra-low latency for production and streaming applications.

Reflektor addresses the challenges found in today's cloud workflows: handling multiple transport formats and codecs while delivering incoming feeds to different software instances. For example, some remote locations may use Secure Reliable Transport (SRT) to send H.264 streams with Evertz XPS, whereas another remote location will use a third-party encoder to stream MPEG-2 transport streams. A customer may have created a cloud production suite in the public cloud that expects only SRT. Reflektor receives all the incoming streams, normalizes them with video and audio processing, converts the MPEG-2 TS to SRT, and sends copies of the streams to the cloud instances.

Reflektor's versatility and ability to transcode, translate and replicate IP flows in and out of the cloud make it a valuable tool for everyone who wants to transition to cloud workflows. With Reflektor, it is easy to simultaneously distribute, stream and play out multi-signal content directly to broadcast centers, remote operators, CDNs and more, which opens many creative possibilities. Any signal type can be accommodated, as can all video, audio or data content required for any broadcast application, including monitoring, encoding/decoding, TS muxing, duplication, etc.

In combination with an XPS edge device, venues can use common transport protocols such as SRT, Reliable Internet Streaming Transport (RIST) and Zixi to send a low-bandwidth HEVC signal to Reflektor for immediate transcoding into a format best suited to the endpoint. Reflektor also offers bi-directional support for the XPS encoder/decoder, meaning this process can be replicated in reverse, ensuring video content is distributed instantaneously to and from the cloud using reliable transport protocols.

Reflektor is a valuable tool in managing the expanding number of signal formats (MPEG-TS, NDI, ST 2110, HLS, MPEG-DASH, etc.) that a traditional broadcast can produce. Reflektor uses licensed microservices in the cloud to normalize signal types to best suit the needs of the end user or final application, making it an ideal cloud solution for UHD/4K field contribution, remote production, return feed monitoring, remote collaboration and cloud production.
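The normalization idea at the heart of this kind of processor can be sketched as a simple decision step: inspect each contribution feed's transport and codec, and emit only the conversions needed to reach one house format. This is a generic illustration in the spirit of the example above (SRT/HEVC as the target); the function and field names are invented, not Evertz' microservice API.

```python
# Sketch of protocol/codec normalization: map heterogeneous contribution
# feeds onto one house format (SRT transport, HEVC codec, assumed here).
def normalize(feed):
    steps = []
    if feed["transport"] != "SRT":
        steps.append(f"rewrap {feed['transport']} -> SRT")
    if feed["codec"] != "HEVC":
        steps.append(f"transcode {feed['codec']} -> HEVC")
    return steps or ["pass through"]

feeds = [
    {"transport": "MPEG-2 TS", "codec": "MPEG-2"},   # legacy encoder
    {"transport": "SRT", "codec": "H.264"},          # XPS-style contribution
    {"transport": "SRT", "codec": "HEVC"},           # already in house format
]
for f in feeds:
    print(f["transport"], f["codec"], "->", normalize(f))
```

Keeping the decision per-feed is what lets conforming feeds pass through untouched while only the outliers pay the transcode cost.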


NOMINEE Evertz DreamCatcher™ BRAVO Studio Virtualized Production Suite

DreamCatcher™ BRAVO Studio is a complete cloud-based production control suite that redefines live production today. BRAVO Studio is a collaborative, web-based live production platform that is redefining the creative experience for content creators and broadcasters. Providing virtual access to all the services found in the traditional control room, BRAVO Studio is a simple, reliable and cost-effective platform that accesses live video and audio from remote locations over dedicated networks, 5G networks or the public Internet.

The platform ingests multiple live camera feeds; provides live video and audio mixing with transitions; multiple video overlays for picture-in-picture or multi-box looks; slow-motion replays; clip playout; highlight clipping and packaging; multiple dynamic HTML5 graphics layers; and multi-image display of sources and outputs on the user interface. Using MAGNUM-OS for orchestration, BRAVO Studio enables users to schedule and automate event preparation, including routing of incoming remote feeds, allocating resources, and configuring the operator stations. This allows customers to seamlessly transition between productions with minimal effort.

Technical directors and operators collaboratively produce live events with BRAVO Studio remotely from anywhere in the world using a web browser. Advanced, AI-powered BRAVO Studio co-pilots help automate and simplify production workflows and enable small creative teams of all skill levels to maintain high quality and consistency throughout the production. BRAVO Studio's new 'Highlight Factory' co-pilot creates clips and stories automatically using AI technology. These clips and stories are published to Evertz Ease Live, where users can pick their curated highlights from the interactive overlay. This ability to reach back into the production from the edge device to create a personalized experience addresses ever-changing audience engagement.

In addition, Evertz has brought all the power of the Studer Vista audio mixing console to its BRAVO Studio platform with the introduction of the new Vista BRAVO. This integrates a full mixing console into BRAVO Studio and gives users all the flexibility they need to enhance audio for live productions, whether working on-premises or through the cloud. BRAVO Studio is proving to be a game changer, particularly for events that include live sports, local news, esports, entertainment, corporate, and government.


NOMINEE Evertz NEXX Router with NEXX-670-X30-HW-V2

NEXX, Evertz' compact processing routing solution, has become a cornerstone for broadcast facilities, mobile production trucks, venues, and stadiums. The NEXX platform enables facilities to upgrade from aging HD/SD-SDI routing cores to a flexible core that supports a 384x384 12G-SDI routing matrix with an integrated multiviewer in a compact 5RU frame. NEXX's popularity with Evertz customers lies in its modular frame and main interface/backplane, which offer redundant control and easy swapping of components, including crosspoint, fan and I/O modules. An integrated, software-enabled multiviewer with over 30 pre-configured layouts (using internal Evertz X-LINK signaling), full mono channel audio shuffling, licensable output frame syncs, timecode, and mixed reference support are some of the key features of the NEXX platform.

The addition of the new NEXX-670-X30-HW-V2 FPGA-accelerated I/O processing module extends the NEXX feature set to include enhanced multiviewing and IP gateway functionality, enabling a transition to SMPTE ST 2110 or ST 2022-6. These are the initial software applications (apps) being launched for IBC 2023. The multiviewer app has a dynamic canvas, scalable PIPs, analog timecode support, UHD output, ANC data, closed caption decoding, and more. The IP gateway app supports encapsulating and decapsulating 12G/3G-SDI into and from ST 2022-6 or ST 2110 workflows, with support for NMOS IS-04 and IS-05. The app library for the NEXX-670-X30-HW-V2 will evolve over the next few years, ensuring that the NEXX platform is future-proof.

The versatility of NEXX gives customers a cost-effective path for their transition to IP while replacing legacy HD-SDI today. This is important because a broadcaster looking to update HD to 12G might not be ready for IP yet. NEXX with the module will support the near-term need of going to 12G-SDI, with support for legacy SDI, while laying the foundation to move to IP in the future. NEXX is controlled by MAGNUM-OS, which provides all the common user interfaces, including traditional hardware router control panels, virtual web-based control panels, and Evertz' VUE intelligent panels.


NOMINEE Evertz Studer Vista BRAVO

The Studer Vista BRAVO combines a Studer Vista audio mixing console with advanced DSP as part of the BRAVO Studio production suite. Vista BRAVO's feature set allows it to handle the widest range of applications: small OB and ENG vans, small studios, and mobile and remote productions. Vista BRAVO supports audio over IP (AoIP) using SMPTE ST 2110 by adding the 570EMR, 9821EMR or D23 audio gateways.

Evertz has also brought all the power of Studer Vista to its DreamCatcher™ BRAVO Studio virtual production control suite, which allows users to quickly and easily produce high-quality live content at a lower cost. The introduction of Vista BRAVO adds an exciting new dimension to the platform by providing virtual access to a full mixing console featuring all the enhanced audio capabilities for which Studer is so famous. With Vista BRAVO, users have all the flexibility they need to enhance live music, entertainment, gaming and sports productions, whether working on-premises or through the cloud.

Although Vista BRAVO is part of the BRAVO Studio production suite, it is still a standalone audio mixer, just like the rest of the Studer Vista consoles. Given the range's popularity in fixed and mobile broadcast facilities, most engineers will already be familiar with the operation of the console, and new users will find themselves easily assimilating the Vistonics™ user interface. Vista BRAVO is fully equipped to handle large numbers of sources and feeds, along with full surround management and integral interfacing to numerous source formats including SDI, AES, Dante, MADI and more, all within a compact footprint. The integral audio router functionality means that systems may be much more closely integrated and controlled than before.

Another innovation in the Studer Vista range is expanded Vista control using the Evertz VUE intelligent user interface. Now offering over 2,000 bidirectional controls, the combination of Vista and VUE creates the ideal tool for multiple small production suites and remote productions. Vista BRAVO, together with VUE, pulls Studer's superior audio capabilities into the BRAVO production suite ecosystem, with MAGNUM-OS at its heart.


NOMINEE Fabric Origin

Origin is a data service for content owners that delivers all the trailers, clips and value-add content that drive customer discovery and engagement, as well as AI-enriched content metadata and imagery. Trailers are the most effective selling tool for a piece of entertainment: a viewer is 50-60% more likely to purchase or rent a movie or TV show if they can sample it first, and Origin is now the largest B2B aggregator of movie, TV and game promotional videos in the world.

With Origin, Fabric has also been pioneering the incorporation of cutting-edge AI tools to enhance the available dataset across titles, to include tags, moods, genres and sub-genres, and to locate and verify third-party IDs. Origin also offers a custom AI service that can edit, reduce, expand, stylistically conform or otherwise transform descriptive text such as synopses to clients' specific requirements. AI is now a central part of the data enrichment process, and its role will undoubtedly expand going forward.

Origin also delivers up-to-the-minute platform availability, business insights and trending data, allowing content owners to assess the competitive landscape for individual titles and drive an intelligence-led acquisition strategy. This strategy can be augmented by Origin's proprietary 'Power Rating', which is algorithmically generated from a wide array of input data, removing the dependency on subjective user-generated ratings and reviews. When combined with immediately responsive social and platform trending data, and visibility of global platform availability, Origin gives content owners and distributors a powerful tool for assessing the commercial viability of any given title, plus all the content metadata, imagery and video content they might need to maximize the value of their acquisition.

The market for content metadata is currently an oligopoly, dominated by a few major players facing few real competitors.
The costs are usually very high, and subscriptions depend on complex negotiation, expensive contracts, and long onboarding processes. Even then, some of the title-matching processes can be quite onerous, requiring manual workflows and increased headcount for delivery and possible title enrichment. The lack of competition has encouraged complacency. There is a clear market need for a challenger data service to disrupt the existing market by delivering improved standards of service and a catalog geared towards trailers, now the most important selling tool for entertainment content. Fabric Origin delivers that, with the largest collection of movie, TV and game promotional videos in the world, alongside a vast catalog of content title metadata and imagery.

Key features of Origin include:
• Trailers and TV previews
• Value-add content
• Platform availability data
• Future releases
• Business insights and trending data
• API connectors to EPG listings, Fandango showtimes & ticketing, Metacritic ratings, content recommendations, and Rev.com captioning and translations
• Ready-made title carousels and pre-curated thematic collections that drive content discovery and engagement
• Third-party IDs to prevent record duplication and unify a definitive master set of metadata
• Games data and footage
• Celebrities, including profile photos, filmographies, and social media links for trending
• Images and artwork
• AI metadata transformation and generation
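An algorithmic rating that blends many objective inputs can be pictured as a weighted average, sketched below in that spirit. The signal names, weights and values are entirely invented for illustration; Fabric's actual Power Rating formula is proprietary and not reproduced here.

```python
# Illustrative weighted scoring in the spirit of an algorithmic rating
# built from objective signals. Weights and inputs are invented examples,
# not Fabric's actual Power Rating formula.
def power_rating(signals, weights):
    total_w = sum(weights.values())
    return round(sum(signals[k] * w for k, w in weights.items()) / total_w, 1)

weights = {"platform_reach": 3, "social_trend": 2, "critic_score": 1}
title = {"platform_reach": 80, "social_trend": 95, "critic_score": 60}
print(power_rating(title, weights))
```

The appeal of this style of score is that it is reproducible from measurable inputs, so it sidesteps the sampling bias of user-submitted ratings.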


NOMINEE GatesAir Maxiva GNSS-PTP

GatesAir's Maxiva GNSS-PTP is a timing and signal reference solution for broadcast and telecom facilities, built around the modern navigation technologies used in second-generation Global Navigation Satellite Systems. The new Maxiva GNSS-PTP is a standalone 1RU solution with a sophisticated switching algorithm that assures high-precision 10 MHz and 1 PPS reference signals to mission-critical components in the signal chain, including transmitters, networking, and studio equipment. Each GNSS-PTP device feeds up to twelve 10 MHz and 1 PPS references in the technology infrastructure, removing the need to integrate a standalone timing source in each component. This substantially reduces equipment costs and installation timelines while giving engineers a single, yet highly redundant, point of reference.

Precise timing and frequency generation is assured by the product's high level of redundancy. The design includes redundant AC power supplies with built-in battery back-up for 'always-on' protection, and diverse timing sources including redundant GNSS receivers. The GNSS receivers include OCXO temperature control to prevent frequency changes, and support all major global satellite constellations (GPS, GLONASS, Galileo, QZSS). Timing sources also include a hardware-based PTP module and an external 10 MHz and 1 PPS reference. Built-in switching control logic ensures reliability and flexibility by selecting the highest-priority source as the reference at all times.

Support for Precision Time Protocol v2 (PTP) adds further reliability and flexibility for customers. Available as a modular option, it lets users prioritize PTP as a facility's primary source, or configure PTP as a backup to one of the GNSS receivers. The PTP module can function as a master or slave and, like the unit's GNSS receivers, provides a reliable timing and frequency reference to 12 external devices. GatesAir has further simplified the user experience with an integrated web interface that allows users to easily and flexibly select frequency bands for each GNSS system and configure timing source selection in automatic and manual modes. The user interface also offers useful visual aids, including detailed tracking maps and tables, satellite status and signal quality. The GNSS-PTP works in any broadcast studio, RF plant or telco facility worldwide.
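The switching behaviour described above — always fall back to the highest-priority timing source that is currently usable — can be sketched as a simple priority scan. The field names and source list below are illustrative only; the actual GNSS-PTP switching logic is internal to the product:

```python
def select_reference(sources):
    """Return the highest-priority timing source that is currently locked.

    `sources` is a list of dicts such as
    {"name": "GNSS-A", "priority": 1, "locked": True}; lower priority
    numbers are preferred. Returns None when no source is usable,
    i.e. the unit would hold over on its stable OCXO until one recovers.
    """
    for source in sorted(sources, key=lambda s: s["priority"]):
        if source["locked"]:
            return source["name"]
    return None

# Example: both GNSS receivers lose lock, so the PTP backup takes over.
sources = [
    {"name": "GNSS-A", "priority": 1, "locked": False},
    {"name": "GNSS-B", "priority": 2, "locked": False},
    {"name": "PTP",    "priority": 3, "locked": True},
]
```

The same scan runs continuously, so the moment a higher-priority receiver regains lock, the reference switches back automatically.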


WINNER Genelec 9320A Reference Controller

Being unveiled globally at IBC 2023, Genelec's 9320A Reference Controller is a compact and tactile desktop device that provides seamless integration between professional loudspeaker and headphone monitoring. By acting as a single touch point for Genelec's Smart Active Monitors, GLM calibration software and the Aural ID personal headphone plug-in, the 9320A lets users switch instantly between well-calibrated in-room and headphone monitoring without any interruption in workflow, creating a truly next-generation reference-quality monitoring system in any location.

The 9320A can control up to three separate monitoring systems plus headphones, and each system can operate at a calibrated listening level according to the EBU R128, ATSC A/85 or SMPTE RP200 standards. While the 9320A can support any active loudspeaker system – such as one or two ALT stereo sets – it crucially provides instant one-click access to the vast number of monitor control features built into Genelec's Smart Active Monitoring family. Smart Active Monitors themselves can be optimised to the room's acoustics via GLM calibration software, and the 9320A streamlines this process: it comes complete with a factory-calibrated reference microphone and integrates closely with the latest GLM 5.0 software to allow automated system calibration and control of key GLM features. This creates accurate, reliable loudspeaker mixes that translate consistently to other rooms and systems.

For headphone users, the reference-grade headphone output of the 9320A offers excellent linearity and dynamic range, and allows users to combine their choice of professional-quality headphones with the latest Aural ID 2.0 plug-in, which models the listener's own unique HRTF. Users can therefore experience truthful and completely personal headphone monitoring while simultaneously measuring sound exposure, to ensure safe listening.

Additionally, the 9320A integrates with any DAW or audio interface, and with its analogue, AES/EBU and USB connectivity it can connect directly to stereo monitoring systems (with or without subwoofer), providing monitor control and doubling as a high-quality A-D and D-A converter for both monitors and headphones. These days, audio professionals in music, broadcast, post-production and game audio increasingly want the flexibility to work wherever and whenever they choose, even if that sometimes involves unpredictable and challenging acoustic environments. Being able to switch instantly between well-calibrated in-room and headphone monitoring without any interruption in workflow is crucial to this way of working, and so we see the 9320A as a powerful tool in creating a truly next-generation reference-quality monitoring system.


NOMINEE Grass Valley Grass Valley LDX C150 Compact Camera

The LDX C150 is a revolutionary new compact NativeIP/UHD broadcast camera that sets a new standard for uncompromised high-value live production. No other camera on the market delivers comparable operational freedom, flexibility, scalability and convenience. As the newest member of the LDX 100 Series camera platform, the LDX C150 ('C150') puts the features and functionality of the advanced, full-size LDX 150 NativeIP/UHD camera into a compact form factor. The C150 brings to market an unprecedented feature set that encompasses NativeIP capabilities (SMPTE 2110), exceptional UHD imaging, and high-speed 3X native UHD HDR and 6X 1080p HDR for super-slow-motion applications. The C150 offers built-in options for baseband SDI (12G, 3G, 1.5G) support, plus SMPTE 2110 and JPEG XS compression without any external conversion, direct from the camera head. For backwards compatibility, the C150 also supports direct, single-cable connectivity to an XCU base station. With a virtual licensing model, customers can upgrade functionality in the field – such as 1080p to full UHD/4K raster support – with a tap of their phone via NFC, even if the upgrade is only needed for a short time and the camera is powered off, making it ideal for live event rental scenarios.

"Our new C150 camera's versatility ensures very high usability that quickly leads to a return on investment," said Paul de Bresser, Product Manager. "Instead of just using the camera on a few productions per week or in special situations, the C150 is designed with so much functionality and flexibility that it can be maximized every day."

A great complement to the full-size LDX 150, the C150 benefits diverse applications including live sports, REMI remote IP-based production, OB mobile units, Steadicam and SKYCAM/FLYCAM operations. Its lightweight, compact form factor makes it ideal on robotic heads at locations where camera operators dare not go due to heights, inaccessibility or safety risks, such as near racetracks.
The LDX C150 offers a high-sensitivity F11@2000 lux global shutter via its three new 2/3-inch Xenios imagers. This state-of-the-art imaging delivers a wider dynamic range in native High Dynamic Range (HDR) in PQ, HLG and S-Log3 modes. It also delivers an improved signal-to-noise ratio, reducing the gain for cleaner images, and greater depth of field, allowing smaller lens apertures for easier and better focusing. Users can enable up to five built-in JPEG XS codecs for up to 20X compression with extremely low latency. Only a single cable is needed to connect the C150 to the same GV XCU base station being used to paint and match pictures for the entire multi-camera production. For full IP trucks, IP facilities, and IP-based REMI remote production, the C150 natively streams ST 2110 IP signals in and out without any need for external conversion. It offers simple, scalable multiformat image capture and distribution and supports AMWA NMOS and SMPTE 2110 among other industry standards. Broadcast, media and entertainment professionals can maximize their investment in the C150 thanks to its unprecedented scalability for a broadcast-quality NativeIP/UHD compact camera.


WINNER Haivision Haivision Pro460

Simple, versatile, and high-performance, Haivision's latest release of the Pro460 meets the demands of multi-camera remote production with ease, complementing both the latest and legacy mobile networks, from 5G through to 3G. The ultra-low-latency Haivision Pro460 is a mobile 4K UHD and quad-HD video encoder and transmitter that enables broadcasters to leverage the latest 5G technology in addition to bonded 3G/4G networks with Haivision's SST technology for live sports, news, and events. The latest updates to the Haivision Pro460 mobile video transmitter offer pristine-quality video, up to 4K and in HDR, with end-to-end latency across a 5G network as low as 80 ms to SDI, NDI, ST 2110, and cloud production workflows.

New features added to the Haivision Pro460 mobile transmitter include:
n An ultra-low-latency mode that decreases the minimum latency to as low as 80 ms, end-to-end, from encoding to decoding over a private 5G network.
n Compatibility with Haivision's cloud-based platform for remotely managing devices and mapping video sources to destinations over cellular and IP networks. Pairing the Pro460 with Haivision's cloud solution allows real-time monitoring, configuration, and control of appliances, including Haivision StreamHub receivers, from anywhere using a single browser window.
n An improved UI that makes device control and management over the web interface and the on-board screen even easier.
n Auto-configuration via USB memory key, allowing broadcasters to revert to a previous configuration.
n Simultaneous live video streaming and recording to an SD card at different resolutions and bitrates, ensuring optimal, consistent video quality.

The Haivision Pro460 mobile video encoder and transmitter is equipped to support broadcasters in leveraging the latest mobile and IP video technology for remote production. Compliant with both private (NPN) and public 5G mobile networks, including network slicing services from CSPs, the Haivision Pro460 can encode and transmit video from up to four HD cameras (or a single 4K UHD camera) to a broadcast production facility at very low latency. The Pro460 encodes live video from up to four camera SDI outputs in real time using either the HEVC or H.264 codec, enabling it to be transferred over a mobile network with very little latency. With its six cellular modems, the Pro460 can aggregate multiple networks – including IP, 3G, 4G, and 5G – for greater bandwidth, reliability and error-free transmission using Haivision's SST (Safe Stream Transport) technology. For live broadcast contribution and remote production, the Pro460 offers a complete range of advanced features.
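Bonding several cellular modems into one logical link, as the Pro460's six-modem design does, amounts to spreading the stream's packets across links roughly in proportion to each link's available bandwidth. The credit-based scheduler below is a generic illustration of that idea, not Haivision's proprietary SST algorithm; the link names and bandwidth figures are made up:

```python
def schedule_packets(packets, links):
    """Distribute packets across bonded links in proportion to bandwidth.

    Each link accumulates 'credit' at a rate proportional to its share of
    the total estimated bandwidth; each packet goes to the link with the
    most accumulated credit. Per-link load then tracks capacity.
    """
    total_kbps = sum(link["kbps"] for link in links)
    assignment = {link["name"]: [] for link in links}
    credit = {link["name"]: 0.0 for link in links}
    for packet in packets:
        for link in links:
            credit[link["name"]] += link["kbps"] / total_kbps
        best = max(credit, key=credit.get)  # link with the most credit
        credit[best] -= 1.0                 # one packet's worth spent
        assignment[best].append(packet)
    return assignment

# Illustrative: a 3 Mbps 5G modem bonded with a 1 Mbps 4G modem
# carries packets in a roughly 3:1 ratio.
links = [{"name": "5G-1", "kbps": 3000}, {"name": "4G-1", "kbps": 1000}]
shares = schedule_packets(list(range(8)), links)
```

A real bonding engine would also continuously re-estimate per-link bandwidth and retransmit lost packets, which is where technologies like SST add their value.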


WINNER Hammerspace Hammerspace

Hammerspace is a software-defined solution that automates high-performance unstructured data orchestration and global file access across any storage type from any vendor, including on-prem, cloud, and multi-site. With Hammerspace, broadcast and film customers can now unify workflows across multiple silos, sites, and clouds. Native integration with tools such as Autodesk ShotGrid and Flame enables artists to continue using the tools they are used to, with Hammerspace transparently stitching workflows together behind the scenes to bridge silos, clouds, and locations. This gives users anywhere the ability to collaborate live, regardless of storage type or infrastructure, as though they were all local.

For example, Mathematic, a Paris-based VFX, animation and motion design studio, has steadily grown to include offices in Paris, Montreal, LA and Montpellier. This growth required continual, rapid collaboration between artists and production workflows across all locations to meet tight deadlines. With projects that may originate anywhere in the world, it is essential that team members at all sites can collaborate seamlessly. Work may originate in LA but require collaboration with artists in Montreal or Paris as the elements are created; rendering then needs to happen at the three main data centers in Paris, while additional finishing or other tasks may need to be done at one of the other locations. The volume of data for such projects is so great that it was impractical to send the files electronically between the sites, which meant Mathematic needed to physically ship file copies on hard drives back and forth. With increasing deadline pressures, and as the volume of projects increased, this rapidly became unsustainable.

Mathematic needed a solution that would enable them to use their existing storage resources and infrastructure while tying all their sites together into a seamless collaborative environment, so that all users everywhere could be collaborating on their projects as though everyone were sitting in the same office. Mathematic tried multiple solutions to address this problem, including public cloud providers and other point solutions. The problem was that these were still too disruptive, involving data copies and delays. Solutions that required them to move their data to another platform and essentially erase everything and start over from scratch were a non-starter.

The answer was Hammerspace, which enabled Mathematic to seamlessly bridge all sites into a multi-site global namespace. All artists were able to collaborate on the same files in a live file system rather than wrangling file copies between sites. Since Hammerspace enables customers to use existing storage from any vendor, Mathematic could rapidly put the system into operation with minimal disruption to existing projects. "We could just plug all our 500TB storage arrays into Hammerspace, and then all that data was immediately available to all our users in other offices in Canada and the US," said Clement Germain, Lead Flame Artist and VFX Supervisor at Mathematic.


WINNER Harmonic VOS® Media Software

Harmonic's innovative VOS® Media Software represents excellence in the realm of media processing solutions for video streaming and broadcast delivery. The groundbreaking solution, to be showcased at IBC2023, redefines the landscape of media software. The latest version addresses the burgeoning demand for video processing and delivery solutions that can be deployed on-premises or in a private or public cloud infrastructure, giving pay-TV, cable and telco operators the capability to manage and operate their systems without extensive DevOps expertise.

VOS Media Software answers a critical industry need for viable media software that is easy to deploy and operate. Running as advanced software in a private data center, on bare-metal infrastructure, or in the public cloud, the end-to-end video platform offers unparalleled agility, resiliency, security, and scalability for a superior viewing experience. The economically viable solution significantly reduces hardware requirements and delivers up to a 40% improvement in origin performance, dramatically reducing infrastructure costs.

Harmonic has introduced a series of enhancements that extend its appeal to a wider audience:
1. Scalable Packages: The solution can scale easily from predefined starter packages up to customized systems deployed across multiple data centers.
2. Enhanced Origin Performance: The software boasts a remarkable 40% improvement in origin performance, translating into substantial cost savings for pay-TV, telco and cable operators.
3. Qualified Off-the-Shelf Storage Solution: VOS Media Software provides seamless integration with off-the-shelf storage solutions tailored for non-live streaming applications.
4. Simplified Operations: The solution offers comprehensive self-serve documentation and guidelines to facilitate the smooth integration of orchestration and monitoring tools.
5. Comprehensive Broadcast Features: Beyond streaming, VOS Media Software offers telco and cable operators all the broadcast features needed to deploy a broadcast headend, including premium video quality, statistical multiplexing, and playout and branding for originating linear channels from live and file content.

The latest version of VOS Media Software reinforces Harmonic's commitment to being at the forefront of cutting-edge video streaming solutions tailored to the dynamic requirements of the industry. As pay-TV, cable, and telco operators seek video processing and delivery solutions without fully embracing DevOps, Harmonic recognized the need for a comprehensive, simplified, and economically viable offering that does not demand extensive resources or technical expertise.


NOMINEE Hedge OffShoot

Fast, verified data transfers with metadata management for Offload and Ingest workflows.

Built for media
Offload media lightning fast, whether it's video, stills or audio. Make your life easy and let OffShoot do the tedious jobs.

Peace of mind
When creating multiple backups takes no longer than just one, you can finally sleep well; no more late-night hotel room backup sessions.

Happy together
OffShoot feels just as at home on your Mac as it does on Windows. 100% native code, without cross-platform ballast, ensures stability and speed.

OffShoot adds cloud workflows
The team at Hedge is very aware of the challenges everyone has faced over the last three years. They have impacted every level of our industry in general and changed our customer requirements specifically. We recognise how important the cloud has become in enabling remote workflows, and we are certain that cloud workflows are here to stay. For this reason, OffShoot and OffShoot Pro both offer integration with iconik, a popular hybrid cloud asset management solution from our partners at Backlight. With OffShoot, users can copy cards and send media straight to iconik to catalogue, archive, and share with their team. OffShoot Pro includes additional metadata options to further deepen the integration with iconik. OffShoot and OffShoot Pro also offer the ability to upload files to Frame.io via a dedicated watch folder. OffShoot Pro introduces Amazon S3 integration for the first time, so that users can link their offload workflow to broader corporate cloud initiatives within Amazon Web Services (AWS). With this approach, we have opened OffShoot up to support all sorts of production workflows at every level of our industry.

Connect 3.0 expands notification support
Alongside the new versions of OffShoot, we will be launching Connect 3.0, which further expands our comprehensive push notification system and further extends our cloud technology strategy. Currently, notifications are app-based, which is inherently limited, so we've developed a cloud platform that will eventually allow us to add notifications wherever we need them across our entire product line. Connect 3.0 adds support for Android and iOS devices on 16.4 and above. Our investment in the underlying technology will enable us to roll out a cloud-based notification system for more Hedge apps later in the year. Connect 3.0 also introduces a new web interface for monitoring the status of transfers on every OffShoot system users manage. This will help large productions track footage across numerous locations and better coordinate resources. For OffShoot, users will need to use a code to manually add each licensed system to their monitoring account; OffShoot Pro provides a 'magic link' that automatically adds each system to the user's account.


WINNER HTC VIVE Mars VIVE Mars CamTrack (Studio Edition)

VIVE Mars CamTrack is democratizing access to virtual production, enabling studios of all sizes to embark on virtual production journeys with a camera tracking solution that's easier to use and more affordable than ever before. This technology was once exclusive to major Hollywood blockbusters like "The Mandalorian" and "Avatar," but is now accessible to studios, production companies and even home setups. Historically, the lengthy setup process has been a significant cost contributor to virtual production. VIVE Mars CamTrack significantly streamlines setup by leveraging industry-leading lighthouse tracking technology, providing a simple and accurate camera-tracking solution that reduces overall production time: it can be up and running in minutes versus hours.

The ecosystem continues to grow: we recently launched a new addition, Mars FIZTrack, a lens encoder that completes the camera tracking solution by allowing filmmakers to track their physical camera and lens actions for their virtual camera. This year we are introducing the VIVE Mars CamTrack Studio Edition, offering maximum functionality for Mars CamTrack. This includes a Mars Module that streams positional data to industry-standard protocols like Livelink and FreeD, and supports broadcast-standard synchronization, including genlock and timecode. It also includes three VIVE Trackers for simultaneously tracking three cameras, allowing multi-camera shoots, and four VIVE Base Stations that expand the tracking volume to 10 meters by 10 meters. Last but by no means least, two VIVE Mars FIZTrack encoders seamlessly integrate lens parameters into the virtual environment, allowing filmmakers precise control over focus, iris, and zoom within their virtual scenes, resulting in a seamless blend of real and virtual elements.

Care has been taken at every level of VIVE Mars CamTrack's design and manufacturing. The kit has easy-to-use calibration for nodal offset and lens distortion, as well as one-click origin reset.
And lastly, it has robust wiring to reduce latency. VIVE Mars CamTrack Studio Edition comes in a compact, portable design housed in a rollable protective case. This portability means it's easy to set up a virtual production studio anywhere it's needed. VIVE Mars CamTrack Studio Edition is not just about technology; it's about empowering filmmakers to adopt virtual production efficiently. Whether you're crafting a blockbuster film, a TV series, a commercial, or an indie labor of love in your garage, the VIVE Mars Studio Edition is ready to be up and running within minutes. For VIVE Mars, our mission is clear: to simplify virtual production, make it accessible to all, and unleash imagination. VIVE Mars CamTrack Studio Edition, with its innovative features, delivers precision and simplicity, eliminating the technical challenges that once hindered cinematic creativity.


WINNER Humans Not Robots HNR to ZERO

The digital revolution is creating enormous value for society, and is accelerating at an unprecedented rate. However, there are unintended consequences:
n The carbon footprint and cost of managing data is too high.
n Customers, regulators and shareholders want companies to act, now.
We recognise that this problem, conceptually, is easy to understand but difficult to solve. In this rapidly accelerating context, reducing the environmental impact of video streaming should be everyone's concern within the media industry.

At Humans Not Robots (HNR) we help data-heavy businesses operate more sustainably and efficiently. Our solutions empower media and broadcast organisations to run cleaner, cheaper and faster digital operations for the benefit of the business, the people and the planet. Our product HNR to ZERO is a workflow observability and analytics platform that helps companies in the broadcast media and streaming sector, whether technology providers or buyers, run greener, more efficient, streamlined operations. This is achieved by automatically analysing, at a very granular level, what is happening within media operations and then applying advanced machine learning techniques to identify optimisations and savings across the entire digital supply chain.

The platform targets an initial 20-30% reduction in carbon footprint, with minimal disruption to the business, as follows:
n HNR DISCOVER helps companies set a baseline by efficiently capturing data from multiple sources and calculating carbon footprint and costs. It then measures this information against similar organisations.
n HNR DELIVER is an AI/ML tool that automates continuous data analysis and applies regularly updated industry benchmarks to deliver actionable insights.
n HNR To ZERO helps organisations understand emission hotspots and provides quick wins to reduce inefficiencies. It also automates Scope 3 reporting in line with the CSRD and GHG Protocol.

Why it matters
Digital supply chains across all industries are expanding quickly, with a consequent increase in carbon footprint and costs. Data is stored across multiple platforms, with frequent resource and data duplication at high energy cost, making sourcing and analysing data difficult. Regulatory requirements are becoming more stringent, with regulators demanding that companies deliver plans to reduce their carbon footprint; examples include the CSRD mandate to report on sustainability by 2024 and the 2050 net-zero target. Internal and external stakeholders – from customers to shareholders, investors and employees – all expect companies and their management to deliver on sustainability goals. This is where Humans Not Robots helps.

HNR to ZERO in action
HNR is working closely with the industry organisation Greening of Streaming (GoS) to better understand energy consumption during live event streaming. The HNR to ZERO platform records power measurements from stakeholders involved in live event streams to understand whether consumption fluctuates during audience peaks. The project will help raise awareness of, and reduce, the environmental impact of video streaming across the broader industry. HNR to ZERO is also used by video experience technology provider Accedo, to help baseline its Amazon Web Services (AWS) carbon footprint and energy consumption. With this information, Accedo will be able to set sustainability targets and help its 20+ customers understand their environmental impact.


WINNER IMAX Stream Smart™

Streaming businesses are facing enormous challenges. Subscriber growth is slowing and ultimately impacting bottom lines, compelling businesses to search for new revenue channels or ways to cut costs. The number one video technology challenge is cost control, and the biggest cost of delivering video is the distribution charges from Content Delivery Networks (CDNs). IMAX Stream Smart™ helps content distributors reduce CDN delivery costs by 15%-20% on average while guaranteeing the preferred video quality, with minimal workflow changes. This can save millions of dollars annually – risk-free.

Why Stream Smart:
n The software overlays on existing workflows, requiring minimal encoding workflow changes.
n Works with, and optimizes, leading third-party encoders.
n Faster and more accurate than competitive approaches.
n Deploys in the cloud or on-prem, wherever the infrastructure lives.
n Easy implementation in hours, not days or weeks; automation reduces the need for human intervention.
n Simplified pricing, as a percentage of bandwidth savings, leading to consistent ROI.

How it works
Stream Smart™ software overlays on existing workflows to analyze every frame of a video and optimize encoding settings for the best picture quality and compression efficiency. Stream Smart is enabled by the IMAX SVS (SSIMPLUS® Viewer Score), which provides a single, objective quality score that benchmarks viewer experience throughout the workflow. Our patented IMAX SVS® is scientifically proven to be the most accurate measure of end-viewer experience. What it delivers is certainty: once it locks in a quality score, that score serves as a guard rail, allowing reductions in the amount of data that makes up each segment of video without harming the viewer experience. As long as the quality score doesn't change, it's safe to keep making reductions. This means video operations leaders can confidently reduce

bandwidth, potentially saving millions in delivery costs while guaranteeing there's no noticeable difference in viewer experience.

The most accurate measure of quality
The IMAX SVS is the only video quality metric that maps to the human visual system, making it the most accurate and complete measure of how humans perceive video. Our Emmy® Award-winning technology is the only algorithm with >90% correlation to Mean Opinion Score (MOS), verified across various video datasets. It sees quality differences other metrics can't, and it works for all types of content, including live, VOD, HDR and 4K.

Makes encoders smarter
Stream Smart software overlays on existing workflows and supports leading third-party encoders, requiring minimal encoding workflow changes. It analyzes every segment of video and directs the encoder to optimize settings automatically, reducing bandwidth while maintaining the desired video quality.

Cloud-native and easy to deploy
The software can be deployed in a streaming provider's on-prem data center or in the cloud. It can also be offered as an SDK.

Save millions in distribution
The result? Stream Smart reduces bandwidth by 15%-20% on average with no perceptible impact on visual experience. In fact, one leading streaming platform saves $25M annually across its library of most-watched titles by using Stream Smart technology to reduce bandwidth consumption without sacrificing subscribers' viewing experience.
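The guard-rail behaviour described above — keep trimming bitrate as long as the quality score holds — can be sketched as a simple search loop. The `encode` and `score` callables below are placeholders (the real system uses the proprietary SSIMPLUS® Viewer Score), and the linear toy quality model exists only to make the sketch runnable:

```python
def minimize_bitrate(encode, score, target, start_kbps,
                     step_kbps=100, floor_kbps=300):
    """Lower the bitrate step by step while the quality score stays at or
    above `target`; stop at the first step that would breach the guard rail."""
    bitrate = start_kbps
    while bitrate - step_kbps >= floor_kbps:
        candidate = bitrate - step_kbps
        if score(encode(candidate)) >= target:
            bitrate = candidate  # still within the guard rail: keep it
        else:
            break                # quality would drop: stop reducing
    return bitrate

# Toy stand-ins: 'encoding' is identity, and 'quality' rises linearly
# with bitrate, saturating at 100. A target of 80 then needs 4000 kbps.
encode = lambda kbps: kbps
score = lambda kbps: min(100.0, kbps / 50.0)
best = minimize_bitrate(encode, score, target=80.0, start_kbps=6000)
```

In practice the search would run per segment, since the bitrate a scene can tolerate varies with its complexity; the invariant is the same — no reduction is kept unless the score stays on the guard rail.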


NOMINEE InSync Technology MCC-HD

Standards conversion is an incredibly challenging problem: its computational intensity and algorithmic complexity have traditionally led motion-compensated converters to be implemented as large, power-hungry equipment. One industry-mainstay hardware converter, for example, quotes a power consumption of 500W-2kW in a 4+RU chassis. In the confines of an OB truck, or in a cramped and busy broadcast centre, these large units also generate significant heat requiring high levels of cooling, which makes a huge difference to operating and carbon costs, especially when scaled across multiple channels. On-site broadcast trucks and the global transmission centre for the Paris Olympics, for example, will need to cater for channels all over the world; the Tokyo Olympics delivered around 48 concurrent channels broadcast globally. For a truck to service even a tenth of those channels, a scalable system with redundancy and the highest-quality conversion is needed.

This is where the MCC-HD excels. Featuring algorithms from InSync's industry-leading broadcast conversion heritage, it produces crystal-clear images and smooth motion, all within a 1RU solution at 45W of peak power consumption. In a truck, engineers can use any single available rack space for the compact unit, knowing the hardware won't be a drain on their available power or air conditioning. Using the Tokyo Olympics as an example, with an alternative hardware converter it would cost €34,900 to power just 20 units for 9,500 hours, whereas 20 MCC-HD units would cost only €4,800. That also represents an 86% reduction in carbon emissions, from 38,600 kg of CO2 to 5,330 kg of CO2, from energy consumption alone. This calculation excludes the additional cost savings derived from lower air conditioning and fossil fuel consumption requirements. InSync's newest generation of hardware proves that traditional conversion methods are cost-effective, sustainable and viable for the future of broadcasting.
Central to InSync's design and implementation is a clear commitment to sustainability and the end user. Since going to market with our newest line of motion-compensated frame rate converters, our programme of continual development has reduced the computation demand and energy cost by up to 86% without any compromise in quality. InSync's engineers achieved this through intensive work to maximise algorithmic and hardware efficiency, allowing for a more compact design. Running hardware on-site, or from the comfort of a broadcast centre, has never been more cost-effective, sustainable and high quality. With hundreds of thousands of hours of content captured every year by the industry's biggest broadcasters, the figures presented here scale by the same factor: broadcasters can simultaneously save hundreds of thousands of euros through less energy-demanding solutions and cut their carbon footprint by hundreds of thousands of kilograms of CO2 emissions. InSync should be awarded for its culture of continual sustainable growth, its drive for perfect image clarity, and its focus on the needs of the broadcasters who use our equipment around the world.


WINNER Interra Systems BATON Captions

BATON Captions allows media professionals to generate high-quality captions and subtitles for global content creation. The solution also enables service providers to QC captions and subtitles; create captions from transcribed audio; auto-correct errors; translate captions; and regenerate captions, as well as repurpose them to match different video deliveries. Utilizing cutting-edge AI and machine learning (ML) technologies — including automatic speech recognition (ASR) and natural language processing (NLP) — BATON Captions brings simplicity and cost savings to the creation, management, and delivery of captions and subtitles for traditional TV and video streaming. The platform provides multilingual support for QC, captioning, and subtitling of media files, and the latest version has been enhanced to support all popular caption formats, including SRT, SCC, TTML, MacCaption closed caption (MCC), STL, TT, Cavena890, PAC, CAP, and IAM-based S3 access. Additional new features for BATON Captions include support for the Finnish and Polish languages; project-based custom dictionaries; burnt-in captions detection and management; profane word filtering; RHEL support; and the ability to browse and download log files from the web browser, export dictionaries, and perform a database backup before upgrading. The solution's core infrastructure has also been upgraded to include distributed processing using multiple nodes; next-generation format-specific checks; support for the QC of stand-alone subtitle files; on-prem translation; segmentation; auto timestamps; and more. It supports all embedded and sidecar formats, major text and document formats, and an array of content locations, such as FTP, AWS S3, Microsoft Azure Storage, and GCP Storage, including YouTube URLs. Future updates will include a fully functional live captioning capability, which will be available on the platform by the end of 2023.

Industry-leading QC, performance, and innovation set BATON Captions apart from other solutions on the market, making it a worthy candidate for a Best of Show Award at IBC2023. It is unique in its ability to offer comprehensive QC checks for captions and subtitles; the platform reports any drop or inaccuracy in captions and audio, as well as compliance issues, providing users with automated options for correcting alignment and text. Captions can be checked against the actual audio essence, corrected, and exported to any industry-supported caption format. In addition, the solution's ability to repurpose captions dramatically streamlines workflows. For example, if live content needs to be re-telecast as VOD content, the platform can automatically correct the sync, spelling, segmentation, and positioning; provide missing captions while removing extras; change the format and framerate; censor profanity; and more. It also tracks changes made to media content during post-production and applies them to captions, ensuring synchronization and reducing manual effort. And with its support for multiple languages and a variety of popular caption formats, BATON Captions is a game-changer for content providers, helping them ensure the global accessibility of their content and capitalize on monetization opportunities. The platform allows them to easily repurpose captions for different audiences around the world; it corrects inaccurate captions when audio is changed due to localization, removes extra captions, and provides a translation for the creation of subtitles.


WINNER LiveU LU4000 ST 2110 Receiver

LiveU's LU4000 ST 2110 receiver is the latest addition to the LiveU EcoSystem. It is the world's first LRT™ (LiveU Reliable Transport) to ST 2110 4K/Quad HD video receiver for SMPTE ST 2110 broadcast facilities, seamlessly connecting LRT™ and ST 2110 for increased efficiency and providing uninterrupted connectivity from the field to the facility. The benefits of an IP-video-based production facility are well understood: greater flexibility, scalability, quality and efficiency. SMPTE ST 2110 continues to gain traction among the world's premier broadcasters as the on-premises IP protocol of choice. Now, for the first time, broadcasters can combine their ST 2110 facilities with the freedom afforded to them by LRT™ and the LiveU EcoSystem in a single, easy-to-manage device. Many of LiveU's customers are engaged in transforming their production capabilities to take full advantage of the benefits of IP video. What has become clear is that managing the complexity of full IP-video installations is essential. The LU4000 ST 2110 is designed from the ground up to help overcome that complexity by removing the need to manage and maintain separate receivers and transcoders, simplifying the flow between LRT™ and ST 2110. Scalability is built in, with the ability to contribute up to four full HD feeds into the ever-growing content repository on the customer side. With this latest addition to the LiveU IP-video EcoSystem, the company is providing an efficient and adaptable way to receive high-quality LRT™ live content feeds from field units and then seamlessly output them as 2110-compliant streams.

LiveU's LU4000 ST 2110 operates as an all-in-one receiver in the video chain, reducing IT costs, time and overhead while keeping everything in sync. Building on LiveU's LRT™ protocol, the receiver enables a resilient, low-latency IP-to-IP workflow for receiving a single 4K video feed or up to four full HD live feeds, adding to the essential efficiencies of ST 2110. Tasks have been automated to shorten the workflow, including the routing, switching and processing of separate bonded video, audio and data streams. The LU4000 ST 2110 is fully NMOS-compliant, making device discovery and control easy. The product architecture future-proofs it for further workflow-simplification developments, providing customers with complete peace of mind and effortless upgrading. Redundancy is, of course, also vital, and the LU4000 ST 2110 adheres to the SMPTE-defined ST 2022-7 path redundancy standard. Customers benefit from stable stream transmission along with the consistency afforded by hardware-based PTP feed synchronization, again delivering rock-solid operation. Moving to ST 2110 can be a complex process, and any extra component can decrease, not increase, efficiency. Constantly at the forefront of the industry, LiveU has achieved greater efficiency with its latest innovation. The LU4000 ST 2110 can be swiftly deployed regardless of network configuration, providing an efficient and resilient bridge between the rich contribution capabilities of LRT™ and modern ST 2110 production facilities.


WINNER Looper Insights Looper Boost

Looper Boost™ is the latest analytics tool from Looper Insights, launching in time for IBC 2023. It is an end-to-end solution that gives marketing departments at VoD platform companies and broadcasters comprehensive insights into past, present, and future merchandising efforts across a diverse array of devices, including mobile, web, smart TVs, set-top boxes, streaming devices, and game consoles. As the cost of streaming packages rises, consumers demand more value for their investment; this autumn, for example, the top US streaming services will cost $87 a month, compared to $73 a year ago, according to the FT. In an era marked by the dynamic rise of VoD streamers and broadcasters, Looper Boost™ addresses the challenges that marketers face in understanding and capitalizing on the ever-changing landscape of content consumption. By leveraging the insights provided by Looper Boost™, digital marketers can elevate their marketing strategies, seize emerging opportunities, and maintain a competitive edge in the dynamic and evolving entertainment industry.

Key features and benefits
• Immersive Interface: Looper Boost™ offers digital marketers an immersive interface that mirrors the devices they target, fostering a deep understanding of customer experiences across platforms.
• Comprehensive Insights: With a focus on New Release titles, Looper Boost™ empowers streaming services and broadcasters to collaborate on merchandising strategies, ensuring campaign compliance and enabling full tracking of performance against competitors using the innovative Media Placement Value (MPV™) metric. MPV™ is Looper Insights' proprietary metric, which gives content owners the ability to securely overlay their own performance data for informed comparisons with competitors.
• Data-Driven Decision Making: Looper Boost™ leverages data-driven insights to empower marketers to make informed decisions, optimize marketing efforts, and enhance the overall impact of their strategies.
• Supporting Industry Growth: In an era where subscription costs are rising and competition is fierce, Looper Boost™ equips marketers with tools to differentiate their services from competitors and foster customer loyalty, driving the industry forward.

The innovation embodied by Looper Boost™ underscores Looper Insights' commitment to shaping the future of Media & Entertainment analytics. This feature represents a transformative leap forward, equipping digital marketers with the tools they need to excel in a highly competitive environment. Looper Boost™ joins the other SaaS products from Looper Insights, which provide comprehensive title compliance and merchandising tracking across all connected TV devices. Looper Insights empowers digital marketers and industry leaders with transformative insights, equipping them to navigate the complexities of the ever-evolving digital landscape. Looper Insights' analytics and datasets are trusted by major industry names including Disney+, Prime Video, Warner Bros. Discovery, Sony, Cinedigm, A+E, Hulu, ITV and Lionsgate.


WINNER LTN Wave

Media companies are under constant pressure to deliver a larger volume of content at unprecedented scale and with more efficiency. Satellite and fiber lack the scale, agility, and reach to deliver sufficient channel variations across multiple platforms. Faced with shrinking satellite capacity and changing regulations, organizations need viable, reliable, and scalable delivery mechanisms that provide cost-effective, more flexible alternatives. Adopting an IP-based approach is the key to success. Relying on unmanaged, IP-protocol-only solutions to transition from satellite to IP brings several limitations. Without native multicast capability, customizing and directing multiple content versions to fragmented audiences is incredibly challenging, with costs driven up by the additional public cloud egress and processing fees that come with each version of the content.

Solution
The shift to IP means media companies now operate in hybrid network environments that are difficult to navigate. They need the flexibility to leverage various solutions such as third-party encoders, decoders, and hardware or software infrastructure that fit their specific requirements. This network infrastructure must also enable organizations to acquire content from anywhere and deliver it everywhere. LTN Wave provides broadcasters with simplified and intelligent tools that de-risk satellite migration with end-to-end management and automatic, stress-free changeover. Through Wave, media organizations can harness built-in business intelligence such as blackout and rights management capabilities that enable seamless distribution across regions and platforms. Built on LTN's intelligent, multicast-enabled, fully managed IP transport network that delivers <200ms latency and high reliability (99.999%), customers can route channels around any inherent problems on the public network.
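For context, the 99.999% ("five nines") reliability figure corresponds to only a few minutes of allowed downtime per year, which a quick calculation makes concrete:

```python
# Allowed downtime per year implied by "five nines" availability.
availability = 0.99999
minutes_per_year = 365 * 24 * 60          # 525,600 minutes
downtime_min = (1 - availability) * minutes_per_year
print(f"{downtime_min:.2f} minutes of downtime per year")  # about 5.26
```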
LTN Wave benefits from innovative and patented technology to protect against packet loss and delay, including LTN Rapid Error Recovery (RER) protocols and LTN dynamic multi-carrier routing (DMR) algorithms. LTN Wave provides access to rich data such as video quality, ISP connectivity, signal continuity, packet loss, latency, and last-mile health. LTN's fully managed and monitored network with always-on TOC support provides automatic alert triggers and proactive troubleshooting to quickly resolve any issue, while empowering customers to understand and counteract potential issues before they occur.

Outcome
Wave offers an intelligent, flexible, and cost-efficient means of reliably distributing multiple versions of content to any destination worldwide, enabling media owners to maximize reach, monetization, and ROI. Deutsche Welle and MSG Networks are examples of companies benefiting from a highly scalable, intelligent video transport mechanism. LTN's managed service layer and proven technology enable both companies to migrate their channel distribution to IP quickly and easily while increasing operational efficiency. Wave provides the crucial reliability, scalability, and business intelligence needed to maximize reach and ROI on premium live content, all without complex technology headaches or heavy CapEx investment. Wave simplifies complex video transport workflows to drive operational efficiency and scale while granting complete visibility and control to help customers achieve business goals. Underpinned by proven, ultra-reliable network performance and always-on expert TOC support and monitoring, Wave enables media companies to focus on content and audience growth while achieving total peace of mind.


NOMINEE LucidLink LucidLink Panel for Premiere Pro

LucidLink Filespaces provides content creators with real-time access to media assets of any size in the cloud, without requiring creators to download or sync their media first. Globally distributed creative teams can collaborate simultaneously, with an experience no different from using a conventional hard drive. LucidLink fits any workflow without requiring creatives to learn anything new, and empowers users to work together on projects from any location. LucidLink can stream heavy media in real time from a single location in the cloud to any creative user anywhere in the world. Creatives love LucidLink because it's incredibly flexible, works with any creative tool, and runs on any major operating system. LucidLink is powered by a unique, collaborative file system that sits atop any S3-compliant object storage as well as Microsoft Azure Blob. Creatives can use any creative tool they like, including NLE tools from Adobe, Avid, Apple, and Blackmagic. For IBC 2023, LucidLink will be showcasing the entirely new LucidLink Panel for Premiere Pro. The first integration of LucidLink Filespaces within a creative tool is with Adobe's Premiere Pro video editing software, allowing editors to preemptively cache just the media needed in their edit, directly within the Premiere Pro application. Unlike conventional methods that require painful downloads and time-consuming file transfers, LucidLink has revolutionized the media and entertainment industry by streaming only the essential bits needed by any creative tool, right at the playhead. The new Premiere Pro integration brings intelligent, sequence-aware caching directly into the editing tool. By eliminating the need to pin entire folders at the desktop level, the LucidLink Panel for Premiere Pro enables creatives to effortlessly pin and cache only the content relevant to the editing timeline. Within Adobe Premiere Pro, editors can now either pin just the clips needed within their sequence or, if more precision is needed, cache only the clip ranges found within their edit, giving editors a faster and more performant experience.

The LucidLink experience is instantly familiar to content creators everywhere, who need collaboration to be as simple and accessible as it was with the on-prem, brick-and-mortar spaces of the past. LucidLink expedites workflows, enabling teams to focus on creativity rather than media management. Security is paramount to media and entertainment organizations, and LucidLink rises to this challenge better than any other solution on the market. With end-to-end zero-knowledge encryption and robust user access control, it safeguards sensitive assets without compromising efficiency or impacting a creative user's experience. A split-plane streaming architecture and advanced file capabilities further solidify its appeal across local area networks (LANs) and wide area networks (WANs). For creatives, LucidLink fits into known and familiar workflows with no learning curve required; for teams, LucidLink facilitates adapting to the new normal of distributed teams working from anywhere. For IT administrators, LucidLink offers infosec-approved, end-to-end, zero-knowledge encryption, making the job of supporting creatives a snap.
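Conceptually, caching only the clip ranges an edit actually uses reduces to merging overlapping intervals within each source file. A minimal sketch of the idea (this is not LucidLink's actual API; all names and numbers here are hypothetical):

```python
def merge_ranges(ranges):
    """Merge overlapping (start, end) clip ranges, in seconds, so each
    cached region of the source file is fetched exactly once."""
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1]:
            # Overlaps the previous range: extend it instead of re-fetching.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Ranges a hypothetical timeline references within one source clip:
timeline_ranges = [(12.0, 20.0), (18.5, 25.0), (40.0, 42.0)]
to_cache = merge_ranges(timeline_ranges)
print(to_cache)  # [(12.0, 25.0), (40.0, 42.0)]: 15s cached, not the whole clip
```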


NOMINEE Magewell Director Mini

Magewell's new Director Mini is a portable, all-in-one live production and streaming system. Combining multi-input switching, graphics, streaming, recording and monitoring in one compact device, Director Mini enables a single operator to easily create visually compelling productions for live event coverage, remote production feeds and more. Director Mini offers a unique combination of input source flexibility and feature richness in an exceptionally versatile form factor. Users can switch between two HDMI inputs and two USB AV inputs, as well as three simultaneous live IP sources including SRT streams, RTMP streams or up to two NDI® HX sources. File-based media assets including video, audio and images can also be combined freely with live sources. Embedded audio is supported on the HDMI and USB inputs, alongside a 3.5mm analog audio input and an additional audio-only NDI® input. Director Mini's intuitive user interface is accessed through its integrated 5.44-inch AMOLED touchscreen. Users can define multiple scenes that combine live HDMI, USB and audio inputs with network streams, media sources and graphics, then switch or transition between these scenes on the fly. Chroma keying enables the use of virtual backgrounds, while telestration enables real-time on-screen drawing and combines with built-in scoreboard functionality to support sports productions. PTZ camera control is also available through the touchscreen, simplifying single-operator productions. Director Mini's touchscreen interface is complemented by the Director Utility smartphone and tablet apps, which provide remote configuration, audio controls, input switching, scoreboard control and more. The Director Utility can also turn the smartphone's camera into a streaming source as a mobile input to the Director Mini hardware, further enhancing the multi-camera production possibilities of the system. Up to three mobile devices can be used simultaneously as sources to Director Mini.

Director Mini can encode video up to 1080p at 60 frames per second and bitrates up to 30Mbps, with a flexible array of output possibilities. Productions can be streamed using the RTMP protocol to popular platforms such as YouTube™ Live, Facebook™ Live, Instagram® Live or custom destinations. Live comments can be displayed while streaming to YouTube, Facebook or Twitch. The ability to output an SRT stream makes Director Mini an excellent tool for contributing high-quality feeds to an off-site location for remote production. Alternatively, the device can create one NDI® HX output to serve as a source for additional IP-based production tools. Its USB-C port can be configured to display the program output, the user interface or a loop-through of either HDMI input on a connected USB-C display. Last but not least, content can be recorded to an SD card, a USB flash drive or the device's internal storage. The compact Director Mini hardware features a 1/4"-20 thread for use with standard camera-mounting accessories and can be operated in horizontal or vertical orientation. The device can be powered with the included power adapter but also supports two NP-F hot-swappable external batteries (not included), enabling uninterrupted power for long productions.


WINNER Matrox Video Matrox ORIGIN

Tier 1 live production requires frame-accurate, deterministic, low-latency, redundant, and responsive interconnected systems. So far, no cloud solutions have satisfied those requirements without compromising quality, latency, and reliability. Matrox ORIGIN solves that problem. This disruptive technology is a software-only, vendor-neutral, asynchronous framework that runs on IT infrastructure, free from the constraints of the synchronized video realm. It can achieve highly scalable, responsive, low-latency, easy-to-control, and frame-accurate broadcast media facilities for on-premises, cloud, and hybrid deployments.

Significance of Matrox ORIGIN:
• Asynchronous processing of uncompressed video for live production uses the speed and power of IT architecture to process faster than real time.
• Cloud-native, not a 'lift and shift'.
• Operates on a single host or across multiple hosts within a distributed environment, making it equally effective on-premises and in the cloud.
• Vendor-agnostic, so broadcasters can choose best-of-breed components from any supplier without being locked into a specific ecosystem. This is already possible on-premises.
• Built-in, frame-accurate redundancy and live migration, even across multiple AWS Availability Zones.

With Matrox ORIGIN as the underlying infrastructure, developers can focus resources on what differentiates them. Products will run equally well on a single host or in distributed systems, on-premises or in the public cloud. They can develop once and deploy many times. Broadcasters can operate, build, and develop scalable, best-of-breed solutions for public or private clouds without restriction to a particular vendor. They can make better use of their on-premises resources, offload peak needs into the cloud, run exclusively in the public cloud — or all of the above — at whatever pace makes sense for their business.

Unique features and benefits
• Asynchronous: Matrox ORIGIN operates asynchronously to process and interconnect uncompressed data as fast and as soon as possible, removing all delays associated with synchronous interconnects. This enables low-latency, uncompressed, and highly responsive systems that make large-scale, tier 1 live production in the cloud possible.
• Single-Frame Control: Matrox ORIGIN provides simple, granular control of a single frame. Any single unit can be frame-accurately routed or processed anywhere within the distributed and nonblocking environment of the Matrox ORIGIN framework, resulting in great flexibility with guaranteed AV synchronization that hasn't been possible before.
• Integrated Clean Routing and Switching: This is possible because Matrox ORIGIN controls every frame. Signal-path compensation delays are no longer relevant, and any frame can reach any destination frame-accurately on a large-scale, uncompressed, and distributed fabric.
• On-Air Scalability: Matrox ORIGIN can provision or decommission compute to closely match dynamic operational processing needs with infrastructure costs while on the air. It can live-migrate software processing at runtime without dropping a single frame or disrupting the control system.
• Built-in Redundancy: Matrox ORIGIN provides infrastructure to develop and operate stateless media-processing services with granular protection of every frame. The framework manages redundancy and requires no additional intervention. It also supports redundancy across multiple AWS Availability Zones to address mission-critical resilience requirements.
• Simple APIs: Developers can now build multiple best-of-breed offerings for broadcasters to choose from.


WINNER Media Distillery Ad Break Distillery

Ad Break Distillery™ is the latest solution from AI video analysis company Media Distillery. Through AI- and ML-powered deep content analysis, the solution identifies the start and end times of ad breaks in real time and in a fully automated way. It targets content owners, broadcasters, TV operators (MVPDs), and OTT streaming services: all players that process or distribute video content but do not know the exact location of ad breaks in TV broadcast streams. After detecting ad breaks, it instantly delivers time markers as actionable metadata, which can be used to:
• Create new ad inventory: Replace existing linear ad breaks with dynamically inserted and personalized ads for catch-up and replay.
• Enforce trick-play restrictions: Apply restrictions during ad breaks but allow viewers to freely navigate through the content.
• Repurpose broadcast content for FAST and AVOD: Use for content archives where existing ads no longer hold value.
• Bring the OTT experience to replay: Elevate catch-up and replay viewing experiences to align with OTT UX, with shorter and personalized ads.
• Offer an ad-skipping subscription option: Offer a premium subscription that allows viewers to skip ads.

Ad Break Distillery allows broadcasters and video platform operators to derive additional revenue from advertising in catch-up and replay when the location of ad breaks is not provided in the broadcast.

Key features
• Fully automated ad break detection, delivered immediately after the program ends.
• Eliminates the need for broadcasters to provide ad markers for a consistent user experience.
• Can be integrated directly or via existing metadata providers.
• Leverages the Deep Content Understanding platform for easy integration, so customers can easily add it on top of existing Media Distillery services.

Ad Break Distillery is commercially available as a product; it was launched on March 14, 2023. After a successful proof-of-concept trial in 2022, Swisscom, a major telecommunications provider in Switzerland, selected Ad Break Distillery from Media Distillery, and it is now being deployed. The anticipated launch with subscribers is in the summer of 2023. Compared to manual ad break annotation, which tends to be costly and time-consuming, Ad Break Distillery saves video services time and resources. By detecting ad breaks automatically, it allows video services to boost advertising revenue and improve the viewing experience for catch-up, replay, FAST, and AVOD. The integration of Ad Break Distillery across Swisscom's most popular channels demonstrates higher user satisfaction from a frictionless viewing experience.


WINNER Media Distillery Topic Distillery

In a world where on-demand content rules, Media Distillery's Topic Distillery™ is changing how video service providers deliver, and viewers consume, non-scripted live content.

Unique selling proposition (USP)
Non-scripted live content, like reality shows and news, often lacks detailed descriptions, leaving users unsure whether or not to watch a program. Due to this limitation, users can only find programs based on their titles, not the content's subject matter. Topic Distillery™, powered by AI and ML algorithms, automatically identifies topics, generates chapter markers, and creates meaningful chapter titles for non-scripted live broadcasts. Viewers now receive comprehensive content descriptions, motivating them to watch. Furthermore, viewers can discover more content aligned with their interests and enjoy long content in shorter forms. This elevates content discovery, offers chapter-based playback, and enhances user experiences, ultimately increasing engagement and reducing churn.

Design and innovation
Topic Distillery™ employs cutting-edge technology, including computer vision, speech-to-text, content embeddings, and large language models. This automated solution generates ready-to-use chapters within just 15 minutes of a program concluding, unlocking the potential of non-scripted live content that previously lacked the metadata necessary for effective search and discovery.

Technical excellence
Research from Screenforce/SKO revealed that 60% of catch-up viewing happens on the airing day. Topic Distillery™ empowers TV operators to capitalize on this window, providing an enhanced viewing experience for their users. NLZIET's proof of concept and beta test demonstrated that the majority of users prefer chapter-based playback. An impressive 92% expressed willingness to use the solution if it were available live.

Business benefits
Since its adoption by NLZIET on March 1, 2023, Topic Distillery™ has improved the viewing experience for over 200,000 subscribers. These enhancements span 400 hours of programming each month, including news, talk shows, sports, and more from top broadcasters. A major global media company is currently evaluating Topic Distillery™ in a user trial, indicating its potential for widespread industry adoption.

Cost-effectiveness
Topic Distillery™ offers a cost-effective alternative to manual metadata generation. By automating the process, TV operators save valuable time and resources while expanding their content catalog's search capabilities. This approach enables them to reach a broader audience with diverse content preferences and effectively monetize their offerings. Topic Distillery™ empowers TV operators to offer engaging, snackable content and detailed program overviews. It facilitates topic-based content discovery and chapter-based playback, and streamlines navigation throughout a program. The positive feedback from NLZIET viewers confirms its effectiveness, positioning it as a leader in content delivery innovation.


WINNER Mimir Mimir

M

imir is a cloud-native video production and collaboration platform. Mimir has live production features, media asset management features, and a wide range of live production features in the cloud. With Mimir, users can access and find content independently of its location. All that is needed is an Internet connection. Artificial intelligence (AI) assisted automatic metadata logging, including ChatGPT, combined with a powerful search tool reduces the time necessary to find the required content for editing projects and from video archives. Mimir is the tool of choice for anyone transitioning from on-premise to the cloud or looking to modernise their existing media infrastructure and workflows. Mimir can fill the gap of several legacy systems for media ingest, production asset management, media asset management, archive and backup in a cloud environment, a hybrid cloud or even with on-premise infrastructure. Mimir is also a transcoding platform and tool for collaboration, sharing and comments. Since its launch in 2019, media houses, production companies, news agencies and broadcasters have embraced Mimir for their media cloud archive and backup, AI-assisted automatic metadata logging, video collaboration and sharing, PAM and MAP needs, live feed integrations, and more. Customers include, amongst others, The New York Times, blinx, Formula E, Hilton, Deutsche Press-Agentur, GB News, NRK, TV 2, and WHO, and is counting 60+ customers worldwide. Mimir is created from scratch on cloud technology without the traditional legacy of on-premise, so its deployment and update cycles, without downtime, adapt to modern requirements of continuous updates.

Mimir has certified a range of integration partners and supporting technologies for an enhanced workflow, including NDI support and the use of ChatGPT to create video content descriptions automatically. Mimir is available, for example, through MOS, iFrame and as a panel in Adobe Premiere Pro, Vimond VIA, and Cutting Room. With its open interface, Mimir is agnostic to which storage solution it connects to, as well as to non-linear editing solutions and newsroom systems. Mimir also integrates with a wide range of AI technologies for transcription, translation, face detection, label detection, OCR, categorisation and more. Mimir is NRCS agnostic and integrates with newsroom systems such as Dina, Octopus, Inception, iNews, and ENPS. It is also NLE agnostic, working with Adobe Premiere, Avid Media Composer, Final Cut Pro X, Edius, DaVinci Resolve, Blackbird and Cutting Room.


NOMINEE MultiDyne APE Advanced Power Extenders

MultiDyne’s new APE Advanced Power Extension Line broadens the possibilities of signal transport and power extension for 12- and 24-volt fiber camera systems. MultiDyne developed the APE family, featuring the HUT-APE and SilverBack-APE, to meet the needs of DC-powered cameras from leading vendors. Both devices are plug-and-play with automatic camera recognition. The HUT-APE frees camera chains from the limitations of hybrid copper and fiber cabling by enabling cameras to be separated from their CCUs. It achieves this by tricking the camera and CCU into seeing a physical copper connection. Content producers can now use affordable, conventional single-mode fiber, which brings added benefits such as improved performance (no RF, EMI, or grounding issues), accelerated set and strike times, and reduced weight for transport on OB trucks and within flypacks. The HUT-APE offers long-range power for DC-powered systems, broadening the range of camera manufacturers and types now supported. Content producers can pair a HUT-APE with the latest high-end SMPTE studio cameras from Grass Valley, Panasonic, and Sony, and provide power from up to three kilometers away over SMPTE hybrid fiber. The HUT-APE can be paired with a companion throwdown power converter to provide 12- and 24-volt accessory power to lights, monitors and other production equipment in places where local power isn’t available. APE products supply up to 325 Watts of power, leaving plenty of room to power camera accessories. The chain gets even more interesting when extending capability to include MultiDyne SilverBack fiber camera adapters. MultiDyne’s latest SilverBack-V and SilverBack-VB camera adapters specialize in converting digital cinema cameras into SMPTE studio cameras for live multi-camera productions, adding a cinematic feel to sports, TV and worship content.
The SilverBack-APE includes multiple outputs to match camcorders, PTZ, or digital cinema cameras from any manufacturer, including the 24-volt ALEXA 35 from ARRI and the V-RAPTOR XL from RED, both very popular in the film production community.

Seamless connectivity between APE extenders and MultiDyne’s VB Series of signal transport products brings additional user benefits. The VB Series is a range of highly configurable fiber transport solutions that allows users to support a broad spectrum of signal transport combinations over long distances. MultiDyne builds modular VB Series products to specification, allowing customers to populate the chassis with various video, audio, data and Ethernet cards. The APE family represents the most substantial remote powering system from MultiDyne to date, reaching nearly ten times the distance of its predecessor. Using its dual connectivity capability, the APE’s impressive power output can be used for both a camera and a VB Series fiber throwdown for signal transport to a studio, control room or video village. APE Extenders have user-selectable voltage outputs ranging from 5 to 24 volts to meet almost any camera powering requirement, along with related camera accessories. It is a versatile power supply system, and with connectivity to MultiDyne VB Series products, customers can easily design and build their ideal fiber transport networks with a granular building-block approach.


NOMINEE MultiDyne HoneyBadger

The MultiDyne brand is synonymous with fiber-optic transport, and its new HoneyBadger solution, receiving its European debut at IBC, breaks the paradigm of what’s possible in a single bulk fiber transport platform. HoneyBadger brings several classic MultiDyne strengths together into one common platform with exceptional signal density for HD, 3G (quad-link 4K) and 12G (single-link 4K) productions. Ideal for stadiums, arenas, campus and venue-wide signal extension, metropolitan intra-facility connections and classic point-to-point links between trucks, control rooms and studios, HoneyBadger offers an expansive, unparalleled feature set for production-based fiber transport. With support for eight camera feeds and SDI return channels, MultiDyne’s latest innovation draws on several decades of bulk fiber transport product design experience. With HoneyBadger, there are no limits for local signal connectivity and extension, thanks to its high I/O density and two independent 1Gb local-area network (LAN) extensions. The latter enables IP connectivity over single-mode fiber strands. As users can also extend four partyline intercom channels (wet or dry), eight bi-directional line-level analog audio outputs and eight mic-pre inputs with phantom power over two cost-efficient single-mode fibers, customers can manage all long-distance bulk fiber transport needs from one box. That also includes analog tri-level or bi-level genlock outputs, legacy GPIO/serial control signals and more. HoneyBadger is an ideal field fiber solution for a new generation of content producers faced with a broader array of formats, signals and connectors than ever while seeking to bridge the gap between fiber and IP. It scales to serve the expanse of any production workflow or requirement, so that everything the content producer needs for multi-camera or multi-announcer productions converges within its design.
A typical HoneyBadger application for live production will employ a 5RU remote unit for media contribution and transport and a 4RU unit at the receiving location. Both provide standard connectivity with full-size BNCs for video, XLRs for audio, and terminable Phoenix connectors for serial data and GPIO control signals. Speaking to the product design’s aesthetics, HoneyBadger is essentially an active broadcast junction box that is easy to transport, manage and maintain. It installs comfortably into an actual JBT junction box, which is typically a stainless-steel, wall-mounted enclosure that facilities build into the architecture. Built with streamlined architectures and quick, simple connectivity in mind, HoneyBadger is perfect for customers that want to provide an all-in-one signal transport solution that technicians can access and plug into for hassle-free, quick-launch productions.


WINNER NEP Group TFC Ephemeral Productions

Ephemeral Productions is the newest advancement for NEP’s patented broadcast controller, TFC. The SaaS solution allows production teams to seamlessly reconfigure any TFC-enabled facility (including mobile units and connected production) for multiple shows in a matter of minutes. Ephemeral Productions replaces cumbersome setups with preset templates, allowing broadcasters to effortlessly customize and optimize their production environment, saving valuable time and resources, and shifting their focus from technology to producing compelling content. TFC offers broadcasters unmatched efficiency, consistency, and significant monetary savings in managing production configurations for multiple shows through connected production facilities. This is accomplished using a pair of new features: n Ephemeral Productions: Production teams can create short-lived productions that make use of “fixed” or shared resources (i.e. from a datacenter). Optionally, these productions may be discarded after use. n Resource Manager: Identifies locations and whether devices and flows are “fixed” in those locations. (As some devices are split up across multiple locations, it’s possible to link the same device to multiple locations. In the same service, flows can be linked to devices that are linked to PCRs.) These new features add to TFC’s existing offering and increase the value that TFC is able to deliver in a production environment: n Efficiency: TFC’s intuitive interface streamlines production setup changes, reducing downtime between shows. n Consistency: TFC ensures that every production, regardless of the facilities being used, adheres to

the same high standards and workflows. By centralizing core settings and preferences, it guarantees uniformity in graphics, audio, video quality, and more. n Savings: TFC optimizes the utilization of shared resources across different shows, saving costs on additional hardware and maintenance. Moreover, the enhanced efficiency leads to reduced labor expenses, as fewer staff members are required to handle setup changes. With TFC, broadcasters can shift their focus from technology to producing compelling content. Its intuitive design ensures seamless transitions between productions, maximizing productivity and enabling broadcasters to cover more events while maintaining a consistent production environment. The platform’s ability to optimize resource utilization translates into substantial monetary savings, empowering broadcasters to invest in core areas of their operations.


WINNER NETGEAR M4350 Series Managed Switches

In AV environments, customers traditionally had to use a switch designed for IT purposes, which takes a lot of time to understand and configure. They had to learn arcane IT commands and terms and/or hire an IT person to set it up. It was complicated and time-consuming. The M4350 solves this challenge out of the box for many installations, with an AV-centric user interface and the Engage controller offering easy, port-based AV profiles to take the guesswork out of configuring a switch when needed. Where M4250 switches paved the way for small to medium installations, M4350 models now bring scale and redundancy on their own, or at the aggregation layer in concert with M4250 switches at the edge, designed for the most demanding AV-over-IP installations of up to thousands of endpoints. The M4350 brings all the simplicity of the M4250 AV Line packed into more enterprise-class hardware, with redundant power supplies and larger fabrics with 25G and 100G uplinks, centrally managed by the NETGEAR Engage™ Controller. Simplify your AV multicast deployments with NETGEAR IGMP Plus™ for out-of-the-box functionality for your Professional AV, Medical AV, Residential AV, Broadcast AV, Lighting installations, and more. Automatic multi-switch configuration and profile-based setups make setup a snap, allowing you to focus on other parts of the installation. The revolutionary NETGEAR AV user interface contains pre-configured profiles for all major audio, video, and lighting protocols including AVB, Dante, AES67, Q-SYS, NVX, AMX, NDI 4, NDI 5, ZeeVee, Aurora Multimedia, Kramer, Atlona, LibAV, Visionary, SDVoE and others. SMPTE ST 2110 is supported on select models.


WINNER On-Hertz Artisto

On-Hertz is leading a pivotal shift in the broadcast industry with the announcement at IBC 2023 of native SMPTE ST 2110 and Multi Audio Devices (MAD) support for Artisto, a first for a fully virtualized audio solution. Broadcasters are navigating an ever-evolving industry landscape, necessitating more efficient, scalable and distributed operations to maintain a competitive edge. Transitioning to an ST 2110 IP-based infrastructure marks a significant milestone for many broadcasters. However, achieving true scalability requires a change from a hardware-centric to a comprehensive software-based production infrastructure. While video solutions have been advancing at pace, broadcasters have grappled with a lack of reliable software solutions specifically serving audio needs, a gap filled by On-Hertz’s Artisto. Unlike existing video-oriented solutions, often featuring a low channel count, Artisto caters to the high channel count demanded by audio. Moreover, existing solutions typically rely on dedicated hardware, whereas Artisto’s fully software-based approach can be deployed on-premises or in the cloud, on bare metal or on a hypervisor, traditionally or containerized. Its in-house developed 2110-30 interface requires just a generic network card. The solution allows broadcasters to break free from legacy hardware limitations, turning any browser into a powerful production platform. On top of that, its comprehensive API facilitates Artisto’s integration into larger workflows, including with orchestrators and third-party hardware or software. Enhancing Artisto’s already comprehensive integration with protocols such as SIP, WebRTC, SRT, NDI, HLS and Icecast, the new inclusion of ST 2110 and MAD provides broadcast engineers with an unmatched level of flexibility to reinvent their infrastructure. More than just a mixer, Artisto functions as a comprehensive toolbox, providing interconnectable nodes for audio routing, shuffling, mixing, processing, and monitoring. This innovation ensures pristine sound quality across various configurations and allows unparalleled freedom for broadcast engineers. “From the initial launch of Artisto, we’ve advocated for a shift to a full-software approach as the only way to elevate broadcast productions in the competitive media landscape,” said Benjamin Lardinoit, Co-Founder and CEO of On-Hertz. This revolutionary move towards robust virtualization aligns perfectly with real-world challenges and potential, making On-Hertz’s Artisto an exemplary candidate for this award. These additions allow customers to fully leverage the advantages of true IP workflows and virtualization, emphasizing efficiency, scalability, and distributed operations. It enables broadcasters to meet both creative and economic demands, catering to modern and future industry needs. By transforming a significant milestone into a stepping stone towards a future-ready broadcasting paradigm, On-Hertz is not only innovating but leading the industry towards a new era.


WINNER PHABRIX QxP

Inheriting all of the tools of the acclaimed QxL rasterizer, the QxP offers an integral 3U multi-touch LCD screen with V-Mount or Gold Mount battery support for 12G-SDI and 25GbE IP portability. Designed for all production workflows, whether it’s HD, UHD, SDR, HDR, SDI or IP, remote or conventional, the QxP’s combined analyzer, generator and monitoring toolsets are designed to meet the ever-changing demands of today’s hybrid environments. The QxP includes the latest in PHABRIX’s patented waveform technology, featuring a high-resolution image processing pipeline with support for deep color sources up to 12 bits, delivering all the fine detail needed for camera shading or image grading. Users can access a choice of parade, stacked and overlaid display modes, with the option of multi-colored, highlighted, green or monochrome traces. Nits scales and user-controlled nits markers are provided for SDR, HLG, PQ, S-Log3 and SR Live HDR formats, along with Rec 709 and Rec 2020 colorimetry over the wide range of YCbCr:422, RGB:444, SDI, 2110, HD/2K/UHD/4K/EUHD formats for which PHABRIX is renowned. For real-time IP production, the unit provides support for generation and analysis of HD/3G/UHD/EUHD 2110 payloads on generic SFP28/25GbE interfaces. The flexible architecture of the QxP offers in-field, engineering-grade data view and ANC packet inspection tools together with optional IP upgrades for 2110-UHD/4K 4860p RGB (EUHD), 4GB flexible PCAP capture, Dolby E decode and HDR. For users working in ST 2110 with ST 2059 Precision Time Protocol (PTP), a core IP toolset offers all of the IP confidence status monitoring in an intuitive and accessible manner. An optional IP-MEAS test suite provides a comprehensive set of tools for compliance verification and commissioning of IP systems and equipment. Hardware-based timestamping locked to PTP ensures accurate, real-time, deterministic timing measurements of media flows and ST 2110-21 buffer models. On the SDI side, the QxP offers RTE™ real-time SDI eye and jitter analysis with a highly advanced SDI-STRESS optional toolset ideal for product development. In the 12G-SDI world, noise floors are required to be much lower to ensure that accurate and meaningful measurements can be taken. QxP SDI generation and measurement technology has been specifically adapted for 12G applications. With its unique class-leading SDI-STRESS toolset, sophisticated RTE™ (Real-Time Eye) multi-rate physical layer display, and automated SMPTE compliance measurements, the QxP offers an advanced product solution for SDI compliance verification.


NOMINEE Pixotope Live Controller

In the ongoing battle for viewership, broadcasters are tirelessly working to captivate and expand their audiences. Engaging content plays a pivotal role in achieving success, yet smaller studios often lack the financial means, expertise, and resources necessary to compete with larger counterparts and their high-quality productions. Enter Pixotope Live Controller, a game-changer designed to level the playing field and empower these underdog studios. Live Controller is an all-in-one graphics control solution for broadcast virtual production workflows and is changing the game for broadcast studios of all sizes. By providing a simple, streamlined single-user interface, any broadcast operation can easily implement virtual production as part of its programming. The first-of-its-kind tool effectively democratizes virtual production in broadcast environments thanks to its template-based workflow, which is common among traditional broadcasters. This familiar approach makes it easy for users to tap into the power of Unreal Engine by flattening the learning curve. Users can also use templates to build playlists, which combine graphic assets, data, and user input forms, and allow specific combinations to be saved in a database for future re-use and quick recreation, saving time and reducing technical complexity; a single operator can control virtual studio, AR, and XR graphics side by side with CG. This means broadcasters can create playlists for individual shows, segments or even the entire channel’s graphic workflow, and quickly pull from this database each time. These templates can be customized, and broadcasters can mix and match templates, overlaying different elements and achieving dynamic visuals without starting from scratch. With the introduction of this software and its simplified and synchronized process, virtual production becomes more widely accessible to all studios, regardless of size or budget, allowing them to easily implement graphics and uplevel their programming.

Another barrier to the adoption of virtual production in smaller broadcast operations is that, for more than 20 years, the approach to implementing XR, AR or other virtual elements into productions involved multiple components, requiring multiple experts to make it work. Even if new engines and new capabilities are introduced to upgrade the look of existing broadcast graphics, they do not fit into the existing pipeline. Introducing these new workflows requires more money, training, greater expertise and the adoption of a different approach to working. Live Controller overcomes these issues by fitting into existing workflows and streamlining the licensing process into a single point of contact. It is a simple-to-use web-based centralized hub and, unlike traditional workflows where each graphics pipeline requires its own controller and operator, Live Controller is intuitive enough for anyone to use as long as the necessary graphics are available in the shared asset library, eliminating the need to collaborate with multiple vendors. As a result, it offers a unique and simplified approach compared to current workflows. This technology can be easily implemented and maintained in-house, ultimately saving time and reducing technical complexities. Broadcasters can minimize any sense of disruption, while maximizing the engagement and revenue advantages of implementing new technology into their shows.


NOMINEE Quantum Myriad

Myriad is a new all-flash scale-out file and object storage software platform ideally suited for the evolving needs of VFX, animation, and rendering; the increasing demand for AI and ML content creation and enhancement tools; and new markets such as AR/VR, live production with LED video volumes, and digital twinning. Legacy NAS storage systems provide inconsistent performance, are complex and difficult to scale, and are often deployed in islands that add workflow complexity and management burden. Their slow performance makes rendering a painful and long process. Instead, Myriad makes full use of readily available NVMe storage and RDMA to deliver the extreme performance (tens of GB/s) and high IOPS (hundreds of thousands) needed for cutting-edge animation and multi-platform workflows without the drawbacks or design limitations of legacy systems. Myriad requires no custom hardware, so as commercially available NVMe storage servers gain higher capacity, higher performance, and lower cost, they can be adopted, giving flexibility and adaptability as the business evolves. Myriad lets you consolidate multiple animation, VFX, and rendering workflows into a single fast system that serves all departments, clients, workstations and workflows, including rendering pipelines and AI and ML applications. Myriad delivers consistent performance for all users, stores the large numbers of small files common in these workflows highly efficiently, and serves rendering pipelines without impacting other users. Myriad is built with cloud-native technologies like microservices and Kubernetes, making it extremely flexible and easy to use, with no specialized IT or networking experience required, and it can be easily deployed on-premises or in the cloud. Myriad delivers this performance in a smaller footprint requiring less power, cooling and fewer components, reducing networking complexity, administration overhead and operational costs.
Myriad’s powerful data services ensure that data is deduplicated and compressed to deliver an effective storage size up to 3x the storage capacity. Zero-impact snapshots and clones protect against operator error.

Myriad benefits: n Consistent, fast performance of up to 10s of GB/s performance and hundreds of thousands of IOPS to serve every creative department’s needs, including rendering on a single system on-prem or in the cloud. n Modern microservices architecture orchestrated by Kubernetes to deliver simplicity, automation, and resilience at any scale. n Runs on readily available NVMe flash storage servers so you can quickly adopt the latest hardware capacities and form factors and adapt your storage infrastructure to meet future requirements. n A Myriad cluster can start with as few as three NVMe all-flash storage nodes, and its architecture enables scaling to hundreds of nodes in a single distributed, scale-out cluster. n No specialized IT or networking knowledge needed: powerful automated storage, networking, and cluster management automatically detects, deploys, configures storage nodes and manages the networking of the internal RDMA fabric. n Highly efficient data storage with intelligent deduplication, compression and self-healing, and self-balancing software to respond to system changes. n Simple, powerful data protection and recovery with snapshots, clones, snapshot recovery and rollback capabilities to protect against user error or ransomware.


NOMINEE Quickplay Quickplay Media Orchestrator

One of the greatest challenges for any OTT provider is tackling the fundamentals of media services: aggregating content from many different sources, preparing and orchestrating it for distribution, and delivering it to multiple endpoints and partners is often a maze of manual processes and spreadsheets. Quickplay has developed a media services solution designed to automate content ingest, distribution and syndication across multiple inputs and outputs, and to proactively manage content distribution to ensure internal and external customer metadata, file format, and service level agreements (SLAs) are met. Designed from the ground up for flexibility and efficiency, Media Orchestrator simplifies the media supply chain challenges of content ingest organization, planning, versioning, packaging, reporting and more. Media Orchestrator reduces reliance on manual processes and assures that content meets all specifications from aggregation to distribution. Quickplay’s Media Orchestrator provides templates, integrations, dashboards, and alerts that give OTT providers unprecedented insight into their businesses, so they are no longer flying blind and responding to crises, but instead able to leverage the partner and supplier management they need. The result is more efficient and less costly content ingest, management, and distribution. Media orchestration is aggregation, normalization, and distribution/syndication. Quickplay Media Orchestrator’s power resides in automation that replaces manual steps and spreadsheets with validation against a spec, management of ingestion, and scheduled delivery of media packages to distribution/syndication partners.

Key attributes of Quickplay Media Orchestrator include: n Automated content ingest and validation at scale to boost efficiency and quality control at the onset of aggregation; n Leveraging Quickplay’s deep expertise to expedite title management and operations, including proactively checking, correcting and confirming adherence to specifications and reducing the manual intervention needed to normalize media and metadata and prepare content for distribution; n Using the power of automation to streamline distribution, including versioning, packaging and delivery of content to a growing number of syndication endpoints to generate additional revenue. In addition, Quickplay Media Orchestrator’s content operations services capability enables content providers to leverage digital mastering/post-production services to increase engagement and reach new audiences. As the complexity of streaming presents content providers with multiple new media services challenges, Quickplay Media Orchestrator has the capability to untangle media supply chains, expediting availability of a wide diversity of content and improving bottom-line results.


NOMINEE Rohde & Schwarz R&S®SpycerNode2

SpycerNode from Rohde & Schwarz has proved itself as a popular and powerful storage platform. In response to large numbers of potential users looking for additional functionality to take it into new areas, Rohde & Schwarz has introduced SpycerNode2 this year. SpycerNode2 is built on IBM’s high-performance computing (HPC) technologies, including the Spectrum Scale RAID software. Rohde & Schwarz designers are uniquely placed to take full advantage of these tools, resulting in unprecedented performance in a media application. The Spectrum Scale RAID control and other functionality in SpycerNode2 is built into a redundant pair of external 1U servers. Two are provided for redundancy and security. This new architecture boosts performance dramatically – by as much as 50% in demanding 4K applications – and it allows the designers to incorporate significant new functionality in response to user demand. VSA (virtual storage access) technology provides complete failover protection and uninterrupted data for direct-attached and network users. The controller also incorporates AWS S3 export protocols, making it simple to integrate SpycerNode2 into hybrid storage systems. Each 1U controller includes space for up to eight NVM Express plug-in drives. This provides very fast caching, in a function called Dynamic Media Cache. It reduces demand on the main storage for regularly used content, ensuring much better performance for all users and managing bandwidth through the system. It provides a significant enhancement for network-attached users calling on content regularly, as you would find in a busy post production house, for example. Used in conjunction with the SpycerPAM production asset management software, it also provides seamless interworking with third-party post production software like Adobe and Avid editors, giving users simple workflows with editors accessing SpycerNode2 storage directly. SpycerNode2 has a 5U RAID chassis.
To provide the scalability required by users, storage is built from blocks of 28 SSD or disk drives, up to a maximum of 84 drives in a single cabinet. As many as four 5U chassis can be managed under a single pair of controllers, giving a total capacity of up to 6.7 petabytes. Units can be networked together to provide increased capacity and performance. As well as performance and scalability, the foundation of IBM HPC software also provides greater security for all users. Using VSA (virtual storage access) and Device Manager alongside the other core tools and the redundant external RAID controllers, SpycerNode2 provides highly secure storage with zero downtime. Broadcast and post production facilities need to store large amounts of media in a form which is fast and secure. Typical applications have large numbers of concurrent users, with controlled access to their content. To meet this real need, Rohde & Schwarz has taken its popular and proven SpycerNode platform and re-engineered it to create something that is significantly more powerful, practical and flexible. It takes standard components and operating-level software and adds layers of application software that deliver the practical performance users require.


WINNER Ross Video Extend Reality (XR)

Ross Video offers complete XR virtual LED studio solutions that combine LED backdrops, set extensions, AR foreground elements, best-in-class studio control, and proven workflows. Our rendering platform, Voyager XR, is powered by Epic Games’ Unreal Engine and NVIDIA’s professional Quadro series graphics cards, providing unprecedented rendering quality and performance. Coupled with Lucid Studio, Ross’ virtual production control software, everything needed to produce state-of-the-art virtual productions is at the user’s fingertips via an intuitive interface. Voyager XR offers a comprehensive range of capabilities: n End-to-End Solution: Ross offers a highly integrated XR solution spanning robotics, switchers, rendering engines, and LED walls. The Voyager-D3 LED integration allows unique possibilities like color matching. n Real-Time Dynamic Shaders: Leveraging Unreal Engine 5, users can create lifelike effects including shadows, light effects, reflections, and refractions in both Augmented Reality and Virtual Studio setups. n Unreal Engine 5 Support: Built on Unreal Engine 5.1, Voyager utilizes Nanite for complex geometries and Lumen for global illumination and reflections. n Lucid Studio Integration: Lucid Studio facilitates flexible tracking and camera options, making virtual productions easy without requiring Blueprint scripting. n News Integration: Seamlessly connect Voyager with MOS-based newsroom systems, enabling quick event selection and integration into users’ stories. n Data Integration: Voyager links with the XPression Datalinq server to pull data from external sources, such as statistics, for compelling, engaging and informative content. n Logic & Scripting: While everything can be operated from Lucid Studio, scripting is still possible using the node-based Unreal Blueprint Visual Scripting system.

Harnessing the renowned Unreal Game Engine from Epic Games, the Voyager XR graphics rendering platform enables users to create stunning virtual environments for AR, VS and XR LED studio applications. Its powerful graphics capabilities enable users to create environments that are highly realistic and detailed while making the design process more efficient and outcomes more impactful. Despite its advanced backend, Voyager XR is designed to be user-friendly. For instance, the integration of the Lucid Studio control platform ensures that users, even without in-depth knowledge of Unreal, can operate Voyager with ease. In addition, by integrating with Lucid Studio and Lucid Track, Voyager accommodates a wide range of tracking devices and protocols, ensuring it caters to diverse production requirements. Voyager XR users also benefit from a highly streamlined workflow. The solution can integrate external data, align with MOS-based newsroom systems and even interact with studio automation systems. With the added advantage of real-time adaptability, especially in PIE mode, on-air adjustments are both timely and efficient. Ross Video continues to innovate the Voyager platform with a range of features, including Nanite, a groundbreaking virtualised geometry system that enables more realistic and efficient rendering at lower cost, as well as Lumen, a new dynamic illumination and reflection system. For those focused on mobility, Voyager’s compatibility with AJA Io X3 is a key feature, allowing laptop-based operation. A new D3 plugin also allows seamless integration with D3 LED displays, enabling Ross to provide customers with an end-to-end XR solution as well as enabling unique colour-matching capabilities for LED/set extension and AR applications.


NOMINEE Ross Video Carbonite Ultra 60

For over a decade, Ross Video’s Carbonite Ultra range has set the standard for performance and ease of use for mid-sized production switchers. New in 2023, Carbonite Ultra 60 represents the biggest, fastest, and most powerful Carbonite brought to market to date, demonstrating a clear and ongoing commitment to innovation. Among its key capabilities, Carbonite Ultra 60 offers a wide range of flexibility and performance options. It supports an I/O of up to 60x25 in HD or UHD, while its modular 3RU frame allows for various configurations, such as 36x15, enabling expansion as demands increase. Leveraging the latest hardware technology for incredible performance, Ultra 60 retains the DNA of Carbonite Ultra, complete with the entire existing feature set. The Ultra 60 platform also goes beyond simple layering and transitions to offer capabilities such as onboard Frame Syncs, Format Converters, MultiViewers, and more, making it a versatile and powerful tool for production needs. Its compatibility ranges from SD to UHD and even beyond, supporting most major formats and frame rates. Carbonite Ultra 60 also comes with built-in support for HDR and WCG, making it an ideal system that can grow with the evolving requirements of the industry. Another advantage of Carbonite Ultra 60 is its audio capabilities. Audio mixing and processing features are readily available through an easy-to-install license key. For users seeking to incorporate analogue audio, an external hardware module can add to its extensive functionality, making it an ideal choice for professionals looking to invest in a dynamic and future-proof production switcher. The availability of Carbonite Ultra 60 delivers a series of important technology and performance firsts that enhance both functionality and efficiency for users of mid-sized production switchers. For example, it is the first Carbonite to include modular I/O boards, a feature that makes it easier and less costly to leverage the switcher’s impressive feature set, allowing customers to buy only the I/O necessary for current needs yet providing the flexibility to support future growth. Adding to its user-friendly design, Carbonite Ultra 60 is also the first of its kind to incorporate internal power supplies. This innovation not only simplifies the installation process but also results in a ‘cleaner’ setup, reducing clutter and improving the overall aesthetics of the working environment. Furthermore, Carbonite Ultra 60 has widened its accessibility by being the first in the range to offer an I/O of up to 36x25. This expansion makes it a viable option for facilities that have been eager to harness the power, affordability and feature-rich nature of Carbonite but needed more inputs and outputs than were previously available. Carbonite Ultra 60 also stands out as the first model in the range to provide the same I/O in both HD and UHD. This uniformity across different formats emphasises its adaptability and makes it a versatile option for various applications and use cases. In essence, the Carbonite Ultra 60 represents a significant advancement in the Carbonite series, catering to present demands while strategically positioning users to accommodate future developments.


NOMINEE Sennheiser Evolution Wireless Digital EW-DP

The fully digital Sennheiser EW-DP UHF wireless system marks a breakthrough in audio capture for content creators, filmmakers, and broadcasters. As the fifth generation of Sennheiser’s Evolution Wireless systems, the EW-DP is packed with innovative features, a user-centric design, Smart Assist app integration, and valuable troubleshooting capabilities, empowering creators to achieve exceptional audio quality without the need for wireless expertise. The Smart Assist app for iOS and Android devices revolutionises the way users interact with their wireless system. Through Bluetooth connectivity, creators can remotely sync components and adjust settings directly from their smartphones. This app-driven approach makes wireless audio accessible to an even wider range of users. The EW-DP stands as the industry’s first portable wireless with magnetically stackable receivers and a user-facing OLED display. The system’s automated frequency coordination expedites setup, freeing videographers to focus on creativity. At the core of the system lies the compact, intelligent EW-DP EK camera-mount receiver. Its wide switching bandwidth of up to 56 MHz combined with intelligent switching diversity ensures reliable audio even in demanding RF conditions. Transmitters include a bodypack (EW-D SK) for clip-on mics (choice of omni-directional and cardioid lav mics), a handheld (EW-D SKM-S) and a plug-on transmitter (EW-DP SKP) that will launch in late October. Real-time guidance and troubleshooting are achieved through EW-DP EK’s Smart Notifications. These provide suggestions for errors that occur when setting up or operating EW-DP. For example, if the audio clips, the system will suggest lowering the gain. When the battery is running low, the system sends an alert. If the videographer tries to operate on an occupied frequency, he or she will be prompted to select another. If transmitter and receiver are not linked, the EW-DP EK will prompt the user to press sync.
And if the talent accidentally mutes their transmitter, the videographer is notified and can disable the mute remotely. This guidance makes all the difference on a hectic set. The EW-DP EK’s versatility extends to its mounting options.

With a cold shoe mount and a ¼"-20 thread, combined with a magnetic cheese plate, this receiver adapts effortlessly to various setups. The magnetic system not only provides secure mounting to camera cages and accessory arms but also enables stacking a second transmitter for dual-personality shoots. The availability of a Y cable further expands audio recording possibilities. As part of the Evolution Wireless Digital family, EW-DP inherits family features such as the exceptionally low latency of 1.9 ms and a full 134 dB of dynamics from the mic capsule to the receiver output. No other system can offer such wide dynamics and fine audio detail. The EW-DP receiver can be powered by a BA 70 lithium-ion rechargeable battery (extended battery life of up to 7 hrs on the receiver and 12 hrs on the transmitter), two standard AA batteries, or via USB, for example from a power bank; ready to tackle long days on location. In addition, all units feature an exact read-out of battery runtime in hours and minutes to avoid surprises.


WINNER Simplestream Channel Studio

Simplestream’s Channel Studio allows operators to leverage existing VOD assets to create live linear channels with optional monetisation and a seamless playback experience. Operators can choose their preferred distribution model: a virtual channel with adverts; a stream enriched by data-driven dynamic graphic overlays; a ‘pop-up’ channel; a barker channel; or a free ad-supported television (FAST) channel syndicated to multiple platforms. Channel Studio leverages a simplified workflow where VOD assets are assembled into a playlist. A scheduling file stores a record of each underlying video, together with breakpoints, to form an HLS or DASH stream. The breakpoints are transformed into SCTE-35 markers when customers utilise advertisements. They can be placed at the start and end of the video, or as mid-rolls throughout its duration. Once assembled, the playlist is submitted to be turned into a channel using AWS’s MediaTailor. Simplestream provides the reference schedule file alongside the video assets; MediaTailor normalises the content and turns it into a linear stream. A Channel Studio-powered channel becomes a broadcast channel through the easy-to-use scheduler. Continuous development of new features has made the product even more flexible, reflecting the needs of the market and existing customers to improve day-to-day usage as well as increase efficiency when running multiple channels. Channels can be built as thematic, genre-, or series-based, with ad slots and slates. Looped channels are created through content that’s scheduled on a variable loop (1-to-24 hours), using a web-based playlist interface. Scheduled channels use XML/JSON/MRSS/Excel playout schedules to automatically allow the linear distribution of content. Channels can be streamed with any CDN, by default using AWS’s CloudFront. Channels are distributed through a variety of applications (web, mobile, tablet, Connected TV, and consoles), with Live and EPG views.
Streams are available in HLS and DASH formats, while linear EPGs are created automatically, ready to be utilised on the platforms of choice. Further opportunities to enlarge a channel’s footprint are made available by 20+ syndication connectors that leverage Simplestream’s Syndication module, via XML, JSON, and MRSS. Monetisation is made possible with personalised ad content – delivered via SSAI – that lets operators seamlessly serve unique ads to each user, without limitations. No SDK is required; the module supports VAST, VPAID, and VMAP tags, setting any compatibility concerns aside with an ad server-agnostic approach. Personalised key values and consent management platforms (CMPs) are supported out of the box. Granular details are available, including the device ID for Apple’s Identifier for Advertising (IDFA) and Google Advertising ID (GAID), the user’s GDPR consent string, content ID, and more. SSAI can be completed as part of the AWS MediaTailor implementation, or the SCTE-35 markers can be passed downstream to have ads inserted by third-party providers. Virtual channels can be enriched by data-driven dynamic graphic overlays to deepen the end-user experience with additional layers of information on top of the video content. The feature has already found application in several use cases in the teleshopping space, but is also suitable for the broadcast of news programming and sporting events.
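The playlist-and-breakpoints model described above can be sketched in a few lines. The data shape below is purely illustrative — the field names are assumptions, not Simplestream’s actual schedule format — but it shows how per-asset breakpoints become absolute positions on the linear timeline that a packager can express as SCTE-35 splice markers.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str
    duration_s: float  # runtime of the VOD asset in seconds
    midroll_points_s: list = field(default_factory=list)  # mid-roll break offsets

def build_schedule(assets, start_s=0.0):
    """Assemble a linear playlist: each entry records the underlying video
    plus breakpoints (start/end and mid-rolls) that a downstream packager
    could translate into SCTE-35 splice markers."""
    schedule, t = [], start_s
    for a in assets:
        breakpoints = [0.0] + sorted(a.midroll_points_s) + [a.duration_s]
        schedule.append({
            "asset": a.asset_id,
            "start_s": t,
            "end_s": t + a.duration_s,
            # absolute timeline positions of the ad opportunities
            "scte35_s": [t + bp for bp in breakpoints],
        })
        t += a.duration_s
    return schedule
```

With a 20-minute episode carrying one mid-roll followed by a 15-minute episode, the second asset starts at 1200 s and its markers fall at the asset boundaries, matching the start/end/mid-roll placement described above.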


WINNER TAG Video Systems Content Matching

TAG’s Content Matching is a unique mechanism that detects similar content across two different streams to ensure correct and uninterrupted delivery to the intended destination. This is accomplished by creating a unique fingerprint for each video frame and audio envelope and matching them across the entire media distribution path against a user-defined reference point. This new technology dramatically reduces workflow complexity and eyes-on-glass, and enables media companies to deliver quality content with fewer resources and more confidence. TAG’s Content Matching can identify and correlate audio and video uniqueness accurately regardless of resolution, bitrate, or framerate, thus enabling a match between any two or more points in the workflow. Even after the content has been processed and manipulated, TAG will still be able to identify the match and confirm that the content is identical, correct, and behaves as expected. In addition, the new TAG technology allows users to get to the root cause of problems faster and troubleshoot more efficiently, even in the most complex, elaborate workflows. Based on a sophisticated real-time frame-to-frame correlation engine, the user is notified when the first content mismatch occurs, and combined with TAG’s rich probing and monitoring, the source of the errors can easily be identified and the issue resolved. TAG’s Content Matching enables the following highly requested media workflow applications, with potential uses limited only by the user’s imagination:

- Frame-accurate latency measurement.
- Comparing quality and content accuracy across different feeds.
- SCTE ad insertion.
- AV alignment and audio channel drift validation.

The ability to identify, match and correlate content to content anywhere in the workflow empowers users to measure a wide variety of parameters. With a reference point and one or more monitoring points, comparisons are easily made, and issues can be quickly identified. Combined with TAG’s flagship software-only monitoring & visualization platform, Content Matching is a powerful tool. The technology adds yet another layer of monitoring to TAG’s robust Multi-Channel Monitoring (MCM), a system that manages alarms and alerts operators on 500+ user-defined event thresholds. In addition, Content Matching provides another resource for the data collected and aggregated by TAG’s Media Control System (MCS). The MCS allows data to be visualized with open-source IT tools, providing engineers with a more precise understanding of their workflow and the information they need to improve it.
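TAG’s fingerprinting algorithm is proprietary; the sketch below uses a deliberately naive per-frame fingerprint (quantised mean luma) and a brute-force offset search purely to illustrate the principle behind frame-accurate latency measurement between a reference point and a downstream monitoring point.

```python
def fingerprint(frames):
    """Toy fingerprint: mean luma per frame, quantised to 8 bits.
    Real systems use far more robust perceptual hashes."""
    return [int(sum(f) / len(f)) & 0xFF for f in frames]

def best_offset(ref_fp, probe_fp, max_lag=50):
    """Find the lag (in frames) at which the probe stream best matches
    the reference; with a known frame rate, the lag gives the path
    latency between the two monitoring points."""
    def cost(lag):
        # sum of absolute fingerprint differences at this alignment
        return sum(abs(a - b) for a, b in zip(ref_fp, probe_fp[lag:]))
    return min(range(max_lag + 1), key=cost)
```

At 25 fps, a best offset of 3 frames would correspond to 120 ms of path latency; the same alignment, once found, also supports AV drift and content-accuracy checks across feeds.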


WINNER Techex tx darwin

Techex is a UK-based video and cloud specialist with in-house software development. Designed for next-generation software-defined hybrid and pure cloud workflows, Techex’s tx darwin offers a highly secure and ultra-flexible live media processing, transport & monitoring platform for deployment anywhere within a contribution & delivery chain.

Why is it different?
From its inception, tx darwin has introduced the distinctive ability to switch seamlessly between dissimilar MPEG Transport Streams, allowing cloud workflows to be not only switched seamlessly and intelligently, but also changed, repurposed or regenerated in multiple ways. While SMPTE ST 2022-7 seamless switching is standard for robust workflows, it only works with identical streams from a single encoder; changing between two encoders can lead to downstream decoder errors, disrupting the viewer experience. Yet today’s agile cloud workflows necessitate the flexibility to interchange system components during maintenance and updates. In response to requests from current customers like Sky, NBC and the BBC, Techex developed tx darwin to switch between two different transport streams, whether in the cloud or on-prem, without disruption. This brings night-time work into regular hours with a fully informed team and reduces costs, representing a significant leap in efficiency. tx darwin builds on its valuable role as a trust boundary, AKA a media firewall, which allows networks from different organisations to interact safely and securely, while also repairing out-of-spec inputs in another innovative feature not seen in the market. As an extensible live media processing platform, tx darwin is also built to accommodate any media format in common use, from uncompressed through to compressed, with a wide range of codec options and streaming types, using a growing toolset of transformations derived from Techex’s own R&D as well as premium, trusted third-party providers.
Functionality is accessed either via the API, which provides a consistent integration point, the HTML UI, or integrations with third-party solutions such as InfluxDB for comprehensive monitoring and analytics.

Who is tx darwin made for?
Taken as a whole, tx darwin’s feature set and deployment approach provide a unique platform, written security-first, for the leaders in our industry. Whether a major broadcaster or telco, when considering their most valuable live content, companies need a secure platform for transport, protection, monitoring, and transformation, designed to accommodate everything from transport streams to uncompressed video, just like tx darwin. From a financial operations perspective, tx darwin is a strong asset. Its infrastructure-as-code architecture allows for modular deployment of just the functionality required, providing a tight grip on cloud spending. Additionally, the platform supports full telemetry egress to applications such as InfluxDB, enabling data-driven forecasting of future operational costs. In an industry that thrives on evolution as it moves to deploy software in the cloud, on-prem and the edge, tx darwin’s adaptability and diversity are its greatest assets. By facilitating the integration of processing blocks, from TS to ST 2110, it opens doors for the world’s largest broadcasters and telecoms organisations to explore new content formats and delivery mechanisms whilst addressing emerging workflows.
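To see why ST 2022-7 protection alone is not enough, consider a minimal sketch of its merge logic: it assumes both paths carry byte-identical packets with matching sequence numbers, which is exactly the assumption that breaks when switching between two different encoders — the gap tx darwin addresses. This is an illustrative simplification, not Techex’s implementation.

```python
def seamless_merge(path_a, path_b):
    """ST 2022-7-style merge: both paths carry *identical* packets with
    matching sequence numbers; output each sequence number once, from
    whichever path delivered it. This only works because the streams are
    byte-identical -- switching between two dissimilar encoder outputs
    needs stream-aware switching on top."""
    seen, out = set(), []
    for seq, payload in sorted(path_a + path_b):  # arrival order simplified
        if seq not in seen:
            seen.add(seq)
            out.append((seq, payload))
    return out
```

Here packet 2 is lost on path A but recovered from path B, so the output is gapless; two different encoders would produce non-matching payloads and sequence numbers, defeating this scheme entirely.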


NOMINEE Telemetrics, Inc. TG5 TeleGlide Robotic Camera Trolley

Telemetrics, the innovator in Robotic Camera Control Systems, is set to introduce its new TG-5 robotic camera trolley to European broadcasters and production professionals for the first time at the 2023 IBC Show, 15-18 September 2023, in Stand 12.G43. Part of the Telemetrics TeleGlide® family of robotic camera dollies, the new TG-5 offers superior performance due to built-in LIDAR sensors, an improved servo drive system, and a more aesthetically pleasing housing that can easily be recessed into a production studio or corporate presentation venue floor. "Unlike any robotic camera track and trolley system on the market, the new TG-5 trolley is designed to be embedded into and flush with the floor, making it ideal to blend into innovative set designs without the track showing or causing safety issues for the crew/talent," said Michael Cuomo, Vice President of Telemetrics. "This will allow studio engineers and designers to create new types of sets with AI-assisted, full robotic camera efficiency." Telemetrics has leveraged a number of the safety features of its popular OmniGlide® Robotic Roving platform studio pedestal, making the TG-5 the safest and most elegant camera trolley Telemetrics has ever built. There are laser scanners at the edge of each corner of the trolley so that if a person walks across the track, built-in x-y plane sensors force the trolley to stop automatically. These AI sensors also help the trolley avoid collisions with set pieces. The TG-5 also features built-in LEDs that light up to indicate if there is a person or object in its path under normal operating conditions.

It also has additional safety features to detect failures in any of the components. For example, if the system senses a problem with a part of the trolley, it will automatically shut it down. The trolley is ideal for AR/VR systems and works with the Studio View feature – a real-time graphical 3D reproduction of the studio environment the system is operating in – found in Telemetrics’ RCCP-2A Studio (STS) and Legislative (LGS) software. Lastly, the TG-5 is also backward compatible with all existing TeleGlide track systems (curved or straight), Televator® elevating columns and the full line of S5 series and RoboEye robotic pan/tilt heads. It is now in full production and will ship under normal lead times.


WINNER Telestream Vantage Cloud

Cloud transformation is key to modernizing legacy infrastructures and gaining a competitive edge in today’s media landscape. Mordor Intelligence backs this cloud market momentum with research showing a CAGR of 17.5% across 2023, citing a developing focus on streamlining workflows and business processes as a key driver behind this growth. These numbers show that media organizations are no longer just adopting the cloud; they’re being reshaped by it. They’re leveraging cloud-native environments to enhance workflows and infuse efficiency into key elements of media production, supply chain management, and distribution. However, creating a media ecosystem in the cloud from scratch, with the intelligence and automation required to support a robust video supply chain, can be time-consuming and expensive. To meet tomorrow’s high-volume media processing pipeline pace and complex industry media delivery requirements, Telestream has introduced Vantage Cloud. This new purpose-built solution combines all the feature-rich benefits of the Vantage media processing engine with cloud-native solutions to intelligently orchestrate hybrid media processing workflows. Users can now bridge on-premises and cloud systems with automation support for a broad array of processes, formats, and specifications.

Tried, tested, and proven technology
Built with the same best-in-class technology, you can be confident that you will get the same high-quality encoding and transcoding capabilities that you rely on with Vantage. Media operations teams who want to process their assets in the cloud can leverage powerful, automated, cloud-native workflows, and analyze media to make decisions based on various media properties, metadata, or user-configured rules. It couples the power of flexible workflow creation with the cloud’s capacity, flexibility, and scalability to transform content for editing, broadcasting, distributing, or archiving.
Unmatched scalability
Dynamically scale your resources based on demand to accommodate spikes in viewership during events. Vantage customers who also want to ensure quality control for critical incoming and outgoing media files can now easily do so through the integration of Qualify, the industry’s first fully cloud-native automated QC solution. Qualify works with all CMS, MAM, or DAM platforms to enable fast and affordable QC for all VOD and broadcast operations. This simplifies content validation during transcoding and ingesting to ensure the utmost quality at any stage.

Built for innovation
With UHD and HDR on the rise, Vantage Cloud is able to facilitate higher-bandwidth media processing workflows including transcoding, conversions, and distribution. The rise of UHD and HDR only scratches the surface. Adapt quickly to changing requirements and market dynamics by easily implementing new workflows. Vantage Cloud readily integrates with third-party services to empower enhanced workflows like dynamic ad insertion for new ads on VOD content, analytics and measurement to maximize revenue-generating potential, and intelligent workflows for black space removal that save critical media storage and processing costs.
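Rule-driven orchestration of the kind described — analyze the media, then branch on properties, metadata or user-configured rules — can be illustrated generically. The rules and action names below are invented for illustration and are not Telestream’s API.

```python
def route(media, rules):
    """Return the actions whose predicate matches the media's properties.
    In a real orchestration system each action would trigger a transcode,
    QC check, or delivery step."""
    return [action for predicate, action in rules if predicate(media)]

# Hypothetical user-configured rules: branch on resolution and HDR flag.
rules = [
    (lambda m: m["height"] >= 2160, "transcode_uhd_ladder"),
    (lambda m: m.get("hdr"), "apply_hdr_metadata"),
    (lambda m: m["height"] < 2160, "transcode_hd_ladder"),
]
```

A UHD HDR master would match the first two rules and be sent down the UHD ladder with HDR metadata applied, while an HD file takes only the HD path — the same branching pattern, at whatever scale the cloud provides.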


NOMINEE Telos Alliance Jünger Audio™ AIXpressor

The Jünger Audio™ AIXpressor combines unparalleled I/O flexibility and legendary Jünger audio processing into a compact, 1RU powerhouse. AIXpressor natively supports analog, AES3, MADI, and Jünger’s low-latency 1024-channel tieLight, plus Telos Alliance Livewire+ and AES67 in support of SMPTE ST 2110-30 via AoIP. Four expansion slots allow deployment of additional I/O modules, including 3G HD-SDI, microphone inputs with pre-amps and 48V phantom power, Audinate’s Dante AoIP, and analog, AES3, and MADI sources. The full suite of Jünger audio processing, encoding, and decoding solutions can be added in the field as needed via license. Based on the new flexAI platform architecture, AIXpressor can be used as a standalone processor or deployed as part of a larger processing array incorporating other AIXpressor units and flexAIserver for high channel-count applications.


NOMINEE ThinkAnalytics Think360 Generative AI

The global generative AI market was valued at USD 10.3 billion in 2022 and is projected to reach USD 136 billion by 2031, according to Straits Research – with the Media & Entertainment (M&E) segment generating the most revenue. At IBC 2023, ThinkAnalytics, the world’s leading content discovery vendor, will launch Think360 Generative AI, featuring voice, chatbot and other solutions that use generative AI, personalised recommendations, content metadata and natural language interaction. Hundreds of millions of viewers already use Think360’s personalisation, search/content discovery and voice support. Building on this expertise, ThinkAnalytics is now working with broadcasters and content providers to add new ways of engaging with individuals or groups of viewers, applying generative AI to personalised recommendations and other inbound and outbound interactions, including personalised emails. Think360 Generative AI allows viewers to receive personalised recommendations using natural language on a variety of platforms, including Discord, WhatsApp or Twitch, extending user touchpoints beyond traditional owned and operated platforms. This also presents a new opportunity to reach dormant users who are at risk of churn. ThinkAnalytics’ work with customers such as Liberty Global has already

demonstrated the value of voice, as viewers no longer navigate through menus or search for content manually. Think360 Generative AI takes this one step further with a conversational interface that suggests content based on the viewer’s taste profile, using Think360’s vast understanding of viewer behaviour. This solution has an exceptional understanding of consumer preferences and tastes, taking advantage of ThinkAnalytics’ 20+ years of experience in ML/AI plus the world’s largest data set of real-time viewing behaviour, featuring 475 million viewers. Think360 continually refreshes these results as behaviour and content change. Importantly, operators have the confidence that the solution is based on trusted first-party data from a proven source.
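A taste-profile-driven recommendation can be pictured with a simple content-based sketch: represent both the viewer’s profile and each title as vectors over the same genre axes, then rank by cosine similarity. This is a generic illustration with invented titles and axes, not ThinkAnalytics’ actual model, which draws on far richer behavioural data.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(taste, catalogue, k=2):
    """Rank titles by similarity between the viewer's taste profile and
    each title's genre vector (axes here: drama, sport, comedy)."""
    ranked = sorted(catalogue.items(),
                    key=lambda kv: cosine(taste, kv[1]),
                    reverse=True)
    return [title for title, _ in ranked[:k]]
```

A conversational front end would sit on top of ranking like this, translating a natural-language request into constraints on the same profile before the similarity search runs.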


WINNER TSL MPA1-MIX-NET-V-R

MPA1-MIX-NET-V-R is a fully redundant 1U audio confidence monitor and mixer with 16 instantly recallable independent mixes. Designed to ease the transition towards IP media transport, the MPA1-MIX-NET-V-R is ideal for ST 2110 trucks and installations. It offers a sleek solution designed to integrate seamlessly into compact broadcasting environments, outshining larger audio monitors. TSL engineered the MPA1-MIX-NET-V-R in accordance with key industry standards such as SMPTE ST 2022-7, ensuring fully optimized network topologies that reduce customer risk and network complexity without limiting operational agility. Two 1G AoIP connections provide 64 channels of input, with a further 64 available via an optional MADI SFP. Support for in-band NMOS is built in for integration with TSL Control and other third-party systems.

Key features/benefits
- 64 channels as standard, with an optional MADI input providing a further 64, plus an intuitive interface and easy-to-use controls.
- A simple and intuitive way to monitor multiple audio sources: eight dedicated rotary controls quickly configure and create the required monitoring mix.
- Full integration with today’s broadcast IP networks, allowing operators to easily access any audio source for monitoring.
- A comprehensive webpage and SNMP support, allowing remote configuration and control.
- A remarkably low depth of just 100mm, effortlessly complementing setups where space is at a premium.
- Exceptional energy efficiency, drawing under 30W of power.

User interfaces are critical in high-pressure production environments, so the MPA1-MIX-NET’s design is informed by users. One example is the V-shaped layout of the controls, for accurate fingertip navigation in low-light environments. Additionally, SNMP integration allows remote configuration changes.


NOMINEE TVU Networks TVU RPS One

Introducing TVU RPS One, an all-in-one solution that transforms live broadcasting by combining REMI production and cloud production into a single, portable unit. With unparalleled transmission speeds and exceptional multi-camera production performance, this cutting-edge technology is perfect for sports, news, events, and concerts. For venues with wired connectivity and a need for multi-camera synchronized SDI production, TVU’s award-winning RPS and RPS Link are the preferred choices among top live content producers, facilitating tens of thousands of live remote multi-camera events transmitted seamlessly. This wireless encoder boasts up to four synchronized SDI inputs for 1080p HDR remote production and provides up to four hours of autonomy. Its rugged backpack design ensures excellent shock absorption, making it the ideal solution for off-road live productions in challenging environments. With a sub-second delay and impeccable frame accuracy in multi-camera synchronization, TVU RPS One is the industry’s #1 reference. To meet the most demanding sports production requirements, TVU utilizes time-stamping technology along with its renowned low-latency transmission protocol, achieving perfect frame accuracy with sub-second latency in multi-camera production. Even in remote or crowded areas, it ensures extreme transmission stability with low bandwidth consumption. The encoding technology allows for HD encoding and transmission over IP and cellular data aggregation with as little as 1 Mb/s per channel. Embracing next-generation 5G standards, TVU RPS One breaks through limitations with six worldwide 5G modems that feature LTE/3G fallback. The system aggregates up to 12 connections, including Starlink, WiFi, Ethernet, etc. As it supports up to 125 Mbps, users can enjoy unmatched connectivity and a seamless live broadcasting experience.
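Connection aggregation of this kind can be pictured as weighted scheduling: packets are spread across links in proportion to each link’s measured bandwidth, and the receiver reorders them by sequence number. The sketch below is a simplification with invented link figures, not TVU’s transmission protocol.

```python
def allocate(packets, links):
    """Split a sequence-numbered packet list across links in proportion
    to measured bandwidth (name, Mbps); the receiver reorders by
    sequence number to reassemble the stream."""
    total = sum(bw for _, bw in links)
    shares, start = {}, 0
    for i, (name, bw) in enumerate(links):
        if i < len(links) - 1:
            n = round(len(packets) * bw / total)
        else:
            n = len(packets) - start  # last link takes the remainder
        shares[name] = packets[start:start + n]
        start += n
    return shares
```

With two 5G modems and a WiFi link measured at 60, 30 and 10 Mbps, ten packets split 6/3/1 — the aggregate pipe is the sum of the parts, which is why many modest links can carry a high-bitrate feed.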


NOMINEE Varnish Software Varnish Enterprise 6 with New Massive Storage Engine 4

As demand for data-rich live and VOD experiences skyrockets, so does the pressure on video service providers to rapidly scale content delivery while ensuring seamless viewing experiences. Despite this urgency, broadcasters, telcos and service providers of all sizes are struggling to keep up with demand while providing faster digital experiences, supporting greater and more geographically dispersed traffic, and delivering new experiences, given the limitations of existing infrastructure. Overcoming these challenges requires the ability to store, access and deliver content as efficiently and reliably as possible at the Edge, even during hardware failures and traffic spikes. Offering unprecedented resilience and performance, Varnish Enterprise 6 with the new and upgraded Massive Storage Engine (MSE) 4 is Varnish’s most powerful, feature-rich web cache and HTTP accelerator to date and addresses today’s most pertinent challenges related to video streaming, CDNs and web acceleration at scale. Varnish Enterprise 6 provides a software-based, edge runtime environment, enabling operators to write and execute compute logic as close to the user as possible, in mere microseconds, within the cache. It can be installed in front of any server that speaks HTTP/HTTPS to speed up content delivery by an impressive 300 to 1000% and reduce backend server load by up to 99%. The new MSE 4 is the most advanced solution for storing and managing cached objects and their metadata, and the latest addition to Varnish Enterprise 6. It builds on its predecessor, which offers high-performance caching and persistence for 100TB+ data sets, to deliver new capabilities and features that greatly benefit video service providers. MSE 4 further improves availability, reduces latency, and ensures customers can always deliver optimal digital experiences, under high loads or even when certain drives fail.
It’s ideal for video distribution and large-cache use cases, as it adds persistence and resilience to content delivery internally, externally, and at the Edge.

The latest enhancements significantly improve speeds when managing metadata, often cutting input/output operations per second in half. Additionally, new updates to dynamic configuration processes, content integrity checks and disk error handling can prevent and mitigate the impacts of downtime for cached content, leading to greater resilience and less end-user disruption. Using Varnish Enterprise 6 with MSE 4, any organization can cache more content closer to end users, protect infrastructure, and support huge, unpredictable demand while minimizing power, hardware and costs.

Many of the world's leading broadcasters and video providers already rely on Varnish to optimize their content delivery infrastructure. Tele2 and RTE, for example, deployed Varnish Enterprise with Massive Storage Engine with tremendous results, including drastically reduced hardware needs, increased throughput per server and greater flexibility while transitioning to a streaming-first strategy. Other customers who've deployed Varnish have experienced significant improvements:
• Reduced capital and operating expenses by 30%.
• Reduced latency by 80%.
• Increased object delivery speed by 10x to 100x.
• Increased cache hit ratios by as much as 50% to 90%.
Over 20% of the world's top websites and leading video service providers rely on Varnish Software's caching and CDN solutions.
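The cache-hit economics behind figures like these can be illustrated with a small sketch. This is plain Python and purely illustrative (real Varnish deployments are configured in VCL, not Python): a cache sitting in front of a slow origin turns repeat requests for popular objects into fast local hits, which is where high hit ratios and reduced backend load come from.

```python
from collections import OrderedDict

class EdgeCache:
    """Toy LRU cache illustrating the hit-ratio idea; not Varnish itself."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get(self, key, fetch_origin):
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)       # mark as recently used
            return self.store[key]
        self.misses += 1
        value = fetch_origin(key)             # expensive backend call
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict least recently used
        return value

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

# Simulated workload: a small set of popular objects requested repeatedly.
cache = EdgeCache(capacity=100)
for _ in range(10):
    for obj in range(50):
        cache.get(f"/segment/{obj}.ts", lambda k: f"payload:{k}")

print(f"hit ratio: {cache.hit_ratio():.0%}")  # 450 hits out of 500 requests
```

With 50 popular objects and 10 passes, only the first pass misses, so the hit ratio lands at 90%, in line with the 50% to 90% range cited above.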


WINNER Veritone Veritone Digital Media Hub

Veritone Digital Media Hub is an AI-powered, white-label asset management and monetization solution that gives content owners the opportunity to generate more revenue from their assets and makes content easily discoverable with AI-powered metadata tagging and content management. Digital Media Hub advances broadcast and entertainment monetization opportunities by connecting to an organization's existing cloud storage so they can set up their own branded digital storefront and start licensing their valuable assets. Organizations can make their content easily discoverable by both internal and external audiences with AI-powered metadata tagging, and they can better manage their content with built-in rights management and sharing capabilities. Digital Media Hub is being leveraged by a wide range of users, including TV networks, studios and production companies, film and studio archives, sports and athletic organizations, brands and agencies, and more.

In addition to the array of value-added capabilities and features that already set Digital Media Hub apart from others in the digital asset management space, Veritone continues to evolve the product. In July 2023, Veritone announced new features for streamlining content workflows, gaining valuable insights, and unlocking the full potential of media assets:
• Analytics Center: Adding substantial new content analysis and measurement capabilities, the new Analytics Center feature offers Digital Media Hub admins enhanced visibility into user activities and content usage metrics. Users can identify user behavior patterns, media buyer activities, and search trends, right from within their personalized Digital Media Hub – enabling them to highlight high-performing content, optimize engagement, and drive more value from their media assets.
• Synthetic Data Enhancement powered by Generative AI: One of the most popular new features of Digital Media Hub is the ability to use AI to automate the metadata creation process so users can easily find and discover content via metadata tags, such as through object, facial, or logo recognition. However, sometimes there are still gaps in that metadata, such as the titles and descriptions of videos. Synthetic data enhancement uses generative AI to fill in those gaps, using existing asset metadata to create and apply new metadata that improves the cataloging and accessibility of the asset. This gives users the ability to reduce, or even eliminate, the need for manual human intervention to write consumable information about an asset, saving countless hours in data management and search.
• New AI Engines: Another of Digital Media Hub's most distinguishing features is the ability to leverage multiple AI engines from more than 20 cognitive categories. The goal is to enable a 'best-of-breed' mentality so users can add the AI engines that are best suited for their needs to their Digital Media Hub, rather than be saddled with only the engines the vendor selects. With Veritone aiWARE, the AI platform underpinning Digital Media Hub, Veritone is adding two new engines to the roster: Celebrity Facial Recognition and Content Classification. These will significantly enhance search and discoverability within Digital Media Hub, particularly for studios, broadcasters, and sports organizations looking for particular faces within their archives.
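The gap-filling idea behind synthetic data enhancement can be sketched in a few lines. Everything here is a hypothetical stand-in, not Veritone's API: the `generate` hook represents a real generative-AI call, and the naive template fallback merely shows the flow of existing tags into missing title and description fields.

```python
def fill_metadata_gaps(asset, generate=None):
    """Sketch: derive an asset's missing title/description from the
    metadata tags it already has. `generate` is a hypothetical hook
    standing in for a generative-AI call."""
    tags = asset.get("tags", [])
    if generate is None:
        # Fallback: a naive template instead of a real generative model.
        generate = lambda field, tags: {
            "title": " / ".join(tags[:3]).title(),
            "description": "Footage featuring " + ", ".join(tags) + ".",
        }[field]
    filled = dict(asset)
    for field in ("title", "description"):
        if not filled.get(field):
            filled[field] = generate(field, tags)
    return filled

asset = {"id": "clip-001", "tags": ["goal", "stadium", "celebration"], "title": ""}
enriched = fill_metadata_gaps(asset)
print(enriched["title"])        # Goal / Stadium / Celebration
print(enriched["description"])  # Footage featuring goal, stadium, celebration.
```

The point of the sketch is the workflow, not the text quality: fields that already exist are left untouched, and only genuine gaps are synthesized.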


NOMINEE Veset Veset Nimbus

Veset Nimbus enables broadcasters, media content owners and service providers to create professional linear TV channels and deliver them to any television distribution platform, from OTT to satellite. Veset Nimbus is a feature-rich SaaS cloud playout solution for advanced linear channel management: multiple channels with multiple live feeds can be managed in real time, and channels can include a mix of live and video-on-demand (VOD) content.

As a SaaS solution, it allows for the easy creation and scheduling of new channels in the cloud, meaning that content owners can manage multiple linear channels without the need to invest in hardware. It includes a range of all-in-one channel creation tools, from live stream and file ingest, scheduling, EPG, content management, SCTE-35, multiple live switching and complex graphics, to playout and encoding. Live recording support allows users to record live input sources and ingest these directly to the media library. Users can also record a live feed for real-time streaming or schedule it for playback at a future time.

The solution includes the ability to stream playouts in portrait modes such as 9:16, optimising footage for viewing on mobile phone screens. This feature makes it easier for broadcasters to reach and maintain viewership among the large and growing number of people viewing live and VOD content on mobile.

The latest updates add:
• Support for Dolby Audio®, meaning users can create linear channels with high-quality Dolby Digital 5.1 audio.
• Support for HTML5, allowing users to create engaging and interactive HTML5 graphics and import them into a cloud-hosted playout stream.
• Integration with AWS Elemental MediaTailor, making it simple for users to build channels and insert personalised ads entirely in the cloud.
• Support for AWS Cloud Digital Interface (AWS CDI), enabling users to distribute high-quality uncompressed linear video content, drastically reducing latency and improving the experience for viewers.
• Scheduling blocks, which can contain primary events that are played sequentially or randomly each time the block is scheduled.
• Native import of Adobe After Effects (AE) projects into Veset Nimbus.
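The scheduling-block behaviour described above (a block's primary events playing sequentially or in random order each time the block is scheduled) can be modelled with a small sketch. This is an illustrative data model only, not Veset Nimbus's actual API.

```python
import random

def expand_block(events, mode, rng=None):
    """Sketch of a scheduling block: each time the block is scheduled,
    its primary events play either in order or shuffled.
    (Illustrative model, not the Veset Nimbus API.)"""
    if mode == "sequential":
        return list(events)
    if mode == "random":
        rng = rng or random.Random()
        shuffled = list(events)
        rng.shuffle(shuffled)   # a fresh order every time the block airs
        return shuffled
    raise ValueError(f"unknown mode: {mode}")

block = ["promo.mp4", "ep01.mp4", "ep02.mp4"]
print(expand_block(block, "sequential"))
print(expand_block(block, "random", rng=random.Random(7)))
```

A seeded `Random` instance is passed in the example so the shuffled order is reproducible; in real playout each scheduling of the block would draw a fresh order.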


NOMINEE Viaccess-Orca QoX Suite

Offered as a SaaS with top-notch QoE, QoS, and device monitoring; failure prediction; and monitoring as a service, Viaccess-Orca's (VO) QoX suite enables operators and service providers to ensure superior-quality video streaming experiences. By monitoring the various components of the video delivery chain, including non-VO components, and optimizing their quality, the VO QoX suite ensures an uninterrupted, premium service, helping operators improve subscriber satisfaction, extend viewing times, and reduce churn. The unique features and benefits of VO's QoX suite include:
• Predictive monitoring: Trained on data collected from mission-critical TV platform components, a predictive monitoring service identifies weak signals, giving operations teams advance warning about possible service incidents. The monitoring service also provides causality determination and pinpoints the affected TV platform component, meaning any issues negatively impacting the service can be quickly identified and resolved by the operator without interfering with the end-user's viewing.
• Deeper UX insights: The VO QoX suite provides comprehensive insights into the performance of video streaming services. With it, operators can see the video streaming issues subscribers are facing, gain insights into the streaming experience to make UX improvements, and investigate the quality of playback.
• Real-time access to complex data: With the VO QoX suite, operators gain an extensive understanding of the overall quality of their service. The suite provides more than 120 data points for each playback session in real time, including five important parameters outlined by the Streaming Video Technology Alliance: video start failure, time taken to start playback, rebuffering, playback error, and stream quality. In addition, the QoX suite offers seamless integration with complex monitoring systems and data-mining solutions, providing operators with access to real-time raw data and pre-processed graphs. VO's QoX suite respects the end-user's privacy and is compliant with the General Data Protection Regulation, a legal requirement in Europe for handling personal information, as well as with the requirements of other local privacy legislation.
• Better control over playback data: The VO QoX suite is pre-integrated and bundled with the VO Secure Video Player, allowing operators to see far more data from the playback side and gain a deeper understanding of potential QoE and QoS issues.
• Unparalleled flexibility and scalability: As a SaaS running in the cloud, VO's QoX suite ensures high flexibility and scalability. Upgrades are quick and simple, with little to no downtime or service disruption. The service is fully managed by VO, enabling operators to focus on other vital processes.
VO's extensive experience as a leading global provider of OTT and TV platform solutions and services, deep technological know-how, and unique understanding of the complex TV ecosystem fuel the highly flexible and scalable design of the QoX suite. The solution has been successfully deployed by a tier-1 operator in France, while the new predictive monitoring component is already in use by several operators, both in the cloud and on-prem.
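The five Streaming Video Technology Alliance-aligned parameters named above can be modelled as a simple per-session record. This is an illustrative schema only (the field names and health thresholds are assumptions, not VO's actual data format):

```python
from dataclasses import dataclass

@dataclass
class PlaybackSession:
    """Minimal model of the five SVTA-aligned parameters named above
    (illustrative schema, not VO's data format)."""
    video_start_failure: bool   # playback never began
    startup_time_s: float       # time taken to start playback
    rebuffer_time_s: float      # total stalled time during playback
    playback_error: bool        # fatal error after start
    avg_bitrate_kbps: int       # proxy for stream quality
    watch_time_s: float

    def rebuffer_ratio(self):
        return self.rebuffer_time_s / self.watch_time_s if self.watch_time_s else 0.0

    def is_healthy(self, max_startup_s=2.0, max_rebuffer_ratio=0.01):
        # Thresholds are arbitrary illustrations, not SVTA-mandated values.
        return (not self.video_start_failure
                and not self.playback_error
                and self.startup_time_s <= max_startup_s
                and self.rebuffer_ratio() <= max_rebuffer_ratio)

s = PlaybackSession(False, 1.4, 0.5, False, 4500, 600.0)
print(s.is_healthy())  # True
```

In a real deployment each session would carry 120+ such data points; a record like this just shows how the headline KPIs roll up into a per-session health verdict.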


WINNER Vionlabs AINAR Interact

The future of content discovery is not just algorithmic; it's conversational. We are elated to present our breakthrough innovation, AINAR Interact. This advancement transforms content discovery from a passive experience into an interactive dialogue, powered by AINAR's unmatched AI capabilities.

Highlighting AINAR's new capability: vision-to-text translation
At the heart of our newest update is the vision-to-text translation feature. This cutting-edge development stands as a testament to AINAR's commitment to enhancing user experience. It ensures that viewers can seamlessly communicate with their favorite streaming platforms using natural language, ushering in a new era where technology not only understands data but also interprets human sentiment.

Two-way communication: the core of personalized content discovery
Traditional streaming services often present a one-size-fits-all approach, limiting the user's experience. AINAR Interact, with its enhanced Prompts feature, introduces genuine two-way communication. This breakthrough ensures an individualized content discovery experience, adapting in real time to users' needs, preferences, and moods. The result? A more intuitive and efficient navigation through myriad content options.

Beyond algorithms: understanding emotions and scenes
What sets AINAR apart is its profound understanding of content, down to the granular level of every single scene. Consider the frustration of investing 20 minutes, sometimes more, in sifting through a carousel of options, only to settle for content that may not resonate with your current mood. With AINAR Interact, users can vividly express their emotions or desired themes, and AINAR Interact, in turn, sources content that aligns perfectly with that sentiment. It's not just about viewing preferences; it's about aligning content with emotions, ensuring a deeply immersive viewing experience.

Conclusion: the future is conversational
No longer is content discovery a mundane, passive process. With AINAR's Prompts, we're inviting users into a dynamic, living world where their streaming platform listens, understands, and responds. The carousel of endless options is replaced by a personalized conversation, making content discovery a delightful interaction rather than a chore. In a world where technology is increasingly integrated into our daily lives, AINAR Interact stands at the forefront, ensuring that integration is personal, intuitive, and emotionally resonant. Welcome to the future. Welcome to AINAR's living, breathing universe of content.


NOMINEE Vionlabs AINAR Contextual Advertising

We developed Ad Breaks and Contextual Ads to help AVOD and FAST services increase their advertising revenue while providing a seamless viewing experience for their audiences. Rooted in our commitment to enhancing the viewer experience, AINAR's technology ensures audiences are engaged, not alienated, by ads. The platform understands that consumers demand relevant content, which is why it serves advertisements that resonate with the mood of the content being consumed. By utilizing the latest advances in computer vision and machine learning, AINAR Contextual Ads accurately detects faces, actions, scenes, emotions, and moods to serve ads in line with the content you are consuming. Through contextual ad breaks and detailed scene analysis, AINAR enables hyper-targeting, leading to an increase in impressions and conversions.

AINAR Contextual Ads 'watches' a movie as a human would and describes what is going on in the scene leading up to the ad break. The scene data includes more than 3,000 keywords and 700 mood tags. This helps advertisers keep emotions at the heart of their campaigns and ensure that ads are placed in the right emotional context. The importance of emotions in advertising can't be overstated. Emotions drive decisions, influence perceptions, and foster brand loyalty. Recognizing this, AINAR ensures that every ad is placed within the appropriate emotional context. By keeping emotions at the core of campaigns, it offers advertisers a unique advantage: the power to connect deeply with viewers.

Automated mapping and IAB taxonomy 3.0
AINAR Contextual Advertising doesn't stop at mood mapping. Its automated integration into the IAB taxonomy 3.0 is tailor-made for programmatic targeting. This seamless alignment ensures that every ad adheres to industry standards, fortifying its relevance and efficacy. AINAR Contextual Advertising divides ad breaks into four ranks; the ranking system allows content owners to prioritize and tier their advertisements.

The ranking structure: excellence defined
By dividing the ad breaks into four ranks, content owners can now prioritize and categorize their ads more effectively. Four state-of-the-art networks work in harmony to predict and rank these breaks:
• Network 1: Black frame detector
• Network 2: Deep learning network that looks at the flow of the story
• Network 3: Scene boundary detector
• Network 4: Shot-similarity algorithm
Using these four networks, we are able to create a ranking system:
• Rank 1 demands a full consensus from all networks, signifying the highest-quality ad break.
• Rank 2 necessitates agreement from at least three networks.
• Rank 3 necessitates agreement from at least two networks.
• Rank 4, the Gap Filler, includes ad breaks predicted only by the shot-similarity network, and is used to fill gaps when higher-ranked ad breaks are insufficient.
On top of this, AINAR Contextual Advertising applies post-processing filters for speech to make sure an ad break does not cut off an important discussion or narration that travels across scenes. AINAR Contextual Advertising is not just a solution; it's a revolution, redefining how AVOD and FAST services engage with their audiences.
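The four-tier consensus scheme can be expressed directly as a small function. This is a sketch of the decision rule as described above, with shorthand names standing in for the four networks; Vionlabs' actual implementation is not public.

```python
def rank_ad_break(agreeing_networks):
    """Rank an ad-break candidate from the set of detector networks that
    predicted it, following the four-tier consensus scheme described in
    the text. Network names are illustrative shorthand."""
    networks = {"black_frame", "story_flow", "scene_boundary", "shot_similarity"}
    agree = set(agreeing_networks) & networks
    if len(agree) == 4:
        return 1                      # full consensus: highest quality
    if len(agree) == 3:
        return 2
    if len(agree) == 2:
        return 3
    if agree == {"shot_similarity"}:
        return 4                      # gap filler
    return None                       # not a usable ad break

print(rank_ad_break(["black_frame", "story_flow", "scene_boundary", "shot_similarity"]))  # 1
print(rank_ad_break(["shot_similarity"]))  # 4
```

One edge case worth noting: a candidate flagged by a single network other than shot similarity falls outside the four ranks entirely, which is why the sketch returns `None` there.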


WINNER Vislink WMT LiveLink

Vislink's new WMT LiveLink is a next-generation bonded cellular encoder that provides class-leading video quality in combination with ultra-reliable, low-latency transmission in an ultra-compact form factor. The WMT LiveLink is a feature-rich solution that makes it the new benchmark in IP video contribution. It is based on award-winning Mobile Viewpoint technology that has been the choice of broadcast professionals worldwide for over a decade. It has been engineered around FPGA technology and HEVC encoding to deliver exceptional performance characterized by high-quality, low-latency video transmissions, enabling dynamic and highly fluid two-way interviews.

Pioneering in its design, the LiveLink shines with high-quality 4:2:2, 10-bit HEVC video encoding up to 4K resolutions, and 10-bit luma optimization to meet the demands of HDR. This makes it a game-changer in covering a variety of mobile camera applications, ranging from news and sports to crucial live events.

LiveLink provides impressive bonding connectivity, with support for up to 4x diversity public cellular connections in addition to Wi-Fi and the public internet. Moreover, LiveLink integrates seamlessly with Vislink's private 5G connectivity solution, which delivers high-data-rate connectivity even in heavily populated environments. With bonded cellular, private 5G and Starlink satellite support, LiveLink provides seamless roaming across private and public 5G networks. LiveLink also incorporates must-have features including integrated camera control, a direct audio link to the camera operator and multi-camera/transmitter synchronization, providing a solid array of benefits to users in the field as well as those at the broadcast center.
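The bonding idea behind a multi-link encoder can be sketched with a toy scheduler: packets are spread across several uplinks in proportion to each link's estimated bandwidth. This is purely illustrative arithmetic; real bonded cellular (as in LiveLink) also handles reordering, forward error correction, retransmits and per-link quality estimation.

```python
def bond_packets(packets, links):
    """Toy weighted-credit scheduler spreading packets across bonded
    uplinks in proportion to estimated bandwidth (illustrative only)."""
    total = sum(bw for _, bw in links)
    assignment = {name: [] for name, _ in links}
    credits = {name: 0.0 for name, _ in links}
    for pkt in packets:
        for name, bw in links:
            credits[name] += bw / total   # each link earns credit by share
        # Send on the link that has accumulated the most credit.
        best = max(credits, key=credits.get)
        credits[best] -= 1.0
        assignment[best].append(pkt)
    return assignment

# Hypothetical link estimates: one strong cell modem, two weaker uplinks.
links = [("cell_a", 20), ("cell_b", 10), ("wifi", 10)]
out = bond_packets(list(range(8)), links)
print({k: len(v) for k, v in out.items()})  # {'cell_a': 4, 'cell_b': 2, 'wifi': 2}
```

The stronger link carries half the packets, matching its half share of the total estimated bandwidth, which is the essence of bonded aggregation.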

Notably, the LiveLink boasts a highly compact design that facilitates portability, allowing easy mounting on ENG cameras without disturbing the balance and operation of the camera, or alongside mirrorless camera systems. In addition, LiveLink offers extended connectivity options, leveraging an SFP slot that delivers flexibility in the number of video inputs, HDMI support and SDI video outputs. With two Ethernet connections, it is also possible to employ the unit as a data hotspot for applications including remote control and internet access.

All LiveLink devices in the field can be remotely managed and controlled via LinkMatrix, Vislink's free-to-use management platform that operates from a browser on any device. Operating through a user-friendly web-based interface, LinkMatrix enhances broadcast production workflows by providing a comprehensive all-IP system for managing field devices and receiving locations. This innovative platform enables remote operation and management of Vislink's COFDM and 5G equipment, including the MVP bonded cellular encoders, Playout decoders, the Vislink Quantum receiver, and the MVP IQ Sports Producer platform.


NOMINEE VisualOn VisualOn Optimizer

VisualOn introduces the VisualOn Optimizer, a product set to redefine the landscape of video streaming technology. This innovative universal content-adaptive encoding (CAE) solution tackles the challenge of reducing costs while enhancing the viewer experience, without the need to disrupt existing infrastructures. Debuting at IBC2023, from September 15th to 18th at stand 5.B83, the VisualOn Optimizer marks a significant advancement in the industry.

In response to the rising costs associated with bandwidth usage, the VisualOn Optimizer offers a comprehensive solution. Through its real-time, continuous CAE approach, the solution analyzes both live and VOD content, optimizing transcoder settings to deliver exceptional video quality at lower bitrates. Leveraging the VisualOn Optimizer, service providers can achieve an average bitrate reduction of 40%, with an impressive maximum reduction of up to 70%. These advancements reduce video service bandwidth and storage expenses, enhance service scalability, elevate user experiences, and increase energy efficiency. Moreover, the VisualOn Optimizer can even enhance video quality within the same bitrate.

The adaptability of the VisualOn Optimizer is evident through its compatibility with FFmpeg, enabling deployment across diverse platforms such as bare metal, virtual machines, private clouds, and public clouds. This applies seamlessly to both VOD and live workflows, ensuring its versatility across different scenarios. Tested rigorously through benchmarking and embraced by global service providers, the VisualOn Optimizer consistently delivers impressive results, underscoring its potential to reshape the streaming landscape.
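To make the headline percentages concrete, here is a back-of-envelope calculation of what a bitrate reduction means in delivered gigabytes. The formula and example figures are illustrative arithmetic, not a VisualOn tool or published methodology.

```python
def bandwidth_savings_gb(hours_streamed, baseline_kbps, reduction_pct):
    """Estimate delivery savings from a CAE bitrate reduction
    (illustrative arithmetic, not a VisualOn formula)."""
    # kilobits -> kilobytes (/8) -> gigabytes (/1e6)
    baseline_gb = hours_streamed * 3600 * baseline_kbps / 8 / 1e6
    return baseline_gb * reduction_pct / 100.0

# 1,000 hours of 5 Mbps video at the 40% average reduction cited above:
saved = bandwidth_savings_gb(1000, 5000, 40)
print(f"{saved:.0f} GB saved")  # 900 GB
```

At the cited 70% maximum reduction the same catalogue would save 1,575 GB, which is why CAE gains compound into meaningful CDN and storage cost differences at scale.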

Numerous global service providers have harnessed the power of the VisualOn Optimizer to achieve significant results in bitrate reduction and improved video quality. A prime example is Intigral, a leading provider of cloud-based video products and quality digital entertainment in the MENA region. Through the VisualOn Optimizer, Intigral experienced a remarkable 70% average reduction in bitrates while maintaining exceptional video quality throughout their streaming offerings. This customer success story stands as a testament to the VisualOn Optimizer’s capability to drive transformative outcomes in the realm of video streaming.


NOMINEE VoiceInteraction Audimus.Media

Audimus.Media stands as the industry-leading closed captioning solution, leveraging AI technology to deliver real-time, fully automatic closed captioning across an array of platforms. From live TV broadcasting to streaming and online meeting platforms, our advanced signal processing and speech recognition technology ensures impeccable captioning accuracy with minimal delay. Supporting 40 languages, simultaneous translation, and speaker identification, Audimus.Media proves to be a versatile and efficient asset for diverse production and distribution workflows. The on-premises deployment ensures the lowest caption latency, robustness against network disruptions, and high data security and privacy.

Audimus.Media's language models are in a constant state of evolution through artificial intelligence, ensuring accurate adaptation to local pronunciations and the ever-changing language landscape. Through a daily refinement pipeline that obtains vocabulary from ongoing local programming via Newsroom Computer Systems and external web sources, unusual names and new terms relevant to the daily news cycle are automatically added. This ensures high standards, even in challenging content such as 'Breaking News' or unprepared speech. Moreover, Audimus.Media's NLP modules provide caption post-processing, such as capitalization, punctuation, and profanity filtering, while context-aware caption formatting further enhances readability. This, combined with Audimus.Media's key features such as accurate identification of speakers and spoken languages, enables wider content reach and increased viewer engagement.

Audimus.Media is designed to meet the specific needs of TV broadcasters, with constantly updated technical features such as remote operation control over GPIO, CTA-708 closed caption embedding into HD-SDI signals, restreaming of HLS with synchronized WebVTT subtitles, and generation of streams with captions embedded into video packets according to SCTE 128/ATSC A/53.
It also offers MPEG-TS multiplexer contribution with ST-2038, DVB-Subtitling, DVB-Teletext, or ARIB-B24 streams and can export encoded video clips with synchronized captions for VOD publishing.

Audimus.Media is adaptable to multiple scenarios, with a flexible combination of inputs and outputs. Its source can be an SDI capture card, an analog sound card, NDI or ST-2110-30 audio feeds, or a generic streaming feed. The captions can be delivered to closed caption encoders or as ST-2110-40 ancillary data streams; they can also be published as a captioned live stream to any CDN supporting RTP/RTMP/SRT/RIST as input, or as a stream of captions in any of the most common formats.

With an intuitive web dashboard, Audimus.Media offers a customized setup, control over every configured channel, event scheduling for the creation of repeating live captioning tasks, access to vocabulary customization, and a subtitle editor that allows correction of captions before exporting or embedding them into video files.

Closed captioning is not only a legal requirement but also an essential asset for video content distribution, providing increased accessibility, ensuring future retrievability, and creating monetization opportunities through content repurposing. With an extensive range of supported inputs and outputs for diverse production and distribution workflows, Audimus.Media is designed as the market-leading solution for automatic closed captioning and live translation in both production and delivery. This platform sets the standard thanks to its dependable speech recognition abilities, rapid adaptation to dynamic vocabularies, and seamless integration into a wide range of production workflows.
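Among the caption formats mentioned above, WebVTT is simple enough to sketch end to end. The serializer below is a simplified illustration of the format (real WebVTT also supports cue identifiers, styling, positioning and metadata blocks) and is not Audimus.Media code.

```python
def to_webvtt(cues):
    """Serialize (start_s, end_s, text) caption cues as simplified WebVTT."""
    def ts(seconds):
        # WebVTT timestamps: HH:MM:SS.mmm
        h, rem = divmod(seconds, 3600)
        m, s = divmod(rem, 60)
        return f"{int(h):02d}:{int(m):02d}:{s:06.3f}"
    lines = ["WEBVTT", ""]          # mandatory file header, then a blank line
    for start, end, text in cues:
        lines.append(f"{ts(start)} --> {ts(end)}")
        lines.append(text)
        lines.append("")            # blank line terminates each cue
    return "\n".join(lines)

vtt = to_webvtt([(0.0, 2.5, "Good evening."), (2.5, 5.0, "Here is the news.")])
print(vtt)
```

Running this prints a valid two-cue file beginning with the `WEBVTT` header, with each cue's timing line in `HH:MM:SS.mmm --> HH:MM:SS.mmm` form.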


NOMINEE Wasabi Technologies, GrayMeta Wasabi hot cloud storage x GrayMeta Curio integration

Wasabi Technologies, the hot cloud storage company, has added a powerful new integration with GrayMeta Curio that harnesses the power of AI to create more personalized sports fan experiences with hyper-specific detail. The solution transforms cumbersome media archives into highly indexed and searchable stores that allow digital teams to easily find and deliver customized content to audiences exactly when needed.

Wasabi hot cloud storage is purpose-built to store the world's data. Wasabi enables media and entertainment companies to store and access all their content in the cloud, from raw footage to finished product, with predictable costs up to 80% less than competitors. The integration with GrayMeta Curio will use artificial intelligence and machine learning to generate rich metadata for media libraries stored in Wasabi, enabling digital creatives to instantly find and retrieve specific media segments based on people, places, events, emotions, logos, landmarks, background audio, and more, helping them to deliver relevant content to market as fast as possible.

"Television networks, Hollywood studios, and sports organizations all have vast archives of historic footage that they would like to monetize, reuse, or incorporate into enhanced fan experiences. The problem has been how to transform these monolithic archives into a highly searchable and monetizable content store without investing millions of dollars in technology or thousands of hours in manual content tagging," said Whit Jackson, vice president of media & entertainment, Wasabi Technologies. "The joint offering from Wasabi and GrayMeta provides a viable and economic solution to that problem, so organizations can instantly find the archival content they want and deliver personalized audience experiences."

"The ability to quickly identify the specific content or sentiment of an asset is invaluable in the fast-paced world of media production," said Jesse Graham, technical architect, GrayMeta. "By using the best-of-breed solution provided by GrayMeta and Wasabi, any professional media production enterprise with large volumes of files will be equipped to reap the benefits of AI and next-generation storage architecture."


NOMINEE Witbe Witbe’s Ad Monitoring Technology

Issues with dynamically inserted ads in streaming video are extremely common and take many forms. Significant buffering can occur before and during ads, ads often play at a completely different volume or picture quality than the content they're interrupting, and a single ad can repeat multiple times, aggravating viewers instead of selling the intended product. These issues are a hot topic for AVOD services, FAST channels, and ad-supported subscription services. In Witbe's independent research, we have observed that up to 30% of online viewing sessions are affected by a dynamic ad insertion issue. When advertisers see their ads are not running as intended, they may withhold payment or demand that more ads be run for free. Some ad pods might even get filled with blank slates instead of ads, leaving money on the table for video service providers.

While there isn't one single common mistake that triggers all these issues, there is one method for discovering them and beginning the process of fixing them: ad monitoring. Witbe's new ad monitoring technology can automatically monitor ad performance for video services worldwide. It is a worthy candidate for a Best of Show Award at IBC2023 because it delivers a unique approach that enables providers to know the true ad performance of their service and, in turn, improve and monetize it.

Witbe's technology tests and monitors real, physical devices, the only way to measure the true performance and errors that viewers experience at home. It non-intrusively records the exact video that is seen on screen without altering anything. This leads to an accurate report on the most important key performance indicators for video service providers, including whether the ad plays, the video quality and audio loudness of the ad compared to the content, the amount of buffering before and during the ad, whether any slates fill the ad breaks, and more.

Witbe’s technology also offers video recordings as proof of ad performance, which allows providers to secure their revenue when third-party advertisers request it. Witbe’s unique Visual Ad Matching feature can record an ad and then identify every time it appears in a viewing session, allowing advertisers to verify when and where their ad played, if it was on time as scheduled, and how often it repeated. Since advertising revenue makes up a majority of most provider’s income, this distinctive feature offers a convenient way to secure it. Witbe’s Smartgate software also delivers personalized reports to providers, taking the raw data from the monitoring sessions and transforming it into streamlined analysis. Witbe’s reliable remote device access technology ensures that all video service providers can access their local testing devices from anywhere in the world. Together, Witbe provides a full-fledged Ad Monitoring suite. For all streaming content providers whose budget relies on revenue from dynamically inserted ads, Witbe’s ad monitoring technology is an essential solution. It helps improve quality and performance for providers, customers, and advertisers alike, recording proof for verification and ensuring a bright future for ad-based video.


WINNER Yuvod Yuvod: Platform-as-a-Service Streaming Solution

Yuvod's comprehensive Platform-as-a-Service (PaaS) streaming solution is a 100% cloud-based white-label offering providing all the advanced technology, tools and services needed to effortlessly deliver, manage and monetize high-quality video streaming experiences. The end-to-end turnkey OTT solution is quick and easy to deploy, rapidly scaled and entirely customizable to meet the individual needs of any video service provider, streamer, broadcaster, hospitality service, or communication service provider (CSP).

Yuvod strives to deliver the best value with transparent pricing, where costs mirror subscriber growth, offering a complete streaming suite at a fraction of the price of traditional solutions; telco customers have saved upwards of 80%. The solution requires no new hardware to deploy or manage, eliminating significant costs and challenges of operating a streaming service and app. Yuvod makes streaming as simple as plug-and-play.

The holistic platform delivers live, linear, and on-demand streaming with advanced functionality across multiple devices and applications. It can be deployed in any location and centralizes all operational and technical processes into the streamer's hands. This includes Yuvod's proprietary video platform, media server, middleware, STB integrations, customizable dashboards, choice of CDN, IP networking, multi-DRM encryption, CRM and billing systems, customizable app design, 24/7 support and more.

Committed to innovation, Yuvod continually updates the solution to ensure customers can capitalize on emerging trends and opportunities. For example, Yuvod recently partnered with Red Bee to pre-integrate robust metadata to deliver more engaging and personalized viewing experiences; the Netflix app is now easily accessed within Yuvod's platform; and upcoming integrations with Amazon Fire and Apple TV offer broader access on any device.
The platform also features a plethora of built-in capabilities that enhance sports broadcasts, hospitality services and more.
• Sports Broadcasting: It’s never been easier to broadcast live sports and deliver an unparalleled viewing experience. Yuvod provides advanced tools and features to help monetize sports content – such as customizable subscription models and advanced analytics – and keep fans highly engaged with integrations for real-time stats from live or past events. Yuvod offers an unrestricted choice of CDNs to deliver fast and responsive live streams across every continent.
• Hospitality Services: A refreshingly affordable alternative to the larger companies that dominate the market. Hospitality centers no longer need a head-end and can significantly improve the quality of a guest’s experience with easy-to-access, personalized entertainment in every room. Yuvod seamlessly integrates with existing PMS systems to provide real-time guest information and services at the click of a button, like ordering room service, finding a local attraction, checking out, and more.
Yuvod includes EPG for linear signals and rich metadata for VOD sources, both owned and hosted in external applications (such as Netflix, Amazon Prime Video, or HBO). Robust analytics and customizable dashboards provide a 360-degree view of the entire video business, including viewer behavior and engagement across all devices and platforms. Organizations can pinpoint actionable trends to make real-time programming decisions that drive viewership, enhance experiences, and increase revenue. Yuvod’s clients and partners include Vodafone, La Liga Tech, Grupo Hotusa, Rakuten TV, DAZN, and more.


WINNER ZIXI ZIXI SOFTWARE DEFINED VIDEO NETWORK 5G

Zixi is the architect of the Software-Defined Video Platform (SDVP), the industry’s most complete live IP video workflow solution. Zixi has integrated support for managing 4K video streams on 5G networks within multi-access edge compute (MEC) infrastructure. Broadcast-quality content means no errors, and delivery across 5G networks should be no exception. With more than 15 years of innovating live video delivery over IP networks, Zixi is uniquely positioned to enable new use cases for 5G delivery while ensuring broadcast quality and reliability. 5G networks and MEC infrastructure unlock exciting new opportunities that Zixi is helping bring to market, including ultra-low latency live remote production, satellite rationalization for distribution and new fan experiences both in and outside the venue. Like all IP networks, 5G requires protection against challenges like jitter, congestion, signal interruption and degradation. The SDVP now features the optimizations necessary to fully operationalize live IP video delivery of pristine 4K video over 5G networks, completely untethering production and distribution. The new solution is being used to distribute time-sensitive video with unprecedented low latency without sacrificing image quality or broadcast reliability. The SDVP automates critical functions necessary to ensure that the full benefits of 5G radio networks and MEC infrastructure are achieved:
1. Edge Presence: In order to take advantage of the high-performance, low-latency characteristics of 5G, you must be able to move video processing and management to the 5G edge. The SDVP leverages ultra-low latency access to AWS compute and storage services enabled by AWS Wavelength at the Verizon 5G Edge to process huge amounts of UHD video and compress it for delivery to mobile devices.
2. Seamless Bonding of IP Networks: Accessing any radio network can introduce challenges of signal integrity, especially for mobile applications in areas with high interference. 
The SDVP can seamlessly bond across diverse signal paths, including redundant 5G access points, Wi-Fi and 4G LTE networks. This ensures uninterrupted video delivery that is essential in live production and distribution workflows.
3. Network-Aware Adaptive Bitrate: The Zixi protocol is congestion and network aware, seamlessly adjusts to varying network conditions and employs patented, dynamic Forward Error Correction techniques for error-free video transport over 5G. Zixi’s unique ability to adapt the video quality to the available bandwidth makes it easy to maintain stream continuity for the optimal Quality of Experience, even as signal strength or traffic congestion changes.
These innovations have been demonstrated to enable pristine-quality, ultra-low latency live 4K video backhaul for real-time production and distribution, including for deployed customers such as Bloomberg TV on Verizon and AWS Wavelength Zones infrastructure. At a time when remote working has become the norm and the ways programs reach viewers continue to proliferate, Zixi’s SDVP provides the agility and reliability to deliver broadcast-quality video securely from any source to any destination over flexible IP video routes.
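Zixi’s actual adaptive bitrate logic is proprietary, but the core idea of network-aware rate selection can be illustrated generically: pick the highest rung of a bitrate ladder that fits within the measured available bandwidth, after reserving headroom for Forward Error Correction parity. The ladder values and overhead figure below are assumptions for illustration only.

```python
# Generic sketch (not Zixi's protocol): network-aware bitrate selection
# with a fixed budget reserved for FEC overhead.

LADDER_KBPS = [1_500, 4_000, 8_000, 16_000, 32_000]  # e.g. SD up to 4K rungs

def select_bitrate(available_kbps, fec_overhead=0.10, ladder=LADDER_KBPS):
    """Return the highest ladder bitrate whose stream + FEC parity fits the
    measured link capacity; fall back to the lowest rung if nothing fits."""
    budget = available_kbps / (1.0 + fec_overhead)
    candidates = [b for b in ladder if b <= budget]
    return candidates[-1] if candidates else ladder[0]

print(select_bitrate(20_000))  # 20000/1.1 ≈ 18181 kbps budget → 16000
print(select_bitrate(3_000))   # 3000/1.1 ≈ 2727 kbps budget → 1500
```

A real transport protocol would also smooth bandwidth estimates over time and react to loss and jitter, not just raw throughput, before switching rungs.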


NOMINEE Zixi Zixi D2C Gateway

Distribution platforms need to flexibly onboard live content channels from a diverse network of content partners with complex arrays of delivery methods, large differences in quality and stream consistency and varied mechanisms to protect and monetize programming. Built in collaboration with OTT providers such as Fubo, Paramount+ and Apple TV+ to streamline operations, the Zixi D2C Video Gateway is a comprehensive suite of tools that simplify content partner onboarding. Easily deployed and scaled in any operating environment, it includes everything needed to consolidate ingest of any live programming regardless of the method, protocol or format content partners use for delivery. The D2C Video Gateway pulls live feeds from partner content origins, out of hosted meet-me rooms, or spins up redundant entry points for partners to push to. With support for over 18 IP video transport protocols and formats, deep compliance inspection and integrated processing for content normalization, the D2C Video Gateway provides universal interoperability and simplified onboarding operations that enable video operations teams to add and manage live linear and event channels. As a modular D2C video gateway, Zixi’s Software-Defined Video Platform (SDVP) continuously validates live channel quality and compliance, normalizes content partner feeds to match downstream production workflow requirements, centralizes channel management and delivers real-time actionable insights that video teams require to efficiently scale operations so that adopting broadcasters can maximize revenue. The SDVP is the only live streaming software platform that offers users a wide range of protocols in addition to the pioneering Zixi protocol for the delivery of live video. The Zixi protocol is congestion and network-aware, dynamically adjusting to varying network conditions with advanced forward error correction techniques enabling error-free video over any IP network. 
It features ultra-low latency and dramatic improvements in throughput, compute, and efficiency that yield extraordinary cost reductions. The SDVP delivers unparalleled live video delivery performance running over the Zixi Enabled Network, which is “the industry’s largest ecosystem” and consists of more than 1,000 media companies and 400 technology partners globally. The D2C Video Gateway also enables video operations teams to engage much faster and with far fewer resource costs, delivering the scale, performance and agility that D2C platforms require. With it, distributors can manage the ingest of live linear and event programming, conforming to their existing workflows so that they can deliver to regional, national and global audiences simply and efficiently. The D2C Video Gateway organizes content partner feeds into highly intuitive and dynamic operational dashboards to deliver actionable insights into current performance and analyze health trends for specific channels over time. Problem channels can automatically be quarantined before they impact downstream systems, and changes to program structure, performance or quality can automatically generate a richly decorated RCA report complete with leading, trailing, and impacted object indicators. The D2C Video Gateway is designed to give operators the flexibility to rapidly onboard video channels delivered in any protocol, over any network, process into any format, and deliver to any target.
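The “quarantine before impact” idea can be sketched as a simple health-monitoring rule: flag a channel when too many of its recent health samples breach a threshold. The class, window size, and loss thresholds below are hypothetical illustrations, not Zixi’s actual rules or API.

```python
# Hypothetical sketch: quarantine a channel when 3 of its last 5 packet-loss
# samples exceed 1.0%, before the degraded feed reaches downstream systems.

from collections import deque

class ChannelHealth:
    def __init__(self, window=5, max_loss_pct=1.0, max_breaches=3):
        self.samples = deque(maxlen=window)   # rolling window of loss samples
        self.max_loss_pct = max_loss_pct
        self.max_breaches = max_breaches

    def record(self, loss_pct):
        self.samples.append(loss_pct)

    def should_quarantine(self):
        breaches = sum(1 for s in self.samples if s > self.max_loss_pct)
        return breaches >= self.max_breaches

ch = ChannelHealth()
for loss in [0.1, 2.5, 3.0, 0.2, 4.1]:
    ch.record(loss)
print(ch.should_quarantine())  # True: 3 of the last 5 samples exceeded 1.0%
```

A real gateway would track many metrics per channel (bitrate, PCR jitter, TR 101 290 errors) and attach the offending samples to the RCA report, but the windowed-threshold pattern is the same.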


WINNER Adder Technology ADDERLink® INFINITY 3000 Series

IT managers across various industries are turning to virtual desktop infrastructure (VDI) to improve efficiency, reduce downtime, and lower total ownership costs. VDI also widens application availability and can improve desktop security. However, the proliferation of virtual environments has led to a growing demand for seamless VDI switching within a single keyboard, video, and mouse (KVM) environment. Historically, integrating flexible VDI access into a KVM matrix posed challenges for most manufacturers. But with the introduction of the ADDERLink® INFINITY 3000 Series (ALIF3000), users can now access and switch between unlimited virtual and physical environments from a single human-machine interface (HMI). The recent platform updates bring support for a variety of VDI protocols such as HTML5, SSH, and VNC. This flexibility empowers users to select the client that best fits their needs, reinforcing the ALIF3000’s role as a comprehensive solution that enhances productivity, adaptability, and choice across diverse industries. Alongside this, the updates bring support for ultra-high-definition (UHD) resolutions and digital DisplayPort audio, enriching user workflows within the KVM network. The rise of UHD monitors has transformed VDI client interactions. The ALIF3000 series acknowledges this by delivering pixel-perfect UHD visuals, allowing users to harness their high-resolution displays for enhanced productivity and creativity. Adder Technology’s commitment to customer-centric design and development is evident in the timely response to market demands. The expansion of VDI protocol support aligns with the shifting landscape of virtualization, providing users with the freedom to choose protocols that best suit their unique requirements. Developed as a direct result of customer insight

and feedback, the ALIF3000 gives customers more flexibility and choice in selecting the right solution for them. John Halksworth, senior product manager at Adder, said, “Customer and market insight are the driving forces behind our product design and development. With the global server virtualization market projected to soar to $14 billion by 2030, it is critical that we empower our customers to seamlessly access their physical and virtual servers, securely. By expanding our range of supported VDI protocols, and improving video resolutions for VDI sessions, our customers can maximize their UHD investments while harnessing the full potential of the virtual world.” These updates are not just technological advancements; they translate into tangible benefits for end users. Darren Jones, technology operations manager at dock10, the UK’s leading television facility and creator of the most loved TV shows, commented, “The new updates to the ADDERLink INFINITY 3000 Series further enhance our ability to seamlessly integrate all aspects of our workflow. The ALIF3000 gives us the ability to connect our vast virtual estate with our existing core system, which is a huge bonus when it comes to improved workflows and productivity. Now, with the new updates, we can give our customers and engineers access to both physical and virtual servers, while maximizing our UHD real estate. Our KVM solution has allowed us to streamline our operations – meaning that we can continue to deliver top-notch live TV broadcasts and produce captivating 4K drama for our customers.”


NOMINEE Adobe Premiere Pro Text-Based Editing

Powered by Adobe Sensei, Text-Based Editing in Premiere Pro uses the latest AI to automatically transcribe your source media into a transcript, allowing you to edit directly in the timeline to generate a rough cut faster than ever before. With new updates debuting at IBC, Text-Based Editing now allows post-production teams to automatically identify filler words, delete all pauses with a single click, easily transcribe multi-channel audio files, and rearrange dialogue to automatically shape their rough cuts directly in the timeline. When the edit is complete, Text-Based Editing gives editors a ready-made transcript that can be used to quickly generate captions.
How it works
The first step to start using Text-Based Editing begins with importing your media and enabling automatic transcription. With the latest update, you can now work with multi-channel audio and easily choose which audio channel you want to transcribe to edit multiple speakers. Once the transcripts are ready, you can use the Text-Based Editing workspace to review your source transcripts. As you start building your sequence, you can read through the text, use search to find the content you’d like to use, and add it to the Timeline. As you add clips to the Timeline, Premiere Pro creates a new sequence transcript, which can be used to edit your rough cut. As you refine this transcript, the newest update allows you to identify filler words, speakers, and unidentifiable words more easily using the transcript view options button. Additionally, you can now easily search for words, filler words, or pauses to delete or replace. When you’re happy with your rough cut, switch to video editing tools for trimming, refining, pacing, color grading, audio sweetening, and adding titles or graphics to your cuts.
AI innovations powering the future of video editing
Historically, creating a full rough cut using footage

transcripts was time-consuming and laborious; Text-Based Editing in Premiere Pro accelerates this process to create more time for post-production teams to focus on their craft. The feature increases efficiency and eliminates traditional bottlenecks in transcribing, locating, trimming, and moving specific clips, enabling filmmakers to generate rough cuts in record-breaking time. While Text-Based Editing was in Premiere Pro Beta, 91% of users said it was a game changer for them, speeding up the time it takes to create a rough cut by four to five times. Additionally, more than 80% of users felt it was intuitive, easy to learn, and good for all types of projects including documentary filmmaking, short social media promos, news, promotional videos, and cutting down post-production meetings. More than three quarters of users felt that Text-Based Editing worked well with the captioning workflow, and filmmakers felt that they were able to expedite their workflows, even going as far as to say they will be switching from competitor editing platforms to Premiere Pro. Text-Based Editing is designed to empower creativity and improve efficiency in video production by automating time-intensive tasks, so post-production professionals can dedicate more time to shaping the stories they want to tell.
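The text-driven editing concept described above, where deleting words in a transcript removes the corresponding media, can be sketched conceptually: given a transcript of timed words, dropping filler words yields the clip time ranges to keep. This is an illustration of the idea only, not Adobe’s implementation; the filler list and function names are assumptions.

```python
# Conceptual sketch of text-based editing: removing filler words from a
# timed transcript produces the merged time ranges a rough cut would keep.

FILLERS = {"um", "uh", "like"}

def keep_ranges(words, fillers=FILLERS):
    """words: list of (word, start_sec, end_sec) tuples, in order.
    Returns the (start, end) ranges to keep, merging contiguous words."""
    ranges = []
    for word, start, end in words:
        if word.lower() in fillers:
            continue                              # cut this word's media
        if ranges and abs(ranges[-1][1] - start) < 1e-9:
            ranges[-1] = (ranges[-1][0], end)     # extend contiguous range
        else:
            ranges.append((start, end))
    return ranges

transcript = [("so", 0.0, 0.4), ("um", 0.4, 0.9),
              ("the", 0.9, 1.1), ("edit", 1.1, 1.6)]
print(keep_ranges(transcript))  # [(0.0, 0.4), (0.9, 1.6)]
```

The resulting ranges correspond to the timeline cuts an editor would otherwise mark by hand for each deleted filler word or pause.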


NOMINEE Adobe Frame.io Frame.io Storage Connect

Frame.io is a secure central hub that streamlines workflows so teams can work closely together from anywhere in the world. Its new feature, Storage Connect, allows enterprises to connect their petabytes of existing storage to the review platform, providing companies with more control over the security, accessibility, governance, and cost of their cloud storage.
How it works
With Frame.io Storage Connect, enterprise users can browse and organize media, review and approve creative work, transfer files, and collaborate on the go. When uploading assets to Frame.io with Storage Connect via Camera to Cloud, traditional upload, the Transfer App, or through creative integrations like Adobe Premiere Pro and After Effects, the assets travel directly to the customer’s own AWS S3 bucket. Frame.io then immediately generates a number of proxies in multiple resolutions between 4K and SD, storing the lightweight proxies in Frame.io’s cloud storage for instant stakeholder access. All of this work is completed under the hood, with 100% uploading, transcoding, and performance parity. Now, teams can stay completely in sync with their organization’s standard operating procedures and security as well as maintain full control of their storage costs. In fact, enterprises that use Storage Connect can save up to 60% on their overall storage costs by connecting off-the-shelf AWS S3 buckets to Frame.io.
Why it’s important
As demand for content continues to accelerate across channels and surfaces, creative teams rely on collaborative review tools to partner with stakeholders across multiple locations and departments. Catering to photographers, videographers, editors, and marketers

who share work in progress, Frame.io eliminates bottlenecks by facilitating real-time reviews and approvals and managing assets in one seamless, secure cloud location. With Frame.io’s new features launching this fall, enterprises can now directly connect to Frame.io, maintaining full control of brand assets in a secure environment while providing access to thousands of employees and reducing storage costs. Imagine the world’s largest companies with dozens of on-set productions around the globe shooting video files directly to their own cloud storage buckets with Camera to Cloud, editors across continents uploading and downloading assets, and stakeholders leaving notes on instant dailies, all while working from their own company’s self-governed and maintained cloud storage. This is now possible with Frame.io Storage Connect. Frame.io’s revolutionary cloud-based workflow has changed the media industry and represents a brand-new way for filmmakers, social creators, marketers, and anyone working with video to collaborate more easily and with more efficiency than ever before.
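The proxy generation described above, where originals live in the customer’s S3 bucket while lightweight proxies between 4K and SD go to Frame.io’s storage, can be sketched as a simple ladder derivation. Frame.io’s actual proxy pipeline is internal; the halving rule and SD floor below are assumptions chosen purely to illustrate the idea.

```python
# Illustrative sketch only: derive a proxy ladder from a source resolution,
# halving the frame height each step down to an assumed SD floor of 480.

def proxy_ladder(source_height, floor=480):
    """Return proxy heights from just below the source down to SD."""
    rungs = []
    h = source_height // 2
    while h >= floor:
        rungs.append(h)
        h //= 2
    if not rungs:
        rungs.append(floor)  # always produce at least one SD-class proxy
    return rungs

print(proxy_ladder(2160))  # [1080, 540]  (from a 4K/UHD source)
```

Each rung would be transcoded once on upload, so reviewers stream a small proxy instantly while the full-resolution original never leaves the customer’s bucket.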


NOMINEE Adobe Frame.io Frame.io Camera to Cloud

Frame.io Camera to Cloud technology uploads video, photo, and audio assets from cameras directly to Frame.io so filmmakers and photographers on-set or around the globe can start editing creative media immediately, from anywhere in the world.
How it works
Camera to Cloud’s new in-camera integrations with the FUJIFILM X-H2S and X-H2 marked the world’s first digital stills cameras to natively integrate with Camera to Cloud. When paired with the FT-XH file transfer attachment to establish an internet connection, workflows can be fully cloud-based, with Frame.io supporting high-resolution RAW camera files in its lightning-fast user review and approval interface. Now, C2C is expanding its integrations with Fujifilm to the GFX-100ii, a medium-format mirrorless camera. This latest integration does not require any additional hardware or accessories in order for its native integration to function. You can now shoot 102MP RAW photos or 8K 10-bit video to the cloud directly from the camera. Files can also be transmitted automatically, individually sent, or prioritized to send directly from the camera to collaborators anywhere in the world upon completion of the shot. These bandwidth-efficient, high-quality files in Frame.io are small enough to be easily shared on social media and allow creators to start the editing process immediately; they can later be swapped with the original camera files for finishing touches.
Changing the old ways
Until now, there’s been a traditional order to photo and media workflows that must be executed in a specific sequence. Media cards need to be downloaded to hard drives, hard drives need to be shipped, dailies need to be processed and sent to an editor, and then someone needs to ingest them into an editing platform. Historically, the creative process has been bound by processes that couldn’t really be altered, which meant that the flow of creativity was limited by the physical world. 
Imagine that an editor is looking at a take and wants to recommend a different framing for a scene, a VFX supervisor needs to compare

a plate to a foreground element to see if the lighting works, or a second-unit director needs to reference what the main unit is shooting. Now, with C2C, a camera, and no additional hardware, filmmakers can communicate with their team in the moment to make any necessary adjustments. Adobe’s first-to-market Camera to Cloud technology has been used on over 10,000 global productions, with companies uploading more than 25,000 hours of content. What we’re observing is that over the next eight years, more and more media and entertainment workflows will make the permanent shift to cloud-first technology, which will increase access, speed, manipulation, and creative control in ways never before possible. This new workflow is breaking down barriers imposed by the old process and inviting new opportunities for collaboration, greater control, and creativity.


WINNER AJA Video Systems AJA KONA X

The next generation of AJA’s KONA desktop I/O technology, KONA X features built-in Streaming DMA. Compatible with the new AJA Desktop Software v17 and version 17 of AJA’s world-class software development kit (SDK), the new 4-lane PCIe 3.0 card offers ultra-low latency video capture and playback for applications spanning media and entertainment, live production, OEM development, and more. It combines bi-directional, dual full-size BNC 12G-SDI and dual HDMI 2.0 connections to meet a broad range of I/O demands. Among KONA X’s standout features is built-in Streaming DMA, which lets users achieve as low as sub-frame latency for crucial tasks involving video I/O, whether assisted by AI or for gaming, AR graphics, overlays, or other tasks that require instantaneous performance. Robust, full-size I/O streamlines installation while providing extended card longevity, and visible status LEDs on the KONA X backplane ensure intuitive connection and status reporting. An In-Firmware Microcontroller allows AJA Developer Partners to leverage the card for nearly any use case, while support for the AJA SDK on Windows, Linux, and macOS makes it easy to program and configure the KONA X using a familiar toolset. For applications that require advanced audio, LTC, RS-422 and other functionality, an optional KONA Xpand break-out board is available, which installs in an adjacent PCIe slot. KONA X also leverages the proven feature set of AJA’s Desktop Software, including new enhancements in the latest v17 update, such as improved closed caption support, performance improvements for Apple silicon systems, support for Rocky Linux, and more.
KONA X feature highlights include:
Ultra-low latency and built-in Streaming DMA: DMA operation occurs between the video I/O and the KONA X user buffer, bypassing the KONA X on-card memory, allowing users to achieve as low as sub-frame latencies. 
This is important for eSports and live production environments, where the synchronization of AR graphics and graphical overlays, and split-second reaction times for viewer interactions, are required. For

virtual production, it keeps camera tracking, the LED wall, and game engine in sync, while in broadcast applications, it provides a simpler way to manage dynamic video.
Flexible connectivity with VESA HDMI I/O support: KONA X includes dual full-size BNC 12G-SDI bi-directional connections and dual full-size HDMI 2.0 connections (one for input and one for output) with VESA HDMI I/O support. Video subsystem timing is agnostic, enabling capture and playback of a myriad of HDMI formats.
In-Firmware Microcontroller: This feature lets KONA X users and OEM developers run code directly on the card and achieve ultra-low latency or functionality that might need to exist before the system containing the card fully boots.
Optional KONA Xpand Break-out Board: A complementary, optional breakout PCIe I/O board for expanding the connectivity of KONA X. Key KONA Xpand breakout board features include: two channels of balanced analog audio (via user-supplied DB15 to XLR cable); bi-level/tri-level video reference (BNC); eight channels of unbalanced AES/EBU audio (breakout cable, BNC); LTC in/out (breakout cable, BNC); and RS-422 (breakout cable, DB-9).


WINNER Amazon Web Services AWS Elemental MediaConnect Gateway

AWS Elemental MediaConnect Gateway is a cloud-connected software application for transmitting live video between on-premises multicast networks and AWS. Part of AWS Media Services, MediaConnect Gateway improves operations in hybrid environments, providing monitoring, security, and management of video feeds from the AWS Management Console. It can be used to build end-to-end live video contribution and distribution workflows in AWS at scale that integrate seamlessly with on-premises infrastructure. Typically, delivery of live-video multicast streams between datacenters and the cloud requires investment in specialized third-party hardware and software or a custom solution, which can be costly and difficult to support. With MediaConnect Gateway, live video stream transport in on-premises datacenters can be viewed, monitored, and controlled directly from the AWS Management Console or using the MediaConnect API. For video contribution, content providers that originate live linear channels on premises can send these feeds to global partners, using MediaConnect Gateway as a bridge between their multicast on-premises network infrastructure and the cloud. Each MediaConnect Gateway instance can subscribe to one or more multicast groups, where a group represents either a single channel or multiple channels multiplexed together in a multi-program transport stream (MPTS). Once subscribed, MediaConnect Gateway converts the network traffic to unicast, adds encryption, and sends the video to a MediaConnect flow. A live streaming application is created using the feed, and AWS Elemental MediaLive, AWS Elemental MediaPackage, and Amazon CloudFront, or another software application, process and deliver the video to end viewers. For video distribution, customers can use the new feature to build sophisticated networks that span hundreds or thousands of endpoints on premises. For example, a broadcaster might send 24/7 live linear content to

affiliates, using MediaConnect Gateway to seamlessly bridge the on-premises multicast networks at the source and destination. The result is a cloud-managed solution with improved operational agility and decreased cost compared to a satellite-based workflow. MediaConnect Gateway runs inside Amazon Elastic Container Service (ECS) Anywhere, a service that allows customers to manage ECS containers on their own servers. Once ECS Anywhere has been installed on the customer’s VM or bare metal server, they can download MediaConnect Gateway as a software container, and all video feed management can be handled in the AWS Management Console or using the MediaConnect API. When an on-premises multicast video feed is selected, the video signal is transported as unicast to the cloud using AWS Elemental MediaConnect, a service that combines the dependability of satellite and fiber-optic transport with the user-friendliness of IP-based networks. Once in MediaConnect, video can be sent to other AWS Regions, processed using AWS Media Services or other applications, shared with partners and affiliates, and delivered to other on-premises MediaConnect Gateway locations. Integration of MediaConnect Gateway with Amazon CloudWatch lets customers monitor the health of feeds without separate tools. MediaConnect Gateway gives customers full control over deploying and monitoring hybrid live video workflows, saving valuable time and resources so they can focus on their core business.
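Programmatic management via the MediaConnect API can be sketched with boto3. The gateway name, CIDR blocks, and network names below are placeholders; this builds the CreateGateway request payload only, and the actual call is left commented out since it needs AWS credentials and an ECS Anywhere host already registered.

```python
# Sketch (with placeholder values) of a MediaConnect CreateGateway request:
# the on-premises networks list tells the gateway which multicast LANs it
# bridges, and EgressCidrBlocks restricts where traffic may leave toward AWS.

def build_create_gateway_request(name, egress_cidrs, networks):
    """networks: list of (name, cidr) tuples for the on-premises side."""
    return {
        "Name": name,
        "EgressCidrBlocks": egress_cidrs,
        "Networks": [{"Name": n, "CidrBlock": c} for n, c in networks],
    }

req = build_create_gateway_request(
    "affiliate-gateway",            # placeholder gateway name
    ["203.0.113.0/24"],             # placeholder egress range
    [("multicast-lan", "10.0.0.0/16")],
)
print(req["Networks"][0]["CidrBlock"])  # 10.0.0.0/16

# With credentials configured, the request would be submitted like this:
# import boto3
# mediaconnect = boto3.client("mediaconnect")
# response = mediaconnect.create_gateway(**req)
```

Once the gateway exists, bridges between on-premises sources and MediaConnect flows are created against it, and feed health surfaces in CloudWatch as described above.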


WINNER Audinate Dante Connect

With a growing need to create more content faster without sacrificing quality, many broadcasters are turning to cloud-based production to meet demands. These platforms allow news, sports, and entertainment broadcasters to provide real-time audio and video experiences for less money. Dante Connect is a software application suite that facilitates cloud-based broadcast production and is a powerful platform for A1s and mixing engineers, combining the familiarity, ease of use and tight synchronization of Dante audio with seamless connectivity to centralized production tools running on cloud instances. Comprising Dante Virtual Soundcard, Dante Gateway, Dante Domain Manager, and Dante Controller, Dante Connect allows broadcasters to rethink how they approach production. By letting customers take advantage of Dante audio devices anywhere in the world, they can create a cloud-based network of these devices and then manage it from wherever their production staff is based. The ability to create efficient remote production workflows directly from the hundreds of thousands of on-premise Dante devices has the potential to revolutionize remote audio production. With Dante Connect, Dante audio products can send up to 256 channels of synchronized, high-fidelity audio directly from on-premise Dante devices to best-of-breed or customer-preferred production software in the cloud, reducing the need for mobile studios and trucks. Audio can be distributed globally within the cloud, allowing different teams to use the same audio within multiple applications and locations to address different audiences, languages, and aspects of the production process. Source audio can be sent from remote sites directly to the cloud so mixing engineers can do their jobs from wherever they are located, again without the need for expensive outside broadcast truck deployments. Dante is the de facto networking standard

for the professional AV industry, and now, using Dante Connect, broadcasters can put more devices to work for more productions, on- or off-site. Picture a week filled with overlapping sporting events: a Formula 1 race in Las Vegas, a baseball game in Chicago, and a college football game in Florida. Each of these events, happening in different locations and time zones, requires a unique audio setup, and Dante Connect is the key to managing these complex requirements. On the ground at each event, Dante-enabled microphones capture the engines’ roar, the crowd’s cheers, and the commentary from the booth. These microphones are connected to Dante-enabled communication systems, creating a robust on-premise audio network that can capture and process high-quality audio in real time. This audio is then seamlessly transmitted to a centralized production hub via Dante Connect. Here, cloud-based Dante-enabled mixers, digital signal processors (DSPs), and intercom systems take over. These devices allow technical staff to control and fine-tune the audio broadcast from a single location, regardless of where the events occur. This seamless integration of on-premise and cloud-based devices creates a flexible, efficient, and scalable audio ecosystem that can adapt to the needs of any broadcast.


WINNER Avid Avid Media Composer

Debuting at IBC 2023, the latest release of Avid’s gold-standard video editing software Media Composer is packed with updates, with three new developments in particular taking it to even greater heights. First, next-generation AI-driven capabilities now catalog volumes of dialog-driven media and automatically sync media with text in the script window. Secondly, a new Panel SDK (Software Development Kit) expands workflows through integrated third-party tools accessible from inside the application. Finally, a tech preview of our new SaaS solution allows distributed post-production teams to collectively review, discuss and approve content in real time, from anywhere, on any device.
Smarter and faster workflows with upgraded ScriptSync AI and PhraseFind AI capabilities
Media Composer’s new AI capabilities save precious time during the editing process. PhraseFind AI catalogs volumes of dialog-driven media, allowing editors to effortlessly locate clips and begin editing directly within the search results of the displayed sentence and search word. It offers full text results in search and supports automatic language detection across 21 languages, eliminating limitations when working with multiple languages. Meanwhile, ScriptSync AI bypasses time-consuming manual sorting of dailies and aligning of media with the script text, accelerating timelines and smoothing production workflows. Filmmaker Doc Crotzer, ACE, whose credits include Shotgun Wedding, Glee, and Road House, commented, "These new tools have made something that is already an indispensable part of my workflow more efficient and effective, freeing up my team and me to spend less time manually scripting scenes and more time focused on the creative edit itself. We just finished a project in which ScriptSync AI would’ve saved us literally days." 
Openness to third-party apps with Media Composer Panel SDK
Providing unprecedented openness in video editing environments, Media Composer Panel SDK (Software Development Kit) expands the Media Composer partner ecosystem and streamlines workflows for customers by giving third-party vendors the opportunity to integrate their apps and services within Media Composer. With access to services through an integrated HTML5 panel, developers have the freedom to create custom integrations and workflows that address their unique mix of needs. The SDK makes working in Media Composer as efficient as possible, allowing studios and editing teams to streamline production and reduce the chance of error. Editors can directly access third-party apps and services, including Autodesk Moxion, Dixon Sports Computing, Streamland Media and more.
Full content-review collaboration from anywhere
The new SaaS solution gives post-production teams using Media Composer greater collaborative capabilities through accelerated content review and approvals. Working in an over-the-shoulder Microsoft Teams environment, editors working in Media Composer can provide their teams with the ability to review, discuss and comment on high-quality content in real time, from anywhere and on any device. The solution removes the time-consuming process of sending content to multiple stakeholders to review separately, and instead simulates an in-person content review process, allowing editing teams to significantly speed up content delivery and go to market faster. These new Avid Media Composer developments will allow editing teams to experience unrivalled openness, collaboration and speed.


NOMINEE Avid Avid Pro Tools | MTRX II

Pro Tools | MTRX II with the Thunderbolt™ 3 Module is the ultimate audio command center: an audio interface, a 4096 x 4096 router, and a monitor system supporting a full 64-speaker theatrical Atmos system with room calibration. And it supports multiple simultaneous Pro Tools systems and other audio apps over Thunderbolt. It’s equipped to handle a variety of workflows and gear, giving virtually unlimited flexibility and connectivity options. An all-in-one audio interface, MTRX II lets users capture all the power of Pro Tools software-based workflows while gaining IO capacity, a larger routing matrix and more immersive monitoring capabilities. The addition of the Thunderbolt 3 module brings the power of the MTRX II and MTRX Studio interfaces to native audio applications, which can be used simultaneously with Pro Tools | HDX. MTRX II allows multiple Pro Tools systems to share a single MTRX II interface, significantly streamlining post-production mixing applications. Whether working in editorial, on film post-production mix stages, or in commercial music studios, MTRX II provides users with the best tools to fully unleash the creative capacity of Pro Tools software in their music and audio-post workflows.
Key features include:
MTRX Thunderbolt 3 Module: This optional module delivers 256 channels of native connectivity on both Mac and PC machines. It connects to any DAW, expanding workflows to seamlessly route audio between Core Audio DAWs via Thunderbolt 3 and Pro Tools over DigiLink.
Integrated and expandable Dante: With 256 integrated Dante channels, users can connect any MTRX II audio source to any Dante network. This gives users a more versatile bi-directional workflow that leverages an Ethernet infrastructure to route audio between rooms and devices.
Fully customizable and increased IO card count: Users can use all 8 option card slots in any combination with Avid’s MTRX IO cards, including DigiLink, 8-channel Mic/Line, 8-channel Line, 2-channel Line, 8-channel DA, MADI, AES, Dante 128 and SDI.
Built-in SPQ processing: Integrated SPQ processing allows users to calibrate their monitor system and correct acoustic anomalies in the room. SPQ provides up to 16 filters per monitor channel across up to 64 outputs, with adjustable delay and filters.
Expanded summing mixer: With a 512 x 64 summing mixer, MTRX II covers all monitoring needs, from simple stereo mixes and cues to theatrical Atmos configurations.
No other single interface satisfies large-scale audio post-production needs while supporting multiple simultaneous Pro Tools systems and other native audio applications, with massive built-in, scalable Dante audio-over-IP connectivity and the largest 64-speaker theatrical Atmos monitor environments with room calibration, all in one 2RU package. As a powerful studio centerpiece, MTRX II can replace multiple devices, connect anything to everything, and tune any room in a facility to get the best sound possible. The Pro Tools | MTRX II audio interface elevates the creative power of recording and post-production facilities and independent content creators alike, with advanced immersive monitoring capabilities, flexibility and efficiency that enhance collaboration and save valuable production time.
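Avid does not publish SPQ’s internal filter design, but the kind of correction such room calibration performs can be sketched with the standard peaking-EQ biquad from the widely used audio-EQ cookbook. The sample rate, frequency and gain values below are illustrative only:

```python
import math

# Illustrative peaking-EQ biquad (RBJ audio-EQ cookbook), the generic form of
# correction filter used in monitor room calibration. Values are examples only.

def peaking_biquad(fs, f0, gain_db, q):
    """Return normalized (b, a) coefficients for a peaking filter."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def magnitude_db(b, a, fs, f):
    """Filter magnitude response at frequency f, in dB."""
    z = complex(math.cos(2 * math.pi * f / fs), -math.sin(2 * math.pi * f / fs))
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return 20 * math.log10(abs(num / den))

# A 6 dB cut at 120 Hz to tame a hypothetical room mode:
b, a = peaking_biquad(fs=48000, f0=120, gain_db=-6.0, q=2.0)
print(round(magnitude_db(b, a, 48000, 120), 1))  # -6.0 at the centre frequency
```

In practice a calibration system measures the room response and places up to 16 such filters per monitor channel; the sketch only verifies that a single filter applies the requested cut at its centre frequency.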


NOMINEE Backlight Wildmoka

Wildmoka’s innovative approach revolutionizes content creation and distribution, particularly in live events, sports, and news. The heart of Wildmoka’s offering is its ability to empower broadcasters and content owners to efficiently generate and distribute content at scale across a spectrum of digital platforms. Live clips, near-live highlights, and live streams can be created and repurposed quickly, and delivered in the right formats for digital consumption. This ensures a seamless and engaging viewer experience across web, mobile, OTT, and social networks including Twitter, Facebook, YouTube, LinkedIn, and TikTok. The platform’s flexibility in handling content from various sources and formats makes it a valuable tool for broadcasters seeking to provide engaging viewing experiences wherever audiences are streaming. Wildmoka’s design prioritizes efficiency and speed without compromising quality. The platform’s Clip Studio and Live Studio products empower customers to produce significantly more video content for digital destinations using the same editorial resources. AI/ML capabilities, such as Auto ReZone (used to reformat content for optimized viewing on mobile) and StoryBot (used for fully automating or assisting content creation and distribution), amplify broadcasters’ digital strategies, enhancing content relevance and reach. A compelling real-world example of Wildmoka’s prowess comes from Grand Slam Social, a digital marketing agency specializing in sports events. During the Breeders’ Cup, a prestigious thoroughbred horse racing event, Grand Slam Social leveraged Wildmoka’s Clip Studio and Live Studio products to streamline their content production and distribution. The platform enabled the agency to tailor live and near-live content to specific audiences and platforms, resulting in an uptick in output and impact.
The benefits from using Wildmoka were transformative:
• Net audience growth: +48%
• Impressions: +22%
• Time to create and publish content reduced by 90% (from more than 10 minutes before Wildmoka to under 1 minute with Wildmoka)

The results continued to improve, with total video views during the Breeders’ Cup 2022 event soaring to 20 million, up from 12 million two years prior. Grand Slam Social’s success story underscores Wildmoka’s role in increasing content output and elevating its impact. One of Wildmoka’s strengths lies in its user-friendly nature. Thanks to its intuitive interface, it took editors at Grand Slam Social less than an hour to learn to use the platform. Furthermore, Wildmoka’s ability to handle content ingestion from multiple sources streamlines the workflow, reducing the time from creation to distribution. The latest updates to Clip Studio empower content production teams with enhanced productivity through automation features, including automating the creation of highlight clips and reels. The recent updates to Live Studio streamline workflows by supporting Transport Stream (TS) as an output protocol, simplifying delivery to specific playout services while reducing complexity. Wildmoka presents a paradigm shift in content creation and distribution for broadcasters and content owners. Its cloud-native architecture, AI/ML capabilities, and real-world success stories, such as the Breeders’ Cup collaboration with Grand Slam Social, reflect its profound impact on the Broadcast Television, Streaming, and Live Events verticals. Wildmoka epitomizes innovation and excellence in the ever-evolving realm of professional video and audio products by enabling faster, more efficient content generation and distribution.


NOMINEE Backlight Zype Apps Creator

Content distribution has taken center stage in the ever-evolving landscape of professional video and audio products. Content owners seek effective ways to engage audiences across various streaming platforms and devices while maintaining control over their media. Zype Apps Creator is a game-changing solution that empowers content owners to expand their reach and tap into new revenue streams by seamlessly creating custom-branded streaming applications. As a prime candidate for the TV Tech Award, Zype Apps Creator’s no-code approach, market-tested features, and real-world success stories position it as a transformative force in the Broadcast Television, Streaming, and OTT market verticals. The heart of Zype Apps Creator lies in its ability to democratize app creation for media and entertainment companies. As a turnkey solution, it empowers businesses to design and launch enterprise-grade OTT apps across various digital platforms, including web, mobile, smart TVs, connected devices, and gaming consoles. The platform’s no-code approach eliminates the barriers posed by coding expertise, allowing content owners to focus on what truly matters: content creation and programming. Zype Apps Creator boasts an impressive array of market-tested enterprise-grade features, including support for video-on-demand content offerings and live streaming. Whether for VOD viewing, pop-up event-based entertainment, or linear playout channel distribution (whether for FAST or subscription-gated use cases), Zype Apps Creator ensures a seamless streaming experience for users. With Zype Apps Creator, content owners retain full control over the content experience for their consumers, with full authority over how content is branded, organized, monetized, and delivered.
Key features of Zype Apps Creator include:
• Responsive dashboard for organizing content into sections, collections, playlists, catalogs, and more
• Flexible monetization models, including SVOD, AVOD, TVOD, and hybrid approaches
• Multi-region configuration and multilingual support for tailored user experiences
• Robust security and analytics features, including Digital Rights Management and Google Analytics 4 integration
• Compatibility with major streaming OTT platforms and devices
Recently, Backlight added new features and updates to Zype Apps Creator to drive app engagement, improve workflow efficiency and productivity, and offer a more customized experience for end-users. These new features include an integration with Adobe Primetime, streaming on entry, custom menu creation, live event auto-cataloging, and a parental lock feature. What sets Zype Apps Creator apart from the competition is its focus on removing technical barriers. While other solutions often demand complex coding and lengthy development cycles, Apps Creator empowers users to launch market-ready, custom-branded apps quickly, with zero coding experience required. Zype Apps Creator has facilitated the launch of over 1,400 apps for prestigious companies such as Tegna, Condé Nast, Barstool Sports, Outside TV, and Harvard Business Review. In conclusion, Zype Apps Creator is a solution that addresses the pressing need for seamless OTT app creation in the digital age. By democratizing app creation and empowering media enterprises to harness the full potential of digital platforms, Zype Apps Creator exemplifies innovation and excellence in the broadcast television, streaming, and OTT verticals.


NOMINEE Bolin Technology R9

Bolin extends its leadership in the indoor PTZ camera market by introducing the all-new R9 Indoor PTZ camera. The R9 offers three image solution options for various applications: Full HD with 30X zoom, a 1-inch 4K sensor with 18X zoom, and 20X 4K60 ultra-high resolution. The R9 features two FPGA imaging engines outputting simultaneous, independent video streams. There are two 12G-SDI outputs, optical SDI, HDMI 2.0, and multiple IP streams, including the FPGA hardware codec FAST HEVC. All the simultaneous video outputs have independent resolutions and frame rates. The R9 can be powered externally or with PoE++. And of course, it has two channels of professional, balanced audio. With powerful FPGA video engines and multiple, simultaneous, independent outputs, this is a revolution in indoor PTZ cameras. FAST HEVC is based on the H.264/H.265 (AVC/HEVC) open standard platform, leveraging the power of the AMD Zynq UltraScale+ MPSoC. With FAST HEVC, the R9 delivers up to a 4K60, 4:2:2, 12-bit video signal over IP, with only two frames of latency, in just 50 Mbps of bandwidth, maximizing existing 1Gbps network IP video environments. Since FAST HEVC is based on HEVC, it can be decoded by a standard HEVC decoder, and it can also be decoded by our EG40F FAST HEVC decoder. FAST HEVC gives broadcasters a powerful, reliable, scalable, and creative workflow option for any situation. The pan, tilt, and zoom performance of the R9 is stunning. The 340° pan and 210° tilt move at a variable rate from 0.01 degrees per second to 100 degrees per second. The 255 presets execute at up to 100 degrees per second, variable across five different speeds, all with Zero Deviation Positioning. The R9 also supports the Free-D protocol. The R9 is not just for permanent live performance installations. It offers an optional quick-release plate and can be tripod-mounted for live, mobile production. We also designed a custom vibration absorption plate for more extreme vibration and motion environments. And although the R9 is an indoor camera, it has an IP65 rating, making it less of a worry when inclement weather arrives during filming. Bolin’s new R9 is the most advanced, highest-performing PTZ camera we have ever made, and we are eager for broadcasters to experience it.


WINNER Bridge Technologies VB440’s New Audio Panel

Building on the already extensive array of audio, video and network monitoring tools embedded within Bridge Technologies’ VB440, the addition of an enhanced audio control panel allows for audio monitoring across an unlimited number of flows in a service, with each flow capable of maintaining 64 channels grouped into any required audio bond, from monaural and stereo channels to fully immersive 7.1.4. All Dolby® audio standards are supported by the probe, including Atmos. Moreover, as well as providing extensive visualisation with a range of LUFS, goniometer and room meters, users can now also isolate a solo audio channel in order to engage in closer listening for problem identification. This is on top of the existing ability to listen to any audio grouping – including 7.1 – in a stereo down-mix through the browser. These features allow audio engineers to ensure that both channels and flows match expected outputs, not only in terms of audio quality but of actual audio content. Further, users can access quick-controls from any VB440 screen, giving them audio control even whilst they are engaged with other functions. These expansions represent Bridge Technologies’ wider positioning of the VB440 as a single appliance which gives creatives and technicians alike access to the in-depth information needed to complete their work, whilst at the same time eliminating as much as five rack units’ worth of equipment, thus reducing equipment expenditure, energy draw, wiring and van weight. Recent expansions – including the new audio additions – cement the centrality of the probe in busy IP and SDI-encapsulated production environments, delivering ultra-low-latency analytics of compressed and uncompressed data across areas including packet analysis, content visualisation, colour scopes and deep engineering, all accessible to eight users at any one time.
Supporting an extensive range of standards, resolutions and protocols, and with 100-gigabit dual interface capacity, the VB440 can support media networks of almost any size. These elements alone render the VB440 a unique achievement in and of itself. But the new audio additions deserve to win because they demonstrate that the VB440 does not just replicate and replace multiple disparate pieces of production equipment within a single, harmonised system, but actually improves upon them. Few, if any, single audio monitoring systems contain the depth of tools and breadth of capacity maintained by the VB440, let alone extend that level of performance across the full spectrum of production activities. Crucially, the VB440’s audio tools are designed to grant audio engineers precision insight in a way that can be deployed quickly, intuitively and flexibly within an individual’s workflow. The sophisticated range of listening tools that have been added allows audio engineers to make use of their own most important tool – their ears – and to do so live from anywhere in the world. These advanced listening options allow sound engineers to work with multichannel outputs even when not in a dedicated sound space, meaning a team of up to eight users can simultaneously access the probe’s functions from any HTML5 browser, anywhere in the world: a unique, and award-worthy, achievement.
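The VB440’s own down-mix implementation is not published; as a rough illustration of what a 7.1-to-stereo browser fold-down involves, here is a sketch using a common ITU-style convention (centre and surround channels attenuated by 3 dB, LFE omitted). The channel names and sample values are our own labels, not the product’s:

```python
import math

# Illustrative 7.1-to-stereo fold-down: centre and surrounds at -3 dB, LFE
# omitted. Actual products may use different weights; this is a sketch only.

ATT = 1 / math.sqrt(2)  # -3 dB

def downmix_71_to_stereo(frame):
    """frame: dict mapping channel name to one sample value."""
    left = frame["FL"] + ATT * (frame["FC"] + frame["SL"] + frame["BL"])
    right = frame["FR"] + ATT * (frame["FC"] + frame["SR"] + frame["BR"])
    return left, right

frame = {"FL": 0.5, "FR": 0.5, "FC": 0.2,
         "SL": 0.0, "SR": 0.0, "BL": 0.1, "BR": 0.1}
left, right = downmix_71_to_stereo(frame)
print(round(left, 4), round(right, 4))  # identical for this symmetric frame
```

The -3 dB attenuation keeps the acoustic power of a centred source roughly constant when it is split equally into both stereo channels.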


NOMINEE Brompton Technology Tessera G1 Receiver Card

Brompton Technology’s Tessera G1 anticipates what will be required from the LED panels of the future. With 20x the power of the current industry-leading Tessera R2+, it is the most powerful receiver card ever designed for an LED panel; well beyond today’s requirements, but an essential platform for innovation. The G1 is leading the way for next-generation panels delivering finer pixel pitches, ultra-high frame rates and a broader spectral output thanks to the addition of extra LED emitters per pixel, while also offering a platform for the increasingly complex software and algorithms required to optimise visual performance on LED in an expanding range of applications.
One million pixels
A single G1 receiver card can drive up to one million pixels – the equivalent of a 1280x720 display in a single panel – which could easily be combined into 4K, 8K or even larger displays. This makes it easier to build practical ultra-fine-pitch panels at sensible physical sizes.
TrueLight®
The G1 has the power to run the mathematically intensive algorithms required for new technologies such as TrueLight, which is transforming how LED panels are used for lighting. Within a virtual production LED volume, much of the scene’s lighting comes from the LED panels themselves, but RGB LED panels are designed for direct view and their spectral output is very different from typical lighting sources, so foreground elements can show colour shifts and skin tones in particular can look unnatural. Adding additional LED emitters with wider spectral output to every pixel improves the result, but massively increases the complexity of per-pixel calibration and of ensuring colour accuracy. The G1 not only enables the integration of additional calibrated emitters, but its immense processing power means Brompton’s patent-pending TrueLight technology can also perform spectrally aware, full-colour, per-pixel calibration of all four RGBW emitters, while offering powerful user control tools through the intuitive TrueLight user interface. The result is a significant leap in colour-rendering accuracy, especially noticeable for skin tones and in blending foreground elements with virtual backgrounds.
Ultra-high frame rates
There is an increasing need for higher frame rates, allowing slow-motion filming within a virtual production volume, or multi-camera shoots where each camera sees different content on the same LED screen. Currently, the fastest panels can run at frame rates up to 250fps, but specially designed LED panels using the G1 will run at up to 1,000fps, unlocking exceptional new levels of performance.
Fibre connectivity
The G1 can be driven directly from the SX40’s 10Gb fibre outputs and will be compatible with future Brompton processing platforms. This resolves cabling limitations, with 10Gb of bandwidth straight to a string of panels, plus the additional benefits of an optical connection: electrical isolation and the ability to have cable runs measured in kilometres. The G1 has also been designed as a platform for software, so that it can continue to support new features and enhancements well into the future. This not only allows the G1 to respond to evolving user needs, but also improves overall return on investment.
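Brompton’s TrueLight calibration is patent-pending and spectrally aware, so its actual math is more involved, but the underlying idea of per-pixel calibration can be illustrated with a simple per-pixel correction matrix. The 3x3 matrix and all of its values below are invented purely for illustration:

```python
# Per-pixel colour calibration, the basic concept behind factory-calibrated
# LED panels: each pixel stores a small measured correction matrix that maps
# the requested RGB to drive values compensating that pixel's emitters.
# TrueLight extends this to four spectrally characterised RGBW emitters;
# the matrix and values here are invented purely for illustration.

def apply_calibration(rgb, matrix):
    """Multiply a 3-vector by a per-pixel 3x3 correction matrix."""
    return tuple(
        sum(matrix[row][col] * rgb[col] for col in range(3))
        for row in range(3)
    )

# A hypothetical pixel whose red emitter runs 5% hot and leaks into green:
pixel_matrix = [
    [0.95,  0.00, 0.00],
    [-0.02, 1.00, 0.00],
    [0.00,  0.00, 1.00],
]
print(apply_calibration((1.0, 0.5, 0.25), pixel_matrix))
```

At one matrix (or, for four emitters, a larger one) per pixel, a million-pixel panel makes clear why the receiver card needs substantial per-frame processing power.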


NOMINEE BZBGEAR BG-QuadFusion-4K

The BG-QuadFusion-4K video production switcher is a valuable tool for professionals in television and film production. With support for up to 4-CH HDMI/DisplayPort 4K60 inputs, it offers versatile video control, allowing seamless integration of multiple video sources. This versatility is crucial for creating polished content by combining different camera angles and shots. What sets this switcher apart is its commitment to audio and visual excellence. It comes equipped with HDMI and DP decoders, format conversion, audio processing, and video streaming capabilities. This ensures impeccable audiovisual output, maintaining synchronized audio and video for a cohesive viewing experience. In terms of special effects, the BG-QuadFusion-4K offers a wide range of options, including smooth transitions like MIX, FADE, wipes, and picture-in-picture (PIP) effects. These features enhance viewer engagement and unleash creativity in content production, particularly important in television and film. The production switcher’s user-friendly interface features keyboard control, intuitive menu settings, and language options, streamlining live production tasks and configurations. Its durability, thanks to its aluminum alloy construction, makes it suitable for studio and on-location use. Plus, its ultra-thin design simplifies transportation and maintenance. One of the standout features is its support for real-time live broadcasting through RTMP streaming. This expands audience reach and engagement, making it an excellent choice for broadcasting large-scale events like sports and concerts.

Remote control capabilities add to its versatility, allowing professionals to operate the switcher from a distance. This feature proves useful in scenarios where discreet operation or control from a different location is necessary. Keying effects such as chroma key and luma key functionalities enable easy overlay of graphics and subtitles. This is valuable for adding visual elements and information to the broadcast or film production, enhancing the overall viewer experience. The BG-QuadFusion-4K video production switcher is a comprehensive solution catering to the diverse needs of professionals in television and film production. It’s an essential tool for those looking to create engaging and visually stunning content while maintaining control and flexibility in their production processes.


NOMINEE BZBGEAR BG-ADAMO-JR Series: FHD AI Auto-Tracking PTZ Camera

The BG-ADAMO-JR is a highly beneficial product for users in the television and film production industry who deal with professional video and audio products. This Full HD PTZ camera offers numerous features and capabilities that make it stand out in its class in the live stream broadcasting market. One of the primary advantages of the BG-ADAMO-JR is its exceptional auto-tracking capability. It utilizes advanced human-detection AI algorithms to achieve remarkable tracking speed and accuracy. With just one camera and one lens, it can easily detect and capture human forms and moving objects in real time. The auto-tracking comes in two modes: Presenter Tracking, which tracks targets without requiring any receiving devices, and Zone Tracking, which can be set to focus on specific areas, making it ideal for capturing whiteboard presentations or other stationary subjects. To ensure flawless network-based production, this camera offers the option of Dante AV-H or NDI|HX3 connectivity, allowing you to choose the one that best suits your preferences. The camera’s advanced visual performance is made possible by its high-quality 1/2.8-inch Sony CMOS image sensor, which delivers a resolution of up to 1080p at 60fps. Additionally, the camera’s advanced ISP processing technology and algorithms ensure true-to-life image quality with a high signal-to-noise ratio (SNR) and reduced noise, resulting in clear and pristine images for natural and accurate viewing experiences. Another significant advantage is the BG-ADAMO-JR’s flexibility in control and connectivity options. It supports various protocols, including RS422 (compatible with RS485) and RS232 for communication, and VISCA, PELCO-D, PELCO-P, ONVIF, GB/T28181, RTSP, RTMP, and more for control and network streaming. This series can also be controlled by BZBGEAR’s free proprietary PTZ camera control app, BG-CONTROL, available for iOS, Windows, Mac, and Android. Users can connect the camera through HDMI, SDI, USB 2.0, USB 3.0, LAN, and IP streaming, offering versatile workflow options to suit different production setups. Furthermore, the camera’s innovative design makes it highly functional and visually appealing. It comes in classic black or white finishes and features a high-stability substructure that ensures precise PTZ functions and reliable performance. The built-in tally lights on the control arms provide 360-degree visibility, eliminating the need for external accessories for tally indication during live broadcasts. In conclusion, the BG-ADAMO-JR’s exceptional auto-tracking capabilities, advanced visual performance, extensive control options, flexible connectivity, and innovative design make it an indispensable tool for professionals in the television and film production industry. Its ability to deliver high-quality content with smooth video and audio processing ensures a top-notch viewing experience, making it an outstanding choice for live-stream broadcasting applications.


WINNER BZBGEAR BG-Commander-Pro: IP/Serial PTZ Joystick Controller with 7" Touchscreen

The BG-Commander-Pro is a highly beneficial tool for professionals in the television and film production industry who work with professional video and audio products. Specifically designed for PTZ (Pan-Tilt-Zoom) cameras, this controller provides total PTZ camera control, including presets, focus, zoom, and exposure, making it essential for achieving precise shots and angles in broadcasting production. One of the key advantages of the Commander Pro is its high level of customization and expandability. With support for single-IP multi-channel acquisition and the ONVIF protocol, users can easily add up to 2048 devices, allowing for adaptability to various production needs and configurations. Despite its advanced capabilities, the device remains user-friendly with a 7" touchscreen interface that facilitates real-time footage previewing and management, making it accessible even to those with minimal training. The product also features HDMI projection, enabling users to transmit audio and video signals to larger display devices like projectors or TVs. Furthermore, it supports the creation of a 2x2 or 3x3 video wall using connected camera RTSP streams, providing the ability to monitor multiple cameras simultaneously. In addition to its current capabilities, the Commander Pro is designed with future-proof upgradability in mind. Users can easily upgrade the device through a standard USB flash drive, ensuring it stays compatible with evolving technologies and software updates, protecting their investment for the long term. For large-scale video projects, the Commander Pro offers multiple control options with support for four RS422/RS485 ports and one RS-232 control port, providing versatility in control setups.

With its compatibility with Power over Ethernet (PoE), the device offers a cleaner and easier setup, reducing cable clutter during installation. This versatility makes the Commander Pro ideal for various settings, including medium-to-large-scale events, schools, hospitals, hotels, residential areas, factories, and workshops. Its capability to achieve unified LAN ONVIF control allows seamless monitoring and management of an entire camera network, enhancing efficiency in video production workflows. Additionally, the device allows for video recording and screen capture, enabling users to capture critical footage during production. At the same time, support for H.265/H.264 decoding ensures efficient video compression and transmission without compromising quality. In conclusion, the BG-Commander-Pro stands as a comprehensive and user-friendly solution for professionals in the television and film industry. Its precise camera control, customization options, and versatile integration capabilities make it an excellent tool for achieving high-quality and efficient production results.
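Among the serial protocols such controllers speak, VISCA has a widely documented byte layout for pan/tilt drive commands (8x 01 06 01 &lt;pan speed&gt; &lt;tilt speed&gt; &lt;pan dir&gt; &lt;tilt dir&gt; FF). As a hypothetical sketch, not BZBGEAR’s implementation, composing such a packet looks like this; the camera address, speed values and speed ranges are typical examples only:

```python
# Hypothetical sketch of composing a VISCA pan/tilt drive packet, following
# the widely documented VISCA byte layout:
#   8x 01 06 01 <pan speed> <tilt speed> <pan dir> <tilt dir> FF
# Camera address, speed values and ranges here are typical examples only.

PAN = {"left": 0x01, "right": 0x02, "stop": 0x03}
TILT = {"up": 0x01, "down": 0x02, "stop": 0x03}

def visca_pan_tilt(address, pan_speed, tilt_speed, pan_dir, tilt_dir):
    """Build a VISCA pan/tilt drive command for a camera at the given address."""
    if not (1 <= pan_speed <= 0x18 and 1 <= tilt_speed <= 0x14):
        raise ValueError("speed out of typical VISCA range")
    return bytes([0x80 | address, 0x01, 0x06, 0x01,
                  pan_speed, tilt_speed, PAN[pan_dir], TILT[tilt_dir], 0xFF])

packet = visca_pan_tilt(1, 0x0C, 0x0A, "left", "up")
print(packet.hex())  # 810106010c0a0101ff
```

A joystick controller effectively regenerates packets like this continuously, mapping stick deflection to the speed bytes, which is why variable-speed control feels smooth.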


WINNER Clear-Com Arcadia Central Station Update: I.V. Direct

The latest update to Clear-Com’s Arcadia Central Station includes two important additions that support flexible workflows and provide system scaling potential. Firstly, the update significantly expands Arcadia’s port capacity, providing the ability to connect 64 FreeSpeak beltpacks, 32 IP transceivers, and more than 100 HelixNet User Stations, allowing users to have up to 164 digital beltpacks on a single system. This is an increase from its previous support for up to 128 IP ports and 100 beltpacks. Secondly, the update introduces I.V. Direct, an IP interfacing feature that allows connection between Arcadia and the LQ Series of IP Interfaces, the Eclipse HX Digital Matrix System (via E-IPA card), and other Arcadia systems over LAN, WAN, or the Internet. The I.V. Direct ports allow intercom audio, logic controls, and call signals to be passed between separately managed Clear-Com systems and assigned to channels, groups, and keys in the individual systems. Notably, this can be done with easy setup for various levels of network quality of service and with internet-friendly security features. Interfacing with LQ will allow Arcadia users to expand their analog connections using 2-wire, 4-wire and GPIO ports and to support Clear-Com’s Agent-IC and Station-IC virtual clients, two-way radios, and SIP telephony much more easily, without the need for multiple audio and control cables, allowing for full integration of all team members who need to communicate. In addition to local Dante-based connection capabilities, I.V. Direct connections will allow Arcadia’s network interfacing to extend globally, making it ideal for large, multi-site live events, remote broadcasting, or even multi-country productions where professionals are concurrently operating multiple systems and requiring audio communication over large distances. Simon Browne, Vice President of Product Management, comments, "More than ever our users require distributed workflows, and this integration with LQ and Eclipse opens a world of possibilities, allowing Arcadia users to scale their existing systems exponentially and collaborate on large, multifaceted productions, even from opposite sides of the world." The new features will be available for Arcadia starting in Q4 2023.


WINNER Cobalt Digital Pacific ULL-DEC

It is a well-known fact in the digital video domain that there is a tradeoff between video quality, latency, and bit rate: you can optimize any two of these at the expense of the third. One extreme example is SMPTE ST 2110 baseband video over IP: you have pristine quality at negligible latency, but the bit rate is in the gigabit-per-second range. The tradeoff is always there, but technology keeps pushing the optimization point forward because there is a fourth dimension: complexity. With increased complexity, one can achieve a better latency/quality/bit rate tradeoff point, and as technology improves, the complexity is "hidden" by advances in processing power. For remote production (REMI), the tradeoff needs to be biased towards very high quality and very low latency (ideally, less than a frame). This means the tradeoff is skewed towards a higher bit rate, which can only be brought down by increasing the complexity, i.e., more advanced technology. Nowadays, the technology curve looks like this: if you can afford the bit rate, you go baseband (ST 2110); that means 3 Gb/s for a 1080p50/60 signal, or 12 Gb/s for a 4K signal. The next step down corresponds to motion-JPEG technologies, of which JPEG-XS is the latest. You can bring down the bit rate by a factor of between 4 and 10, depending on your quality requirements, still with negligible latency. So, you are looking at something between 1.2 Gb/s and 3 Gb/s for 4K, or between 300 Mb/s and 750 Mb/s for 1080p50/60. Bit rate still too high? That is where the Pacific Ultra-Low Latency (ULL) Decoder from Cobalt Digital fits, combined with the Pacific 9992-ENC in ULL mode. This pairing uses H.265 (HEVC) to get you the lower bit rates (around 80-100 Mb/s for 4K, around 30 Mb/s for 1080p50/60), with end-to-end latency of less than one frame. It is a way to move forward on the technology curve. The Pacific ULL Decoder supports MPEG-2, AVC, HEVC and 4K with less than 5 milliseconds of latency for suitable streams. It is a full broadcast-grade decoder, with features such as scaling, extensive audio decoding support, and wide network protocol support. The Pacific ULL Decoder is offered in a standard openGear form factor and draws less than 20W of power; if you want it in a standalone box, it can be combined with the Cobalt BBG-1300FR enclosure. This is the main advantage of the Pacific ULL Decoder: it can offer very-low-latency operation by combining HEVC compression with SMPTE ST 302M LPCM audio, but it can also work as a traditional professional decoder, with genlock, arbitrary scaling, and support for MPEG-1 Layers I/II/III audio, AAC (both LC and HE), Dolby AC-3/EAC-3/AC-4, and Dolby E. On the input protocol side, the low-latency decoding can be combined with RIST for reliable delivery, SRT for legacy systems, UDP/RTP/FEC, as well as RTMP/RTSP and, of course, ASI inputs. With 8-bit/10-bit and 4:2:0/4:2:2 support, it is the ideal decoder for all applications, and it is covered by Cobalt's high-standard five-year factory warranty.
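The bit-rate tiers described above can be sketched as a simple lookup. This is an illustration only, using the 1080p50/60 figures quoted in the text; the tier labels and the helper function are our own, not Cobalt terminology:

```python
# Approximate bit rates for a 1080p50/60 signal at each point on the
# technology curve described in the text. Values are illustrative.
BITRATE_MBPS_1080P = {
    "ST 2110 (baseband)": 3000,          # ~3 Gb/s, negligible latency
    "JPEG-XS (mezzanine)": (300, 750),   # 4x-10x reduction vs baseband
    "HEVC ULL (Pacific)": 30,            # ~30 Mb/s, sub-frame latency
}

def reduction_factor(baseband_mbps: float, compressed_mbps: float) -> float:
    """How many times smaller the compressed stream is than baseband."""
    return baseband_mbps / compressed_mbps

# HEVC in ULL mode shrinks the stream roughly 100x relative to ST 2110.
print(reduction_factor(3000, 30))  # → 100.0
```

The same arithmetic applied to the 4K figures (12 Gb/s baseband vs. 80-100 Mb/s HEVC) gives a comparable two-orders-of-magnitude reduction.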


WINNER Eluvio Eluvio Media Wallet for Connected TV Powered by the Content Fabric

The Eluvio Media Wallet, powered by the Content Fabric, disrupts traditional content economics with direct peering between publishers and audiences at scale. It is a universal personal media vault for major Connected TV platforms (Apple TV, Android TV, Fire TV, and 1,600+ device platforms). It expands the Content Fabric – a decentralized, open-architecture content streaming, distribution, and blockchain security protocol implemented for Internet scale and a direct Content Economy – to reach mainstream consumers at scale. Publishers distribute premium Video on Demand (PVOD), Live Channels, FAST Channels, and Interactive experiences directly to consumers under all sell-through models – transactional, subscription, and free/ad-supported. Via the Media Wallet, audiences can discover, own, enjoy, and, if authorized, retrade these experiences through blockchain ownership, authorization and windowing via the Content Fabric. No legacy CDN, media cloud or user PII is required. It maximizes publisher returns, distribution efficiency, and consumer engagement. Unlike legacy content distribution and media clouds, the Content Fabric allows all forms of media, including premium live and VOD, to be published once and distributed limitlessly, with speed and low latency (high-performance 4K streaming), simplicity (combining the functions of a legacy CDN, media cloud, and DRM/rights authorization service in one dynamic and decentralized protocol), just-in-time personalization at scale (server-side, data-driven content insertion within the transcoding and streaming pipeline), hyper efficiency and cost savings (over 10X less expensive and more carbon efficient than legacy CDNs and media clouds), and transparency (end-to-end, tamper-free blockchain content security, DRM, and authorization with self-verifying on-chain content). The Media Wallet for Connected TV launched with Warner Bros.' "The Flash" as part of a first-ever personal (Web3) sell-through window.
The master film, bonus footage, and interactive experiences are streamed to owning fans' Media Wallets via the Content Fabric. The international release window is implemented with no separate rights management system, via an on-chain policy applied to the source master film object in the Fabric and the Fabric's content security and DRM. The Media Wallet allows consumers to own and authorize themselves to personalized Live Channels, including FAST. Compared to legacy ad insertion pipelines, the Content Fabric supports individually personalized insertion of content and ads in live and on-demand streams at audience scale with low latency (2 seconds for live streams, with no custom player). Users are authorized using their Media Wallet (on-chain) address rather than their personal identity. The wallet is both a personal media collection and a direct channel from publisher to consumer. It enables new engagement opportunities, including: n bundled movie tickets, early-access titles, redeemable retail offers, and continuous content updates; n exclusive access/super-fan channels; n interactive AR/VR experiences and "hot spots"; n user contributions (i.e. clips, content feedback, on-chain tipping, retrading marketplaces), and more. All engagement and purchasing are recorded transparently on the Content Fabric blockchain without middle parties, and transactions can be incentivized through on-chain rewards/payments. Its API supports any sign-on provider and deep links in other streaming properties. The "middle layer" between publisher and consumer is replaced with a scalable, open relationship for unlimited creative possibilities, profitability, and resource sustainability.


WINNER ENCO Systems enCaption5

Greatly reduce overall costs by automating your captioning workflows with enCaption5 by ENCO. enCaption is a turn-key solution, available on-premise or in the cloud, for providing around-the-clock caption generation for live or recorded programming. The latest version includes a powerful video delay feature that enables lip-sync-grade caption synchronization, native caption embedding, and enhanced MOS newsroom integration, as well as updates to its punctuation and speaker separation abilities. enCaption's latest enhancement helps customers overcome an industry-wide challenge in the captioning of live programming. While captions can be aligned with corresponding speech during post-production of file-based content, the nature of live captioning has inherently precluded such precise synchronization: speech-to-text processing of a word or phrase cannot begin until after it has been spoken, and taking the context of surrounding words into account for greater transcription accuracy adds to this latency. enCaption's newest capability shatters this limitation, effectively synchronizing the live captions with the spoken words. Already highly regarded for minimizing the latency between speech and its resultant captions, enCaption can now delay the associated video and audio by a user-configurable duration to provide lip-sync-like alignment. Two to four seconds of video delay is generally sufficient to provide the desired temporal precision, but by setting a longer delay, customers can choose to expand the audio analysis window to further enhance enCaption's renowned speech-to-text accuracy. enCaption's AI-based deep neural network can embed CEA-608/708 captions natively into your transmission signal (or drive third-party encoders) and also generates real-time transcripts, making that content accessible on the station's website or smartphone app for those with hearing difficulties, or for viewers who simply want to follow your coverage quietly.
enCaption's enhanced MOS newsroom integration can automatically learn the correct spellings of unusual words from lists and scripts, and does not require the creation of speech pattern profiles for every person speaking. This is an important benefit for news operations automating and captioning speech from various anchors, reporters, meteorologists, and studio guests. Other revolutionary features include multi-speaker distinction, which can distinguish between multiple people speaking based on separate microphone feeds or audio channels, as well as detect changes between speakers within a single mixed feed. Additional AI-driven enhancements improve the captioning of punctuation and capitalization, among other challenging scenarios. enCaption continues to be a game-changer for broadcasters, cable networks, and OTT services that crave more efficient and affordable closed captioning workflows. The patented solution continues to deliver industry-leading speed, accuracy, and cost savings for closed captioning. The on-premise or cloud-based solution can address breaking news without the challenge of finding a live captioner, and do so at a much lower cost. enCaption has proven to be not only a technically reliable solution for automated closed captioning but also a smart business decision from a long-term cost-savings perspective.
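The video-delay mechanism described above can be sketched as a simple delay line: frames are held back for a configurable duration so the (slower) speech-to-text output can be aligned with them. This is a minimal illustration; the class and parameter names are our own assumptions, not ENCO's API:

```python
from collections import deque

class VideoDelayBuffer:
    """Holds frames for a fixed delay so captions produced by a slower
    speech-to-text engine can be emitted in sync with the video."""

    def __init__(self, delay_seconds: float, fps: float = 30.0):
        self.capacity = int(delay_seconds * fps)  # frames held in the line
        self.buffer = deque()

    def push(self, frame):
        """Add a new frame; return the delayed frame once the delay line
        is full, or None while it is still filling."""
        self.buffer.append(frame)
        if len(self.buffer) > self.capacity:
            return self.buffer.popleft()
        return None

# With a 2-second delay at 30 fps, the first 60 frames are held back.
buf = VideoDelayBuffer(delay_seconds=2.0, fps=30.0)
outputs = [buf.push(i) for i in range(61)]
print(outputs[59], outputs[60])  # → None 0
```

A longer `delay_seconds` widens the window available for context-aware transcription, which is the tradeoff the text describes.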


WINNER Evertz ev670-X30-HW-V2 Virtualized Media Processing Platform

The ev670-X30-HW-V2 is a next-generation virtualized media processing platform revolutionizing how media facilities are designed, enabling customers to move to an infrastructure that allows essential core broadcast services to be applied on a generic hardware platform when required. This paradigm shift from discrete, fixed-function hardware to compute pools of generic hardware with licensable software services provides media companies a flexible, scalable and agile broadcast infrastructure that dynamically meets and adapts to changing facility requirements. The ev670-X30-HW-V2 is an FPGA-accelerated compute blade that supports both 12G/3G/HD-SDI and IP interfaces. It provides FPGA-based processing cores on which a number of different types of applications (apps) can be configured, providing services that include multiviewers, gateways, and video, audio and ancillary data processing. As a future-proof, FPGA-based compute blade, the ev670-X30-HW-V2 provides all the scalability and flexibility of a virtualized environment, while also ensuring low-latency, low-power and reliable real-time processing. The ev670-X30-HW-V2 utilizes Evertz' MAGNUM-OS orchestration software to allow users to easily manage apps, licenses and the pool of compute resources. These software tools allow media companies to deploy the required applications (e.g. multiviewer, gateway or video/audio/ancillary processing) as needed. The ev670-X30-HW-V2 provides greater efficiency and utilization of compute resources than fixed-function devices or COTS-based hardware, allowing users to accomplish precisely what they need, when needed. Evertz apps are available for the ev670-X30-HW-V2 virtualized media processing platform. The apps are loaded onto the ev670-X30-HW-V2 to set the functionality of the module, and can be changed, which reflects the versatility and future-proofing of the platform. The current selection of apps provides high-density multiviewer functions for a 12G-SDI-based facility, including one for 32 12G-SDI inputs to eight 1080p displays.

For SMPTE ST 2110, another app can be used to provide 64 3G video streams to eight 1080p outputs. The ev670-X30-HW-V2 also provides gateway apps that allow broadcasters to transition from 12G-SDI to IP using SMPTE ST 2110 or SMPTE ST 2022-6, with full support for NMOS IS-04 and IS-05. New processing and conversion apps are being introduced for IBC 2023. These apps enable multiple paths of frame synchronization and up/down/cross or SDR-to-HDR conversions on the ev670-X30-HW-V2, which can thereby become a high-density processing and conversion block for either IP- or 12G-SDI-based facilities. The ev670-X30-HW-V2 supports SNMP, IGMP, JSON, REST and NMOS IS-04/05 protocols. These interfaces provide seamless integration with Evertz' VUE user interface, MAGNUM-OS, and third-party systems.


NOMINEE Evertz Ease Live Interactive Graphics

Ease Live is a Software-as-a-Service (SaaS) based interactive graphics platform that gives live sports, live events and broadcast customers the tools they need to create, build and distribute overlays to millions of end users on multiple platforms in real time. Already used by sports leagues, broadcasters and content providers around the world, the platform delivers edge-rendered graphic overlays that add interactive experiences to existing Over-The-Top (OTT) services and streaming applications. Ease Live drives engagement and monetization opportunities by giving content and rights holders the opportunity to 'gamify' the viewer and fan experience. Graphical content can be overlaid onto live streams, allowing viewers to interact with in-game live statistics, watch parties, polls/trivia and sponsored betting and wagers – all without having to leave the event. This provides opportunities for monetization with new ad revenues. Customers using Ease Live have seen double-digit growth in their audience engagement. The addition of interactive live game stats has increased the number of live stats impressions per game (i.e. the count of how many times users launch the live stats overlay) by 60% over the previous year. Response rates have also increased, with up to 60% responses achieved on factoids and up to 68% on polls. The additional support for watch parties, where users can invite their friends to a live video chat during the game, has increased viewership by 53% in terms of unique viewers per game, with the average watch party session lasting over 30 minutes. The Ease Live platform includes the powerful Ease Live Sync Server, which makes it very simple for customers to synchronize their live broadcast moments with interactive graphics in a frame-accurate manner.
Getting the timing right is crucial for unlocking the commercial potential that interactive live streaming offers, as interactive content can be placed in relation to the game action and provide valuable clicks and conversions. Ease Live leverages Evertz' years of experience in timing and synchronization to bridge the production timing and the OTT delivery platform, ensuring frame-accurate placement of interactive graphics on top of the customer's video player. This synchronization of the live broadcast and interactive overlay graphics also addresses concerns over latency and opens opportunities for free-to-play live predictions of in-game occurrences. Delivering a single-screen experience, where fans can watch and simultaneously play a free-to-play game, is unique in the streaming industry. An additional benefit of Ease Live is the ability to collect first-party data and analytics, which are generated by powerful cloud-based data tools. The knowledge gathered about user behaviours can inform broadcasters about what content is resonating with audiences, and this can be used to identify and target specific audience demographics with paid content or advertisements. In addition to the existing support for mobile and web-based touch devices, Ease Live also offers interactive experiences developed for Connected TV devices including Fire TV, Roku TV, and Apple TV. These allow the viewer to engage with content using their television's remote control or their mobile device as a 'true' second screen.


NOMINEE Frequency Networks Studio 5

The demand for linear channels is skyrocketing across all platforms, with FAST channel revenues surging nearly 20x between 2019 and 2022. According to 2023 Omdia research, this meteoric rise is set to triple by 2027, reaching a staggering $12 billion. At a time when the linear channel market is experiencing unprecedented growth, content owners and distributors are ready to seize the opportunity to enter this explosively growing sector. For over five years, Frequency has developed the most powerful and automated suite of tools and services for linear channel creation, management, and distribution via its cloud-native, multi-tenant SaaS platform, Frequency Studio. With the introduction of Studio 5, Frequency is redefining how broadcasters, studios, networks, and digital-first content creators engage with their audiences. The platform has been significantly extended to provide a host of new tools and services, including new content ingestion automation, scheduling automation, ultra-low latency live channel switching, advanced content filtering, and a comprehensive suite of tools for streamlining the management and scheduling of episodic programming. Studio 5 enables the production and distribution of seamless and engaging viewing experiences with ever greater efficiency. Studio 5: the most powerful platform solution for linear channel creation, management, and distribution. Studio 5 introduces an array of ground-breaking features and services, enhancing the linear channel creation, management, distribution, and monetization experience for subscribers. Among the key advancements are: Ingestion Automation: Studio 5 offers a self-serve, real-time console for media ingestion, providing users with unparalleled visibility and efficiency in content ingestion, processing, and management. This innovative tool streamlines the entire ingestion workflow, ensuring a seamless content delivery process.
Scheduling Automation: Building upon Studio's industry-leading scheduling tool, Studio 5 introduces fully automated programming. It simplifies the scheduling of repeated content, reducing the time spent on programming channels by up to 90%. This feature is a game-changer for efficiency-conscious content creators. Content Filtering: Studio 5 includes a new interactive and intelligent content filtering service, leveraging contextual and technical metadata. It empowers users to swiftly identify and schedule precise content, enhancing the viewer experience and content discovery. Series Management: With a comprehensive suite of tools for managing serial content, Studio 5 streamlines the creation and scheduling of episodic programming. From artwork to custom metadata, Studio 5 handles all the intricacies, saving up to 90% of the time spent programming. Live Switching: Studio 5 introduces ultra-low latency live channel switching, catering to scheduled or instant/ad-hoc live segments like news or sports. This capability ensures seamless transitions between live sources in under a second, delivering a superior viewing experience. Frequency and the future of TV: Studio 5 further extends Frequency's position as the market-leading cloud-based solution for many of the world's preeminent broadcasters, studios, networks, and digital-first content creators to deliver compelling 24/7 channels to connected TVs globally, thanks to its powerful and automated channel creation, management, and distribution toolset. Integrated with leading Free Ad-Supported TV (FAST), vMVPD, and MVPD platforms, Frequency reaches over 350 million monthly viewers, solidifying its position as a pure-play linear streaming solution for OTT.


NOMINEE Fujifilm Duvo25-1000mm

The Duvo25-1000mm cinema box lens from Fujifilm is the first in its class. Developed in response to the need for a native PL-mount, long-focal-length cinema lens for live events, whether single-channel or multi-camera broadcast, there's nothing else like it available today. Derived from the heritage of the Fujifilm Premier range, and touching on the UA series broadcast range, the Duvo25-1000mm provides productions with unmatched flexibility and reach while delivering the highly sought-after cinema look to live audiences. With broadcast design and integration combined with cinematic looks, it crosses the line between multi-camera cinema production and broadcast operation. Based on broadcast principles, the lens can integrate seamlessly into an OB multi-camera workflow, using the same zoom and focus demands, as well as the supporter seen in the UA series box lenses. This also means the lens carries functions like ARIA* (automatic gain adjustment), RBF (remote back focus) and BCT (breathing compensation technology). Combine these functions with an RS-232 serial connection for robotic applications and a 20-pin encoder raw output to expand into a virtual world, and the Duvo really can cover all aspects of shooting methods. All of these features can be turned on or off directly from the zoom demand (ERD-50A-D01), along with zoom curve settings in 3 preset positions covering everything from the slowest of controlled zooms all the way up to high-speed (0.6 sec) crash zooms. The belt-driven zoom barrel within the lens means minimal backlash and increased accuracy and feel for precise movements in all situations. Wrapped up inside the lens is also our latest generation of optical stabilizer, which can likewise be adjusted from the zoom demand; using ceramic materials, we've reduced drag and increased efficiency and sensitivity in both the horizontal and vertical axes. To achieve the rich colours and stunning bokeh associated with large-sensor cinema cameras, while covering S35 and FF sensors without a loss of optical performance, we have designed an all-new built-in expander. The expander works in a unique way, where the angle of view remains unchanged in both S35 and FF shooting modes. Another advantage is that when shooting in S35, we can use the expander as an extender to cover 1500mm of telephoto zoom, which was unheard of within the cinema industry. The lens boasts an impressive F-stop of F2.8 from 25 to 465mm, with a subtle ramp to F5 at 1000mm in Super 35 mode. The use cases for the Duvo cover a broad range, from indoor events like stage shows and basketball to vast stadium productions for live music and sport alike; we have yet to find a situation where the Duvo can't express itself. * Only compatible with Sony cameras


WINNER IMAX Stream Smart™

Streaming businesses are facing enormous challenges. Subscriber growth is slowing, ultimately impacting bottom lines and compelling businesses to search for new revenue channels or ways to cut costs. The number one video technology challenge is cost control, and the biggest cost of delivering video is distribution charges from Content Delivery Networks (CDNs). IMAX Stream Smart™ helps content distributors reduce CDN delivery costs by 15%-20% on average while guaranteeing preferred video quality with minimal workflow changes. This potentially saves millions of dollars annually, risk-free. Why Stream Smart: n Software overlays on existing workflows, requiring minimal encoding workflow changes. n Works with and optimizes leading third-party encoders. n Faster and more accurate than competitive approaches. n Deploys in the cloud or on-prem, wherever the infrastructure lives. n Easy implementation in hours, not days or weeks. n Automation reduces the need for human intervention. n Simplified pricing represents a percentage of bandwidth savings, leading to consistent ROI. How it works: Stream Smart™ software overlays on existing workflows to analyze every frame of a video and optimize encoding settings for the best picture quality and compression efficiency. Stream Smart is enabled by the IMAX SVS (SSIMPLUS® Viewer Score), which provides a single, objective quality score that benchmarks viewer experience throughout the workflow. Our patented IMAX SVS® is scientifically proven to be the most accurate measure of end-viewer experience. What it delivers is certainty. Once it locks in a quality score, that score serves as a guard rail, allowing reductions to be made in the amount of data that makes up each segment of video without harming the viewer experience. As long as the quality score doesn't change, it's safe to keep making reductions. This means video operations leaders can confidently reduce bandwidth, potentially saving millions in delivery costs while guaranteeing there's no noticeable difference in viewer experience. The most accurate measure of quality: The IMAX SVS is the only video quality metric that maps to the human visual system, making it the most accurate and complete measure of how humans perceive video. Our Emmy® Award-winning technology (yes, we've won two Emmys) is the only algorithm with >90% correlation to Mean Opinion Score (MOS), verified across various video datasets. It sees quality differences other metrics can't, and it works for all types of content, including live, VOD, HDR and 4K. Makes encoders smarter: Stream Smart software overlays on existing workflows and supports leading third-party encoders, requiring minimal encoding workflow changes. It analyzes every segment of video and directs the encoder to optimize settings automatically, reducing bandwidth while maintaining the desired video quality. Cloud-native and easy to deploy: The software can be deployed in a streaming provider's on-prem data center or in the cloud. It can also be offered as an SDK. Save millions in distribution: The result? Stream Smart reduces bandwidth by 15%-20% on average with no perceptible impact on visual experience. In fact, one leading streaming platform saves $25M annually across its library of most-watched titles by using Stream Smart technology to reduce bandwidth consumption without sacrificing the subscribers' viewing experience.
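The guard-rail behaviour described above can be sketched as a simple search: keep lowering the bitrate while the quality score stays at or above the locked-in target. This is a hypothetical illustration only; `score_at` stands in for an SVS-style quality measurement, and none of the names or numbers below come from IMAX:

```python
def optimize_bitrate(score_at, start_mbps: float, target_score: float,
                     step_mbps: float = 0.5, floor_mbps: float = 0.5) -> float:
    """Reduce bitrate in fixed steps while the quality score would stay
    at or above the locked-in target; return the last safe bitrate."""
    bitrate = start_mbps
    while (bitrate - step_mbps >= floor_mbps
           and score_at(bitrate - step_mbps) >= target_score):
        bitrate -= step_mbps
    return bitrate

# Toy quality model: the score falls off as the bitrate drops.
toy_score = lambda mbps: min(95.0, 75.0 + 2.5 * mbps)

# With a target score of 90, the search stops at 6 Mb/s.
print(optimize_bitrate(toy_score, start_mbps=12.0, target_score=90.0))  # → 6.0
```

In a real system the score function would be the per-segment SVS measurement, and the encoder settings, not just the bitrate, would be adjusted.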


WINNER InSync Technology MCC-HD

Due to the incredible challenge they need to solve, standards converters are computationally intensive and algorithmically complex devices, which has traditionally led motion compensated converters to be implemented as large, power-hungry equipment. For example, one industry-mainstay hardware converter quotes power consumption of 500W-2kW in a 4+RU chassis. In the confines of an OB truck or a cramped and busy broadcast centre, these large units generate significant heat, requiring high levels of cooling that make a huge difference to operating and carbon costs, especially when scaled for multiple channels. On-site broadcast trucks and the global transmission centre for the Paris Olympics, for example, will need to cater for channels all over the world; the Tokyo Olympics delivered around 48 concurrent channels broadcast globally. For a truck to service even a tenth of those channels, a scalable system with redundancy and the highest-quality conversion is needed. This is where the MCC-HD excels. Featuring algorithms from InSync's industry-leading broadcast conversion heritage, it produces crystal-clear images and smooth motion, all within a 1RU solution at 45W of peak power consumption. In a truck, engineers can use any single available rack space for the compact unit, knowing that the hardware won't be a drain on their available power or air conditioning. Using the Tokyo Olympics as an example, with an alternative hardware converter it would cost €34,900 to power only 20 units for 9,500 hours, while the MCC-HD would cost only €4,800. Furthermore, carbon emissions from energy consumption alone fall by 86%, from 38,600kg to 5,330kg of CO2. This calculation excludes the additional savings derived from lower air conditioning and fossil fuel consumption requirements. InSync's newest generation of hardware proves that traditional conversion methods are cost-effective, sustainable and viable for the future of broadcasting.
Central to InSync's design and implementation is a clear commitment to sustainability and the end user. Since going to market with our newest line of motion compensated frame rate converters, our programme of continual development has reduced the computation demand and energy cost by up to 86% without any compromise in quality. InSync's engineers achieved this through intensive work to maximise algorithmic and hardware efficiency, allowing for a more compact design. Running hardware on site, or from the comfort of a broadcast centre, has never been more cost-effective, sustainable and high quality. With hundreds of thousands of hours of content being captured every year by the industry's biggest broadcasters, the figures presented here are multiplied by the same factor. Broadcasters can simultaneously save hundreds of thousands of Euros with less energy-demanding solutions and cut hundreds of thousands of kilograms of CO2 from their carbon footprint. InSync should be awarded for its culture of continual sustainable growth, its drive for perfect image clarity, and its focus on the needs of broadcasters who use our equipment around the world.
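The savings arithmetic above follows directly from unit power draw. A sketch of the calculation, using the 45W, 20-unit, 9,500-hour figures from the text; the electricity price and grid carbon factor below are illustrative assumptions chosen only to show the method, not InSync's inputs:

```python
def fleet_energy_kwh(watts_per_unit: float, units: int, hours: float) -> float:
    """Total energy consumed by a fleet of converters over a period."""
    return watts_per_unit * units * hours / 1000.0

def cost_and_carbon(kwh: float, eur_per_kwh: float, kg_co2_per_kwh: float):
    """Energy cost in euros and carbon emissions in kg CO2."""
    return kwh * eur_per_kwh, kwh * kg_co2_per_kwh

# 20 MCC-HD units at 45W peak, running 9,500 hours (the Tokyo example).
kwh = fleet_energy_kwh(45, 20, 9500)
print(kwh)  # → 8550.0 (kWh)

# Illustrative rates only; actual tariffs and grid intensity vary by region.
cost_eur, co2_kg = cost_and_carbon(kwh, eur_per_kwh=0.56, kg_co2_per_kwh=0.62)
```

Running the same function with a several-hundred-watt legacy converter in place of 45W reproduces the roughly 7x cost and carbon gap the text describes.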


NOMINEE LEMO 12G-SDI 4K UHD

The unique push-pull 12G-SDI 4K UHD link for Ultra High Definition on the market: the new 1S.275 Series for 12G-SDI (Serial Digital Interface) 4K Ultra High Definition transmission is an extension of the field-proven S Series push-pull connectors. The new robust push-pull connectors are compliant with the SMPTE ST 2082-1 standard for signal/data transmission and enable a transmission rate of 12 Gbit/s at frequencies of up to 12 GHz, providing eight times the bandwidth of standard HD-SDI in a single link. The chocolate-bar shape makes the connector easier to grip and handle. LEMO has developed these connectors in response to the rapidly advancing technology landscape and market demands for high transmission rates, lighter structures, and low latency in live events for the Audio Video Broadcasting (AVB) market segment, as well as other markets such as medical imaging platforms, including endoscopy and laparoscopy. The optimized design of these connectors ensures seamless transmission with high precision, reliability and low return loss, making them a viable alternative to dual 6G or quad 3G links. Currently, many 4K professional cameras use quad-link BNC connectors to transmit 12G signals for UHD displays. The newly introduced LEMO product will facilitate the transmission of 12G-SDI using a compact single-link connection for UHD displays, enabling higher panel density and thus reducing the number of cables and connectors.


WINNER Matrox Video Matrox ORIGIN

Tier 1 live production requires frame-accurate, deterministic, low-latency, redundant, and responsive interconnected systems. So far, no cloud solution has satisfied those requirements without compromising quality, latency, and reliability. Matrox ORIGIN solves that problem. This disruptive technology is a software-only, vendor-neutral, asynchronous framework that runs on IT infrastructure, free from the constraints of the synchronized video realm. It can achieve highly scalable, responsive, low-latency, easy-to-control, and frame-accurate broadcast media facilities for on-premises, cloud, and hybrid deployments. Significance of Matrox ORIGIN: n Asynchronous processing of uncompressed video for live production uses the speed and power of IT architecture to process faster than real time. n Cloud-native, not a 'lift and shift'. n Operates on a single host or across multiple hosts within a distributed environment, making it equally effective on-premises and in the cloud. n Vendor-agnostic, so broadcasters can choose best-of-breed components from anyone without being locked into a specific ecosystem. This is already possible on-premises; now, Matrox Video is the unbiased player bringing that same flexibility to the cloud. n Built-in, frame-accurate redundancy and live migration, even across multiple AWS Availability Zones. With Matrox ORIGIN as the underlying infrastructure, developers can focus resources on what differentiates them. Products will run equally well on a single host or in distributed systems, on-premises or in the public cloud. They can develop once and deploy many times. Broadcasters can operate, build, and develop scalable, best-of-breed solutions for public or private clouds without restriction to a particular vendor. They can make better use of their on-premises resources, offload peak needs into the cloud, run exclusively in the public cloud, or all of the above, at whatever pace makes sense for their business.

Unique features and benefits: n Asynchronous: Matrox ORIGIN operates asynchronously to process and interconnect uncompressed data as fast and as soon as possible, removing all delays associated with synchronous interconnects. This enables low-latency, uncompressed, and highly responsive systems that make large-scale, tier 1 live production in the cloud possible. n Single-Frame Control: Matrox ORIGIN provides simple, granular control of a single frame. Any single unit can be frame-accurately routed or processed anywhere within the distributed and nonblocking environment of the Matrox ORIGIN framework, resulting in great flexibility with guaranteed AV synchronization that hasn’t been possible before. n Integrated Clean Routing and Switching: This is possible because Matrox ORIGIN controls every frame. Signal-path compensation delays are no longer relevant, and any frame can reach any destination frame-accurately on a large-scale, uncompressed, and distributed fabric. n On-Air Scalability: Matrox ORIGIN can provision or decommission compute to closely match dynamic operational processing needs with infrastructure costs, while on the air. It can live-migrate software processing in runtime without dropping a single frame or disrupting the control system. n Built-in Redundancy: Matrox ORIGIN provides infrastructure to develop and operate stateless media-processing services with granular protection of every frame. The framework manages redundancy and requires no additional intervention. It also supports redundancy across multiple AWS Availability Zones to address mission-critical resilience requirements. n Simple APIs: Developers can now build multiple best-of-breed offerings for broadcasters to choose from.


WINNER Megapixel VR HELIOS LED Processing Platform

The HELIOS LED Processing Platform is an industry-leading tool that bridges the gap between content delivery and pixels on screen. The Megapixel team has been at the forefront of LED display technology for over 20 years and holds more than 200 granted patents. HELIOS is the product of many years working behind the scenes on fixed-installation, live event, broadcast, and film projects with demanding clients, tight deadlines, and zero room for error. HELIOS was the first LED processing platform on the market to allow direct SMPTE ST 2110 ingest, and this latest upgrade furthers that innovation, benefiting new and existing users alike. HELIOS now accepts a 100G ST 2110 source natively, allowing for IP ingestion of a single 8K canvas or multiple 4K rasters together.


WINNER NEP Group 5G MT-UHD MiniTx

NEP and subsidiary BSI have built on their MiniTx UHD, the world's smallest low-latency UHD wireless video transmitter, adding the latest 5G NR technology to deliver the 5G MT-UHD MiniTx. The team at BSI kept the compact profile and high-quality video encoding technology of the RF-enabled MiniTx UHD and added 5G NR connectivity, allowing this new, powerful transmitter to deliver fast, reliable connectivity even in areas with high network congestion. Along with 5G NR coverage, high-quality video encoding, integrated camera control, and low latency, the 5G MT-UHD can support a single UHD video or up to four independent HD video feeds up to 1080p59, providing the ability to capture and transmit multiple videos simultaneously.

Key features include:
• Ultra-compact Size (92 x 58 x 32 mm): The transmitter is incredibly portable and easy to deploy in various scenarios.
• Various Transmission Modes: Supports RTP, Call, Listen, and RdV, providing flexibility across multiple applications.
• OEM Camera Control and In-Car CAN Bus Support: Allows for easier integration with existing camera systems and in-car communication networks.
• Integrated GNSS Receiver, 9-Axis Inertial Sensor, and WiFi: Provides additional capabilities for location tracking and motion sensing, as well as wireless connectivity for control and updates.
• Low Latency: Supports real-time transmission of video and audio, which plays a vital role in live event broadcasting applications.
• Worldwide 5G NR Coverage via FR1 and FR2 Bands: The transmitter works on both sub-6 GHz and mmWave frequencies, providing better coverage and potentially faster speeds in areas with 5G coverage.
• Support for Video Formats Up to 2160p59: Supports high-quality video formats up to 4K resolution at a high frame rate.
• HEVC and AVC Encoding at 10-bit 4:2:2: Allows efficient compression of high-quality video with low bandwidth usage.
• HDR Capability: Allows high-dynamic-range video capture, providing a wider range of colors and brightness levels for more realistic and immersive video experiences.


WINNER Net Insight IP Media Trust Boundary with SMPTE RP 2129

SMPTE RP 2129 Trust Boundary is an industry-supported recommended practice that provides a consistently designed, simple-to-deploy, easy-to-maintain, and reliable IP media demarcation point between two organizations. This demarcation provides for the identification, isolation, and forwarding of 'trusted IP media' at any point between two IP network domains, whether those domains contain switches, routers, cameras, encoders, storage, or mixing equipment. Trust Boundary can also be used internally within organizations, facilities, and remote locations to reduce engineering complexity and improve operational resilience and workflow monitorability.

Example 1: Between organizations, Trust Boundary allows for trusted handover between service provider and broadcaster, whereby both organizations may directly interface their internal IP networks towards each other. This can be achieved without concern for additional engineering complexity or the increased risk of issues propagating from either party's infrastructure to the other's.

Example 2: Trust Boundary allows for trusted interconnection between IP production facilities across the Wide Area Network (WAN). The addition of a Trust Boundary at the WAN edge of each facility guarantees a highly stable interconnection between facilities without the need for both facilities to share the same IP network policy or be concerned about issues in one facility impacting the other.

Technically, Trust Boundary can be seen as a network demarcation capability that works on the principles of 'zero trust' and 'media preservation'. The conformance requirements of RP 2129 include presenting both trusted and untrusted interfaces, establishing a zero-trust firewall for media flows, the capability to translate (NAT) addressable parts of the flow, and, most importantly, the absolute preservation of RTP headers and media payload through the demarcation process.

Trust Boundary and Net Insight's IP Media Trust Boundary solutions are the media industry's standard solution for cost-effective exchange of trusted IP media. Net Insight products are the world's first implementations of this important new SMPTE recommended practice. A number of major clients have already deployed and benefited from the ability to easily interconnect IP networks in the same way that they traditionally connected ASI or SDI media. Together with the service providers and other vendors in the SMPTE RP 2129 working group, Net Insight has invested significant resources to develop and promote an open, standardized, and cost-effective method for the media industry to fully transition to IP. TV Tech's recognition of this world-first implementation of the standard, in combination with existing real-world deployments, will greatly support the adoption of RP 2129 and reduce the industry's costs and complexity of migrating to IP media.
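To make the "absolute preservation of RTP headers" requirement concrete, here is a minimal sketch (not Net Insight code) that parses the 12-byte fixed RTP header defined in RFC 3550. These fields, along with the media payload, are what must pass through an RP 2129 demarcation unchanged:

```python
import struct
from typing import NamedTuple

class RtpHeader(NamedTuple):
    version: int
    payload_type: int
    marker: bool
    sequence: int
    timestamp: int
    ssrc: int

def parse_rtp_header(packet: bytes) -> RtpHeader:
    """Parse the 12-byte fixed RTP header (RFC 3550)."""
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    return RtpHeader(
        version=b0 >> 6,              # top two bits: RTP version (always 2)
        payload_type=b1 & 0x7F,       # low seven bits of the second byte
        marker=bool(b1 & 0x80),       # marker bit, e.g. end-of-frame in video
        sequence=seq,                 # 16-bit sequence number
        timestamp=ts,                 # 32-bit media timestamp
        ssrc=ssrc,                    # 32-bit synchronization source ID
    )

# Example: payload type 96 with the marker bit set (0xE0 = marker + PT 96).
packet = struct.pack("!BBHII", 0x80, 0xE0, 1234, 90000, 0xDEADBEEF)
assert parse_rtp_header(packet) == RtpHeader(2, 96, True, 1234, 90000, 0xDEADBEEF)
```

A demarcation device may rewrite outer IP/UDP addressing (the NAT capability mentioned above), but every one of these RTP fields must emerge byte-for-byte intact.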


NOMINEE NETGEAR PR460X Professional Router

Introducing the latest addition to our networking line: the PR460X 10G/Multi-Gigabit Dual-WAN Pro Router with Insight Remote Cloud Management. Easily connect and manage internet traffic between wired devices within a secure network. This cutting-edge router is designed to meet the needs of residential or commercial integrators who require lightning-fast speeds and exceptional connectivity with remote management capabilities. The dual-WAN functionality includes failover protection, ensuring uninterrupted internet connectivity for critical applications and services. The sleek, updated form factor of the router is designed for mounting with the ports in the rear, but it can be installed reversed if desired. As an Insight Remote Managed product, configuration and management are simple, stress-free, and available 24/7 from any location. Its 5-year warranty and 90-day free phone and chat support make the Pro Router the most affordable cloud-managed routing solution on the market.

Feature overview:
• High-performance hardware: 10G/Multi-Gig throughput with 1 x 2.5G WAN, 1 x 10G/Multi-Gig WAN/LAN configurable port, 3 x 2.5G LAN, and 1 x 10G SFP+ LAN ports.
• Primary and secondary internet connections: Dual-WAN ports for failover accommodate two internet connections to maintain a reliable link. The first connection functions as the primary and the other as a backup.
• Leverage up to 8 VLANs: Separate the network into smaller groups for more secure and efficient use of network resources.
• Supports up to 8 DHCP servers: Allows for automatic assignment of IP addresses, simplifying network administration.
• Firewall protection against malicious incoming traffic: Prevents unauthorized access to the network and monitors the communications between the network and the outside world.
• NETGEAR Insight Remote Cloud Management: Instant discovery, configuration, and management of your network and Pro Router. A 4-year subscription is included.
• IPSec VPN: Provides up to 30 VPN tunnels for businesses to connect remote workers, branch offices, and partners to the main corporate network.
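The dual-WAN failover behavior described above can be sketched as a simple health-check policy (a hypothetical illustration, not NETGEAR firmware): fail over to the backup link after several consecutive failed checks on the primary, and fail back once it recovers.

```python
class FailoverController:
    """Minimal dual-WAN failover sketch: switch to the backup WAN after
    N consecutive failed health checks on the primary, fail back once
    the primary recovers. Thresholds are illustrative."""

    def __init__(self, fail_threshold: int = 3):
        self.fail_threshold = fail_threshold
        self.consecutive_failures = 0
        self.active = "primary"

    def report_health(self, primary_ok: bool) -> str:
        if primary_ok:
            self.consecutive_failures = 0
            self.active = "primary"          # primary healthy: fail back
        else:
            self.consecutive_failures += 1
            if self.consecutive_failures >= self.fail_threshold:
                self.active = "backup"       # sustained outage: fail over
        return self.active
```

Requiring several consecutive failures before switching avoids flapping between links on a single dropped probe.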


NOMINEE NetOn.Live LiveOS

LiveOS by NetOn.Live is a revolutionary software platform for live and near-live media production. A video- and audio-over-IP solution utilizing SMPTE ST 2110 and other popular standards, it runs on commodity hardware (IT servers and network switches). Unlike other software-based IP solutions that suffer from switching delays, LiveOS is unique in that the NetOn.Live engineering team has conquered the latencies that have prevented many professional technical directors from adopting such systems in the past. This has made the LiveOS Production Platform a true disruptor in the live and near-live production market and very worthy of recognition.

Democratizing live IP production
NetOn.Live is democratizing day-to-day live IP productions that were, until now, stuck in traditional, less flexible SDI workflows. As a software-based system, LiveOS is infinitely flexible and scalable. Shared resources can be spun up or down as required, offering a much-desired templated approach to production. At the press of a button, a different production configuration can be loaded instantly. Thanks to its flexibility, LiveOS can support sports, news, studio, and MCR production. Lowering the cost of production empowers tier-two events to be broadcast or streamed that previously never had the budget. Coupled with the fact that operators can participate in the production from multiple locations, anywhere, simultaneously, this freedom of deployment truly represents the future.

Workflow automation
Production management knows that the more they can automate a show, the more they can reduce expenditures while providing repeatable, high-value productions for viewers. Using LiveOS, production teams can not only automate cameras and shots, but also pull graphics from third-party systems to automatically populate lower thirds, and more.

Customer quotes
To quote Jyrki Lepistö, deputy managing director at NEP Finland: "LiveOS represents a tremendous value for us. It has all the features we were looking for, with room for expansion. We have been particularly impressed with the quick responses we receive from the NetOn.Live team with regard to customization for our unique needs.

"Unlike purpose-built proprietary systems of the past, we simply bought a few computers with software and some network switches, installed them in the datacenter in the basement, and we were ready to go. We're happy to report that the price of production is significantly lower with the NetOn.Live LiveOS system."

Future proof
If a broadcaster wishes to expand their system, they don't need to buy dedicated hardware for playback channels, recording capacity, multiviewers, or graphics, and no extra cabling is required. It's simply a matter of adding more horsepower to the server cluster. Everything is under one umbrella with a single point of development, so there's no need to worry about version mismatches between equipment and third parties. One of the benefits of a software-based solution is the ease with which NetOn.Live support can remotely access the entire system when asked and provide live remote support. When questions come up or new functionality is added, it's just a couple of mouse clicks to quickly troubleshoot or give guidance to production staff.


WINNER Nextologies NXT-MCR

With live events, the pressure is on. A cascade of dependent steps must all go perfectly: signal delivery, routing, switching, monitoring, and ad-break coordination (which changes based on what's happening in the event), and there are no redos. The financial stakes are sky-high for even a single error. Typically, several people must manage all the different systems, solutions, and steps, and those people are located within a physical facility. But what if live events could achieve the entire process on one screen, from anywhere in the world, without having to invest in a single piece of hardware?

Introducing NXT-MCR. NXT-MCR is a hybrid solution designed to provide a unified remote master control interface for live event production. It orchestrates incoming satellite and/or IP signals, video routers, video switchers, audio comms, video monitoring, scheduling systems, and prerecorded files, and generates a downstream feed fully loaded with SCTE markers, closed captions, and all the metadata necessary to distribute the signal to any taker around the world. NXT-MCR eliminates the burden of connecting to various systems through different remote control applications by combining everything into a one-stop UI that is easy to learn and operate and can reduce the manpower required for productions by 50%.

A quick summary of the process:
• The production feed is received from the production truck via satellite and a secondary backup source.
• NXT-MCR provides the signal to closed captioners so they can generate the CCs in real time.
• The signal is provided to associates through satellite, SRT, or an NXT appliance (or a combination).
• The NXT-MCR operator coordinates with the truck to automatically signal ad breaks by adding SCTE-35 markers and triggering the commercials (playout) they've previously provided, which are built into the NXT-MCR automated playout.
• The NXT-MCR operator coordinates with the production truck and associates to trigger the local commercials. During those regional commercials, we still fill with promotional content, but associates generate ad revenue by selling their own ad inventory.

The benefits of NXT-MCR are already being realized by some major live sports events. Those benefits include:
• Increased flexibility. The remote automation software ties back to the Nextologies HQ in Toronto, where all of the hardware and software are located. This frees the operator from being tied to any location: they can be inside the OB van or production truck, or sitting at a station a thousand miles away from the devices, with virtually zero latency.
• Visibility. NXT-MCR allows the operator to monitor the health of the signal on one screen and, because the software is built into Nextologies' Control Panel, to switch signals without leaving that screen.
• Lower operator cost. Because NXT-MCR allows one operator to run multiple pieces of hardware, fewer people are required to run each live event.
• No cap-ex investment. NXT-MCR is a fully managed service, employing Nextologies infrastructure, hardware, and software, as well as the operator required to run the event. In an era of unprecedented hiring challenges, that is sure to be a key benefit.
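SCTE-35 splice times, like the ad-break markers NXT-MCR inserts, are expressed as ticks of a 90 kHz clock in a 33-bit pts_time field. A minimal sketch of the conversion (an illustration, not Nextologies code):

```python
PTS_CLOCK_HZ = 90_000      # SCTE-35 splice times use the MPEG 90 kHz clock
PTS_MODULO = 1 << 33       # pts_time is a 33-bit field, so values wrap

def seconds_to_pts(seconds: float) -> int:
    """Convert seconds to a 33-bit pts_time tick count."""
    return round(seconds * PTS_CLOCK_HZ) % PTS_MODULO

def break_duration_ticks(duration_s: float) -> int:
    """Ticks for a break duration, e.g. a 30-second regional ad break."""
    return round(duration_s * PTS_CLOCK_HZ)
```

So a 30-second break is signaled as 2,700,000 ticks, and downstream equipment converts back by dividing by 90,000.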


NOMINEE Nextologies AVDS and SDI Player

Recent years have brought a tidal wave of technological advancements to television, changing how television is created and distributed, but also how people consume it. As consumers change their habits, the creators and distributors of TV have no option, if they intend to be successful, but to adapt. In this crush of digital transformation pressure, the technologies that have risen to the top as the most valuable for broadcasters and streamers all share one attribute: flexibility. That is the defining characteristic Nextologies engineers into all its solutions, and AVDS2 and the SDI Player are perfect examples of that engineering philosophy in action.

AVDS2 is a software application that can act as an encoder, decoder, and/or transcoder, making it possible to take any signal in, process it, and output it in any required format. Unlike other solutions on the market, which require hardware, Nextologies' AVDS2 is software that can be installed onto any server. AVDS2 is a full framework with a chain of operations in which each type of signal acts as a module in the chain. This creates the flexibility to take in any signal, perform the required operations on it, and then output in any required format. Many other video processor applications work in the same way, so this capability is not new: what is unique about AVDS2 is the ability to do all this without adding new hardware, which means no capital expenditures and no need for additional space, plus a big bonus as the broadcast and streaming world moves toward decentralized and remote operations: the software can be operated remotely from any location. In addition, AVDS2 includes native integration with all of Nextologies' key services, including automatic closed captions, automatic commercial detection/replacement, SCTE insertion, NexToMeet monitoring, GPU technology support, and QuickSync.

Nextologies has deep experience and expertise in SDI playout and has designed the SDI Player to give remote broadcasters the ability to send signals to SDI easily. This solution has one singular, game-changing capability: to play any signal to an SDI device. As a multi-module tool with a flexible internal chain, it allows users to switch sources, keep audio/visual sync, output to multiple SDI destinations, and play the live signal, run a file playout, or play slate if no signal is available. The SDI Player is written in the C language and is engineered to provide SDI playout with all the capabilities broadcasters specifically need: SCTE insertion, captions, and other VANC data. Like AVDS2, the SDI Player is a software solution that can be installed on any device, providing the ultimate flexibility without cap-ex hardware expenditures or additional rack space. Capabilities added to the SDI Player in 2023 include the ability to play signals arriving over the internet using native browser WebRTC or a mobile app. The SDI Player has another big advantage: the ability to keep A/V sync within a ±10 ms range (tested on a valid generator signal).


NOMINEE Nextologies Control Panel

Nextologies' updated Control Panel (CP) software-based video network is now available for client use, delivering total visibility and control from origin to delivery point, from space to Earth, anywhere on the planet. Bringing together all hardware, software, data, and analysis in one central platform, CP controls everything within the Nextologies HITC infrastructure, from transcoding to delivery to encoding, putting all of the features one could ever need in a broadcast environment in one platform. CP enables total visibility of signals from origin to delivery, as well as the ability to analyze and troubleshoot those streams at any point along the way.

The broadcast/streaming world is in the midst of a digital transformation that will enable all kinds of expansion and growth, but that can also be expensive and complicated. As broadcasters and streamers transition from conventional signal transport methods, satellite and fiber, to public internet and cloud-based operations, CP is engineered to accelerate their transformation through flexibility. CP is designed to solve the incompatibility problem. It works with every possible standard, from traditional satellite and fiber delivery to IP, so as companies move to a cloud or hybrid environment, CP can eliminate the need for specific manufacturers for encoding and decoding to get feeds out of the cloud. With CP, all the possibilities are on the table: clients can get colocation in the cloud, install Nextologies software, or buy their own servers and install Nextologies software. Nextologies makes all its own hardware on the HITC network, but in some cases the software can even control other manufacturers' hardware, eliminating the need to make that change. And CP is the layer on top that allows total control and visibility of whatever setup works best for the client.

CP represents a major advancement in IP broadcasting due to its ability to simplify video management and delivery processes. CP can be deployed globally on any server, and it controls all aspects of the Nextologies ecosystem. By using CP in combination with HITC, companies can manage their entire video management and delivery process with a single vendor, rather than needing to use multiple vendors with varying degrees of interoperability. This eliminates the need to cobble together different solutions and provides a comprehensive end-to-end solution. The CP technology has been implemented by many of Nextologies' clients, including the Associated Press, who have found it beneficial to let Nextologies manage the solution for them, from onboarding to troubleshooting. This allows them to have a hassle-free experience with a single point of contact for all their video management needs. CP meets the market need for a comprehensive, easy-to-use platform that allows companies to orchestrate global video management and delivery with a single vendor. CP's streamlined platform allows companies to manage their entire video workflow from a single control panel, reducing complexity and increasing efficiency. By addressing these market needs, CP helps companies save time, reduce costs, and increase the quality and consistency of their video content delivery.


NOMINEE Other World Computing Jellyfish Nomad

Other World Computing's Jellyfish Nomad is an all-flash solution that sets an unprecedented standard for shared portable NAS. This petite powerhouse is designed for DITs (digital imaging technicians), independent 3D studios, and on-the-go editors needing a shared media pool to work on the same project and access the same assets. Built with 6,000 MB/s of aggregate bandwidth, capacities up to 64TB of user-swappable NVMe drives, six attached 10 GbE ports, and 128GB of RAM, the Jellyfish Nomad can handle RAW files, multi-camera projects, image sequences, and VR without breaking a sweat. It is simply the fastest, most portable, and most user-friendly shared storage solution on the market, changing the game for editing teams and redefining the art of teamwork. Jellyfish Nomad empowers video editors with an entirely new level of cooperative efficiency, underscoring Other World Computing's commitment to enhancing the creative process.


WINNER Quantum Myriad

Myriad is a new all-flash, scale-out file and object storage software platform ideally suited for the evolving needs of VFX, animation, and rendering; the increasing demand for AI and ML content creation and enhancement tools; and new markets such as AR/VR, live production with LED video volumes, and digital twinning. Legacy NAS storage systems provide inconsistent performance, are complex and difficult to scale, and are often deployed in islands that add workflow complexity and management burden, and their slow performance makes rendering a painfully long process. Myriad instead makes full use of readily available NVMe storage and RDMA to deliver the extreme performance (tens of GB/s) and high IOPS (hundreds of thousands) needed for cutting-edge animation and multi-platform workflows without the drawbacks or design limitations of legacy systems. Myriad requires no custom hardware, so as market-available NVMe storage servers gain higher capacities, higher performance, and lower cost, they can be adopted, giving flexibility and adaptability as the business evolves.

Myriad lets you consolidate multiple animation, VFX, and rendering workflows into a single fast system that serves all departments, clients, workstations, and workflows, including rendering pipelines and AI and ML applications. Myriad delivers consistent performance for all users, stores the large number of small files common in these workflows highly efficiently, and serves rendering pipelines without impacting other users. Built with cloud-native technologies like microservices and Kubernetes, Myriad is extremely flexible and easy to use, requires no specialized IT or networking experience, and can be easily deployed on-premises or in the cloud. Myriad delivers this performance in a smaller footprint requiring less power, less cooling, and fewer components, reducing networking complexity, administration overhead, and operational costs. Myriad's powerful data services ensure that data is deduplicated and compressed to deliver an effective storage size of up to 3x the raw capacity, while zero-impact snapshots and clones protect against operator error.

Myriad benefits:
• Consistent, fast performance of up to tens of GB/s and hundreds of thousands of IOPS to serve every creative department's needs, including rendering, on a single system, whether deployed on-premises or in the cloud.
• Modern microservices architecture orchestrated by Kubernetes to deliver simplicity, automation, and resilience at any scale.
• Runs on readily available NVMe flash storage servers, so you can quickly adopt the latest hardware capacities and form factors and adapt your storage infrastructure to meet future requirements.
• A Myriad cluster can start with as few as three NVMe all-flash storage nodes, and its architecture enables scaling to hundreds of nodes in a single distributed, scale-out cluster.
• No specialized IT or networking knowledge needed: powerful automated storage, networking, and cluster management automatically detects, deploys, and configures storage nodes and manages the networking of the internal RDMA fabric.
• Highly efficient data storage with intelligent deduplication and compression, plus self-healing and self-balancing software to respond to system changes.
• Simple, powerful data protection and recovery with snapshots, clones, snapshot recovery, and rollback capabilities to protect against user error or ransomware.
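The deduplication behind effective-capacity gains like Myriad's "up to 3x" figure can be illustrated generically (a toy sketch, not Quantum's implementation): each fixed-size block is content-hashed, identical blocks are stored once, and a recipe of hashes rebuilds the original data.

```python
import hashlib

def dedupe_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks, store each unique block once
    (keyed by its SHA-256 digest), and return (store, recipe)."""
    store: dict[str, bytes] = {}
    recipe: list[str] = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # identical blocks stored only once
        recipe.append(digest)
    return store, recipe

def rebuild(store: dict[str, bytes], recipe: list[str]) -> bytes:
    """Reassemble the original data from the block store and recipe."""
    return b"".join(store[d] for d in recipe)
```

Four logical blocks of which three are identical occupy only two physical blocks, which is exactly how repeated render-farm or image-sequence data shrinks on deduplicating storage.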


WINNER Ross Video Voyager XR

Ross XR Solutions provides the rendering platform and control system that allows the easy preparation, management, and control of extended reality productions throughout every stage of development. Voyager XR offers a range of powerful and innovative capabilities, including:

End-to-end solution
Ross is the only vendor to offer a highly integrated solution for XR, thanks to its large portfolio ranging from robotics, switchers, and rendering engines to LED walls. The integration between Voyager and Ross' D3 LEDs offers unique possibilities, such as color matching.

Real-time dynamic shaders
Unreal Engine 5 allows the use of shaders to create realistic effects, from dynamic shadows, live lights, and light blooms to reflections and refractions. These shaders and effects are applicable in both augmented reality and virtual studio environments.

Support for Unreal Engine 5
The current version of Voyager is built on Unreal Engine 5.1. As part of this build, Voyager users can leverage the latest Nanite and Lumen technologies, which enable more complex geometries and global illumination/reflections, respectively.

Lucid Studio integration
Through the use of Lucid Studio, it is possible to mix and match tracking protocols and camera mounts to meet the requirements of the production. More importantly, Lucid Studio allows operators to run virtual productions in a simple yet powerful manner. Everything can be accessed with the click of a button and does not require any Blueprint scripting.

News integration
Voyager also supports seamless integration with MOS-based newsroom systems, meaning journalists can easily select an event, populate it, and add it to their stories.

Data integration
Voyager is linked to the XPression Datalinq server, enabling it to parse data from multiple external data sources directly into Voyager, so users can tell better stories, discuss statistics, or show other engaging and informative content.

Logic & scripting
While everything can be operated from Lucid Studio, scripting is still possible with Unreal's Blueprint Visual Scripting system, a complete and highly versatile scripting system based on the concept of using a node-based interface to create interactions from within the Unreal Editor.

Harnessing the renowned Unreal Engine from Epic Games, the Voyager XR graphics rendering platform enables users to create stunning virtual environments for AR, VS, and XR LED studio applications. Its powerful graphics capabilities enable users to create environments that are highly realistic and detailed while making the design process more efficient and outcomes more impactful. Despite its advanced backend, Voyager XR is designed to be user-friendly: the integration of the Lucid Studio control platform ensures that users, even without in-depth knowledge of Unreal, can operate Voyager with ease.

Ross Video continues to innovate the Voyager platform with a range of features. For those focused on mobility, Voyager's compatibility with AJA Io X3 is a key feature, allowing laptop-based operation. A new D3 plugin also allows seamless integration with D3 LED displays, enabling Ross to provide customers with an end-to-end XR solution as well as unique colour-matching capabilities for LED/set extension and AR applications.


WINNER RUSHWORKS, an ENCO brand PTX Model 3 PRO

The PTX Model 3 PRO by RUSHWORKS, an ENCO brand, is a whisper-quiet robotic pan/tilt head designed for larger payloads, providing the torque required to smoothly position cameras and lenses. Use any joystick controller that supports the VISCA over IP protocol. The unit features pass-through connections for SDI, HDMI, USB, DC camera power, and networking to reduce cable strain from camera movement. The PTX Model 3 PRO is equally at home in many production settings, including corporate, houses of worship, entertainment venues, sporting events, and newsroom-style studio environments.

There are multiple pass-through connections for cameras and lenses between the base and moving arm, minimizing cable strain and stress. These include 3 x SDI, 1 x HDMI, 1 x CAT6, and one 4-pin XLR. Adopting the popular and versatile VISCA over IP control protocol assures broad interoperability across many different brands of controllers and software, and includes commands such as pan, tilt, speed, ramp curves, presets and recalls, tally, and much more. With its simple yet adaptable mounting plate, the PTX Model 3 PRO is compatible with most video cameras, including models from Blackmagic Design, Canon, Panasonic/Lumix, RED, ARRI, Sony, and others. Configuring multiple PTX units on a network simplifies connectivity and control. You can use virtually any hardware joystick controller that supports VISCA over IP, or the RUSHWORKS VDESK Integrated Production System or RUSHCONTROL Robotics Control Software.

There are many accessories for the PTX Model 3 PRO to optimize it for your workflow: add a teleprompter rig, or choose from various wall or ceiling mounting bracket options. The PTX Model 3 PRO is rugged and very stable, constructed of aluminum and steel; the result is a robust, solid platform designed to accommodate professional-grade cameras and lenses. The head is 15 x 15 x 15 inches (381 x 381 x 381 mm) and weighs 43 lbs (19.5 kg). Designed and manufactured in the United States by RUSHWORKS, an ENCO brand, the PTX Model 3 PRO is the latest generation of robotic pan/tilt heads, backed by US-based customer support that's recognized for prompt, competent, and courteous assistance whenever you need it, 24/7/365.
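As a rough illustration of what VISCA over IP control looks like on the wire (a generic sketch of the common Sony-style framing, not RUSHWORKS code): a controller wraps a standard VISCA byte sequence in a small UDP header carrying a payload type, payload length, and sequence number, conventionally sent to UDP port 52381.

```python
import struct

VISCA_COMMAND = 0x0100   # VISCA-over-IP payload type for a command message
VISCA_PORT = 52381       # conventional VISCA-over-IP UDP port

def visca_ip_datagram(seq: int, visca_payload: bytes) -> bytes:
    """Wrap a raw VISCA payload in the 8-byte VISCA-over-IP header:
    2-byte payload type, 2-byte payload length, 4-byte sequence number."""
    return struct.pack("!HHI", VISCA_COMMAND, len(visca_payload), seq) + visca_payload

def recall_preset(camera_addr: int, preset: int) -> bytes:
    """Standard VISCA 'memory recall' command: 8x 01 04 3F 02 pp FF."""
    return bytes([0x80 | camera_addr, 0x01, 0x04, 0x3F, 0x02, preset, 0xFF])

# Example: datagram telling camera 1 to recall preset 2.
datagram = visca_ip_datagram(seq=1, visca_payload=recall_preset(1, 2))
```

A real controller would then send `datagram` to the head's IP address on `VISCA_PORT` via a UDP socket; pan/tilt drive, speed, and tally commands use the same framing with different VISCA payload bytes.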


NOMINEE SEI Robotics Minibar 5.1

T

he SEI Minibar is a unique audio-visual solution designed for Pay-TV consumers looking to upgrade their home theater with incredible sound and pictures. The soundbar supports best-in-class immersive audio with Dolby Atmos, features a 5.1 audio channel configuration, includes Android TV OS, and is Tuned by THX. The Minibar is half the length of typical soundbars on the market and packs 4K UHD capabilities, including Dolby Vision® for astonishing picture quality. This innovative product has been developed with top-tier audio and video experts in the field. It features 5.1 channels, including a dedicated center speaker for clear dialogue and excellent vocal clarity, four full-frequency speakers, and a built-in woofer for panoramic surround sound. Based on your favorite content, it can deliver immersive Dolby Atmos and Dolby Vision user experience automatically, and this innovative soundbar allows audiophiles to customize sound profiles to fit their audio preferences. The SEI Minibar is meticulously Tuned by THX, delivering an end-to-end solution that maximizes the performance of the speaker bar. This ensures consumers experience the artists’ true vision in their entertainment and enjoy an exceptional outof-the-box listening experience regardless of the type or format of the content. During the Tuned By THX process, THX performs an assessment of the product’s hardware, testing and evaluating every component individually. After the assessment, THX engineers optimize the product as a complete system and perform tuning to set parameters and corrective EQ. Through AI algorithms, the Minibar automatically delivers surroundings to the audience in real time. That

lets you always be at the sweet spot, so you can just lean back and experience cinematic surround sound from the comfort of your home with Dolby Atmos and Dolby Vision. The Minibar core features: n Sleek and compact design; it's half the size of other soundbars on the market (600 mm long). n Dolby Atmos immersive audio for multidimensional sound and Dolby Vision for visuals with incredible contrast, color, and detail. n Android TV provides premium 4K content and enables voice control. n Tuned by THX for balanced, well-calibrated audio at all volumes. n Wi-Fi 6 mesh technology for seamless wireless connectivity. The Minibar is an all-in-one device preloaded with all your favorite streaming services like Netflix, Disney+, YouTube, Hulu, Amazon, and more!


NOMINEE Sharp Sharp TVs Powered by TiVo

T

iVo OS™ is an independent media platform that allows user choice and control, enables subscriber acquisition and retention for content services, and provides recurring revenue for partners. TiVo OS drives TV demand and viewership, leveraging content relationships for a content-first approach with premium global and regional content and TiVo+ channels. Sharp is the second TV OEM to launch smart TVs Powered by TiVo, and we will partner with them in a multi-year, multi-million-unit relationship, with TVs expected to start shipping in Europe next year. We also have another OEM with plans to ship product in 2024, and we expect to have distribution in both Europe and the U.S.

Innovative features included in TiVo OS
Content-first approach: TiVo OS offers the ability to discover content across favorite streaming apps and create universal watch lists, making it easy for people to find, watch and enjoy what they love.
Personalized recommendations: TiVo OS customizes the entertainment experience by delivering personalized recommendations based on a combination of watch lists, what's trending and a unique taste profile.
Award-winning user experience: The typical 'sea of apps' user interface on a TV screen is replaced with an exceptional user experience that allows users to view, browse and search all content from one easy-to-use, simple guide or menu.
Natural voice control: With industry-leading voice control in your hands, all you need to do is ask TiVo what you're looking for and TiVo will help you find it.
Latest technology: TiVo OS delivers best-in-class natural voice navigation and uniquely customizable ways to enjoy TV.
Free content with TiVo+: TiVo+ is a free, ad-supported content network offering instant access to over 100,000 movies and TV shows, news, sports, kids and specialty programming on 150+ live channels.

With an award-winning, content-first experience, global content provider scale and a profitable partnership model, TiVo OS™ is the ultimate independent smart TV operating system. Based on decades of experience growing profitable consumer electronics and entertainment ecosystems, the TiVo model is designed to maximize the lifetime value of customers for TV OEM partners versus competing platforms. TiVo OS meets the needs of this large addressable market by giving smart TV OEMs the ability to create a new, unbiased relationship with their customers through meaningful content experiences that in turn drive lasting brand affinity. Powered by TiVo essentially levels the playing field, and with good reason: as much as 40% of the smart TV market is now seeking exactly the kind of embedded independent operating system and media platform that TiVo offers. The evolution of the smart TV market is being shaped by the advancements offered by independent media platforms. TiVo's content-first experience, with simplified universal discovery and natural voice navigation, is enhancing the user experience, offering intuitive interfaces, personalized recommendations and seamless streaming across devices. By prioritizing user-centric features, TiVo is creating a more enjoyable and user-friendly environment for content discovery, consumption, and engagement.


NOMINEE Shure Incorporated AD600 Axient® Digital Spectrum Manager

W

hether you're an RF coordinator at Liverpool Arena for Eurovision or managing frequencies for the coronation of King Charles, you require a seamless, effective, and efficient wireless system to make events run smoothly. RF environments require real-time scanning for planning and managing frequency coordination across professional audio applications. Shure introduced the Axient® Digital AD600 Digital Spectrum Manager to provide industry professionals with the tools they need for simultaneous RF control. The Shure Axient Digital AD600 Digital Spectrum Manager helps RF coordinators and audio professionals monitor live RF information to plan and coordinate frequencies in the most challenging RF environments. It is the digital successor to the analogue Shure AXT600 spectrum manager and continues to build upon Shure's portfolio of next-generation technology. The AD600 equips the industry with powerful and comprehensive RF coordination workflow tools that enable professionals to monitor frequencies continuously, even in the most challenging environments. This includes highly televised broadcast media and entertainment like Eurovision and the international Tokyo-based sports competition held in the summer of 2021. The AD600 boasts faster scanning that finds available frequencies and analyzes the RF spectrum immediately, streamlining site surveys and spectrum management. Six antenna connections deliver multiple coverage options while Dante connectivity provides advanced audio monitoring of your network. Users can lean on the USB port to export,

import, or save backup scans, event logs, and other important data. When used with additional Axient Digital solutions, AD600 users can benefit from interference avoidance features available with ShowLink®, a feature unique to the Axient Digital ecosystem that enables real-time control and communication with all ADX transmitters. With AD600, users can plan, scan, and deploy frequencies to their wireless network, or dive deep for complete control in tough environments thanks to the guided coordination features. These have been put to the test at major events around the world, with Steve Caldwell, RF Coordinator, trusting the AD600 and Axient Digital in some of the world’s most demanding RF situations. Speaking of its use at the international Tokyo-based sports competition in 2021, Caldwell said, "In my opinion, the best feature of the AD600 is its ability to sample up to six different antenna (or distribution network) sources concurrently. This allowed me to see comparable levels of four separate antenna inputs (the Axient Digital Quadversity distribution) and two localized wideband antennas. This ability to compare the six discrete antennas allowed quite accurate localization of any transmitter in the Tokyo stadium. As the six antennas were varied in both location and beamwidth direction, including two antennas on the opposite side of the stadium on an RF over Fiber (RFoF) network, the ability to locate a transmitter based purely on RSSI was remarkably accurate." The AD600 Digital Spectrum Manager ensures RF coordinators and audio professionals have the most accurate RF information in the most demanding audio applications.
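Caldwell's point about locating a transmitter purely from RSSI across multiple antennas can be sketched numerically. The Python snippet below is purely illustrative: the antenna coordinates and RSSI readings are invented, and the AD600 does not expose such an API. It shows the simplest form of the idea, a weighted centroid in which antennas hearing the transmitter more strongly pull the position estimate toward themselves.

```python
# Hypothetical antenna positions (x, y) in metres around a venue.
antennas = {
    "A1": (0, 0), "A2": (100, 0), "A3": (100, 80),
    "A4": (0, 80), "A5": (50, -20), "A6": (50, 100),
}
# Invented RSSI readings for one transmitter, in dBm.
rssi_dbm = {"A1": -70, "A2": -55, "A3": -52, "A4": -68,
            "A5": -60, "A6": -58}

# Convert dBm to linear power so stronger (nearer) antennas dominate.
weights = {k: 10 ** (v / 10) for k, v in rssi_dbm.items()}
total = sum(weights.values())
x = sum(weights[k] * antennas[k][0] for k in antennas) / total
y = sum(weights[k] * antennas[k][1] for k in antennas) / total
print(f"estimated position: ({x:.1f}, {y:.1f})")
```

With these made-up readings the estimate lands near the strongest antennas (A2/A3), which is exactly the behavior Caldwell describes exploiting with six discrete antenna inputs.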


WINNER Telos Alliance Telos Infinity® VIP

T

he Telos Infinity® Intercom family of products continues to grow and expand with the new Telos Infinity VIP app, a companion application for Infinity VIP customers that mirrors the look and functionality of the HTML5 browser-based VIP panel offering, but with a few key additions. Key features include a new system for easy panel sharing and configuration that allows VIP administrators to share invite emails with a link that

automatically opens the app with the desired configuration, allowing end users quick access to the same virtual panel without the need to safeguard a browser tab between sessions. For devices without the need for configured email, such as a dedicated tablet in a studio, virtual panels are easily accessible using a beacon address and corresponding password. The free Infinity VIP app is available from Google Play for Android users, or the Apple App Store for iOS devices.


NOMINEE Telos Alliance Minnetonka Audio® AudioTools Server WorkflowCreator

M

innetonka AudioTools Server has earned its reputation among content creators and distributors as one of the most highly flexible tools for handling complex, file-based audio automation tasks. The introduction of WorkflowCreator addresses one of the biggest challenges faced by ATS customers: The need to manually edit XML files to create new custom workflows. WorkflowCreator retains the core functionality

and all the features offered in its predecessor, WorkflowEditor, while introducing new abilities to delete steps, add new steps, and create brand-new workflows entirely from scratch using an intuitive, easy-to-use graphical interface. WorkflowCreator is included with the purchase of the AudioTools Workflow Control Module. Current ATS customers with an active TelosCare PLUS SLA can upgrade their systems to include WorkflowCreator.


WINNER VoiceInteraction MMS - Broadcast Edition

V

oiceInteraction's Media Monitoring System (MMS) is a comprehensive platform that monitors both QoS and QoE broadcast elements. This ensures optimal delivery while incorporating features that streamline content creation, foster viewer satisfaction, reduce costs, and create monetization opportunities. MMS utilizes proprietary Automatic Speech Recognition (ASR) and AI algorithms to meet regulatory requirements and manage content, all in one product. MMS allows users to monitor and control live content, segment news by topic, and generate automated reports. This proactive, AI-driven platform assists multiple departments simultaneously, making it the ideal solution for broadcasters looking to combine regulatory adherence, content production, and asset management. MMS is a 24/7 comprehensive compliance tool that captures and stores media feeds from various sources for OTA and OTT channels, prioritizing access to the most recent files with an archival system that gradually reduces the quality of stored content over time. Our proprietary technology enables a customizable alert center that displays real-time capture feed status, TS monitoring, loudness/LKFS logging, video and audio QoS, and closed caption monitoring. The alert center provides configurable real-time notifications through in-app alerts, email, or instant messaging so that specific users or teams can take prompt action with confidence. The multiviewer allows for real-time monitoring of multiple live and VOD broadcasts in one dashboard. Generated content is easily integrated into downstream workflows by exporting media in a wide range of transcoding formats for any audio or video stream. The entire process is streamlined through a centralized web-based interface, making security simpler and reducing

computational demands for real-time monitoring. Additionally, the system provides a RESTful API for development on top of the platform and customizable integrations. After capturing, monitoring, and storing incoming signals, the platform analyzes the broadcasts: the ASR generates a time-stamped, full-text transcription of the newscast. AI algorithms then generate relevant and searchable metadata: automatic news clipping, topic assignment, keyword detection, anchor ID, title and summary generation, and more. MMS' content fingerprinting technology also utilizes OCR for additional face and text recognition. This creates a network of metadata, searchable without the need for manual tagging and curation, effectively saving time and cutting costs. The platform enables locating clips about a certain topic or person of interest, monitoring ad placements, keeping track of music royalties, and observing trends by adding Nielsen ratings or similar data. This information can be made available for every network channel or multiple market locations, including competitors. MMS enhances content creation and delivery workflows, fostering viewer engagement and creating new monetization opportunities through content repurposing. The generated clips and additional information can be exported to the web and social media, enhancing the station's online presence and allowing viewers to watch content on their preferred devices. The index and search capabilities, combined with ratings, allow the suggestion of relevant clips or related content. The automatically generated analytics allow for data-driven decisions that improve programming and ratings. MMS benefits every department, empowering broadcasters to focus on what they do best: creating and delivering great content.
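To make the transcript-driven search concrete, here is a toy Python illustration of querying time-stamped ASR output for clip start times. The data and function are invented for illustration only and bear no relation to MMS's actual, proprietary index.

```python
# Hypothetical ASR output: (start time in seconds, recognized text).
transcript = [
    (0.0, "Good evening and welcome"),
    (4.2, "Tonight the city council approved the new budget"),
    (9.8, "In sports the home team won again"),
]

def find_clips(query: str) -> list[float]:
    """Return start times of transcript segments matching the query."""
    q = query.lower()
    return [start for start, text in transcript if q in text.lower()]

print(find_clips("budget"))   # [4.2]
```

A real system would combine this full-text lookup with the topic, anchor-ID, and OCR metadata described above, but the principle is the same: the timestamped transcript turns "find every mention of X" into a simple index query rather than a manual tagging job.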


NOMINEE WISEDV INC. WISEDV:WISEPLAY

W

isePlay is a cost-effective and versatile channel playout platform for broadcasters to automate and optimize on-air operations. An interactive and FAST broadcast channel playout, it supports all standard protocols, including UDP, HLS, SRT, RTP, RTMP, WebRTC, NDI, SDI, and ASI, to originate a TV or OTT channel. Hardware-agnostic, WisePlay runs on any cloud, on-premises, or private data center, on CPU only or CPU and GPU. It offers complete control over OTT or on-air content, ensuring viewers a seamless and high-quality broadcast experience. It supports all video formats and resolutions, including 4K and HDR. It features real-time graphics editing capabilities with a multi-layer Visual Graphics Editor, making graphics insertion more efficient. WisePlay supports media files up to 4K resolution with MPEG2, H264, HEVC, and VPx video in MXF, .mov, .mpg, .mp4, .ts, .avi, .wmv, .mkv, and other containers. It outputs up to 4K resolution in MPEG2, H264, HEVC, WebRTC, and HLS formats. It also supports DVB subtitles and open and closed captions in multiple languages. WisePlay provides 1+1 redundancy with synchronized schedules and media, ensuring high levels of reliability for broadcasters. WisePlay includes AI-based metadata fetching for movies and media files, making it easy to populate the required multi-format informative electronic program guide (EPG) on each platform, such as Samsung Plus, Vizio, PlutoTV, Localnow, and other well-known EPG platforms, with a unique URL. AI-based breakpoint detection splits media with clip-in and clip-out capabilities, allowing the insertion of ad breaks at viewer-friendly midpoints. WisePlay also supports automated captions through its speech-to-text ability,

inserting closed captions in EIA 608/708 formats. With Smart QR-Code and HTML5 graphic insertion capability, interactive polls are supported. Advanced rule-based scheduling allows scheduling up to a year in advance, with automatic ad breaks and media and graphics scheduling, reducing scheduling time to about 25% of that required by other playout systems. WisePlay's browser-based remote operation allows broadcasters to control their playout operations from anywhere, making it an ideal solution for teams working remotely or across multiple locations. Its integration with DASDEC's EAS system enables emergency alert text insertion. Overall, WisePlay is a powerful and flexible playout platform that helps broadcasters of all sizes and types streamline their operations, reduce costs, and increase the efficiency and reliability of their workflows. It can insert dynamic HTML5 graphics such as live weather graphics and sports scores, and can end ingested live streams based on time, manually, or on SCTE triggers, which is ideal for live news and sports channels. It schedules station ID insertion, sends emergency alerts, and creates PSIP tables. WiseStudio integration with WisePlay enhances live remote sports and news production. WisePlay recently won the Product of the Year award at the NAB Show 2023, and Streaming Media recognized WisePlay as a leading trendsetter in 2023. Over 300 channels currently use WisePlay due to its reliability, flexibility, and ability to meet the modern broadcasting demands of FAST and TV broadcasters. Also debuting is WisePlay Lite, designed for mainstream low-cost FAST channels with very high channel density per CPU core.


NOMINEE Zero Density Reality5

A

fter helping broadcasters deliver coverage of everything from US elections to the World Cup, the Zero Density team knows how easy it is to break an audience’s immersion—all it takes is a host that looks like they’re floating on a virtual set. That’s why we’re thrilled to announce Reality5. With it, broadcasters never have to worry about photorealism in on-air graphics again. Reality5 is a real-time compositing platform that raises the game for broadcasters by letting them create photoreal virtual productions more efficiently than ever before. With 40% faster rendering, Reality5 gives creatives the freedom to work with complex 3D scenes, so attention-grabbing results can be delivered without having to spend time optimizing assets or worrying about RAM requirements. With Reality5, all compositing and keying is executed in 3D space to maximize accuracy. Its pixel-perfect keyer ensures that real shadows are accurately preserved in a fully virtual

environment. In a hybrid setting, Reality5 is also capable of casting shadows and reflections of virtual objects onto physical elements and vice versa. This reduces the uncanny valley effect that comes from real-world actors or props that look copy-pasted into a virtual backdrop. Whether it's for AR sports statistics or full XR multicam operations, Reality5 now comes with out-of-the-box support for all Unreal Engine 5 features and plugins too. Zero Density has committed to supporting new versions of Unreal Engine within weeks of their release so that broadcasters can stay ahead of the curve with pipelines that are future-proof. To save time and money on every show, broadcasters can also take advantage of Reality5's in-built control hub. This makes it easy to build virtual sets across multiple locations, all in just a few clicks. Anyone can create virtual set designs, add them to the central hub, and then distribute them to multiple studios in different countries, even when each has a different setup.


WINNER Zixi Zixi Software Defined Video Platform (SDVP)

Z

ixi is the architect of the Software-Defined Video Platform (SDVP), the industry's most complete live IP video workflow solution, providing unparalleled live video delivery performance over the Zixi Enabled Network, the industry's largest ecosystem, consisting of more than 1,000 media companies and 400 technology partners globally. The SDVP enables media organizations to economically and easily source, manage, localize and distribute live events and 24/7 live linear channels in broadcast QoS, securely and at scale, using any form of IP network or hybrid environment. Superior video distribution over IP is achieved via four components:
1. Protocols: Zixi's congestion- and network-aware protocol adjusts to varying network conditions and employs forward error-correction techniques for error-free video transport over IP. As a universal gateway, standards-based protocols such as RIST and open source SRT are supported, alongside common industry protocols such as RTP, RTMP, HLS and DASH. Zixi supports 18 different protocols and containers, the only platform designed to do so.
2. Video Solutions Stack: Provides essential tools and core media processing functions that enable broadcasters to transport live video over any IP network, correcting for packet loss and jitter. This software manages all supported protocols, transcoding and format conversion, collects transport analytics, monitors content quality and layers intelligence on top of the protocols, such as bonding and patented hitless failover across any configuration and any IP infrastructure, allowing users to achieve five-nines reliability.
3. ZEN Master: The SDVP's control plane, enabling users to intelligently provision, deploy, manage and monitor thousands of content channels across the Zixi Enabled Network, including 400+ Zixi enabled partner solutions such as encoders, cloud media services, editing systems, ad insertion and video management systems. With such an extensive network of partner-enabled systems, Zixi ZEN Master presents an end-to-end view across the complete live video supply chain.
4. Intelligent Data Platform: A data-driven advanced analytics system that collects billions of telemetry points a day to clearly present actionable insights and real-time alerts. The IDP leverages cloud AI and purpose-built ML models to identify anomalous behavior, rate overall delivery health and predict impending issues. The IDP includes a data bus that aggregates over nine billion data points daily from hundreds of thousands of inputs within the Zixi Enabled Network, including over 400 partner solutions and proprietary data sources such as Zixi Broadcaster. This telemetry data is then fed into five continuously updated machine-learning models where events are correlated and patterns discovered. With clean, modern dashboards and market-defining real-time analytics, the SDVP enables users to focus on what's important, with intelligent alerts and health scores generated by Zixi's AI/ML models helping sift through and aggregate data trends so that operations teams always have the insights they need without data overload. At a time of normalized remote working and a proliferation in the ways programs reach viewers, Zixi's SDVP delivers agility, reliability and broadcast-quality video securely from any source to any destination over flexible IP video routes.
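The anomaly detection described for the Intelligent Data Platform can be illustrated in miniature. The Python sketch below is purely illustrative; Zixi's actual ML models are proprietary and far more sophisticated. It shows the general shape of the idea: score each new telemetry sample against a rolling window of recent history and flag large deviations.

```python
from collections import deque
from statistics import mean, stdev

class TelemetryMonitor:
    """Rolling z-score check: flag samples far outside recent history."""

    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if value is anomalous vs the recent window."""
        anomalous = False
        if len(self.samples) >= 10:  # wait for a minimal baseline
            mu, sigma = mean(self.samples), stdev(self.samples)
            anomalous = sigma > 0 and abs(value - mu) / sigma > self.threshold
        self.samples.append(value)
        return anomalous

# Feed ten normal round-trip-time samples, then a sudden spike.
mon = TelemetryMonitor()
for rtt_ms in [20, 21, 19, 20, 22, 20, 21, 19, 20, 21]:
    mon.observe(rtt_ms)
print(mon.observe(250))   # the spike is flagged as anomalous
```

A production system would replace the z-score with trained models, correlate across streams, and feed alerts into dashboards, but the input (a firehose of telemetry points) and the output (a health signal) are the same.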


NOMINEE Zixi Zixi-as-a-Service

Z

ixi-as-a-Service (ZaaS) is a complete solution for enabling live video distribution from any location, in any format, delivered over any protocol, to any destination. ZaaS provides everything needed to receive contribution feeds and process, transcode, package and deliver them to any target location. It orchestrates cloud ingress into geographically distributed cloud operating environments. For customers that require live transcoding or other processing support, ZaaS provisions the necessary cloud infrastructure and automates distribution of low-latency, broadcast-quality live video to any number of targets and endpoints. ZaaS customers have full visibility across the operating environment, with Zixi ZEN Master providing real-time status views and access to all managed channel and infrastructure resources. In addition to the purpose-built live video operational model that ZaaS enables, customers benefit from significant cloud egress fee mitigation and cost efficiencies. Delivering video through cloud infrastructure offers significant advantages in today's market. Zixi customers are facing rapidly changing business and operating models and require the agility and scale that cloud delivers. The first wave of cloud adoption saw video distributors migrate large swaths of their post-processing and delivery infrastructure to public cloud partners. In 2022, video publishers moved more contribution and remote production workflows to the cloud and have been implementing multi-cloud strategies to mitigate risk and optimize costs. ZaaS is a key part of our customers' multi-cloud strategies. Most customers are partnered with a public cloud provider like AWS, Azure or GCP, but protecting themselves from outages associated with a specific provider is increasingly becoming a high priority.
ZaaS provides a complete video-optimized cloud operating environment for live video distribution, securing a diverse signal path for uninterrupted streaming and providing industry-best egress rates that dramatically reduce cost. At the heart of ZaaS is Zixi ZEN

Master, which seamlessly coordinates bonded live channel distribution in both the customer's public cloud account and the ZaaS account. This is critical to enabling continuous, uninterrupted hitless playback, even if there are significant outages within either operating environment. ZaaS is built on top of the Zixi Software-Defined Video Platform (SDVP). Key benefits of ZaaS include: n Centralized management: ZEN Master provides a centralized view of the entire Zixi Enabled contribution and distribution network. n Security: Best-in-class protection with DTLS and AES standards-based encryption. n Reliability: Experience ~100% uptime with Zixi's patent-pending hitless failover, which provides redundant transmission options for high reliability and disaster recovery. n Ultra-low latency: Network-adaptive forward error correction and recovery deliver proven millisecond live linear latency. n High availability: Leverage the SDVP on Zixi-as-a-Service to bond and load balance diverse internet or fiber circuits for increased high availability between facilities. n Interoperability: Zixi is compatible with the largest ecosystem of encoders, decoders and live video protocols.


WINNER Broadcast Bionics VirtualRack

V

irtualRack is a hardware appliance and browser interface that helps engineers rapidly and confidently deploy containerised broadcast products without advanced IT knowledge. Broadcast equipment manufacturers are increasingly releasing containerised versions of products that have, until now, only been available as hardware. But as engineers begin to transition from physical hardware to software-based alternatives, they are faced with a number of complex challenges. VirtualRack from Broadcast Bionics is a vendor-agnostic solution that allows broadcast engineers to easily deploy and manage containerised broadcast products from a range of manufacturers, rapidly, reliably and without requiring any specialist IT knowledge. Whilst there are many benefits to deploying containerised products (improved redundancy, dynamic scalability, remote management, saving on physical rackspace), there is also a new set of challenges that often extend beyond an engineer's expertise and remit. Due to broadcast-specific requirements, including low-latency audio and high uptime, advanced Linux knowledge is required to configure hardware that will reliably run containerised broadcast products. Attempting to run multiple products from different manufacturers further compounds the issues, as different products have different requirements, so additional specialist knowledge (and a lot of trial and error) is needed to achieve stability. VirtualRack solves these issues using preconfigured hardware with an embedded software interface. Once a VirtualRack appliance is installed on a broadcaster's network, products can be selected from the built-in application library without writing a line of code. Once activated with their own product licence codes, VirtualRack monitors for updates, provides visibility across entire multisite networks, and allows engineers to configure and manage backup and

redundancy options. VirtualRack's growing application library currently includes a number of products from Telos Alliance, as well as SOUND4, 2wcom, XPERI and Wide Orbit Automation for Radio. The appliance is available in two sizes, depending on the number of products and processing power required, and multiple units can be installed across multiple sites to provide a breadth of diverse backup and redundancy options. We're confident that VirtualRack is unique in addressing the issues described above and is therefore a truly innovative solution, empowering broadcasters to quickly and confidently build stable, future-proof infrastructures. VirtualRack has already generated international interest from leading broadcasters and is currently installed at one of Australia's largest networks. VirtualRack is the result of countless hours of research and development to fine-tune, troubleshoot and ultimately create an optimised platform for managing containerised products. Now, instead of painstaking trial and error, broadcasters can start building their future-proof infrastructure, safe in the knowledge that the hard work has already been done.
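For readers curious what VirtualRack abstracts away, the commands below sketch the kind of hand-rolled container deployment an engineer would otherwise assemble by trial and error. The image name and exact flags are illustrative, not taken from any vendor's documentation; they simply show the broadcast-specific concerns (uptime, low-latency audio, scheduling priority) the text describes.

```shell
# A hand-rolled equivalent of the deployment VirtualRack automates:
# one containerised audio product, set to restart automatically for
# high uptime, placed on the host network to avoid NAT overhead for
# low-latency AoIP traffic, and permitted to request real-time
# scheduling priority. Image name and registry are illustrative.
docker run -d \
  --name audio-processor \
  --restart unless-stopped \
  --network host \
  --cap-add SYS_NICE \
  example-vendor/audio-processor:latest
```

Multiply this by several products from different manufacturers, each with its own requirements, and the appeal of a preconfigured appliance with an application library becomes clear.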


NOMINEE ENCO Systems, Inc. WebDAD 3.0

T

he ability to work remotely and leverage automated workflows is more important than ever, and WebDAD browser-based automation can help operations continue unabated. WebDAD is a browser-based remote automation control system with a virtualized platform that allows users to remotely access and manage studio-based ENCO DAD automation systems. It also helps keep operating costs low in a very challenging economic environment. WebDAD empowers cloud-based operation through an updated, HTML5-enabled user interface. Users have the freedom of a fully virtualized platform to remotely access and control their ENCO DAD radio automation systems; a benefit made all the more powerful with the addition of ENCO's Presenter On-Air interface. This interface further optimizes ENCO's modular, touchscreen design for customizing production workflows and providing instant access to media libraries, playlists and more. Plus, multiple users can remotely access their main DAD system at the same time, whether down the hall or around the world. The latest version of WebDAD offers voice tracking enhancements, including ENCO's powerful and efficient FastTRAK one-button voice track creation and insertion feature (previously available in the local DAD interface but appearing in WebDAD for the first time) alongside improved sound quality. Its refined user interface sports responsive adaptation to the operator's browser window, improving the user experience across desktop and mobile devices alike. WebDAD has evolved to support a true 'Studio in the Cloud' operation that removes the limitation of maintaining an on-premises physical workstation. In addition to reducing operational costs at the station, WebDAD customers have broad control and playout capabilities across on-air presentation, playlist manipulation, voice tracking, and other

workflow tasks. With the powerful ENCO Presenter On-Air interface, WebDAD is the most powerful and comprehensive browser-based interface on the market. WebDAD also helps stations work more freely with on-air and production talent around the world by making it easy to bring in part-time, contract, and remote staff to access the playout system from anywhere there is an internet connection. Advanced security features allow administrator accounts to add or remove access easily, making it easier than ever to work with contract workers, even if they are only needed to cover one event.


WINNER GatesAir Intraplex IP Link 100n with MicroMPX

G

atesAir's new Intraplex IP Link 100n hardware codec represents the brand's next generation of Audio over IP hardware solutions, alongside two other IP Link products, the 100c half-rack codec and 100e transmitter module, introduced last year. The Intraplex IP Link 100n is a full-duplex, single stereo-channel codec for simultaneous reception and transmission of Audio over IP streams in STL (Studio-to-Transmitter), STS (Studio-to-Studio) and other networking applications for radio and streaming services. Like the IP Link 100c and 100e, the IP Link 100n offers a modernized platform that allows GatesAir to integrate more features under the hood, including 10-bit audio processing for improved program audio quality, Secure Reliable Transport (SRT), and cloud-based monitoring of live streams. It is compatible with all existing IP Link codecs, including the IP Link MPXp codec for FM-MPX composite signal transport, and Intraplex Ascent, GatesAir's bulk cloud-transport platform for enterprise applications. As with all IP Link products, the IP Link 100n is SRT-capable and integrated with Dynamic Stream Splicing (DSS) technology, a GatesAir industry-first innovation from nearly ten years ago, which fortifies network path redundancy across

two or more live streams for hitless protection against packet and link losses. Other features include dual power supplies and multi-source audio switching with automatic failover to backup streams and USB for on-air protection. The 100n, like the 100c and 100e, adds support for MicroMPX technology to enable bandwidth reduction for FM MPX signal transport over single-frequency networks. MicroMPX is a third-party software library developed by Thimeo that helps FM broadcasters transport a full FM composite MPX signal at 320 kb/s. When used within the IP Link 100c, 100e or 100n, customers can reduce bandwidth consumption by up to five times. This makes the combined solution ideal for reliable MPX transport over public internet connections and low-bandwidth distribution networks. While MicroMPX technology has been adopted by other vendors, GatesAir offers a unique value proposition through integration with its Intraplex SynchroCast and DSS technology. SynchroCast3 provides a dynamic, scalable simulcasting solution for overlapping transmitters, helping broadcasters cover a broader geographic area with fewer frequencies. That makes SynchroCast's integration with MicroMPX especially valuable for single-frequency networks that operate many transmitters on one FM frequency.
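A quick back-of-the-envelope check of the bandwidth claim: at the cited 320 kb/s, an always-on MicroMPX STL consumes a modest amount of data, and the "up to five times" figure implies an uncompressed baseline of roughly 1.6 Mb/s. The baseline below is derived from the article's own numbers, not from GatesAir's specifications.

```python
micro_mpx_kbps = 320                    # MicroMPX rate cited above
savings_factor = 5                      # "up to five times"
implied_baseline_kbps = micro_mpx_kbps * savings_factor  # 1600 kb/s

def gb_per_month(kbps: float, days: int = 30) -> float:
    """Data volume of a 24/7 stream at the given rate, in gigabytes."""
    return kbps * 1000 / 8 * 86_400 * days / 1e9

print(f"baseline: {gb_per_month(implied_baseline_kbps):.0f} GB/month")
print(f"MicroMPX: {gb_per_month(micro_mpx_kbps):.0f} GB/month")
```

For a metered or low-bandwidth link, the difference between roughly 100 GB and 500 GB a month of STL traffic is exactly why the text highlights public internet and low-bandwidth distribution networks as the target use case.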


WINNER Inovonics, Inc. 541 FM Modulation Monitor

The 541 is Inovonics’ fourth-generation FM Modulation Monitor. It delivers a wealth of information about the transmitted signal in terms of the RF carrier and all subcarriers, the audio component defining the technical quality that the listener hears, and full decoding of RDS data and SCA audio. The all-digital 541 combines detailed DSP signal analysis with a menu-driven touchscreen display, plus Web server-based total access for remote operation, including measurements, graphical data and direct Web-browser audio monitoring of the off-air program. Feature highlights:
• Unexcelled reception of analog-FM broadcasts with highly accurate displays of total modulation and other measurements.

• Total RF signal performance monitoring, direct and off-air.
• Tuning range: 87.5 MHz – 107.9 MHz in 100 kHz steps.
• Graphic 5-inch touchscreen and remote display of all level metering.
• Dynamic Web interface displays Spectrum Analyzer, BandScanner, RDS data, baseband FFT, audio XY, and peak density readouts.
• Station Rotation for round-robin monitoring of up to 30 preset stations.
• Remote control and monitoring via Web.
• Enhanced alarm logging with notifications via email, SMS and SNMP.
• Remote listening with multi-user streaming and Dante/AES67 AoIP.


WINNER Jutel Oy RadioMan® Clipper

RadioMan® Clipper is a new-generation web audio production platform that supports mobile app environments, virtual browser-based production, and media asset management in the cloud. The main goal and driver behind RadioMan® Clipper was to deliver on the promise of a "single audio toolset platform for all journalists" for audio production and management. Clipper was created to provide a unified platform for audio production so that journalists don’t need separate tools for recording, mobile audio editing, metadata generation, audio transfer, remote media asset management, and multitrack audio editing. A single platform helps the broadcaster with production environment management, user management, updates, and data security. Clipper allows journalists to effortlessly record and edit audio while on the go, either via an intuitive mobile app or by unlocking the extensive capabilities of browser-based multi-track editing with a variety of advanced features. Experience the seamless workflow of RadioMan® Clipper:
1. A journalist uses the Clipper mobile app to record interviews and comments, write down notes and journalistic texts, modify the clips with basic single-track audio editing, and manage the files locally on the handset or transfer the results into cloud-based media asset management for further use.
2. The journalist can either use the results directly in the newsroom, in the playout environment at the broadcasting facilities, or in further journalistic work.
3. Journalists can continue editorial work on audio and text material with a browser-based multi-track version of the Clipper audio editor.
The Clipper mobile app allows recording of audio clips, metering, basic audio editing, leveling, auto trim, separate clip list usage, copy/paste, marker usage, uploading to media asset management, and creating text and notes for later use.
Clipper is available for both iOS and Android devices and integrates effortlessly with RadioMan® or other media asset management systems, ensuring a smooth flow of creative assets across platforms. The mobile app can also be run as a browser-based application on the handset, which allows temporary users to work without installing the app. The Clipper multitrack application runs in a browser environment but supports standalone operation as well. In addition to the features of the mobile app, it has extensive support for multi-track editing, keyboard commands, and mouse usage. Clipper is integrated seamlessly with the RadioMan® 6 cloud-based broadcast environment. The material produced in Clipper is immediately visible in RadioMan® 6 and can be used instantly for broadcast purposes, whether in a studio or in remote locations with browser-based playout controls. Clipper also integrates with other media asset platforms or file-based asset management platforms and can be tailored for in-house use by large broadcasters, networks, or other content creators. This innovation allows radio stations to streamline their broadcast operations across various locations, cutting costs and simplifying their radio broadcast workflow, automation, and distribution with the RadioMan® Cloud system. Transform your ideas into polished audio within seconds: from recording to ready for live broadcast.


WINNER NeoGroupe NeoSIP

NeoGroupe has added the capability to handle phone lines and codecs directly in NeoScreener, making it an all-in-one software solution for managing calls and contributions at once. NeoSIP is a software-based telephony system that replaces your hybrid and codec setups. Combined with an IPBX, NeoSIP offers a scalable solution, ensuring real-time call management from listeners and reporters in a single, easy-to-use interface. Your studio users no longer need to know how to operate multiple devices: all communications arrive in the same interface and can be assigned easily to faders. It also offers all IPBX native functions such as call routing, voicemail, recordings, IVR and more. The solution is compatible with multiple technologies:
• PUBLIC PHONE NETWORK: SIP/VoIP, ISDN, POTS.

• CODECS: Opus, G.722, G.711a, G.711µ, G.729.
• CONSOLE AUDIO: AES67, Dante, Livewire+, WheatNet-IP and any audio in/out through a Windows WDM driver.
• OPUS CONTRIBUTIONS: WebRTC, physical codec boxes.
Because NeoSIP is scalable and modular, it can handle most cases: from a single fader and single line in a studio to four faders per console across three 10-studio platforms in the same region, with as many phone lines and contribution lines as desired. The solution can be integrated with your virtualization environment, installed in the cloud, or even run on a local machine in your facility. Finally, NeoSIP comes with a free version of NeoScreener Lite and unleashes the power of the full NeoScreener software suite (NeoWinners, NeoScreener, NeoScreener Smart, NeoAgent), making it a new solution for all communications needs in a broadcast environment.


WINNER RFE Broadcast HPA3000 FM Amplifier

The HPA3000 FM Amplifier is a compact (only 4U of rack space) 3.3 kW FM amplifier featuring some of RFE’s most innovative technologies in terms of efficiency, cost and maintenance. The HPA3000 is designed to meet the most demanding requirements for user-friendliness, service life and easy maintenance. With an overall efficiency of more than 75% and a nominal power of 3.3 kW, the amplifier is the ideal solution for radio stations that need robust and reliable transmitters without compromising on compact size and high performance. The 3.3 kW amplifier is composed of three 1.1 kW RF modules mounting the latest-generation transistors, plus three fully redundant General Electric power supplies; if a power supply fails, the output power does not change. Of course, like all RFE products, the HPA3000 includes important features such as a large LCD color display with touch panel and remote control at no additional cost. The HPA3000 is a reliable device, easy to use and control, which ensures the best performance in a small size. It is also available at 5 kW in the same 4U size, composed of four 1.4 kW RF modules plus three General Electric power supplies. RFE Broadcast: making broadcast smarter.
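The quoted figures imply the amplifier’s input power and waste heat directly. A quick check, using only arithmetic on the numbers above (illustrative, not a manufacturer specification):

```python
# AC input power and waste heat implied by the quoted figures:
# 3.3 kW nominal output at >75% overall efficiency.
output_kw = 3.3
efficiency = 0.75  # lower bound quoted in the text

input_kw = output_kw / efficiency   # AC power drawn at full output
heat_kw = input_kw - output_kw      # power dissipated as heat

print(f"Input: {input_kw:.1f} kW, heat: {heat_kw:.1f} kW")
```

At 75% efficiency the amplifier draws about 4.4 kW and sheds about 1.1 kW as heat; a higher real-world efficiency would lower both numbers.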


WINNER Telos Alliance Axia® Altus Virtual Mixing Console

What happens when you move the console out of the studio and into a virtualized environment? A world of new possibilities emerges. The new Axia® Altus software-based audio mixing console brings the power and features of a traditional console to desktop and laptop computers, tablets, and smartphones running any modern web browser, inviting users to rethink where their content is created and produced. Altus provides full-function mixing, including eight virtual auxiliary mixers and integration with Telos VX broadcast phone systems, for distributed and remote workforces, allowing collaboration on both recorded programs and live broadcasts. Altus is also ideal for any situation where fast deployment is necessary, such as temporary studios, low-cost disaster recovery centers, or on-demand remote broadcasts. Altus is delivered as a Docker container, a method of software deployment used extensively in modern IT environments. It provides a high degree of flexibility on- or off-premises to meet your needs now and in the future using non-proprietary COTS hardware, and is available as a one-time buyout or as a subscription.
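Container delivery means Altus can be stood up with standard Docker tooling. The fragment below is a hypothetical illustration only: the image name, networking choice and volume path are invented placeholders, not Telos Alliance documentation.

```yaml
# Hypothetical docker-compose fragment; image name, network mode and
# volume path are placeholder assumptions, not vendor documentation.
services:
  altus:
    image: example/altus:latest   # placeholder image name
    restart: unless-stopped
    network_mode: host            # AoIP traffic typically needs host networking
    volumes:
      - ./altus-config:/config    # placeholder persistent config path
```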


WINNER Telos Alliance Omnia® Forza Audio Processing Software

From the legendary Omnia® team comes Forza, a brand-new approach to the multiband audio processor. Forza’s all-new AGCs and multiband limiters breathe new life into the traditional five-band processor design, yielding a sonic profile that delivers a consistent and polished audio signature without sounding overly processed. Forza debuts as a stereo processor optimized for HD, DAB, and streaming audio applications. As consumers increasingly embrace online listening, proper audio processing is as essential for streaming as it is for FM signals. Omnia’s highly regarded Sensus® codec conditioning for low-bitrate streams, plus a new LUFS target-driven ITU-R BS.1770 loudness controller for compliance with streaming platform requirements, make Forza ideally suited to the task. Expertly crafted "launch point" presets and an intuitive yet powerful user interface empower users of all skill levels, ensuring instant sonic excellence for listeners. Central to Forza’s smart UI is its interactive processing logic, which seamlessly maintains harmony between "under the hood" controls and settings. Anyone can confidently drive Forza without a PhD in processing, while professionals will love its powerful simplicity when crafting their unique signature sound. Forza also lends itself brilliantly to deployment within existing Telos Alliance hardware and software offerings, and is available as a mid-tier processing option in Telos Z/IPStream X/2 and R/2 stream encoding products. Forza also leverages the power and flexibility of delivery via Docker container, a broadly adopted method of delivering software used in an increasing number of new Telos Alliance products.
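A target-driven loudness controller steers program loudness toward a platform’s LUFS target. As a deliberately simplified illustration (not Forza’s actual algorithm, which measures K-weighted, gated loudness per BS.1770 and acts dynamically), the static gain needed to reach a target is just the difference in dB:

```python
def gain_to_target(measured_lufs: float, target_lufs: float) -> float:
    """Static gain (dB) that moves measured loudness to the target.

    Simplified illustration only; a real BS.1770 controller measures
    K-weighted, gated loudness and applies gain dynamically.
    """
    return target_lufs - measured_lufs

# A stream measuring -19 LUFS, aimed at a -16 LUFS platform target,
# needs +3 dB of gain; the linear scale factor is 10^(dB/20).
gain_db = gain_to_target(-19.0, -16.0)
scale = 10 ** (gain_db / 20)
print(gain_db, round(scale, 3))  # 3.0 1.413
```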


WINNER Telos Alliance Z/IPStream® X/20 and R/20

Z/IPStream® X/20 and R/20 represent the next generation of stream encoders and processors from the global leaders in broadcast audio, Telos Alliance®. Z/IPStream X/20 is an all-in-one streaming audio software encoding and processing platform for Windows PCs and servers; Z/IPStream R/20 is a dedicated 1RU streaming and processing hardware appliance with AES/EBU, Livewire®, and AES67 I/O. Z/IPStream X/20 and R/20 feature all of the high-quality codecs and versatile encoding features broadcasters rely on, including adaptive streaming in support of Apple HLS and Microsoft Smooth Streaming formats.

They also set the stage for new features and processing options, such as Omnia® Forza, a brand-new approach to the traditional five-band processor design that yields a consistent, polished audio signature without sounding overly processed. Nielsen and Kantar watermark encoding are also available, as is Déjà Vu, the surround-sound upmixer from Omnia founder Frank Foti that transforms stereo content into a rich, enveloping multi-channel experience. Z/IPStream X/20 and R/20 will be available in mid-October 2023; upgrades are available for clients with Z/IPStream X/2 and R/2 products.


NOMINEE Telos Alliance Axia® Quasar Engine RPS

Axia Quasar mixing consoles have proven to be the most flexible, intuitive broadcast consoles available for radio and TV applications, delighting broadcasters around the world with their advanced capabilities and user-friendly features. Now, to complement the top-of-the-line Quasar XR and streamlined Quasar SR, comes the latest addition to the Quasar family: Quasar Engine RPS, which builds on the success of our proven Quasar Engine platform by bringing many of the features and functions normally managed within the surface to the engine itself.

It leverages the power of the browser-based Quasar Soft UI and Quasar Cast to create a standalone mixer without the need for a physical surface. Quasar Engine RPS is ideal for any application where a traditional console is impractical, such as small studios and backup facilities, or as a cost-efficient backup for a physical console. Quasar Engine RPS will be available in mid-October 2023; clients with existing Quasar Engines can easily upgrade them on-site using an RPS Upgrade Kit.


WINNER TELSAT Broadcast Smart Platform

With the implementation of DTT and radio standards, we have integrated and installed thousands of transmitters globally, becoming a key player in broadcast networking; but we never considered our work done. We continued to explore new ideas and opportunities for how we could implement networks, and began to focus on the goal of bringing broadcast signals to populations that are hard to reach using standard technologies and systems. That’s why, in partnership with our valued and skilled partners, we developed a new concept for a broadcasting site: the Broadcast Smart Platform (BSP). Everybody in our sector can manufacture transmitters and related devices to a good quality, so we wanted to go some steps further. We understood that the requirement wasn't just a new transmitter with a classic receiver in a small shelter: we needed a complete system, with a very flexible and customizable architecture to satisfy the different broadcasting regulations of countries and governments around the world, for both TV and radio. The system we set out to design had to offer high efficiency and low power consumption; it also needed to be powered by alternative sources such as solar panels or wind energy (let’s go green!).

It then needed to be very robust, maintenance-free, and very easy to handle and install. But most of all, it needed to be completely remotely monitorable and configurable. And so the BSP was born: a complete portable broadcasting site, realized in a single, mast-mounted, small-sized weatherproof enclosure for outdoor use. We have always been conscious that reliable, state-of-the-art hardware was an important ingredient, but we immediately understood that the strategic heart of the system had to be the integrated management software, which needed to be a complete solution and as user-friendly as possible. With our BSP networks we can cover large critical areas with a cell-based network model in a smart topological way, using low-power transmitters and avoiding unnecessarily high costs to cover unwanted areas. Our BSP allows scalable investments, where the ROI would be impossible to achieve with the traditional business model.


WINNER Wheatstone Corporation Layers Stream running on Amazon Web Services

Showgoers experienced AoIP 5.0 at IBC stand 8.C91 with a hands-on demonstration of Wheatstone’s new Layers Stream software running on Amazon Web Services (AWS). Layers Stream features stream provisioning, audio processing and metadata software running on an on-prem server or public cloud, with easy setup and control through a browser interface. Included are audio processing designed specifically for streaming applications and Lua transformation filters to convert metadata input from any automation system into any required output format, including Triton Digital, for transmission to a CDN server. Layers is part of the WheatNet IP audio network environment that includes mixing, editing, scripting, virtualization, and intelligent AoIP working together to create less work and more flow inside and outside the broadcast studio. The WheatNet IP audio network has more than 200 interconnected studio elements and software apps to choose from, all engineered, manufactured and supported under one roof by the industry’s most trusted AoIP provider. In addition to Layers Stream, Wheatstone’s Layers Software Suite features a Layers FM software module and a Layers Mix software module for the on-prem server or regional data center, making it possible to replace racks of processors, desktops and mix engines and/or extend studio failover redundancy across multiple data centers.
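Layers Stream implements its transformation filters in Lua; the underlying idea, mapping whatever key/value metadata an automation system emits into the shape a CDN expects, can be sketched in a few lines. The field names below are invented for illustration, not Wheatstone's schema:

```python
# Hypothetical sketch of a metadata transformation filter: map the
# (invented) fields of one automation system's now-playing message
# into a different (also invented) output schema. Layers Stream
# implements this idea with user-supplied Lua filters.
def transform(event: dict) -> dict:
    artist, _, title = event.get("song", "").partition(" - ")
    return {
        "Artist": artist.strip(),
        "Title": title.strip() or event.get("song", ""),
        "DurationSec": int(event.get("len_ms", 0)) // 1000,
    }

raw = {"song": "Artist Name - Track Title", "len_ms": "183000"}
print(transform(raw))
```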


WINNER WorldCast Systems Audemat MC6

WorldCast Systems proudly presents the Audemat MC6, the most comprehensive and versatile test and measurement platform available for DAB/FM broadcast. Designed to empower broadcasters, operators, and regulatory bodies, this revolutionary tool raises the bar for radio service quality while impeccably adhering to FM and DAB regulations. Building upon the solid foundation of the field-proven Audemat FM MC5 for FM measurement, and drawing inspiration from the award-winning Audemat DAB Probe for QoS/QoE monitoring, the Audemat MC6 stands as a testament to WorldCast Systems’ expertise in both DAB and FM technology. The Audemat MC6 steps forward with an expanded array of cutting-edge features that ensure its relevance across a diverse spectrum of users, enabling them to harness its full potential. The Audemat MC6 is meticulously engineered to provide users with a comprehensive, multi-technology solution for mobile RF coverage measurement and extensive modulation analysis. Boasting multi-DAB measurement capabilities with dual receivers plus additional multi-FM receivers, the Audemat MC6 asserts its position as the most complete solution in its category. Despite its powerful capabilities, the Audemat MC6 remains remarkably compact, easily fitting in a shoulder bag. Its robust construction ensures resilience against the most demanding field conditions, facilitating uninterrupted performance in any environment.

With features such as DAB/FM drive tests, GoldenEar for mathematical and objective quality ratings during drive tests, and DAB/FM commissioning, the Audemat MC6 ensures precise and versatile signal measurement. Its fully digital, high-precision measurement capabilities deliver dependable data crucial for uncompromising broadcast quality. The Audemat MC6 is not only technologically advanced but also intuitively user-centric. It integrates a customizable feature for automatic measurement reports, which lightens the operator’s workload while boosting operational efficiency. As broadcasting networks expand, the Audemat MC6 effortlessly scales to accommodate growth, underscoring its enduring utility. A user-friendly graphical interface ensures that operators can navigate its capabilities with ease. Gregory Mercier, Director of Product Marketing, says of the Audemat MC6, "We have expanded our existing technology base, drawing on our extensive experience in FM and DAB technologies to deliver further innovation for users. Our goal is to provide a complete test and measurement solution that delivers comprehensive data while ensuring ease of use and mobility for our customers." The Audemat MC6 reflects WorldCast Systems’ unwavering commitment to continuous enhancement in alignment with the evolving demands of the global broadcast industry. The solution will be available for delivery by the end of 2023 and is positioned to emerge as a game-changer in the field of DAB/FM test and measurement.


WINNER Xperi / DTS DTS AutoStage Broadcaster Portal

Improving on century-old technology – broadcast radio – DTS AutoStage™ is the only global entertainment platform for the connected car that seamlessly combines linear and on-demand content in a unified, user-centric experience. DTS AutoStage was developed with radio as its anchor, and its support of radio is unique in scale and capability. This year, DTS AutoStage launched its Broadcaster Portal, which gives radio broadcasters unprecedented access to listener engagement data, including listener heat maps, dayparts from the previous 24 hours, and the songs, ads and program segments listeners enjoy the most. The easy-to-use dashboard is completely free to broadcasters, and is invaluable in helping sales teams and program managers understand listener engagement, connect with target audiences, and power new revenue opportunities with brands and advertisers. "This new DTS AutoStage data provides the missing link for how radio plays a starring role in cars," said Fred Jacobs, president, Jacobs Media. "Traditionally, AM/FM radio's top listening location is cars and trucks, but ratings information has always been vague. The DTS AutoStage Broadcaster Portal unlocks many new data points: geographic targeting on the road, reactions to in-car content, and 'heat maps' that overlay listening with shopping locations. That last part is key for radio sales, always in need of more data to provide 'windshield advertisers' with meaningful data."

Data insights include:
• Full broadcaster control over all metadata for each of their stations (logos, slogans, genres, social media, contact information, etc.). Changes made by broadcasters are reflected in DTS AutoStage vehicles immediately.
• Heat map: a graphical representation of the station’s coverage area based on the vehicles in that market that are listening to the station. Broadcasters can select different times of the month and week based on the different transmission systems they use (AM, FM, DAB, CDR or HD Radio). This map allows broadcasters to zoom into different areas, down to the neighborhood level.

• Listening charts that show what users are listening to: a music chart of the most popular songs, an ad chart of the most-listened-to ads, and program-level information including the most popular program segments.
• Dayparting: broadcasters can see when users are listening, with that information available 24-48 hours later. If the station’s format or on-air talent changes, or for the results of sponsored events, information is available to broadcasters within 24-48 hours.
• Station reach/change: the station coverage map is based on the location of vehicles within the coverage area and shows the percentage of total vehicles in the market listening to the station, and how that percentage changes over time.
• Privacy: no personally identifiable information is used, so broadcasters are compliant with all local, regional and country-based privacy protection laws. Broadcasters only see their own data.
With millions of cars on the road using DTS AutoStage technology, the Broadcaster Portal means that radio can now be part of the world of "Big Data," allowing station owners to know more about their listeners and to make actionable, data-driven decisions based on real-world usage.
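The dayparting and chart insights described above boil down to aggregating timestamped listening events. A minimal sketch of the idea (the event fields and values are invented for illustration; real portal data is far richer):

```python
from collections import Counter

# Hypothetical listening events: (hour of day, song). Illustrative
# only; this shows the aggregation idea behind dayparts and charts.
events = [
    (7, "Song A"), (7, "Song B"), (8, "Song A"),
    (17, "Song C"), (17, "Song A"), (18, "Song B"),
]

by_hour = Counter(hour for hour, _ in events)     # daypart histogram
song_chart = Counter(song for _, song in events)  # most-played chart

print(by_hour.most_common(1))     # busiest hour and its play count
print(song_chart.most_common(1))  # top song and its play count
```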


WINNER Amazon Web Services AWS Elemental Link UHD Integration with AWS Elemental MediaConnect

A new update for AWS Elemental Link UHD, an encoding device for contributing live HDMI and SDI video sources (e.g. cameras and video production equipment) to the cloud, introduces support for AWS Elemental MediaConnect, a secure, reliable cloud-based video transport service. Leveraging Link UHD with MediaConnect, live production professionals, sports and entertainment venues, educational institutions, houses of worship, corporate facilities, and systems installers can easily establish ground-to-cloud workflows for processing and distributing live video, saving time and resources compared to alternative on-premises, satellite, or fiber infrastructure. The latest Link UHD update also benefits streaming video platform providers and companies that transport video to multiple monitoring destinations or for use in live streaming and broadcast applications. Across the board, it provides more flexible video processing options, broadcast-grade monitoring, and seamless integration of ISV or AWS Partner applications into live video applications. Getting Link UHD and MediaConnect up and running is straightforward: users plug in video and Ethernet, power up Link UHD, then log into the AWS Management Console or use the API to create a flow. Video feeds are accessible in MediaConnect prior to MediaLive encoding, enabling greater control over processing and reducing latency. MediaConnect returns an ingest endpoint and then replicates and distributes the video stream inside and outside of AWS to global destinations. Users can create simple, reliable, and cost-effective video workflows catered to their unique requirements. Expressing enthusiasm for the update, NHL SVP of Technology Grant Nodine said, "We use Link UHD to transmit 4K video from fixed camera angles in our arenas for officiating, hockey operations, and player health and safety.
With MediaConnect, we seamlessly deliver video feeds to partners anywhere in the world using entitlements and perform transformations on feeds to produce outputs for streaming and archival. The overall ease of use and enhanced automation capabilities make Link UHD invaluable in supporting our game night operations."

Dave Evans, VP of Product for M2A CONNECT at M2A Media, added: "We love it when our customers use AWS Elemental Link devices, because they are simple to set up and manage from AWS, and they are a cost-effective way to distribute video streams. Some of our customers also need to monitor their video feeds and deliver them to their partners for redistribution. With Link UHD and MediaConnect, we can send low-latency live video confidence monitoring streams to a multiviewer while simultaneously sending the same feeds for video processing. Delivering live video from an event using MediaConnect to affiliates across the world is possible in just a few clicks." Link UHD is easy to deploy in nearly any location by staff with little technical background. With integrated support for MediaConnect, customers can easily contribute live video sources captured on premises to the cloud, with more control over video processing. MediaConnect lets users send video sources directly to Amazon Elastic Compute Cloud (Amazon EC2) instances, and deploy applications using software from ISVs and AWS Partners, for low-latency video distribution at its original contribution quality.
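Flows can also be created programmatically. Below is a hedged sketch using boto3, the AWS SDK for Python; the flow name, whitelist CIDR and port are placeholder values, and since a real call needs AWS credentials, only the request parameters are constructed here:

```python
# Hypothetical sketch: build the parameters for a MediaConnect
# create_flow call receiving an SRT source (placeholder values).
# With credentials configured, you would pass this dict to
# boto3.client("mediaconnect").create_flow(**params).
def build_flow_params(name: str, cidr: str, port: int) -> dict:
    return {
        "Name": name,
        "Source": {
            "Name": f"{name}-source",
            "Protocol": "srt-listener",  # SRT, as supported by Link UHD
            "IngestPort": port,
            "WhitelistCidr": cidr,       # restrict who may send to the flow
        },
    }

params = build_flow_params("link-uhd-demo", "203.0.113.0/24", 5000)
print(params["Source"]["Protocol"])  # srt-listener
```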


NOMINEE BZBGEAR BG-4K-VP88: 8X8 Matrix Switcher/Video Wall Processor/Multiviewer

Introducing the BG-4K-VP88 Series: the Swiss Army knife of AV distribution. The BG-4K-VP88 is a highly beneficial product for systems integrators, installers, architects, and end users of professional audiovisual (AV) systems worldwide. This advanced device supports eight HDMI inputs that can be independently routed to eight HDMI outputs, offering unparalleled flexibility in connecting various sources to different displays or devices. Its video wall capabilities allow users to create captivating and immersive configurations up to 3x3, with up to seven different modes for each monitor, making it ideal for digital signage, command centers, and large presentations. The BG-4K-VP88’s multiview capabilities enable users to view up to eight pictures simultaneously on the same screen, catering to applications that require monitoring multiple sources at once, such as control rooms and live event productions. Furthermore, the device boasts high-quality imagery with maximum resolutions of up to 4K at 60Hz 4:4:4, ensuring exceptional visual performance with pristine detail and rich color reproduction. In terms of audio, the product supports HDMI audio formats like Dolby 5.1 and DTS 5.1 and allows for analog audio extraction, giving users the flexibility to handle audio independently and suit various AV setups. HDCP 2.2 compliance ensures copyright protection for content transmission, making it suitable for commercial environments with copyrighted materials. The BG-4K-VP88 offers multiple control options, including front-panel controls, IR remote control, RS-232, IP control, and the state-of-the-art BGSWITCH-CONTROL app available for Windows, iOS, Mac, and Android. Its IR receiver and remote control facilitate clean installations by allowing the unit to be controlled out of sight. Moreover, the device provides advanced configuration options through a full-featured web interface and control software accessible via the RS-232 connection. This includes EDID management, seamless scaling, mapping, and network settings adjustment, allowing users to tailor the AV system to their specific requirements. Overall, the BG-4K-VP88 is a comprehensive and reliable solution for professional AV installations, delivering top-notch video and audio management, versatile routing and configuration, and seamless control options. Its capabilities make it a valuable asset in various settings, such as corporate environments, entertainment venues, educational institutions, and broadcast facilities.
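An 8x8 matrix's routing state is just a mapping from each output to one input, which is what makes fully independent routing possible. A minimal model of the idea (not vendor code; real control happens over RS-232 or IP commands):

```python
# Minimal model of an 8x8 matrix switcher's routing table: each of
# the 8 outputs is fed by exactly one of the 8 inputs.
class MatrixSwitcher:
    def __init__(self, size: int = 8):
        self.size = size
        self.routes = {out: out for out in range(1, size + 1)}  # 1:1 default

    def route(self, inp: int, out: int) -> None:
        if not (1 <= inp <= self.size and 1 <= out <= self.size):
            raise ValueError("port out of range")
        self.routes[out] = inp  # one input may feed many outputs

sw = MatrixSwitcher()
sw.route(inp=3, out=1)  # send input 3 to output 1
sw.route(inp=3, out=2)  # ...and also mirror it to output 2
print(sw.routes[1], sw.routes[2])  # 3 3
```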


NOMINEE BZBGEAR BG-ADAMO-4K Series: 4K AI Auto-Tracking PTZ Camera

The BG-ADAMO-4K PTZ camera is a highly versatile and innovative device that offers numerous benefits to professionals in the global professional audiovisual (AV) systems industry, including systems integrators, installers, architects, and end users. Its broad range of connectivity options, such as HDMI 2.0, 12G-SDI, USB 2.0, and USB 3.0, makes integration into various AV setups seamless, while Power over Ethernet (PoE) technology simplifies cabling and installation. The camera’s advanced auto-tracking capabilities, using human-detection AI algorithms, eliminate the need for additional tracking devices, streamlining setup and calibration. To ensure flawless network-based production, the camera offers a choice of Dante AV-H or NDI|HX3 connectivity, allowing you to pick the one that best suits your workflow. Architects appreciate its aesthetically pleasing design, available in classic black or white finishes, which allows it to blend seamlessly into diverse architectural settings while optimizing space with its compact form factor. End users of professional AV systems benefit from the camera’s exceptional video quality, providing 4K UHD resolution at 60Hz, perfect for live streaming, presentations, and video conferencing. Presenter tracking mode enables presenters to move freely, without additional devices, while being tracked continuously and accurately, enhancing audience engagement. Zone tracking mode provides a smooth representation of content behind the presenter, particularly useful for presentations involving whiteboards. With multiple video output options, impeccable image clarity, and convenient on-the-fly recording via the microSD card writer, this camera ensures unmatched functionality, sophistication, and user-friendly control through various methods including RS-232, RS-422, Web GUI, Control App, or IR remote. Overall, the BG-ADAMO-4K PTZ camera sets a new standard in live stream broadcasting, catering to the diverse needs of professionals in the AV industry worldwide.


NOMINEE Colorlight Cloud Tech ColorAdept+Z8t

Colorlight’s ColorAdept+Z8t is widely used in broadcasting, TV, film, stage performance and other scenarios, thanks to its user-friendly design and cutting-edge display technology. Z8t, the new flagship LED video processor, delivers ultra-high-definition quality and precise imaging, supporting a variety of input cards, including cutting-edge broadcast-grade ST 2110 IP raw media transmission. ColorAdept, paired with Z8t, deserves recognition for its exceptional design and performance. Engineered to be user-friendly, it has an intuitive interface and clear controls, making the setup of LED panel walls accessible even for beginners.

1. Innovative color processing features for exceptional image quality and precise color adjustment
The ColorAdept+Z8t showcases an innovative approach to LED video processing and color adjustment. One of its key breakthroughs lies in multi-color adjustment based on the HSV model. This allows users to independently tweak the hue, saturation, and value of specific colors without impacting the display of others. The system’s ability to customize response curves for red, green, blue, and white is another groundbreaking feature, facilitating independent fine-tuning at high brightness or low grayscale. As a result, overexposure can be prevented at high brightness, and details remain vivid at low grayscale, thanks to the Color Curve function’s compensation for differences in ambient light or camera performance. The platform also adopts 3D LUT professional color management technology, a tool widely used in the film industry, delivering precise control over the full 3D color space and yielding more accurate colors.

2. Humanized design to simplify processes and enhance efficiency
In essence, the ColorAdept+Z8t offers a humanized design that’s easy to navigate even for beginner-level clients, while delivering exceptional image quality and precise color reproduction. This is due to its intuitive graphic interface, which incorporates the signifier interaction principle from design psychology. The platform’s compatibility with both Windows and macOS extends its reach, and the option to control the system remotely over a Gigabit Ethernet network adds another layer of convenience. The ability to group processors and set parameters synchronously within these groups (such as input settings, screen brightness, test patterns, freeze, blackout, and preset triggering) simplifies the operation of large LED setups with multiple processors. The inclusion of layers becomes particularly useful when two sets of cabinets are superimposed, enabling easy content duplication across both. Unified cabinet parameter management via the cabinet library, coupled with the ability to download the latest cabinet parameters from the cloud, streamlines the process of updating parameters.

ColorAdept software can function even without a processor, so projects can be created ahead of time, including processor configurations, cabinet type selection, and layout mapping. Once a processor is online, the pre-configured project can be imported, completing the setup. The system also features an advanced monitoring system that checks inputs, temperature, humidity, network cable errors, voltage, power supply, and fan performance. In case of any abnormality, email alerts are dispatched, enabling 24-hour unattended operation. The innovative features of the ColorAdept+Z8t platform have significantly boosted clients’ efficiency and output quality, creating an enhanced audiovisual experience for LED wall end users.
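To illustrate the idea behind HSV-based multi-color adjustment, here is a minimal sketch, not Colorlight's implementation: a pixel is converted to HSV, and its saturation and value are modified only when its hue falls inside a selected band, leaving all other colors untouched. The function name and gain parameters are hypothetical.

```python
import colorsys

def adjust_color(rgb, target_hue, tolerance, sat_gain=1.0, val_gain=1.0):
    """Adjust saturation/value only for colors whose hue falls near target_hue.

    All components are in the 0.0-1.0 range used by Python's colorsys module.
    """
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    # Hue is circular, so measure the distance the short way around the wheel
    dist = min(abs(h - target_hue), 1.0 - abs(h - target_hue))
    if dist <= tolerance:
        s = min(1.0, s * sat_gain)
        v = min(1.0, v * val_gain)
    return colorsys.hsv_to_rgb(h, s, v)

# Boost the saturation of reds by 20%; the green pixel passes through untouched
red = adjust_color((0.8, 0.2, 0.2), target_hue=0.0, tolerance=0.05, sat_gain=1.2)
green = adjust_color((0.2, 0.8, 0.2), target_hue=0.0, tolerance=0.05, sat_gain=1.2)
```

A full processor applies this per pixel in hardware; the sketch only shows why the HSV decomposition makes selective adjustment possible.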


WINNER Lumens Digital Optics CamConnect Pro (AI-Box1 CamConnect Processor)

Lumens CamConnect Pro is a transformational technology that automates multi-camera operation for video conferencing and video production. It is designed to:
• Deliver equity in hybrid video conference meetings.
• Enrich visual radio production.
• Reduce the cost of managing small video studios.
• Provide engaging digital learning for remote education.

Installation CamConnect Pro runs on a dedicated processor. This is installed quickly and cost-effectively on existing LAN installations. Running on the same IP infrastructure, PTZ cameras can be connected by a single Ethernet cable which provides power, control and video signals. Installation of CamConnect Pro and integration of microphones and cameras is usually completed in minutes.

The concept Currently, many studio, event, training and huddle spaces rely on a single wide-angle video camera or a traditional multi-camera set-up. The wide-angle camera is low-cost and easy to manage but provides a static, uniform view which is unengaging and hard to follow. The traditional multi-camera production is highly effective but requires a skilled operator. CamConnect Pro brings the best of both worlds: it is simple to install, cost-effective, runs without user input and automatically delivers engaging multi-angle video.

Integration CamConnect Pro is exceptionally versatile. Using industry-standard broadcast and AV outputs, it can be integrated into a wide range of video and UC installations. It has HDMI and USB outputs which means that it can be integrated with monitors, encoders, IPTV systems and video conferencing platforms. It supports collaboration and sharing tools including Barco ClickShare and Inogeni Toggle. CamConnect Pro works transparently with installations which use a dedicated digital signal processor.

How it works CamConnect Pro works with Nureva, Sennheiser, Shure and Yamaha directional microphone arrays. These microphones locate the real-time position of active voices and transmit the data to the Lumens processor. CamConnect Pro uses this positional data to focus PTZ cameras on the current presenters. This all works out of the box, without the need for advanced programming or a third-party controller. CamConnect Pro supports up to 16 microphone arrays and 4 PTZ cameras. It therefore has the flexibility to be installed in anything from a two-person radio studio or small huddle room up to a conference facility or lecture hall. In these larger spaces, CamConnect Pro enables audience participation, with cameras able to locate and focus on questions from the floor as well as presentations from the stage.
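The voice-tracking flow above can be sketched in a few lines. This is a hypothetical illustration of the concept, not Lumens firmware: the microphone array reports the bearing of the active talker, and the controller recalls whichever camera zone covers that bearing. The zone boundaries are invented for the example.

```python
# Illustrative camera zones (degrees); real installations would calibrate these
CAMERA_ZONES = {
    "cam-1": (0.0, 90.0),     # stage left
    "cam-2": (90.0, 180.0),   # stage right
    "cam-3": (180.0, 360.0),  # audience / questions from the floor
}

def select_camera(bearing_deg, zones=CAMERA_ZONES):
    """Return the camera whose zone contains the reported talker bearing."""
    bearing_deg %= 360.0  # normalize so any angle maps onto the zone table
    for cam, (start, end) in zones.items():
        if start <= bearing_deg < end:
            return cam
    return None

print(select_camera(45.0))   # a talker at 45 degrees selects "cam-1"
print(select_camera(270.0))  # a voice from the floor selects "cam-3"
```

In practice the selected camera would then be steered to a stored PTZ preset for that zone; the mapping table is what removes the need for a human operator.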

Advanced features and roadmap CamConnect Pro ships in September 2023 with advanced production features built in, including support for split-screen two-camera and quad-view output, enabling two or four angles to be displayed simultaneously. The 2023 product development roadmap sees the roll-out of Profiles, so that the room or studio manager can instantly switch between different room layouts. This is especially useful where a single space is divided into multiple training rooms, each requiring an individual AV system.

Advantages of CamConnect Pro
• Automated multi-camera production, with no programming or operator required.
• Cost-effective and easy-to-manage end-to-end IP installation.
• Supports industry-standard AV, VC, UC and video production workflows.
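The split-screen and quad-view outputs mentioned above amount to tiling the program frame. As a rough sketch (not the product's actual compositor), the tile geometry for each mode can be computed like this:

```python
def quad_layout(width, height):
    """Split an output frame into four equal (x, y, w, h) tiles for a quad view."""
    hw, hh = width // 2, height // 2
    return [(x, y, hw, hh) for y in (0, hh) for x in (0, hw)]

def split_layout(width, height):
    """Split an output frame into two side-by-side tiles for a two-camera view."""
    hw = width // 2
    return [(0, 0, hw, height), (hw, 0, hw, height)]

# A 1920x1080 program output divided into four 960x540 tiles
print(quad_layout(1920, 1080))
```

Each incoming camera feed is then scaled into its tile before the composite frame is sent to the output.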


WINNER SNEWS Broadcast Solutions ARION NRCS

In the ever-evolving landscape of broadcast technology, innovation is not just a buzzword; it’s a necessity. Enter SNews Broadcast Solutions, a global player with a reputation for excellence. Our flagship product, Arion NRCS, has once again proven its mettle through a groundbreaking installation project with Rede Amazonica, reshaping the broadcast industry’s paradigm.

At the crossroads of complexity and ambition, Rede Amazonica’s sprawling network posed a challenge that SNews was uniquely positioned to tackle. As the largest SNews customer and a dominant presence in Brazil’s Northern Region, Rede Amazonica’s extensive reach demanded a solution that could seamlessly unify media management across vast territories and provide a streamlined experience for its audience of 6.2 million potential viewers. The crux of the challenge was twofold: to integrate media management and journalism across all branches, and to implement a comprehensive media management system housing a staggering 500,000 digital assets. SNews met this challenge head-on, displaying a level of technical prowess that speaks to our commitment to pushing boundaries.

A defining aspect of the project was the successful integration of five central squares, a feat rarely achieved in the broadcasting realm. This integration resulted in frictionless communication between affiliates, effectively transforming media management from a logistical puzzle into a well-orchestrated symphony. Imagine a seamless flow of information, where digital assets are cataloged, accessible, and optimized for immediate use.

SNews’ innovative approach reached new heights with the implementation of cloud media archiving. This progressive move didn’t just reduce deployment, maintenance, and recovery costs; it revolutionized the way media assets are managed and accessed. Furthermore, our solution’s transcoding prowess, converting XDCAM/MXF media to H.264/MP4 format, resulted in a substantial reduction in data storage requirements, a saving that cannot be overstated in today’s data-driven landscape.

But this isn’t just about technology; it’s about impact. With a presence in more than 150 municipalities across five states, Rede Amazonica’s reach is awe-inspiring. And with affiliations like Rede Globo, one of the world’s largest television networks, the stakes are even higher. SNews’ role in enhancing the efficiency, reach, and impact of this network is pivotal in an era where communication knows no bounds.

The successful collaboration between SNews and Rede Amazonica isn’t just a case study. As we step into an era where media is everywhere, Arion NRCS serves as a beacon of seamless integration, efficient media management, and resource optimization. This collaboration is a blueprint for broadcasters worldwide, showcasing the immense potential when innovation meets expertise.

In summary, SNews’ Arion NRCS installation with Rede Amazonica stands as a testament to the power of innovation, collaboration, and technical prowess in the broadcast industry. It redefines media management, transcending borders and fostering a seamless ecosystem of communication. This project is a demonstration of what’s possible when technology and vision converge.
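The XDCAM/MXF to H.264/MP4 conversion described above is the kind of transcode commonly run with ffmpeg. The sketch below builds such a command; the encoder settings are generic assumptions for illustration, not SNews' actual archive recipe.

```python
def build_transcode_cmd(src, dst, crf=23, audio_bitrate="192k"):
    """Build an ffmpeg argument list for an MXF -> H.264/MP4 conversion."""
    return [
        "ffmpeg",
        "-i", src,          # XDCAM / MXF source
        "-c:v", "libx264",  # encode video as H.264
        "-crf", str(crf),   # quality-based rate control
        "-c:a", "aac",      # AAC audio for the MP4 container
        "-b:a", audio_bitrate,
        dst,
    ]

cmd = build_transcode_cmd("master.mxf", "archive.mp4")
# On a machine with ffmpeg installed: subprocess.run(cmd, check=True)
```

CRF-based H.264 encoding is what makes the large storage saving possible: the long-GOP MP4 output is a fraction of the size of the intra-heavy XDCAM master.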


WINNER Vizrt TriCaster® Mini Go

For almost 20 years, TriCasters have set the standard for live video production. Putting an entire suite of media production capabilities at your fingertips, a TriCaster lets you make anything. From keynote presentations and webcasts through online training and sporting events to top-tier broadcasts, every production looks amazing. And as the demands of content creators at every level evolve, so does the TriCaster.

TriCaster® Mini Go is the easiest and most cost-effective way of accessing the best tools for a high-quality production, unlocking the storyteller within. It offers the simplest set-up yet, with a wealth of professional-level video production features that meet the needs of up-and-coming creators and grow with them. It’s a scalable solution that can be simplified or expanded as necessary, even just by using a simple converter. Not only is it the most affordable TriCaster ever, but its USB and NDI connectivity makes it possible to use existing devices like mics, cameras, and low-cost apps to bring sources into the TriCaster. The TriCaster Mini Go is quick and easy to set up and can easily be taken wherever your stories take you.

As the perfect compact production system for small, nimble productions or for content creators looking to up their game, the TriCaster Mini Go works for a gamer thinking about streaming for the first time just as it does for a small business wanting to produce its first live event, or even for school media production classes. It offers 4 NDI inputs, 2 M/Es and mix outputs at HD resolution. With supplemental audio, HTML graphics capabilities and the ability to import and use Photoshop files, TriCaster Mini Go can help create a high-quality production for less. The TriCaster Mini Go comes with Live Link for advanced HTML graphics integration, including the powerful Viz Flowics.
Viz Flowics is the most comprehensive cloud-native graphics platform on the market, powering remote and in-studio production of live graphics and interactive content. No downloads or coding are necessary; it’s all in the cloud.

In the attention economy, building an audience is increasingly important. In an infinite pool of content competing for attention, standing out is paramount. Now, professional-looking content can be made independently, levelling the playing field and allowing creators of any size to grow. Productions can be simple, skillful, and scalable. With the simplicity and accessibility of the TriCaster Mini Go, combined with the possibilities of HTML5 graphics creation, creators find the support they need to tell beautiful stories, anywhere.


