Best of Show | NAB Show 2023 | June 2023


Program Guide Showcases New Products From the 2023 NAB Show

This Program Guide takes you on a tour of new products introduced for TV, film, video, streaming, radio and pro AV professionals. The Best of Show Awards are for products introduced at the 2023 NAB Show in April. This digital guide features all the nominees and winners that participated in the Future awards program. It offers an excellent sample of new technology on the market today and allows companies to tell you in their own words why they believe a certain product is noteworthy.

Seven Future publications participated in the awards programs: TV Tech, TVB Europe, Broadcasting+Cable, Next TV, Radio World, Sound & Video Contractor and Mix. Manufacturers paid a fee for each entry and could enter multiple products. Winners were selected by panels of professional users and editors based on descriptions provided via the nomination form as well as on judges’ inspection at the convention.

Turn the page to learn more and thanks for reading!



ACTUS DIGITAL Actus OTT StreamWatch

Actus OTT StreamWatch is a new SaaS and on-prem product that enables engineers to maintain quality for FAST, IPTV and linear OTT streaming channels. It provides 24/7 OTT stream monitoring and recording of native HLS streams throughout the workflow — from encoding through delivery. And it's affordable enough for use throughout complex OTT distribution chains, monitoring renditions from multiple probe-points delivered across unmanaged networks, from a multitude of CDNs and devices.

Actus OTT StreamWatch is the first product to make it economically feasible for a much larger set of IPTV channels and OTT broadcasters to monitor their content and maintain quality throughout their entire workflow, enabling engineers to pinpoint precisely where issues originate and quickly remedy them.

It identifies Quality of Service (QoS) issues and discerns those that affect viewer Quality of Experience (QoE).

Actus OTT StreamWatch analyzes OTT content at the manifest level, at the encryption level and at the HTTP level. It displays QoS information cleanly so users can evaluate bandwidth usage, streaming media download times and buffering issues. It does this for all renditions within an HLS stream and summarizes the data so operators can recognize and address potential issues before they impact viewer QoE.
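The manifest-level checks described above can be pictured with a small, self-contained sketch. This is illustrative only and is not Actus code; the use of the Python m3u8 library, the probe URL and the thresholds are assumptions chosen for the example.

```python
# Illustrative sketch of manifest-level rendition checks on an HLS master playlist.
# Not Actus code; URL and thresholds are hypothetical.
import m3u8  # pip install m3u8

MASTER_URL = "https://example.com/channel/master.m3u8"  # hypothetical probe point
MIN_EXPECTED_RENDITIONS = 4                              # assumed QoS policy

def check_master(url: str) -> list[str]:
    """Return human-readable warnings for a master (variant) playlist."""
    warnings = []
    master = m3u8.load(url)
    if not master.is_variant:
        return [f"{url} is not a variant (master) playlist"]
    for variant in master.playlists:
        info = variant.stream_info
        res = f"{info.resolution[0]}x{info.resolution[1]}" if info.resolution else "audio-only"
        print(f"rendition {variant.uri}: {info.bandwidth} bps, {res}")
        if info.bandwidth and info.bandwidth < 300_000:
            warnings.append(f"suspiciously low declared bandwidth for {variant.uri}")
    if len(master.playlists) < MIN_EXPECTED_RENDITIONS:
        warnings.append(f"only {len(master.playlists)} renditions present")
    return warnings

if __name__ == "__main__":
    for w in check_master(MASTER_URL):
        print("ALERT:", w)
```

A production monitor would of course also fetch segments, measure download times and buffering behavior per rendition, and raise alerts through operator channels rather than print statements.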

Other valuable features of OTT StreamWatch include the ability to analyze SCTE Digital Ad Insertion (DAI) messages, alert when SCTE is missing, and allow operators to dig into the JSON information when SCTE is present but may be improperly formatted and affecting downstream DAI.

When impairments violate QoE parameters for critical audio-video issues, such as missing/frozen video, audio that is too low/too high/missing or missing captions, OTT StreamWatch sends notifications to engineers, such as pop-ups on multiviewer displays and email/SMS alerts.

In addition to live monitoring and alerting, OTT StreamWatch creates a reliable 24/7 recording of the native HLS media, which can be used to verify renditions and probe-points that experienced QoS or QoE issues, and for additional purposes such as proof-of-airing, ad verification, discrepancy reporting, clipping and repurposing.

The recorded HLS content is married to extracted or imported metadata, such as manifest information, Emergency Alert System messages and traffic system AsRun logs. This enables users to intuitively locate content based on channel/date/time, QoE issue type, CC word search and advertiser or program name. Content can be clipped and exported as proof-of-airing, published to social media or repurposed as video-on-demand content.

Actus OTT StreamWatch fills a market gap because, until now, the only products available were:

• Expensive subscriptions for OTT QoS analysis that provide extensive data but without a clear distinction of the data's impact on viewer QoE, or

• Less expensive but incomplete tools that fail to monitor all renditions or probe-points past initial encoding, often stopping at the hand-off to distribution partners that re-encode the content and create additional versions before delivery to mobile devices, smart TVs and streaming devices.

OTT StreamWatch can be purchased as a turnkey system with perpetual software licenses or as a SaaS solution on any public cloud, on the Actus private cloud, or on a customer's private cloud or virtual machine infrastructure.


Frame.io Camera to Cloud

Delivering content at speed and scale depends on a team’s ability to quickly share assets and seamlessly collaborate with all creative stakeholders. Frame.io — the pioneering cloud-based video review and approval collaboration platform that leading brands and nearly 3 million creatives have adopted — is expanding to photography and PDF documents. Now, photographers and marketing teams can benefit from a central hub where all creative teams can work side-by-side, regardless of where they are in the world and during every step of their workflow, from capture to campaign.

Frame.io Camera to Cloud (C2C) facilitates this workflow by allowing creators to upload video, photo, audio and useful data from cameras directly to Frame.io so filmmakers and photographers can start editing immediately, from anywhere in the world. New in-camera integrations with Fujifilm X-H2S and X-H2 mirrorless digital cameras and RED V-Raptor, V-Raptor XL and KOMODO camera systems allow photographers and filmmakers to transfer media directly from the camera to Frame.io with no intermediate hard drives, media cards, or third-party equipment required.

Most noteworthy is Fujifilm's in-camera C2C integration, making these the world's first digital stills cameras to natively integrate with Camera to Cloud. When paired with the FT-XH file transfer attachment to establish an internet connection, creators can transfer Apple ProRes proxy files to Frame.io as they are being recorded. Files can also be transmitted automatically, sent individually or prioritized directly on the X-H2S, going from the camera to collaborators anywhere in the world upon completion of the shot. These bandwidth-efficient, high-quality files in Frame.io are small enough to be easily shared on social media and allow creators to start the editing process immediately, swapping in the original camera files later for finishing touches.

With the help of Camera to Cloud, photographers now have the ability to capture content into the cloud, collaborate on and discuss photos within moments of their being created, and send them off to be edited almost instantaneously. Using C2C in tandem with Frame.io's newest Capture One integration, editors can start working with photographers without having to be together on set, offering more flexibility by transforming a traditionally tethered on-site experience into a remote and cloud-based workflow. The Capture One integration streamlines collaboration and eliminates on-set distractions and bottlenecks. Now, just as Camera to Cloud helps filmmakers facilitate a seamless video production and post-production workflow, photographers have a seamless photo editing system.

Frame.io has also expanded to fully support PDF documents with its collaborative review and approval tools. With this expansion, Frame.io can natively open and mark up PDF files on iPhones and iPads, aiding teams in reviewing collateral and other project-adjacent materials.

In the last year, the number of projects using Camera to Cloud has increased almost five times, with more than 6,000 productions relying on the technology. With the introduction of the first in-camera workflow with Fujifilm and RED, C2C is continuing to prove itself to be a formidable force that is game changing for the filmmaking and photography industries.


ADOBE Premiere Pro

The introduction of Text-Based Editing for Adobe Premiere Pro represents a groundbreaking shift in editing workflows, revolutionizing the way video creators approach their craft. It is the first and only professional NLE to incorporate AI-powered text-based editing features.

Historically, creating a full rough cut using footage transcripts was time-consuming and laborious — the Text-Based Editing feature in Premiere Pro accelerates this process, freeing post-production teams to focus on their craft and the creative work.

Powered by Adobe Sensei, Text-Based Editing leverages AI and speech-to-text technology to automatically transcribe media and identify separate speakers. Editors can then edit video content just by cutting and pasting sentences from the transcript, which automatically reflects the respective video in their timeline. Using Premiere Pro's custom Text-Based Editing workspace, specifically designed to work around the transcript, editors can now create a video sequence in a timeline for the director to review. Within this workspace, they can adjust font size to make the transcript easier to read and use document editing keyboard shortcuts to navigate the transcript faster. Now, using this feature, post-production teams can shape their first cut directly in the timeline, essentially making assembling a rough cut as simple as editing a Word document.
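The core idea behind text-based editing can be illustrated with a short sketch: if every transcribed word carries a source timecode, selecting sentences in the text directly yields the clip ranges an NLE places on the timeline. This is not Adobe's implementation; the data structures below are assumptions for demonstration only.

```python
# Illustrative sketch: map kept transcript sentences to timeline in/out points.
from dataclasses import dataclass

@dataclass
class Word:
    text: str
    start: float  # seconds in the source clip
    end: float

@dataclass
class Sentence:
    words: list[Word]

    @property
    def range(self) -> tuple[float, float]:
        return (self.words[0].start, self.words[-1].end)

def sentences_to_cuts(selected: list[Sentence]) -> list[tuple[float, float]]:
    """Turn the sentences kept in the text edit into in/out points,
    merging ranges that butt up against each other."""
    cuts: list[tuple[float, float]] = []
    for s in selected:
        start, end = s.range
        if cuts and abs(cuts[-1][1] - start) < 0.05:  # merge adjacent picks
            cuts[-1] = (cuts[-1][0], end)
        else:
            cuts.append((start, end))
    return cuts

# Example: keep two consecutive sentences from a transcribed interview
s1 = Sentence([Word("We", 12.0, 12.2), Word("started", 12.2, 12.6), Word("here.", 12.6, 13.0)])
s2 = Sentence([Word("Then", 13.0, 13.3), Word("it", 13.3, 13.4), Word("grew.", 13.4, 13.9)])
print(sentences_to_cuts([s1, s2]))  # -> [(12.0, 13.9)]: one merged timeline clip
```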

Premiere Pro's Text-Based Editing feature eliminates the need to pay for a separate plug-in or app subscription and the need to create a "paper cut" before moving to the video editing stage. It simplifies video editing by using text, enabling creatives to generate rough cuts faster than before — removing bottlenecks and increasing efficiency.

This feature is officially coming to Premiere Pro in May and is just one example of Adobe’s dedication to improving video editing, streamlining workflows and making the creative process more accessible to everyone. Adobe’s commitment to providing innovative features like Text-Based Editing enables users to bring their creative vision to life with greater efficiency and accuracy.


ALTEON.IO Alteon.io

Alteon.io is an all-in-one ecosystem for content creators whose mission is to democratize the creative process. With its comprehensive cloud-based content management system, proprietary media transcoder, desktop upload tool and array of other products, Alteon is leveling the playing field for creatives by delivering world-class tools that were once prohibitively expensive for anyone outside of large enterprises. Alteon is likewise focused on bridging the gap between traditional creatives and the Web3 creator economy.

Alteon.io is the product co-founder and CEO Matt Cimaglia wished he had when he was running his own creative agency for more than 20 years. It's an all-in-one ecosystem that lets creatives upload files quickly, share them securely and store them affordably long-term.

Alteon's goal is to make it easier for creators of all backgrounds to have a single source of truth for their media, from which they can do all their work, rather than relying on ad hoc combinations of applications and devices.

While content creators have a lot of tools at their disposal today, this is a double-edged sword. Some of these tools, especially in the cloud space, aren't intuitive for less tech-savvy creatives, or are frustratingly basic. Alteon users benefit from several tools bundled into one ecosystem, meaning fewer subscriptions, fewer applications that don't integrate with one another, and a faster overall creative process. It handles commenting, review, transcoding, batch meta tagging and flexibly priced cloud storage, among other features that help creatives manage the thousands of files that comprise a typical production.

This year, Alteon added two exciting new tools to its content creation ecosystem: Alteon Accelerator and the Alteon iOS app.

Alteon Accelerator

Alteon Accelerator (https://blog.alteon.io/transform-your-media-file-transfer-experience-with-alteon-accelerator) is a new desktop application that maximizes media upload speeds for content creators. Powered by IBM Aspera, Alteon Accelerator can be used on any Mac or Windows desktop and can be triggered to initiate remotely. Users can upload large files, including professional RAW file formats, directly to Alteon's secure, scalable cloud platform. From there, Alteon automatically generates proxy files of every asset, allowing project owners to share files or folders with whomever they want and set optional expiration dates and permissions.

Alteon's new iPhone app (https://apps.apple.com/us/app/alteon-io/id1666739505) creates a direct link between Apple's powerful camera technology and Alteon's comprehensive content-management system. After downloading the app, Alteon users can send footage from their iOS device directly to their Alteon Cloud account for secure storage and collaboration. Once the files are uploaded, remote team members — such as video editors — can immediately begin the post-production process. Alteon Cloud automatically transcodes all video files, and users can integrate the platform straight into Final Cut Pro using Alteon's popular workflow extension (https://blog.alteon.io/alteonfinal-cut-pro-workflow-extension-launch).

After content is uploaded to Alteon, creators can select files and folders to add custom, searchable meta tags; leave color-coded, time-stamped comments on video or audio files; share files and projects with anyone in the world; send secure screener links with optional password protection and expiration dates; and move finished projects into lower-cost storage tiers to save money.


AMAGI

Amagi NOW

Amagi NOW is a modular Software as a Service solution, offering unified and comprehensive media management and monetization capabilities in the cloud for delivering content to multiple linear and VOD platforms. The solution automates processes that increase operational efficiencies, thereby accelerating time to market and revenue realization. It is a single self-serve portal with a superior user experience that focuses on the automation of daily tasks with meaningful and actionable feedback. It helps improve content reach through pre-integration with platforms for both linear and VOD deliveries.

Amagi NOW gives media companies greater control over channel creation and management by reducing the complexity of bringing up and managing channels. The solution automates the orchestration of infrastructure so you can set up your channel in minutes, giving customers control of content production, management, scheduling, distribution and monetization in a unified and intuitive way. It is integrated with Amagi’s flagship products Amagi CLOUDPORT and Amagi THUNDERSTORM, making it an all-encompassing solution that enables content brands to realize revenue, increase audience size and sustain viewer retention with engaging experiences and contextual advertising.

Competitive Advantage:

Video streaming — particularly Free Ad-supported Streaming (FAST) — is experiencing unprecedented growth. New channels are entering the fray every day and content creators are fighting to retain audience favor. In this fiercely competitive landscape, technology innovation can play a critical role in helping streaming companies stay ahead of the game. Designed for content owners and distribution platforms to deliver premium viewer experiences on all devices, Amagi NOW is a cost-effective solution that provides a fully integrated system to control content ingest, scheduling, delivery, distribution, monetization and data analytics workflows in a connected and intuitive manner while automating QC. With the unification of linear and VOD workflows, customers can benefit from ingesting and managing content once while delivering and monetizing their content in different formats to global platforms, including their owned and operated properties.

Key Features:

Amagi NOW can serve as a technology backbone for every type of media company in the market, be it emerging content creators from the digital world or established media conglomerates diversifying with secondary channels. It is a one-solution-fits-all approach to channel management that

• reduces costs and expedites time to market

• enables unified, multi-point distribution with integration agility

• optimizes all available monetization avenues

• monitors content deliveries to every platform to alert you ahead of time of possible errors in video or metadata

Additionally, Amagi NOW offers rich insights on performance, monetization and attribution for viewership metrics through a single platform by integrating with Amagi THUNDERSTORM. It allows content brands to reach all types of devices, avoid ad blockers and deliver better-quality viewing experiences with integrated server-side ad insertion. Platform partners also gain content discovery, quick addition of channels, faster time to market, and pre-integration for linear and VOD deliveries. Amagi NOW frees media companies from the complexities of video processing and delivery and instead, enables them to stay ahead of the competition by optimizing monetization opportunities and bringing their viewers a superior, loyalty-generating experience.


AMAZON WEB SERVICES (AWS)

AWS Color in the Cloud

AWS Color in the Cloud enables high-fidelity reference monitoring from the cloud in up to 12-bit color depth and 4:4:4 chroma subsampling for color grading, compositing and quality control. It comprises AWS Cloud Digital Interface (CDI), a network technology for high-quality uncompressed video transport inside AWS, and the AWS Elemental MediaConnect high-quality video transport service, enabling real-time JPEG XS encoding. Together they allow any application that requires high-fidelity reference monitoring, such as color grading, compositing or quality control, to run in the cloud.

By leveraging JPEG XS, signals can be transmitted over AWS Direct Connect, AWS VPN or public internet while maintaining lossless image quality. The transcoding is offloaded to an AWS managed media service so there is no computational or graphics overhead, preserving the artist experience.
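Some back-of-the-envelope arithmetic shows why a mezzanine codec matters here. The uncompressed rate follows directly from the monitoring specs quoted above; the 10:1 ratio is an assumed, typical-range JPEG XS compression factor, not an AWS figure.

```python
# Illustrative bandwidth math for a UHD, 12-bit, 4:4:4 reference monitor feed.
WIDTH, HEIGHT, FPS = 3840, 2160, 60
BIT_DEPTH, PLANES = 12, 3            # 12-bit samples, 4:4:4 = three full-resolution planes

uncompressed_bps = WIDTH * HEIGHT * FPS * BIT_DEPTH * PLANES
assumed_jpeg_xs_ratio = 10           # assumed compression ratio for illustration only

print(f"Uncompressed: {uncompressed_bps / 1e9:.1f} Gb/s")                       # ~17.9 Gb/s
print(f"At ~{assumed_jpeg_xs_ratio}:1: {uncompressed_bps / assumed_jpeg_xs_ratio / 1e9:.2f} Gb/s")
```

Roughly 18 Gb/s uncompressed shrinks to a couple of gigabits per second, which is what makes transport over Direct Connect, VPN or the public internet practical while the managed services absorb the encode/decode load.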

MediaConnect supports up to 12-bit color depth, with 4:4:4 chroma subsampling, JPEG XS in Rec. 709, Rec. 2020 and DCI-P3. CDI, an open-source SDK, serves as the backbone of the workflow and can be integrated into any grading, digital intermediate, VFX or quality control application to transport uncompressed video from the application to the AWS Elemental suite of products. MediaConnect receives the uncompressed video signal and delivers the feed for primary grading monitoring, a secondary over-the-shoulder view, or both.

For primary grading monitoring, MediaConnect is linked with the intoPix JPEG XS encoding library. This allows for extremely low-latency encoding and compression of the signal for transmission. An IP-based JPEG XS decoder then converts the signal back to uncompressed video and feeds it over SDI or HDMI to a grading monitor. For a secondary over-the-shoulder view, MediaConnect can fork an ancillary signal over CDI to AWS Elemental MediaLive, which creates streaming variants that are then passed to AWS Elemental MediaPackage to form an HLS package.

Cloud-based technology has been transformational for the entertainment industry; however, some parts of the workflow, like high-fidelity color monitoring, have remained tethered to on-premises equipment due to complex requirements. The AWS Color in the Cloud workflow supports cloud-based color and finishing using a lossless codec, with ultra-low-latency encoding, allowing post-production activities to be conducted on AWS in 10-bit or 12-bit color depth, with 4:4:4 or 4:2:2 chroma subsampling, and in the correct color space for the deliverable being worked on. This is a significant advancement from the 8-bit 4:2:0 sRGB color space limitations of pixel streaming clients, which previously precluded color grading, VFX compositing, digital intermediate, finishing and master quality control from running in the cloud. AWS Color in the Cloud is a pioneering technical innovation set to propel the entertainment industry forward, making it an achievement deserving recognition.


APANTAC Mi-16-NDI

The cost-effective and compact Mi-16 Series of multiviewers now accepts full-bandwidth NDI (Network Device Interface) inputs. The new NDI support means that the Mi-16 can interface with video devices and sources connected over an IP network and adds to Apantac's growing NDI-supported ecosystem of solutions.

The Mi-16-NDI Multiviewer series can display up to 16 NDI streams or sources (1080P) in a variety of layouts and supports up to HDMI 2.0, 12G and 3G SDI outputs with supported output resolutions up to 2160P@60 Hz.

As a fully featured multiviewer, the Mi-16-NDI series offers an extensive on-screen display for borders, dynamic and static labels, UMDs, tally LEDs, alarm tags, clocks and logos. This helps each source to be clearly displayed and labeled, and users can easily select which sources to view in full screen if necessary. The Mi-16-NDI also decodes up to eight embedded audio channels per NDI input and offers a total of 64 audio meters.


This family of multiviewers offers low latency (a single frame of processing delay) and low energy usage.

The passive loop outs protect the input source and eliminate the need for external distribution amplifiers. The source-duplicating function and internal routing eliminate the need for routers, a feature often found only in expensive multiviewers. The inclusion of under-monitor and standalone labels eliminates the need for physical UMDs. The analog and digital clocks and counters eliminate the need for physical clocks and counters. The low-latency, single frame of delay is ideal for live production applications.

There are four models in the Mi-16-NDI multiviewer range:

Mi-16-NDI — 16x1 High Bandwidth NDI Input Multiviewer

• 16x 1080P NDI inputs

• 2 simultaneous and identical HDMI and SDI outputs

Mi-16-NDI+ (plus) — Dual Output Multiviewer (8+8) — 8x2 Full Bandwidth NDI Input Multiviewer

• 16x 1080P NDI inputs

• 2 separate HDMI and SDI outputs with 8 windows each

Mi-16-NDI# (sharp) — Dual Output Multiviewer (16x2) — 16x2 Full Bandwidth NDI Input Multiviewer

• 16x 1080P NDI inputs

• Dual outputs whereby each input can be resized and duplicated up to 16 times and can be assigned to both outputs

• Each independent output can display up to 16 windows

Mi-16-NDI-UHD — 16x1 Full Bandwidth NDI Input Multiviewer With UHD Support

• 16x 1080P NDI inputs

• UHD outputs: HDMI 2.0, 12G SDI and 3G SDI


ATELIERE CREATIVE TECHNOLOGIES

Ateliere Connect

Ateliere Connect is the cloud-native media supply chain that scales elastically to meet demand, making managing, storing, packaging and delivering video content to OTT, broadcast and cable endpoints simple and cost-effective.

Key capabilities include its AI-powered video deduplication technology, which reduces cloud storage footprint by more than 70%, and its data analytics functionality, which enables customers to optimize sales, plan resources and more.

Video Deduplication

One video title could generate hundreds of versions to meet compliance and localization requirements. Deep Analysis/FrameDNA deduplicates video, simplifying delivery and minimizing cloud storage.

Ateliere FrameDNA AI/ML fingerprints every frame upon ingest and, based on structural similarities, identifies the scenes that are different. Deep Analysis then automatically extracts the variant clips, eliminating manual scanning. Parallel processing and auto-scaling accelerate the entire process.

The second key technology at work in Deep Analysis is IMF generation in the cloud.

Deep Analysis converts the results of its scan into a base CPL (the original version) that contains your original material and various supplemental Composition Playlists (CPLs) that describe how to combine your original material and deltas together to compose different versions. These CPLs can then be used as the input to flatten out for final deliveries.
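The general idea of frame-level deduplication can be sketched in a few lines. This is a conceptual illustration only, not Ateliere's FrameDNA: SHA-1 stands in for a perceptual fingerprint, and the "frames" are stand-in byte strings.

```python
# Conceptual sketch: fingerprint frames of two versions, keep only differing segments.
import hashlib
from itertools import groupby

def fingerprint(frame_bytes: bytes) -> str:
    # A real system would use a perceptual/structural hash; SHA-1 stands in here.
    return hashlib.sha1(frame_bytes).hexdigest()

def differing_segments(base: list[str], variant: list[str]) -> list[tuple[int, int]]:
    """Return [start, end) frame ranges where the variant diverges from the base."""
    n = max(len(base), len(variant))
    diff = [i >= len(base) or i >= len(variant) or base[i] != variant[i] for i in range(n)]
    segments = []
    for is_diff, run in groupby(range(n), key=lambda i: diff[i]):
        frames = list(run)
        if is_diff:
            segments.append((frames[0], frames[-1] + 1))
    return segments

# Ten identical "frames", with frame 3 replaced by a localized title card
base = [fingerprint(bytes([i])) for i in range(10)]
variant = base[:3] + [fingerprint(b"localized title card")] + base[4:]
print(differing_segments(base, variant))  # -> [(3, 4)]: only this delta needs storing
```

In CPL terms, the base list plays the role of the original composition, and each differing segment becomes a delta that a supplemental playlist splices in when a specific version is flattened for delivery.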

Data Analytics

Ateliere data analytics capabilities enable customers to easily measure media supply chain volume and performance. Powered by AWS QuickSight, the new functionality aggregates all processing events from acquisition to distribution into a single, accessible data warehouse. High-level KPIs are presented in an at-a-glance dashboard, giving customers visibility into titles processed, rejected and delivered and enabling stakeholders to make calculated adjustments and projections quickly.

In addition, the system provides detailed reports that list every event by type, status, quality, technical specifications and provider. It also tracks system events to identify peaks in content production. This gives businesses incredibly helpful information to forecast and plan for the future without needing a data scientist.

Beginning with the production or acquisition of an asset, the platform will watch each processing event, providing users with detailed information about the asset's journey through the media supply chain. The result is a contextualized window into workflows, helping businesses isolate inefficiencies. For example, the system might reveal a higher-than-usual number of defects or failed events with a certain partner. It could also help save on storage by identifying assets that can be archived because they're not processed as often as initially thought.

Users can set email alerts to be notified of specific events or thresholds reached, a critical resource when there are preset operational parameters that users need to stay within. Results are fully exportable in formats like PDF or CSV to share further or import into other business platforms for an even broader view, and dashboards are fully customizable and shareable as well.


AUDINATE Dante Connect

With a growing need to create more content faster without sacrificing quality, many broadcasters are turning to cloud-based productions to meet demands. These platforms allow news, sports and entertainment broadcasters to provide real-time audio and video experiences for less money.

Dante Connect is a new cloud-based software solution from Audinate that helps broadcasters centralize audio production in the cloud. It is already proving to help broadcasters overcome barriers to the cloud by utilizing the many Dante-enabled devices installed in stadiums, entertainment venues and broadcast and production studios around the world.

A typical remote broadcast involves sending one, or sometimes several, outside broadcasting (OB) trucks with a large staff to set up production. These local production teams connect with the AV equipment at the site, process the signals and send them back to the station. Because of the cost and complexity of these productions, many smaller or more remote events never get covered. Audinate is helping broadcasters overcome these challenges with Dante Connect.

Centralizing audio production allows broadcasters to save money by reducing the need for additional resources at the site or the location where content is being produced. With Dante Connect, audio systems installed locally at a venue anywhere in the world can send multichannel Dante audio directly to cloud-based virtual machines (VMs) running editing and production suites. Skilled audio producers can then edit and distribute audio from anywhere, to anywhere.

For example, if a football stadium in New York has Dante-enabled audio devices (such as microphones, mixers or cameras) installed, production teams can use Dante Connect to subscribe channels from those devices to computers or VMs running audio software in the cloud for teams based anywhere, whether in San Francisco, London or Sydney. Editors and producers located at those remote locations can operate their software exactly as if it were local to create mixes, edits and overdubs that may be distributed directly to broadcast stations. Dante Connect lets broadcasters skip the OB truck and allows local Dante-enabled AV equipment to be connected directly to the station's infrastructure, producers and tools. Users on-site simply connect their Dante network to a Dante Connect gateway with a robust internet connection, and the rest is handled by the station. There, producers connect to the cloud-based computers receiving the audio and do their work just as if it were on a local computer. This means lower costs and more opportunities to cover more events for profit.

Dante is the industry-standard audio technology, and now using Dante Connect, broadcasters can put more devices to work for more productions, on or offsite. Dante Connect will be sold by resellers and configured by integrators.


AUDIO DESIGN DESK INC.

Audio Design Desk 2.0

Finding, placing and licensing music and sound for video is a painstaking, manual and expensive process. Audio Design Desk is a creative software suite combined with a massive sound and music library that uses AI to magically assist editors in creating soundscapes for their videos. Used on Netflix, HBO Max, Amazon Prime and others, ADD gives creatives the ability to produce sound design, sound effects and music in real time, turning hours of tedious work into minutes of inspired fun.

According to MusicTech, Audio Design Desk 1.6 produced audio for video 12 times faster than other DAWs. Jamie Hardt (SFE, "Spider-Man," "It") said, "This is the software I've been looking for my entire career." Seth Clark (editor, "Mixed-ish" and "Arrested Development") added, "How can something this familiar be so much better?" That was ADD 1.6. This year, at the 2023 NAB Show, the Audio Design Desk team presented ADD 2.0.

Audio Design Desk 2.0 introduces a sleek new user interface, AI-driven natural language search, playable triggers, stem isolation, AutoMix, collaboration, and cloud storage and backup. ADD 2.0 also brings real-time synchronization and data exchange with Adobe Premiere, Blackmagic Resolve and Final Cut Pro, putting 100% of the power of ADD into the hands of millions of editors. ADD 2.0 is the only DAW with this capability, and it even syncs these NLEs to other DAWs like Pro Tools and Logic. Editors can now easily harness the power of Audio Design Desk to produce, refine, mix and transfer their audio without ever leaving their editing software.

Any one of these new features would warrant a new release, but together they aim to turbocharge audio post-production efforts around the world.

Audio Design Desk 2.0 directly integrates with Makr.ai, a social marketplace for creatives to collaborate on their audio-visual projects and earn money. This modern platform boasts a real-time collaborative workflow, robust file management, project and team management, and communication tools. It is a central hub where artists can use the world's latest AI tools, ranging from generating audio, video and images from text all the way to stem isolation, spatial audio conversion and mastering.

ADD 2.0 introduces AutoMix and Isolator, two incredibly powerful AI tools. AutoMix uses ADD’s content-aware timeline to automatically adjust volume levels of each element, resulting in a balanced first draft of a mix with the click of a button. Isolator uses AI to remove bad background noise from challenging production sound or to isolate any instrument from a recording, giving editors the power to isolate, remix or remove vocals from a song and eliminate interruptive background noise from the perfect take.
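The simplest form of the AutoMix idea, measuring each element's level and nudging it toward a per-category target before a human refines the result, can be sketched as follows. This is illustrative only and is not Audio Design Desk's algorithm; the category targets are assumed values.

```python
# Illustrative loudness-balancing sketch: per-element gain toward assumed targets.
import math

TARGETS_DBFS = {"dialogue": -14.0, "music": -22.0, "sfx": -18.0}  # assumed targets

def rms_dbfs(samples: list[float]) -> float:
    """RMS level of float samples in [-1.0, 1.0], expressed in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-9))

def automix_gains(elements: dict[str, tuple[str, list[float]]]) -> dict[str, float]:
    """Return a dB gain per element name, given (category, samples) pairs."""
    return {name: TARGETS_DBFS[cat] - rms_dbfs(samples)
            for name, (cat, samples) in elements.items()}

# Tiny synthetic example: quiet dialogue comes up, a loud music bed gets pulled down
elements = {
    "vo_take3":  ("dialogue", [0.05 * math.sin(i / 8) for i in range(4800)]),
    "music_bed": ("music",    [0.60 * math.sin(i / 5) for i in range(4800)]),
}
for name, gain in automix_gains(elements).items():
    print(f"{name}: apply {gain:+.1f} dB")
```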

At the 2023 NAB Show, the Audio Design Desk team unveiled ADD 2.0, revolutionary new software that will change the way audio post-production is done. With its sleek interface, powerful AI tools and real-time synchronization with Premiere, Resolve and FCP, ADD 2.0 is poised to turbocharge audio post-production efforts.


AUDIO DESIGN DESK INC.

Makr.

The team behind Audio Design Desk is thrilled to launch their most ambitious undertaking yet. Unveiled at the 2023 NAB Show, Makr.ai is a social marketplace and AI-powered production hub for creatives to collaborate on their audio-visual projects. Whether it's feature films, advertising campaigns, podcasts or social videos, Makr offers cutting-edge tools to help ideate, create, distribute and monetize creative assets.

Makr.’s collaboration component allows users to work together and to find and hire others to collaborate on shared audio-visual projects. Within a shared project, users can post progress updates, manage files and assets, manage and assign tasks, annotate audio and videos, find sounds and music cues, generate visuals, deliver all of their audio and video assets and more.

Makr boasts a range of cutting-edge creative AI tools made to inspire and enhance creativity with maximum efficiency. Our offerings include metadata tagging, mastering, stem isolation, spatial audio, auto-mixing, script-to-sound analysis, artwork generation from text or even from a music file, and much more. Makr.'s AI-powered tools help creatives ideate, create and refine their audio-visual content and elevate their creative vision to new heights.

But Makr. is more than just a platform for production. By offering a space where creatives can showcase and sell their work to a global community, Makr creates new opportunities for artists to monetize their materials and their services. Our familiar social media environment allows users to connect, network and discover the work of others, creating a vibrant and supportive community of artists and creators.

Furthermore, Makr.’s LMS education platform includes a range of courses that lead to verifications and badges on a series of topics and also hosts a wealth of blogs and community forums where artists can come together to discuss and ask questions about their areas of interest. Creatives can now expand their network, skills and knowledge in a creative platform built for artists to share their work and their passions.

Makr. is a social marketplace for creatives to virtually collaborate on their audio-visual projects with the assistance of AI-powered organization, creation and distribution tools. Whether you’re a seasoned professional or a beginner just starting out, Makr.ai has everything you need to take your creative projects to the next level.


Avid NEXIS | F2 Solid State Drive (SSD)

Demand for ultra high-resolution content is growing faster than ever, and that’s dramatically increasing pressure on post-production teams working in broadcast, film and other media to deliver projects faster while sharpening their competitive edge. To stay ahead of the curve, production teams need a storage engine that enables them to scale and collaborate on the most demanding editorial finishing, VFX, animation, color grading and DI workflows.

Building on its proven Avid NEXIS storage platform that is used by thousands of media organizations, Avid has delivered the next generation of its shared storage solutions with the new Avid NEXIS | F2 Solid State Drive (SSD). It perfectly fits the performance requirements of today's most demanding high-resolution workflows and was delivered right on time for the 2023 NAB Show.

Let's start with outstanding performance and media protection. The NEXIS | F2 SSD storage tackles challenging workflows, including the finishing of 4K, 8K and HDR content, color grading, VFX and animation. Post-production teams now enjoy more space and enhanced performance while lowering the total cost of ownership, with the capacity to scale from 38.4 TB to 307.2 TB per engine. The NEXIS | F2 SSD is also equipped with Avid NEXIS media packs that now deliver in excess of 6 gigabytes per second. It also provides exceptional media protection and high availability, achieved with a redundant storage controller and hot-spare SSDs.

The speed of the NEXIS | F2 SSD provides post-production teams with a clear competitive advantage that positively affects their company’s bottom line. By providing the storage capacity needed to deliver high-resolution content, the NEXIS | F2 SSD not only gives production teams the collaborative performance that reduces time spent on projects, but they can take on more projects that will increase business revenue. Comprehensive testing and support of media creation tools, including Avid Media Composer, Avid Pro Tools and leading third-party creative applications, open more possibilities for creative teams to deliver larger volumes of premium content faster.

The NEXIS | F2 SSD is a scalable, cost-effective storage solution that allows businesses to increase capacity and performance as needed. Dual redundant 100 Gbps Ethernet connections per storage controller are standard, providing the highest-performance network connectivity. The ability to right-size configurations ensures businesses pay only for what they need. With flexible updates to capacity, workflows and configuration, NEXIS | F2 SSD provides the muscle post-production teams need to tap into increased performance, reliability and continuity that keep them competitive and current.

The Avid NEXIS | F2 SSD is also compatible with all Avid NEXIS systems currently in use. When used with Avid NEXIS online or nearline storage, Avid NEXIS | F2 SSD administrators can seamlessly move a workspace between performance tiers, maintaining read and write access while the media is moving. With the addition of Avid NEXIS | F2 SSD to the Avid NEXIS family, teams now have even more options for the ideal tiered storage solution to fit any of their production workflows.


BB&S LIGHTING Reflect 4-Bank System

The new Reflect 4-Bank System was created in response to requests from LDs and newsrooms for the color rendering and longevity benefits of remote phosphor technology in ultra-efficient, lightweight fixtures that fit grids and walls. BB&S remote phosphor LED technology has been proven to provide consistent output for over 10 years.

• 1-foot 4-Bank: 3200°K and 5600°K Remote Phosphor versions, featuring a 3-pin XLR. Drawing just 40 watts, it produces over 240 lux at 10 feet. Designed for optimum use in the 8–12 foot range. Size: 12 inches x 8 inches x 3 inches; Weight: 4 pounds.

• 2-foot 4-Bank: 3200°K and 5600°K Remote Phosphor and Bi-Color (2700°K–6000°K) versions, with a 4-pin XLR. With an 80-watt draw, they produce over 480 lux at 10 feet. Designed for use in the 10–16 foot range. Size: 24 inches x 8 inches x 3 inches; Weight: 8 pounds.

With convenient rectangular form factors and a flat profile, they fit right into a multi-lamp reflector bank or grid. They offer extreme efficiency and high output (60 lux at 10 feet) coupled with consistently high color rendering of 95 TLCI, and the stable, color-shift-free output that remote phosphor is known for.

These lights meet all the critical specs for newsrooms requiring extreme accuracy combined with control that’s fully dimmable without flicker or color shift. Reflects offer low power draw (11 watts/foot), high light output (1100 lumens/foot), 90-degree light dispersion, heatless and fan-less operation. Control is via the new optional 4-Channel Controller with 8/16-bit DMX 512/RDM with internal 48V power supply.

Reflect 2- and 4-Bank housings are designed using the latest engineering techniques to emphasize efficiency in power and output. Developed as a combined semi-hard and soft light, their superior reflectors utilize a semi-hard reflective surface to project a 90-degree directional light pattern. Optional diffusion slides into a side slot, resulting in a soft surface with 140-degree dispersion.

Additionally, BB&S sources the highest-grade new blue LEDs, which produce at least 10% extra output over other types. The new fixtures emit 1100 lumens a foot versus 1000 lumens a foot.

Often tight on space, today's studios need lighting that fits and fulfills multiple functions. With their flat profiles, Reflect Banks come with full-length adjustable yokes plus TVMPs, so they work on overhead grids or walls when height or space is at a premium. Their light weight means less stress on structures.

In the competitive news market, looks count. Effective beauty lighting is essential with today's ultra high-resolution cameras picking up every skin imperfection. With consistently high TLCI and superior skin rendering characteristics, BB&S remote phosphor is unsurpassed for modeling faces and excellent for illuminating backgrounds. Optional new louvers help produce directional light for added drama or impact.

Reflects fill the need for news and corporate studios, which require reliable, efficient, cost-effective, easy-to-use lighting that fits their sets and offers consistent beauty light on talent and attractive set illumination over the long run.


BIRDDOG

BirdDog X120

No cables needed. The world's first Wi-Fi production PTZ is here!

Of course, if you still prefer, there are also four hardwired connection options including NDI over Ethernet, SDI, HDMI for live production work and UVC USB for connecting to Zoom and Teams.

X120 packs an amazing Sony Exmor R broadcast image sensor to deliver fantastic low-light capabilities and incredible picture performance.

In true BirdDog style, there is an OLED display to show the IP address, 360° mohawk tally, filter thread and free Auto-Tracking and color shading via Cam Control.

Key Features:

• Wi-Fi 6 connectivity

• Flexible power — Battery BackPack option for a true wireless PTZ camera, or Power over Ethernet (PoE)

• Sony Exmor R back illuminated image sensor

• 20x optical zoom

• Latest NDI 5 libraries

• NDI HX3 support — can be sent over Wi-Fi or hardwired Ethernet

• SDI, HDMI for baseband video workflows

• USB UVC for connection directly to Zoom, Teams, Google Meet and any conferencing app

• Full color matrix for red, green, blue, cyan, magenta, yellow

• Kelvin control to white balance match to lighting temperature

• Free Auto-Tracking with Cam Control

• FreeD output with no additional charges

• Remote shading with Cam Control

• 360 degree viewable mohawk tally

• BirdDog’s unique PTZ numbering system

• BirdDog’s unique OLED screen for showing IP address and more

• Filter thread for attaching filters

• Video Scopes — generated in camera for maximum accuracy

• NDI Mute function — stop sending NDI to the network

• Features BirdDog’s ground breaking BirdUI web user interface

• Compatible with all BirdDog software including BirdDog Cloud, Central 2.0, Comms Pro, Multiview Pro, DYNO, and more

• NDI Discovery Server failover

X120 is also compatible with BirdDog's revolutionary Cam Mobile app, which works on iPad or iPhone and gives complete control over the camera. Move the camera around, set and recall presets, and remotely shade with the Colour Matrix tools.

X120 is NDI HX3-certified and fully compatible with the entire NDI ecosystem. It connects directly to NewTek TriCaster, vMix, Telestream Wirecast, Streamstar, OBS Studio, Epiphan Pearl, Vizrt Vectar and any system that is NDI-compatible.

X120 can be controlled a number of ways, including BirdDog's PTZ Keyboard, the BirdDog Cam Mobile app for iPhone and iPad, and third-party controllers. There is also a full Crestron control library, Q-SYS support and a RESTful API.


BITCENTRAL INC.

Fusion Hybrid Storage

Ensuring the long-term preservation of content and metadata is essential for media organizations, and archival storage plays a crucial role in achieving this goal. Many news outlets have implemented on-premise archival storage, such as LTO libraries, to store their content for the long term. While these libraries offer high reliability and data protection, they can be costly to implement, require ongoing maintenance and are vulnerable to physical risks.

Fusion Hybrid Storage (FHS) leverages the reliability and low latency of on-premise storage and the flexibility and scalability of commercially available cloud-based resources, providing customers with content and data protection for high-resolution content, proxies and associated metadata. This solution intelligently manages the transfer and storage of content with compatible cloud storage. FHS offers the advantage of virtually unlimited scalability and significant cost savings, allowing organizations to easily scale up or down as their data storage needs change without incurring significant hardware or infrastructure costs.

Our Oasis media asset management software is seamlessly extended by Fusion Hybrid Storage. Content can be readily searched, viewed and accessed through the existing Oasis user interface, even by remote users. With FHS, users are relieved of the task of determining where assets are stored because the system automatically transfers them to the desired location without user input. As a result, this eliminates the need for additional staff resources and frees up time for personnel to concentrate on producing news content.
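The kind of rule that lets a hybrid system decide where an asset belongs, without the user thinking about it, can be sketched simply. This is a hypothetical policy for illustration, not Bitcentral's actual tiering logic; the day thresholds and tier names are assumptions.

```python
# Illustrative tiering rule: recently used media stays local, older media moves to cloud tiers.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Asset:
    name: str
    created: datetime
    last_accessed: datetime

def target_tier(asset: Asset, now: datetime,
                hot_days: int = 30, archive_days: int = 365) -> str:
    """Assumed rule of thumb: recently touched media stays on-prem, older media
    moves to standard cloud storage, and anything idle for a year goes to archive."""
    idle = now - asset.last_accessed
    if idle <= timedelta(days=hot_days):
        return "on-prem"
    if idle <= timedelta(days=archive_days):
        return "cloud-standard"
    return "cloud-archive"

now = datetime(2023, 6, 1)
clip = Asset("election_night_pkg.mxf", datetime(2022, 11, 8), datetime(2023, 1, 15))
print(target_tier(clip, now))  # -> cloud-standard
```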

FHS provides peace of mind by leveraging its cloud-based architecture to securely store valuable content in a remote location, thus eliminating the physical risks of on-site storage. Users can choose to replicate their assets across multiple regions in the cloud for extra protection. Additionally, FHS offers long-term backup storage options to ensure that content is recoverable in case of accidental deletions or system failures.

FHS includes a selectable reserved storage capacity with cost efficiencies at scale. It is available as a contract or a “pay as you go” option, empowering customers to opt for the costing model that meets their storage objectives.

As media organizations face mounting pressure to maintain the integrity and accessibility of their content, Fusion Hybrid Storage offers a secure and highly available data storage environment, which enables them to scale their storage capacity while saving on storage and maintenance costs. FHS significantly reduces the risk of data loss due to physical damage or system failures with the disaster recovery capabilities of cloud storage and represents a valuable tool for media organizations seeking to protect their data and ensure the long-term preservation of their critical assets.


BLACK BOX

Emerald DESKVUE

In a completely new concept in KVM over IP, the Black Box Emerald DESKVUE receiver allows users to create a personalized workspace in which they can simultaneously monitor and interact with up to 16 systems across up to four 4K/5K screens. Connecting to distributed physical systems, virtual machines via PCoIP Ultra, H.264/H.265 sources and Virtual Network Computing (VNC), Emerald DESKVUE enables efficient handling of a high volume of information with instant mouse switching between sources.

The newest addition to the Emerald KVM-over-IP family of products, Emerald DESKVUE enables KVM users to consume more information from a growing number of sources, even when information and applications are spread across a wide number of physical and virtual machines. Emerald DESKVUE can connect to virtual machines using RDP as well as PCoIP and PCoIP Ultra, and it is one of the first KVM systems to allow direct access to PCoIP systems.

The Black Box solution empowers users to interact with and respond to multiple systems with exceptional efficiency, often without the need for switching. As a result, it’s easier to visualize and monitor large amounts of information from many different sources and to maintain complete situational awareness. In a typical use case, an operator using Emerald DESKVUE will work on a main system while monitoring many other systems in their peripheral vision, ready to react instantly as needed. All systems can be positioned across the screens as freely movable windows. If the user needs to jump to full screen while working on a window, it’s just a double-click on the info bar. Another double-click puts the windows back into their default position.

While Emerald DESKVUE is a KVM-over-IP receiver, it eliminates the traditional one-to-one relationship with a transmitter or virtual system, where the user constantly needs to switch from one system to the next, possibly running client software to access virtual machines. Rather, Emerald DESKVUE provides anywhere, anytime access to physical and virtual machines, along with extremely low bandwidth usage and HD/4K video interoperability. Unlike other KVM solutions, which are created with expensive and complex integration of equipment, Emerald DESKVUE is one small box that does it all over IP. (And the solution's reliance on IP means that as applications move to the cloud, adding access is simple.)

Emerald DESKVUE can replace up to 16 user stations and a very expensive multiviewer within a KVM system with one small box while delivering a far more flexible, simple, secure and reliable user experience. Users tailor their workspace by connecting a single keyboard, mouse, USB 3/2 devices, audio and up to four 4K/5K monitors (maximum of one 5K monitor). Access requires no additional hardware and is highly secure. Integration with Active Directory keeps password management simple.


BLACKBIRD PLC Blackbird

Blackbird is the leading cloud-native video editing and publishing platform. With Blackbird, users can edit live and file-based content anywhere in a browser, publish everywhere, be first to market, scale flexibly, ensure content quality and drive massive speed, cost and carbon efficiencies across their organizations.

Key Capabilities

• Instant precision access to video content

• Edit frame-accurately anywhere in a browser from just 2 Mbps

• Easily create clips, highlights and longer form content

• Edit and publish in real time from live streams

• Remotely and collaboratively create great content


• Publish content fast to social media, OTT, FAST, VOD and other digital channels

• Feature-packed: up to 12 video and 32 audio tracks, voice over, color correction, pan zoom, multicam, transitions, closed caption, blur and much more

• MAM integration available for fast content archive searches

• Supports broadcast quality formats

• Integration support for craft edit NLEs

Key Differentiators

• Pro feature set — the only browser-based professional level video production platform

• Total flexibility — enables total production freedom — editing by anyone, anytime, anywhere

• Super fast — delivers content creation and distribution up to four times faster than cloud-based and on-premise platforms

• Lower cost — reduces video production infrastructure costs by up to 75%

• Ultra sustainable — lowers carbon emissions by up to 91%

Popular Workflows

• Rapidly editing and publishing clips, highlights and longer-form content to social, OTT, VOD, FAST and other digital channels

• Reducing time and resources reversioning content

• Enabling universal content access for fast creation and distribution


BLACKMAGIC DESIGN

ATEM Television Studio HD8

ATEM Television Studio HD8 and HD8 ISO are a new family of HD live-production switchers with a built-in broadcast control panel that can be used for high-end work while being extremely portable.

These new switchers feature broadcast-grade control panels with advanced features such as streaming and recording, and there is an ISO model that can record all video inputs and connect to up to eight remote cameras. These new switchers also support live streaming, talkback and optional internal storage and are available from $2,995.

ATEM Television Studio HD8 and HD8 ISO features include:

• All-in-one switcher and control panel design.

• Supports connecting up to eight SDI cameras.

• Wide range of professional video effects included.

• Internal media for stills and motion graphics.

• Four ATEM Advanced Chroma Keyers for green/blue screen work.

• SuperSource multilayer processor with four DVEs.

• Eight standards-converted 3G-SDI inputs.

• Nine 3G-SDI program video outputs and two 3G-SDI aux outputs.

• Audio mixer supports limiter, compressor, six-band EQ and more!

• 16-way multiview on a single monitor.

• Live stream via Ethernet or mobile phones via USB.

• Records to USB flash disks or optional internal cloud storage.

• USB output operates as a webcam and supports all video software.

• ISO model supports recording all video inputs for later editing.

• ISO model records a DaVinci Resolve project file.

• Supports remote internet-connected cameras on ISO model.

Customers get a powerful switcher with eight standards-converted SDI inputs, two aux outputs, four chroma keyers, two downstream keyers, SuperSource, two media players and lots of transitions. Plus it includes a whole TV studio of features such as hardware streaming, recording, audio mixer, talkback, multiview and optional internal cloud storage.

The HD8 ISO model allows customers to edit their live event, as it can record all inputs to separate video files. Customers get eight separate video input files with matching timecode and sync, plus the program video is also recorded into a separate master video file. This means customers can edit using any NLE software that supports multicam editing. A DaVinci Resolve project file is saved and linked to the input video files, so the live switching is converted into an edit timeline that customers simply click to open.

With support for DaVinci Resolve project files, customers get a full post-production workflow with editing, color correction, visual effects and audio mixing. Simply open the project and users will see their live production as an edit timeline. Customers can even relink to Blackmagic RAW camera files to finish in Ultra HD.

Another exciting feature of the ATEM Television Studio HD8 ISO model is that it can connect to remote cameras. The Blackmagic URSA Broadcast G2, Blackmagic Studio Camera 4K Pro G2 and Blackmagic Studio Camera 6K Pro cameras can live stream H.264 directly to the switcher. Plus, customers even get camera control and tally. Program audio is also sent back to the camera, which is great for live interviews.


BLACKMAGIC DESIGN

Blackmagic Studio Camera 6K Pro

The Blackmagic Studio Camera 6K Pro is an advanced Studio Camera 6K model with EF lens mount, larger 6K sensor, ND filters and built-in live streaming for $2,495.

It features an EF lens mount, a larger 6K sensor for improved colorimetry and fine-detail handling, ND filters and built-in live streaming via Ethernet or mobile data. It has an all-in-one design with a lightweight carbon fiber reinforced polycarbonate body, a large integrated 7-inch HDR viewfinder and powerful broadcast connections.

Blackmagic Studio Camera 6K Pro Features

• Native 6K sensor with 13 stops of dynamic range.

• Compatible with a wide range of popular EF lenses.

• Live streaming for global remote cameras via Ethernet or mobile data.

• Built-in two-, four- and six-stop remotely controllable ND filters.

• Large high-brightness viewfinder.

• 12G-SDI, HDMI, 10G Ethernet connections.

• Single 10G Ethernet allows SMPTE fiber style workflow.

• Professional mini XLR inputs with 48-volt phantom power.

• Optional focus and zoom demands for lens control.

• Blackmagic Studio Converter allows all connections via Ethernet.

Blackmagic Studio Cameras have the same features as large studio cameras, miniaturized into a single compact and portable design. Plus, with digital film camera dynamic range and color science, the cameras can handle extremely difficult lighting conditions while producing cinematic-looking images. The sensor features an ISO up to 25,600 so customers can create amazing images even in dimly lit venues. Advanced features include talkback, tally, camera control, a built-in color corrector, Blackmagic RAW recording to USB disks, live streaming and more. Plus, the new models add built-in live streaming via Ethernet or mobile data so customers can place cameras remotely.

With built-in live streaming, customers can place a camera in a remote location and it can generate a H.264 HD live stream that is sent over the internet back to the studio. Simply connect the camera to the internet using the built-in Ethernet connection, or customers can connect a 4G or 5G phone to the USB-C port to stream via remote data.

While designed for live production, it’s not limited to use with a live switcher. That’s because it records Blackmagic RAW to USB disks, so it can be used in any situation where customers use a tripod. The large viewfinder makes it perfect for work such as chat shows, television production, broadcast news, sports, education, conference presentations and even weddings.

Amazing sensors combined with Blackmagic generation 5 color science give customers the same imaging technology used in digital film cameras. With 13 stops of dynamic range, the camera has darker blacks and brighter whites, perfect for color correction.

The advanced Blackmagic Studio Camera Pro models are designed for broadcast workflows with 12G-SDI, 10GBASE-T Ethernet, talkback and balanced XLR audio inputs. The 10G Ethernet allows all video, tally, talkback and camera power via a single connection so setup is much faster. That’s just like a SMPTE fiber workflow, but using standard Category 6A copper Ethernet cable so it’s much lower cost.


BOLAND MONITORS X4K16aHDR-OLED

In 2023 Boland goes “into the black” with the ultra lightweight and portable X4K16aHDR-OLED. This 16-inch next-gen 4K-OLED model offers a true 10-bit panel and processing with an extreme dynamic contrast ratio that guarantees ultra-deep black levels. 4K signal is delivered via 12G and 3G-SDI (single or quad link), HDMI 2.0, and SFP (ST 2110) inputs.

The OLED provides numerous scopes and audio meters, 3D LUTs, timecode, markers, and multiple aspect ratio functions. All firmware updates are completed in-field using USB, and this unit includes VESA mount holes on the rear in addition to a desktop stand.


BOLIN TECHNOLOGY EX—Ultra

Bolin elevates its leadership in the outdoor PTZ camera market by introducing the all-new EX—Ultra 4K60 outdoor PTZ camera. It offers three imaging options for various applications: a 4K60 ultra-high-resolution model with a 12X zoom and 1-inch CMOS sensor, plus 30X and 20X zoom full HD models, all with super low light performance and super image stabilization capability. The EX—Ultra features two FPGA imaging engines outputting simultaneous, independent video streams. There are two 12G-SDI outputs, optical SDI and HDMI 2.0, and multiple IP streams, including the FPGA hardware codec FAST HEVC. This is a revolution in outdoor PTZ cameras.

FAST HEVC is based on the H.264/265-AVC/HEVC open standard platform, using the Xilinx Zynq UltraScale+ MPSoC. With FAST HEVC, the EX— Ultra delivers a 12G-SDI signal over IP with high quality, low latency, and low bandwidth, maximizing existing 1 Gbps network IP video environments. The FAST HEVC video is broadcast quality with extremely low latency (less than 2 frames) and can be delivered over long distances with a dramatically low bandwidth of 50 Mbps at 4K60.

The EX—Ultra can withstand winds up to 60 m/s. A built-in heater and defroster allow for an operating temperature of –40° to 60° Celsius. The entire camera is IP67-rated. The connections cover and mounting bracket system also meet that standard. It has all-metal mechanical parts, an aluminum alloy body and strategic use of Grivory GV5H high-strength nylon. It has a nitrogen-filled image module housing and C5 salt air corrosion-resistant coating.

The pan, tilt and zoom performance of the EX—Ultra is stunning. The 340° pan and 210° tilt move at a variable rate from 0.01 of a degree per second to 90 degrees per second. The 255 presets execute at 100 degrees per second at five different speeds, all with Zero Deviation Positioning. The EX—Ultra also supports the Free-D protocol.

The EX—Ultra is not just for permanent stadium installations or situational awareness environments. It can also be tripod-mounted for live production. Bolin’s new EX—Ultra is the most advanced, high-performing and rugged PTZ camera we have ever made, and we are eager for the market to experience it.


VB330 With New SCTE 104/35 and Visual Recording Functionality

Building upon the extensive monitoring capabilities of the VB330 — a multi-purpose tool capable of monitoring IP multicast, video OTT/ABR streaming, voice trunks, video-on-demand unicast, Ethernet packet micro-bursts, PCAP recording and general traffic protocol inspection — Bridge Technologies has extended the existing monitoring, control and alarming of SCTE 104 and SCTE 35 events to now include the visual recording, documentation and review of integrated downstream ad insertion, thus facilitating more in-depth validation, inspection, fault-finding, fault evidence and reporting.

Recording is facilitated across 200 channels in parallel, triggered in a range of ways. One method initiates recordings based on the occurrence of error events, which themselves are drawn from the extensive range of existing SCTE 104/35 alarms that are already programmed into the probe (and drawn from TR 290 standards). Crucially, a pre-fill buffer is used to ensure that the actual fault itself is recorded, thus allowing engineers to build up a living cache of records centered around the trigger points themselves. Alternatively, general recording can be facilitated on a ring buffer basis with user-customizable loop duration, triggered by SCTE 35 cue-in/cue-out occurrences, allowing for more general recordkeeping and performance analysis.
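The pre-fill buffer described above is, in essence, a rolling window of segments that is already in memory when a trigger fires, so the saved recording contains material from before the fault as well as after it. As a rough illustration of that general pattern only (invented names, not Bridge Technologies code), a minimal sketch in Python might look like this:

```python
from collections import deque

class TriggeredRecorder:
    """Keep the last `prefill` segments in memory; on a trigger,
    flush them plus the next `postfill` segments into a recording."""

    def __init__(self, prefill=10, postfill=10):
        self.buffer = deque(maxlen=prefill)   # rolling pre-fill window
        self.postfill = postfill
        self.active = None                    # recording in progress, if any
        self.remaining = 0

    def on_segment(self, segment):
        if self.active is not None:
            self.active.append(segment)
            self.remaining -= 1
            if self.remaining == 0:
                finished, self.active = self.active, None
                return finished               # hand the finished clip to storage
        else:
            self.buffer.append(segment)
        return None

    def on_trigger(self):
        """Called when an SCTE or QoE alarm fires."""
        self.active = list(self.buffer)       # material from *before* the fault
        self.remaining = self.postfill


# Example: segments arrive continuously; an alarm fires at segment 42.
rec = TriggeredRecorder(prefill=5, postfill=3)
for i in range(50):
    if i == 42:
        rec.on_trigger()
    clip = rec.on_segment(f"segment-{i}.ts")
    if clip:
        print("recorded clip:", clip)  # segments 37..44, centered on the trigger
```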

All of the recordings obtained are intuitively labeled and instantly accessible directly from the VB330 timeline, where an easy-to-use recording dashboard contains a file overview with comprehensive search functionality, making for quick, easy identification. The files are ready for playback in VLC in less than a second, which means the files can be accessed from anywhere in the world, through any HTML5 web browser. A week’s worth of manual recording for any given channel can be captured for historical recordkeeping or analysis. File size is limited only by the storage system to which the files are directed, and can be stored locally, or via a Storage Area Network.

This invaluable ad recording addition to the VB330 significantly extends the fault-identification and recordkeeping abilities of the probe, and highlights Bridge Technologies’ commitment to developing a single, constantly evolving tool that equips broadcasters with the full range of tools they need to ensure that viewers experience the highest QoE, and advertisers receive the deliverables promised by operators, with data-driven assurances that justify their investment. Measuring and maintaining records of not just signal integrity and continuity, but actual visual evidence of the quality of image received serves commercial benefit at every operational layer, allowing for improved reporting at a sales and marketing level, but also allowing engineers to engage in post-event diagnosis and long-term operational improvement. In this way, this new addition furthers the dual-purpose benefit of the VB330 as both a tool for in-the-moment troubleshooting, and long-term, C-suite level decision-making. Crucially, all this is achieved in a way that maintains the ease-of-use, accessibility and flexibility that lie at the heart of the VB330, extending further the breadth and depth of the probe’s monitoring capabilities without adding undue complexity.

BRIDGE TECHNOLOGIES

BROADPEAK Advanced CDN

As video delivery evolves, today’s operators need solutions that reduce energy consumption, lower costs and deliver an exceptional quality of experience. Broadpeak’s new Advanced CDN solution ensures flawless streaming experiences, offering unparalleled performance, scalability and video quality.

Why Advanced CDN Is Groundbreaking

Featuring a high-efficiency design that optimizes sustainability for video streaming, Broadpeak’s Advanced CDN delivers 560 Gbps per server in HTTPS, 725 Gbps in HTTP, and handles 70,000 redirections/s per FO server, which is three times better than other servers on the market. The solution ensures an outstanding quality of experience for end users, with ultra-low latency and video quality beyond broadcast. Offering elasticity orchestration, full control through open APIs, and a unique context-aware approach, Broadpeak’s Advanced CDN is a game changer for video streaming.

The innovative CDN solution is based on Broadpeak’s BkM100 Video Delivery Mediator and its recently launched BkS450 high-performance video streamer. A steering center enables the CDN to be context-aware, allowing operators to control which CDN features are being used for each session with a very fine granularity based on the request characteristics. Operators can consider any specificities of a streaming request (i.e., content requested, user profile, user location, device type, network load) to allocate a behavior (i.e., layer filtering, offload, blackout, usage of multicast ABR).
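To make the idea of context-aware steering concrete, the following is a purely illustrative sketch of how per-session request characteristics could be mapped to a delivery behavior. The rules and field names are invented for illustration and are not Broadpeak's API:

```python
from dataclasses import dataclass

@dataclass
class Request:
    content: str
    user_profile: str     # e.g. "premium" or "standard"
    location: str
    device: str
    network_load: float   # 0.0 - 1.0 load on the serving edge

def steer(req: Request) -> dict:
    """Return a per-session delivery behavior based on request context.
    All rules below are hypothetical examples."""
    behavior = {"multicast_abr": False, "offload": False,
                "layer_filtering": False, "blackout": False}

    if req.content.startswith("live/") and req.device == "stb":
        behavior["multicast_abr"] = True    # managed devices can join multicast ABR
    if req.network_load > 0.8:
        behavior["offload"] = True          # push the session to another edge
    if req.device == "mobile":
        behavior["layer_filtering"] = True  # drop 4K layers for small screens
    if req.location == "restricted-region":
        behavior["blackout"] = True         # rights-driven blackout
    return behavior

print(steer(Request("live/match1", "premium", "paris", "stb", 0.85)))
```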

In addition, the Advanced CDN features a built-in A/B testing capability, allowing operators to check the efficiency of a parameter or a product version before deploying it widely. AI capabilities on the CDN help with troubleshooting and trending dashboards for capacity planning.

The Advanced CDN offers several unique features and benefits:

High efficiency with top performance: Broadpeak’s next-gen CDN provides operators with massive energy savings, allowing them to use four times less energy for video streaming compared with the previous generation. It achieves the best Gbps/$ and Gbps/watt-consumed ratios in the market.

Streaming at scale: Built-in elasticity helps operators manage horizontal and vertical scaling dynamically.

Open, flexible design for simpler configuration and operations: The Advanced CDN can be deployed in any environment, including on-prem, the cloud and a hybrid configuration. Creating and operating a video streaming service is simple with Broadpeak’s new solution; the Advanced CDN offers operators full control over what is happening inside the content delivery network through open APIs and an intuitive GUI.

Support for new business models increases monetization: The Advanced CDN opens up new business models, creating a bridge between ISPs and content providers leveraging OpenCaching APIs. In addition, the solution enables collaboration between operators and content providers through its steering center and broadpeak.io.

Improved QoE: Broadpeak’s CDN solution is session-based. Compared with snapshot CDNs, the Advanced CDN leverages the centralized modelling of servers’ available resources in real time to optimally switch to higher ABR profiles and maximize QoE.

Increased bandwidth efficiency: The Advanced CDN supports server-side segment selection for streaming to ensure a real-time adaptation to bandwidth conditions, especially in low-latency configurations where players have difficulties accurately evaluating what resources are available.


BZBGEAR

BG-ADAMO-JR

The next generation of PTZ cameras — BG-ADAMO-JR — is unrivaled in its class in the live stream broadcasting market.

BG-ADAMO-JR auto-tracking PTZ camera is loaded with features including a full interface of video connections and 1080p@60Hz resolution. The 3G-SDI connection enables long-distance cable runs without compromising image quality.

Advanced AI auto-tracking uses the latest human detection AI algorithms, providing the ultimate in convenience and efficiency without needing a camera operator or additional hardware.

NDI|HX 3 and Dante AV-H models add to its versatility, offering a way to utilize existing network infrastructure to deliver exceptional video over the network while keeping latency imperceptible.

Compose shots ahead of time utilizing up to 255 programmable presets, with 10 accessible using the IR remote. The Micro SD card writer can store up to 1 TB of video footage, so users can start recording on the fly when other connections are inaccessible.

The lens is designed with an advanced auto-focusing algorithm that promptly snaps into focus with dependable accuracy and stability. The 3D noise reduction technology combined with the low-noise CMOS sensor ensures impeccable image clarity. Choose between the 12x optical zoom lens with a 70.3° wide-angle, the 20x (60.04°) or the 30x (58.1°).

Packed with an array of video connections including HDMI, 3G-SDI, USB 2.0, USB 3.0 and LAN, the BG-ADAMO-JR boasts unparalleled functionality with sophistication and style. The dual stream USB facilitates concurrent mainstream and substream outputs while the HDMI, SDI and USB connections are capable of transmitting 1080p video and audio signals simultaneously.

Innovative Design

Available in either classic white or black finishes, the chassis of the BG-ADAMO-JR is designed to be as functional and attractive as its formidable feature set. The high-stability substructure provides a solid foundation for the precision lens delivering 1080p resolution video at 60fps. Eliminating the need for bulky external accessories, the control arms feature distinctive built-in tally lights illuminating green or red with 360-degree visibility.

Control

Control with RS232, RS422, web GUI, IR remote or control app BG-PTZ-Control — a free BZBGEAR proprietary PTZ control software for Windows, Mac and iOS (with Android available soon).

With 1080p@60 resolution, AI auto-tracking, flexible connection options and seamless IP streaming capabilities with NDI|HX 3 and Dante AV-H, the BG-ADAMO-JR is the ideal solution for those looking to add automation to their workflow. The BG-ADAMO-JR is a high-performance PTZ camera delivering exceptional functionality and value, with a starting MSRP of just $1,499.


BZBGEAR

BG-COMMANDER-PRO

Demand perfection. Command with precision.

Introducing the BG-Commander-Pro, a PTZ camera joystick controller with an integrated 7-inch touch screen. It supports real-time image previews from connected cameras via their RTSP streams on the touch screen and can output up to a 3x3 video wall to an external display through the HDMI interface. This controller is designed to simplify video viewing and management. Built using Android 11, it supports H.265/H.264 decoding and easily handles up to nine cameras simultaneously.

With HDMI projection and total PTZ control including presets, focus, zoom and exposure, you can easily control your PTZ cameras for better broadcasting production. The Commander Pro is also customizable with single IP multichannel acquisition and ONVIF protocol support, meaning you can add up to 2048 devices.

The Commander Pro was designed to be user-friendly. It can be upgraded through a standard USB flash drive, utilize an external mouse/keyboard for easier interface control, and even record RTSP streams or take screenshots on the fly using the available Micro-SD expansion storage slot. It also features four RS422/RS485 ports and one RS-232 control port, making it ideal for large-scale video projects.

The Commander Pro’s unique capabilities, such as support for a 3x3 video wall with up to nine camera inputs, control via mouse and keyboard, and Power over Ethernet (PoE), make for a clean, easy setup. All in all, the Commander Pro provides a simple UI for users to access professional-grade controls, management and editing tools.


CANON

Add-On Applications System for Canon PTZ Cameras

The “Add-On Applications System” for PTZ (Pan-Tilt-Zoom) remote cameras will be made available for the CR-N300, CR-N500, and CR-X300. The system is already available for the CR-N700. Available through a future firmware update, the Add-On Applications System will provide access to add new paid video production features such as Auto Tracking and Auto Loop. The Auto Tracking application will give users the ability to automatically pan, tilt and zoom to track moving subjects, and the Auto Loop application will allow users to program and automate repeated camera movements.

The add-on applications uniquely position Canon in the PTZ market because they run from within the camera itself, unlike other solutions on the market that require external devices to operate functions like Auto Tracking. With the addition of the Add-On Applications System, customers can now directly add the new capabilities to the PTZ camera, providing better value for end-users.

The Auto Tracking paid add-on application enables automatic tracking according to a subject’s movements. As the subject moves, the camera will pan, tilt and zoom to maintain the subject’s composition in the frame. This added function helps to reduce the camera operator’s workload, letting them focus on other tasks such as camera switching and streaming, and allows for multi-camera shoots with fewer people on set.

The Auto Tracking is highly responsive, allowing for full body, upper body and head and torso tracking at both slow and normal walking speeds, and full body and upper body tracking at fast walking speed. In addition, the Auto Tracking application provides users with a wide range of adjustment functions such as composition, tracking sensitivity, priority display area, fixed viewing angle area, tracking target auto-select, pan/tilt halting area, initial position, auto-select exclusion, and pan/tilt operation control.

The Auto Tracking add-on application is ideal for end-users in markets such as house of worship and corporate event streaming, educational workshops and lectures, and broadcast and corporate interviews.

The Auto Loop paid add-on application helps automate repeated camera movements, potentially lessening the burden on camera operators. Camera operators will now have the flexibility to select between two repeated movements, loop or back-and-forth, based on needs and the shooting environment. It also provides the ability to enable smooth acceleration and deceleration when starting and stopping movement between two preset positions for more professional camera movements. For added convenience, camera operators can adjust settings such as position, route, preview and start from one screen.

The Auto Loop add-on application is ideal for consistent smooth on-air movements for end-users in markets such as house of worship and corporate event streaming, broadcast and over-the-top (OTT) sports and interview streaming, and commercial production.

The firmware update for all three PTZ cameras to add the “Add-On Applications System” is scheduled to be available in July 2023. The paid Auto Tracking Application is scheduled to be available for the CR-N300 and CR-N500 for $1,200.00. The paid Auto Loop Application is scheduled to be available for the CR-N300, CR-N500 and CR-X300 for $800.00.


Canon Flex Zoom Lenses - CN-E14-35mm T1.7 L S/SP Wide-Angle Zoom Lens and the CN-E31.5-95mm T1.7 L S/SP Telephoto Zoom Lens

Canon is launching two additions to the company’s 8K Flex Zoom Cinema Lens Series — the CN-E14-35mm T1.7 and the CN-E31.5-95mm T1.7 Super 35mm format lenses. The company also announced four new relay kits — the RL-S1 and RL-S2 (for Super 35mm) and the RL-F1 and RL-F2 (for full-frame). The kits adapt each Flex Zoom lens to suit whatever sensor format is required.

The Canon Flex Zoom lens series was first introduced in 2022 to support full-frame digital cinema cameras. Today, the CN-E14-35mm T1.7 and CN-E31.5-95mm T1.7 expand the Flex Zoom series to include support for Super 35mm cinema cameras. The new lenses, available in interchangeable EF and PL mounts, produce video with beautiful and natural background blur that is desired by many end-users.

Designed in the pursuit of cinematic beauty, the new lenses achieve superb 8K optical performance while maintaining the style and ease-of-use of Canon’s Cinema lens series. Ideal for shooting movies, television dramas, commercials and a wide range of other video content, the lenses maintain a bright T1.7 aperture across their zoom ranges, enabling operators to create powerful, shallow depth-of-field shots. The Canon Flex Zoom lenses are designed with “flexibility” in mind. Each Flex Zoom, by virtue of its relay kits, can be adapted for Super 35mm or full-frame sensors as needed to support the ever-changing requirements of film productions for both popular sensor formats.

The use of the relay kits provides an added level of versatility and value to a customer’s Flex Zoom lens. Relay kit swaps can be performed by a skilled user or at a Canon Factory Service facility.

CANON

CINEDECK ConneX

As consumers continue to demand more from their content providers, creating cost-effective and time-efficient workflows is more important than ever. To help with this, Cinedeck recently launched ConneX, a visual workflow creator that makes it quick and easy for media companies and content producers to batch process files at ingest.

ConneX can act as an ingest gateway from camera to MAM, joining up post-ingest workflows and streamlining the file check-in process. It provides built-in video transcoding, scalable workflow orchestration and metadata processing in a simple visual interface that is user-friendly and intuitive to work with.

ConneX was built to seamlessly slot into existing media architecture. This means companies can easily link up workflows without the need to re-architect their infrastructure or adapt their processes. It supports MOV and MXF video files, and commonly used media compression codecs, such as ProRes, DNxHD/HR, XAVC (Class 300 and 400), AVC-I, XDCAM and JPEG2000. Requiring no programming skillset to deploy, ConneX helps to keep teams engaged and working efficiently by making media processing more accessible and productive. It allows users to automate their workflow in just a few steps and modify content on the fly, getting them from record to delivery faster and easier, without the need to write code or invest in expensive software. A watch directory automatically triggers pre-determined workflow actions when a new file is detected, but if needed ConneX retains the option for human intervention during processing.
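The watch-directory pattern itself is simple: monitor a folder and kick off a predefined action for each new file that appears. The generic sketch below, which assumes a basic polling loop and a placeholder folder name rather than anything specific to ConneX, illustrates the idea:

```python
import time
from pathlib import Path

WATCH_DIR = Path("ingest_incoming")   # hypothetical watch folder
SEEN = set()                          # files already handed to the workflow

def process(path: Path) -> None:
    """Placeholder for a pre-determined workflow action
    (transcode, metadata extraction, check-in to the MAM, etc.)."""
    print(f"triggering workflow for {path.name}")

def watch(poll_seconds: float = 5.0) -> None:
    WATCH_DIR.mkdir(parents=True, exist_ok=True)
    while True:
        for path in sorted(WATCH_DIR.glob("*.mxf")):
            # Skip files we have already seen or that are still empty
            if path.name not in SEEN and path.stat().st_size > 0:
                SEEN.add(path.name)
                process(path)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch()
```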

Jane Sung, COO, Cinedeck, commented: “Post-production teams are dealing with tighter schedules, so interoperability and speed are more important than ever. Customers are asking for simple solutions to help bridge gaps in their existing workflows. Cinedeck is continually looking for ways to make media processing more efficient for our users and ConneX is a stress-free tool that is easy to use.”

ConneX offers complete flexibility and can be used interchangeably in the cloud or on-prem depending on requirements. This is especially vital for modern post-production teams, where the ability to adapt is essential, and getting high-quality footage to editors in different locations quickly is the key to success.

ConneX can help speed up the downstream processes further by allowing users to prepare assets for post-production workflows. Media operators can trim video files, remove snippets and instantly update metadata. ConneX reduces time spent on file processing at ingest, allowing users to streamline repetitive tasks and make updates effortlessly.


Arcadia With HelixNet Integration

Arcadia Central Station is a scalable, all-encompassing IP intercom platform integrating wireless and wired intercom stations. It is the centerpiece of an intercom platform supporting more than 100 wired and wireless intercom user stations and beltpacks, making it a solution that addresses a large range of applications from small local productions to complex, large-scale live events.

Arcadia successfully supports user requirements with various types of form factors that previously required individual systems. The system allows users to integrate the FreeSpeak family of digital wireless systems across 1.9 GHz, 2.4 GHz and 5 GHz bands. Arcadia concurrently supports HelixNet networked and Encore analog partyline user stations that range from simple beltpacks to desktop, flush mount and rack mount intercom user stations of various channel counts and interconnectivity.

Arcadia brings all these types of user stations together in a 1RU device that is configured and managed through the easy-to-use browser-based software, CCM.

Arcadia interfacing makes it easy to integrate the intercom into any audio infrastructure, offering high density for both 4-Wire and Dante interfaces.

The latest version of Arcadia adds full support of HelixNet Remote Stations, speaker stations and beltpacks with access to up to 24 channels across 64 HelixNet user stations. Additionally, a 5 GHz FreeSpeak Edge scanner helps users navigate the 5 GHz wireless spectrum.

With flexible and upgradable licensed port capacity, Arcadia offers future expansion of capacity as requirements grow, ultimately providing virtually unlimited possibilities.

CLEAR-COM

COBALT DIGITAL

9992-ENC/9992-DEC Series

The Reliable Internet Stream Transport (RIST) series of specifications from the Video Services Forum (VSF) provides a set of best-in-class mechanisms for content contribution over the internet. Cobalt Digital is an active member of the RIST Activity Group, and Cobalt products provide a rich set of RIST features.

The RIST protocol provides low-latency, reliable, secure content transport over the internet. However, the advanced packet recovery mechanisms in RIST cannot, by themselves, overcome the situation where the delivery network runs out of bandwidth due to external factors. It is simply not possible to transport a 5 Mbps stream on a 4 Mbps pipe.

The solution to this problem is Source Adaptation. The receiver sends feedback information to the sender, which adjusts the stream to match. If the sender is a video encoder, it can do so by dynamically varying the bit rate.

This type of solution exists already in the market — for example, all cell-bonding products do this to a certain extent. However, what is new here is that the Video Services Forum published an open Industry Specification for this functionality: VSF TR-06-4 Part 1, “RIST Source Adaptation,” approved in November 2022. With TR-06-4 Part 1, users are not tied to some proprietary implementation — it is possible to mix and match senders and receivers from different vendors. VSF TR-06-4 Part 1 can be downloaded from the VSF website.
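Conceptually, the feedback loop is straightforward: the receiver periodically reports the throughput it is actually seeing (and any loss), and the encoder re-targets its bitrate to fit inside that figure with some headroom. The toy loop below is only a sketch of that general idea using invented numbers and thresholds; it is not an implementation of TR-06-4:

```python
def adapt_bitrate(current_kbps: float, measured_kbps: float,
                  loss_ratio: float, headroom: float = 0.85,
                  floor_kbps: float = 800, ceiling_kbps: float = 8000) -> float:
    """Return the next encoder bitrate target from receiver feedback."""
    if loss_ratio > 0.02:                 # sustained loss: back off hard
        target = measured_kbps * 0.7
    else:                                 # track available throughput with headroom
        target = measured_kbps * headroom
    # Never jump up by more than 10% at once, and stay within encoder limits.
    target = min(target, current_kbps * 1.10)
    return max(floor_kbps, min(ceiling_kbps, target))

# Simulated feedback reports: (throughput seen at the receiver in kbps, loss ratio)
reports = [(5000, 0.0), (4200, 0.0), (3500, 0.05), (3800, 0.0), (4500, 0.0)]
bitrate = 5000.0
for measured, loss in reports:
    bitrate = adapt_bitrate(bitrate, measured, loss)
    print(f"feedback {measured} kbps, loss {loss:.0%} -> encode at {bitrate:.0f} kbps")
```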

Cobalt continues to add capabilities to its award-winning software-defined family of 9992-ENC/DEC encoders and decoders that meet the evolving needs of today’s broadcast facilities. Cobalt Digital just released the first commercial implementation of TR-06-4 Part 1 Source Adaptation in the 9992-ENC/9992-DEC series of broadcast encoders and decoders. The 9992 series offers traditional broadcast features combined with advanced network options to provide a perfect solution for sports, newsgathering, video contribution and affiliate distribution. The product also includes support for VSF TR-06-2 Null Packet Deletion, which allows the receiver to create a compliant null-padded Constant Bitrate (CBR) transport from a rate-adapted incoming stream and to support legacy downstream devices.

Renegade Series of Power Stations

The Renegade series of power stations by Core SWX is the culmination of years of research and development to deliver high-capacity, high-current-output and exceptional-quality lithium-based power solutions with incredible versatility for the cinema and lighting industries.

The Renegade is a 777Wh, Lithium Iron Phosphate (LIFEPO4) power station encased in a polycarbonate housing offering a lighter option to the Maverick and new Renegade XL. It can deliver 15V, 28V and 48V simultaneously with up to a 1200W output. From a fully discharged state, the included PFQ8 external charger can recharge the Renegade in less than 3.5 hours. A runtime LCD, similar to that found on our Maverick power station, provides up-to-the-minute runtime/ charge time and percentage capacity, as well as an approximate runtime when in standby.

The Renegade XL model is offered in two variants, both being Lithium Ion (Li-Ion) based power stations encased in a cast aluminum housing, with a capacity of 1376Wh. The Renegade XL1 can deliver 15V, 28V or 48V, making it a versatile power source for various cinema equipment, including cameras, lighting and other accessories. With its high capacity and multiple output options, it can power a variety of devices simultaneously, providing a reliable power source for your production needs. If you are in need of a lighting-focused option, the Renegade XL48 variant has you covered with its dual 48V, 15 amp outputs capable of powering an Aputure 1200D at full output using just a single battery source.

The Renegade XL’s groundbreaking built-in charging system, which can recharge its massive 1376Wh capacity in just 5 hours, was first introduced on its nickel metal hydride cousin, the Maverick. This feature has been instrumental in revolutionizing battery management, minimizing failure points, and eliminating the need for external cabling, except for a standard IEC cable that connects to AC mains. Moreover, the Renegade XL also provides unparalleled flexibility and efficiency with the optional SFQ40 rapid charger, which can charge one Renegade XL in an astonishing 2.5 hours, setting a new industry benchmark in the cinema world.

The Renegade XL’s new dynamic color OLED display provides the same runtime readout as the Renegade and Maverick but also offers additional functions and battery status. As aux outputs are always a welcome inclusion, the Renegade XL is complemented by two P-Tap ports and a USB port, which can power mobile devices and doubles as a firmware update port. One of the two P-Tap ports also supports Voltbridge Mesh, allowing for cloud fleet battery management.

All three models are the same size and footprint as their nickel metal hydride cousin, the Maverick, fitting in most dolly compartments and legacy shipping cases. Just like with the Maverick, units are highly serviceable to maximize up-time and minimize downtime for maximum ROI.

CORE SWX

Dalet Cut

Dalet Cut is a cloud-native, lightning-fast multimedia and multiplatform editor fully integrated within the Dalet ecosystem. It powers live web-based editing from anywhere with native access to all assets including clips, sequences, projects and graphics, even on limited bandwidth. The intuitive user experience enables content producers and storytellers to collaborate with unparalleled speed and efficiency, delivering better audience experiences across linear and nonlinear channels. In a few clicks, content targeted for linear channels can be repackaged for social and OTT platforms, optimizing resources and saving time.

Dalet Cut is a browser-based proxy editor coupled with intuitive tools for story creation that eliminates the need for VPN, media movement, local rendering or lengthy training. Storytellers can quickly come up to speed and use Dalet Cut to collaborate on stories for digital, social, TV and radio from anywhere.

The digital-first approach features templates to adjust aspect ratios for various video requirements, ensuring stories are prepared in the right shape and form. Users can write scripts, edit content, record narration, add graphics and manipulate captions. Innovative script-to-graphic and AI-powered translations enable users to add impactful narration graphics to stories quickly and easily.

For news workflows, Dalet Cut for Pyramid supports growing files, closed captions and edit-in-place both on-premises and in the cloud. Packaging and distribution of eye-catching stories is quick and easy in a single browser tab. Dalet Cut is natively connected to Dalet Pyramid news planning calendars, assignments, rundowns, archives and more. With full access to Dalet Pyramid shared resources, journalists, producers and news directors can easily collaborate and focus on story development and editorial decisions without the burden of trying to locate relevant content.

In addition to news, Dalet Cut allows any content owner and storyteller to edit digital content such as sports and event highlights, short-form programs or social promos thanks to its multimedia and multiplatform production capabilities.

DALET

DIGITAL NIRVANA

MediaServicesIQ Version 2

Adoption of AI technologies to solve real-world problems is on a steep rise, and MediaServicesIQ Version 2 gives Digital Nirvana customers the ability to access such technology easily, all in one place.

MediaServicesIQ is the gateway to Digital Nirvana’s tech stack — the bedrock of the company’s flagship media applications and custom metadata solutions. All of these solutions use advanced AI and machine-learning (ML) capabilities to streamline media production, post production and distribution workflows.

News producers, editors, archive managers, and others in the broadcast, sports, and media and entertainment markets can go through MediaServicesIQ to access a full suite of AI and ML tools to enhance their media workflows. These include the company’s speech-to-text, video intelligence, caption conformance, content classification and other core capabilities.

When Digital Nirvana first announced MediaServicesIQ, it was a collection of AI/ML microservices that media organizations could consume individually to meet specific requirements. Now MediaServicesIQ V.2 has evolved into a multipurpose cognitive platform that integrates with all of Digital Nirvana’s other products and makes it possible not only to generate metadata but also to add generative AI capabilities on that metadata.

Through MediaServicesIQ V.2, Digital Nirvana customers can now access tools that combine data from text, images, video and audio to generate intelligence. New features include targeted identification of specific attributes from an image or video. (For example, users can submit an ad and get the product/brand information.) Also, MediaServicesIQ is now equipped with the capability to segment content automatically based on topic, generate synopses and titles, extract keywords and much more.

Users access MediaServicesIQ via a portal, APIs or any of Digital Nirvana’s products. They can share a live stream or an offline asset and receive various insights from the content (audio, video, text, images). Having a large set of cognitive services running in the background, with the ability to consume them via front-end applications, is an industry first.

Today a user can submit a video through the MediaServicesIQ portal or through any of Digital Nirvana’s products and receive all manner of results — speech-to-text and video intelligence metadata, chapter markers based on topic, a summary of each chapter/topic, a title for each segment/topic within the video, a list of brands or products displayed, the name of the person who is discussing a specific topic, keywords that could be used as tags and many more.
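As a purely hypothetical illustration of what the "submit an asset, get back layered metadata" pattern looks like from a client's point of view, the sketch below invents an endpoint, field names and service identifiers; it is not Digital Nirvana's actual API:

```python
import json
import urllib.request

API_BASE = "https://example.invalid/metadata/v1"   # placeholder endpoint, not a real service

def submit_asset(media_url: str, services: list) -> dict:
    """Post a media URL and the cognitive services to run against it."""
    payload = json.dumps({"media_url": media_url, "services": services}).encode()
    req = urllib.request.Request(
        f"{API_BASE}/jobs", data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Hypothetical request asking for a transcript, topic segmentation and brand detection:
# job = submit_asset("s3://bucket/interview.mxf",
#                    ["speech_to_text", "topic_segments", "brand_detection"])
# print(job["job_id"], job["status"])
```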

This enables organizations to make faster content production decisions, gives them better visibility into available content, makes it easier to retrieve accurate content from libraries, and accelerates content production.

For a news producer, having realtime transcripts for raw incoming feeds — with a title and appropriate segmentation based on topics — helps with faster news publishing decisions.

For an editor, having appropriate, searchable, contextual metadata avoids them having to review the entire gamut of footage to create content.

For archive managers, having a synopsis and topic-based segmentation with titles for each media asset helps them more easily retrieve content for repurposing/relicensing. All of the above helps media facilities cover large volumes simultaneously and at scale.


Medusa ATSC 3.0 Solution in a Box

Over-the-air television is currently in a transformation phase. Stations can now convert their broadcast to the new ATSC 3.0 technology, which offers the benefits of more services with better audio and video quality with personalization based upon the viewer’s location and preferences. This new, IP-centric, delivery method is already deployed in many DMAs. But legacy television sets do not natively support ATSC 3.0 reception. A separate set-top box or a new television is required to receive these transmissions. History shows that without subsidizing this type of change, such change does not come quickly. With such a small — but growing — potential initial audience, the ROI for investing hundreds of thousands of dollars in the best encoding/compression equipment isn’t economically sound at this early phase of the NextGen TV deployment cycle. DTV Innovations developed its Medusa to provide an ATSC 3.0-compliant broadcast service in a cost-effective manner, thus allowing broadcasters to walk before they run into the NextGen TV marketplace.

DTV Innovations’ Medusa provides a complete station solution, incorporating most of the core software functionality and hardware architecture necessary to light up a NextGen TV signal. Medusa eliminates the need to integrate applications and hardware from multiple vendors, thereby minimizing the required investment while expediting time to market and monetization.

Medusa includes an encoder/DASH (Dynamic Adaptive Streaming over HTTP) packager, service announcements/signaling generator and broadcast gateway within a single server. It meets all FCC requirements, supports ATSC recommended practices and even provides ATSC 1.0 PSIP. It’s a one-server solution accepting SDI/HD-SDI as inputs and providing an ATSC 3.0 STLTP output. Competing solutions typically consist of three to four separate servers, connected through a complex IP network, to provide the same functionality provided by Medusa. Deploying a solution based upon subsystems from four different vendors translates to four times the complexity, four times the number of system updates and backups, four times the likelihood of something not interfacing correctly, four times the number of vendors to manage, four times the investment and four times the headaches compared with deploying and operating a fully internally integrated solution. Implementing a turnkey solution provides up-front and ongoing savings.

Since Medusa provides everything required to launch a NextGen TV service except the content and integration with an ATSC 3.0-compliant exciter and transmitter, users don’t have to invest resources in managing disparate vendors or worrying about whether vendor A’s equipment will play nicely with vendor B’s equipment.

Instead, manpower and finances can be invested toward developing a strategy to maximize a return on investment in NextGen TV, which could include features such as:

• Additional advertising-supported programming

• Pay-per-view services

• Data distribution services

• Content and advertising tailored to your audience’s zip code

• Interactive applications with potential for additional advertising

DTV INNOVATIONS LLC

DVEO AD & CG Insertion

DVEO AD & CG Insertion is an advanced multichannel solution that can insert advertisements, video clips, overlays, logos, graphics, text and scrolling text into live video streams, either on a predetermined timetable or triggered by cues such as SCTE-35. The solution is Windows-based, targets either H.264 or MPEG-2, and provides flexible, standards-compliant video ad insertion and text or graphic insertion through overlay or stitching.
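Whether the trigger is a schedule entry or an in-band cue, the decision logic is the same: when a cue or a scheduled break time arrives, switch the output to the ad source for the signalled duration. The simplified, generic sketch below (invented schedule and cue fields, SCTE-35 parsing omitted; not DVEO code) shows that decision in isolation:

```python
from datetime import datetime, timedelta

SCHEDULE = [  # hypothetical fixed ad breaks: (start time, duration in seconds)
    (datetime(2023, 6, 1, 12, 15, 0), 60),
    (datetime(2023, 6, 1, 12, 45, 0), 90),
]

def ad_break_due(now, scte35_cue):
    """Return the ad-break duration in seconds if one should start now, else None."""
    # An in-band cue wins: a splice out-point carries its own duration.
    if scte35_cue and scte35_cue.get("type") == "splice_insert" \
            and scte35_cue.get("out_of_network"):
        return scte35_cue.get("duration", 30)
    # Otherwise fall back to the predetermined timetable.
    for start, duration in SCHEDULE:
        if start <= now < start + timedelta(seconds=duration):
            return duration
    return None

print(ad_break_due(datetime(2023, 6, 1, 12, 15, 30), None))            # -> 60
print(ad_break_due(datetime(2023, 6, 1, 13, 0, 0),
                   {"type": "splice_insert", "out_of_network": True,
                    "duration": 45}))                                    # -> 45
```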

The solution was developed specifically for industry experts who are prepared to make the switch to a robust server that can provide everything they need, even running a large number of channels from a single location.

DVEO AD & CG Insertion has a variety of features, such as:

• The ability to place advertisements based on SCTE-35 markers, DTMF or the schedule.

• Develop a distinct advertising output for each region or market.

• Make ad playlists or ad blocks and provide users the power to choose the sequence in which ads appear.

• Advertisement reports.

• Drag-and-drop operations may be performed on overlays, logos, animations, text and scrolling text.

• Adjust settings for text size, color and the position of graphics.

• Determine a timetable for the graphic overlays.

• Real-time graphic overlays may be manually toggled on and off at any time.

In terms of inputs and outputs, DVEO AD & CG Insertion offers the following:

• SDI, HDMI, UDP, RTSP, RTMP, HLS and SRT for the inputs.

• SDI, HDMI, UDP, RTMP and SRT are all available as outputs.

• MPEG-2, H.264, H.265 and video passthrough as video encode options.

• AAC, AC-3, MPEG and audio passthrough as audio encode options.

What sets it apart from the rest?

DVEO AD & CG Insertion is a professional multichannel solution that offers tremendous value to customers in inserting advertisements or graphic overlays. It is capable of targeting several locations with distinct advertisements broadcast on the same channel while minimizing the amount of electricity and resources required to do so.

DVEO

EDGIO Uplynk

Challenge:

To achieve business success in live streaming amid increased competition, subscriber churn and platform fragmentation, media companies need to simplify complex workflows to drive operational efficiencies while delivering the biggest live events at a global scale and managing millions of concurrent sessions.

Brands need to ensure the best quality streaming experiences to retain and grow audiences. And to optimize ROI, organizations need to enhance digital ad value by increasing engagement and ad conversion rates.

However, building and maintaining advanced in-house streaming infrastructure is costly and time-intensive. Content owners need trusted technology partners that can reliably manage, monetize and deliver highly demanding live events, allowing media organizations to focus on their content and business differentiators.

Solution:

Edgio was launched in 2022 as a result of Limelight’s acquisition of Edgecast. Its Uplynk streaming solution simplifies complex content management and delivery workflows with secure, flexible and scalable technology. Rich APIs and modularity make integrating in-house solutions or other technologies easy, tying in to on-premises, hybrid or cloud environments to suit individual needs. Uplynk enables broadcast-quality live streaming experiences with full-featured advanced customization and monetization capabilities, and platform support for sub-15-second latency.

Uplynk ingests and prepares customer content for encoding, storage and delivery through one unified workflow, saving time and costs while maintaining the highest video quality. Uplynk can easily be integrated on-premises or via the cloud, removing the need for dedicated on-prem infrastructure.

Uplynk provides access to optimized encoding profiles for high-quality video, including 4K UHD content or live sports, delivering encoding profiles to tailor resolution, frame rates, slice size, markers, multi-pass and IFO. Through its Smartplay feature, content providers can tailor ad-serving strategies to evolving audience needs, including streaming across social media, broadcast/OTT and FAST platforms, with SSAI driving dynamic personalization at scale and increasing ad revenues. Advanced DRM and content replacement capabilities ensure content is secured and business logic is enforced on a per-viewer basis while still supporting reduced latency, automatically adhering to time- and location-based business rules. With the addition of Live DVR capabilities, customers can now enable viewers to pause, rewind and fast-forward live streams, meeting viewer expectations of the experience.

The Universal Ad Config feature enables seamless integration with any ad server, supporting a range of ad response formats, including operational flags, pass-through parameters, macros and functions. Customers receive key customization and control over ad placement and content, enabling personalized user experiences and enhancing ad conversion rates.

Result:

Uplynk reduces the complexity of delivering live events by scaling operations with minimal resources so customers can reach more viewers. Advanced advertising solutions and performance monitoring help customers refine and perfect their revenue strategies and drive ROI across platforms. In 2022 alone, it handled 39,000+ live events, generated 2.4 billion event views, 3.3 billion hours of streamed video and 220 million hours of advertising.

Uplynk’s scalable team of experts is on hand 24/7 to manage all elements of live streaming, meaning customers can focus on audience growth, creative decision-making and business differentiators.


NEXX-670-X30-HW-V2 FPGA Accelerated IO Processing Module for NEXX

NEXX, Evertz’ compact processing and routing solution, has become a cornerstone for broadcast facilities, mobile production trucks, venues and stadiums. The NEXX platform enables facilities to upgrade from aging HD/SD-SDI routing cores to a flexible core that supports a 384x384 12G-SDI routing matrix with an integrated multiviewer in a compact 5RU package. NEXX’s popularity with Evertz customers lies in its modular-based frame and main interface/backplane that offers redundant control and ease of swapping components, including crosspoint, fans and I/O modules. An integrated, software-enabled multiviewer with more than 30 pre-configured layouts (using internal Evertz X-LINK signaling), full mono channel audio shuffling, licensable output frame syncs, timecode and mixed reference support are some of the key features of the NEXX platform.

The addition of the new NEXX-670-X30-HW-V2 FPGA Accelerated IO processing module extends the NEXX feature set to include enhanced multiviewing and IP gateway functionality that enables a transition to SMPTE ST 2110 or ST 2022-6. These are the initial software applications (apps) that were launched at the 2023 NAB Show. The multiviewer app has a dynamic canvas, scalable PIPs, analog timecode support, UHD output, ANC data, closed caption decode and more. The IP gateway app supports encapsulating and decapsulating 12G/3G-SDI into/from ST 2022-6 or ST 2110 workflows and supports NMOS IS-04 and IS-05. The app library for NEXX-670-X30-HW-V2 will evolve over the next few years, ensuring that the NEXX platform is future-proof. The versatility of NEXX offers customers a cost-effective path for their transition to IP while replacing legacy HD-SDI today.
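NMOS IS-04 exposes a REST Query API through which a control system can discover registered senders and receivers on the network. As a rough, generic illustration (the registry address below is a placeholder, and this is not Evertz-specific code), listing the senders known to an IS-04 registry looks something like this:

```python
import json
import urllib.request

REGISTRY = "http://registry.example.local"   # placeholder NMOS registry address

def list_senders(registry: str = REGISTRY) -> list:
    """Query an IS-04 registry for all registered senders."""
    url = f"{registry}/x-nmos/query/v1.3/senders"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Example usage against a reachable registry:
# for sender in list_senders():
#     print(sender["id"], sender["label"], sender["transport"])
```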

NEXX is controlled by MAGNUM-OS, which provides all the common user interfaces including traditional hardware router control panels, virtual web-based control panels and Evertz’ VUE intelligent panels.

EVERTZ

DreamCatcher BRAVO Studio Virtualized Production Suite

DreamCatcher BRAVO Studio is the complete cloud-based production control suite that redefines live production today. BRAVO Studio is a collaborative, web-based live production platform that is redefining the creative experience for content creators and broadcasters. Providing virtual access to all the services found in the traditional control room, BRAVO Studio is a simple, reliable and cost-effective platform that accesses live video and audio from remote locations over dedicated networks, 5G networks or the public internet.

The platform ingests multiple live camera feeds; provides live video and audio mixing with transitions; multiple video overlays for picture-on-picture or multi-box looks; slow-motion replays; clip playout; highlight clipping and packaging; multiple dynamic HTML5 graphics layers; and multi-image display of sources and outputs on the user interface. Using MAGNUM-OS for orchestration, BRAVO Studio enables users to schedule and automate event preparation, including routing of incoming remote feeds, allocating resources and configuring the operator stations. This allows customers to seamlessly transition between productions with minimal effort. Technical directors and operators collaboratively produce live events with BRAVO Studio remotely from anywhere in the world using a web browser.

Advanced BRAVO Studio co-pilots help automate and simplify production workflows and enable small creative teams of all skill levels to maintain high quality and consistency throughout the production.

BRAVO Studio’s new “Highlight Factory” co-pilot creates clips and stories automatically using AI technology. These clips and stories are published to Evertz Ease Live, where users can pick their curated highlights from the interactive overlay. This ability to reach back to the production from the edge device to create a personalized experience addresses ever-changing audience engagement.

In addition, Evertz has also brought all the power of the Studer Vista audio mixing console to its BRAVO Studio platform with the introduction of the new Vista BRAVO. This integrates a full mixing console into BRAVO Studio and gives users all the flexibility they need to enhance audio for live productions, whether working on-premises or through the cloud.

BRAVO Studio is proving to be a game changer, particularly for events that include live sports, local news, esports, entertainment, corporate and government.

EVERTZ

IPD-VIA Create

IPD-VIA Create is a web application designed to significantly accelerate editing operations in order to satisfy the demands of live production. It is the latest module to be directly integrated within MediaCeption Signature, EVS’ live PAM solution that provides a comprehensive platform from ingest to playout.

This integration enables users to benefit from a seamless workflow, which now includes direct access to a fully featured nonlinear editing (NLE) tool within the unified production platform.

Live production assets are available to users from anywhere, resulting in faster turnaround times for content delivery to multiple platforms. The solution also allows for further craft editing with industry standard tools such as Avid and Adobe, through transfer of EDLs without requiring media transfer.

The cloud-native technology of IPD-VIA Create further extends the overall flexibility of the MediaCeption Signature solution and can easily be deployed either on-premises or in the cloud, depending on the needs of the production.

The VIA Flow central workflow engine allows for easy exporting of finished sequences to any destination, be it Play to Air, social media or any other target deemed necessary.

IPD-VIA Create offers easy-to-use features such as an integrated DVE, a 9x16 video framing effect and powerful color correction, and appeals to both beginners and experienced editors alike.

Users can begin working on their projects while feeds are still being recorded and ingested, thanks to the full edit-while-capture support at the core of EVS’ end-to-end live PAM workflow.

It includes a tool for storyboard editing, perfect for journalists and producers to begin drafting the first cut of their stories while the action is happening. More experienced editors can instantly see the results of this storyboarding, and continue to refine the edit in the timeline view, using common industry methods and familiar keyboard shortcuts.

Broadcasters and newsrooms are faced with relentless deadlines. To keep up with the daily demand for high-quality content, a production infrastructure that is capable of quick and efficient operations is crucial. The integration of EVS’ IPD-VIA Create application into the MediaCeption Signature live PAM solution helps streamline the entire production process from ingest to playout. This allows users to concentrate on creating compelling content that captivates and engages audiences as soon as it arrives.

With the notion of edit-while-capture inherent to all EVS tools, IPD-VIA Create significantly enhances the speed and efficiency of content production and distribution in fast-paced environments. It also greatly simplifies remote collaboration among journalists and editors as all work is done within a browser.

Furthermore, IPD-VIA Create demonstrates how interoperability allows for the creation of best-of-breed solutions and contributes to an ecosystem that operates harmoniously to achieve the desired results for customers.

Finally, with the addition of IPD-VIA Create, production teams have access to an editing tool that is designed specifically with the needs of live production in mind, that meets EVS’ highest standards of quality, reliability and speed-to-air, and that ensures broadcasters and newsrooms can deliver top-notch content to their audiences consistently.

EVS

FOR-A SOAR-A

SOAR-A is an agile appliance delivering ultra-low latency, standards-based streaming to any device over any combination of circuits.

Remote production and long-distance delivery rely on the ability to transfer media content and associated data to one or multiple locations. Each application can require a different set of functionality and circuits to different devices.

Rather than assemble a large number of different devices to address each application, FOR-A has developed a standardized, agile platform, SOAR-A (Software Optimised Appliance Revolutionised by FOR-A). This is the company’s first major step into the IP world, but by studying where previous offerings from other vendors have fallen short, FOR-A has developed a platform that meets the practical requirements of the industry.

As the name implies, SOAR-A is an agile software platform, capable of adapting to requirements by loading the appropriate software. There are two hardware options: a two-channel device in a compact cabinet, and the standard 1U appliance, which supports up to 16 channels. Devices can be linked to provide even greater capacities. Whatever the configuration, latency is always extremely low, making it ideal for remote live production as well as content distribution.

Critical to the design is the support of widely recognized open standards, including SMPTE ST2110 and NDI. RIST (Reliable Internet Stream Transport) ensures multiple video, audio and data streams can be bundled together bidirectionally with a simple setup.

Through the use of WebRTC, video streams can be sent to any device anywhere, without the need for a specialist player. It also allows streams to be sent to multiple destinations, and SOAR-A IPTV is a content distribution system in a box.

Alongside IPTV, applications already available include SOAR-A Edge, which provides for highly secure IP transport; SOAR-A Graphics, a character generator and branding engine; SOAR-A Switch, a software switcher; and SOAR-A Play, a media server.

SOAR-A is a transformational approach to the whole content delivery ecosystem, and is ready to unlock new efficiencies in live production, at up to 4K Ultra HD.

Building on a software-defined architecture, it is also the perfect solution for a cost-effective migration from SDI to media over IP.

The benefit of a highly configurable platform is that new functionality can be added in software, adding functionality as it is needed without additional hardware investment. In the transition phase, that includes hybrid IP and SDI (including 12G) workflows.

SOAR-A is capable of handling multiple video, audio and data streams, using industry standards including SDI, NDI and SMPTE ST 2110, as well as WebRTC, RIST, WebM, HLS/DASH and more. The use of WebRTC means that signals can be distributed to any device from a smart TV to a phone, with ultra-low latency.

The power of RIST ensures point-to-point and point-to-multipoint streaming, including bonding and load balancing for faster connections across multiple bearers. Security is baked in through PSK (pre-shared key), plus SRP (secure remote password) and DTLS (datagram transport layer security). One-stop simplicity is enhanced with the ability to apply a simple VPN to connect multiple sites.

42 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

Hammerspace is a powerful scale-out software solution designed to automate unstructured data orchestration and global file access across storage from any vendor, at the edge, in data centers and in one or more cloud regions and providers globally.

With Hammerspace, broadcast and film customers can now unify global access across on-premises storage and compute resources, as well as cloud providers and regions from any vendor, to cut costs and rapidly adapt to changing production requirements.

Unlike solutions that shuffle file copies across incompatible storage types and distributed environments, Hammerspace creates a high-performance parallel global file system that seamlessly spans on-prem and cloud resources, bridging silos, locations and clouds so users everywhere are working on the same datasets.

Hammerspace

And since Hammerspace supports on-prem or cloud storage from any vendor, customers have maximum flexibility to use any existing or new storage, without needing to consolidate data into proprietary cloud or on-prem resources.

For example, Jellyfish Pictures, a U.K.-based VFX company, leverages Hammerspace to rapidly spin up production teams globally, without the need to consolidate resources or rely on traditional file copy methods like FTP or rsync. This enabled Jellyfish to take on new VFX projects for Netflix and Disney by rapidly bringing new teams online in Australia, India and South Africa, who now have the same user experience local artists in London enjoy.

Another issue was the high cost of render workloads in more expensive cloud regions. Render jobs that need thousands of CPU cores in London or Los Angeles are significantly more expensive than in cloud regions where power comes from lower-cost renewable resources.

Hammerspace solves this problem transparently to users, using its data orchestration system to automatically route jobs to the lowest-cost cloud regions based upon current cloud provider rates, and without creating multiple copies of data. From an artist’s standpoint, workflows are unchanged due to native integration with applications such as Autodesk ShotGrid. As a project is triggered within ShotGrid to move files to render, Hammerspace takes care of everything automatically in the background, routing jobs seamlessly to whatever the lowest-cost region is at that time.

Hammerspace can dynamically extend the production namespace across cloud regions to scale up or scale down when needed, to minimize cloud expenses. Render costs alone on large projects can be reduced by 30% or more because of this.
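As a rough illustration of that cost-based routing, the sketch below picks whichever region is currently cheapest for a render job and estimates the saving against a fixed home region. The rates, region names and function names are hypothetical examples, not Hammerspace’s API or real cloud pricing.

```python
# Conceptual sketch (not Hammerspace's API): pick the cheapest region for a
# render job based on current per-core-hour rates, then estimate the saving
# versus running it in a fixed "home" region.
def cheapest_region(rates: dict[str, float]) -> str:
    return min(rates, key=rates.get)

def job_cost(rate_per_core_hour: float, cores: int, hours: float) -> float:
    return rate_per_core_hour * cores * hours

if __name__ == "__main__":
    # Hypothetical spot rates in USD per core-hour at the moment the job is queued.
    rates = {"london": 0.052, "los-angeles": 0.048, "montreal": 0.031, "oslo": 0.029}
    cores, hours = 4_000, 6.0
    home, best = "london", cheapest_region(rates)
    saving = 1 - job_cost(rates[best], cores, hours) / job_cost(rates[home], cores, hours)
    print(f"route to {best}; about {saving:.0%} cheaper than {home}")
```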

With Hammerspace, users everywhere still see the same files in the same directories, as though all files were on local storage. The power of Hammerspace is the ability to present local access to distributed users, meaning that everyone is working on the same datasets regardless of where they are or on which storage type or in which location the data resides. And because Hammerspace supports new or existing on-prem or cloud storage from any vendor, customers can rapidly adapt to changing project requirements.

In these ways, Hammerspace has revolutionized the way broadcast, film and other industries can manage distributed workflows and data across one or more on-premises and cloud compute and storage resources.

43 Best of Show Awards 2023 | NAB Show
HAMMERSPACE
FOR MORE INFO

Harmonic’s VOS360 Ad is an industry-first, innovative standalone server-side ad insertion (SSAI) SaaS that is fully cloud-native, enabling targeted addressable advertising at scale for low-latency video streaming.

Powered by state-of-the-art, field-proven, at-scale manifest manipulation technology, VOS360 Ad allows best-in-class targeted ad delivery to millions of concurrent viewers for live sports streaming, as well as a host of streaming applications, including linear, VOD, FAST, IPTV and low-latency live streaming. The standalone solution supports third-party origin servers and video streaming applications and can also serve broadband service providers that want to offer targeted advertising to IPTV subscribers. The comprehensive solution includes ad ingest and processing, ad serving, frame-accurate server-side ad insertion, ad decision server and supply-side server capabilities.
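To illustrate what manifest manipulation means in server-side ad insertion, the toy example below splices personalized ad segment URIs into an HLS media playlist between cue-out and cue-in markers. The playlist contents, tags handled and segment names are invented for the example; a production SSAI system such as VOS360 Ad also deals with SCTE-35 parsing, timing, DRM and per-viewer ad decisioning.

```python
# Toy illustration of server-side ad insertion by manifest manipulation:
# replace the content between cue-out/cue-in markers in an HLS media playlist
# with per-viewer ad segments. All playlist data and URIs are invented.
def splice_ads(playlist: str, ad_segments: list[tuple[float, str]]) -> str:
    out, in_break = [], False
    for line in playlist.splitlines():
        if line.startswith("#EXT-X-CUE-OUT"):
            out.append("#EXT-X-DISCONTINUITY")
            for duration, uri in ad_segments:        # personalised ad segments
                out.append(f"#EXTINF:{duration:.3f},")
                out.append(uri)
            in_break = True
        elif line.startswith("#EXT-X-CUE-IN"):
            out.append("#EXT-X-DISCONTINUITY")
            in_break = False
        elif not in_break:                           # drop slate inside the break
            out.append(line)
    return "\n".join(out)

source = """#EXTM3U
#EXT-X-TARGETDURATION:6
#EXTINF:6.000,
content_100.ts
#EXT-X-CUE-OUT:30
#EXTINF:6.000,
slate_01.ts
#EXT-X-CUE-IN
#EXTINF:6.000,
content_101.ts"""

print(splice_ads(source, [(6.0, "ad_campaign42_seg1.ts"), (6.0, "ad_campaign42_seg2.ts")]))
```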

HARMONIC VOS360 Ad

Why VOS360 Ad Is Groundbreaking

Harmonic’s VOS360 Ad SaaS is the industry’s first addressable targeted advertising solution for low-latency sports streaming at scale. A breakthrough for the video streaming industry, VOS360 Ad offers several innovative features and benefits, including:

Low latency: VOS360 Ad supports low-latency HLS and DASH formats, ensuring a latency of about 3 seconds or less for targeted ads, which is critical for live sports streaming.

Geo-redundancy: Cloud geo-redundancy, source redundancy and multi-CDN path diversity ensure high availability.

Targeted addressability: With VOS360 Ad, advertisements can be personalized down to the user level, based on available data, increasing viewer engagement and improving CPMs for video service providers.

Infinite scalability: Running on all major public clouds, VOS360 Ad provides unparalleled scalability for a superior viewing experience. Harmonic’s solution is proven to scale up to millions of concurrent viewers.

Dynamic brand insertion: VOS360 Ad enables video service providers to create more ad inventory without increasing the ad load (i.e., the number of minutes per hour spent on advertising).

Field-proven reliability: Deployed at scale and harnessing Harmonic’s high standard of quality, the solution provides industry-acclaimed reliability for ad revenue streams.

Solving a Critical Need for Scalability, Flexibility and Enhanced Addressability

SSAI is essential for monetizing streaming services. The total addressable market for SSAI is increasing at 30%+ CAGR and is expected to reach close to $20 million in annual recurring revenue by 2025.

As video service providers look to support targeted advertising, VOS360 Ad answers the critical need for a monetization solution that is open and scalable. The SaaS solution offers an open ecosystem with well-known partners and well-documented APIs for extensive flexibility and simplified targeted ad delivery. Integration with Beachfront’s sell-side ad server enables video service providers to maximize existing inventory. VOS360 Ad is also integrated and deployed with many industry-leading ad decision servers and supply-side platforms. In addition, VOS360 Ad is integrated with Mirriad’s virtual product placement insertion technology to enable individual targetability at a massive scale.

VOS360 Ad can be used with Harmonic’s market-leading VOS360 Media SaaS or with third-party origin servers, addressing the market demand for SSAI solutions that offer third-party origin support.

VOS360 Ad deserves to win this award for enabling service providers to deliver targeted addressable ads to viewers at scale, with low latency and high availability, boosting their monetization.

44 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

XScale ATSC Encoder

igolgi’s iLux ATSC 1.0 XScale is a versatile, compact, high-quality and reliable ATSC 1.0 encoding platform designed for broadcast and LPTV. While ATSC 3.0 may one day become the broadcast standard, and igolgi offers the advanced iLux ATSC 3.0 NextGen encoder, the XScale gives broadcast and LPTV stations the ability to double their channel capacity using the current ATSC 1.0 standard. This is especially important as streaming replaces traditional linear delivery and free OTA viewership is growing.

This new offering employs H.264/AVC video encoding technology compliant with the ATSC A/72 specification for superb video quality with expanded channel support. Leveraging the latest CPU technology, the igolgi XScale platform supports up to 24 output programs and can support any combination of SD/HD-SDI, baseband, ASI or IP inputs. A key innovation in statistical multiplexing enables programs to be encoded in either MPEG2 or H.264/AVC within the multiplex for broadcast. This innovation allows operators to maintain legacy MPEG2 streams on air if required.

Broadcasters who adopt XScale can more than double their existing channel count compared to older ATSC 1.0 encoders. With the wide adoption of H.264/AVC-capable TVs over the past decade, consumers can immediately receive many more channels on a single ATSC 1.0 broadcast. For broadcasters wishing to continue certain channels in MPEG2, the XScale supports that as well. Each channel can be individually configured to be MPEG2 or H.264.
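As a simplified illustration of the statistical multiplexing idea, the sketch below divides a fixed ATSC 1.0 payload among channels in proportion to their momentary scene complexity while guaranteeing each a minimum bitrate. The channel names, complexity figures and floor value are assumptions for the example, not igolgi’s algorithm.

```python
# Simplified statistical-multiplexing sketch (not igolgi's algorithm): divide a
# fixed ATSC 1.0 payload among channels in proportion to their current scene
# complexity, while guaranteeing each channel a minimum bitrate.
TOTAL_MBPS = 19.39          # approximate usable ATSC 1.0 transport payload
MIN_MBPS = 1.0              # hypothetical per-channel floor

def allocate(complexity: dict[str, float]) -> dict[str, float]:
    remaining = TOTAL_MBPS - MIN_MBPS * len(complexity)
    total_c = sum(complexity.values())
    return {ch: MIN_MBPS + remaining * c / total_c for ch, c in complexity.items()}

if __name__ == "__main__":
    # Higher numbers mean harder-to-encode content at this instant.
    demand = {"news_h264": 2.0, "sports_h264": 5.0, "movie_mpeg2": 4.0, "weather_h264": 1.0}
    for ch, mbps in allocate(demand).items():
        print(f"{ch}: {mbps:.2f} Mbps")
```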

As with all iLux ATSC encoders, the XScale model offers the industry’s most flexible interface support and easiest configurability. The XScale provides a complete ATSC 1.0 Broadcast Station solution covering encoding, Electronic Program Guide (EPG), analog or IP-based Emergency Alert System (EAS), CALM Audio loudness control, Hourly Station Callout and more. In addition, Static or Dynamic PSIP is also supported. For Dynamic PSIP, XScale can integrate with third-party PSIP generators, or can create the dynamic PSIP information directly. Simultaneous outputs over ASI and IP are available and enhance the operational value of the XScale platform, which also supports 1+1 redundancy with instant failover switching if a hot spare is required.

45 Best of Show Awards 2023 | NAB Show
INC.
IGOLGI
FOR MORE INFO

Draco G-Flex KVM Matrix

IHSE expands the capabilities of the popular Draco Flex KVM Matrix Systems with the introduction of the G-Flex Matrix series. With an integrated Draco tera IP gateway card, the Draco G-Flex Series provides system designers with the ability to bridge multiple KVM matrix systems over an existing IP network. It combines the high levels of security and performance of the Draco tera KVM system with the flexibility and ease of connectivity inherent in IP-based communication. Therefore, it allows users to access remote computers and interact in real time with minimal latency and no visible artifacts.

The Draco G-Flex series can maximize efficiency by multiplexing up to eight full HD channels over a single duplex fiber networked connection between KVM matrix frames. This is extremely important where limited cable runs are available for adding more sources or workstations. For applications where both fiber and copper are specified, the Draco G-Flex option is the perfect solution where localized connections can be distributed on traditional copper or fiber connections and shared connections can be accomplished over an IP gateway.

The Draco G-Flex matrix starts with 16 physical ports and eight gateway ports in 1RU. The series can be expanded up to 152 physical ports and eight gateway ports in 4RU. Systems are available in 1G copper, 1G fiber, 3G copper or 3G fiber. For systems needing a mixture of fiber and copper, the Draco G-Flex can be customized to fit almost any type of hybrid fiber/copper requirement.

In addition to the high level of security for data transmitted throughout IHSE’s KVM switching and extension systems, the Draco G-Flex features IHSE’s Secure Core technology, which prevents direct access to the data within the KVM system from the IP network. This maintains the integrity of the KVM system and serves as a countermeasure against potential cyberattacks.

With the Draco G-Flex series, you simply plug in the desired extender unit to an open port on the matrix and the built-in control system will automatically recognize the type of device and assign it as a source or destination device. This is accomplished through IHSE’s proprietary Flex-Port technology that provides instant switching capabilities for all the popular video formats and resolutions.

It is a simple operation to set up and configure via the on-screen display (OSD) or through tera tool, a free downloadable IHSE utility program for configuration and system management. For those who prefer a third-party control application, the Draco G-Flex can be configured to operate with many popular control systems using the optional IHSE API protocol package. Along with its compact design and low cost, the G-Flex incorporates features from the Draco tera enterprise series of switches, including SNMPv3, LDAPS, a multilingual on-screen display and encrypted communication for maximum security (for the API, the Draco tera configuration tool and Matrix Grid).

With a small footprint and ruggedized chassis design, the Draco G-Flex provides a space-saving solution where both centralized and remote access are desired. It is especially suited for mobile production, command-and-control systems, production studios and campus-wide classrooms.

46 Best of Show Awards 2023 | NAB Show
IHSE USA
FOR MORE INFO

FrameFormer

InSync Technology has revolutionized the process of broadcast standards conversion with FrameFormer, delivering uncompromised quality frame rate conversion in software running on COTS hardware, with a low operating cost. Valuable content in a global media market often needs to be shared around the world, with rights and production costs running into billions of dollars. In the past, large, power-hungry and expensive hardware boxes were needed to perform the necessary broadcast standards conversions.

FrameFormer is the new gold standard for frame rate conversion, delivering the highest quality, lowest cost and most flexible operation entirely in CPU-only software for high performance and low carbon impact. It is the world-leading motion-compensated standards converter that eliminates risk through ease of setup and auto-configuration, meaning new channels with different content types, whether sports, movies or news, can be established almost instantly without time-consuming manual adjustments.

When converting between broadcast frame rates, small objects in motion often produce obvious artifacts, a significant limitation of conventional converters. Drawing on its rich broadcast engineering heritage, InSync Technology developed a new suite of algorithms for motion-compensated frame rate conversion, providing very high-quality, real-time conversion for all content. FrameFormer does this entirely in CPU software on COTS hardware, on-premises or in the cloud, making it an ideal solution for fast-moving sports coverage, where any deficiencies in the frame rate conversion can lead to visible and disturbing artifacts, causing dissatisfaction among viewers and lost revenues.
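To make the conversion problem concrete, the sketch below computes, for a 50 to 59.94 fps conversion, which two source frames bracket each output frame and the temporal phase between them, the quantity a motion-compensated converter uses when projecting estimated motion. It illustrates the general technique only and is not FrameFormer’s algorithms.

```python
# Illustrative sketch (not FrameFormer's algorithm): for a 50 -> 59.94 fps
# conversion, find which two source frames bracket each output frame and the
# temporal phase between them. A motion-compensated converter uses this phase
# to project estimated motion vectors rather than simply blending the frames.
from fractions import Fraction

def conversion_plan(src_fps: Fraction, dst_fps: Fraction, n_out: int):
    plan = []
    for k in range(n_out):
        t = Fraction(k, 1) / dst_fps           # output frame time in seconds
        pos = t * src_fps                      # position on the source timeline
        i = int(pos)                           # previous source frame index
        phase = float(pos - i)                 # 0.0 = frame i, 1.0 = frame i+1
        plan.append((k, i, i + 1, round(phase, 3)))
    return plan

for out_idx, prev, nxt, phase in conversion_plan(Fraction(50), Fraction(60000, 1001), 6):
    print(f"output {out_idx}: interpolate between source {prev} and {nxt} at phase {phase}")
```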

Continuing development of the product has achieved ongoing refinement of the motion compensation algorithms while making the processing ever more efficient. The latest version of the software shows a 20% reduction in processor demands, making it even more affordable and available. Additionally, InSync has implemented the software for the highly efficient ARM processor, resulting in an even smaller carbon footprint and higher levels of efficiency.

FrameFormer is agile, adaptable, and very processor-efficient for cost and environmental savings. It fits seamlessly into modern workflows while delivering outstanding performance. As a CPU-only application, it can run as a standalone process in dedicated hardware, in a data center or in the cloud. Alternatively, it can be embedded in products and services from other vendors, thanks to the comprehensive SDK.

InSync’s collaboration with major global broadcasters has led to further significant improvements in the way small objects are handled in the conversion process. Head-to-head comparisons in real-world conditions showed FrameFormer coming out on top of the “gold standard” hardware converter in quality assessments, without considering FrameFormer’s operational and cost benefits.

In the highly contested, highly connected media world of today, broadcasters and production companies need to delight audiences in both quality and content. Achieving that requires innovative technology that is fast to set up, simple to operate, readily integrated into IP workflows, cost-effective and with a minimal carbon footprint. With FrameFormer, broadcasters and production companies can achieve all these goals.

47 Best of Show Awards 2023 | NAB Show
INSYNC TECHNOLOGY
FOR MORE INFO

INTERRA SYSTEMS

ORION Content Monitoring Suite

Interra Systems’ ORION Content Monitoring Suite is a complete quality monitoring solution for linear and OTT video delivery. The suite comprises ORION, for 24/7 confidence monitoring of linear/IP video; ORION-OTT, for streaming video; and OCM, for end-to-end, central management of all streams being processed and delivered.

ORION provides real-time monitoring of IP-based infrastructures, looking at all aspects of video streams, such as QoS, QoE, closed captions, ad-insertion verification, reporting and troubleshooting. For LIVE services and VOD assets, the ORION-OTT content monitoring solution brings these same capabilities to the multiscreen environment, where re-encoding, transcoding and multiplexing processes can adversely affect content integrity and the user experience. With ORION-OTT, OTT providers can verify QoS and QoE for ABR videos by checking for inconsistencies related to ABR package compliance, manifest and playlist syntax, download errors and content quality.

Together, ORION and ORION-OTT allow broadcasters and service providers to perform critical monitoring functions at scale, on thousands of services simultaneously, providing users with a single point of visibility and access to important information such as status, alerts, alarms, visible impairments, error reports, triggered captures, etc. Audio and video quality checks for the solutions include macroblocking, freeze and black frames, loudness, silence and levels, and support all popular audio formats, including Dolby.
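As a minimal sketch of two of those checks, black-frame and freeze-frame detection, the example below runs simple threshold tests on raw luma frames. The thresholds are placeholder values for illustration, not Interra’s tuned parameters or algorithms.

```python
# Minimal sketch of two of the checks described above, black-frame and
# freeze-frame detection, run on raw 8-bit luma planes. Thresholds are
# placeholder values, not Interra's tuned parameters.
import numpy as np

BLACK_LUMA = 20        # mean 8-bit luma below this counts as "black"
FREEZE_DELTA = 1.0     # mean absolute frame difference below this counts as "frozen"

def is_black(frame: np.ndarray) -> bool:
    return float(frame.mean()) < BLACK_LUMA

def is_frozen(prev: np.ndarray, curr: np.ndarray) -> bool:
    return float(np.abs(curr.astype(np.int16) - prev.astype(np.int16)).mean()) < FREEZE_DELTA

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    live = rng.integers(0, 255, (1080, 1920), dtype=np.uint8)
    print(is_black(np.zeros((1080, 1920), np.uint8)))   # True  -> raise a black alarm
    print(is_frozen(live, live.copy()))                 # True  -> raise a freeze alarm
    print(is_frozen(live, rng.integers(0, 255, (1080, 1920), dtype=np.uint8)))  # False
```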

For enterprise-wide visibility into QoE and QoS, the ORION Central Manager (OCM) enables central management of streams irrespective of probes’ geographic locations. OCM is the only web-based, enterprise-level solution that centrally monitors the health of probes, collects monitoring data and provides a global alert summary across the entire delivery chain. Relying on the comprehensive video insights provided by OCM, broadcasters and service providers can deliver video with a high QoE on every screen, improve operational efficiency and increase monetization.

In addition, as the broadcast world transitions toward IP infrastructure, having the capability to monitor the complex SMPTE ST 2110 workflow is essential. The most recent addition to the ORION suite, the ORION 2110 probe, monitors 2110 streams for both QoS and QoE, including the ST 2110 main and redundancy signals, NMOS-based ST 2110 feed discovery, Precision Time Protocol (PTP) messages for all Ethernet network interfaces, SDP protocol checks and monitoring density, meaning operators can ensure high quality and performance for SDI-IP streams and take full advantage of the ST 2110 standard. With the 2110 probe, video providers can monitor streams at ingest, preventing error propagation down the video processing chain. This dramatically improves QoS and QoE while avoiding costly service repairs.

The ORION Content Monitoring Suite is developed with the critical needs of quality, scalability and efficiency in mind for the now, new and next of video delivery. Interra Systems has taken a fresh approach to content monitoring by providing a single platform for monitoring all aspects of a video stream, including ads, closed captions, etc.

Backed by industry-proven video analysis accuracy and video QC, ORION is an innovation that can leverage the latest COTS CPUs and cloud platform for optimum efficiency and quality of experience.

48 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

GY-HC500UN, NDI-Enabled Handheld Broadcast Camera

JVC recently introduced its first-ever NDI-compatible broadcast cameras, including the GY-HC500UN Handheld Camera. Designed in direct response to systems integrator requests, the NDI|HX-compatible solutions ensure that broadcast studios, schools, churches and other facilities can seamlessly incorporate the cameras within their existing IP infrastructure, which reduces integration costs and complications. It also allows them to easily integrate a combination of JVC PTZ and broadcast camera solutions into one workflow, all of which can be remotely controlled over a standard IP network. This makes it possible for a wider range of facilities to produce network-caliber content and provide a more engaging experience for viewers.

The addition of these NDI|HX-capable handheld camera solutions to the JVC product lineup further strengthens the brand’s commitment to IP workflows. Further, the combination of SRT, Zixi and NDI|HX capabilities now available in the cameras ensures that broadcasters will be able to integrate the cameras into today’s diverse and evolving environment.

Additionally, current HC500 Series users can have their cameras upgraded by JVC to implement the new NDI|HX capabilities, rather than purchase new gear — further reducing costs while increasing functionality. With built-in streaming and connectivity options, the camera provides advanced low-latency video that allows users to stream live video to Facebook and YouTube, broadening the audience that can consume the content.

The NDI|HX connectivity adds to the camera’s super-reliable SRT or Zixi streaming functions, as well as the optional HEVC/H.265 encoding capability. Ideal for ENG and live broadcasts, the GY-HC500UN is equipped with the brand’s powerful communications engine that delivers high-quality, low-latency, error-free video right from the camera through its standard Ethernet connection.

Like all HC500 Series cameras, the GY-HC500UN has a variety of high-quality features that further enhance productions, such as a one-inch 4K CMOS imager; integrated 20x zoom lens with built-in ND filters and manual zoom, focus and iris control rings; a 4-inch, high-resolution LCD screen for menu navigation; and an LCOS viewfinder. The camera also has dual XLR inputs; 3G-SDI and HDMI video outputs; and an expansion slot for SSD (solid state drive) recording in 10-bit ProRes 422 at 4K resolution and 50/60p frame rates, when not in NDI|HX mode. For creative flexibility, it can additionally record other native 4K UHD and HD file formats to support a wide range of workflows in HLG or 10-bit J-Log modes for HDR footage, and 120 fps slow-motion HD. These features allow for high-quality video playback, so users can review previously recorded materials.

Everything necessary to produce quality broadcasts is included, such as the integrated 20x lens for advanced image quality and several media options, including SDHC, SDXC and SSD (with the optional KA-MR100G media adapter). With the appropriate media card, users can begin shooting, recording and streaming, or do direct-to-live production by setting the camera to NDI|HX.

49 Best of Show Awards 2023 | NAB Show
JVC PROFESSIONAL VIDEO
FOR MORE INFO

LiveU Studio is a fully cloud-native IP video live production solution, allowing the creation of additional content across a myriad of media channels. It can sit alongside existing primary content workflows and leverage deployed assets for extra production. From any web browser, journalists/content producers can create and control shows, applying flexible, easy-to-use features for video switching, graphics and audio, and managing remote guests, before distributing the content to up to 30 different publishing destinations.

LiveU Studio

LiveU Studio is the first cloud-native SaaS live video production service to natively support LRT ingest, offering a fully scalable, on-demand solution designed with digital distribution in mind; with a usage-based model, it enables digital audiences to be served in parallel to linear viewers, meaning content creators really can produce more for less.

Online content requires agility, with producers needing to be able to respond quickly to current events.

LiveU Studio is the world’s first cloud production service to natively support LRT (LiveU Reliable Transport) ingest, bringing the high-quality, low-latency resilience of the leading wireless video-over-IP protocol directly into the heart of users’ digital productions.

It provides a usage-based model that allows customers to easily and swiftly respond to higher, or lower, volumes, meaning they never pay for what they don’t need, as they are always in control. Being 100% cloud-native and browser-based means customers can be up and running from day one. Deployment at any time is very fast, meaning users can react quickly to events such as breaking news or failovers. It also means users can scale their live online presence at a moment’s notice with decentralised worldwide collaborators. With LiveU Studio, producers automatically have the latest version ready to go. Even new operators can ramp up very quickly, using intuitive vision mixing and creative templates, letting them focus on the story they want to tell, driving engagement. Designed to sit alongside a customer’s current capabilities, LiveU Studio is packed with features that add value to content, helping users shorten the time to publish, satisfying the constant desire for more content.

50 Best of Show Awards 2023 | NAB Show LIVEU
FOR MORE INFO

With the amount of video content reaching an all-time high, it’s no longer possible to manage the content manually. Manual processes create bottlenecks that slow down the production time. By adopting automation tools, media production companies reduce human error and make content available faster for linear and digital production teams.

LiveU Ingest

This is where LiveU Ingest comes in, an automatic recording and story metadata tagging solution for live video. As a hybrid cloud workflow solution, LiveU Ingest is powered by the LiveU cloud video platform, the hub for all things live. The LiveU Ingest workflow integrates at the preliminary stage of the NRCS, or similar system. The stories are associated with the relevant metadata and fed into LiveU Central. Field crews then choose the story they are about to cover from a drop-down list on the LiveU unit screen. Unlike conventional ingest workflows, the video is automatically recorded and stored on the LiveU Ingest portal and immediately synced with the associated metadata. The content can then be pushed manually or automatically into the MAM system. By trimming raw footage on LiveU Ingest, you cut down the size of footage that is transferred to the MAM, thereby saving on storage costs.

Never miss a thing! With LiveU Ingest, you have peace of mind knowing all your live content is being automatically recorded. All video feeds are instantly accessible via a cloud web portal, allowing both your field crews and production teams to view, trim, download and publish content from anywhere. Ingest is also compatible with other production tools, so you can edit and enrich videos as part of your existing workflow.
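A conceptual sketch of that story-to-clip flow is shown below: a recorded clip carries the story metadata chosen on the field unit, is trimmed, and only the trimmed portion is handed to the MAM. The class and field names are invented for illustration and are not LiveU’s data model or API.

```python
# Conceptual sketch of the ingest flow described above; class and field names
# are invented for illustration and are not LiveU's data model or API.
from dataclasses import dataclass

@dataclass
class Story:                 # pre-assigned in the NRCS/planning system
    slug: str
    reporter: str

@dataclass
class Clip:
    story: Story
    start_s: float
    end_s: float

    def trimmed(self, in_s: float, out_s: float) -> "Clip":
        # Trim before transfer so only the useful footage reaches the MAM.
        return Clip(self.story, max(self.start_s, in_s), min(self.end_s, out_s))

def push_to_mam(clip: Clip) -> dict:
    return {"slug": clip.story.slug, "reporter": clip.story.reporter,
            "duration_s": clip.end_s - clip.start_s}

raw = Clip(Story("city-hall-vote", "J. Doe"), start_s=0.0, end_s=1800.0)
print(push_to_mam(raw.trimmed(420.0, 540.0)))   # only the 2-minute usable portion
```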

51 Best of Show Awards 2023 | NAB Show LIVEU
FOR MORE INFO

Challenge:

Media companies need to deliver content more reliably, more efficiently, and with greater customization across new and emerging channels, including direct-to-consumer, free ad-supported streaming TV (FAST) and over-the-top (OTT) platforms. Organizations need to create multiple content versions to engage audiences with curated, regionalized, and platform-specific content with platform-specific ad signaling profiles — all at an unprecedented scale and with fewer resources.

Efficiently delivering tailored content around the world through traditional transport methods is unfeasible. Satellite and fiber lack the scale, agility and reach required to deliver sufficient channel variations across multiple platforms. Faced with shrinking satellite capacity and changing regulations, organizations need viable, reliable and scalable delivery mechanisms that provide cheaper, more flexible alternatives.

Many media brands have already adopted IP-based transport, but face several limitations. Standard internet routing architecture and protocol-only solutions fail to deliver high reliability and low delay. Traffic aggregation points can become easily overwhelmed, and with potential packet delay or loss, content providers need technology partners to help navigate and overcome the complexities of internet transport.

LTN LTN Wave

The internet’s underlying architecture does not support multicast, so any source location must send multiple copies of the same content to bring a single video feed to multiple endpoints. With unsophisticated IP solutions that lack business intelligence and inherent multicast capabilities, customizing and directing multiple content versions to fragmented audiences is incredibly challenging, with costs driven up by additional public cloud egress and processing fees.
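The cost of that unicast fan-out is easy to quantify with a back-of-envelope example; the feed bitrate and endpoint count below are hypothetical figures, not LTN measurements.

```python
# Back-of-envelope illustration of the unicast fan-out problem described above.
# Figures are hypothetical: one 20 Mbps contribution feed sent to 150 endpoints.
feed_mbps = 20
endpoints = 150

unicast_uplink_mbps = feed_mbps * endpoints    # the source must emit one copy per endpoint
multicast_uplink_mbps = feed_mbps              # network fan-out: the source emits one copy

print(unicast_uplink_mbps)     # 3000 Mbps of egress from the source
print(multicast_uplink_mbps)   # 20 Mbps when the transport network replicates the stream
```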

Solution:

Built on LTN’s intelligent, multicast-enabled, fully managed IP transport network that delivers < 200ms latency and high reliability (five nines plus), Wave provides a flexible, scalable and reliable video transport solution that de-risks satellite migration with end-to-end management and automatic, stress-free changeover, while supporting next-generation possibilities such as content replacement and custom ad trigger profiling.

Fully interoperable with public cloud environments and various first- and last-mile protocols and technologies, Wave integrates seamlessly with existing infrastructure while flexibly harnessing other solutions such as third-party encoders, decoders and hardware or software infrastructure. LTN’s open and agnostic network strategy means content can be acquired in a single format and delivered in multiple different formats, over common third-party protocols, and in and out of public clouds, as needed.

Underpinned by LTN’s proprietary native multicast network with built-in packet recovery and routing protocols, Wave gives industry-leading reliability and high SLAs, while enabling complex business and licensing rules.

Result:

Wave offers an intelligent, flexible and cost-efficient means of reliably distributing multiple versions of content to any destination around the world, enabling media owners to maximize reach, monetization and ROI.

Wave makes the transition to IP-based video transmission easy, trusted and efficient, empowering broadcasters to future-proof their distribution model without technology headaches or heavy CapEx investment.

Wave simplifies complex video transport workflows to drive operational efficiency and scale, while granting complete visibility and control to help customers achieve business goals. Underpinned by proven, ultrareliable network performance and 24/7 expert TOC support and monitoring, Wave enables media companies to focus on content and audience growth while achieving total peace of mind.

52 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

LucidLink Filespaces is a cloud-native SaaS solution: a high-performance cloud file service for distributed workloads providing access to massive files, from terabytes to petabytes of data, from anywhere. It fundamentally changes the way organizations handle the challenge of sharing large files over long distances by enabling access to huge data sets across globally distributed locations, solving the problem of immediate file access. Creatives easily collaborate on files in real time from any location. Because LucidLink works with any application, including NLEs, MAMs, DAMs and DAWs, it fits seamlessly into creative workflows.

LucidLink does away with synchronization and streams data on-demand directly from the cloud as needed by the application. The single source of truth is kept in the cloud. Regardless of size and location, LucidLink grants you the data you need, when and where you need it, all through simultaneous file streaming.
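The schematic sketch below contrasts that on-demand model with whole-file synchronization: only the blocks an application actually reads are fetched and cached. The block size, class and fetch function are illustrative assumptions, not LucidLink internals.

```python
# Schematic sketch of on-demand streaming versus whole-file sync. The fetch
# function and block size are illustrative only and do not represent
# LucidLink internals.
BLOCK = 4 * 1024 * 1024   # 4 MiB blocks

class OnDemandFile:
    def __init__(self, size: int, fetch_block):
        self.size = size
        self.fetch_block = fetch_block          # e.g. a ranged GET against object storage
        self.cache: dict[int, bytes] = {}

    def read(self, offset: int, length: int) -> bytes:
        out = bytearray()
        for blk in range(offset // BLOCK, (offset + length - 1) // BLOCK + 1):
            if blk not in self.cache:           # pull only the blocks the app touches
                self.cache[blk] = self.fetch_block(blk)
            out += self.cache[blk]
        start = offset - (offset // BLOCK) * BLOCK
        return bytes(out[start:start + length])

if __name__ == "__main__":
    terabyte = 1 << 40
    f = OnDemandFile(terabyte, fetch_block=lambda blk: bytes(BLOCK))  # dummy data source
    f.read(500 * BLOCK + 123, 10_000_000)       # an NLE scrubbing into a 1 TB file
    fetched = len(f.cache) * BLOCK
    print(f"bytes transferred: {fetched:,} of {terabyte:,}")  # a few MiB, not the whole file
```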

FileSpaces

LucidLink Filespaces provides content creators instant access to media assets in the cloud as a high-performance cloud file system. Designed for rapid data access over distance, huge media files are immediately accessible from any location, on any machine. By streaming data on demand, LucidLink reduces the need for large network and storage resources bearing the weight of constant file synchronization and replication. Even with content being collaborated on simultaneously, the cloud remains the “single and centralized source of truth,” eliminating the need for file downloading or syncing.

A Filespace is a shared global namespace that acts like any other high-performance NAS even though the data is hosted in the cloud. It mitigates the effects of latency with cloud as primary storage, like an extension of your hard drive, making terabytes+ of data instantly accessible from anywhere. LucidLink’s solution leverages cloud storage with functionality like an on-prem NAS. Advanced capabilities include direct read/write access from the cloud, immutable snapshots, user-access control and enhanced global file locking.

Built as software that runs on the endpoint and delivered as SaaS with no virtual or physical middleware or appliance, LucidLink works with any S3-compatible cloud object storage, Microsoft Azure Blob and any major operating system. By leveraging cloud-based storage, LucidLink brings the functionality you’d expect from an on-premises NAS.

With end-to-end Zero-Knowledge and client-side encryption, LucidLink maximizes security on operating systems including macOS, Windows and Linux. With LucidLink Filespaces, content creators can rapidly access files and collaborate with team members in real time, from any location — eliminating the need to download or sync files and boosting productivity. Using LucidLink Filespaces, the world’s top creative teams are working together like never before. In addition to the multi-award-winning animated film “The Boy, the Mole, the Fox, and the Horse,” LucidLink Filespaces was used for remote production on the hit FX show “The Bear,” which won awards across the Golden Globes, Screen Actors Guild and Critics Choice. LucidLink is scaling up and streamlining creative workflows across the industry and enabling never-before-feasible workflows on a global scale.

53 Best of Show Awards 2023 | NAB Show LUCIDLINK
FOR MORE INFO

LUMENS INTEGRATION INC.

OIP-N40E/OIP-N60D

Lumens is introducing the world’s first IP encoders and decoders to support Dante AV-H, NDI|HX3 and SRT, in addition to other popular streaming protocols.

While there is broad consensus that broadcasters and production teams will increasingly migrate to video-over-IP workflows, there is currently no single dominant format that meets the needs of every production. Presently, the industry relies on a range of standards to fulfill specific functions, whether it’s for collaborative content creation, remote production, local video transmission or live streaming.

This multiplicity of technologies means, on the one hand, that technicians can implement a best-of-breed approach to each workflow. On the other hand, most broadcast hardware (such as cameras, monitors, recorders and playback devices) is designed primarily for baseband (SDI or HDMI) workflows or to support a very narrow range of IP protocols. This can make it difficult to integrate legacy technology with IP workflows.

IP converters are therefore essential to bridge between baseband and IP networks, but until now, existing devices have been limited to encoding and decoding either a single format or a restricted range of protocols. This has required broadcasters to invest in a variety of decoders and encoders, each designed for a particular IP workflow. Lumens OIP-N encoder and decoder series launched at the NAB Show with incredibly broad video-over-IP format support in a single box, including RTSP, RTMP, SRT, NDI|HX3, NDI|HX2 and Dante AV-H standards.

This flexibility means that for many, the OIP-N series is the all-in-one encoder/decoder pair that will meet all their IP conversion needs. OIP-N is the tool that will instantly connect baseband equipment to diverse IP networks. For example, OIP-N60D decoders will enable legacy recorders, monitors and switchers to receive YouTube RTMP streams, RTSP video from IP cameras, live feeds from NDI PTZ units or incoming SRT streams from remote contributors. Conversely, the OIP-N40E unit can encode the output of virtually any digital camcorder or switcher for local transmission over NDI and Dante AV-H networks. It can also stream content live to remote audiences over a CDN (content delivery network) or via a production solution such as vMix, OBS or Telestream.

Lumens’ pocket-sized encoders and decoders will be essential for broadcast technicians and kitrooms that need a portable and adaptable IP solution. The OIP-N decoder even supports IP to USB (UVC) conversion enabling IP streams to be converted for use with video conferencing software, making it suitable for use by journalists in the field and foreign correspondents.

In the studio and in production departments, OIP-N technology is optimized for fast installation and simple browser-based network management. Supporting PoE for easy integration, OIP-N60D even supports video wall output, perfect for multi-source monitoring and dynamic AV display. Designed for optimal video quality, OIP-N operates at very low latency, making it suitable for demanding live production environments.

54 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

MANIFOLD TECHNOLOGIES

Manifold Cloud

Manifold Cloud is a broadcast live production infrastructure software product that runs on COTS FPGA programmable acceleration cards (PAC) from various manufacturers such as arkona technologies, Bittware and Prodesign.

At its core, Manifold Cloud is service-oriented software that utilizes an on-demand configurable pool of shared COTS resources (PACs) allocated within a private cloud environment. Hardware resources are pooled together in CLUSTERS, which can be thought of as Virtual Private Clouds or, put differently, as a broadcast production. Typically a cluster has a fixed purpose for a limited time, such as “the 6 PM news” or “the Saturday afternoon football game.” Multiple clusters can be operated at the same time, each with different services and users operating on a shared resource pool from one or more data centers.

A Manifold Cloud CLUSTER has two main components: sources and services. Sources can be audio, video or metadata from IP or SDI and are assigned to a cluster from, for example, an NMOS registry. Inside each cluster run multiple SERVICES.

Manifold Cloud offers a number of different live production SERVICES such as compression, multiviewing, routing, audio and video mixing, color correction and color space conversion, etc. These services are generic in the sense that they run on all supported PACs regardless of manufacturer and are automatically instantiated to assigned PACs by Manifold Cloud.

Operations are presented to users as services through a single-sign-on web UI. For example, a service could be a multiviewer output “head,” an up/down/cross converter instance or a JPEG XS encoder. Multiple services can, of course, run at the same time, and each cluster consists of the services that make sense for that application.

Services usually have one or more inputs of audio/video/metadata and one or more outputs of the same. For example, an up/down/cross converter might have one video and metadata input and one video and metadata output (in another video format). Service outputs automatically become available as new sources, which can then be routed to other services or out of the cluster.
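A small data-model sketch of that cluster/source/service relationship is shown below, with a service’s outputs registering as new routable sources. The class names and example services are invented for illustration and are not Manifold Cloud’s API.

```python
# Data-model sketch of the cluster/source/service relationship described
# above. Class and service names are invented for illustration and are not
# Manifold Cloud's API.
from dataclasses import dataclass, field

@dataclass
class Source:
    name: str
    kind: str                      # "video", "audio" or "metadata"

@dataclass
class Service:
    name: str
    inputs: list[Source]
    outputs: list[Source]

@dataclass
class Cluster:                     # e.g. "the 6 PM news"
    name: str
    sources: list[Source] = field(default_factory=list)
    services: list[Service] = field(default_factory=list)

    def add_service(self, svc: Service):
        self.services.append(svc)
        self.sources.extend(svc.outputs)   # outputs become routable sources

news = Cluster("6pm-news", sources=[Source("cam1-1080i", "video")])
news.add_service(Service("up-convert-cam1",
                         inputs=[news.sources[0]],
                         outputs=[Source("cam1-2160p", "video")]))
news.add_service(Service("multiviewer-head-1",
                         inputs=list(news.sources), outputs=[]))
print([s.name for s in news.sources])   # ['cam1-1080i', 'cam1-2160p']
```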

Manifold Cloud is inherently resilient and will reroute services as required upon failure.

In short Manifold Cloud presents a user with all of the benefits of “Cloud” — scalability, COTS, automatic provisioning (service-focused), resiliency and efficiency — while still supporting the largest live uncompressed workflows with subframe latency allowing for true Tier-1 productions at scale.

Manifold Cloud represents a watershed moment in broadcast production technology with its many “firsts.”

It’s the first product to utilize commercially available Programmable Acceleration Cards (PACs), providing unparalleled density, lower cost and the benefits of commodity compute while still retaining the performance required for Tier-1 uncompressed live production workflows. For example, a 1RU server can process 512 uncompressed 3G videos for multiviewing.

It’s the first live production core infrastructure product with a “service” focus, where operators, using a single-sign-on web UI, simply request the service needed. Manifold Cloud abstracts both hardware and software so that features and functionality are presented as individual services to the user.

55 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

MARSHALL ELECTRONICS

CV374 Compact Network POV Camera

Marshall introduced its CV374 Compact POV Camera at the NAB Show, featuring low-latency NDI|HX3 streaming as well as standard IP (HEVC) encoding with SRT, while also offering an HDMI simultaneous output for traditional workflows.

The CV374 compact NDI|HX3 camera contains a new Sony 4K (UHD) sensor. The camera can be set to NDI|HX3, NDI|HX2 as well as standard IP with H.265 and SRT settings, with a simultaneous HDMI output for traditional processing or switching equipment.

The camera is designed with flexible features including interchangeable lenses, multiple broadcast frame rates, remote adjustability and very discreet and durable bodies made of lightweight aluminum alloy with rear I/O protection wings. CV374 features a CS/C lens mount providing a wide selection of lenses from which to choose.

The new NDI|HX3 format requires slightly higher bandwidth than the previous NDI|HX2, but much less than is required for full NDI. NDI|HX3 delivers low latency similar to full NDI, at less than 100ms end-to-end, with video quality performance closer to premium full-NDI lossless video. NDI|HX3 is a big step forward for NDI|HX, reducing the bandwidth required compared with full NDI while delivering similar speeds and video quality.

The CV374 joined the CV370 compact cameras as well as the CV570/CV574 Miniature Cameras in making their show debut, as Marshall continues to evolve its product offerings with the latest NDI codecs for ultra-low latency even over challenging network bandwidth.

Ahead of the show, the company looked forward to “introducing these new POV cameras ... further expanding the Marshall POV lineup of state-of-the-art cameras,” said Tod Musgrave, director of cameras for Marshall Electronics. “NDI has been a great success for Marshall with a variety of options from which to choose, including zoom and full PTZ models. Adding NDI|HX3 and IP to our miniature POV camera line was inevitable and doing so with high-end UHD sensors is a very exciting development.”

56 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

Tier 1 live production in the cloud requires frame-accurate, deterministic, low-latency, redundant and responsive interconnected systems at large scale. So far, there have been no cloud solutions that satisfy those requirements without compromising quality, latency and reliability.

Instead of simply shifting on-premises workflows to the cloud — thereby giving up some quality and latency — Matrox ORIGIN tackles the problem at the infrastructure level.

This disruptive technology is a software-only, vendor-neutral, asynchronous framework that runs on IT infrastructure. It can achieve highly scalable, responsive, low-latency, easy-to-control and frame-accurate broadcast media facilities for both on-premises and cloud deployments.

What makes Matrox ORIGIN disruptive?

• Asynchronous processing of uncompressed video for live production.

• Cloud-native, not a “lift and shift.”

• Operates on a single host or across multiple hosts within the distributed environment, making it equally effective on-premises as in the cloud.

• Vendor-neutral, so users can choose best-of-breed components from anyone without being locked into a specific ecosystem.

• Built-in, frame-accurate redundancy and live migration, even across multiple AWS Availability Zones.

• Redundancy requires no user intervention.

With the Matrox ORIGIN as the underlying infrastructure, developers can focus resources on what differentiates them. Their products will run equally well on a single host or in distributed systems on-premises or in the public cloud. They can develop once and deploy many times. Meanwhile, broadcasters can operate, build and develop scalable, best-of-breed solutions for public or private clouds without being restricted to a particular vendor. Broadcasters can make better use of their on-premises resources, offload peak needs into the cloud, run exclusively in the public cloud — or all of the above — at whatever pace makes sense for their business.

Unique features and benefits:

Asynchronous — Matrox ORIGIN operates asynchronously to process and interconnect uncompressed data as fast as possible and as soon as possible, removing all delays associated with synchronous interconnects. This enables low-latency, uncompressed and highly responsive systems that make large-scale, Tier 1 live production in the cloud possible.
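The toy sketch below illustrates that asynchronous, as-soon-as-possible idea: each frame’s processing is dispatched immediately and forwarded the moment it completes, with a separate re-sequencing step restoring frame order. It is only an illustration of the general approach, not the Matrox ORIGIN framework or its APIs.

```python
# Toy asyncio sketch of asynchronous, as-soon-as-possible frame processing,
# as opposed to waiting for a synchronous house clock. This illustrates the
# idea only and is not the Matrox ORIGIN framework.
import asyncio, random

async def process(frame_id: int) -> int:
    await asyncio.sleep(random.uniform(0.001, 0.005))   # variable per-frame work
    return frame_id

async def pipeline(n_frames: int) -> list[int]:
    # Every frame is dispatched immediately and forwarded the moment it is done,
    # instead of being held for the next tick of a synchronous clock.
    tasks = [asyncio.create_task(process(i)) for i in range(n_frames)]
    completed = [await t for t in asyncio.as_completed(tasks)]
    return completed

order = asyncio.run(pipeline(16))
print(order)            # completion order, not source order
print(sorted(order))    # a frame-accurate stage re-sequences before output
```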

Single-Frame Control — Matrox ORIGIN provides simple, granular control of a single frame. Any single unit can be frame-accurately routed or processed anywhere within the distributed and nonblocking environment of the Matrox ORIGIN framework — resulting in great flexibility with guaranteed AV synchronization that hasn’t been possible before.

Integrated Clean Routing and Switching — This is possible because Matrox ORIGIN controls every frame. Signal-path compensation delays are no longer relevant, and any frame can reach any destination frame-accurately on a large-scale, uncompressed and distributed fabric.

On-Air Scalability — Matrox ORIGIN can provision or decommission compute to closely match dynamic operational processing needs with infrastructure costs — while on the air. It can live-migrate software processing in runtime without dropping a single frame or disrupting the control system.

Built-in Redundancy — Matrox ORIGIN provides the infrastructure to develop and operate stateless media-processing services with granular protection of every frame. The framework manages redundancy and requires no additional intervention. It also supports redundancy across multiple AWS Availability Zones to address mission-critical resilience requirements.

Simple APIs — So developers can build best-of-breed offerings for broadcasters to choose from.

57 Best of Show Awards 2023 | NAB Show
FOR MORE INFO
MATROX VIDEO Matrox ORIGIN

MATTHEWS STUDIO EQUIPMENT Litemover

Litemover is the first off-the-shelf automated universal remote head made for adjusting light or reflector fixture positioning from the ground. Designed by working gaffer Erno Das, Litemover is component-based to accommodate light fixtures and reflector boards weighing up to 220 pounds/100 kilos from popular brands like ARRI, Cineo, Creamsource, Lightbridge, Matthews and more. With an easy and familiar interface and intuitive controls, Litemover is made to increase accessibility, save time and add safety on set by eliminating the need to climb high ladders to aim a light.

58 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

MATTHEWS STUDIO EQUIPMENT Air Climber

Matthews Studio Equipment, known for standard-setting grip gear for location and studio work, introduces the first off-the-shelf, modular grip and lighting stand that reaches 25 feet/7.62 meters. Air Climber uses pneumatics to be safely raised and lowered via a regulator switch and air compressor.

A large leveling platform supports the telescoping column (mast). Complete with eight risers, the unique design employs seven locking collars with tension control and a frictional locking system for each section. A clockwise turn of the handgrip engages a metal band that braces each riser section and safely tightens and locks the riser tube.

Individual riser tubes may be locked in intermediate positions so the user can enjoy precision accuracy when raising and lowering. The system’s pan ring allows 360 degrees of mast rotation without the need to completely lower the system — a valuable time-saver.

The large leveling dolly platform base comes equipped with four telescoping legs for footprint adjustment (maximum 8x8 feet/2.44 x 2.44 meters, collapsed 5x5 feet/1.52 x 1.52 meters) and four heavy-duty jacks with 14-inch/35.5-centimeter reach to level out the platform. The eight-section telescoping column can be easily removed from the dolly with a single wrench or can be left built and ready to transport in the back of the truck.

The rotating base at the bottom of the column (or mast) offers a Lock and Pan Wheel around it to ease fixture positioning. Four rugged tires with brakes and security struts assure safe and smooth movement and secure lockdown.

With a maximum height of 7.62 meters/25 feet, a minimum height of 1.98 meters/6.5 feet and a loading height of 4.5 feet, the system offers a hefty load capacity of 200 pounds/91 kilograms.

59 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

MEDIA LINKS

Xscend — IP Media Platform

Xscend is our all new high-density, versatile media transport platform, designed for the network edge as a reconfigurable, evolvable, IP media migration gateway. Xscend can transmit and/or receive up to 128 high-quality media and data services across both managed and unmanaged (open internet) networks.

Xscend’s uniqueness lies in its density and software upgrade flexibility to accommodate current and evolving industry advances in network protocols, user workflows, physical interfaces, compression codec algorithms and/or video formats, resulting in an adaptable and scalable IP media transport platform that can address a wide variety of diverse use cases.

Application flexibility is the hallmark of Xscend’s new generation design, delivering a variety of capabilities in a small 2RU footprint. The platform addresses the migration from SDI-to-IP and IP-to-IP environments along with high-density, low-latency remote/distributed production applications, including Ground to Cloud connectivity.

Xscend’s distinctive combination of modularity, density, breadth of media/network interfaces, flexibility and configurability set it apart. All in a compact, economical, power-conserving footprint designed with a hybrid hardware/software architecture to accommodate evolving and emerging technologies, including the ability to adjust, traverse and scale to ever-changing media-over-IP network transmission demands. Standards-compliant processing to today’s requirements, including ST 2022-2/6, ST 2110-20/22/30/40, JPEG2000, JPEG-XS and VSF TR-01/07/08, along with the capability to implement tomorrow’s, ensures media exchange interoperability, providing users with an IP media platform that has the workflow flexibility, processing horsepower and bandwidth headroom necessary to support current advances as well as those to come.

60 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

Miller Tripods recently released its new SkyX 8 fluid head. Designed for outdoor broadcast and studio productions, the SkyX 8 is versatile enough to use over a wide array of camera configurations.

MILLER TRIPODS

SkyX 8 Fluid Head

The SkyX 8 replaces the popular Skyline 70. Delivering 16 positions of stepped counterbalance, with CB Plus and a 120mm-travel large Euro camera plate, SkyX 8 offers fast, repeatable counterbalance. The 7+0 positions of the pan-and-tilt fluid drag system employ the Miller “right feel” smooth-start and soft-stop technology, while the precise floating pan-tilt caliper locks ensure bounce-free on-off performance. Constructed of lightweight high-strength die-cast alloy and rigid composite polymers, the SkyX 8 is extremely durable and robust for rugged outdoor shooting conditions, delivering silky-smooth pan-tilt fluid actions and symmetrical diagonals to match camera payloads up to 40 kilograms (88 pounds). It employs precision heavy-duty ball bearings to ensure long, trouble-free usage. Equipped with a 150mm claw ball, rugged dual telescopic handles, two side-mounting points for viewfinders and accessories, plus a Mitchell-adaptable base, operators will have everything they need for any environment.

Incorporating advanced precision fluid drag and counterbalance controls operated from a rear-mounted, all-in-one location, with unique illumination of all controls and the bubble level, the SkyX 8 makes it even faster and easier to set up and capture the action.

Each SkyX 8 comes with a full Gold three-year factory warranty, which reflects Miller’s confidence in the product. Shipping will start in July 2023.

61 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

HoneyBadger is a new bulk fiber transport platform that brings several classic MultiDyne strengths together into one common platform with exceptional signal density for HD, 3G (quad-link 4K) and 12G (single-link 4K) productions.

Ideal for stadiums, arenas, campus and venue-wide signal extension, metropolitan intra-facility connections and classic point-to-point links between trucks, control rooms and studios, HoneyBadger offers an expansive, unparalleled feature set for production-based fiber transport. With support for eight camera feeds and SDI return channels — both expandable through HoneyBadger’s modularity — MultiDyne’s latest innovation takes several decades of bulk fiber transport product design experience into account.

MULTIDYNE

HoneyBadger

With HoneyBadger, there are no limits for local signal connectivity and extension, thanks to its high I/O density and two independent 1 Gb local-area network (LAN) extensions. The latter enables IP connectivity over single-mode fiber strands. As users can also extend four partyline intercom channels (wet or dry), eight bidirectional line-level analog audio outputs and eight mic-pre inputs with phantom power over two cost-efficient single-mode fibers, customers can manage all long-distance bulk fiber transport needs from one box. That also includes analog trilevel or bilevel genlock outputs, legacy GPIO/serial control signals and more.

HoneyBadger is an ideal field fiber solution for a new generation of content producers faced with a broader array of formats, signals and connectors than ever while seeking to bridge the gap between fiber and IP. It scales to serve the expanse of any production workflow or requirement, so that everything the content producer needs for multi-camera or multi-announcer productions exists within its design. A typical HoneyBadger application for live production will employ a 5RU remote unit for media contribution and transport and a 4RU unit at the receiving location. They both provide standard connectivity for full-size BNCs for video, XLRs for audio, and terminable Phoenix connectors for serial data and GPIO control signals.

Speaking to the product design’s aesthetics, HoneyBadger is essentially an active broadcast junction box that is easy to transport, manage and maintain. It installs comfortably into an actual JBT junction box, which is typically a stainless-steel, wall-mounted enclosure that facilities build into the architecture. Built with streamlined architectures and quick, simple connectivity in mind, HoneyBadger is perfect for customers who want an all-in-one signal transport solution that technicians can access and plug into for hassle-free, quick-launch productions.

62 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

APE Advanced Power Extenders

MultiDyne’s new APE Advanced Power Extension Line broadens the possibilities of signal transport and power extension for 12- and 24-volt fiber camera systems. MultiDyne developed the APE family, featuring the HUT-APE and SilverBack-APE, to meet the needs of DC-powered cameras from leading vendors. Both devices are plug-and-play with automatic camera recognition.

The HUT-APE frees camera chains from the limitations of hybrid copper and fiber cabling by enabling cameras to be separated from their CCUs. It achieves this by tricking the camera and CCU into seeing a physical copper connection. Content producers can now use affordable, conventional single-mode fiber, which brings added benefits such as improved performance (no RF, EMI or grounding issues), accelerated set and strike times, and reduced weight for transport on OB trucks and within flypacks.

The HUT-APE offers long-range power for DC-powered systems, broadening the range of camera manufacturers and types now supported. Content producers can pair a HUT-APE with the latest high-end SMPTE studio cameras from Grass Valley, Panasonic and Sony, and provide power from up to 3 kilometers away over SMPTE hybrid fiber. The HUT-APE can be paired with a companion throwdown power converter to provide 12- and 24-volt accessory power to lights, monitors and other production equipment in places where local power isn’t available. APE products supply up to 325 watts of power, leaving plenty of room to power camera accessories.

The chain gets even more interesting when extending capability to include MultiDyne SilverBack fiber camera adapters. MultiDyne's latest SilverBack-V and SilverBack-VB camera adapters specialize in converting digital cinema cameras into SMPTE studio cameras for live multi-camera productions, adding a cinematic feel to sports, TV and worship content. The SilverBack-APE includes multiple outputs to match with camcorders, PTZ or digital cinema cameras from any manufacturer, including the 24-volt ALEXA 35 camera from ARRI and the RED V-RAPTOR XL — both very popular in the film production community.

Seamless connectivity between APE extenders and the MultiDyne VB Series of signal transport products brings additional user benefits. MultiDyne's VB Series is a range of highly configurable fiber transport solutions that allow users to support a broad spectrum of signal transport combinations over long distances. MultiDyne builds modular VB Series products to specification, allowing customers to populate its chassis with various video, audio, data and Ethernet cards. The APE family represents the most substantial remote powering systems from MultiDyne to date, at nearly 10 times the distance of its predecessor. Using its dual connectivity capability, the APE's impressive power output can be used for both a camera and a VB Series fiber throwdown for signal transport to a studio, control room or video village.

APE Extenders have user-selectable voltage outputs ranging from 5 to 24 volts to meet almost any camera powering requirement, along with those of related camera accessories. It is a versatile power supply system, and with connectivity to MultiDyne VB Series products, customers can easily design and build their ideal fiber transport networks with a granular, building-block approach.


NEP GROUP TFC Link

TFC Link is NEP Group's software-defined network (SDN) product designed to seamlessly configure and manage network flows between broadcast facilities using point-to-point technology. A meta-aware control system, TFC Link orchestrates end points, supervises workflow and constantly feeds metrics back to the network controller to optimize bandwidth.

Purpose-built for the broadcast industry, TFC Link was developed by a team of NEP's new-generation engineers, strengthened by the company's four decades of experience supporting live productions around the globe. That experience led to the vision of an SDN tool hosted on the same platform as broadcast control, providing a completely symbiotic relationship between broadcast systems and network infrastructure. To ensure seamless communication, they used NEP's TFC platform, a web-based software tool used to manage, monitor, connect and control broadcast systems, networks and infrastructure. TFC consolidates these critical functions in one platform.

Rather than disabling switch vendor features or replacing established protocols, TFC Link provides over-the-top management that delivers non-blocking objectives — and currently supports both Arista and Cisco switches.

TFC Link has three critical components:

• Configurator — Intuitive device discovery with network configuration and visualization.

• Path Engine — Real-time network optimization allowing paths to prioritize individual feeds, load balance or evenly distribute flows.

• Monitor — Ensures compliance with switches and monitors bandwidth, health, heat and more.

Its major features include:

• Network configuration, snapshots and roll-back functionality.

• Deterministic and real-time routing of both high-bandwidth media flows and data service provisioning.

• Northbound API with drivers for other well-known broadcast controllers like VSM or Cerebrum (Ember+); a hypothetical sketch of such a call appears at the end of this section.

• User rights management built on SAML2 authentication to Active Directory, including Federation Services and the ability to use single sign-on across organizational boundaries.

• Monitoring and alerting service integrated with NEP's 24-hour service desk.

• Device discovery and configuration control.

It all comes together through the easy-to-use TFC application, which allows routine tasks to be easily managed on site by any broadcast engineer, without requiring specialty network skills or knowledge. In the backend, it relieves the complexity of IP network infrastructure through intelligent use of software design and system automation.

With smart use of IP infrastructure, coupled with powerful software, TFC Link allows for efficiencies during deployment. Its capacity to automate load balancing and manage network flows ensures the most efficient use of bandwidth. This allows for great reductions in the volume of cabling and other networking infrastructure commonly needed to manage large broadcast compounds.

TFC Link has key network security features built in, including robust monitoring services. It allows priority flows to be tagged and automatically protected during live broadcast, mitigating risk should there be a link failure during the show. Although it autodetects and configures new devices added to the network, it quarantines them until they have been authenticated by an authorized engineer, before their switch ports are enabled for routing.
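To illustrate the kind of request a broadcast controller might make against a northbound API such as the one described above, here is a purely hypothetical sketch. The endpoint, host, field names and response shape are assumptions for illustration only and are not NEP's published API.

    # Hypothetical example: a broadcast controller asks an SDN's
    # northbound API to route a media flow from a sender to a receiver.
    # Endpoint, fields and host are invented for illustration only.
    import json
    import urllib.request

    route_request = {
        "sender": "cam-04.stadium-a",        # hypothetical device names
        "receiver": "mv-12.ibc-gallery",
        "bandwidth_mbps": 1500,              # e.g. an ST 2110-20 UHD flow
        "priority": "protected",             # request automatic protection
    }

    req = urllib.request.Request(
        "https://sdn-controller.example/api/v1/routes",   # placeholder URL
        data=json.dumps(route_request).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))   # e.g. {"route_id": "...", "state": "active"}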

TFC Link proved its value during its debut at a major international sporting event in the fall of 2022 where it supported network orchestration and management across eight venues in five cities, as well as the International Broadcast Center.


NET INSIGHT Trust Boundary Appliance

The Trust Boundary Appliance is compact and cost-effective, conforms to SMPTE RP 2129 recommended practices and offers full media monitoring capabilities.

Net Insight has launched a market-first, easy-to-integrate appliance to solve common challenges within IP media networking. The Trust Boundary Appliance provides an “all-in-one” way of safeguarding IP media transport across networks. The solution delivers the capabilities broadcasters, production companies and enterprises require to make their use of IP media seamless, secure and cost-efficient.

Net Insight’s Trust Boundary Appliance is the first security and control point in the market that is specifically designed for media flows running bandwidth up to 40 GB. It contains the market-proven Net Insight IP Media Trust Boundary technology in an offthe-shelf, cost-effective and easy-to-use device. As such, it removes the need to deploy generic costly IT firewalls and solutions that are not fit for purpose for any organization moving video over IP. The solution delivers greater security at a lower cost than a traditional firewall because it enables IP media to pass

through a Real-Time Transport Protocol (RTP) media proxy.

The Trust Boundary Appliance terminates a media flow at the network boundary and re-establishes it at the destination network, without disrupting other active IP media flows. In this way, it prevents outages and security risks, including hijacking and spoofing, while preserving the integrity of established flows. Enabling the monitoring and assurance of the IP media payload as it passes between networks with ETR 101/290 P1 performance metrics, frozen frames and audio silence.
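As a rough illustration of what "terminating and re-establishing" a flow means in practice, the following minimal Python sketch relays RTP packets arriving on a boundary interface out toward the destination network. It is a conceptual toy, not Net Insight's implementation; addresses, ports and the allow-list are placeholders.

    # Conceptual RTP relay: packets are terminated on the boundary
    # interface and re-originated toward the destination network.
    # Addresses/ports are placeholders; a real appliance adds filtering,
    # monitoring and protection around this basic idea.
    import socket

    BOUNDARY_LISTEN = ("0.0.0.0", 5004)       # where the upstream flow arrives
    DESTINATION     = ("10.20.30.40", 5004)   # receiver in the trusted network

    ingress = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    ingress.bind(BOUNDARY_LISTEN)

    egress = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    while True:
        packet, source = ingress.recvfrom(2048)
        # Only forward traffic from the expected sender (crude allow-list).
        if source[0] == "192.0.2.10":
            egress.sendto(packet, DESTINATION)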

The Net Insight Trust Boundary is fully compliant with SMPTE RP 2129 and delivers several mission-critical features in one single platform, such as real-time media firewall, IP media monitoring, media protection, traffic control and flow replication.

The Trust Boundary Appliance comes preconfigured, removing the complexity of non-media-specific solutions, and can be deployed in combination with any enterprise firewall to deliver carrier-grade media capabilities to media and enterprise industry players.

Broadcasters' and production companies' transition to IP is hindered by the complexity of connecting different media domains while keeping media secure. With the launch of the Trust Boundary Appliance, Net Insight addresses these fundamental challenges, giving customers full control over their networks. With a single open platform that integrates with any IP media network, Net Insight leads the way to industry standardization, continuing its effort to make the transition to IP truly simple, seamless and secure.

The Trust Boundary Appliance addresses the need to manage the edge between different network domains, such as studio and operator, to secure IP media operations. The solution gathers all needed IP media address translation, monitoring, traffic control and security functions into one product, simplifying IP media operations using technologies such as ST 2110, ST 2022 and NMOS.

First product on the market to be fully compliant with SMPTE RP 2129.


NEVION VideoIPath Federation Support

Nevion, a Sony Group company and award-winning provider of virtualized media production solutions, offers its flagship media orchestration platform, VideoIPath, which now supports federation: the ability for multiple autonomous instances of VideoIPath to collaborate within and across locations. This unique development is a major breakthrough in distributed multi-site media production, as it allows production resources to be shared and used seamlessly, regardless of where they are located, and without compromising on orchestration performance, reliability or security.

Broadcasters, media and production companies are increasingly seeking to increase the flexibility and cost-effectiveness of their live productions by moving to remote and distributed production. Such productions involve studios, control rooms, people, and on-premises and cloud processing located at multiple sites. Sharing, controlling and connecting these resources easily across LANs, WANs, 5G and GCCG (Ground-to-Cloud, Cloud-to-Ground) is one of the biggest challenges in enabling this type of production.

Nevion VideoIPath has already established itself as one of the most powerful, scalable, secure and easy-to-use orchestration systems in the industry. With federation, individual VideoIPath systems, for example at each site, can now collaborate with other VideoIPath systems to share, control and connect resources across locations securely.

As each system is autonomous and in charge of its own resources, it continues to function and collaborate even if problems occur in other parts of the federation. The federation capability also enables VideoIPath to reach new heights in scalability, handling all the production resources and all the media streams involved.

While remote and distributed production are obvious applications for VideoIPath’s federation functionality, the capability can also be used to compartmentalize networks within facilities, for example between ingest, production and playout.

VideoIPath’s federation capabilities are also a great opportunity for telecom service providers. Federation allows them to provide a WAN orchestration that can operate seamlessly with broadcasters’ orchestration, to bring together the customers’ facilities.


Newsbridge MXT-1 Generative AI Indexing Technology

Newsbridge’s new MXT-1 generative AI indexing technology uses natural language models to generate human-like descriptions of video content. Capable of indexing more than 500 hours of video per minute, MXT-1 is a game changer for organizations working with media and sports content. Leveraging the next-gen technology, users can index vast amounts of content in record time, and search their large video collections as easily and intuitively as they search the web. With its dramatically reduced energy consumption, MXT-1 makes AI indexing seven times more cost-efficient than mono or unimodal AI systems, enabling massive indexing.

Newsbridge’s MXT-1 combines multiple AI modalities, including computer vision and speech processing with natural language models. The technology is specifically trained on hundreds of thousands of hours of media, entertainment and sports audiovisual content, leveraging AI transformers, making it particularly good at describing content for these industries’ indexing and search use cases.

MXT-1 is a core indexing technology that powers all Newsbridge offerings: Just Index, Media Hub, Live Asset Manager and Media Marketplace.

MXT-1 offers a unique approach to AI indexing. The technology detects people, logos, landmarks and text, as well as transcribes, translates and summarizes audio to text. Next, it merges these data sources to generate a human-like, objective description. While traditional AI providers in the market are only capable of generating keywords and speech-to-text content, MXT-1 uses natural language to link everything together and describe a scene, resulting in more accurate descriptions and faster, more affordable AI indexing.
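As a rough illustration of the multimodal merging step described above, the following Python sketch combines hypothetical detection and transcription outputs into a single natural-language description. The data, labels and phrasing are invented for illustration; Newsbridge's actual models are far more sophisticated.

    # Toy illustration of fusing multimodal AI outputs into one description.
    # All inputs are invented; in a real system they would come from vision,
    # logo/landmark detection, OCR and speech-to-text models.
    faces      = ["the home striker"]
    logos      = ["League Cup"]
    landmarks  = ["the national stadium"]
    actions    = ["penalty kick"]
    transcript = "and he steps up to take the penalty"

    def describe(faces, logos, landmarks, actions, transcript):
        parts = []
        if faces and actions:
            parts.append(f"{faces[0]} takes a {actions[0]}")
        if landmarks:
            parts.append(f"at {landmarks[0]}")
        if logos:
            parts.append(f"during a {logos[0]} match")
        sentence = " ".join(parts) + "."
        return sentence + f' Commentary: "{transcript}".'

    print(describe(faces, logos, landmarks, actions, transcript))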

Newsbridge’s MXT-1 technology offers a significant leap forward in AI indexing technology based on its ability to describe scenes in natural language, which ChatGPT and other generative AI currently on the market cannot offer. What’s especially innovative is that Newsbridge’s language model links raw modalities (i.e., detection of faces, text, logos, landmarks, actions, transcription) to generate a semantic description for increased searchability.

Moreover, MXT-1 improves upon the current state of AI indexing, which produces a jumble of tags that fail to give content owners the level of information they need. MXT-1 bundles all the latest evolutions of Newsbridge's multifaceted AI, and can be trained and fine-tuned by an end user with multimodal rules and a custom thesaurus.

Until now, broadcasters faced the tough question of what footage to fully index due to limited media logging resources to transcribe, describe and summarize content. The cost of traditional AI indexing services is prohibitive, holding back companies from embarking on fully automating their archive and live indexing operations.

Newsbridge’s MXT-1 technology dramatically reduces energy consumption, making it seven times more cost-efficient than mono or unimodal AI systems. We achieved this breakthrough by maximizing the use of energy-efficient CPUs over GPUs coupled with a smart frame sampling technology.

Fast and scalable, MXT-1 enables users to quickly start enhancing, sharing and monetizing their archives. With MXT-1, media organizations and sports rightsholders now know exactly what’s in their media files and can shine a light on the hidden gems in their archives.


NEXTOLOGIES 10TX for PFL: Live Signal Distribution Uncaged

End-to-end solutions have long been the holy grail for broadcasters and streamers, a challenge only complicated by the increasing need for digital transformation to reach more viewers and control costs. For too long, only two options have existed: take on the complex project of creating their own solution by cobbling together a mix of third-party solutions, or partner with yet another third party who could cobble together the third-party solutions for them.

But those days are over. The Professional Fighters League (PFL), an American mixed martial arts league launched in 2018, was the first to take advantage of the new 10TX remote management and signal processing ecosystem, a satellite+fiber+internet delivery system that allowed PFL to distribute four times more signals. That enabled the league to distribute a much greater variety of content and to expand its takers far beyond the traditional broadcast ones, allowing OTT and streaming takers to access its events as well and share them across a broader international audience — a game-changing capability for a new sports league that needs to grow its brand awareness.

“Staying ahead of tech that makes our engineering process run smoother has been a mantra since we started the PFL,” says George Greenberg, executive producer, PFL. “10TX has been a major player for us, employing technology so we can deliver to a global audience with unparalleled reliability.”

10TX, a Nextologies company, launched in March 2023, providing remote signal management and processing that uses all Nextologies-built (not third-party) solutions and infrastructure to deliver live event signals across all possible modalities. The 10TX team required just three days of setup to install the Nextologies hardware at the PFL site, and thereafter, the entire distribution and management process could be controlled from any location on the globe.

Without 10TX, PFL was looking at a much more complicated, costly and risky process, requiring a teleport booked at a per-hour rate, as well as a full-time engineer just to set up the back feed. Even then, the outbound feed would be limited to Tier-1 takers.

With 10TX, PFL entered a whole new world of signal distribution flexibility. The Nextologies infrastructure includes seven teleports already set up to transport signals worldwide at the click of a mouse — no engineering required. From there, the signal could be distributed by Nextologies private fiber or over public/private internet using Nextologies' NXT-4 encoder/decoders, which can deliver the signals in any format required by any taker, opening up a vast new revenue-generating territory for the PFL sales team.

Success for live events depends strictly on distribution: the number of possible takers and what they can do with those feeds. The worst thing a technology partner can do for a live event is to say “It can’t be done,” and the promise of 10TX is that its customers never hear it: using the Nextologies infrastructure and solution set, the answer is always “We can make that happen.” Ultimate flexibility begets ultimate success, and with 10TX, PFL is set to take on the world.


NEXTOLOGIES SDI Player

Satellite and fiber have long been trusted to deliver live video across multiple platforms and geographies, but these solutions are geographically limiting and increasingly expensive. To remain competitive, broadcasters need scalable video transport solutions, and IP offers the answer. But to make the transition, broadcasters need solutions engineered to create interoperability. Enter the Nextologies SDI Player.

Creating a hybrid IP-SDI approach is one way to ease the transition to IP, allowing broadcasters who have millions of dollars invested in SDI equipment to continue reaping their return on the investment they made in that technology.

As a software solution, the Nextologies SDI Player makes it possible for IP streams to play on any SDI device with no workarounds required and no new hardware added. Built by Nextologies as an integral part of the Nextologies XaaS (everything as a service) suite, it automatically includes a host of features that broadcasters require, including closed captioning and SCTE insertion.

Leveraging the SDI Player, Nextologies was able to completely overhaul and expand the video delivery system of the oldest and largest news agency in the United States, and long one of the preeminent news agencies in the world: the Associated Press (AP).

AP was seeking an alternative to its traditional delivery model, which primarily relied on satellites. AP required a less-expensive method of delivering the video channels it produces (Direct, Live Choice 1, 2, 3 and 4) to customers in geographically diverse locations who also required multiple formats (50 Hz and 60 Hz). Leveraging Nextologies' SDI Player, along with Nextologies encoders/decoders or their existing hardware (since the SDI Player is a hardware-agnostic software application), AP customers are able to receive up to four live feeds from the AP web portal, which was also custom-built by Nextologies.

Using their deep experience and expertise in SDI playout, Nextologies designed the SDI Player to give remote broadcasters the ability to send signals to SDI easily. This solution has one singular, game-changing capability: to play any signal to an SDI device.

As a multi-module tool, which has a flexible internal chain, the device allows users to switch sources, keep audio/visual sync, output to multiple SDI destinations, and then play the live signal, do a file playout or play slate if no signal is available.

The SDI Player is written in C and is engineered to provide SDI playout with all the capabilities broadcasters specifically need: SCTE insertion, captions, etc. Also, the SDI Player is a software solution that can be installed on any device, providing the ultimate flexibility without adding capital hardware expenditures or additional rack space.

Additional capabilities added to the SDI Player in 2023 include the ability to play signals coming in from the internet using native browser WebRTC or a mobile app. The SDI Player has another big advantage: the ability to keep A/V sync within a ±10 ms range (tested on a valid generator signal).


NEXTOLOGIES AVDS2

Recent years have brought a tidal wave of technological advancements to television, changing not only how television is created and distributed but also how people consume it. In July 2022, for the first time ever, streaming services surpassed cable TV in U.S. television consumption. According to a Nielsen report, streaming accounted for 34.8 percent of total TV screen time in July 2022, while cable and broadcast (i.e., traditional linear TV) accounted for 34.4 and 21.6 percent of total viewing time, respectively. As consumers change their habits, the creators and distributors of TV have no other option — if they intend to be successful — but to adapt.

In this crush of digital transformation pressure, the technologies that have risen to the top as the most valuable for broadcasters and streamers all share one attribute: flexibility. That is the defining characteristic Nextologies engineers build into all of the company's solutions, and the AVDS2 is the perfect example of that engineering philosophy in action.

AVDS2 is a software application that can act as an encoder, decoder and/or transcoder, making it possible to take any signal in, process it and output it in any required format. Unlike other solutions on the market, which require hardware, Nextologies' AVDS2 is software that can be installed onto any server.

The AVDS2 is a full framework with a chain of operations in which each type of signal acts as a module in the chain. This creates the flexibility to take in any signal, do the required operations with it, and then output in any required format. Many other video processor applications work in the same way, so this capability is not new; the unique thing delivered by AVDS2 is the ability to do all this without adding new hardware, which means no capital expenditures, no need for additional space, and a big bonus as the broadcast and streaming world moves toward decentralized and remote operations — the software can be operated remotely from any location.
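To give a feel for the kind of take-anything-in, output-anything-required processing described above, here is a minimal sketch that drives FFmpeg from Python to transcode an incoming SRT contribution feed into an HLS rendition. It is a generic illustration, not AVDS2's internals; the URL, output path and encoder settings are placeholder assumptions, and FFmpeg must be installed with SRT support.

    # Generic transcode sketch (not AVDS2 code): pull an SRT feed,
    # re-encode it and package it as HLS. All addresses and settings
    # are placeholders.
    import subprocess

    INPUT_URL  = "srt://203.0.113.5:9000?mode=caller"   # hypothetical contribution feed
    OUTPUT_DIR = "/var/www/hls/channel1"

    cmd = [
        "ffmpeg",
        "-i", INPUT_URL,
        "-c:v", "libx264", "-b:v", "5M", "-g", "50",    # example video encode settings
        "-c:a", "aac", "-b:a", "128k",
        "-f", "hls",
        "-hls_time", "6", "-hls_list_size", "10",
        f"{OUTPUT_DIR}/index.m3u8",
    ]

    subprocess.run(cmd, check=True)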

In addition, the AVDS2 includes native integration with all of Nextologies’ other key services, including:

• automatic closed captions

• automatic commercial detection and replacement

• automatic SCTE insertion

• NexToMeet monitoring

• GPU technology support

• QuickSync

AVDS2 allows the delivery of a live signal to any taker in any format with no additional hardware, creating a competitive advantage by immediately expanding the potential market for a signal. One example of this is the Professional Fighters League (PFL), which engaged 10TX, a Nextologies company, to use AVDS2 to distribute its fights to new markets all over the world.

PFL is an American mixed martial arts league that launched in 2018. As a new brand in the MMA space, PFL needs to reach all possible new audiences, a goal made a reality by AVDS2.

“Staying ahead of tech that makes our engineering process run smoother has been a mantra since we started the PFL,” says George Greenberg, executive producer, PFL.

“10TX has been a major player for us, employing technology so we can deliver to a global audience with unparalleled reliability.”


NEXTOLOGIES NexToMeet+VCC

At the beginning of 2023, Nextologies, creator of NexToMeet, acquired the VCC technology and began combining the two in a best-of-all-worlds remote production platform.

NexToMeet+VCC is a cloud-based solution for IP-based remotes, connecting contributors from anywhere in the world via laptops, tablets and smartphones, and delivering broadcast-quality signals directly into client productions. Using the platform, producers receive superior-quality, lag-free, on-demand workflows for high-speed, high-volume control of live remote contributors on any device. Providing all the professional features and tools broadcast producers depend on, including hyper-low latency, two-way connections and mix-minus IFB for communications, NexToMeet+VCC solves remote production for the Tier-1 production layer: highly professional productions where things simply can't go wrong.

To understand the value of NexToMeet+VCC, it’s important to take a look at how remote production is accomplished with alternative solutions.

Other solutions require:

More hardware. Several monitors are required, and with multiple guests, you will need a laptop for each guest, and now you have multiple laptops running Zoom, and you’re taking HDMI output from each one and putting it into your production software.

Several apps. You must work in several apps at once, and those participating in remote interviews need login credentials.

Expert setup and execution. Setup and configuration for the whole process is complex and requires an expert in order to avoid issues, including latency, intermittent audio problems, echoes, etc.

Staff/Budget. Other solutions get you partially there, but if you are running a Tier-1 production, you want it to be the highest possible quality. That means that, in addition to everything listed above, you will still need an operator and a producer, and you'll still be sending people onsite.

NexToMeet+VCC delivers more options, flexibility and results with less of everything else. The combined solution has been engineered specifically to do exactly what is needed without all the equipment, apps, custom setup, passwords and workarounds. Operating NexToMeet+VCC requires:

One rack-unit device. A headless computer — once you plug it in, it just works, and you can control it remotely, all from our portal.

No programming. No complex setup or setting manipulation. You (or the call producer team) can control it remotely, all from the web-based portal.

One laptop. No multiple monitors, no keyboard and mouse, no external control surface. All the features you want are in the app.

One person. Our team is always available to be part of the production team, providing immediate technical support, onboarding for guests, etc.

One link. Everyone who is part of the production, producers and guests alike, gets their own secure link to enter. No complicated logins.

Any device. Guests can use any device — any smartphone, a tablet or a laptop — to join the production and film their interview. The simplicity of the process means zero guest frustration and a much higher likelihood that high-profile, busy guests will say yes to interviews.


NEXTOLOGIES Control Panel (CP)

Nextologies' updated Control Panel (CP), a software-based video network built and used by the Nextologies internal team for several years, is now available for client use, delivering total visibility and control from origin to delivery point, from space to Earth, anywhere on the planet. Bringing together all hardware, software, data and analysis into one central platform, CP controls everything within the Nextologies HITC infrastructure, from transcoding to delivery to encoding, putting all of the different features one could ever need in a broadcast environment in one platform. CP enables total visibility of signals from origin to delivery, as well as the ability to analyze and troubleshoot those streams at any point along the way.

The broadcast/streaming world is in the midst of a digital transformation, which will enable all kinds of expansion and growth, but which can also be expensive and complicated. As broadcasters and streamers transition from conventional signal transport methods, satellite and fiber, to public internet and cloud-based operations, the options are endless, but so are the complexities. CP is engineered to accelerate that transformation through flexibility.

CP is designed to solve the incompatibility problem. CP works with every possible standard, from traditional satellite and fiber delivery to IP, so as companies move to a cloud/hybrid environment, CP can eliminate the need for specific manufacturers for encoding and decoding to get feeds out of the cloud. With CP, all the possibilities are on the table: clients can either get colocation in the cloud, install Nextologies software or buy their own servers and install Nextologies software. Nextologies makes all its own hardware on the HITC network, but in some cases, the software can even control other manufacturers' hardware, eliminating the need to make that change. And CP is the layer on top that allows total control and visibility of whatever setup works best for the client.

CP can be deployed anywhere in the world, installed on any server, and it controls everything on the Nextologies ecosystem. With CP and HITC, companies are able to manage their entire video management and delivery process with one single vendor, rather than the typical solution, which requires cobbling together solutions provided by a number of vendors with varying degrees of interoperability. Many of Nextologies’ clients, such as the Associated Press, prefer to let Nextologies manage the solution for them from end-to-end, including onboarding and troubleshooting, as well.


NPAW Video Analytics

NPAW’s Video Analytics is a cutting-edge and highly accurate client analytics solution that offers comprehensive insights into the performance of video services for media companies, broadcasters, OTTs and telcos. With this advanced solution, video businesses can gain full visibility into the behavior of their audience during video playback, all in real time.

With an extensive array of metrics measuring quality, audience, content and engagement data across the entire video service, Video Analytics helps video businesses make smart, data-driven decisions concerning business, content, operations, technology and customer experience, optimizing performance, minimizing wasted time and saving money.

STANDOUT BENEFITS

Customization and Flexibility: NPAW’s Video Analytics stands out from the competition by offering a fully customizable, flexible and secure video analytics solution. Our customers can build their own personalized app, with full flexibility and control over their video analytics data. Our focus on customization and flexibility allows us to respond to the unique needs of each customer, ensuring they have the tools and metrics they need to succeed in today’s fast-paced video industry.

One Source of Truth: NPAW's solution streamlines decision-making by bridging the gap between business and technology-related data, eliminating inconsistencies between systems and teams. We put an end to silo thinking between different teams and departments, promoting collaboration and enabling synergies to be unlocked. With the same data foundation for different functions, teams can work together based on shared, correlated data, leading to faster time to decision and resolution.

Automatic Alerting and Ticketing: With Video Analytics, customer care teams can address user-specific issues and dive deep into each user’s playback, tracing them by User ID or IP without engineering needing to intervene, maintaining high-quality customer service and reducing resolution times. Empower customer teams with tools and anonymized data to quickly identify and address user issues.

Data Protection: Our customers own their data. We are GDPR-compliant and ISO 27001-certified, providing secure and precise insights while always protecting organizations and their users' privacy. NPAW provides the only solution in the market with a GDPR management tool integrated in the UI to obfuscate or delete personal user information through the Admin API in order to comply with GDPR.

USE CASES

Operations: Mitigate errors and quality issues by identifying, locating and resolving them. Prioritize troubleshooting efforts by analyzing the percentage of affected plays and users, track error evolution and monitor services in real time to identify recurring issues and their root causes.

Product: Use data-driven insights to analyze the impact of releases on end-user performance and measure player version impact over time. Optimize resource allocation, reduce bounce rates, and increase conversions without costly strategy changes.

Marketing: Create customizable dashboards, set goals and track progress with automated periodic reports to gain a detailed view of user behavior. Use actionable insights to analyze user engagement and create targeted campaigns for marketing teams.

Customer Care: Download a list of users affected by an “In-Stream Error” on Android TV. Data can be extracted via API, Kafka, Reports, or dashboards, making it easy to identify affected users and take appropriate action.
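As a purely hypothetical sketch of the kind of API extraction mentioned above, the snippet below queries an analytics endpoint for users affected by a given error on a given device type. The endpoint, parameters and field names are invented for illustration and are not NPAW's documented API.

    # Hypothetical sketch: pull a list of affected users from an analytics API.
    # URL, query parameters and response fields are invented for illustration.
    import json
    import urllib.parse
    import urllib.request

    params = urllib.parse.urlencode({
        "error": "In-Stream Error",
        "device": "Android TV",
        "from": "2023-04-01",
        "to": "2023-04-07",
    })

    url = f"https://analytics.example.com/api/v1/affected-users?{params}"
    req = urllib.request.Request(url, headers={"Authorization": "Bearer <token>"})

    with urllib.request.urlopen(req) as resp:
        users = json.loads(resp.read())

    for user in users:
        print(user["user_id"], user["error_count"])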


OOONA OOONA Toolkit

The OOONA Toolkit is a selection of online, customizable and continually enhanced production tools for media localization projects, which can be licensed separately or in packages. They offer frame-accurate streaming, integrate the latest language technology innovations, and offer all the features of best-of-breed desktop editors — with added security and user management, and without the need to download anything to your local workstation. The tools are set up to address each task within a traditional media localization workflow:

• The Create tool, to create subtitle and caption files in any industry-standard format, including Line 21 and Teletext, and with support for the Japanese language

• The Transcribe tool, to create scripts in the audio language, with help from a selection of speech recognition engines

• The Translate tool, to translate subtitles or scripts out of a template, with the help of a variety of machine translation engines

• The Review tool, to review translated files, using Word-like functionality to highlight changes from one version to the next, and notes for in-tool communication between people working on the same file

• (NEW) The Audio Description tool to create and translate AD scripts, as well as voice them in the cloud or using any of the available synthetic voices

• The Convert tool, to batch convert between more than 60 industry-standard file formats that are continuously updated to the latest industry preferences

• (NEW) The SynCheck tool, to automatically check subtitle assets for sync against media in bulk with the use of speech recognition and machine translation technologies

• The Burn & Encode tool, for the creation of videos with burnt-in subtitles

All tools are fully browser-based, with the highest security certification (ISO 27001, TPN, streamer audits) and protection measures in the industry (ongoing pen-testing, automated file saving and online backups, protection against cyberattacks). Enterprise integrations are available, so each tool can be integrated via API into any content management platform. Individual freelancers who only need to perform one service or task, or content owners who do not require full-blown localization support, can license the tools in a modular manner as needed with flexible subscription terms.
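As an illustration of what such an API integration might look like, here is a hypothetical sketch that submits a subtitle file for conversion to another format. The endpoint, field names and response fields are assumptions for illustration, not OOONA's published API.

    # Hypothetical sketch of driving a conversion tool over an HTTP API.
    # Endpoint, field names and auth scheme are invented for illustration.
    import json
    import urllib.request

    job = {
        "source_file": "episode_101_en.srt",   # placeholder asset reference
        "target_format": "STL",                # e.g. EBU STL for broadcast delivery
        "frame_rate": 25,
    }

    req = urllib.request.Request(
        "https://localization.example.com/api/convert",
        data=json.dumps(job).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer <token>",
        },
        method="POST",
    )

    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))  # e.g. {"job_id": "...", "status": "queued"}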

The tools come with an intuitive interface, extensive customizable hotkeys, and the ability to use presets and templates, as well as to perform batch operations that ease the workload in multi-stream projects, which are typical for language service providers. Popular features include an advanced customizable timeline, video grid, audio waveform and shot-change detection, as well as user-informed, fine-tuned, automated quality assurance scripts, minimizing the scope for error.

The tools are also localized in 12 languages, the top languages spoken by OOONA users.


OWC Jellyfish XT

OWC’s Jellyfish solutions have become the industry standard for shared portable NAS with unmatched connectivity and speed. Easy to use and set up right out of the box, with no IT required, Jellyfish products are designed to move the technology out of the way so creative teams can churn out content at a rate they hadn’t imagined possible.

The next generation is here with the Jellyfish XT, a full flash-based storage solution with up to 360 TB (720 TB with extension) of usable storage and both 100 Gb and 10 Gb Ethernet connectivity that makes workflow bottlenecks and latency things of the past.

Jellyfish XT is built for the most demanding workflows and includes expandability options to handle tomorrow’s higher resolutions and bitrates. Whether it’s 4K/8K/12K, VR or AR, Jellyfish XT is built to enable teams to collaborate on content without friction.


PACKETSTORM COMMUNICATIONS

VIP Monitor

An engineer at a major broadcaster stated: “The PacketStorm VIP Monitor is one of the best tools that I have in my toolbox for monitoring and troubleshooting ST 2110 networks.” The VIP Monitor is used by both developers of ST 2110 equipment and broadcast engineers. Major broadcasters are using the VIP in their ST 2110 proof-of-concept (POC) networks and live networks.

The NMOS-controlled VIP Monitor allows the user to simultaneously measure and monitor up to 122 flows of PTP, video, audio and ancillary data 24 hours a day, 7 days a week, over 25 Gbps/100 Gbps interfaces, with alarm-triggered packet captures. A timeline shows the status of all the flows, with green indicating a good flow and red lines indicating alarm events. Clicking on an alarm event will display the type of alarm, the location of the error in the packet capture and the dissected packet capture, which can be downloaded for further analysis. Packets are captured before and after alarm events so that the user can understand what happened before, during and after the alarm event. There are measurements for SMPTE ST 2110-21 (Professional Media Over Managed IP Networks: Traffic Shaping and Delivery Timing for Video), SMPTE RP 2110-25 (Professional Media Over Managed IP Networks: Measurement Practices) and SMPTE ST 2022-7 (Seamless Protection Switching of SMPTE ST 2022 IP Datagrams).

Automatic connection and disconnection of flows is facilitated by AMWA NMOS IS-04 (NMOS Discovery and Registration) and AMWA IS-05 (NMOS Device Connection Management). Once a flow is connected, the following NMOS information is displayed: NMOS receiver information, connected sender information, dissected sender SDP and raw sender SDP. The VIP compares the received flow to the sender's SDP. An alarm is triggered if there are differences between the sender SDP and the flow, or if there are errors in the SDP. The VIP has tools for troubleshooting NMOS issues, such as packet captures of NMOS connection messages and an NMOS trace log of NMOS takes and disconnects.
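For readers unfamiliar with how an NMOS IS-05 "take" is actually made, the sketch below shows roughly what a controller sends to a receiver's Connection API to stage and activate a flow. The host, UUIDs and SDP are placeholders; consult the AMWA IS-05 specification for the authoritative schema.

    # Rough sketch of an AMWA NMOS IS-05 take: PATCH the receiver's
    # staged endpoint with the sender's SDP and activate immediately.
    # Host, UUIDs and SDP content are placeholders.
    import json
    import urllib.request

    receiver_id = "9f3b7f3e-0000-0000-0000-000000000000"
    url = (f"http://node.example.com/x-nmos/connection/v1.1/"
           f"single/receivers/{receiver_id}/staged")

    patch = {
        "sender_id": "1c9e4f2a-0000-0000-0000-000000000000",
        "master_enable": True,
        "activation": {"mode": "activate_immediate"},
        "transport_file": {"type": "application/sdp", "data": "<sender SDP here>"},
    }

    req = urllib.request.Request(
        url,
        data=json.dumps(patch).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PATCH",
    )

    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))   # staged parameters echoed back by the node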

Some ST 2110 monitoring systems may indicate that errors have occurred, but the user has no way of going back and determining exactly what happened, when it happened and how it correlates with other events going on in the network. This is especially true when error events occur only occasionally. The VIP resolves these issues by continuously watching all the flows that it has joined and creating a history of alarm-triggered packet captures on all flows that have errors, along with the associated error information.

A number of ST 2110 monitoring systems provide measurements for ST 2110-21 and RP 2110-25. The VIP Monitor provides these measurements along with live Packet Read Schedule graphs for Packet Arrival Distribution, Packet Arrival Times, VRX Buffer Updates and VRX Buffer Levels Per Packet so the user can understand the detail behind the measurements.

The VIP provides SDP information, reports SDP errors and has tools for troubleshooting NMOS issues, while some ST 2110 monitoring systems only provide SDP information for the flows they have joined.


PANASONIC CONNECT KAIROS

Panasonic Connect's KAIROS, an IT/IP live video-production platform, streamlines production workflows with its innovative CPU/GPU architecture and software that maximizes video processing capacity and utilization with just one frame of processing delay. Productions enjoy the flexibility to deliver content from various video and graphical sources to multiple screens and streams. Used across broadcast studios and the streaming of live sports and concerts, KAIROS virtualizes traditional switcher functions.

KAIROS is equipment-agnostic with fully customizable multiviewer functionality, providing users the ability to harness as many inputs, sources or external devices as needed to bring creative visions to life. Unlike traditional switchers, KAIROS has no fixed limits on the number of M/Es and keyers. Its layer-based approach lets users build a system that adapts to their creativity, like creating a unique output for any aspect ratio with the exact resolution of their display, or perfectly blending a multi-projection system. Along with immersive visuals, KAIROS uses the Secure Reliable Transport (SRT) and Real-Time Messaging (RTMP) protocols to streamline hybrid, in-person and remote workflows. KAIROS also natively supports ST 2110 IP connectivity for unparalleled input/output flexibility. This gives users an intuitive layer-based interface for powerful content creation and simplifies connections to Panasonic's latest ST 2110-compatible cameras, the AW-UE160 PTZ and the AK-PLV100 Cinelive studio camera.
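As a generic illustration of getting a remote source into an SRT-capable switcher input, the sketch below drives FFmpeg from Python to push a test feed to an SRT listener. The address, port and the assumption that the switcher input is in listener mode are placeholders, not a documented KAIROS workflow; FFmpeg must be built with SRT support.

    # Generic SRT contribution sketch: push a local file as an MPEG-TS
    # stream to an SRT listener (for example, a switcher input configured
    # as listener). Address and port are placeholders.
    import subprocess

    SWITCHER_SRT_INPUT = "srt://198.51.100.20:10080?mode=caller&latency=120"

    subprocess.run([
        "ffmpeg", "-re",
        "-i", "rehearsal_loop.mp4",      # placeholder source
        "-c", "copy",                    # pass through the existing encode
        "-f", "mpegts",
        SWITCHER_SRT_INPUT,
    ], check=True)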

At the NAB Show, Panasonic Connect showcased the newest hardware and capabilities of its platform, including two new KAIROS Core mainframes (the KC200 and KC2000) and software updates that deliver more control options. These updates increase the platform's scalability to deliver larger and more complex productions across broadcast, sports, entertainment, cinema and more. The KC2000 mainframe offers the largest capacity yet, with twice the input and output capacity of previous mainframes. It also delivers greater video processing power and a larger, 900 GB internal clip player. Both mainframes support ST 2022-7 network redundancy when connected to two 100 Gb ST 2110 networks and offer significantly reduced noise levels compared to previous models.

Panasonic Connect also showcased the new KAIROS Touch Control Panel (TCP). Users with a touch-screen Windows PC will now have the ability to control KAIROS directly from their monitor. Multiple remote or on-premise instances of the KAIROS Touch Control Panel can be used simultaneously to control the same KAIROS Core.

Since NAB 2022, Panasonic Connect has been working with industry partners to give broadcast professionals more ways to leverage cloud technologies. At this year’s show, Panasonic Connect showcased the latest update to KAIROS’ cloud capabilities — Global Live Control Room — developed in partnership with LiveX, a leader in cloud-based production. Global Live Control Room combines the capabilities of KAIROS with LiveX’s Virtual Video Control Room (VVCR) to offer a turn-key, hybrid solution that allows productions to route all cloud applications to VVCR and all on-premise gear to KAIROS. To further expand capabilities, Panasonic Connect has also partnered with Singular Live, Videon, Scoreboard OCR and Telos Infinity. Through these partnerships, the Global Live Control Room will include switching, ISO recording, intercom, playback, graphics and more.


PERIFERY Perifery AI+

Perifery AI+ is a set of groundbreaking new application-centric services for content production workflows. Offering seamless integration with the Perifery Transporter on-set media appliance, Swarm software, and the Perifery Panel for Adobe Premiere Pro, Perifery AI+ enables media and entertainment companies to perform critical pre-processing tasks at the edge of their workflows. Perifery AI+ empowers companies to improve their workflow efficiency, reduce costs, speed up time of delivery, and monetize digital assets faster. The first pre-processing functionalities that will be shown on Perifery AI+ include object recognition and smart archiving.

Why Perifery AI+ Is Groundbreaking

Perifery AI+ represents a breakthrough in media content production at the edge, enabling simple-to-execute, predictable-cost and fast AI-enabled pre-processing for remote and off-site content production. Users can deploy Perifery AI+ in their workflow to improve efficiency, lower costs, and — most importantly — ensure time of delivery. This solution takes services that were typically only available in the public cloud and enables pre-processing at the edge.

As a set of workflow intelligence services, Perifery AI+ shows that “the edge” consists of more than just 5G-to-cloud or 5G-to-facility remote processing. Perifery AI+ redefines the edge as an extension of the cloud workflow that enables preproduction users to work remotely, on-set or anywhere.

Furthermore, Perifery AI+ demonstrates that many machine learning (ML)- or AI-enabled capabilities available in the cloud can be provided at the edge at a lower and more predictable cost, with significantly reduced complexity. Overall, Perifery AI+ brings these services to the point where assets are created or preprocessed and, in some cases, where content has already been archived and reprocessed, enabling media companies to derive additional information or metadata to increase monetization. By allowing users to access AI pre-processing services in a single user interface, Perifery AI+ simplifies the media production workflow.

Bringing Innovation to the Edge

Traditionally, AI processing services have focused on providing media companies with a way to use AI-embedded products in the cloud. While cloud-enabled apps, services and tools have become invaluable in media production for their ability to help companies meet deadlines and reduce operational costs, the cloud has unpredictable costs. The time and effort needed to upload and download in the public cloud, not to mention the egress fees, have made the cloud more expensive than previously anticipated, offsetting its many benefits through unnecessary complexity.

Perifery AI+ splits processes between the cloud and the edge (i.e., a remote location), saving media companies a substantial amount of money and time during content production. Mid-size organizations that pay a significant amount of money to use the cloud for processing can now perform preprocessing at the edge and reduce their operational expenses.
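To make the cloud/edge split concrete, here is a hypothetical sketch of edge-side pre-processing: a local object-recognition pass over proxy media produces a small metadata sidecar, and only that JSON file (rather than the full-resolution media) would be pushed to the cloud. The model stub, file names and paths are placeholder assumptions, not Perifery's implementation.

    # Hypothetical edge pre-processing sketch: tag proxy clips locally and
    # keep only lightweight metadata for upload. detect() stands in for
    # whatever local ML model an edge appliance might run.
    import json
    from pathlib import Path

    def detect(clip_path):
        # Placeholder for a local object-recognition model.
        return [{"label": "goal celebration", "start_s": 12.4, "end_s": 19.0}]

    proxies = Path("/media/onset/proxies")
    for clip in proxies.glob("*.mp4"):
        sidecar = clip.parent / (clip.stem + ".tags.json")
        sidecar.write_text(json.dumps({"clip": clip.name, "detections": detect(clip)}))
        # Only the sidecar would be uploaded, avoiding cloud egress of full media.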

Perifery AI+ deserves to win this award for bringing innovation to the storage and workflow intelligence categories and for providing the industry with a foundation for media production from the edge.


PHABRIX QxP

PHABRIX is pleased to present its first example of a traditional “waveform monitor” — but with a twist. Inheriting all of the class-leading features and flexibility of the QxL rasterizer, the QxP features an integral 3U multi-touch LCD screen, an integral V-Mount or G-Mount battery plate, an integral mains PSU and a 12 V external DC input. You now have 12G-SDI and 25G ST 2110 compliance monitoring in a portable form factor using industry-standard camera batteries. With its class-leading waveforms, the QxP is equally at home on set in SDR or HDR productions, grading, shading or QC, MCR, engineering and R&D environments. The user is free to use the integral screen, or plug in an external HDMI monitor and use the flexible tool layout to view up to 16 instruments simultaneously. A rich set of remote access options, including noVNC and the UI delivered as an ST 2110-20 flow, provides all of the “headless” operational flexibility of a conventional rasterizer.

The QxP includes SMPTE ST 2110, ST 2022-6 and a wide range of formats as standard. In-field upgrades are available for a rich set of options: UHD/4K, IP-MEAS, HDR, Dolby E decode, PCAP capture, EUHD and extensive AV-ANC test signal generation. Factory-fitted options provide SDI interfaces or RTE real-time SDI eye and jitter analysis, with an engineering-grade data view and an optional, advanced SDI-STRESS toolset.

For real-time IP workflows the QxP supports simultaneous generation and analysis of HD/3G/UHD/EUHD 2110 payloads on generic SFP28/25 GbE interfaces, with ST 2022-7 Seamless IP Protection Switching (SIPS) and independent PTP followers on both media ports for fully redundant media network operation — all with AMWA NMOS IS-04 and IS-05.

Whether working in HD, UHD, SDR, HDR, SDI or IP, conventional or remote production, the QxP combines the user-configurability and advanced tools required for full operational flexibility when transitioning to next generation workflows.


Planar Venue Pro VX Series

The Planar Venue Pro VX Series is a family of indoor fine-pixel-pitch LED video wall displays delivering exceptional in-camera visual performance for virtual production (VP) and extended reality (XR), as well as on-camera visual performance for broadcasters. The series combines high-performing scan and refresh rates with high brightness and narrow pixel pitches, making it well-suited for LED XR stages in markets as diverse as film and video production, corporate, broadcast, rental and staging, and live events.

Designed to support hanging, stacked or wall-mounted installations, the Planar Venue Pro VX Series expands on the capabilities of the industry leader's first solution designed to revolutionize the production of realistic in-screen and on-screen content, the Planar CarbonLight VX Series. With support for HDR-ready content, a wide color gamut including up to the DCI-P3 color space, and compatibility with a wide range of cameras, the Planar Venue Pro VX Series delivers the unmatched visual performance and deployment versatility today's companies need to develop lifelike recorded, streamed or broadcast video content.

The release of the Planar Venue Pro VX Series bolsters the Planar Studios initiative and expands the company's portfolio of visualization solutions designed to support VP and XR. The newest addition will be backed by Planar's dedicated VP and XR team, which includes pre-sales and post-sales support from local experts. This reinforces the industry leader's commitment to making such applications streamlined and available to mainstream markets.

The new Planar Venue Pro VX Series is designed to reduce the complexity of setup and teardown, featuring magnetically-attachable cabinets with quick locks for single-person installation. The series also includes mechanical features to suit both temporary applications and fixed installations.

The series is available in 1.9 and 2.5 millimeter pixel pitches and compatible with Brompton LED processors and LED controllers from Colorlight.


PLAYBOX NEO

Media Gateway for Live Media Delivery and Distribution

The Media Gateway from PlayBox Neo allows the entire process of playout routing and decoding to be handled entirely in software. This eliminates the capital cost needed to operate hardware routers, codecs and related devices.

The PlayBox Neo Media Gateway can be installed onsite or in the cloud.

Integral software codecs allow signals to be converted between SDI, NDI, SRT, UDP and RTP. Video and audio content can be sourced directly from a desktop screen and delivered as live feeds. A built-in, web-based multiviewer shows audio and video for all inputs and includes automated black-frame and frozen-frame detection with visual and audible alarms.

Input from SDI, NDI and decoded IP can be sent to any output or multiple outputs. The gateway provides automatic or manual switching of MPEG TS (transport stream) streaming. Switching groups can switch multiple streams at the same time: for example, switching all main sources to backup sources or to some external feed. Each switch group can have multiple switches. Every switch in the group has multiple inputs and one output, which can be sent to multiple streaming protocols and destinations. All inputs, outputs, encoders and decoders can be adapted to user needs through a simple licensing update.
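To make the switch-group idea concrete, here is a hypothetical configuration sketch expressed as a Python structure: one group whose switches each pick between a main and a backup input and feed one output to several destinations. The field names, URLs and values are invented for illustration and are not Media Gateway's actual configuration format.

    # Hypothetical switch-group configuration (illustrative only).
    # Each switch has several inputs and one output; the group lets all
    # switches change over to backup sources at the same time.
    switch_group = {
        "name": "main-to-backup",
        "switches": [
            {
                "inputs": ["srt://main-encoder:7001", "srt://backup-encoder:7001"],
                "active": 0,                       # index of the input currently on air
                "output": "channel1-program",
                "destinations": ["udp://239.1.1.1:5000", "rtmp://cdn.example/live/ch1"],
            },
            {
                "inputs": ["ndi://STUDIO-B (PGM)", "sdi://slot1/port2"],
                "active": 0,
                "output": "channel2-program",
                "destinations": ["udp://239.1.1.2:5000"],
            },
        ],
    }

    def switch_all_to_backup(group):
        # Emulates a group-wide changeover to the backup input (index 1).
        for sw in group["switches"]:
            sw["active"] = 1

    switch_all_to_backup(switch_group)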

Other features of PlayBox Neo's Media Gateway include:

• Output to NDI, SDI, or encode to a chosen codec/bitrate; frame rate, resolution, color format and aspect ratio can be changed.

• SCTE-35 and SCTE-104 are cross-converted and transmitted to the output.

• Closed captions are cross-converted between CEA-608 and CEA-708, and the frame rate is converted when required.

• Active format descriptor (AFD) is inserted when required.

• Uncompressed input clean switch to any output at any time.

• Automatic changeover in the compressed domain (transport stream) based on missing signal, under/over bitrate, or black and freeze when the decoded stream is passed to the internal decoder.

• Black, bars or freeze frame when there is no signal on input.


PTZOPTICS Move SE

Built on the award-winning PTZOptics camera line, the Move SE includes the features you love and puts them more within reach than ever.

The unit includes software-enabled auto-tracking; SDI, HDMI, USB and IP outputs; 1080p at 60 fps; an NDI|HX upgrade option; availability in 12X, 20X and 30X zoom models; and a five-year warranty.

The best thing about the Move SE is its potential for scalability. At a low price point starting at $999, these units enable streamers to mass-deploy cameras and maximize their production capabilities.

Get ready to build the video workflow you’ve been dreaming of. The PTZOptics Move SE is compatible with just about any production or live streaming setup.


QUANTUM Quantum Myriad

Myriad is a new all-flash, scale-out file and object storage software platform ideally suited to the evolving needs of VFX, animation and rendering; the increasing demand for AI and ML content creation and enhancement tools; and new markets such as AR/VR, live production with LED video volumes and digital twinning.

Legacy NAS storage systems provide inconsistent performance, are complex, difficult to scale, and often deployed in islands that add workflow complexity and increased management burden. The slow performance makes rendering a painful and long process.

Instead, Myriad makes full use of readily available NVMe storage and RDMA to deliver the extreme performance (tens of GBps) and high IOPS (hundreds of thousands) needed for cutting-edge animation and multiplatform workflows without the drawbacks or design limitations of legacy systems. Myriad requires no custom hardware, so as market-available NVMe storage servers gain higher capacities, higher performance and lower cost, they can be adopted, giving flexibility and adaptability as the business evolves.

Myriad lets you consolidate multiple animation, VFX and rendering workflows into a single fast system to serve all departments, clients, workstations and workflows, including rendering pipelines and AI and ML applications. Myriad delivers consistent performance for all users, is highly efficient at storing the large numbers of small files common in these workflows, and serves rendering pipelines without impacting other users.

Myriad is built with cloud-native technologies like microservices and Kubernetes, making it extremely flexible and easy to use; no specialized IT or networking experience is required, and it can be easily deployed on-premises or in the cloud. Myriad delivers this performance in a smaller footprint, requiring less power, cooling and fewer components to reduce networking complexity, administration overhead and operational costs. Myriad's powerful data services ensure that data is deduplicated and compressed to deliver an effective storage size of up to 3x the raw capacity. Zero-impact snapshots and clones protect against operator error.
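As a back-of-the-envelope illustration of what an effective storage size of "up to 3x" means, the snippet below computes effective capacity for an assumed cluster size and data-reduction ratio; both numbers are hypothetical.

    # Hypothetical effective-capacity arithmetic for a dedup/compression ratio.
    raw_capacity_tb = 300          # assumed usable flash in the cluster
    data_reduction_ratio = 3.0     # "up to 3x" from dedup + compression

    effective_capacity_tb = raw_capacity_tb * data_reduction_ratio
    print(f"Effective capacity: {effective_capacity_tb:.0f} TB")   # -> 900 TB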

Myriad Benefits

• Consistent, fast performance of up to tens of GBps and hundreds of thousands of IOPS to serve every creative department's needs, including rendering, on a single system, whether deployed on-premises or in the cloud.

• Modern microservices architecture orchestrated by Kubernetes to deliver simplicity, automation and resilience at any scale.

• Standard, off-the-shelf flash storage servers, so you can quickly adopt the latest hardware capacities and form factors and adapt your storage infrastructure to meet future requirements.

• A Myriad cluster can start with as few as three NVMe all-flash storage nodes, and its architecture enables scaling to hundreds of nodes in a single distributed, scale-out cluster.

• No specialized IT or networking knowledge needed — powerful automated storage, networking and cluster management automatically detects, deploys and configures storage nodes and manages the networking of the internal RDMA fabric.

• Highly efficient data storage with intelligent deduplication, compression and self-healing and self-balancing software to respond to system changes.

• Simple, powerful data protection and recovery with snapshots, clones, snapshot recovery and rollback capabilities to protect against user error or ransomware.

83 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

SpycerNode2

SpycerNode from Rohde & Schwarz has proved itself as a popular and powerful storage platform. In response to large numbers of potential users looking for additional functionality to take it into new areas, Rohde & Schwarz has introduced SpycerNode2, launched at the 2023 NAB Show.

SpycerNode2 is built on IBM's high-performance computing (HPC) technologies, including the Spectrum Scale RAID software. Rohde & Schwarz designers are uniquely placed to take full advantage of these tools, resulting in unprecedented performance in a media application.

The Spectrum Scale RAID control and other functionality in SpycerNode2 is built into a pair of external 1U servers, with two provided for redundancy and security. This new architecture boosts performance dramatically — by as much as 50% in demanding 4K applications — and it allows the designers to incorporate significant new functionality in response to user demand.

VSA (virtual storage access) technology provides complete failover protection and uninterrupted data access for direct-attached and network users. The controller also incorporates AWS S3 export protocols, making it simple to integrate SpycerNode2 into hybrid storage systems.

Each 1U controller includes space for up to eight NVM Express plug-in drives. This provides very fast caching, in a function called Dynamic Media Cache. This reduces demand on the main storage for regularly used content, ensuring much better performance for all users and managing bandwidth through the system.

It provides a significant enhancement for network-attached users calling on content regularly, as you would find in a busy post-production house, for example. Used in conjunction with the SpycerPAM production asset management software, it also provides seamless interworking with third-party post-production software like Adobe and Avid editors, giving users simple workflows with editors accessing SpycerNode2 storage directly.

SpycerNode2 has a 5U RAID chassis. To provide the scalability required by users, storage is built from blocks of 28 SSD or disk drives, up to a maximum of 84 drives in a single cabinet. As many as four 5U chassis can be managed under a single pair of controllers, giving a total capacity of up to 6.7 petabytes. Units can be networked together to provide increased capacity and performance.
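As a quick sanity check of those figures, the short calculation below shows how a maxed-out configuration reaches roughly 6.7 PB. The roughly 20 TB-per-drive figure is an assumption for illustration only; actual capacities depend on the SSD or disk options chosen.

# Illustrative capacity check for a maxed-out SpycerNode2 configuration.
drives_per_block = 28
blocks_per_chassis = 3            # 3 x 28 = 84 drives in a single 5U chassis
chassis_per_controller_pair = 4   # up to four chassis under one controller pair
drive_capacity_tb = 20            # assumed nominal drive size (not a published spec)

total_drives = drives_per_block * blocks_per_chassis * chassis_per_controller_pair
raw_capacity_pb = total_drives * drive_capacity_tb / 1000
print(f"{total_drives} drives, ~{raw_capacity_pb:.1f} PB raw")   # 336 drives, ~6.7 PB raw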

As well as performance and scalability, the foundation of IBM HPC software also provides greater security for all users.

Using VSA and Device Manager alongside the other core tools and the redundant external RAID controllers, SpycerNode2 provides highly secure storage with zero downtime.

Broadcast and post-production facilities need to store large amounts of media, in a form that is fast and secure. Typical applications have large numbers of concurrent users, with controlled access to their content.

To meet this real need, Rohde & Schwarz has taken its popular and proven SpycerNode platform and re-engineered it to create something that is significantly more powerful, practical and flexible. It takes standard components and operating-level software and adds layers of application software that deliver the practical performance users require.

84 Best of Show Awards 2023 | NAB Show
ROHDE & SCHWARZ
FOR MORE INFO

Carbonite Ultra 60

Almost a dozen years ago, Ross Video introduced Carbonite, a completely new class of production switcher that went beyond the capabilities of many products twice its size and price. As the development of this fully featured groundbreaking platform has continued, so has its popularity and market penetration.

Carbonite Ultra 60 from Ross Video is a revolutionary new production switcher that offers big-switcher performance. Since its introduction, Carbonite Ultra has become the benchmark for performance and ease of use for mid-sized production switchers. Building on the success of the Carbonite Ultra series, Carbonite Ultra 60 is the biggest, fastest and most powerful Carbonite that Ross Video has released to date.

In its own class of production switchers, Carbonite Ultra 60 allows media groups looking to undertake larger, more complex, and more demanding productions to do so without having to spend a considerable amount of money on one of the largest switchers on the market. Carbonite Ultra 60 is an ideal solution for facilities that need the power, affordability and feature set of Carbonite but require more inputs and outputs than previously available.

Key Capabilities Include:

• A modular production switcher solution that supports an I/O of up to 60x25 in HD or UHD. And because the 3RU frame is modular, it can be configured as 36x15 in a flexible platform that can meet the production needs of today while growing alongside expanding production facilities.

• Although Ultra 60 leverages the latest hardware technology to provide incredible performance, it shares the DNA of Carbonite Ultra, including its entire feature set.

• Like Carbonite Ultra, the new Ultra 60 platform goes well beyond simple layering and transitions with onboard frame syncs, format converters, multiviewers and more.

• From SD to UHD and beyond, Carbonite Ultra 60 supports most major formats and frame rates. Additionally, HDR and WCG support is built-in, making Carbonite Ultra 60 the ideal system to grow with.

• Audio mixing and processing capabilities are available with an easy-to-install license key. There's even an available external hardware module for incorporating analog audio into Carbonite Ultra 60.

Carbonite Ultra 60 is yet another step in the evolution of this remarkable product series and features a number of firsts:

• Carbonite Ultra 60 is the first Carbonite with modular I/O boards, making it easier and less costly for customers to leverage the switcher's incredible feature set without buying more I/O than necessary for present needs, while supporting future growth.

• Carbonite Ultra 60 is the first Carbonite with internal power supplies making installation easier and “cleaner.”

• Carbonite Ultra 60 is the first Carbonite with an I/O of up to 36x25, making it available to facilities that wish to leverage Carbonite’s power, affordability and feature set but require more inputs and outputs than are currently available.

• Carbonite Ultra 60 is the first Carbonite to provide the same I/O in both HD and UHD.

85 Best of Show Awards 2023 | NAB Show
ROSS VIDEO
FOR MORE INFO

Rally Media Supply Chain Platform

SDVI’s Rally media supply chain platform enables optimization from ingest through delivery, empowering media organizations to provide the entertainment experiences audiences demand — faster, more economically and more sustainably. Rally not only deploys all the applications and infrastructure needed to create a dynamic, responsive, cloud-enabled media supply chain but also orchestrates all supply chain actions.

The only supply chain management platform capable of facilitating true end-to-end optimization, Rally is alone in providing compounding efficiencies (and value) as each step builds upon the previous. From content receipt to content distribution, Rally gives media companies a uniquely comprehensive management platform for driving agility, efficiency and intelligence across their media operations.

The first solution to unite the disparate tools and infrastructure needed to prepare content for distribution, SDVI’s Rally platform enables supply chain operators to manage the whole “system” rather than a collection of parts. The cloud-native platform helps users to orchestrate the work required and deploy the resources needed, including an array of application services providing media processing as and when required, on a per-use consumption basis. Collectively, these services execute the supply chain requirements — with the right tool matched to the type or value of the work — while freeing up internal resources to focus on higher-value work, such as perfecting the customer experience.

While automated provisioning and scaling of cloud infrastructure to meet demand improves efficiency over traditional models, SDVI also has stepped up Rally's sustainability — and that of its users — through its Net-Zero Supply Chains initiative, which uses carbon offsets and recapture to render SDVI customers' supply chains running on Rally net-zero.

As Rally automates and accelerates multistep processes, the supply chain itself gives media organizations the information they need to make better, faster decisions. The platform provides enterprise visibility of the end-to-end supply chain, so users enjoy up-to-the-minute information about the status of all running supply chains, and operators are instantly alerted to any irregular status changes. In addition to presenting per-unit information on resources and costs for every job, the platform enables accurate, granular forecasting with predictable costs for budgeting and for evaluating the profitability of new projects and deals.

No other platform delivers the range of use cases that Rally supports, and recent updates enhance Rally's utility and value for an even broader array of users, most notably studios. The platform now supports relationships of assets to one another, allowing customers to build their own ontology of asset relationships (i.e., episode to season to show, etc.) or to use the MovieLabs 2030 Vision and its Ontology for Media Creation. Additional features introduced in Q1 2023 support the growing use of Rally by major studios that need to prepare content deliveries. To simplify usage by studios, Rally has achieved Blue-level certification with TPN+, which acknowledges the security posture of Rally as being acceptable for MPAA members.

With these new updates and through ongoing refinement of its optimization capabilities, Rally continues to offer media organizations a uniquely robust solution for end-to-end media supply chain management.

86 Best of Show Awards 2023 | NAB Show SDVI
FOR MORE INFO

SONY FR7

The Sony FR7 is the world's first full-frame, interchangeable-lens cinema camera with pan-tilt-zoom (PTZ) functions. Unlike traditional PTZ cameras, which feature a smaller, 1-inch sensor, the FR7 offers a full-frame sensor that delivers the image quality of a cinema camera and the versatility of a PTZ camera. Productions can choose from 72 E-Mount lenses, ranging from prime lenses for shallow depth of field to zooms for flexibility and range.

And thanks to its full-frame cinema sensor and BIONZ XR processor, the FR7 can produce superb image quality across applications — from broadcast and reality TV to live events. In fact, the FR7 has been used alongside Sony's flagship digital cinema camera, the Sony VENICE 2, to bring a cinematic look to some of the world's most prestigious live events, including the "Super Bowl LVII Halftime Show," "The Weeknd Concert" on HBO Max, "Norman Lear: 100 Years of Music and Laughter" on ABC, "Mariah Carey: Merry Christmas to All!" on CBS and "Kendrick Lamar Live: The Big Steppers Tour" on Amazon Prime.

The FR7 features a full-sized cinema sensor that provides 15+ stops of dynamic range and excellent low-light visibility. In addition, this cinematic PTZ camera can capture up to UHD 4K at up to 120 fps for slow-motion effects and offers live streaming. The FR7 also provides the flexibility and versatility of a cinema camera when it comes to its color science. The camera is preloaded with the S-Cinetone LUT, inspired by our prestigious Sony VENICE motion picture camera. Users can also load LUTs (look-up tables) or shoot Log footage for the ultimate flexibility.

This versatile camera features the unique ability to use interchangeable Sony E-mount lenses and is currently compatible with up to 72 E-mount lenses ranging from 12 to 1200mm. Depending on the needs of the production, users can pair the FR7 with prime lenses to create shallow depth-of-field bokeh effects or pair the camera with a wide range of zooms for versatility.

Other incredible features include a built-in electronic ND filter, which allows variable ND filtration ranging from 1/4 (2 stops) to 1/128 (7 stops). This allows you to compensate for changes in exposure without affecting your depth of field. Another feature that sets the FR7 apart from traditional PTZ cameras is the ability to record directly in the camera or output raw video to an external recorder for color grading and other post-production tweaking.

Other features include:

• Allows remote and robotics functions: control your camera from a distance (such as from a broadcast truck), and control up to 100 cameras from the IP500 controller with 100 camera-position presets.

• Output video using the HDMI and up to 12G-SDI video outputs

• Stream RTSP, SRT or NDI|HX (requires NewTek license) using the LAN port

• Can be easily controlled with the included IR remote, web app remote control and the IP500 control console

• Fast-hybrid autofocus with face detection and touch tracking enables you to keep even dynamic action in focus

87 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

VENICE Extension System 2

The VENICE Extension System 2, commonly called Rialto 2, is the next generation of the Rialto — an innovative camera accessory used to capture in-camera action on recent blockbusters "Top Gun: Maverick" and "Avatar: The Way of Water."

The VENICE Extension System 2 is a tethered extension system that allows you to detach the camera body from the actual image sensor block without degradation in image quality. The VENICE Extension System 2 adds important enhancements over the original, including compatibility with both the VENICE and the VENICE 2. In addition, the Extension System 2 offers greater mobility and compatibility with either a 3-meter or a 12-meter cable — without the need for a repeater.

Thanks to its small size and light weight, filmmakers have greater creative freedom with the ability to shoot in tight spaces, go handheld or mount the Extension System 2 on gimbals and cranes.

The Extension System 2 is nearly the same size as the original system and weighs around 2.1 kg. For added capabilities, such as when shooting for Visual FX, the Extension System 2 is equipped with a tilt/roll sensor on the camera head that detects motion. It records this information in the metadata and outputs the data through the camera’s SDI.

For added functionality, the Extension System 2 also has four assignable buttons — making it perfect for handheld configurations. This feature allows buttons to be assigned for functions, such as changing ND filters, REC Start/Stop, etc.

Academy Award-winning cinematographer Erik Messerschmidt has been shooting with the VENICE Extension System 2 on the upcoming feature film “Ferrari.” He offered this feedback: “The VENICE 2 Rialto has been a huge asset for me. It frees us to use the sensor we already love in ways we never could without it.”

What technological advancements does this represent?

This innovation allows you to separate the camera sensor from the camera body, as was the case on "Top Gun: Maverick," or use it for smaller setups, such as on "Avatar: The Way of Water," where two Rialto systems were used on a custom 3D rig. This innovative approach allows productions to create a small, lightweight rig without compromising image quality for feature films, TV series and live events.

In addition to its small form factor, the VENICE Extension System 2 gives productions even more mobility, allowing crews to place the extension system further away than ever before — up to 40 feet — without the need for a repeater. The VENICE Extension System 2 also now incorporates a two-axis gyroscope in the sensor housing. This allows the tilt and roll positional information of the image sensor to be captured as metadata and utilized in VFX workflows.

88 Best of Show Awards 2023 | NAB Show SONY
FOR MORE INFO

Creators’ Cloud

Sony’s Creators’ Cloud is a cloud-based platform that provides enterprises in the media and entertainment industry, individual creators and small teams with secure access to efficient services and apps to maximize their production workflows. It is comprised of the following elements.

For Enterprises:

• C3 Portal is a cloud gateway service that enables seamless content delivery from the field to the cloud, alleviating traditional physical limitations while vastly accelerating distribution and edit workflows. Custom metadata-tagged camera clips can automatically be pushed from cameras to a production system or nonlinear editor (NLE) of your choice, resulting in simplified search, identification and editing. New integration with Marquis and Avid completely automates this process, eliminating the need for user interaction; it is currently in use by Sinclair Broadcast Group. Teradek support is now available to enable fast and secure content delivery from the field to virtually any location.

• Ci Media Cloud is a media management and collaboration service that allows teams to access and collaborate on in-production files and finished media content from virtually anywhere. Recent Ci updates include enhancements to functionality, updated workflows, new pricing plans, integrations with Atomos CONNECT products via Atomos Cloud Studio (ACS) and Pomfort's Silverstack Lab post-production software, as well as a partnership with Deloitte.

• M2 Live makes multicamera live production quick and efficient by providing scalable cloud-based tools that enable the creation of engaging live event streaming, social and web content.

• A2 Production is an AI-based workflow automation process that provides data analysis, subtitling, clip creation and content enrichment.

• NavigatorX manages and orchestrates assets, data, workflows and devices.

• Cloud Master Control Content Browser by Crispin enables prep and review of playback content from the cloud, supporting full cloud or hybrid cloud/on-prem master control, operated from Crispin's Core web-based user interface.

For Individuals:

• Creators’ App enables access to content and upload capabilities from a mobile device, as well as remote control of select Sony cameras.

• Free storage: 5 GB for Creators' Cloud account holders and 25 GB for owners of select cameras.

• Discover is a place for creators to connect and share their content.

• Ci Media Cloud is also open to individual creators and small teams and can be accessed through a Sony account.

89 Best of Show Awards 2023 | NAB Show
SONY ELECTRONICS
FOR MORE INFO

SONY ELECTRONICS INC.

27-Inch Spatial Reality Display

Sony's next-generation 27-inch Spatial Reality Display comes packed with a bigger screen and evolutionary upgrades in its high-speed vision sensor, image quality-enhancing technologies and software capabilities. The product sets a new bar for the glasses-free 3D display category, delivering extremely high precision and contrast for a powerful spatial reality experience. It is the ideal 3D visual medium for many business applications, especially in the entertainment industry.

90 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

SPECTRA LOGIC Spectra StorCycle Enterprise Software for Digital Preservation

Having fast remote access to archived content to search, restore and create new stories has become a major revenue-producing necessity as workforces have remained largely remote and distributed. Organizations that have focused on optimizing their IT environments to achieve maximum efficiency in their data storage have been able to benefit from the full value of their archived digital assets.

Spectra's StorCycle is enterprise software for digital preservation that addresses the challenges of content storage and long-term preservation. StorCycle is simple to use and supports multiple use cases, including long-term archive, digital preservation, project archive and the migration of data to the most cost-efficient storage tier. Without sacrificing data availability, StorCycle archives and manages media assets at scale by identifying, migrating, accessing and preserving digital media assets for the entire lifespan of that data — be it short-term or forever. StorCycle identifies file attributes of unmanaged assets and moves less frequently accessed content to a secure nearline or archive tier, which includes any combination of cloud storage, object storage disk, network-attached storage (NAS) and object storage tape.

StorCycle’s ability to archive digital assets on-premises or to the cloud gives organizations the flexibility to store files where most appropriate to maximize scalability, disaster recovery and cost requirements — limiting storage expenditures during tough budget times and freeing up valuable storage capacity on primary storage. The software effectively automates cleanup of production storage and helps optimize storage utilization by enabling users to trigger automated media asset migration with user-defined policies based on attributes like creation date, age, size or last access.
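As a rough sketch of how such an attribute-based policy might be expressed (an illustration only; the function, thresholds and field choices below are hypothetical and are not StorCycle's actual API):

# Hypothetical attribute-based check for archive candidates, illustrating
# policy rules built on age, size and last-access time.
import os, time

DAYS = 86400

def should_migrate(path, min_age_days=365, min_size_mb=100, idle_days=180):
    """Return True if a file looks like a candidate for a nearline/archive tier."""
    st = os.stat(path)
    now = time.time()
    old_enough = (now - st.st_mtime) > min_age_days * DAYS     # modification time as a proxy for age
    large_enough = st.st_size > min_size_mb * 1024 * 1024      # size threshold
    idle = (now - st.st_atime) > idle_days * DAYS              # last-access threshold
    return old_enough and large_enough and idle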

Organizations can also automate the replication of data across multiple locations, systems or in the cloud to meet varying data availability, reliability, compliance and protection goals. For best practices in media storage, StorCycle can automatically make additional copies of data for disaster recovery purposes, including on tape (locally or remote) for physically separated air-gapped copies that protect data from ransomware. It can encrypt data, adding another level of protection and security to the data being stored.

StorCycle enables users to manage archives remotely and easily via a web UI. Users can manually or automatically archive entire project-based directories or subdirectories while maintaining familiar and consistent access to copied or migrated assets. With the use of HTML links or symbolic links and a web-based search, archived data remains easily accessible to users in a familiar manner. StorCycle stores content in open formats so that data is always accessible through StorCycle or from the storage target itself. Assets can be migrated to the cloud for sharing or collaborative workflows.

StorCycle’s modern approach to digital preservation enables organizations to ensure media assets are located in the right place at the right time, delivering affordable long-term protection and access to content while helping organizations become more effective and efficient. For any organization managing an expanding amount of digital content, StorCycle effectively provides the means to automate the management and preservation of vast amounts of growing assets for future use and monetization.

91 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

SPHEREX

Spherexgreenlight

Spherexgreenlight is the world's first expert-in-the-loop AI and machine-learning technology that highlights how to optimize global distribution and avoid costly mistakes. Spherexgreenlight analyzes movies and TV shows in 200+ countries and territories for regulatory compliance and brand safety by highlighting objectionable and culturally inappropriate content — generating the right local age rating, content advisories and consumer experience. With this proprietary technology, Spherex optimizes localization and post production while helping content providers, distributors and streaming platforms make smarter acquisition and programming decisions.

92 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

SSIMWAVE, AN IMAX CO.

Stream Smart

Stream Smart enables streaming providers to retain the quality of the viewer experience while reducing bandwidth needs and delivery costs. SSIMWAVE's Emmy Award-winning technology, with its hyper-accurate perceptual quality SSIMPLUS metric, automatically and precisely calculates the bandwidth needed to deliver on-demand content at the provider's current video quality level. Its unique and patented intellectual property optimizes existing encoding devices, enabling rapid deployment in video providers' existing infrastructures. Stream Smart also allows providers to minimize access network issues, driving reductions in rebuffering, startup times, stalling and profile switching.

• Straightforward to deploy with no change to encoder and/or infrastructure

• Unique IP allows it to configure the encoder settings optimally for bandwidth savings on top of what the current encoder is providing

• Limited touching of the encoder settings

• Our magic lies in the unique AI-driven technology that provides the most accurate perceptual quality layer to the settings

• Efficient, cloud-native, easy to deploy

• Provides savings ranging from single digit percentages up to 15% across larger libraries

Stream Smart is a new generation of optimization solution, building on top of SSIMPLUS VQ Dial.
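At its core, this style of optimization comes down to choosing, per title or per scene, the lowest bitrate whose predicted perceptual quality still meets the provider's target. The sketch below illustrates only that general idea; the candidate ladder, the score_encode() quality model and the target value are hypothetical and are not SSIMWAVE's actual algorithm.

# Pick the lowest candidate bitrate whose predicted quality meets the target.
CANDIDATE_BITRATES_KBPS = [1200, 1800, 2500, 3500, 5000, 8000]

def pick_bitrate(title, target_score, score_encode):
    for bitrate in CANDIDATE_BITRATES_KBPS:             # ascending order
        if score_encode(title, bitrate) >= target_score:
            return bitrate                               # first rung that meets the target
    return CANDIDATE_BITRATES_KBPS[-1]                   # otherwise fall back to the top rung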

SSIMWAVE’s technology is already optimizing 2.8 billion viewing minutes of on-demand titles per month.

More than 150 million subscribers enjoy the improved viewer experiences we help leading global streamers, studios and pay-TV operators deliver.

SSIMPLUS is the only perceptual quality metric that can accurately measure viewer experience at any point in a workflow, support all content attributes including high dynamic range, and work uniformly across all content types, such as animation and sports content. The SSIMPLUS family of algorithms replicates the human video experience and closely correlates with Mean Opinion Scores.

In 2020, SSIMWAVE received a Technology & Engineering Emmy® Award from the National Academy of Television Arts & Sciences for its work on SSIMPLUS in the category Development of Perceptual Metrics for Video Encoding Optimization. The role of SSIMPLUS in enabling the development of massive-scale, processing-optimized Stream Smart technologies at Disney Streaming Services is recognized in this post: https://www.linkedin.com/posts/scottlabrozzi_disneys-2020-technology-and-engineering-activity-6767223821717004288-cvvg

93 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

SSIMWAVE, AN IMAX CO.

VOD Monitor

To maintain high standards for picture quality and keep operations running smoothly at scale, providers need to efficiently validate that content meets their own quality standards. Watching every second and every frame of each title version is impossible when a single title can have hundreds of versions.

Some common video quality validation challenges:

1. Identifying and localizing quality issues that are likely to cause problems downstream in the supply chain.

2. Handling many QC (quality control) alerts with a high percentage of false positives.

3. Flagging titles with video quality issues without an objective and standardized metric.

4. Quickly analyzing and determining if consistent quality is delivered when multiple versions of the same title are in play.

5. Failure to scale on demand due to limited availability of QC licensing and compute infrastructure.

The solution:

VOD Monitor is the only 2-in-1 solution for effective video quality measurement and media quality control (QC) for file-based validation.

Trusted and used by the teams at four of the world’s top-10 streaming media companies — including Disney, Paramount Global and Warner Bros. — SSIMWAVE solutions have improved the viewer experience of more than 150 million subscribers over billions of viewing minutes to date. The world’s top streaming media companies, studios and pay-TV providers work with SSIMWAVE. The technology is protected with 50 patents and patents pending globally.

VOD Monitor is designed to pinpoint instances in which video quality is compromised using the same criteria as if it were evaluated by "golden eyes." It is an industry-trusted solution that defines the level of video quality providers expect from vendors and workflows.

SSIMPLUS VOD Monitor's 0–100 scene-by-scene "report card" grades can be used to accept, flag or reject assets.

• A proprietary grading system trusted by The Television Academy, IMAX and the ASC.

• Quickly compare two or more assets of the same title and highlight differences with proprietary AI that sees like a human.

• Automatically identify and localize issues, further reducing manual QC.

• One tool to verify video quality and to check must-have elements including video, audio and metadata to meet content delivery requirements.

• Designed from the ground up as a highly available system that uses elastic scaling to handle the most demanding workloads.

Unique features include:

• Perceptual video quality scoring, via the Emmy Award-winning proprietary no-reference and reference metric SSIMPLUS.

• User-defined checks: Luminance, Viewer Experience Score, Banding, Color Difference and Encoding Performance for quick identification of content sections where thresholds and duration compromise quality.

• Content similarity analysis: compares two video assets and identifies mismatched frames or scenes.

Plus a wide range of industry-standard quality checks:

• File, Video and Audio Parameters.

• Video content behavior and audio validation.

• Configurable content analysis templates to ensure content acceptance for both contribution and distribution.

• Full HDR support with all checks, measurements and validations required to manage multistandard HDR workflows, such as Dolby metadata validation (beta version) and MaxCLL/MaxFALL cross-validation between measured and declared values.

94 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

SUITELIFE SYSTEMS

Axess Monitor & Control

Axess Software is a broadcast transmission and IoT hardware integration solution and an industry leader in remote site orchestration of transmitters, generators and HVAC/environmental sensors. Axess connects industry hardware so it can be monitored, managed and controlled from your main studio. The software is agnostic to manufacturer and model.

95 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

SWXTCH.IO

cloudSwXtch Video Overlay Network

cloudSwXtch is a virtual overlay network that brings on-premises features, including multicast distribution, to the cloud. Deployable within a cloud tenant, cloudSwXtch helps broadcasters and video service providers merge on-premises and cloud networks, migrate demanding media workloads, and establish mesh configurations to create global networks. cloudSwXtch unlocks network features that are otherwise missing in the cloud but required for demanding, high-throughput broadcast and media workflows, including multicast, broadcast, packet monitoring, network path redundancy, and protocol conversion and fanout.

cloudSwXtch is the first solution for merging on-premises and cloud workflows around the world, empowering customers with previously unavailable cloud features to migrate demanding media workflows. swXtch.io has improved cloud networking features by adding PTP clock synchronization to cloudSwXtch. PTP, which synchronizes time signals and reverses "clock drift," ensures that data is received and processed in the correct order. Like multicast and other cloudSwXtch features, PTP has previously not been available on public cloud platforms. The integration of PTP into cloudSwXtch adds standard PTP access to hybrid cloud and hybrid networks, bringing cloud capabilities in line with on-premises networks that extensively use PTP for clock synchronization.

cloudSwXtch addresses inherent hurdles in conventional cloud architectures. By supporting multicast, cloudSwXtch simplifies traditionally cumbersome network reconfigurations. PTP enables synchronization of video and audio sources in cloud networks for the first time, and cloudSwXtch's protocol conversion and fanout capability makes it easy to interconnect an abundance of interface protocols. For example, cloudSwXtch can seamlessly translate between UDP Multicast, UDP Unicast, SRT and other protocols, allowing endpoints with different protocols to interact on the same network with no configuration or management. Along with SMPTE 2022-7 network path redundancy (hitless merge) for high availability, full support of uncompressed SMPTE ST 2110 workflows, and dynamic ground-to-cloud and cloud-to-cloud bridging, cloudSwXtch genuinely breaks new ground by making these important and valuable features available on public cloud networks.
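To make the fanout idea concrete, the sketch below shows the underlying pattern in plain socket code: receive a UDP multicast stream and relay each datagram to a list of unicast subscribers. It is a generic illustration of multicast-to-unicast fanout, not cloudSwXtch's implementation or API; the group, port and subscriber addresses are placeholders.

# Generic multicast-to-unicast fanout: join a multicast group, then relay
# every received datagram to each unicast endpoint.
import socket, struct

GROUP, PORT = "239.1.1.1", 5000
UNICAST_SUBSCRIBERS = [("10.0.0.11", 5000), ("10.0.0.12", 5000)]

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    packet, _ = rx.recvfrom(2048)           # one multicast datagram
    for dest in UNICAST_SUBSCRIBERS:        # fan it out to each unicast endpoint
        tx.sendto(packet, dest)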

Also launched at the NAB Show, the cloudSwXtch user experience has been enriched through wXcked Eye, an advanced user interface that offers detailed insights into global cloud network performance. wXcked Eye's open API enables seamless connectivity to all endpoints connected to a cloudSwXtch network, as well as to other cloudSwXtches connected via mesh technology. Users can remotely add, configure and adjust settings for cloudSwXtches and third-party technologies directly within the user interface. As cloudSwXtch networks can infinitely scale via mesh technology, wXcked Eye provides global visibility of how media workflows are performing as they move between on-premises locations and cloud networks.

cloudSwXtch's architecture brings bare-metal parity to cloud networks that customers can build upon, beginning with the features that matter most and adding new features as required. cloudSwXtch's virtual architecture establishes a long-term development tool for broadcasters, video producers and cloud platforms such as Amazon Web Services, GCP, OCI and Microsoft Azure — and can even create workflows across multiple disparate clouds for higher availability. cloudSwXtch is available in several variations based on initial customer needs and can be scaled and built upon for years to come.

96 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

TAG VIDEO SYSTEMS

Content Matching Technology

Content Matching is TAG's newest and perhaps most ground-breaking technology yet. This unique mechanism detects similar content across two different streams to ensure correct and uninterrupted delivery to the intended destination. This is done by creating a unique fingerprint for each video frame and audio envelope and matching them across the entire media distribution path against a user-defined reference point. This new technology dramatically reduces workflow complexity and eyes-on-glass and enables media companies to deliver quality content with fewer resources and more confidence.

TAG’s Content Matching can identify and correlate audio and video uniqueness accurately regardless of the resolution, bit rate or frame rate, thus enabling a match between any two or more points in the workflow. Even after the content has been processed and manipulated, TAG will still be able to identify the match and confirm that the content is identical, correct and behaves as expected.
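To illustrate the general principle behind fingerprint-based matching (a toy sketch, not TAG's proprietary algorithm), the example below reduces each frame to a tiny signature that tends to survive re-encoding and resolution or bit-rate changes, then compares signatures from two probe points by Hamming distance. The 8x8 signature size and the match threshold are illustrative assumptions.

# Toy average-hash fingerprint: downscale a grayscale frame to an 8x8 grid of
# block means, threshold against the mean, and compare two signatures.
import numpy as np

def frame_fingerprint(frame_gray, size=8):
    h, w = frame_gray.shape
    bh, bw = h // size, w // size
    blocks = frame_gray[:bh * size, :bw * size].reshape(size, bh, size, bw).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()       # 64-bit boolean signature

def frames_match(sig_a, sig_b, max_hamming=5):
    return np.count_nonzero(sig_a != sig_b) <= max_hamming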

In addition, the new TAG technology allows users to get to the root cause of problems faster and troubleshoot more efficiently, even in the most complex, elaborate workflows. Based on a sophisticated, real-time, frame-to-frame correlation engine, the system notifies the user when the first content mismatch occurs; combined with TAG's rich probing and monitoring, users can easily identify and resolve the source of the errors.

TAG’s content matching enables, but is not limited to, the following highly requested media workflow applications:

• Frame-accurate latency measurement between any two or more points in the workflow

• Comparing quality and content accuracy across different feeds to evaluate distribution methods or alternative paths

• Confirming ad insertion against SCTE messages with frame accuracy to assure and protect revenue

• Validating A/V alignment and measuring audio channel drift at any point in the workflow

The ability to identify, match and correlate content to content anywhere in the workflow empowers users to measure a wide variety of parameters, and the potential uses are left to the user’s imagination. With a reference point and one or more monitoring points, comparisons are easily made, and issues can be quickly identified.

Combined with TAG's flagship software-only monitoring and visualization platform, Content Matching is a powerful tool. The technology adds yet another layer of monitoring to TAG's robust Multi-Channel Monitoring (MCM), a system that manages alarms and alerts operators of 500+ user-defined event thresholds. In addition, Content Matching provides another resource for the data collected and aggregated by TAG's Media Control System (MCS). The MCS allows data to be visualized with IT open-source tools, providing engineers with a more precise understanding of their workflow and the information they need to improve it.

97 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

TEDIAL MEDIA S.L.

smartPacks

Tedial's smartPacks is the media and entertainment (M&E) industry's first Packaged Business Capabilities (PBCs) solution. smartPacks is an enhancement to the company's smartWork cloud-native, NoCode Media Integration Platform. Launched at NAB 2022, smartWork has transformed business processes by empowering users to define integrations autonomously — without vendor participation — creating business processes in a flexible and agile manner.

Now, smartPacks is bringing broadcasters closer to the cloud and enabling the composable enterprise, a modular approach that leverages existing digital capabilities to create new products and services. Composability increases agility in digital transformation, relieves pressure on the IT team and frees up time for innovation.

smartPacks comprises a set of Packaged Business Capability (PBC) modules capable of streamlining the processes of different business units including, but not limited to: news, content delivery, post production, archive and even IMF. PBCs are reusable software components that provide the key building blocks of a composable enterprise, used to create best-of-breed solutions in many verticals. In the M&E industry, PBCs represent self-contained units that solve a specific problem: localization, content delivery, post production, etc. PBCs function without external dependencies or the need for direct external access to data. For interaction with the rest of the enterprise's systems and services, each PBC offers a data schema, an API, an event notification system and a set of services.

Applying modularity to M&E entities achieves the scale and pace required to enact ambitious business practices. An easy-to-use toolset, combined with the modularity of smartPacks' PBCs and smartWork's no-code technology, allows applications to be created without any prior knowledge of traditional programming. This enhances composability and results in the flexible design of applications and services, enabling organizations to innovate and adapt quickly to changing business needs.

Tedial's smartWork can be deployed on-premises, on any cloud or in a hybrid architecture for incredible flexibility. Cloud capabilities enable media services to be quickly built, deployed and evolved as business needs change by adding or adjusting business processes. Tedial recently completed the AWS Foundational Technical Review (FTR) for smartWork and has also joined the Google Cloud Partner Advantage program as a technology partner. All smartWork services are available as apps, which can be provided by AWS or Google.

98 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

OmniGlide Robotic Roving Platform

In another pioneering first, Telemetrics has developed a wireless video transmission and battery system for its OmniGlide Robotic Roving Platform that allows users to freely move the popular OmniGlide studio camera pedestal and avoid obstacles virtually anywhere in the studio without cables.

The new option for the OmniGlide consists of a field-proven wireless transmitter and battery system configured to the rover’s specifications, while maintaining the rover’s esthetics and precise, preprogrammed or manual movements.

"Being able to move the rover anywhere, without worrying about cables getting caught on a set piece or having a dedicated person physically managing the cables, is a big step forward for roving pedestals," said Michael Cuomo, vice president of Telemetrics. "Eliminating the cable loom attached to the rover further enhances our AI features such as Path Planning and Collision Avoidance."

Going wireless with the rover still provides the same functionality customers are used to, including camera power and control, teleprompter power and video, confidence monitor power and video, and full robotic control. Also, at any time, the system can run on a standard cable loom, providing a full backup solution. The wireless system is designed so that OmniGlides currently in the field can be upgraded to support wireless operation.

The hot-swappable, fully rechargeable lithium-ion battery system provides DC power — which is safer than running AC power, as most rovers do* — and a long run time, making it ideal for long studio projects. Now users can shoot a production all day on battery power and then charge the system overnight to be ready for the next day. The wireless transmitter's robust features and low latency (subframe delay of 7 milliseconds) ensure the reliability of exact pedestal movements.

* The European Union has mandated that populated spaces, such as production studios, must use low-voltage (DC) power as a health and safety issue.

99 Best of Show Awards 2023 | NAB Show TELEMETRICS
FOR MORE INFO

TELESTREAM Content Manager

Telestream Content Manager is a new, next-generation solution that provides a single point of access for an organization's content across its entire storage ecosystem, including cloud and on-prem storage. Built on the DIVA Core technology, Telestream Content Manager is tightly integrated with the Telestream workflow orchestration tools, as well as supporting all major MAM, PAM and automation systems.

As media workflows continue to migrate to the cloud, content owners and aggregators find themselves with content stored locally and across multiple cloud platforms. Telestream created Content Manager to provide a pathway to work seamlessly with any form of storage on-premises or in multiple cloud platforms simultaneously, while lowering our customers’ costs to do so.

Three innovations combine to make this unification of cloud and on-prem content management practical and cost-effective. First, an intuitive web-based user interface provides users with the tools they need to discover and work with their media content. All content information is indexed and searchable, including system metadata, editorial metadata imported from other systems and customer-configurable metadata. Once content is found, it can be played back directly within the application and transferred to any connected device, such as a storage, production or playout system.

Second, auto-object discovery and the ability to index files directly from cloud storage eliminates the egress costs associated with copying to another location and enables enterprises to efficiently manage both legacy content and incoming files. Finally, the automation of content management actions and triggering of automated workflows enables greater efficiency through the ability to create sophisticated supply chain workflows that incorporate content movement, lifecycle management and media processing.
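For a sense of how indexing in place avoids egress on the media itself, here is a minimal generic sketch (not Telestream's implementation) that lists objects in an AWS S3 bucket and records only their metadata using the standard boto3 API; the bucket name is a placeholder.

# Build a lightweight index of cloud objects without downloading the essence.
import boto3

s3 = boto3.client("s3")
index = []
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-media-bucket"):
    for obj in page.get("Contents", []):
        head = s3.head_object(Bucket="my-media-bucket", Key=obj["Key"])
        index.append({
            "key": obj["Key"],
            "size": obj["Size"],
            "last_modified": str(obj["LastModified"]),
            "content_type": head.get("ContentType"),
        })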

NFL Films/TV2 have been working with an early version of Telestream Content Manager. Bob Russo, post-production workflow manager at NFL Films, said, “I’m really excited about the potential of Telestream Content Manager to provide our users with greater visibility into our content and processes. As a long-time DIVA customer, it’s great to see that Telestream is continuing to evolve content management to address new technologies and to meet our needs.”

Telestream Content Manager was unveiled at the NAB Show, where Telestream showcased its solutions for content creation and production, distribution and monetization. The new Content Manager product was Telestream's most substantial news at the NAB Show this year. Content Manager is planned for worldwide availability in Summer 2023.

100 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

TELESTREAM PRISM MPP

At the 2023 NAB Show, Telestream introduced PRISM MPP, a new line of multiformat rasterizers purpose-built for post-production and remote workflows. The three new models, MPP 100, MPP 200 and MPP 300, extend the PRISM family of software-defined monitoring instruments to address post-production users requiring high-end production video formats like 12-bit RGB for 4K/UHD applications in both SDI and IP.

These instruments include measurement tools for colorists with HDR requirements, a complete set of QC tools for objective evaluation of high-end video and audio content, and a remotely accessible user interface. Designed for post production, they’re exceedingly quiet, support a wide range of formats, provide new functionality designed to make color gamut assessment and compliance easier than ever, and offer loop-through for reference monitors and analog audio out for edit suite configurations.

It has been Telestream's vision to continually expand the PRISM platform from the day the team first created this family of solutions. These instruments represent the next evolution, supporting the specific needs of post production, and are positioned to continue to lead the way in engineering analysis to support creative applications like sports and event production.

The MPP models support local, remote and post-production applications up to 8K. They enable color grading in SDR and HDR formats including wide-color gamut, surround sound audio production up to 7.1.4, high-end post-production video formats including 4:4:4 and 12-bit RGB for 4K/UHD applications, dual display support, and all the operational SDI monitoring and outstanding IP analysis the PRISM family is known for.

The MPP software also includes a unique feature that makes it incredibly powerful for post-production colorists. Telestream has been offering the ability to apply a false color overlay to the picture display to indicate areas of an image that are outside of a specified color gamut. With the new software being released as part of the MPP instruments, it is now possible to determine how far from the Rec. 709 or DCI-P3 boundaries an out-of-gamut color is. The picture display is monochromatic for all colors that are within the gamut. The false color overlay indicates colors that are near, but beyond, the Rec. 709 or DCI-P3 boundary in one color and those close to the Rec. 2020 boundary in another. Colorists can adjust the necessary colors until the colored highlights are eliminated, ensuring that their entire image is compliant with the desired color space.
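As a rough illustration of the kind of per-pixel test behind such an overlay (a simplified sketch, not Telestream's algorithm), the example below converts linear Rec. 2020 RGB to Rec. 709 with the commonly published approximate matrix and classifies each pixel by how far it lands outside the Rec. 709 gamut; the margin threshold is an arbitrary assumption.

# Classify a linear Rec. 2020 pixel relative to the Rec. 709 gamut.
import numpy as np

BT2020_TO_BT709 = np.array([
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
])

def classify_pixel(rgb2020_linear, margin=0.05):
    rgb709 = BT2020_TO_BT709 @ np.asarray(rgb2020_linear, dtype=float)
    overshoot = max(float(np.max(rgb709 - 1.0)), float(np.max(-rgb709)), 0.0)
    if overshoot == 0.0:
        return "in_709_gamut"          # would render monochrome in the overlay
    if overshoot <= margin:
        return "near_709_boundary"     # first highlight color
    return "well_beyond_709"           # second highlight color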

WebRTC remote connectivity provides access through a web browser with the same level of performance and functionality (including audio) as sitting in front of the instrument. As they can be mounted in an equipment room and accessed via KVM or WebRTC network connectivity, the MPP models are also ideal for live and remote production operations.

As with the rest of the PRISM family, the MPP line provides a “no penalty” software upgrade path to add additional features as required and as new developments take place.

Expected availability of the PRISM MPP line was this spring.

101 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

TELOS ALLIANCE

Telos Infinity Virtual Intercom Platform (VIP) App

The Telos Infinity Intercom family of products continues to grow and expand, this time with the release of the Telos Infinity VIP app.

The VIP app is a companion application available to Infinity VIP customers that mirrors the look and functionality of the current HTML5 browser-based VIP panel offering, with a few key additions.

One such feature is a new system for easy panel sharing and configuration that allows VIP administrators to share invite emails with contributors containing a link that will open the app with the correct configuration, allowing end users quick access to the same virtual panel without the need to safeguard a browser tab between sessions. For devices without configured email, such as a dedicated tablet in a studio, virtual panels are easily accessible by entering a beacon address with a corresponding password.

The free Infinity VIP app is available for download for Android devices from Google Play and for iOS devices from the Apple App Store.

To learn more about Telos Infinity VIP, please visit https://www.telosalliance.com/vip.

102 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

TELOS ALLIANCE

WorkflowCreator

Minnetonka AudioTools Server has earned its reputation among content creators and distributors as one of the most highly flexible tools available for handling complex, file-based audio automation tasks. The introduction of WorkflowCreator addresses one of the biggest challenges faced by ATS customers: the need to manually edit XML files to create new custom workflows. WorkflowCreator retains the same core functionality and includes all the features offered in WorkflowEditor — which it replaces — but introduces the ability to delete steps, add new steps and create brand-new workflows completely from scratch with an intuitive, easy-to-use graphical interface.

WorkflowCreator is included with the purchase of the AudioTools Workflow Control Module. Current ATS customers with an active TelosCare PLUS SLA can upgrade their systems to include WorkflowCreator.

For more information, please visit the AudioTools Server page of the Telos Alliance website at https://www.telosalliance.com/ATS

103 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

TERADEK

Prism Mobile

Prism Mobile is the lowest-SWaP, LTE-enabled camera-back video encoder in its class. Utilizing two internal LTE modems, Prism Mobile allows you to reliably contribute secure, low-latency, 4K HDR video from the most challenging locations by aggregating up to nine network sources.

Get exceptional worldwide internet connectivity virtually anywhere you go with Prism Mobile’s two internal high-throughput Node II modems uniquely designed for video applications, and the additional capability to add an external modem for even more coverage. For maximum redundancy, Prism Mobile can bond up to nine network connections across the following, ensuring your stream never misses a beat:

• 2 Node II Internal Modems

• 1 Node II External Modem

• 2 Gigabit Ethernet

• 4 Cellphone Hotspots

With the Prism app for iOS and Android, you can manage your Prism devices right from your phone or tablet for ease of use. Fire up the app to configure your encoder, monitor and manage your feed, and review streaming stats in real time. And if you’re lacking internet signal, the Prism app allows you to share your phone’s 5G or LTE bandwidth with your Prism device. Achieve Prism’s true potential with Core, Teradek’s streaming media orchestration platform. Core allows you to securely monitor, manage, route, distribute and archive your live video feeds from anywhere in the world. With Core, you can:

• Configure your Teradek devices and monitor their status and performance in real time.

• Stream to one or many destinations, all at once. Securely and quickly deliver broadcast-quality live and recorded video to CDNs, decoders, software solutions and more.

• Enable network bonding on Prism Flex and Prism Mobile encoders, allowing you to stream over multiple network connections for the ultimate stream reliability.

• Protect your live feed with a secondary backup stream for a seamless handoff in the event of network congestion or failure.

• Deliver 4K HDR video and audio in ultra-low latency to stakeholders viewing with Core TV, Teradek’s secure monitoring application for Android, iOS, AppleTV and Mac OSX.

Prism Mobile’s camera-back form factor mounts to the Gold/V-mount battery plate on your ENG camcorder or professional camera rig. And for studio or fixed workflows, Prism Mobile is available as a desktop solution. Visit teradek.com/prism-mobile to learn more about the most resilient bonded-cellular encoder for live coverage from the field.

104 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

TRIVENI DIGITAL ATSC 3.0 Translator

As ATSC 3.0 deployments accelerate, being able to rapidly launch NextGen TV services is even more critical for broadcasters. Triveni Digital’s new ATSC 3.0 Translator meets this need by significantly reducing the cost of ATSC 3.0 service delivery. Leveraging the ATSC 3.0 Translator, broadcasters can efficiently repeat or translate their existing ATSC 3.0 signal to other areas without the need for an entire broadcast chain — minimizing costs, equipment and power. In addition, the ATSC 3.0 Translator includes an optional feature for NextGen TV signing, helping broadcasters quickly expand the reach of ATSC 3.0 services.

The solution is ideal for both public statewide networks and private cloud-based environments. The ATSC 3.0 Translator is also perfect for business channels using SRT distribution, signal capturing for manufacturing, signal monitoring for automobiles and remote areas, ATSC 3.0 retransmission, changing channel numbers for local broadcast stations and more.

Unique features and benefits of the ATSC 3.0 Translator include:

• Faster time to market: The ATSC 3.0 Translator eliminates the need for a full-blown broadcast chain for ATSC 3.0 redistribution, reducing the time to market for NextGen TV services.

• Reduced costs: With the ATSC 3.0 Translator, broadcasters need less equipment to deliver NextGen TV services, reducing power and minimizing capex and opex.

• Superior quality of experience: The ATSC 3.0 Translator validates ATSC 3.0 signals in real time down to the frame structure, offering support for multiple physical layer pipes. Using the innovative solution, broadcasters can perform ATSC 3.0 network troubleshooting and postmortem analysis with log and trend files to improve NextGen TV experiences for viewers.

• Ease of use: ATSC 3.0 is a new television standard that is more complex than its predecessor. Triveni Digital's ATSC 3.0 Translator is simple to operate and maintain, as it minimizes the amount of equipment being used for ATSC 3.0 delivery.

Triveni Digital’s ATSC 3.0 Translator is a game changer for the broadcast industry, allowing broadcasters to quickly and cost-effectively bring NextGen TV services to market. With the ATSC 3.0 Translator, broadcasters can be at the forefront of NextGen TV, improving television viewing experience and opening up new monetization opportunities.

Research by BIA Advisory Services found that new datacasting revenue from NextGen TV is likely to reach $5 billion by 2027 and $10.7 billion by 2030, accounting for 22% of total local broadcasting revenues by 2030. With Triveni Digital's ATSC 3.0 Translator, stations can capitalize on that revenue and engage viewers like never before.

105 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

X-Connect IP Routing Control System

Market Snapshot

Broadcast infrastructure is trending towards an all-IP environment. This process has taken years, from the broadest of concepts, to today, where standards-based IP media transport is becoming the default for broadcast production. This leads to a completely virtualized environment where media is routed over IP and controlled remotely.

Where X-Connect Fits

Distributed and remote production are now commonplace alongside conventional production workflows. X-Connect solves the challenge of creating a virtual IP media transport routing system that feels familiar to operators used to a conventional system. It also solves the problem of how to use existing physical controllers in an IP environment.

Benefit for Customers

X-Connect prolongs the life of legacy equipment while providing a bridge to the all-IP installations of the future.

Thanks to existing industry standards, mixing and matching IP and non-IP devices is now not only feasible but cost-effective. X-Connect enables customers to reuse existing control surfaces and panels in a hybrid/IP environment. This lowers the cost of entry to SMPTE 2110 IP routing and increases sustainability.

With TSL’s flexible and scalable approach to broadcast control, X-Connect provides the smoothest and most flexible transition for customers between the two domains.

X-Connect Product Ecosystem Description

X-Connect is an IP routing and control ecosystem with three main elements: a control layer to discover and qualify participating devices, a user interface, and a control processor. The combination creates a virtual router that can be integrated with products like TSL’s TallyMan. Users can also connect TSL’s virtual panels as well as a wide range of other control surfaces.

Operation/Innovation

X-Connect creates a control layer to discover and qualify IP media endpoints, then presents them as part of a controllable matrix to a northbound router control system such as TSL’s TallyMan.

From an operator’s point of view, an X-Connect workflow presents itself as a conventional baseband router, but built on IP technology.

X-Connect Summary

The X-Connect IP Routing and Control system is an ideal routing solution for both mixed IP and conventional installations. It is a virtual router built on TSL’s proven processing platform and exposed via industry-standard protocols. Users can configure individual devices or entire control rooms dynamically and responsively.

X-Connect utilizes the open industry standard NMOS, making it compatible with a wide range of existing devices as well as fully controllable by TSL’s own Virtual Panels and router control, for a fully customizable installation. The comprehensive and robust routing solution is also well suited to incremental upgrades.
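For readers unfamiliar with how an NMOS-based control layer discovers endpoints and makes routes, the sketch below shows the general pattern using the AMWA IS-04 Query API and IS-05 Connection API. It is a generic illustration, not TSL code; the registry URL and the receiver's host field are placeholders, and a real deployment adds authentication, error handling, transport-file handling and WebSocket-based discovery.

```python
import requests

# Hypothetical NMOS registry address (placeholder, not a TSL endpoint).
QUERY_API = "http://registry.example.local/x-nmos/query/v1.3"

def list_senders_and_receivers():
    """Discover media endpoints via the IS-04 Query API."""
    senders = requests.get(f"{QUERY_API}/senders", timeout=5).json()
    receivers = requests.get(f"{QUERY_API}/receivers", timeout=5).json()
    return senders, receivers

def route(sender, receiver):
    """Stage and activate a route on the receiver via the IS-05 Connection API."""
    # Placeholder: in practice the Connection API base URL is looked up from the
    # receiver's parent device, and the sender's SDP transport file is passed too.
    conn_api = f"http://{receiver['hostname']}/x-nmos/connection/v1.1"
    patch = {
        "sender_id": sender["id"],
        "master_enable": True,
        "activation": {"mode": "activate_immediate"},
    }
    url = f"{conn_api}/single/receivers/{receiver['id']}/staged"
    return requests.patch(url, json=patch, timeout=5).json()

if __name__ == "__main__":
    senders, receivers = list_senders_and_receivers()
    if senders and receivers:
        print(route(senders[0], receivers[0]))
```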


TVU NETWORKS TVU RPS One

TVU RPS One turns remote production on its head as the first solution to combine cloud-based and studio production with synchronized multicamera cellular transmission. Used for REMI production, it lets broadcasters leverage their existing studio infrastructure to cover on-location live sports, news and special-interest events at a fraction of the cost and resources required by traditional on-location production.

Alternatively, TVU RPS One can take the captured live content and fully produce it for distribution to viewing platforms through TVU’s integrated cloud-native, live video production ecosystem. With this hybrid cloud-and-studio approach, TVU RPS One delivers end-to-end professional cloud-based live coverage from capture to production to distribution.

TVU RPS One delivers fully frame-synchronized multicamera capture and transmission at sub-second latency. It aggregates up to 12 data connections, including 4G/LTE/5G cellular, Wi-Fi, Ethernet, microwave and satellite (including Starlink), for HD transmission.

TVU RPS One has a small backpack-style form factor, is lightweight at around 2 pounds and is battery-powered for go-anywhere operation. It also uses patented Inverse Statmux Plus technology for added HD live video transmission resiliency in remote locations or bandwidth-challenged environments such as crowded venues.
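TVU's Inverse Statmux Plus is patented and proprietary; purely as a generic illustration of the link-aggregation idea described above, the sketch below distributes packets across several connections in proportion to their currently measured throughput. All link names and numbers are hypothetical, and this is not TVU's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    measured_kbps: float   # throughput estimate fed back from the receive side
    queued: int = 0        # packets scheduled onto this link so far

def schedule_packets(links, packet_count):
    """Assign each packet to the link with the most spare capacity relative to load.

    A generic weighted scheduler that keeps each link's queue roughly proportional
    to its measured bandwidth -- illustrative only.
    """
    for _ in range(packet_count):
        best = min(links, key=lambda l: (l.queued + 1) / max(l.measured_kbps, 1.0))
        best.queued += 1
    return {l.name: l.queued for l in links}

if __name__ == "__main__":
    links = [
        Link("5G-A", 12_000), Link("5G-B", 7_500),
        Link("Starlink", 20_000), Link("Wi-Fi", 4_000),
    ]
    print(schedule_packets(links, 1000))
```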

TVU RPS One also provides layer 2 IP connectivity between encode and decode. It enables existing LAN extension to the field using layer 2 connectivity, with DHCP pass-through and support for multicast/discovery, and supports multiple IP peripherals including talkback systems, camera tally, robotic camera control, CCU and more.

When used with TVU’s cloud-based live video production ecosystem, TVU RPS One allows live video capture of synchronized multiple camera inputs directly into cloud-native TVU Producer for full professional video production and distribution. Producer uses a web interface and provides robust features including patented zero-latency frame-accurate switching, professional graphics/overlay capabilities, instant replay, PIP/dual/quad multiview and the ability to output simultaneously to social media platforms and CDNs as well as to SDI through a TVU Receiver. It also has a separate audio mix interface with independent input channel level control, pan, mute and solo monitor functions.


VARNISH SOFTWARE

Varnish Enterprise 6

Varnish Software continues to break records and deliver the most significant advancements in content delivery speed, power and TCO efficiency to date with Varnish Enterprise 6.

In collaboration with Intel and Supermicro, Varnish Software has achieved 1.3 Tbps in-memory throughput on a single edge server consuming approximately 1,120 watts, resulting in 1.17 Gbps per watt.

This breakthrough was accomplished using Varnish Enterprise 6 deployed on off-the-shelf, commercially available hardware, without expensive, power-hungry or specially fine-tuned equipment. Therefore, the throughput and energy efficiencies achieved can be applied to a broad range of servers and real-world applications, depending on the needs of any-sized organization using Varnish Enterprise 6.

Varnish Enterprise 6 is the industry’s most feature-rich web cache and HTTP accelerator, designed for unmatched performance, robustness and flexibility when delivering digital experiences at scale. The comprehensive offering includes CDN, caching and edge solutions optimized at every layer — from traffic routing and management to content storage and access — to provide faster experiences, support greater traffic and deliver more content with fewer resources than ever before.

It’s ideal for broadcasters, media companies and communication service providers (CSPs) who need to increase throughput and efficiency at scale, maximize the performance and ROI of existing infrastructure, and support new and emerging use cases for next-gen digital experiences.

Additionally, the solution includes new and expanded features, including robust traffic orchestration and load balancing. These capabilities and a redesigned UI ensure a flawless user experience by always distributing content from the optimal location and cache servers, regardless of scale or network complexity.

The company also recently introduced the new Massive Storage Engine 4.0 as part of Varnish Enterprise, which offers high-performance caching and persistence for 100 TB+ data sets to meet the needs of video distribution, CDNs and large-cache use cases.

Varnish Software’s unique architecture, features and capabilities include synchronous direct I/O, NUMA awareness and software-based, in-process TLS.

The latest benchmarks were achieved using Varnish Enterprise 6.0 deployed on both a Supermicro 2U CloudDC and a Supermicro dual-socket Hyper SuperServer powered by 4th Gen Intel Xeon Scalable processors, without the use of specialized, added-cost TLS offload cards. Additionally, benchmarks utilizing SSDs for VOD and OTT use cases relied on Varnish Software’s Massive Storage Engine.

Over 20% of the world’s top websites and leading video service providers use Varnish Software’s caching and CDN solutions. Customers who have deployed Varnish Software include Sky, Emirates, Hulu, Migros, Tesla, CBC and Future Publishing.

Customers who have used Varnish Software have experienced significant results, including:

• Reduced capital and operating expenses by 30%

• Reduced latency by 80%

• Increased object delivery speed by 10x–100x

• Increased cache hit ratios by as much as 50%–90%


VELA RESEARCH AdStrategy

AdStrategy enables ad-driven media companies to maximize their competitiveness for advertisers, win a greater share of the ad spend in their markets and increase their advertising revenue.

AdStrategy is a comprehensive, feature-rich, cloud-hosted service providing tools for broadcasters, content owners, syndicators, FAST, OTT, vMVPD and AVOD providers to monitor and analyze ads running on competitor channels in markets of interest, generate valuable insights into active advertisers and their advertising behavior, identify potential new advertisers, and stimulate ad spending by existing advertisers. It provides extensive data, analytics and insights to increase market share and ad revenue.

Because it is cloud-hosted, AdStrategy can seamlessly monitor, capture, analyze and provide highly current insights into all advertisers, ads and competitors within a local market, on a pan-market basis, across states, regions and nationwide.

AdStrategy monitors streams at one or multiple points (origination, in-distribution and edge locations of interest), then identifies, logs, indexes, records, analyzes and generates comprehensive insights on ads running in the chosen local, regional or national market. It captures and records all ads and integrates data about them, metadata about the advertisers, the channels they are running on, the frequency and times they are running, and the audience ratings on the client’s and competitors’ channels for each ad’s target demographic at those times.

AdStrategy consolidates and analyzes all of this content and metadata in the cloud, and generates alerts and reports with highly actionable insights and recommendations. They help clients identify new advertisers, as well as existing advertisers who are not advertising on their channels (zero-runs) or are doing so only minimally. It provides links to recorded proxies of the ads and metadata about where and when they ran. The integrated ratings by demographic, geolocation, aired times and programs enable clients to develop compelling sales messaging when meeting with the advertiser. If a client finds a Ducati dealership that has started advertising prolifically on a competitor but not on its channels, its salesperson can approach the dealer, point out times when the M25–35/$50–90K demographic can be better reached on its channel, and present a winning proposal. AdStrategy even enables the salesperson to show advertisers their competitors’ detailed advertising activity in the market.
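As a rough illustration of the "zero-run" analysis described above (generic logic with hypothetical data and field names, not Vela's schema), the sketch below compares ad logs from competitor channels against a client's own airings to surface advertisers the client is not carrying.

```python
from collections import Counter

# Hypothetical airing records: (advertiser, channel, iso_timestamp)
airings = [
    ("Ducati of Springfield", "COMPETITOR-2", "2023-05-01T19:42:00"),
    ("Ducati of Springfield", "COMPETITOR-2", "2023-05-02T20:15:00"),
    ("City Furniture", "CLIENT-1", "2023-05-01T18:05:00"),
    ("City Furniture", "COMPETITOR-2", "2023-05-01T21:30:00"),
]

def zero_run_advertisers(airings, client_channels):
    """Advertisers active in the market but with zero runs on the client's channels."""
    on_client = {adv for adv, ch, _ in airings if ch in client_channels}
    market_counts = Counter(adv for adv, ch, _ in airings if ch not in client_channels)
    return {adv: n for adv, n in market_counts.items() if adv not in on_client}

if __name__ == "__main__":
    # -> {'Ducati of Springfield': 2}: a prospect running on competitors only
    print(zero_run_advertisers(airings, client_channels={"CLIENT-1"}))
```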

AdStrategy integrates a suite of cloud-native technologies into an intuitive solution. Its architecture takes advantage of the ability to collect, record and analyze ads across any geographic footprint, agnostic to channel type (OTA, OTT, AVOD, FAST, vMVPD) or video format of the streams.

AdStrategy includes integrated cloud storage, including multiple options from Block to global EC. It offers clients an option to directly connect their major nodes into cloud with dedicated connections, thereby rendering their content, data and users “cloud-native.”

AdStrategy stores client content and metadata in Postgres datasets, and integrates Active Directory and security protocols so that clients can store and administer logs, content, reports and internal data securely. Multiple users in each client organization can securely collaborate on sales strategy and tactics.


VOICEINTERACTION Audimus.Media

Audimus.Media is a broadcast-grade, AI-driven solution for real-time, automatic closed captioning across multiple platforms, including live TV broadcasting (OTT/OTA), streaming and online meetings. Our state-of-the-art signal processing and speech recognition technology enable high-accuracy captioning with low latency, enhancing content accessibility and engagement. Additionally, Audimus.Media’s speech recognition capabilities extend to 40 languages, with simultaneous translation and speaker differentiation, making it a versatile and efficient solution for various production and distribution workflows.

Audimus.Media’s advanced language processing technology allows for daily refinement of the vocabulary, obtained from ongoing local programming available in TV stations’ Newsroom Computer Systems and external web sources, adding unusual names and new terms relevant to the daily news cycle. Audimus.Media’s language models are constantly adapting to local pronunciations, idioms and other speaking characteristics, ensuring high standards even in unanticipated situations such as “Breaking News” scenarios or unprepared speech.

The additional speech processing modules differentiate between speakers and identify spoken languages, increasing caption readability and flow. Audimus.Media also performs automatic text capitalization, denormalization, punctuation, customizable profanity filtering and context-aware caption formatting to improve readability.

Audimus.Media is designed to meet the specific needs of TV broadcasters, with constantly updated technical features such as remote operation control over GPIO, CTA-708 closed-caption embedding into HD-SDI signals, restreaming of HLS with synchronized WebVTT subtitles and generation of streams with captions embedded into video packets according to SCTE-128/ATSC-A53. It also offers MPEG-TS multiplexer contribution with ST 2038, DVB-Subtitling, DVB-Teletext or ARIB-B24 streams and can export encoded video clips with synchronized captions for VOD publishing.

Audimus.Media is adaptable to multiple scenarios, with a flexible combination of inputs and outputs. Its source can be an SDI capture card, an analog sound card, NDI or ST 2110-30 audio feeds, or a generic streaming feed. The captions can be delivered to closed-caption encoders or as ST 2110-40 ancillary data streams; they can also be published as a captioned live stream to any CDN supporting RTP/RTMP/SRT/RIST as input, or as a stream of captions in any of the most common formats.
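To show what one of those common caption formats looks like in practice, here is a generic sketch that turns timed recognition results into WebVTT cues. It is illustrative only; the input tuple layout is hypothetical, and Audimus.Media's actual output modules handle far more (line breaks, positioning, speaker labels, live cue updates).

```python
def fmt(seconds: float) -> str:
    """Format seconds as a WebVTT timestamp (HH:MM:SS.mmm)."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{int(h):02d}:{int(m):02d}:{s:06.3f}"

def to_webvtt(segments):
    """segments: iterable of (start_s, end_s, text) tuples from a recognizer."""
    lines = ["WEBVTT", ""]
    for start, end, text in segments:
        lines += [f"{fmt(start)} --> {fmt(end)}", text, ""]
    return "\n".join(lines)

if __name__ == "__main__":
    print(to_webvtt([
        (0.0, 2.4, "Good evening, and welcome to the newscast."),
        (2.4, 5.1, "Our top story tonight..."),
    ]))
```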

With an intuitive web dashboard, Audimus.Media offers a customized setup, control over every configured channel, event scheduling for the creation of repeating live captioning tasks, access to vocabulary customization and a subtitle editor that allows correction of captions before exporting or embedding them into video files.

The available on-premises deployment ensures the lowest caption latency, robustness against network disruptions and high data security and privacy. Our sustainable business model provides clients with lifetime access to the platform for all their captioning needs.

Closed captioning is not only a legal requirement but also a valuable asset that should be provided to viewers across all platforms where video content is consumed. Audimus.Media is the benchmark for automated closed captioning, thanks to its dependable speech recognition abilities, rapid adaptation to dynamic vocabularies and seamless integration into a wide range of production workflows.


VOICEINTERACTION MMS

VoiceInteraction’s Media Monitoring System (MMS) is a comprehensive platform that combines broadcast QoS modules for regulatory compliance with AI-driven analytics and metadata. Made possible by our proprietary ASR technology and algorithms, this platform aims to be the new standard in broadcast compliance. MMS allows users to monitor and control live content, segment news by topic, and generate automated reports. This proactive, AI-driven platform assists multiple departments simultaneously, making it the ideal solution for any television station or network looking to meet current needs for streamlined regulatory adherence and content production workflows.

MMS is a 24/7 comprehensive compliance tool that captures and stores media feeds from various sources, prioritizing access to the most recent files with an archival system that gradually reduces the quality of stored content over time. Our proprietary technology enables a customizable alert center that displays real-time capture feed status, TS monitoring, loudness/LKFS logging, video and audio QoS, and closed-caption monitoring. The alert center provides configurable real-time notifications through in-app alerts, email or instant messaging for specific users or teams to take prompt action with confidence.
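Purely as an illustration of the kind of threshold-based check an alert center like this performs (generic logic with hypothetical thresholds and channel names, not VoiceInteraction's implementation), the sketch below flags loudness measurements that drift outside a target window.

```python
TARGET_LKFS = -24.0     # common US target per ATSC A/85; a hypothetical configured value here
TOLERANCE_DB = 2.0      # hypothetical allowed deviation

def check_loudness(samples):
    """samples: list of (channel, integrated_lkfs) measurements.

    Returns alert messages for any measurement outside the target window.
    """
    alerts = []
    for channel, lkfs in samples:
        deviation = lkfs - TARGET_LKFS
        if abs(deviation) > TOLERANCE_DB:
            alerts.append(f"{channel}: {lkfs:.1f} LKFS ({deviation:+.1f} dB from target)")
    return alerts

if __name__ == "__main__":
    for alert in check_loudness([("WXXX-1", -23.4), ("WXXX-2", -29.8)]):
        print("ALERT:", alert)   # a real system would route this to email/IM instead
```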

After capturing, monitoring and storing incoming signals, the AI-driven platform then analyzes the broadcasts, mapping out a time-stamped, full-text transcription of the newscast. This creates an interconnected network of metadata that includes topic and keyword detection, speaker ID and summarization. Combined with OCR, this content fingerprinting technology improves search filters and creates an intuitive timeline that allows users to locate, select and publish clips about a certain topic. With the generated metadata, Nielsen ratings and other relevant data, the platform also monitors ad delivery and performance, keeps track of music royalties and observes new programming trends with a content-viewership correlation. The platform then creates automated reports with detailed information for any channel or market location.

A multiviewer allows for real-time monitoring of live and VOD broadcasts, all in one dashboard. The content generated is easily integrated into downstream workflows by exporting media in a wide range of transcoding formats for any audio or video stream. The entire process is streamlined through a centralized web-based interface, making security simpler and reducing computational demands for real-time monitoring. Additionally, the system provides a RESTful API for on-top development and customizable integrations. With version 7.0, useful workflows are created intuitively through the new user interface. While compliance software is typically used only by broadcast engineers, MMS has developed a UX approach that extends its usability to a broader range of users across departments. This user-friendly approach breaks down barriers to accessing critical compliance information and creating diverse content.

Media Monitoring System is a proactive platform that uses proprietary Speech Processing Technologies and AI algorithms to go beyond traditional compliance solutions, allowing broadcasters to streamline their processes, save time and improve results. By combining enriched media content with AI-driven analytics and metadata, broadcasters can access valuable insights into their network and competitors. This results in maximized engagement, viewer loyalty and revenue, while also reducing overhead costs, allowing broadcasters to focus on creating and delivering excellent content.


Layers Software Suite, Stream Module

Wheatstone presented a cloud version of Layers Stream running on AWS at the 2023 NAB Show as the first practical use of cloud data centers for broadcast applications.

Layers Stream software includes stream provisioning, audio processing and metadata support. It is part of the Wheatstone Layers Software Suite, which also has software modules for running instances of FM/HD processing and mixing in cloud data centers such as AWS or on-premise servers.

For the NAB Show, Wheatstone demonstrated streaming instances running on AWS that can be brought up and torn down rapidly and controlled through a browser-based user interface. Layers Stream includes audio processing designed specifically for streaming applications, plus Lua transformation filters to convert metadata input from any automation system into any required output format, including Triton Digital, for transmission to a CDN server.
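Layers Stream implements its transformation filters in Lua; as a language-agnostic illustration of what such a filter does, the Python sketch below maps a hypothetical automation-system "now playing" message into a simple key-value form that a streaming metadata endpoint might accept. The field names and output format are invented for illustration and are not Wheatstone's or Triton Digital's actual schema.

```python
import json

def transform(now_playing_json: str) -> str:
    """Map an automation system's now-playing event to an outbound metadata update."""
    event = json.loads(now_playing_json)
    outbound = {
        "artist": event.get("performer", ""),
        "title": event.get("song_title", ""),
        "duration": int(event.get("length_ms", 0)) // 1000,
    }
    # Real filters also handle ad breaks, character encoding quirks and per-CDN formats.
    return "&".join(f"{k}={v}" for k, v in outbound.items())

if __name__ == "__main__":
    sample = '{"performer": "Example Artist", "song_title": "Example Song", "length_ms": 215000}'
    print(transform(sample))   # artist=Example Artist&title=Example Song&duration=215
```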

Wheatstone’s Layers Software Suite also has a Layers Mix module for television. Layers Mix has a full-featured mix engine and Glass virtual mixers for the laptop, tablet or touchscreen, with routing, logic, automixing and full native IP audio integration with major production automation systems.

Layers modules can be used for remote or REMI applications between studios and cloud data centers or for extending studio failover redundancy across multiple cloud data centers.

Wheatstone demonstrated Layers Stream and other cloud applications in their booth at the NAB Show.


WITBE Witbox+

Witbe’s Witbox+ is a next-generation solution for testing and monitoring video services on multiple devices, including OTT boxes, smart TVs, mobile platforms, gaming consoles and more. Designed specifically with video operations teams in mind, the Witbox+ utilizes Witbe’s revolutionary technology to create an ultra-scalable, powerful device in a small form factor that’s easy to set up. Users can simply plug any physical device into the unit to start automatically testing and monitoring any digital service running on it. The Witbox+ supports 4K video and 5.1 surround sound, as well as Bluetooth and RF4CE control, for up to four simultaneously connected devices.

With Witbe’s Remote Eye Controller (REC) software application, network operation center teams and manual testers can remotely access and control every device plugged into the Witbox+ from anywhere in the world, removing the need for engineers to travel thousands of miles to test specific devices in the field. REC aggregates all connected devices on the same screen in a mosaic, and the number of devices supported is unlimited. If a company has 100 STBs attached to multiple Witbox+ units in the field, all 100 of them can be accessed and controlled simultaneously.

Recently, Witbe introduced a brand-new version of REC that is directly available on the web. It can run on any modern web browser, allowing users to control their Witbox+-connected devices on laptops, smartphones, tablets and more.

Unique features and benefits of the Witbox+ include:

QA test automation: Helping QA teams cover the performance, endurance and stress testing that is difficult for human team members to accomplish manually, the Witbox+ enables automatic, around-the-clock testing.

Video service monitoring: The Witbox+ goes beyond standard testing with its proactive monitoring capabilities. Even when a device isn’t being used, the Witbox+ can still monitor its video streaming quality using a proprietary algorithm that relies on the same metrics as the human eye. Whenever the quality dips, the Witbox+ sends users an alert, enabling QA teams to proactively identify and fix the issue.
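Witbe's perceptual scoring algorithm is proprietary; the sketch below is only a generic illustration of the alert-on-quality-dip pattern described above, using an invented 0–100 score and threshold rather than Witbe's metrics.

```python
from collections import deque

class QualityWatcher:
    """Raise an alert when a rolling average of quality scores dips below a floor."""

    def __init__(self, floor=70.0, window=10):
        self.floor = floor
        self.scores = deque(maxlen=window)

    def update(self, score: float):
        self.scores.append(score)
        avg = sum(self.scores) / len(self.scores)
        if avg < self.floor:
            return f"quality dip: rolling average {avg:.1f} below {self.floor}"
        return None

if __name__ == "__main__":
    watcher = QualityWatcher()
    for score in [92, 90, 88, 55, 40, 42, 38, 35, 30, 28, 25]:
        alert = watcher.update(score)
        if alert:
            print(alert)   # a real system would notify the QA team here
```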

Short-form video testing: Automatically evaluating key performance indicators for short-form videos — including availability, buffering time and quality — the Witbox+ allows video service providers, social networks and mobile network operators to understand the QoE their customers truly receive.

QoE benchmarking: By comparing the quality of a video streaming service against local and global competitors through Smartgate Benchmarking, the Witbox+ helps operators understand the QoE their users expect.

Compact scalability: With the ability to test and monitor four different 4K devices simultaneously, scalability sets the Witbox+ apart. The major technological breakthrough wasn’t just making this happen; it was packing all these capabilities into a sleek 20cm by 20cm package. This makes the Witbox+ not only the most powerful test automation device on the market, but also the most compact. With the compact size, users can now test up to 64 devices in a single rack.

Environmental sustainability: The Witbox+ consumes eight times less power than Witbe’s previous 4K-compatible products.


Yuvod’s Platform-as-a-Service Streaming Solution

Yuvod’s Platform-as-a-Service (PaaS) streaming solution is a comprehensive, customizable, affordable and 100% cloud-based white-label offering that provides all the advanced technology, tools and services needed to effortlessly deliver and manage flawless, high-quality streaming experiences.

The end-to-end platform delivers a turnkey OTT solution that is quick and easy to deploy, rapidly scaled and entirely customizable to meet the individual needs and budget of any video service provider, streamer, broadcaster or communication service provider (CSP).

Yuvod strives to deliver the best value with transparent pricing, where costs mirror subscriber growth, offering a complete streaming suite at a fraction of the price of traditional solutions. Customers have saved upwards of 50%.

Additionally, the solution requires no new hardware to deploy or manage, eliminating significant costs and challenges when operating a streaming service and app. Yuvod makes streaming as simple as plug-and-play.

“We are democratizing streaming,” said Ricardo Tarraga, cofounder and CEO. “Until now, advanced streaming technology has only been accessible to the biggest companies and required enormous budgets. Our PaaS is a game-changer, with a technology stack that makes self-service easy, affordable and immediate.”

The holistic platform delivers live, linear and on-demand streaming with advanced functionality across multiple devices and applications. It can be deployed in any location and centralizes all operational and technical processes into the streamer’s hands. This includes Yuvod’s proprietary video platform, media server, middleware, STB integrations, customizable dashboards, choice of content delivery networks (CDN), IP networking, multi-DRM encryption, CRM and billing systems, customizable app design, 24/7 support and more.

The platform also features a plethora of built-in capabilities that enhance sports broadcasts, hospitality services and more.

Sports Broadcasting: It’s never been easier to broadcast live sports and deliver an unparalleled viewing experience. Yuvod provides advanced tools and features to help monetize sports content — such as customizable subscription models and advanced analytics — and keep fans highly engaged with integrations for real-time stats from live or past events. In addition, Yuvod offers an unrestricted choice of CDNs to deliver fast and responsive live streams across every continent.

Hospitality Services: Yuvod offers a refreshingly affordable alternative to the larger companies that dominate the market. Hospitality centers no longer need a headend and can significantly improve the quality of a guest’s experience with easy-to-access, personalized entertainment in every room. Yuvod’s platform integrates seamlessly with existing PMS systems to provide pertinent guest information and services at the click of a button — such as ordering room service, finding a local attraction, checking out and more. Content can be ingested from any source, along with metadata from external applications (such as HBO, Netflix or Hulu) and EPG systems.

With robust analytics and customizable dashboards, Yuvod provides a complete 360-degree view of the entire video business, including viewer behavior and engagement across all devices, screens and platforms. With end-to-end visibility, organizations can pinpoint actionable trends and opportunities, and make real-time programming decisions that drive viewership, enhance experiences and increase revenue.

Yuvod’s clients and partners include Vodafone, La Liga Tech, Grupo Hotusa, Rakuten TV, DAZN and more.


Zixi-as-a-Service (ZaaS)

Zixi-as-a-Service (ZaaS) is a complete solution for enabling live video distribution from any location, in any format, delivered over any protocol, to any destination. ZaaS provides everything needed to receive contribution feeds and process, transcode, package and deliver them to any target location. It orchestrates cloud ingress into geographically distributed cloud operating environments. And for customers that require live transcoding or other processing support, ZaaS provisions the necessary cloud infrastructure and automates distribution of low-latency broadcast-quality live video to any number of targets and end points. ZaaS customers have full visibility across the operating environment with Zixi ZEN Master providing real-time status views and access to all managed channel and infrastructure resources. In addition to the purpose-built live video operational model that ZaaS enables, customers benefit from significant cloud egress fee mitigation and cost efficiencies.

Delivering video through cloud infrastructure offers significant advantages in today’s market. Zixi customers are facing rapidly changing business and operating models and require the agility and scale that cloud delivers. The first wave of cloud adoption saw video distributors migrate large swaths of their post-processing and delivery infrastructure to public cloud partners. In 2022, video publishers moved more contribution and remote production workflows to the cloud and have been implementing multi-cloud strategies to mitigate risk and optimize costs.

ZaaS is a key part of our customers’ multi-cloud strategies. Most customers are partnered with a public cloud provider like AWS, Azure or GCP, but protecting themselves from outages associated with a specific provider is increasingly becoming a high priority. ZaaS provides a complete video-optimized cloud operating environment for live video distribution, securing a diverse signal path for uninterrupted streaming and providing industry-best egress rates that dramatically reduce cost. At the heart of ZaaS is Zixi ZEN Master, which seamlessly coordinates bonded live channel distribution in both the customer’s public cloud account and the ZaaS account. This is critical to enabling continuous, uninterrupted hitless playback, even if there are significant outages within either operating environment.

ZaaS is built on top of the Zixi Software-Defined Video Platform (SDVP). Key benefits of ZaaS include:

• Centralized Management: ZEN Master provides a centralized view of the entire Zixi Enabled contribution and distribution network.

• Security: Best-in-class security, enhanced with standards-based DTLS and AES protection.

• Reliability: Experience ~100% uptime with Zixi’s patent-pending hitless failover, which provides redundant transmission options for high reliability and disaster recovery (a generic illustration of the stream-merging idea follows this list).

• Ultra-Low Latency: Network-adaptive forward error correction and recovery deliver proven millisecond-level live linear latency.

• High Availability: Leverage the SDVP on Zixi-as-a-Service to bond and load balance diverse internet or fiber circuits for increased high availability between facilities.

• Interoperability: Zixi is compatible with the largest ecosystem of encoders and decoders and live video protocols.
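Zixi's hitless failover is patent-pending and considerably more sophisticated, but the general idea of merging redundant streams can be sketched as below: two copies of the same stream arrive over different paths, and the receiver forwards each sequence number exactly once, so a loss on one path is covered by the other. This is purely illustrative, with invented packet structures, and is in spirit closer to a generic SMPTE ST 2022-7-style merge than to Zixi's implementation.

```python
def merge_redundant(path_a, path_b):
    """Merge two redundant packet streams, forwarding each sequence number once.

    path_a / path_b: iterables of (sequence_number, payload) as received on each path.
    Jitter buffering, reordering windows and timing are omitted for brevity.
    """
    seen = set()
    output = []
    for seq, payload in sorted(list(path_a) + list(path_b)):
        if seq not in seen:
            seen.add(seq)
            output.append((seq, payload))
    return output

if __name__ == "__main__":
    a = [(1, "p1"), (2, "p2"), (4, "p4")]            # packet 3 lost on path A
    b = [(1, "p1"), (3, "p3"), (4, "p4")]            # packet 2 lost on path B
    print([seq for seq, _ in merge_redundant(a, b)])  # -> [1, 2, 3, 4]
```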


Zixi Software-Defined Video Platform

Zixi is the architect of the Software-Defined Video Platform (SDVP), the industry’s most complete live IP video workflow solution, providing unparalleled live video delivery performance running over the Zixi Enabled Network, which is the industry’s largest ecosystem and consists of more than 1,000 media companies and 400 technology partners globally.

The SDVP enables media organizations to economically and easily source, manage, localize and distribute live events and 24/7 live linear channels in broadcast QoS, securely and at scale, using any form of IP network or hybrid environment. Superior video distribution over IP is achieved via four components:

1. Protocols — Zixi’s congestion and network-aware protocol adjusts to varying network conditions and employs forward error-correction techniques for error-free video transport over IP. As a universal gateway, standards-based protocols such as RIST and open source SRT are supported, alongside common industry protocols such as RTP, RTMP, HLS and DASH. Zixi supports 18 different protocols and containers — the only platform designed to do so.

2. Video Solutions Stack — Provides essential tools and core media processing functions that enable broadcasters to transport live video over any IP network, correcting for packet loss and jitter. This software manages all supported protocols, transcoding and format conversion, collects transport analytics, monitors content quality and layers intelligence on top of the protocols such as bonding and patented hitless failover across any configuration and any IP infrastructure, allowing users to achieve five-nines reliability.

3. ZEN Master — The SDVP’s control plane enabling users to intelligently provision, deploy, manage and monitor thousands of content channels across the Zixi Enabled Network, including 400+ Zixi-enabled partner solutions such as encoders, cloud media services, editing systems, and ad insertion and video management systems. With such an extensive network of partner-enabled systems, Zixi ZEN Master presents an end-to-end view across the complete live video supply chain.

4. Intelligent Data Platform — A data-driven advanced analytics system that collects billions of telemetry points a day to clearly present actionable insights and real-time alerts. The IDP leverages cloud AI and purpose-built ML models to identify anomalous behavior, rate overall delivery health and predict impending issues. The IDP includes a data bus that aggregates over nine billion data points daily from hundreds of thousands of inputs within the Zixi Enabled Network, including more than 400 partner solutions and proprietary data sources such as Zixi Broadcaster. This telemetry data is then fed into five continuously updated machine-learning models where events are correlated and patterns discovered.

With clean, modern dashboards and market-defining real-time analytics, the IDP enables users to focus on what’s important, with intelligent alerts and health scores generated by Zixi’s AI/ML models helping sift through and aggregate data trends so that operations teams always have the insights they need without data overload.

At a time when remote working has become normal and the ways programs reach viewers continue to proliferate, Zixi’s SDVP provides the agility and reliability to deliver broadcast-quality video securely from any source to any destination over flexible IP video routes.


ZOOstudio

ZOOstudio is the industry-first globalization management platform, designed to solve the large-scale globalization challenges of a multiterritory OTT service. Certified by the Amazon Partner Network, the cloud-based platform is a vendor-agnostic tool that allows localized movies and shows to be managed across a host of language service providers, all in one place.

ZOO Digital identified that the disparate components of the traditional localization process could be consolidated in one centralized, cloud-based system, giving control back to content owners and saving them time and resources that they often do not have to spare, especially in this time of consolidation and change.

Multiple orders, vendors, languages and time zones can all be tracked seamlessly in one system. The platform delivers automated, real-time completion statuses, which connect via API, metadata cache and rate-limited solutions to CMS and production platforms to give automated updates instantly. These platforms differ by client, vendor, service line and beyond, but ZOOstudio is designed to be flexible and to bend to client needs. ZOO can implement ZOOstudio no matter what CMS a client is using and no matter what format its data is in, with no heavy changes to its established, underlying systems and processes.

Using an in-built digital signature system, contracts are sent simultaneously, with automated reminders to reduce project delays and administrative steps. Content owners also benefit from a streamlined review tool allowing subtitles and recorded audio to be easily assessed, quality controlled and updated, with all assets securely stored in the system. A connected financial module allows content owners to estimate costs, arrange necessary payments and give automated status updates, cutting down on manual errors and delays.

From ZOOstudio’s inception, ZOO Digital has created a platform that tackles mammoth entertainment industry challenges with real-world outcomes that set it apart from other products:

Trusted partner of industry giants: Since being launched at the NAB Show in 2019, ZOOstudio has been adopted by two major Hollywood studios, and has evolved into the proven solution for delivering the volume and scale required for international, multilanguage streaming platforms. Rather than acting as a standalone, isolated technology, the platform powers localization services, embedding itself seamlessly within the globalization workflow and becoming the heart of all projects.

Visibility and control: By offering a visible pipeline of which localized content is being produced and when it will be ready to go live across all vendors, languages, formats and regions, ZOOstudio gives content owners the control to plan and change direction as required, and ensures audiences get the latest shows and movies like clockwork.

Entertainment industry empowerment: ZOOstudio has managed over 17,250 projects across 20+ vendors for some of the world’s best content creators, enriching the lives of audiences across the world by facilitating their access to film and television in their own languages. By forming an integral part of the localization workflow, the technology is bringing the world’s most beloved stories to life for audiences who may not otherwise have been able to connect with them.


7MOUNTAINS

Dina Mobile

Software company 7Mountains released a new version of the newsroom app Dina Mobile.

Dina Mobile fills the gap for journalists and storytellers on the move. It allows journalists to create and publish stories in the field, with a live link into the newsroom to track schedules and control when a story goes live. Dina Mobile includes features such as:

• Upload media content to news stories.

• Create stories, edit, take photos and videos and publish from anywhere.

• Engage and communicate with the newsroom with a range of new chat features.

• Get push notifications for story assignments.

• Monitor news rundowns to see what is on air, a countdown and more.

Users can upload media content to news stories, write, edit, add photos and videos, and publish stories from the palm of their hands.

With Dina Mobile, journalists and storytellers can swap between the Dina newsroom web interface and the app for story creation, planning and publishing to linear/live shows, LinkedIn, Twitter, Facebook, web CMS systems and other destinations, and now also for collaboration and communication using chat. With the newest update to Dina Mobile, unveiled for the NAB Show, the communication experience reaches a new level. App users can chat one-to-one, in groups, teams and departments. Users can also engage in chats connected to a specific news story or a newsroom rundown.

Dina Mobile changes how journalists work, with innovative features designed to revolutionize how teams communicate and collaborate, making it simpler and more efficient to keep everyone connected and informed.


Hardware Accelerated SRT Video Transport for X Platform

The continuous growth in penetration and increased bandwidth capacity of unmanaged Internet networks has unlocked the possibility of transporting broadcast-quality video over the Internet. The Secure Reliable Transport (SRT) protocol is presently the media industry’s favored open standard for Internet video transport. The real difference SRT has enabled is the exponential growth in potential use cases for delivering video over the public internet; we’re talking to customers right now who are looking at using SRT to transport large groups of live channels. However, until now, the available SRT solutions have had limitations in terms of capacity, density and flexibility, hindering the take-up of SRT in professional video environments.

Hardware Accelerated SRT

SRT solutions have historically been provided through servers. This approach to SRT transport worked when moving one channel or covering a live event with a small number of cameras. However, due to the cost-savings of the public internet and the simplicity of operation of SRT, the potential use cases for the SRT protocol widened. It became apparent to Appear that media and entertainment companies needed SRT support in robust, high-density, flexible carrier-grade solutions that interoperate with existing broadcast workflows.

To facilitate these advanced use cases, Appear developed hardware-accelerated SRT solutions for the X Platform and announced their launch at the 2023 NAB Show.

The Appear X Platform SRT Difference

Hardware-accelerated SRT enables cost-effective Internet connections to deliver serious opex savings. For example, in contribution, today’s existing SRT gateways can only support HEVC with a limited number of cameras before having to add more servers, while one SRT-enabled Appear X Platform unit can support up to 22 UHD camera feeds.

In distribution, Appear’s SRT solution provides the lowest cost for channel transmission over the public internet, enabling operators to confidently replace expensive satellite links and dedicated fiber circuits. Additionally, SRT empowers operators to reduce the transport budget of moving studio functionality, such as media asset management, to the cloud. In a single 1RU chassis, Appear’s X Platform as an SRT gateway can handle more than 192 SRT connections and 18 Gbps of throughput, saving space and power consumption at a much lower cost base. SRT also enables organizations to change the economics of migration to the cloud in their favor.
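For readers who want to experiment with SRT transport itself (independent of Appear's hardware acceleration), an SRT-capable FFmpeg build can push a transport stream to an SRT listener. The sketch below wraps that in Python; the input file, host, port and latency values are placeholders, it assumes ffmpeg was compiled with libsrt, and the latency unit should be checked against the documentation of your build.

```python
import subprocess

def send_over_srt(input_path: str, host: str, port: int, latency_ms: int = 120):
    """Push an MPEG-TS over SRT in caller mode using an SRT-enabled ffmpeg build."""
    # ffmpeg's srt protocol options commonly express latency in microseconds.
    srt_url = f"srt://{host}:{port}?mode=caller&latency={latency_ms * 1000}"
    cmd = [
        "ffmpeg", "-re", "-i", input_path,   # -re paces a file input in real time
        "-c", "copy",                        # pass through the existing encode
        "-f", "mpegts", srt_url,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Placeholder endpoints for illustration only.
    send_over_srt("input.ts", "receiver.example.net", 9000)
```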

“Media and entertainment organizations face a conundrum: consumers expect more and more choice, while transporting long-tail video presently has a similar cost base to that of high-value content. Our implementation of SRT solves this,” said Thomas Bostrøm Jørgensen, CEO, Appear.

Setting the Industry Standard

We believe that the cost-advantages of SRT and the public internet will be increasingly hard to ignore. Thanks to the significant benefits of optimization and hardware acceleration, SRT will continue to evolve to enable the public internet to be used in ways that simply would never have been thought possible by broadcast professionals just 10 years ago.


AUDINATE Dante Connect

With a growing need to create more content faster without sacrificing quality, many broadcasters are turning to cloud-based production to meet demand. These platforms allow news, sports and entertainment broadcasters to provide real-time audio and video experiences for less money.

Dante Connect is a new cloud-based software solution from Audinate that helps broadcasters centralize audio production in the cloud. It is already proving to help broadcasters overcome barriers to the cloud by utilizing the many Dante-enabled devices installed in stadiums, entertainment venues and broadcast and production studios around the world.

A typical remote broadcast involves sending one, or sometimes several, outside broadcasting (OB) trucks with a large staff to set up production. These local production teams connect with the AV equipment at the site, process it and send it back to the station. Because of the costs and complexity of these productions, many smaller or more remote events never get covered. Audinate is helping broadcasters to overcome these challenges with Dante Connect.

Centralizing audio production allows broadcasters to save money by reducing the need for additional resources at the site or the location where content is being produced. With Dante Connect, locally installed audio systems at a remote location anywhere in the world can send multichannel Dante audio directly to cloud-based virtual machines (VMs) running editing and production suites. Skilled audio producers can then edit and distribute audio from anywhere, to anywhere.

For example, if a football stadium in New York has Dante-enabled audio devices (such as microphones, mixers or cameras) installed, production teams can use Dante Connect to subscribe channels from those devices to computers or VMs running audio software in the cloud for teams based anywhere — whether in San Francisco, London or Sydney. Editors and producers at those remote locations can operate their software exactly as if it were local to create mixes, edits and overdubs that may be distributed directly to broadcast stations.

Dante Connect lets broadcasters skip the OB truck and allows local Dante-enabled AV equipment to be connected directly to the station’s infrastructure, producers and tools. Users on-site simply connect their Dante network to a Dante Connect gateway with a robust internet connection, and the rest is handled by the station. There, producers connect to the cloud-based computers receiving the audio and do their work just as if it were on a local machine. This means lower costs and more opportunities to cover more events for profit.

Dante is the industry-standard audio technology, and now using Dante Connect, broadcasters can put more devices to work for more productions, on or offsite. Dante Connect will be sold by resellers and configured by integrators.


BACKLIGHT iconik

Backlight’s iconik is a revolutionary cloud-native, SaaS media asset management (MAM) solution designed to meet the dynamic needs of today’s creative teams and developers. Unlike traditional MAM systems, iconik is affordable for almost any budget, user-friendly and tailored to serve businesses of all sizes, breaking down barriers to entry. Iconik delivers unparalleled value with a platform that replaces multiple solutions with features for media management, storage management, collaboration, post production and automation.

In a time where remote collaboration and instant access to media assets are more important than ever, iconik stands out as a powerful, flexible and scalable solution. By connecting both on-premise and cloud storage, iconik users securely gather and organize assets in a centralized library, making them available for users across any device, from any location. This eliminates the need for time-consuming data migration and allows businesses to leverage their existing storage investments.

Iconik’s intuitive interface and hassle-free video collaboration tools empower creative teams to work together effortlessly with review and approval workflows, time-based comments and drawings on media. Collaborating in the same place media is managed streamlines the entire content creation process.

One of the key differentiators of iconik is its ability to adapt to the unique needs of each business. Its scalable nature makes it the perfect choice for small teams looking for an affordable and efficient MAM solution, as well as big brands seeking a powerful and flexible platform that can handle a high volume of media assets. This adaptability ensures that iconik grows alongside your business, effortlessly accommodating changing requirements and increasing demands.

In a fast-paced digital landscape, iconik’s automated metadata extraction and AI-powered smart tagging and transcription capabilities make finding and sharing assets a task that takes a few minutes instead of hours. These advanced features save users time from day one, allowing them to focus less on search and admin and more on creating content.

The iconik API was the first thing to be developed. Today it enables developers to add their own integrations and build entire media systems around iconik to boost productivity and drive innovation. The iconik integrations with popular creative software like the Adobe Creative Suite and Final Cut Pro help editors create faster, using the iconik panel to find assets and edit with smaller proxy files, eliminating the need to download large files.
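As a small taste of what building on the API can look like, the sketch below performs a keyword search against iconik's REST API. It is a hedged illustration based on iconik's published API conventions (App-ID/Auth-Token headers and a search endpoint); the exact path, version and payload fields shown here should be verified against the current API documentation before use.

```python
import requests

ICONIK_URL = "https://app.iconik.io"   # default SaaS endpoint
APP_ID = "YOUR-APP-ID"                 # placeholder credentials created in iconik settings
AUTH_TOKEN = "YOUR-AUTH-TOKEN"         # placeholder

def search_assets(query: str, per_page: int = 10):
    """Search assets by keyword; endpoint and fields per the public docs (verify version)."""
    headers = {"App-ID": APP_ID, "Auth-Token": AUTH_TOKEN, "Content-Type": "application/json"}
    body = {"doc_types": ["assets"], "query": query, "per_page": per_page}
    resp = requests.post(f"{ICONIK_URL}/API/search/v1/search/",
                         json=body, headers=headers, timeout=10)
    resp.raise_for_status()
    return [obj.get("title") for obj in resp.json().get("objects", [])]

if __name__ == "__main__":
    print(search_assets("interview b-roll"))
```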

Iconik is a game-changing media asset management solution that offers unmatched accessibility, scalability and flexibility. By breaking free from the constraints of traditional MAM systems, iconik paves the way for businesses of all sizes to unlock their full potential and maximize the value of their media assets. With its innovative approach and expansive feature set, iconik is pioneering a new way to manage media.


BB&S LIGHTING Reflect 4-Bank System

The new Reflect 4-Bank System was created in response to LDs’ lighting sets and newsrooms that have requested the color rendering and longevity benefits of remote phosphor technology in ultra-efficient, lightweight fixtures that fit grids and walls. BB&S remote phosphor LED technology has been proven to provide consistent output for over 10 years.

• 1-foot 4-Bank: 3200°K and 5600°K remote phosphor, featuring a 3-pin XLR. Drawing just 40 watts, it produces over 240 lux at 10 feet. Designed for optimum use in the 8–12 foot range. Size: 12 inches x 8 inches x 3 inches; Weight: 4 pounds.

• 2-foot 4-Bank: 3200°K and 5600°K remote phosphor, plus bi-color versions (2700°K–6000°K), with a 4-pin XLR. With an 80-watt draw, they produce over 480 lux at 10 feet. Designed for use in the 10–16 foot range. Size: 24 inches x 8 inches x 3 inches; Weight: 8 pounds.

With convenient rectangular form factors and a flat profile, they fit right into a multi-lamp reflector bank or grid. They offer extreme efficiency and high output (60 lux at 10 feet) coupled with consistently high color rendering of 95 TLCI, and the stable, color-shift-free output that remote phosphor is known for.

These lights meet all the critical specs for newsrooms requiring extreme accuracy combined with control that’s fully dimmable without flicker or color shift. Reflects offer low power draw (11 watts/foot), high light output (1100 lumens/foot), 90-degree light dispersion, heatless and fan-less operation. Control is via the new optional 4-Channel Controller with 8/16-bit DMX 512/RDM with internal 48V power supply.

Reflect 2- and 4-Bank housings are designed using the latest engineering techniques to emphasize efficiency in power and output. Developed as a combined semihard and soft light, their superior reflectors utilize a semihard reflective surface to project a 90-degree directional light pattern. Optional diffusion slides into a side slot, resulting in a soft surface with 140-degree dispersion.

Additionally, BB&S sources the highest-grade new blue LEDs, which produce at least 10% more output than other types. The new fixtures emit 1,100 lumens a foot versus 1,000 lumens a foot.

Often tight on space, today’s studios need lighting that fits and fulfills multiple functions. With their flat profiles, Reflect Banks come with full-length adjustable yokes plus TVMPs, so they work on overhead grids or walls when height or space is at a premium. Their light weight means less stress on structures.

In the competitive news market, looks count. Effective beauty lighting is essential, with today’s ultra-high-resolution cameras picking up every skin imperfection. With consistently high TLCI and superior skin rendering characteristics, BB&S remote phosphor is unsurpassed for modeling faces and excellent for illuminating backgrounds. Optional new louvers help produce directional light for added drama or impact.

Reflects fill the need for news and corporate studios, which require reliable, efficient, cost-effective, easy-to-use lighting that fits their sets and offers consistent beauty light on talent and attractive set illumination over the long run.


BITMOVIN Smart Chunking

Bitmovin has optimized its next-generation VOD Encoder with Smart Chunking, an evolution of the split-and-stitch algorithm, which splits the encoding job into multiple parallel encodings or segments, accelerating the entire process.

One of the biggest priorities for media and entertainment companies is delivering streams in the best possible quality, and encoding is fundamental to achieving this. The split-and-stitch algorithm was a significant advancement in video encoding because it made it possible to horizontally scale the compute-intensive workload. However, split-and-stitch has limitations, including potential quality drops when using fixed GOPs and segments, which degrade the overall visual quality for viewers.

Bitmovin saw an opportunity to solve these challenges and further improve the visual quality of video content with Smart Chunking. Smart Chunking optimizes chunk lengths and bitrate distribution, delivering improved visual quality throughout the whole asset that’s visible to audiences, and achieving this at an even faster pace than before.

To achieve this, Bitmovin decoupled the chunk duration, which allows for variable chunk size depending on the codec type and the encoding complexity, providing the user with immediate and visible improvements.

Currently, video quality is measured with VMAF, an objective quality metric created by Netflix and one of the most widely used metrics in the video streaming industry to benchmark video quality. Bitmovin used VMAF to benchmark the image quality of a highly complex video asset processed with the standard split-and-stitch approach compared to Smart Chunking. The results showed the lowest-quality 1% of frames increased by 6 VMAF points (roughly the just-noticeable difference for the human eye). There was also an impressive 22-point VMAF increase in the lowest-quality 0.1% of frames, and the worst frame saw an increase of an astonishing 60 VMAF points.
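The percentile framing used in those results is straightforward to reproduce from per-frame VMAF scores. The sketch below is generic, with made-up scores rather than Bitmovin's data or tooling; it computes the mean of the worst 1% and 0.1% of frames plus the single worst frame so two encodes can be compared the same way.

```python
def low_percentile_mean(vmaf_scores, fraction):
    """Mean VMAF of the worst `fraction` of frames (e.g. 0.01 for the lowest 1%)."""
    ordered = sorted(vmaf_scores)
    count = max(1, int(len(ordered) * fraction))
    worst = ordered[:count]
    return sum(worst) / len(worst)

def compare(scores_a, scores_b, label_a="split-and-stitch", label_b="smart chunking"):
    for fraction in (0.01, 0.001):
        a = low_percentile_mean(scores_a, fraction)
        b = low_percentile_mean(scores_b, fraction)
        print(f"worst {fraction:.1%} frames: {label_a}={a:.1f}  {label_b}={b:.1f}  delta={b - a:+.1f}")
    print(f"worst frame: {min(scores_a):.1f} vs {min(scores_b):.1f}")

if __name__ == "__main__":
    import random
    random.seed(0)
    # Made-up per-frame scores purely to exercise the functions.
    baseline = [random.uniform(60, 98) for _ in range(5000)]
    improved = [min(100, s + random.uniform(0, 8)) for s in baseline]
    compare(baseline, improved)
```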

The immediate benefit of Smart Chunking for audiences is even better visual quality when streaming their favorite content. The data shows a noticeable increase in visual quality, perceptible to the human eye, when Smart Chunking is used compared to the legacy split-and-stitch algorithm.


BOLIN TECHNOLOGY EX—Ultra

Bolin elevates its leadership in the outdoor PTZ camera market by introducing the all-new EX—Ultra 4K60 outdoor PTZ camera. It offers three image solution options for various applications: 12X zoom with a 1-inch CMOS sensor, 30X/20X zoom full HD, 4K60 ultra-high resolution, super low-light performance and super image stabilization capability. The EX—Ultra features two FPGA imaging engines outputting simultaneous, independent video streams. There are two 12G-SDI outputs, optical SDI and HDMI 2.0, and multiple IP streams, including the FPGA hardware codec FAST HEVC. This is a revolution in outdoor PTZ cameras.

FAST HEVC is based on the H.264/H.265 (AVC/HEVC) open standard platform, using the Xilinx Zynq UltraScale+ MPSoC. With FAST HEVC, the EX—Ultra delivers a 12G-SDI signal over IP with high quality, low latency and low bandwidth, maximizing existing 1 Gbps network IP video environments. The FAST HEVC video is broadcast quality with extremely low latency (less than 2 frames) and can be delivered over long distances with a dramatically low bandwidth of 50 Mbps at 4K60.

The EX—Ultra can withstand winds up to 60 m/s. A built-in heater and defroster allow for an operating temperature of –40° to 60° Celsius. The entire camera is IP67-rated; the connections cover and mounting bracket system also meet that standard. It has all-metal mechanical parts, an aluminum alloy body and strategic use of Grivory GV5H high-strength nylon. It has a nitrogen-filled image module housing and a C5 salt-air corrosion-resistant coating.

The pan, tilt and zoom performance of the EX—Ultra is stunning. The 340° pan and 210° tilt move at a variable rate from 0.01 degrees per second to 90 degrees per second. The 255 presets execute at 100 degrees per second at five different speeds, all with Zero Deviation Positioning. The EX—Ultra also supports the Free-D protocol.

The EX—Ultra is not just for permanent stadium installations or situational awareness environments. It can also be tripod-mounted for live production. Bolin’s new EX—Ultra is the most advanced, high-performing and rugged PTZ camera we have ever made, and we are eager for the market to experience it.


BROMPTON TECHNOLOGY Tessera G1

Brompton’s Tessera G1 is the most powerful receiver card ever designed for an LED panel, and a platform for innovation on which to build the next generation of high-performance displays.

With 20x more computing power than Brompton’s current industry-leading R2+ receiver card, a single G1 receiver card can drive up to 1 million pixels for a new generation of ultra-fine pixel pitch panels — this is the equivalent of a 1280x720px display in a single panel, which could easily be combined into 4K, 8K or even larger displays.

The G1 is the first-ever receiver card to support 10 Gb fiber connections directly to the panel, thus providing 10x the bandwidth of the current R2+ receiver card and positioning the G1 as a future-proof solution for the ever-growing requirements of LED systems. It is fully compatible with the industry-leading Tessera SX40 processor, and can be driven directly from the SX40’s 10 Gb fiber outputs.

BROMPTON TECHNOLOGY Tessera G1

When it comes to in-camera visual effects, LED panels often contribute to lighting the scene, so having an additional white emitter within an RGBW LED panel represents a significant leap in color-rendering accuracy, especially noticeable on skin tones and in blending foreground elements with virtual environments.

The immense processing power available in the new G1 receiver card means Brompton’s TrueLight technology (patent pending) can perform spectrally aware, full-color, per-pixel calibration of all four RGBW LED colors, while offering powerful user control through the intuitive TrueLight user interface. The G1 is the world’s only receiver card with the extreme processing power needed to deliver color-accurate calibration and dynamic control with additional emitters.

The G1 also has the capability to drive panels at up to 1,000 frames per second, with huge benefits for slow motion and special effects filming. When showing normal frame rate video, this speed improves the performance of Brompton’s existing Tessera software features, such as ShutterSync and Extended Bit Depth. ShutterSync is an industry-first feature allowing cinematographers to time the LED refresh to their preferred camera configuration (rather than the other way around), thereby retaining greater creative flexibility for choice of aperture, shutter speed, sensor gain, etc., to achieve the desired visual result. Extended Bit Depth adds multiple additional bits of dynamic range precision, bringing out additional detail and nuance in dark areas of the image.

Brompton is already working with industry-leading panel manufacturers to integrate the new G1 receiver card as the driving force behind the next generation of LED screens. It was demonstrated at the 2023 NAB Show.

125 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

JPEG-XS is a lightweight video compression scheme that combines extremely low latency (on the order of a few lines of video) with good bandwidth savings, when compared to baseband video. Carriage of JPEG-XS over IP networks is defined in SMPTE ST 2110-22 and VSF TR-08:2022. Such JPEG-XS streams can be combined with ST 2110-30 audio and ST 2110-40 ancillary data.

The Sapphire 8JSX-8S is the highest-density openGear converter on the market. It can accept up to eight independent input JPEG-XS streams (each with its associated audio and ancillary data essences) and convert them to individual SDI outputs. Up to five Sapphire 8JSX-8S cards can be installed in an openGear frame, for a total of 40 SDI conversions per 2RU.

COBALT DIGITAL

Sapphire 8JXS-8S

Each Sapphire card has two SFP cages, supporting both 10G and 25G Ethernet interfaces, and optionally ST 2022-7 Seamless Switching. Sapphire is capable of ST 2022-7 Class-C operation, which makes it ideal for use in WAN environments.

For control, the Sapphire 8JSX-8S card includes full support for NMOS IS-04/IS-05, as well as the standard openGear DashBoard management interface.
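
As an illustration of what NMOS IS-05 control looks like in practice, the sketch below patches a receiver's staged endpoint and activates it immediately, the standard IS-05 connection pattern. The management address, receiver UUID and SDP are placeholders, not values from a real Sapphire unit.

```python
# Hedged sketch of the standard NMOS IS-05 connection pattern: PATCH a
# receiver's "staged" endpoint with a sender's SDP and activate immediately.
# The card address, receiver UUID and SDP below are placeholders, not values
# from a real Sapphire unit.
import requests

CARD = "http://192.0.2.10"                                  # placeholder address
RECEIVER = "6f7c1a2e-0000-0000-0000-000000000001"           # placeholder UUID

def connect_receiver(sender_id: str, sdp: str) -> None:
    url = f"{CARD}/x-nmos/connection/v1.1/single/receivers/{RECEIVER}/staged"
    patch = {
        "sender_id": sender_id,
        "master_enable": True,
        "activation": {"mode": "activate_immediate"},
        "transport_file": {"data": sdp, "type": "application/sdp"},
    }
    requests.patch(url, json=patch, timeout=5).raise_for_status()
```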

Sapphire 8JSX-8S is the ideal choice for receiving large numbers of JPEG-XS streams over a LAN or WAN and driving devices such as a router or a monitor wall. The openGear form factor allows Sapphire 8JSX-8S to be combined with other processing in the same chassis, and the high density translates into space and power savings. The primary use for bulk JPEG-XS conversion is feeding large numbers of monitors in space-constrained environments such as trucks and OB vans. In such environments, power and rack space are at a premium, and thus the ability to combine multiple conversions in a 2RU frame is very desirable. This also saves on 10/25G switch fiber ports, which are still expensive. Finally, there is a large number of openGear frames deployed around the globe, and the openGear form factor allows customers to combine this functionality with other processing functions as desired.

126 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

Real-time graphics are essential to an engaging live broadcast, but they shouldn’t come at the expense of time and additional money needed to hire specialized designers. That’s why we developed our Porta software: The simplest cloud-based control hub for data-driven Unreal Engine graphics in broadcast workflows.

With Porta, broadcasters no longer have to waste time on manual programming in order to deliver engaging real-time graphics. Porta runs on vanilla Unreal Engine, which means you can easily control and deliver real-time photorealistic content that adapts to a live broadcast show, driving a more engaged and loyal audience base.

Thanks to Porta’s easy template creation tools, broadcast journalists can also build unique graphics without a designer’s help, even if they’ve never used a game engine before. Both in-house and remote teams can take advantage of precise scheduling features that take the guesswork out of production while automating manual tasks with macros, previewing real-time graphics live using Pixel Streaming and much more.

DISGUISE disguise Porta

The latest release of Porta (launched in January 2023) integrates with all leading newsroom control systems, enabling broadcasters to control all show graphics and live content, as well as LEDs, tracking systems, tickers and scorebugs — all from a single interface that can be used by multiple operators simultaneously.

Whether it’s for augmented reality or virtual production, broadcasters can also use Porta 2.1 to take advantage of the tools of the future. Porta 2.1 is fully compatible with disguise’s Emmy Award-winning xR solution, which can display realistic 8K content onto an LED set — extending your broadcast set to the size of a football stadium. disguise’s creative services teams at Meptik and disguise Labs are also available to Porta users around the globe that want to design fully integrated extended reality workflows.

Key benefits of Porta include:

Stress-Free Teamwork: Journalists, producers and directors can collaborate and create graphics in their newsroom rundowns, which can be approved for graphics operators to play out from Porta.

Easier Workflows for Operators: Operators can easily customize their workflow and automate their show with new visual macros and scheduling widgets. They can customize their Porta UI even further by color-coding their playlist and workspaces. Through macros, they can even create playlists from data and reduce manual entry.

Faster Deliveries: Broadcasters can search particular graphics and allocate them to the right part of the story, cutting out repetitive tasks and making delivery times faster.

127 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

For more than 20 years, farmerswife has been the leading provider of resource scheduling, project management and team collaboration software for the media industry. With a consistent focus on adapting to the evolving needs of the sector, farmerswife provides media professionals with the best management tool for individuals and resources.

The release of farmerswife v7.0 is testament to the desire to take the farmerswife tool to the next level. The updated software comes with new features designed to deliver an exceptional user experience on almost any device and operating system. Among these, a dark mode, support for multiple currencies and improved search functions make workflow management effortless, particularly in today’s remote and collaborative working environments.

FARMERSWIFE farmerswife v7.0

farmerswife v7.0 is a future-proof solution that provides a reliable and comprehensive management system, allowing efficient and organized workflows. In addition to being cost-effective, farmerswife v7.0 is a time-saving tool that increases productivity by cutting out administrative tasks. When combined with the task management software Cirkus, it increases collaboration between teams, transforming it into an end-to-end solution.

The collaborative platform empowers customers to efficiently organize and track project resources, effectively plan and manage the entire project lifecycle, streamline day-to-day tasks, create tailored budgets, and analyze financial performance in a practical manner.

With farmerswife 7.0, media companies can rely on a cutting-edge project management solution that is designed to meet the demands of the modern media industry, providing complete visibility, productivity and collaboration. Customers in production, post production, broadcasting, equipment rental, agencies or education will find in farmerswife a scalable solution that can adapt to their needs and help them achieve their goals.

128 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

Haivision StreamHub

Haivision StreamHub is an extremely versatile HEVC and H.264 video receiver, decoder and IP gateway bridging 5G and bonded cellular, SDI, NDI, ST 2110, SRT, RTMP, and other IP protocols for on-premise and cloud-based live broadcast workflows.

Broadcast engineers and live production staff require greater flexibility between on-premise, cloud and remote production workflows that include a hybrid mix of 5G, IP and SDI video sources and destinations. Haivision StreamHub sets the standard for versatility, reliability and quality by way of real-time broadcast contribution features, IP and cloud-enabled remote production, and advanced content monitoring tools.

StreamHub comes with a flexible choice of configurations, letting the user address any type of workflow, from simple deployments to systems requiring high-density HD and 4K video processing across multiple locations, in the cloud or on-premises. The versatile platform supports industry-standard technologies and protocols, including HEVC and H.264, bonded cellular and transport protocols such as SRT, which expands interoperability beyond Haivision transmitters and encoders to include various third-party sources as well.

Designed to meet the demanding requirements of live sports and news broadcasters, StreamHub receives and decodes multicamera video streams transported over mobile networks, the internet and the cloud at very low latency. Each StreamHub can receive up to 16 concurrent incoming SST and IP streams from Haivision mobile transmitters, encoders and third-party sources. With StreamHub, broadcasters can decode up to eight live video streams simultaneously to eight SDI, IP, NDI or ST 2110 outputs, with multicamera video synchronization.

Optimized for remote production workflows, StreamHub not only receives and decodes multicamera video in real time but can also establish bidirectional audio communications with production staff, send IFB audio to talent in the field, and transmit video returns. Video returns may include contribution feeds, program outputs, teleprompting or multiviewer monitoring. StreamHub’s intuitive UI makes it easy to manage live audio and video and to configure different types of multiviewers.

StreamHub’s Data Bridge feature can also remotely control PTZ cameras and other equipment connected to a field unit via IP.

With advanced IP gateway features, StreamHub supports a wide range of protocols including RTMP, RTSP/RTP, SRT, NDI, HLS and IP, enabling broadcasters to easily distribute video content over IP networks and across the cloud in support of all types of distribution and production workflows.

Ultimately, Haivision StreamHub empowers broadcasters to adapt and simplify their workflows in real time with a flexible bonded cellular and IP video receiver, decoder and distribution platform. With user experience at the forefront of StreamHub’s design, its highly intuitive user interface, complete with video thumbnails and multiviewer configurations, enables broadcasters to quickly design workflows that cater to specific live sports, news and entertainment events.

129 Best of Show Awards 2023 | NAB Show HAIVISION
FOR MORE INFO

ZBrush 2023

ZBrush 2023 is a powerful digital sculpting and painting software that allows artists to create highly detailed 3D models with ease. With its intuitive interface and advanced features, ZBrush has become the go-to tool for professionals in the film, gaming and animation industries, offering unparalleled flexibility and control over the creative process. Its unique approach to sculpting — using dynamic brushes that respond to every stroke — allows users to quickly bring their ideas to life in ways that traditional modeling software simply can’t match, creating high-resolution models with intricate details.

One of the highlights of ZBrush is that it allows artists to work with virtual clay just as they would with traditional clay, using virtual tools to shape their creations in a natural and intuitive way. It also ships with a robust library of pre-made assets, including brushes, textures and 3D models; these resources save artists time by providing a solid foundation for their projects or inspiration for new ideas. Because ZBrush works with tens of millions of polygons in real time, users can paint directly on the surface of the model without first assigning a texture map or UVs. This offers significant advantages compared to a standard workflow, giving sculptors the freedom to visualize, explore and create textures in 3D and in real time. Additionally, ZBrush’s compatibility with other tools, such as Maxon’s Cinema 4D, means that when users need to move their 3D sculpt into a pipeline for animation, rendering or 3D printing, it is easy to incorporate these assets into larger workflows, making it a versatile tool for any creative project.

The latest release of ZBrush 2023 features two new tools never before introduced in 3D sculpting workflows. Implementing Proxy Pose instantly reduces polygon density, allowing artists to quickly and efficiently manipulate ZBrush models, and then convert back to a high-density mesh. The Drop 3D function combines the technology of Sculptris Pro with the 2.5D functionality of the ZBrush canvas to produce a new workflow that allows for enhanced creativity. This tool inspires artistic innovation by fusing the elements of both 2D and 3D design, allowing creators to explore design concepts aligned with illustration techniques, surface detailing and 3D modeling. Recent product updates also saw Maxon integrate their world-class rendering tool Redshift directly within ZBrush, making Redshift accessible to all artists and creators and bringing them new levels of power and flexibility. This integration provides new and exciting opportunities to create high-quality images with subsurface scattering and emissive light generation as well as more easily render realistic materials such as marble, skin, leaves and wax.

ZBrush 2023 remains the go-to tool for professional VFX artists tasked with bringing stunning visual effects and lifelike characters to the big screen, with recent notable examples including its use in Oscar-nominated films. The titular character in “Marcel the Shell with Shoes On” was sculpted using ZBrush before being 3D printed, and the majority of digital sculptor Glen Southern’s technical work was completed using ZBrush for the puppet himself in Guillermo del Toro’s “Pinocchio.”

130 Best of Show Awards 2023 | NAB Show MAXON
FOR MORE INFO

Mimir is a cloud-native video collaboration and production platform. It combines media asset management with a wide range of live production features in the cloud. With Mimir, users can access and find content independently of its location; all that is needed is an internet connection. Artificial intelligence (AI)-assisted automatic metadata logging, including ChatGPT, combined with a powerful search tool, reduces the time needed to find the required content for editing projects and from video archives.

Mimir is the tool of choice for anyone transitioning from on-premise to the cloud or looking to modernize their existing media infrastructure and workflows. Mimir can fill the gap of several legacy systems for media ingest, production asset management, media asset management, archive and backup in a cloud environment, a hybrid cloud or even with on-premise infrastructure.

MJOLL

Mimir

Mimir is also a transcoding platform and tool for collaboration, sharing and comments.

Since its launch in 2019, media houses, production companies, news agencies and broadcasters have embraced Mimir for their media cloud archive and backup, AI-assisted automatic metadata logging, video collaboration and sharing, PAM and MAM needs, live feed integrations and more. Customers include, amongst others, The New York Times, blinx, Formula E, Hilton, Deutsche Presse-Agentur, GB News, NRK, TV 2 and WHO, and the customer base will soon reach 60 worldwide.

Mimir was created from scratch on cloud technology, without the traditional legacy of on-premises software, so its deployment and, not least, its update cycle without downtime are adapted to modern requirements of continuous updates.

Mjoll has certified a range of integration partners and supporting technologies for Mimir for an enhanced workflow, including NDI support and using ChatGPT to create video content descriptions automatically.
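
As a rough sketch of the general pattern behind ChatGPT-generated content descriptions (not Mjoll's implementation), a transcript can be summarized into a short asset description via the OpenAI API; the model name and prompt are illustrative.

```python
# General pattern only (not Mjoll's integration): summarize a transcript into
# a short content description with the OpenAI API. Model name and prompt are
# illustrative; OPENAI_API_KEY is expected in the environment.
from openai import OpenAI

client = OpenAI()

def describe_clip(transcript: str, max_words: int = 60) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do
        messages=[
            {"role": "system",
             "content": f"Summarize the transcript as a neutral description of "
                        f"at most {max_words} words for a media asset manager."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content.strip()
```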

Mimir is available, for example, through MOS, via iFrame and as a panel in Adobe Premiere Pro, Vimond VIA and Cutting Room. With its open interface, Mimir is agnostic to which storage solution it connects to, as well as to nonlinear editing solutions and newsroom systems. Mimir also integrates with a wide range of AI technologies for transcription, translation, face detection, label detection, OCR, categorization and more.

Mimir is NRCS-agnostic and integrates with newsroom systems, such as Dina, Octopus, Inception, iNews and ENPS. It is also NLE-agnostic with Adobe Premiere, Avid Media Composer, Final Cut Pro X, Edius, DaVinci Resolve, Blackbird and Cutting Room.

131 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

Quantum Myriad

Myriad is a new all-flash scale-out file and object storage software platform ideally suited for the evolving needs of VFX, animation and rendering; the increasing demand for AI and ML content creation and enhancement tools; and new markets such as AR/VR, live production with LED video volumes and digital twinning.

Legacy NAS storage systems provide inconsistent performance, are complex, difficult to scale, and often deployed in islands that add workflow complexity and increased management burden. The slow performance makes rendering a painful and long process.

Instead, Myriad makes full use of readily available NVMe storage and RDMA to deliver the extreme performance (tens of GBps) and high IOPS (hundreds of thousands) needed for cutting-edge animation and multiplatform workflows without the drawbacks or design limitations of legacy systems. Myriad requires no custom hardware, so as market-available NVMe storage servers gain higher capacities, higher performance and lower cost, they can be adopted, giving flexibility and adaptability as the business evolves.

Myriad lets you consolidate multiple animation, VFX and rendering workflows into a single fast system to serve all departments, clients, workstations and workflows including rendering pipelines and AI and ML applications. Myriad delivers consistent performance for all users and is highly efficient storage for the large number of small files common in these workflows, and for serving rendering pipelines without impacting other users.

Myriad is built with cloud-native technologies like microservices and Kubernetes, making it extremely flexible and easy to use; no specialized IT or networking experience is required, and it can be easily deployed on-premises or in the cloud. Myriad delivers this performance in a smaller footprint requiring less power, cooling and fewer components to reduce networking complexity, administration overhead and operational costs. Myriad’s powerful data services ensure that data is deduplicated and compressed to deliver an effective capacity of up to 3x the raw storage capacity. Zero-impact snapshots and clones protect against operator error.

Myriad Benefits

• Consistent, fast performance of up to tens of GBps and hundreds of thousands of IOPS to serve every creative department’s needs, including rendering, on a single system, whether deployed on-premises or in the cloud.

• Modern microservices architecture orchestrated by Kubernetes to deliver simplicity, automation and resilience at any scale.

• Standard, market-available NVMe flash storage servers, so you can quickly adopt the latest hardware capacities and form factors and adapt your storage infrastructure to meet future requirements.

• A Myriad cluster can start with as few as three NVMe all-flash storage nodes, and its architecture enables scaling to hundreds of nodes in a single distributed, scale-out cluster.

• No specialized IT or networking knowledge needed — powerful automated storage, networking and cluster management automatically detects, deploys, configures storage nodes and manages the networking of the internal RDMA fabric.

• Highly efficient data storage with intelligent deduplication, compression and self-healing and self-balancing software to respond to system changes.

• Simple, powerful data protection and recovery with snapshots, clones, snapshot recovery and rollback capabilities to protect against user error or ransomware.

132 Best of Show Awards 2023 | NAB Show
QUANTUM
FOR MORE INFO

QUICKPLAY AND VIONLABS

Quickplay-VionLabs Preview Clips Integration

Quickplay, a North American-based company, and VionLabs, headquartered in Sweden, are bringing together Quickplay’s award-winning, cloud-native CMS and Vionlabs’ AINAR Visual Discovery solution to create AI Automated Thumbnails and Preview Clips.

Pre-integration with the Quickplay CMS means that Quickplay customers automatically have access to a powerful new tool to drive customer engagement and long-term value. AI-derived metadata for content moods, micro-genres, story descriptors, keywords and more are leveraged by advanced personalization algorithms from Quickplay to:

• Find all the main characters through presence and importance to story;

• Pinpoint exactly where in the frame to feature main characters for thumbnails; and

• Find the best scenes using energy and emotion tracking across the story arc.

AI-automated thumbnails and preview clip outputs are created quickly and easily, significantly reducing the cost of content marketing. AINAR Visual Discovery recognizes key people in the video and analyzes their mood and appearance to evaluate their importance. After analysis, AINAR Visual Discovery can find the main characters for thumbnails and previews and select both engaging and relevant segments of the content suited for promotional material.

Most services face a Catch-22: they have no data on new users, and it takes time to collect it. Vionlabs AI generates preview clips (the type of short clip that starts when you hover over a movie or series thumbnail on Netflix) that, when combined with metadata and content embeddings, can connect shorter clips to long-form content such as movies and series. This helps boost the amount of data points available for each user early in the user journey from one or two interactions per week to 50–100 interactions that can be used in recommendations, personalization and discovery.

As offered by Quickplay and VionLabs, Preview Clips uses three key AI-based capabilities to enable OTT providers to create previews automatically, without the time and cost of manually marking each noteworthy highlight.

• Character Tracking identifies characters and tracks their actions, enabling accurate previews that focus on the main characters in the story.

• Action Detection uses a deep learning algorithm to recognize high-energy scenes that should be included in the previews, including dynamic actions such as running, jumping or fighting for an action movie, and funny moments for a comedy.

• Speech Detection technology provides an added layer of protection by checking dialogue to ensure that previews begin and end at natural breaks in the conversation.

This results in richer, more nuanced video recommendations and previews that can be targeted to viewers based on data collected by Quickplay tools across the subscriber journey. Previews can be published immediately or serve as the basis for further refinement by the content team, resulting in increased activations across Quickplay customers’ OTT content libraries. As noted above, a baseline of one or two interactions per week can be increased to 50–100 interactions that can be used in recommendations, personalization and discovery.
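
The sketch below is an illustrative outline (not Quickplay or Vionlabs code) of how the three capabilities above can combine: rank scenes by energy and main-character presence, then snap clip boundaries to detected dialogue breaks. All field names are hypothetical.

```python
# Illustrative outline, not Quickplay/Vionlabs code. Scenes carry hypothetical
# energy/character fields; clip boundaries are snapped to dialogue breaks.
def pick_preview_segments(scenes, dialogue_breaks, main_cast, target_s=30):
    """scenes: list of dicts with 'start', 'end', 'energy', 'characters'."""
    def snap(t):
        # Move a boundary to the nearest detected pause in dialogue.
        return min(dialogue_breaks, key=lambda b: abs(b - t))

    ranked = sorted(
        scenes,
        key=lambda s: (s["energy"], len(set(s["characters"]) & main_cast)),
        reverse=True,
    )
    picked, total = [], 0.0
    for s in ranked:
        start, end = snap(s["start"]), snap(s["end"])
        if end - start <= 0:
            continue                      # snapping collapsed the segment
        picked.append((start, end))
        total += end - start
        if total >= target_s:
            break
    return sorted(picked)
```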

133 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

SoundID AI

There are more and more broadcast channels: TV, radio, podcasts, web radio, IPTV and beyond. Content creators and everyone involved in neighboring rights face the challenge of detecting and tracking unpaid fees, and it is even more complex for composers producing background music. While some automatic content recognition solutions exist, they hardly scale to analyze thousands of streams or to detect content disturbed by noise or mixed with voice or other music.

Thanks to more than three decades of experience in radio software solutions and years of testing, SoundNodes (a spinoff of the OPNS Broadcast division) released its first automatic audio content recognition system, SoundID, in 2019. The product was quickly recognized with awards by the industry and by experts from the EU Commission.

While many solutions are based on outdated technology, SoundNodes invested heavily in building the next generation of media and broadcast solutions, breaking the limits of background-music detection by developing and releasing a second generation, SoundID AI, based on an innovative multilayer artificial intelligence-powered algorithm that delivers impressive detection results at high speed, even in very noisy environments.

With a proprietary fingerprinting technique and a powerful post-processing workflow, the solution delivers incredibly accurate results even on very short elements. SoundID AI is totally content-agnostic and works perfectly with music, songs, advertisements and speech in any language. Free of false positives, the solution provides reliable reports allowing any author, composer or producer to claim their legitimate rights.
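
For readers unfamiliar with audio content recognition, the sketch below shows the classic offset-voting idea for turning fingerprint hash matches into a detection. SoundID AI's fingerprinting is proprietary and AI-based; this generic example only illustrates the broader concept.

```python
# Generic illustration of the classic offset-voting idea in audio content
# recognition; SoundID AI's fingerprinting is proprietary and not shown here.
from collections import Counter

def match(query_hashes, reference_index, min_votes=20):
    """
    query_hashes: iterable of (hash, query_time_s)
    reference_index: dict mapping hash -> list of (track_id, ref_time_s)
    Returns (track_id, offset_s, votes) for offsets with enough aligned matches.
    """
    votes = Counter()
    for h, q_t in query_hashes:
        for track_id, r_t in reference_index.get(h, ()):
            # Hashes from the same recording align at a constant time offset.
            votes[(track_id, round(r_t - q_t, 1))] += 1
    return [(track, offset, n) for (track, offset), n in votes.most_common()
            if n >= min_votes]
```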

While the first product was only available as an on-premises bundled solution, the second generation is offered in many forms: on-premises, in the cloud, but also as a white label for partners or even in API mode for any SaaS need. SoundID AI now accepts almost any type of source, directly received from the customer or captured by our proven AudioSpy platform, which is able to capture any radio, TV or internet stream signal. SoundID AI not only provides broadcast evidence but can even offer detection playback.

Our game-changing technology even detects recurrences of unknown sounds and aggregates them to allow manual or assisted identification and metadata enrichment, or to simply associate them (without the need for a new analysis) with a track added to our database.

SoundID AI is also powered by a combination of relational, non-relational and vector databases to accommodate the various types of data (fingerprints, metadata, etc.) and trillions of combinations per second.

SoundID AI relies on a complex AI algorithm executing billions of calculations per second, thanks to powerful GPUs from our technology partner Nvidia.

SoundNodes is proud to bring to market such a valuable tool to support all artists and associated rights holders in securing fair use of their creations, and to participate in the fight against piracy.

It is also an efficient way for advertisers to validate their campaigns and be sure to get what they paid for.

SoundID AI is the first of a family of products based on the framework developed to fulfill the ACR needs of the media industry.

134 Best of Show Awards 2023 | NAB Show
SOUNDNODES
FOR MORE INFO

SPECTRA LOGIC Spectra Vail Distributed Multicloud Data Management Software

As workforces remain largely remote and distributed, media and entertainment workflows have been under pressure to offer instant access, sharing and archiving of media. In response, organizations are looking to implement seamless hybrid cloud data management workflows that consolidate media storage and access to digital assets — with the ability to place content where it is needed and integrate workflows so users can share the same editing and access management software.

Enter Vail, a breakthrough distributed multicloud storage management solution that unifies and simplifies data access, usage and placement across on-premises storage, multiple cloud and storage platforms using a single global namespace. It can run on virtual machines, third-party hardware, cloud nodes and/or Spectra’s BlackPearl platform.

Vail allows media and entertainment organizations to leverage existing on-premises applications and native cloud services, regardless of where the data is physically stored. Users benefit from cloud transcode, edit, playout and metadata strip capabilities without the need to manually manage each data set and inherently know each cloud’s interface. Vail lifecycle policies automate the placement of data across cloud and on-premises storage repositories so that users and applications can be directed to either type of storage based on locality and performance requirements — putting digital assets where they need to go for the most effective, agile, and productive workflows. Vail also offers automated bidirectional bucket synchronization across cloud and on-premises storage so that when content changes in one location, it automatically changes at another location.

For applications that are running in the cloud, Vail can protect the cloud repositories being written to by those applications so that the customer maintains an independent copy of all their digital assets. Vail enables workflows that leverage the best parts of cloud services while keeping content locally when needed for cost-savings, scalability and limiting content vulnerability — including on tape for the ultimate air gap protection against ransomware. For datasets that are on-premises, Vail can replicate them to public cloud storage so public cloud applications can leverage the same data as on-premises applications. Vail’s configurable policy engine streamlines the creation of a common platform where any assets are accessible from any location and secure for long-term preservation and disaster recovery.

Vail allows organizations to optimize their use of cloud and right-size their cloud storage footprint. By integrating on-premises media storage with cloud services, Spectra Vail minimizes data egress for lowest cost and fastest access. Vail also enables cloud-agnostic media workflows to avoid vendor lock-in by moving content to and between cloud providers as well as on-premises storage.

As the first of its kind, this innovative software disrupts the industry by delivering the flexibility to balance location, performance and storage cost of digital assets in a single managed, unified cloud-operating environment. Vail enables modern media and entertainment workflows to integrate and manage any combination of on-premises and cloud services with unlimited capacity, throughput, object count, user count and site count. Ultimately, Vail gives organizations the flexibility, speed and protection necessary to create, share, deliver and monetize content in today’s competitive broadcast landscape.

135 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

MWEdge is a software gateway built to fit right into Infrastructure as Code (IaC) cloud workflows to bring reliable-transport protocols like SMPTE ST 2022-7, SRT, RIST and Zixi together with a broadcast toolset focused on Tier 1 broadcasters who need to move live media with absolute reliability and the vital telemetry to go with it.

MWEdge can be tailored to a versatile set of use cases that deliver against a broadcaster’s core goals whether they are cost savings, latency reduction, seamless switching, multicloud workflows or access to full end-to-end low-level forensic data. As an example, one global broadcaster was able to transform their entire linear TV distribution to national affiliates using MWEdge in the cloud, reducing costs on dedicated circuits and engineering.

At the 2023 NAB Show, MWEdge was the first to market with a GV AMPP Streaming SDK integration aimed at enabling 10-bit YUV compressed cloud production streams. MWEdge also stands out from other IP gateways due to its own complete API, which fits seamlessly into broadcasters’ fully automated Infrastructure as Code (IaC) workflows that manage deployments and the pool- or usage-based licensing. With thousands of instances of MWEdge running 24/7, deep telemetry and monitoring are vital. To address this, Techex created a bespoke interface to Dataminer, enabling high-bandwidth, push-based telemetry along with options for Splunk, InfluxDB and Grafana. Based on these features, several companies have standardized on MWEdge as their preferred method of transporting video into or around the cloud.

TECHEX

MWEdge

MWEdge excels at enabling cloud production with three key abilities implemented at customer request. First, native NDI transport creates a bridge between on-prem NDI workflows and cloud workflows, allowing NDI to move in and out of the cloud without re-encoding.

Broadcast-grade redundancy features like ST 2022-7 maintain quality and minimize latency.

Second, MWEdge can carry JPEG XS, the visually lossless, low-latency codec that underpins remote production sports workflows, pushing quality higher. Finally, MWEdge enables cloud production by ensuring reliability and flexibility, without requiring broadcasters to fully commit to hyperscalers like AWS. Whether in the cloud or on-prem, MWEdge can convert between several protocols, including SRT, RTP, UDP, 2022-7 and HLS. MWEdge also has detailed ETR 290 Priority 1, 2 and 3 monitoring, as well as network-level stats with confidence thumbnails on every stream.

This year’s innovations in MWEdge demonstrate its continuing drive to bring flexibility and reliability to high-value workflows in the cloud. MWEdge is the result of unique innovations based on open technologies and Techex’s long experience meeting broadcasters’ needs. This combination has had an immediate impact and allows service providers to continue to innovate, harnessing networks around the world to deliver more video in better ways at lower cost.

136 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

TELOS ALLIANCE

Jünger Audio AIXpressor

The Jünger Audio AIXpressor combines unparalleled I/O flexibility and legendary Jünger audio processing into a compact, 1RU powerhouse.

AIXpressor natively supports analog, AES3, MADI and Jünger’s low-latency 1024-channel tieLight, plus Telos Alliance Livewire+ and AES67 in support of SMPTE ST 2110-30 via AoIP. Four expansion slots support additional I/O modules including 3G HD-SDI, microphone inputs with pre-amps and 48V phantom power, Audinate’s Dante AoIP, and additional analog, AES3 and MADI sources.

The full suite of Jünger audio processing, encoding and decoding solutions can be added as needed in the field via license.

Based on Jünger’s new flexAI platform architecture, AIXpressor can be used as a standalone processor or employed as part of a larger processing array incorporating other AIXpressor units as well as flexAIserver for high channel-count applications.

To learn more about AIXpressor, please visit https://telosalliance.com/aixpressor

137 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

Ranger

You’ve seen low-delay. Now experience zero-delay.

The average latency for RF-based wireless video transmission is 30.8ms. But the human eye can detect a delay as low as 13ms. That’s why we designed Ranger — the only wireless video solution to achieve true zero-delay (<1ms) with visually lossless picture quality, featuring our Emmy and Academy Award-winning zero-delay, 4K HDR technology.
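
A back-of-envelope check of those figures, using only the numbers quoted above:

```python
# Our arithmetic, using the figures quoted above.
frame_ms = 1000 / 60                                   # one frame at 60 fps
for label, delay_ms in [("typical RF link", 30.8),
                        ("perception threshold", 13.0),
                        ("Ranger", 1.0)]:
    print(f"{label}: {delay_ms} ms = {delay_ms / frame_ms:.2f} frames at 60 fps")
```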

Ranger’s best-in-class performance allows broadcasters, live production companies, churches and government entities to implement real-time wireless workflows over licensed and unlicensed bands, from 4.9 GHz up to 6.4 GHz.

What’s new for 2023 is that Ranger now includes 12 additional RF channels over 6 GHz (U-NII 5), providing access to wireless spectrum rarely used by other electronics. And now with two form factors (Ranger Micro and Ranger MK II), users can scale their zero-delay transmission options according to production needs.

Ranger’s expansive key capabilities include:

• Zero-Delay: Go from TX to RX in .001 seconds — perfect for IMAG and live event production.

• Visually Lossless 4K HDR: Transmit video in exceptional detail — superior to HEVC and H.264 solutions.

• AES-256 Encryption: Keep your video transmission confidential and secure.

• Granular Frequency Control: Make adjustments to the frequency of your signal in 5 MHz increments.

• Licensed & Unlicensed Frequencies: Operate from 4.9 GHz to 6.4 GHz on licensed and unlicensed bands.

• Intercompatibility: Pair any previous and current Ranger systems together.

With the number of wireless devices on the rise, and a limited amount of frequency bands to choose from, there is little room for your signal to move without interference or disruption. But with Ranger, you have options: 5 GHz for general use, licensed band frequencies for special events, and the new 6 GHz U-NII 5 band for 12 new channels of uncongested wireless spectrum. When a delay in audio and video transmission is noticeable, it’s nearly impossible to give your fans a lifelike experience. Ranger’s patented zero-delay technology solves this problem. With Ranger, your audio and video signals are transmitted with ultra-low latency — even in 4K60 — providing a visually lossless and engaging experience for your viewers.

Ranger was engineered for flexibility and cross-compatibility. Pair any combination of Ranger TX and RX together — regardless of model. That means your production can quickly scale and operate a variety of Ranger systems best suited to your environment, despite differences in transmission range.

Visit teradek.com/ranger to learn more.

138 Best of Show Awards 2023 | NAB Show
TERADEK
FOR MORE INFO

Traxis Tracker: The Ultimate Camera Tracking Hub

Whether it’s for real-time election AR graphics or a full-fledged virtual studio, broadcasters increasingly need camera, lens and talent tracking to seamlessly blend the real and virtual worlds together.

A lot of the time, however, camera tracking data is not accurate enough. On top of that, studio and lens calibration are often complex, cumbersome and time-consuming. All this results in jittering graphics that break the illusion of a virtual production: Something that can alienate audiences and negatively affect engagement in a broadcaster’s live show.

That’s why at this year’s NAB Show, Zero Density announced the new Traxis Tracker, a software platform that will make these problems a thing of the past. Traxis Tracker acts as the ultimate tracking data hub for broadcasters, so they can rely on accurate camera, lens, object and talent tracking data that can all be managed in one place.

Traxis Tracker consolidates all tracking data sources into a unified platform, eliminating the need for broadcasters to waste time managing different tracking devices and processes. The hub supports all commonly used tracking protocols, so that broadcasters no longer need to manage tracking data coming from multiple vendors and devices.

This means it’s now easy to create a photorealistic blend of real and virtual worlds, even when fast camera movements and close-ups are involved. It also means it’s now even easier to set up a virtual studio with accurate studio and lens calibration — all using a single hub that lets you manage all your tracking data in one place.

In order to generate the most accurate tracking data available, Traxis Tracker can process the studio camera position, orientation and lens data and apply special algorithms and filters in real time. It can input the camera feed and then visualize the camera, lens, talent or other data over the live camera feed without the need for an external render engine. The entire calibration process can be done from one interface.
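
As a generic illustration of what filtering raw tracking data can look like (not Zero Density's algorithms), even a simple exponential low-pass on pan/tilt samples reduces jitter at the cost of a little lag:

```python
# Generic smoothing example, not Zero Density's algorithms: an exponential
# low-pass over pan/tilt samples. Lower alpha means smoother output, more lag.
class ExpSmoother:
    def __init__(self, alpha: float = 0.25):
        self.alpha = alpha
        self.state = None

    def update(self, sample: dict) -> dict:
        if self.state is None:
            self.state = dict(sample)
        else:
            for k, v in sample.items():
                self.state[k] += self.alpha * (v - self.state[k])
        return dict(self.state)

smooth = ExpSmoother()
for raw in [{"pan": 10.02, "tilt": -4.98}, {"pan": 9.97, "tilt": -5.03}]:
    print(smooth.update(raw))
```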

Once the calibration is finalized, it will then stream that data to any engine.

A solution to several market needs:

1. The need for a precise camera tracking solution that does not break even in extreme production conditions such as when using fast camera movement and closeups. Traxis Tracker meets this need using a set of sophisticated algorithms that generate the most accurate tracking data available.

2. The need for a way to make studio and lens calibration simpler and more accurate. Traxis Tracker meets this need with unique studio and lens calibration modules that remove complexity and allow broadcast teams to focus on the production itself instead of the technology.

3. The need for a single system that acts as a hub for the different tracking solutions deployed for the different productions, making tracking data transparent to the operation and the production crew. Traxis Tracker meets this need by consolidating all tracking data sources into a unified platform and eliminating complexity as broadcasters no longer need to manage different tracking devices in different ways.

139 Best of Show Awards 2023 | NAB Show ZERO DENSITY
FOR MORE INFO

AMAZON WEB SERVICES (AWS)

AWS Studio in the Cloud

The AWS Studio-in-the-Cloud solution includes media applications running on virtual machines for creative production workflows. The pipeline can run at 12 bits with color-accurate monitoring. While content providers can employ various creative applications and partner solutions, an example workflow might comprise:

Artists creating 3D animations with Autodesk Maya use powerful virtual workstations in Amazon Elastic Compute Cloud (EC2) connected with HP Anyware, and a scalable cloud render farm with Thinkbox Deadline for cinematic footage. The Virtual Art Department utilizes cloud-based workstations to create 2D/3D content for virtual production using Unreal Engine and Adobe Substance Painter. Assets are synced through Perforce and accessible for on-set virtual production. Red Komodo footage is immediately uploaded by Red to Amazon S3, where CloudSoda moves it to a WekaIO storage cluster, giving editors access to footage within minutes of it being shot. Edit-in-the-cloud with Adobe Premiere Pro, Adobe After Effects and Streambox enables a professional, high-quality viewing experience for remote editors. Audio is provided by Blackmagic Fairlight in AWS and linked to an on-site audio console; editors can add sound effects, music and mix up to 7.1 audio using NICE DCV.
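
As one hedged example of a step in such a pipeline, a GPU virtual workstation can be launched in Amazon EC2 with boto3; the AMI ID, key pair and security group below are placeholders.

```python
# Hedged example: launching a GPU virtual workstation in Amazon EC2 with
# boto3. The AMI ID, key pair and security group are placeholders; the calls
# and instance type are standard AWS APIs.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder studio workstation AMI
    InstanceType="g4dn.4xlarge",            # NVIDIA GPU instance for artist tools
    KeyName="studio-keypair",               # placeholder key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "role", "Value": "virtual-workstation"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```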

Color-in-the-Cloud enables color grading on Baselight, using AWS CDI to send an uncompressed video signal to AWS Elemental MediaConnect, which applies low-latency JPEG-XS 10:1 compression to the stream for real-time viewing. Colorfront’s QC Player enables high-quality, color-accurate viewing of final assets.

After QC, the content moves through the media supply chain via an event-driven workflow, allowing for transcoding, and distribution to theatrical, broadcast and streaming providers. High-quality, real-time collaboration across the supply chain is provided by Like Minded Labs’ TODA; Arch Platform Technologies provides virtual workstation management in AWS including monitoring and deployment of resources on-demand.

AWS Studio-in-the-Cloud encompasses key components of a real-world production: pre-visualization, virtual production shoot, ingest, editorial, visual effects (VFX), compositing, color/finishing and quality control. The architecture demonstrates five principles of the MovieLabs 2030 Vision that are technically achievable today (1, 2, 3, 6, 10), and includes deployment orchestration, shared storage, asset management and workflow automation.

In legacy post-production workflows, effort is wasted moving content between environments, creating operational and labor costs, increasing the chances of lost files or metadata, and impacting overall security. The time and resources spent on moving and managing data is a distraction from adding more value to the content through creative iteration. AWS is focused on a vision providing creatives with a framework of AWS services and technology partners that bring global creativity together in a secure, highly scalable, performant and collaborative user experience.

The cloud is already widely used for storage of media files, file transfers, lightweight proxy editorial and for large compute jobs. However, AWS is filling gaps to create more effective, efficient cloud-based workflows. With new cloud-enabled tools, open APIs and an expanded ISV partner ecosystem, a holistic production workflow with shared storage is now a reality.

Demonstrating a new, innovative and viable approach to high-quality production in the cloud, AWS Studio in the Cloud is deserving of a Best of Show Award recognition by Future.

140 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

BACKLIGHT Wildmoka | Digital Media Platform

Wildmoka’s Digital Media Platform allows leading sports broadcasters and rightsholders to create and distribute unlimited amounts of content such as near-live clips, highlight reels and live streams across web, mobile, OTT and social networks (including Twitter, Facebook, YouTube, LinkedIn and TikTok). It enables content creation and distribution from any source to any destination, in any format, at speed and scale.

The cloud-native platform offers a set of easy-to-use, web-based tools for fast and efficient creation and repurposing of near-live clips, highlight reels and live streams. It is ideal for working with live sports and breaking news content that has a higher value now than 5 minutes later.

Designed for efficiency, the Digital Media Platform allows its customers to create significantly more videos for digital destinations with the same number of editors. The platform also features two AI/ML empowered modules that further boost broadcasters’ digital strategies: Responsive Video and StoryBot.

NBC Sports and Wildmoka collaborated to introduce new cloud and AI/Machine Learning technologies and boost the productivity of NBC Sports’ digital editorial teams. The goal was to produce more, and more diverse, content covering more digital destinations at far higher speed than before.

This objective was successfully reached during the latest edition of the prestigious men’s golf competition, the Ryder Cup, where NBC Sports delivered “the most comprehensive digital presentation ever,” according to Michael Lowe, vice president, Digital Strategy and Partnerships at NBC Sports.

Following the year-long postponement of the tournament’s 43rd edition due to the pandemic, audience anticipation for the event was greater than ever, and NBC Sports was aware of the pressure to deliver an exceptional experience for fans to enjoy on the device and platform of their choice.

NBC Sports surpassed fans’ expectations by developing an ambitious digital strategy, which included:

• Publishing content to no less than 22 digital destinations during the tournament

• Five OTT platforms (Golf Channel, Peacock, Ryder Cup, Ryder Cup USA, Ryder Cup Europe)

• 17 social media channels on Facebook, Instagram, Twitter, YouTube or TikTok

• Productivity boost: more than 1400 videos clipped and produced from three days of competition

• Reducing time to publish for highlight reels by more than 90% (from 20 min to ~2 min)

• Creating an enormous diversity of short-form videos and strategically defining where to publish each of them:

- Near-live clips

- Every hole summary

- Match summaries

- Entire day summaries

- Best of Shots

- Best of Players and more

• Applying different aspect ratios without compromising the original quality with Wildmoka’s Responsive Video so fans could enjoy on any device

• Making monetization much less intrusive with smart insertions, and innovating with live teasing

• Sharing the on-course atmosphere with shoulder content live streamed to social

Thanks to Wildmoka’s Digital Media Platform and its AI/ML-based video automation solution, the variety, quality and speed of delivery of the Ryder Cup’s digital content made for a superior viewing experience. Moreover, the underlying technology will now help the broadcaster execute many more successful partnerships with federations and leagues, making it a viable and future-proof solution.

141 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

Hammerspace is a powerful scale-out software solution designed to automate unstructured data orchestration and global file access across storage from any vendor at the edge, in data centers and one or more cloud regions and providers globally.

With Hammerspace, broadcast and film customers can now unify global access across on-premises storage and compute resources plus cloud providers and regions from any vendor, to cut costs and rapidly adapt to changing production requirements.

Unlike solutions that shuffle file copies across incompatible storage types and distributed environments, Hammerspace creates a high-performance parallel global file system that seamlessly spans on-prem and cloud resources bridging silos, locations and clouds so users everywhere are working on the same datasets.

Hammerspace

For example, Jellyfish Pictures, a U.K.-based VFX company, leverages Hammerspace to rapidly spin up production teams globally, without the need to consolidate resources or rely on traditional file copy methods like FTP or rsync. This enabled Jellyfish to take on new VFX projects for Netflix and Disney by rapidly bringing new teams online in Australia, India and South Africa, who now have the same user experience that local artists in London enjoy.

Another issue was the high cost of render workloads in more expensive cloud regions. Render jobs that need thousands of CPU cores in London or Los Angeles are significantly more expensive than in cloud regions where power comes from lower-cost renewable resources.

Hammerspace solves this problem transparently to users, using its data orchestration system to automatically route jobs to the lowest-cost cloud regions based upon current cloud provider rates, and without creating multiple copies of data. From an artist’s standpoint, workflows are unchanged due to native integration with applications such as Autodesk ShotGrid. As a project is triggered within ShotGrid to move files to render, Hammerspace takes care of everything automatically in the background, routing jobs seamlessly to whatever the lowest-cost region is at that time.

And since Hammerspace supports on-prem or cloud storage from any vendor, customers have maximum flexibility to use any existing or new storage, without needing to consolidate data into proprietary cloud or on-prem resources.

Hammerspace can dynamically extend the production namespace across cloud regions to scale up or scale down when needed, to minimize cloud expenses. Render costs alone on large projects can be reduced by 30% or more because of this.

With Hammerspace, users everywhere still see the same files in the same directories, as though all files were all on local storage. The power of Hammerspace is the ability to present local access to distributed users, meaning that everyone is working on the same datasets no matter where they are or on which storage type or location the data is located. And because Hammerspace supports new or existing on-prem or cloud storage from any vendor, it means customers can rapidly adapt to changing project requirements.

In these ways, Hammerspace has revolutionized the way broadcast, film and other industries can manage distributed workflows and data across one or more on-premises and cloud compute and storage resources.

142 Best of Show Awards 2023 | NAB Show
HAMMERSPACE
FOR MORE INFO

HOMATICS X THX Home Click

Smart-Home Theater Speaker System

The HOMATICS X THX Home Click smart-home theater instantly turns your TV into a cinematic experience that is Tuned by THX. Users get clear images with Dolby Vision and immersive panoramic surround sound through powerful Dolby Atmos 5.1.4-channel surround sound with artificial intelligence (AI) enhancement technology.

The easy-to-use design is plug-and-play through seamless wireless connections, so users can enjoy theater-quality entertainment in a snap, all through voice or RCU control. Additionally, Homatics’ Humming EQ feature adds to a personalized audio-visual experience that is truly one of a kind.

Tuned by THX provides corrective EQ and dynamics parameters for the best listening experience at all volume levels, for the best right-out-of-the-box fidelity for enjoying music, movies, games, sporting events and more. This includes per-channel transducer compensation to align the overall acoustic performance to best match the target THX audio frequency response curve. It also addresses issues that arise with variability that occurs among different speaker drivers and components, and ensures quality regardless of volume, thus optimizing the performance of the device.

Homatics’ innovation and new products provide users with ultimate multi-scenario entertainment experiences and high-quality intelligent lifestyles. Through this Homatics and THX collaboration, innovation boundaries are pushed to bring users closer to the experience intended by content creators.

Specifications: Dolby Vision, Dolby Atmos, AC3 5.1, EAC3 7.1, EAC, ARC, Wi-Fi 6, Bluetooth 5.3, Zigbee 3.0, Thread, Matter, temperature and humidity sensor, far-field microphone, millimeter-wave radar, light sensor, equalizer, 1 TB SSD (4 TB optional), HDMI 2.1, 4K, Amlogic S905X4, 4 GB LPDDR4, Google Assistant, ATSC/DVB-S2/T2/C (optional) and Android TV.

Dimensions:

• Home Click Center Speaker dimensions (W x H x D): 210 x 133 x 70 mm (depth about 2.76 inches)

• Home Click Speaker dimensions (W x H x D): 115 x 115 x 320 mm

• Home Click Box dimensions, each (W x H x D): 114 x 114 x 28 mm (depth about 1.1 inches)

• Home Click Center Speaker weight: 0.710 kg

• Home Click Speaker weight: 1.838 kg (x4)

The Tuned by THX™ Homatics speaker systems are expected to be available for distribution in the Pay-TV cable and satellite television operators’ markets as early as spring 2023.

Homatics products push the boundaries of innovation to bring to market new products and services to provide users with ultimate multi-scenario entertainment experiences and high-quality intelligent lifestyles. SEI Robotics will serve as the ODM for this THX Homatics product.

143 Best of Show Awards 2023 | NAB Show
FOR MORE INFO
HOMATICS AND THX LTD.

Tier 1 live production in the cloud requires frame-accurate, deterministic, low-latency, redundant and responsive interconnected systems at large scale. So far, there have been no cloud solutions that satisfy those requirements without compromising quality, latency and reliability.

Instead of simply shifting on-premises workflows to the cloud — thereby giving up some quality and latency — Matrox ORIGIN tackles the problem at the infrastructure level.

This disruptive technology is a software-only, vendor-neutral, asynchronous framework that runs on IT infrastructure. It can achieve highly scalable, responsive, low-latency, easy-to-control and frame-accurate broadcast media facilities for both on-premises and cloud deployments.

What Makes Matrox ORIGIN Disruptive?

• Asynchronous processing of uncompressed video for live production.

• Cloud-native, not a “lift and shift.”

• Operates on a single host or across multiple hosts within the distributed environment, making it equally effective on-premises as in the cloud.

• Vendor-neutral, so users can choose best-of-breed components from anyone without being locked into a specific ecosystem.

• Built-in, frame-accurate redundancy and live migration, even across multiple AWS Availability Zones.

• Redundancy requires no user intervention.

With the Matrox ORIGIN as the underlying infrastructure, developers can focus resources on what differentiates them. Their products will run equally well on a single host or in distributed systems on-premises or in the public cloud. They can develop once and deploy many times.

Meanwhile, broadcasters can operate, build and develop scalable, best-of-breed solutions for public or private clouds without being restricted to a particular vendor. Broadcasters can make better use of their on-premises resources, offload peak needs into the cloud, run exclusively in the public cloud — or all of the above — at whatever pace makes sense for their business.

Unique Features and Benefits:

Asynchronous — Matrox ORIGIN operates asynchronously to process and interconnect uncompressed data as fast as possible and as soon as possible, removing all delays associated with synchronous interconnects. This enables low-latency, uncompressed and highly responsive systems that make large-scale, Tier 1 live production in the cloud possible.
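The principle of asynchronous, as-soon-as-possible frame handling can be sketched in a few lines; the toy pipeline below (not Matrox code) simply forwards each frame the moment it is available instead of waiting on a synchronous tick.

```python
# Toy sketch of asynchronous frame handling: each stage consumes a frame the
# moment it arrives instead of waiting for a synchronous interconnect tick.
import asyncio

async def camera(out_q, frames=5):
    for n in range(frames):
        await asyncio.sleep(0.01)        # stand-in for capture time
        await out_q.put({"frame": n})    # hand off immediately

async def processor(in_q, out_q):
    while True:
        frame = await in_q.get()
        frame["processed"] = True        # stand-in for uncompressed processing
        await out_q.put(frame)
        in_q.task_done()

async def router(in_q):
    while True:
        frame = await in_q.get()
        print("routed", frame)           # hand-off to a destination
        in_q.task_done()

async def main():
    q1, q2 = asyncio.Queue(), asyncio.Queue()
    workers = [asyncio.create_task(processor(q1, q2)),
               asyncio.create_task(router(q2))]
    await camera(q1)
    await q1.join()
    await q2.join()                      # drain the pipeline
    for w in workers:
        w.cancel()
    await asyncio.gather(*workers, return_exceptions=True)

asyncio.run(main())
```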

Single-Frame Control — Matrox ORIGIN provides simple, granular control of a single frame. Any single unit can be frame-accurately routed or processed anywhere within the distributed and nonblocking environment of the Matrox ORIGIN framework — resulting in great flexibility with guaranteed AV synchronization that hasn’t been possible before.

Integrated Clean Routing and Switching — This is possible because Matrox ORIGIN controls every frame. Signal-path compensation delays are no longer relevant, and any frame can reach any destination frame-accurately on a large-scale, uncompressed and distributed fabric.

On-Air Scalability — Matrox ORIGIN can provision or decommission compute to closely match dynamic operational processing needs with infrastructure costs — while on the air. It can live-migrate software processing at runtime without dropping a single frame or disrupting the control system.

Built-In Redundancy — Matrox ORIGIN provides the infrastructure to develop and operate stateless media-processing services with granular protection of every frame. The framework manages redundancy and requires no additional intervention. It also supports redundancy across multiple AWS Availability Zones to address mission-critical resilience requirements.

Simple APIs — So developers can build best-of-breed offerings for broadcasters to choose from.

144 Best of Show Awards 2023 | NAB Show
FOR MORE INFO
MATROX VIDEO Matrox ORIGIN

Quantum Myriad

Myriad is a new all-flash, scale-out file and object storage software platform ideally suited to the evolving needs of VFX, animation and rendering; the increasing demand for AI and ML content creation and enhancement tools; and new markets such as AR/VR, live production with LED video volumes and digital twinning.

Legacy NAS storage systems provide inconsistent performance, are complex and difficult to scale, and are often deployed in islands that add workflow complexity and management burden. Their slow performance makes rendering a long, painful process.

Instead, Myriad makes full use of readily available NVMe storage and RDMA to deliver the extreme throughput (tens of GBps) and high IOPS (hundreds of thousands) needed for cutting-edge animation and multiplatform workflows without the drawbacks or design limitations of legacy systems. Myriad requires no custom hardware, so as commercially available NVMe storage servers gain higher capacities, higher performance and lower cost, they can be adopted, giving flexibility and adaptability as the business evolves.

Myriad lets you consolidate multiple animation, VFX and rendering workflows into a single fast system that serves all departments, clients, workstations and workflows, including rendering pipelines and AI and ML applications. Myriad delivers consistent performance for all users, stores the large number of small files common in these workflows efficiently, and serves rendering pipelines without impacting other users.

Myriad is built with cloud-native technologies like microservices and Kubernetes, making it extremely flexible and easy to use; no specialized IT or networking experience is required, and it can be easily deployed on-premises or in the cloud. Myriad delivers this performance in a smaller footprint requiring less power, cooling and fewer components, reducing networking complexity, administration overhead and operational costs. Myriad’s powerful data services deduplicate and compress data to deliver an effective capacity of up to 3x the raw storage capacity. Zero-impact snapshots and clones protect against operator error.
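A minimal sketch of how block deduplication and compression produce an "effective capacity" larger than the physical capacity, using a hypothetical block size and sample data rather than Myriad internals:

```python
# Minimal sketch of block-level deduplication plus compression and the
# "effective capacity" ratio it yields (block size and data are illustrative).
import hashlib
import zlib

BLOCK = 4096  # hypothetical block size in bytes

def ingest(data: bytes, store: dict) -> int:
    """Store unique, compressed blocks keyed by content hash; return bytes written."""
    written = 0
    for off in range(0, len(data), BLOCK):
        block = data[off:off + BLOCK]
        key = hashlib.sha256(block).hexdigest()
        if key not in store:                 # dedup: identical blocks stored once
            store[key] = zlib.compress(block)
            written += len(store[key])
    return written

store = {}
logical = (b"A" * BLOCK * 3) + (b"render pass 1 " * 600)  # repetitive sample data
physical = ingest(logical, store)
print(f"logical {len(logical)} B, physical {physical} B, "
      f"effective ratio {len(logical) / physical:.1f}x")
```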

Myriad Benefits

• Consistent, fast performance of up to tens of GBps and hundreds of thousands of IOPS to serve every creative department’s needs, including rendering, on a single system, whether deployed on-premises or in the cloud.

• Modern microservices architecture orchestrated by Kubernetes to deliver simplicity, automation and resilience at any scale.

• Runs on standard NVMe all-flash storage servers, so you can quickly adopt the latest hardware capacities and form factors and adapt your storage infrastructure to meet future requirements.

• A Myriad cluster can start with as few as three NVMe all-flash storage nodes, and its architecture enables scaling to hundreds of nodes in a single distributed, scale-out cluster.

• No specialized IT or networking knowledge needed — powerful automated storage, networking and cluster management automatically detects, deploys and configures storage nodes and manages the networking of the internal RDMA fabric.

• Highly efficient data storage with intelligent deduplication, compression and self-healing and self-balancing software to respond to system changes.

• Simple, powerful data protection and recovery with snapshots, clones, snapshot recovery and rollback capabilities to protect against user error or ransomware.

145 Best of Show Awards 2023 | NAB Show
QUANTUM
FOR MORE INFO

Centra Gateway is the core functionality of Sencore’s new Centra software platform; namely, the reception, transmission and conversion of internet protocols for optimized distribution of video. Protocols supported will include RIST, SRT, Zixi and HLS, along with MPEG over IP, with the aim of taking advantage of the reliability, low latency, packet-loss compensation and forward error correction offered by the RIST and SRT protocols.

Far more than a mere tool for protocol translation, Centra Gateway is designed to collect, convert, aggregate, orchestrate and distribute as required. Continuous latency metrics across network links allow Centra to optimize transport according to available resources and need.
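The idea of latency-aware transport optimization can be illustrated with a small sketch that picks the healthiest link from hypothetical per-link metrics; the link names, figures and cost formula are invented for the example and are not Centra logic.

```python
# Sketch of latency-aware link selection: given periodic measurements per
# network link, prefer the healthiest path that still has headroom for the
# stream bitrate (all figures are hypothetical).

links = [
    {"name": "primary-fiber", "rtt_ms": 12.0, "loss_pct": 0.0, "free_mbps": 400},
    {"name": "backup-5g",     "rtt_ms": 38.0, "loss_pct": 0.4, "free_mbps": 120},
    {"name": "internet-vpn",  "rtt_ms": 95.0, "loss_pct": 1.2, "free_mbps": 600},
]

def pick_link(links, needed_mbps):
    candidates = [l for l in links if l["free_mbps"] >= needed_mbps]
    # Simple cost: latency plus a heavy penalty per percent of packet loss.
    return min(candidates, key=lambda l: l["rtt_ms"] + 50 * l["loss_pct"])

print(pick_link(links, needed_mbps=80)["name"])   # -> primary-fiber
```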

Centra Gateway has the capability to monitor and analyze the entire network, which provides insight into every mile of video transport, flagging a significant range of potential error types, as well as providing historical analytics for more strategic assessment of network performance over time. Through monitoring by exception, Centra will notify users and provide a streamlined path to rectification, linking to the correct department, site, engineer or solution required.

Thanks to the latest software and API tools, Centra Gateway is fully scalable to the needs of the broadcaster, at a range of bitrates depending on the appliance model or the capacity of the cloud platform selected. Future iterations of the Centra platform will integrate an increasing number of third-party devices, protocols and services through REST API, acting as a single point of coordination through which broadcasters can access all of the components needed for effective, efficient network management.

Centra Gateway aims to achieve this in a way that is accessible, intuitive and highly usable, even for those without a technical or engineering background. The key point of differentiation is the low learning curve associated with its use. Through an easily understood GUI, logical and clear workflows, and the deployment of automation, prompts and wizards where required, Centra Gateway allows broadcasters to put eyes — and hands — on all components of their network, quickly and easily.

146 Best of Show Awards 2023 | NAB Show SENCORE
FOR MORE INFO

ZIXI D2C VIDEO GATEWAY

Distribution platforms need to flexibly onboard live content channels from a diverse network of content partners with complex arrays of delivery methods, large differences in quality and stream consistency, and varied mechanisms to protect and monetize programming. Built in collaboration with OTT providers such as Fubo, Paramount+ and Apple TV+ to streamline operations, the Zixi D2C Video Gateway is a comprehensive suite of tools that simplify content partner onboarding. Easily deployed and scaled in any operating environment, it includes everything needed to consolidate ingest of any live programming regardless of the method, protocol or format content partners use for delivery.

The D2C Video Gateway pulls live feeds from partner content origins, out of hosted meet-me rooms or spins up redundant entry points for partners to push to. With support for more than 18 IP video transport protocols and formats, deep compliance inspection and integrated processing for content normalization, the D2C Video Gateway provides universal interoperability and simplified onboarding operations that enable video operations teams to add and manage live linear and event channels.

A modular component of Zixi’s Software-Defined Video Platform (SDVP), the D2C Video Gateway continuously validates live channel quality and compliance, normalizes content partner feeds to match downstream production workflow requirements, centralizes channel management and delivers the real-time, actionable insights that video teams require to efficiently scale operations and maximize revenue.

The SDVP is the only live streaming software platform that offers users a wide range of protocols in addition to the pioneering Zixi protocol for the delivery of live video. The Zixi protocol is congestion- and network-aware, dynamically adjusting to varying network conditions with advanced forward error correction techniques that enable error-free video over any IP network. It features ultra-low latency and dramatic throughput, compute and efficiency improvements that realize extraordinary cost reductions. The SDVP delivers unparalleled live video delivery performance running over the Zixi Enabled Network, which is “the industry’s largest ecosystem” and consists of more than 1,000 media companies and 400 technology partners globally.

The D2C Video Gateway also enables video operations teams to engage much faster and with far fewer resource costs, delivering the scale, performance and agility that D2C platforms require. With it, distributors can manage the ingest of live linear and event programming, conforming to their existing workflows so that they can deliver to regional, national and global audiences simply and efficiently.

The D2C Video Gateway organizes content partner feeds into highly intuitive and dynamic operational dashboards to deliver actionable insights into current performance and analyze health trends for specific channels over time. Problem channels can automatically be quarantined before they impact downstream systems, and changes to program structure, performance or quality can automatically generate a richly decorated root-cause analysis (RCA) report, complete with leading, trailing and impacted-object indicators.
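A simplified illustration of the quarantine idea, with invented metrics and thresholds rather than Zixi's actual rules:

```python
# Illustrative rule for automatically quarantining a problem channel before it
# reaches downstream systems; thresholds and channel stats are hypothetical.

THRESHOLDS = {"loss_pct": 2.0, "black_frames_pct": 5.0, "silence_sec": 30}

def should_quarantine(stats: dict) -> bool:
    return any(stats.get(metric, 0) > limit for metric, limit in THRESHOLDS.items())

channels = {
    "partner-news-1":  {"loss_pct": 0.1, "black_frames_pct": 0.0, "silence_sec": 0},
    "partner-sport-4": {"loss_pct": 3.8, "black_frames_pct": 0.0, "silence_sec": 0},
}

for name, stats in channels.items():
    if should_quarantine(stats):
        print(f"quarantine {name}: {stats}")   # hold the feed and open an RCA ticket
```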

The D2C Video Gateway is designed to give operators the flexibility to rapidly onboard video channels delivered in any protocol, over any network, process into any format, and deliver to any target.

147 Best of Show Awards 2023 | NAB Show ZIXI
FOR MORE INFO

AMAZON WEB SERVICES (AWS)

AWS Elemental MediaConnect Gateway

AWS Elemental MediaConnect Gateway is a new cloud-connected software application to transmit live video between on-premises multicast networks and AWS. Part of AWS Media Services, MediaConnect Gateway improves operations in hybrid environments, providing monitoring, security and management of video feeds from the AWS Management Console. Customers can use it to build end-to-end live video contribution and distribution workflows in AWS at scale for seamless integration into their on-premises infrastructure.

Typically, delivery of live-video multicast streams between datacenters and the cloud requires investment in specialized third-party hardware and software or a custom solution, which can be costly and difficult to support. With MediaConnect Gateway, live video stream transport in on-premises datacenters can be viewed, monitored and controlled directly from the AWS Management Console or using the MediaConnect API.

For video contribution, content providers that originate live linear channels on premises might send these feeds to partners around the globe, using MediaConnect Gateway as a bridge between their multicast, on-premises network infrastructure and the cloud. Each MediaConnect Gateway instance can subscribe to one or more multicast groups, where a group represents either a single channel or multiple channels multiplexed together in a multiprogram transport stream (MPTS). Once subscribed, MediaConnect Gateway converts the network traffic to unicast, adds encryption and sends the video to a MediaConnect flow. Then a live streaming application is created using the feed, processing and delivering the video to end viewers using AWS Elemental MediaLive, AWS Elemental MediaPackage and Amazon CloudFront or another software application.
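To make the multicast-to-unicast bridging step concrete, a bare-bones sketch using standard Python sockets is shown below; the multicast group, port and destination address are placeholders, and encryption and the hand-off to a MediaConnect flow are omitted.

```python
# Sketch: join a multicast group carrying an MPTS and re-emit each datagram as
# unicast toward a cloud ingest endpoint (addresses are hypothetical).
import socket
import struct

MCAST_GRP = "239.1.1.1"                 # hypothetical multicast group
MCAST_PORT = 5000
UNICAST_DST = ("198.51.100.10", 6000)   # hypothetical cloud entry point

# Receive socket joined to the multicast group.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", MCAST_PORT))
mreq = struct.pack("4s4s", socket.inet_aton(MCAST_GRP), socket.inet_aton("0.0.0.0"))
rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Plain unicast send socket toward the cloud.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    packet, _ = rx.recvfrom(2048)       # one UDP datagram of transport-stream data
    tx.sendto(packet, UNICAST_DST)      # re-emit as unicast (encryption omitted)
```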

For video distribution, customers might use MediaConnect Gateway to build sophisticated networks that span hundreds or thousands of end points on premises. For example, a broadcaster might send 24/7 live linear content to hundreds of affiliates, using MediaConnect Gateway to seamlessly bridge on-premises multicast networks at the source and destination. The result is a cloud-managed solution with improved operational agility and decreased cost compared to a satellite-based workflow.

MediaConnect Gateway runs inside Amazon Elastic Container Service (ECS) Anywhere, a service that allows customers to manage ECS containers on their own servers. Once ECS Anywhere has been installed on the customer’s VM or bare-metal server, they can download MediaConnect Gateway as a software container, and all video feed management can be handled in the AWS Management Console or using the MediaConnect API. When an on-premises multicast video feed is selected, the video signal is transported as unicast to the cloud using AWS Elemental MediaConnect, a service that combines the dependability of satellite and fiber-optic transport with the user-friendliness of IP-based networks.

Once in MediaConnect, video can be sent to other AWS Regions, processed using AWS Media Services or other applications, shared with partners and affiliates, and delivered to other on-premises MediaConnect Gateway locations. Integration of MediaConnect Gateway into Amazon CloudWatch lets customers monitor the health of feeds without separate tools.

MediaConnect Gateway gives customers full control over deploying and monitoring hybrid live video workflows, saving valuable time and resources so they can focus on their core business, making it a prime Best of Show Award candidate.

148 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

Zype Playout

Content owners often have an extensive library of video content that they don’t monetize to its full potential. Without an easy way to increase the viewership and value of their existing content, they miss out on many opportunities.

Zype Playout enables content owners to transform live and on-demand video into digital linear channels for millions of viewers. Built for broadcasters, digital publishers, consumer brands and developers, Zype Playout’s intuitive tools make operating playout a breeze. The user-friendly interface features drag-and-drop scheduling, dynamic ad-break setup and insertion capabilities, and flexible distribution to FAST or owned-and-operated destinations.

We’ve also just added two new capabilities to Playout: Channel Branding Overlay and Dynamic Graphics Overlay. With these features, linear publishers can add engaging, broadcast-quality graphics that elevate their playout channels without needing to contract with third-party graphics software. With Channel Branding Overlay, it’s now possible to insert a single static image, such as a logo, at the channel level. Dynamic Graphics Overlay allows users to design graphics directly in Playout that are powered by data and can be automatically inserted onto the timeline, providing viewers with key programming information and making channels look and feel like premium television. These new built-in graphics tools allow content owners to encourage more engagement with their channel programming and take advantage of new revenue opportunities.

Unlike other playout solutions, programming changes made in Zype Playout go live in seconds, so making changes on the fly is efficient and optimizing programming in real time is easy. Other differentiators of Zype Playout include:

• A horizontal timeline UI for easy scheduling

• Smart content organization tools for automated programming

• Quick updates for real-time programming optimization

• An API-first nature that allows for customization

• Self-service channel creation and distribution

Getting started with Zype Playout is as simple as importing an existing content library, scheduling programming, inserting ad breaks, and selecting distribution endpoints. Whether you want to publish a continuous 24/7 linear channel or create a short-term pop-up channel for a special event, Zype Playout offers a range of tools that make it easy to get started streaming.
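As a rough sketch of the scheduling logic such a channel relies on (the asset names, durations and ad-break length below are invented, not Zype data), a 24/7 loop can be built by cycling a library onto a timeline:

```python
# Sketch of simple linear-channel scheduling: cycle a VOD library onto a
# timeline and drop an ad break between assets (all values are illustrative).
from datetime import datetime, timedelta

library = [("Episode 101", 22), ("Episode 102", 24), ("Short Doc", 11)]  # minutes
AD_BREAK_MIN = 2

def build_schedule(start: datetime, hours: float):
    end, t, schedule, i = start + timedelta(hours=hours), start, [], 0
    while t < end:
        title, mins = library[i % len(library)]
        schedule.append((t, title))
        t += timedelta(minutes=mins)
        schedule.append((t, f"Ad break ({AD_BREAK_MIN} min)"))
        t += timedelta(minutes=AD_BREAK_MIN)
        i += 1
    return schedule

for slot, item in build_schedule(datetime(2023, 6, 1, 6, 0), hours=2):
    print(slot.strftime("%H:%M"), item)
```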

With Zype Playout, content owners can find new life in assets and further monetize existing content libraries by building digital linear channels of curated live or VOD with self-service tools.

Companies like WMX (Warner Music Experience), VEVO, Nine Network, Spin Master, Selkirk and Night Flight have turned to Zype’s playout solution to curate and distribute linear video channels.

149 Best of Show Awards 2023 | NAB Show BACKLIGHT
FOR MORE INFO

Zype Apps Creator

As content providers look to engage more eyes across a wider variety of streaming platforms and devices, controlling content distribution through owned streaming applications has never been more critical.

With Zype Apps Creator, content owners can maximize the reach of their media and take advantage of new revenue streams through creation of custom-branded applications.

Zype Apps Creator is Backlight’s turnkey solution for media and entertainment companies looking to build and launch beautiful and performant, enterprise-grade OTT apps across digital platforms, including web, mobile, smart TVs, connected devices, gaming consoles and more.

A no-code app-building platform, Zype Apps Creator allows content owners to focus on content creation and programming by eliminating the need for coding experience or expertise. With Zype Apps Creator, even operational teams can quickly build and replicate the production of high-quality apps without needing engineering resources in-house or having to contract expensive or unreliable third parties.

Zype Apps Creator supports a broad range of platforms and offers market-tested enterprise-grade features, such as:

• Support for both video-on-demand content offerings and live streaming, whether for event-based entertainment, FAST channel distribution, or playout channel distribution

• A user-friendly interface to quickly design and replicate cross-platform apps, along with a responsive dashboard with tools to organize content into sections, collections, playlists, catalogs and more

• Support for different monetization models like SVOD, AVOD, TVOD and hybrid-model approaches

• Multiregion configuration and multilingual support, which allow apps to look, feel and behave differently in different regions

• Sophisticated security and analytics features, including Digital Rights Management capabilities and support for Google Analytics 4

• Access to a robust first-party data and analytics platform, which provides cross-device streaming analytics and offers a holistic picture of streaming metrics to help owners create smarter content and distribution workflows

• Compatibility with the latest streaming OTT platforms and devices, including Apple TV, Android TV, Roku, FireTV, Sony, Tizen, Samsung, Vizio and more

Competitive solutions typically require customers to perform complex coding and wait on lengthy custom development cycles, and they offer only limited platform support. Apps Creator distinguishes itself by enabling users to launch in-market quickly with beautiful, custom-branded, marketplace-compliant apps, with no coding experience needed. Media enterprises looking to rapidly and reliably expand their reach into new markets and deliver content through multiscreen experiences turn to Apps Creator. To date, Apps Creator has launched over 1,400 apps for companies like Conde Nast, Barstool Sports, Outside TV, Harvard Business Review and more.

150 Best of Show Awards 2023 | NAB Show BACKLIGHT
FOR MORE INFO

COBALT DIGITAL

SafeLink-8TS-VM (Virtual Machine)

The Reliable Internet Stream Transport (RIST) series of specifications from the Video Services Forum (VSF) provides a set of best-in-class mechanisms for content contribution over the internet. Cobalt Digital is an active member of the RIST Activity Group, and Cobalt products provide a rich set of RIST features.

The low latency, advanced security and high reliability of RIST make it the ideal protocol for cloud ingress and egress. To provide this functionality in the cloud, a gateway is needed. Such a gateway would need to convert between RIST and the simpler UDP/RTP protocols used inside the cloud.

The Cobalt RIST product line includes the SafeLink Gateway, an openGear card that can provide eight channels of conversion between RIST and plain UDP/RTP. SafeLink is compression-agnostic and can protect any type of transport stream. Each input channel can support independent/unrelated primary/backup streams, or SMPTE ST 2022-7 seamless switching or bonding. Each output can replicate the outgoing stream to up to eight destinations. SafeLink is ideal for providing link protection to existing encoding/decoding infrastructures.
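The principle behind ST 2022-7-style seamless protection can be sketched simply: keep the first copy of each RTP sequence number to arrive on either path. The packet list below is synthetic, and the code is only an illustration of the concept, not SafeLink's implementation.

```python
# Sketch of seamless protection switching: two identical RTP streams arrive over
# different paths, and the receiver keeps the first copy of each sequence
# number, so a loss on one path is invisible to the output.

def seamless_merge(arrivals):
    """arrivals: iterable of (path, rtp_seq, payload) in arrival order."""
    seen, output = set(), []
    for path, seq, payload in arrivals:
        if seq not in seen:              # first copy wins, duplicates are discarded
            seen.add(seq)
            output.append((seq, payload, path))
    return sorted(output)                # reorder by sequence number for playout

arrivals = [
    ("primary", 1, "pkt1"), ("backup", 1, "pkt1"),
    ("primary", 2, "pkt2"), ("backup", 2, "pkt2"),
    ("backup", 3, "pkt3"),               # seq 3 was lost on the primary path
    ("primary", 4, "pkt4"), ("backup", 4, "pkt4"),
    ("primary", 5, "pkt5"), ("backup", 5, "pkt5"),
]
merged = seamless_merge(arrivals)
print([seq for seq, _, _ in merged])     # -> [1, 2, 3, 4, 5] with no gap
```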

SafeLink can also support up to eight RIST Main Profile tunnels, each of which can support any arbitrary number of streams. The SafeLink Gateway is designed to provide RIST functionality to legacy devices.

Cobalt is now releasing a cloud version of the SafeLink Gateway, available as an AWS instance — no hardware purchase is required, and it can run in any virtual machine. The cloud version has the exact same functionality as the openGear card and exposes the same configuration interface using the DashBoard application. The user can tailor the level of performance by selecting one of the various CPU architectures supported. Using Cloud SafeLink, customers can:

• Reliably and securely receive content for further processing in the cloud from any RIST device, both from Cobalt and other vendors.

• Provide primary/backup workflows to increase reliability.

• Reliably and securely transmit content from the cloud to any RIST device, both from Cobalt and other vendors.

Cloud processing is great, but content needs to get there, and once processed, it needs to come back. Some users have large amounts of dedicated bandwidth for this and can use simple transfer protocols, especially if they are working with files. However, those working with live content over traditional, non-dedicated internet links need a more advanced protocol (RIST), in combination with advanced compression. SafeLink can operate as the on-ramp and off-ramp to and from the cloud in such situations.

Many vendors offer encoders/decoders with RIST support, but something needs to “catch” the content as it comes in, and once processed, reliably and securely send it where it needs to go. Moreover, customers may have redundant processing paths, and a logical “switch” between them is needed at the cloud. The openGear version of SafeLink can do this at the customer premises, and now SafeLink Cloud can do the exact same functionality on the cloud side in a cost-effective manner, since instances can be spun up and down on demand in a pay-as-you-go model.

151 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

Reflektor On-Premise & Cloud Signal Processor

Reflektor, Evertz’ software-as-a-service (SaaS) IP distribution platform, is ideal for providers of live/linear services, cloud applications or OTT. Reflektor is a microservice-based signal processor for onboarding and normalizing video transport streams, transcoding, and replicating streams.

Reflektor represents a key element within Evertz’ remote production solutions by providing customers with a powerful, low-bandwidth cloud on-ramp option for easy and convenient contribution of high-quality video with ultra-low latency for production and streaming applications. Reflektor addresses the challenges found in today’s cloud workflows — handling multiple transport formats and codecs while delivering incoming feeds to different software instances.

For example, some remote locations may use Secure Reliable Transport (SRT) to send H.264 streams with Evertz XPS, whereas another remote location will use a third-party encoder to stream using MPEG-2 transport streams. A customer may have created a cloud production suite in the public cloud that is expecting only SRT. Reflektor receives all the incoming streams, normalizes them with video and audio processing, converts the MPEG-2 TS to SRT and sends copies of the streams to the cloud instances.

Reflektor is a valuable tool in managing the expanding number of signal formats (MPEG-TS, NDI, ST 2110, HLS, MPEG-DASH, etc.) that can be produced by a traditional broadcast. Reflektor uses licensed microservices in the cloud to normalize signal types to best suit the needs of the end user or final application, making it an ideal cloud solution for UHD/4K field contribution, remote production, return feed monitoring, remote collaboration and cloud production.

Reflektor’s versatility and ability to transcode, translate and replicate IP flows in and out of the cloud make it a valuable tool for everyone who wants to transition to cloud workflows.

With Reflektor, it is easy to simultaneously distribute, stream and playout multisignal content directly to broadcast centers, remote operators, CDNs and more, which opens many creative possibilities.

Any signal type can be accommodated, as can all video, audio or data content required for any broadcast application, including monitoring, encoding/decoding, TS muxing, duplication, etc. In combination with an XPS edge device, venues can use common transport protocols such as SRT, Reliable Internet Stream Transport (RIST) and Zixi to send a low-bandwidth HEVC signal to Reflektor for immediate transcoding into a format best suited for the endpoint. Reflektor can also accommodate bidirectional support for the XPS encoder/decoder, meaning this process can be replicated in reverse, ensuring video content is distributed instantaneously to and from the cloud using reliable transport protocols.

152 Best of Show Awards 2023 | NAB Show
EVERTZ
FOR MORE INFO

Ease Live Interactive Graphics

Ease Live is a software-as-a-service (SaaS)-based interactive graphics platform that gives live sports, live events and broadcast customers the tools they need to create, build and distribute overlays to millions of end users on multiple platforms in real time. Already used by sports leagues, broadcasters and content providers around the world, the platform delivers edge-rendered graphic overlays that add interactive experiences to existing over-the-top (OTT) services and streaming applications. Ease Live drives engagement and monetization opportunities by giving content and rightsholders the opportunity to “gamify” the viewer and fan experience. Graphical content can be overlaid onto live streams, allowing viewers to interact with in-game live statistics, watch parties, polls and trivia, and sponsored betting and wagers — all without having to leave the event. This provides opportunities for monetization with new ad revenues that have not previously been available.

Customers using Ease Live have seen double-digit growth in their audience engagements. The addition of interactive live game stats has increased the number of live stats impressions per game (i.e. the count of how many times users launch the live stats overlay) by 60% over the previous year. Response rates have also increased, with up to 60% responses achieved on factoids and up to 68% responses achieved on polls. The additional support for watch parties, where users can invite their friends to a live video chat during the game, has increased viewership by 53% in terms of unique viewers per game, with the average watch party session lasting more than 30 minutes.

The Ease Live platform includes the powerful Ease Live Sync Server, which makes it very simple for customers to synchronize their live broadcast moments with interactive graphics in a frame-accurate manner. Getting the timing right is crucial for unlocking the commercial potential that interactive live streaming offers, as interactive content can be placed in relation to the game action and provide valuable clicks and conversions.

Ease Live leverages Evertz’ years of experience in timing and synchronization to bridge the production timing and the OTT delivery platform to ensure frame-accurate placement of interactive graphics on top of the customer’s video player.
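One common way to achieve this kind of frame-accurate placement is to stamp each interactive event with media time rather than wall-clock time; the sketch below illustrates that idea with invented numbers and is not the Ease Live Sync Server protocol.

```python
# Sketch of frame-accurate overlay timing: the production side stamps each
# interactive event with the media time (seconds into the stream) it belongs
# to, and the player fires the overlay when its playhead reaches that stamp,
# so end-to-end stream latency does not skew the placement.
FPS = 50  # hypothetical production frame rate

def frames_to_media_time(frame_count: int) -> float:
    return frame_count / FPS

def seconds_until_overlay(event_media_time: float, player_media_time: float) -> float:
    """How long the player should wait before rendering the overlay."""
    return max(0.0, event_media_time - player_media_time)

goal_event = {"type": "poll", "media_time": frames_to_media_time(181_050)}
player_playhead = 3615.7   # seconds of media the viewer has currently decoded

delay = seconds_until_overlay(goal_event["media_time"], player_playhead)
print(f"show '{goal_event['type']}' overlay in {delay:.2f} s of playback")
```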

This synchronization of the live broadcast and interactive overlay graphics also addresses concerns over latency and opens opportunities for free-to-play live predictions of in-game occurrences. Delivering a single-screen experience, where fans can watch and simultaneously play a free-to-play game, is unique in the streaming industry.

An additional benefit to Ease Live is the ability to collect first-party data and analytics, which are generated from powerful cloud-based data tools. The knowledge gathered about user behaviors can be used to inform broadcasters on what content is resonating with audiences, and this can be used to identify and target specific audience demographics with paid content or advertisements.

In addition to the existing support for mobile and web-based touch devices, Ease Live also offers interactive experiences developed for connected TV devices. These allow the viewer to engage with content using their television’s remote control device or their mobile devices as a “true” second screen.

153 Best of Show Awards 2023 | NAB Show EVERTZ
FOR MORE INFO

As a sports fan, were you among those left waiting for the Super Bowl to appear on your smart TV screen? Did your DAZN app crash during Serie A matches?

As a series fan, did similar inconveniences happen to you on Netflix or Amazon Prime?

Over the last few years, the OTT streaming market has faced technologically demanding requirements driven by rapidly increasing demand. As the number of content streaming platforms has skyrocketed, fueled by historically fast adoption by viewers, the substantial increase in online video consumption has generated major scaling challenges of a three-fold nature:

1. The exponential demand for online video content (79% of internet data) yet massive frustration (23% of viewers fully satisfied)

2. Consequent server crashes (many major crashes, especially during popular sports events) forcing future investment (data center deployments to double by 2030, working 80% of the time at 40% of their capacity)

3. Gigantic energy consumption of streaming (570 TWh in 2021, 2464 TWh by 2030).

Why is the servers’ role so problematic?

Across the sector, dependence on CDN servers is the prevailing streaming standard, yet it stands out as one of the sector’s key drawbacks. In response to recurrent server crashes caused by massive simultaneous viewer connections, CDN companies invest heavily in new data center sites, breeding the specter of further crashes, along with extra consumption of electricity for operation and water for temperature regulation.

All this is spurring the need for an optimized OTT architecture.

QUANTEEC is a French startup offering any actor in the streaming sector (broadcasters, OTT platform providers, SVoD/AVoD services, etc.) an innovative technology capable of solving video streaming scaling issues, while at the same time enabling them to significantly save costs and reduce their energy footprint.

The QUANTEEC motto is “More with Less,” and to achieve it the technology relies on three fundamental optimization principles: performance, cost savings and energy reduction.

The QUANTEEC web3-inspired technology shifts the prevailing paradigm and turns each viewer into a “smart and virtuous restreamer.” Doing so allows the audience to scale up almost infinitely without relying on the deployment of additional servers. In other terms, with QUANTEEC, the more viewers there are, the more potential re-streamers there are and the fewer gigantic servers need to be installed, with compelling results: our clients’ monitoring metrics reveal up to 75% of video data transmitted from viewers (with the highest quality for at least 90% of them), more than 40% energy reduction and at least 25% lower costs.
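A back-of-the-envelope sketch of the peer-assisted delivery arithmetic, using invented audience figures rather than QUANTEEC measurements:

```python
# Back-of-the-envelope sketch of peer-assisted delivery: the larger the share
# of video data viewers exchange among themselves, the less the CDN origin has
# to serve. All inputs are hypothetical.

viewers = 100_000
stream_mbps = 5.0          # per-viewer bitrate
peer_share = 0.75          # fraction of data fetched from other viewers

total_demand_gbps = viewers * stream_mbps / 1000
server_gbps = total_demand_gbps * (1 - peer_share)

print(f"total demand: {total_demand_gbps:.0f} Gbps")
print(f"served by CDN/origin: {server_gbps:.0f} Gbps "
      f"({peer_share:.0%} offloaded to viewers)")
```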

The QUANTEEC technology is CDN-agnostic, DRM-agnostic, ultra-low-latency-compliant and fully compatible with any HLS or DASH video player.

Now, let’s think forward: How much server deployment would be saved? How much energy? How much carbon? How much water? And how much more cost could be avoided?

The streaming market thus has a profitable way forward, with no tradeoffs, just plain common sense. A demo is available online at quanteec.com/demo, with a free trial, five-minute installation and immediate benefits.

154 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

QUICKPLAY Churn Prevention Module

The endless array of OTT choices and the lack of long-term subscriber agreements have changed the rules of data analytics. It’s no longer enough to know who your subscribers are and what they want. What’s equally important is knowing when their loyalties are wavering and what can be done to keep them in the fold.

Quickplay’s Churn Prevention Module leverages the company’s rich data science and analytics capabilities in conjunction with its end-to-end platform to help OTT service providers predict and reduce churn through analysis of subscriber and watch data using AI/ML techniques.

The Churn Prevention Module can leverage Google Cloud tools and partner products to:

• Give streaming providers insights into “survival curves” that can forecast how long consumers are likely to stay on the service based on their subscription levels, previous renewal track record, video watch patterns and more.

• Provide insights into reasons for subscriber churn, which can be used to prevent cancellations.

• Target satisfied subscribers or cohorts interested in upgrades, new content, or other promotions to accomplish engagement and monetization objectives.

Parks Associates placed the OTT subscriber churn rate at 44% last year. With subscriber loyalty at risk every day, it’s more important than ever for OTT providers to pinpoint when consumers are most likely to make a service change — and to take corrective action. Quickplay uses historical patterns of video viewing, subscriber behavior within the application, and subscription details — in this case, from the Evergent subscriber management system — to give customers the power to understand which accounts are at risk at any given time and what needs to be done to keep them from disconnecting.
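As an illustration of how such risk scoring can work in principle, the sketch below applies a hand-written logistic model to a few engagement features; the features, weights and threshold are invented, and a real deployment would learn them from historical subscriber data rather than hard-coding them.

```python
# Illustrative churn-risk score: a hand-written logistic model over a few
# engagement features (weights and threshold are made up for the sketch).
import math

WEIGHTS = {
    "days_since_last_watch": 0.15,
    "avg_weekly_hours": -0.40,
    "failed_payments": 1.2,
    "months_subscribed": -0.05,
}
BIAS = -1.0
ALERT_THRESHOLD = 0.6

def churn_risk(features: dict) -> float:
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))          # logistic squash to a 0..1 risk score

subscriber = {"days_since_last_watch": 12, "avg_weekly_hours": 0.5,
              "failed_payments": 1, "months_subscribed": 3}
risk = churn_risk(subscriber)
print(f"risk={risk:.2f}", "-> trigger retention offer" if risk > ALERT_THRESHOLD else "")
```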

Quickplay is Google Cloud’s 2021 Industry Solution Media & Entertainment Partner of the Year, and the Churn Prevention Module is powered by Google BigQuery and the Looker business intelligence platform. Leveraging the power of Google tools with Quickplay’s cloud-native platform, the Churn Prevention Module can combine video viewing information such as device, time of day, content genre, session duration and subscriber behavior with third-party data such as purchase trends, payment methods and other variables to create accurate dashboarding of each subscriber’s journey.

Machine learning capabilities can target subscribers for retention strategies such as promoting relevant content, trial upgrades or presenting other offers when they cross predetermined thresholds.

155 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

QUICKPLAY AND VIONLABS

Quickplay-VionLabs Preview Clips Integration

Quickplay, a North American-based company, and VionLabs, headquartered in Sweden, are bringing together Quickplay’s award-winning, cloud-native CMS and VionLabs’ AINAR Visual Discovery solution to create AI Automated Thumbnails and Preview Clips.

Pre-integration with the Quickplay CMS means that Quickplay customers automatically have access to a powerful new tool to drive customer engagement and long-term value. AI-derived metadata for content moods, micro-genres, story descriptors, keywords and more are leveraged by advanced personalization algorithms from Quickplay to:

• Find all the main characters through presence and importance to story;

• Pinpoint exactly where in the frame to feature main characters for thumbnails; and

• Find the best scenes using energy and emotion tracking across the story arc.

AI-automated thumbnails and preview clip outputs are created quickly and easily, significantly reducing the cost of content marketing. AINAR Visual Discovery recognizes key people in the video and analyzes their mood and appearance to evaluate their importance. After analysis, AINAR Visual Discovery can find the main characters for thumbnails and previews and select engaging, relevant segments of the content suited for promotional material.
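The selection step can be sketched as a simple greedy pick over an energy-scored list of candidate segments; the scores and durations below are invented, and the code is a generic illustration rather than VionLabs' algorithm.

```python
# Sketch of preview-clip selection driven by an energy/emotion curve: score each
# candidate segment, then greedily pick the highest-scoring, non-overlapping
# segments until the preview budget is filled (scores are made up).

segments = [  # (start_s, end_s, energy_score)
    (120, 135, 0.91), (300, 312, 0.88), (128, 140, 0.85),
    (610, 622, 0.74), (45, 60, 0.40),
]
PREVIEW_BUDGET_S = 30

def pick_preview(segments, budget):
    chosen, used = [], 0
    for start, end, score in sorted(segments, key=lambda s: -s[2]):
        if used + (end - start) > budget:
            continue
        if any(start < e and end > s for s, e, _ in chosen):   # overlap check
            continue
        chosen.append((start, end, score))
        used += end - start
    return sorted(chosen)

print(pick_preview(segments, PREVIEW_BUDGET_S))
```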

Most services face the Catch-22 that they have no data on new users, and collecting that data takes time. VionLabs AI generates preview clips (the type of short clip that starts when you hover over a movie or series thumbnail on Netflix) that, when combined with metadata and content embeddings, can connect shorter clips to long-form content such as movies and series. This helps boost the number of data points available for each user early in the user journey from one or two interactions per week to 50–100 interactions that can be used in recommendations, personalization and discovery.

As offered by Quickplay and VionLabs, Preview Clips uses three key AI-based capabilities to enable OTT providers to create previews automatically, without the time and cost of manually marking each noteworthy highlight.

• Character Tracking identifies characters and tracks their actions, enabling accurate previews that focus on the main characters in the story.

• Action Detection uses a deep learning algorithm to recognize high-energy scenes that should be included in the previews, including dynamic actions such as running, jumping or fighting for an action movie, and funny moments for a comedy.

• Speech Detection technology provides an added layer of protection by checking dialogue to ensure that previews begin and end at natural breaks in the conversation.

This results in richer, more nuanced video recommendations and previews that can be targeted to viewers based on data collected by Quickplay tools across the subscriber journey. Previews can be published immediately or serve as the basis for further refinement by the content team, resulting in increased activations across Quickplay customers’ OTT content libraries. As noted above, a baseline of one or two interactions per week can be increased to 50–100 interactions that can be used in recommendations, personalization and discovery.

156 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

TAG VIDEO SYSTEMS

Content Matching Technology

Content Matching is TAG’s newest and perhaps most ground-breaking technology yet. This unique mechanism detects similar content across two different streams to ensure correct and uninterrupted delivery to the intended destination. It does this by creating a unique fingerprint for each video frame and audio envelope and matching them across the entire media distribution path against a user-defined reference point. This new technology dramatically reduces workflow complexity and eyes-on-glass, and enables media companies to deliver quality content with fewer resources and more confidence.

TAG’s Content Matching can identify and correlate audio and video uniqueness accurately regardless of the resolution, bit rate or frame rate, thus enabling a match between any two or more points in the workflow. Even after the content has been processed and manipulated, TAG will still be able to identify the match and confirm that the content is identical, correct and behaves as expected.
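The general idea of resolution-independent fingerprint matching can be sketched with a coarse average-luma hash compared by Hamming distance; the frames below are synthetic gradients, and the method shown is a generic illustration, not TAG's proprietary fingerprint.

```python
# Sketch of resolution-independent frame fingerprinting: reduce each frame to a
# small grid of average luma values, threshold against the frame mean to get a
# bit pattern, and compare fingerprints by Hamming distance.

GRID = 4  # 4x4 fingerprint

def fingerprint(frame):
    h, w = len(frame), len(frame[0])
    cells = []
    for gy in range(GRID):
        for gx in range(GRID):
            ys = range(gy * h // GRID, (gy + 1) * h // GRID)
            xs = range(gx * w // GRID, (gx + 1) * w // GRID)
            cells.append(sum(frame[y][x] for y in ys for x in xs) / (len(ys) * len(xs)))
    mean = sum(cells) / len(cells)
    return [1 if c >= mean else 0 for c in cells]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def make_frame(w, h):
    # synthetic diagonal luma gradient standing in for real video
    return [[min(255, x + 2 * y) for x in range(w)] for y in range(h)]

hd = make_frame(192, 108)                                          # "high-res" rendition
sd = [[hd[y * 2][x * 2] for x in range(96)] for y in range(54)]    # half-res copy
print("distance:", hamming(fingerprint(hd), fingerprint(sd)))      # small -> same content
```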

In addition, the new TAG technology allows users to get to the root cause of problems faster and troubleshoot more efficiently, even in the most complex, elaborate workflows. Based on a sophisticated, real-time, frame-to-frame correlation engine, the user is notified when the first content mismatch occurs; combined with TAG’s rich probing and monitoring, they can easily identify and resolve the source of the errors.

TAG’s content matching enables, but is not limited to, the following highly requested media workflow applications:

• Frame-accurate latency measurement between any two or more points in the workflow

• Comparing quality and content accuracy across different feeds to compare distribution methods or alternative paths

• Confirming ad insertion against SCTE messages with frame accuracy to assure and protect revenue

• Validating A/V alignment and audio channel drift at any point in the workflow

The ability to identify, match and correlate content to content anywhere in the workflow empowers users to measure a wide variety of parameters, and the potential uses are left to the user’s imagination. With a reference point and one or more monitoring points, comparisons are easily made, and issues can be quickly identified.

Combined with TAG’s flagship software-only monitoring and visualization platform, Content Matching is a powerful tool. The technology adds yet another layer of monitoring to TAG’s robust Multi-Channel Monitoring (MCM), a system that manages alarms and alerts operators based on 500+ user-defined event thresholds. In addition, Content Matching provides another resource for the data collected and aggregated by TAG’s Media Control System (MCS). The MCS allows data to be visualized with open-source IT tools, providing engineers with a more precise understanding of their workflow and the information they need to improve it.

157 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

TO THE NEW VideoReady Go

The VideoReady Go solution aims to empower streaming businesses to launch a production-grade OTT platform with their own branding and content in a few minutes, with a few clicks. The solution is designed to be flexible, scalable and adaptable, enabling OTT platforms to customize it to their business requirements. VideoReady Go gives small-to-medium players a production-ready platform from day one, while bigger players looking for a fully custom solution can leverage it for a head start with a comprehensive MVP launch, followed by incremental releases for custom needs.

Customers looking to launch their OTT service can upload their logo, branding, videos and metadata in a simple four-step user interface and launch a fully functional, end-to-end OTT platform in minutes, with multidevice presence and a comprehensive CMS.

VideoReady Go includes a range of features and capabilities, including:

• Customizable interfaces and user journey

• Support for multiple device platforms, including mobile, web, smart TV, streaming boxes/sticks/consoles

• Advanced content management and monetization tools

• Analytics and reporting dashboards

• Cloud-based infrastructure with 24/7 monitoring and support

158 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

VIZRT GROUP

Viz Vectar Plus

Today’s live streaming and television production challenges call for a solution that addresses tomorrow’s demands. As productions are increasingly opting for the convenience and optimal costs of utilizing the cloud, operators and producers need a system they can rely on.

Viz Vectar Plus, a software-based live production system, enables broadcast and media organizations to quickly adopt an IT- and IP-friendly solution as a software plan. It’s a system that can be deployed in different ways depending on organizational needs.

Viz Vectar Plus provides multisource live video mixing of up to 44 source channels, each supporting key and fill and multidestination delivery including eight HD mix outputs over IP. With its software-based control panels, it’s possible to operate Viz Vectar Plus systems from any compatible desktop or mobile device, remotely anywhere on the network, even in virtual environments.

Broadcasters are already adapting to the versatility of the cloud, as it dramatically reduces the need for people to be on-site or to operate in production rooms, saving on cost and carbon. Recently, UEFA partnered with BT Sport to broadcast the first-ever Youth League game entirely produced in the cloud, using Viz Vectar Plus.

A typical Youth League game broadcast package includes six cameras, a traditional outside broadcast truck, two generators for backup power, and a satellite uplink truck — all supported by 25 on-site personnel. For this production, only eight people were required at the stadium, while seven others worked 20 km away at BT Sport’s Stratford studios with the necessary infrastructure in place.

“It was important for us to choose a solution that would reduce the complexity for the production. Viz Vectar Plus gave us better control options, with just seven people operating from a cloud gallery with Viz Vectar Plus. Using the public cloud to send the broadcast to the hub, with the right technology making cloud adaptable to how we wanted to operate, we were allowed to build a completely virtualized production center,” said Andy Beale, BT Sport’s chief engineer.

The future-ready software is made for flexibility — switching, mixing and actualizing any type of live production with robust multiformat processing and per-channel frame synchronizers for effortless intermixing. Additionally, Viz Vectar Plus employs standard computing and network infrastructures with IP connectivity to achieve instant access to, and seamless interchange with, essentially unlimited IP sources from anywhere across the network in real time.

Live production trainer and consultant Kim Henderson attended a Vizrt Experience in Los Angeles, which showcased the powerful capabilities of Vizrt’s cloud production solutions, proving anyone can create enterprise-grade content as if on-site, from anywhere — including the beach.

Henderson puts it simply: “Viz Vectar Plus works like a modern multimedia switcher works. You could be switching in Los Angeles, while the director in New York is watching from their home, while there are cameramen doing the job down in Florida.”

The ability to access network-grade production equipment in the cloud is a game-changer to create more content, making the best use of time and resources, and enhancing the adjustment to changing production needs.

159 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

WITBE Witbox+

Witbe’s Witbox+ is a next-generation solution for testing and monitoring video services on multiple devices, including OTT boxes, smart TVs, mobile platforms, gaming consoles and more. Designed specifically with video operations teams in mind, the Witbox+ utilizes Witbe’s revolutionary technology to create an ultra-scalable, powerful device in a small form factor that’s easy to set up. Users can simply plug any physical device into the unit to start automatically testing and monitoring any digital service running on it. The Witbox+ supports 4K video and 5.1 surround sound, as well as Bluetooth and RF4CE control, for up to four simultaneously connected devices.

With Witbe’s Remote Eye Controller (REC) software application, network operation center teams and manual testers can remotely access and control every device plugged into the Witbox+ from anywhere in the world, removing the need for engineers to travel thousands of miles to test specific devices in the field. REC aggregates all connected devices on the same screen in a mosaic, and the number of devices supported is unlimited. If a company has 100 STBs attached to multiple Witbox+ units in the field, all 100 of them can be accessed and controlled simultaneously.

Recently, Witbe introduced a brand-new version of REC that is available directly on the web. It can run in any modern web browser, allowing users to control their Witbox+-connected devices on laptops, smartphones, tablets and more.

Unique features and benefits of the Witbox+ include:

QA test automation: Helping QA teams cover the performance, endurance and stress testing that is difficult for human team members to accomplish manually, the Witbox+ enables automatic, around-the-clock testing.

Video service monitoring: The Witbox+ goes beyond standard testing with its proactive monitoring capabilities. Even when a device isn’t being used, the Witbox+ can still monitor its video streaming quality using a proprietary algorithm that relies on the same metrics as the human eye. Whenever the quality dips, the Witbox+ sends users an alert, enabling QA teams to proactively identify and fix the issue.
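A simplified sketch of threshold-based alerting with a little hysteresis (so a single bad sample does not trigger an alert); the quality scores and limits are invented and do not represent Witbe's proprietary algorithm.

```python
# Sketch of a monitoring loop: score each sampled period and raise an alert
# only when the score stays below threshold for several consecutive samples.

THRESHOLD, CONSECUTIVE = 3.5, 3          # hypothetical MOS-like 1..5 scale

def alerts(scores):
    bad, out = 0, []
    for i, s in enumerate(scores):
        bad = bad + 1 if s < THRESHOLD else 0
        if bad == CONSECUTIVE:
            out.append(i)                # index of the sample that confirms the dip
    return out

samples = [4.6, 4.5, 3.2, 4.4, 3.1, 3.0, 2.8, 4.2]
print("alert at sample(s):", alerts(samples))   # -> [6]
```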

Short-form video testing: Automatically evaluating key performance indicators for short-form videos — including availability, buffering time and quality — the Witbox+ allows video service providers, social networks and mobile network operators to understand the QoE their customers truly receive.

QoE benchmarking: By comparing the quality of a video streaming service against local and global competitors through Smartgate Benchmarking, the Witbox+ helps operators understand the QoE their users expect.

Compact scalability: With the ability to test and monitor four different 4K devices simultaneously, scalability sets the Witbox+ apart. The major technological breakthrough wasn’t just making this happen; it was packing all these capabilities into a sleek 20cm by 20cm package. This makes the Witbox+ not only the most powerful test automation device on the market, but also the most compact. With the compact size, users can now test up to 64 devices in a single rack.

Environmental sustainability: The Witbox+ consumes eight times less power than Witbe’s previous 4K-compatible products.

160 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

Software-Defined Video Platform 5G

Zixi is the architect of the Software-Defined Video Platform (SDVP), the industry’s most complete live IP video workflow solution. Zixi has integrated support for managing 4K video streams on 5G networks within multi-access edge compute (MEC) infrastructure. Broadcast-quality content means no errors, and delivery across 5G networks should be no exception.

With more than 15 years of innovating live video delivery over IP networks, Zixi is uniquely positioned to enable new use cases for 5G delivery while ensuring broadcast quality and reliability. 5G networks and MEC infrastructure unlock exciting new opportunities that Zixi is helping bring to market, including ultra-low latency live remote production, satellite rationalization for distribution, and new fan experiences both in and outside the venue. Like all IP networks, 5G requires protection against challenges like jitter, congestion, signal interruption and degradation.

The SDVP now features the optimizations necessary to fully operationalize live IP video delivery of pristine 4K video over 5G networks, completely untethering production and distribution. The new solution is being used to distribute time-sensitive video with unprecedented low latency without sacrificing image quality or broadcast reliability. The SDVP automates critical functions necessary to ensure that the full benefits of 5G radio networks and MEC infrastructure are achieved:

1. Edge Presence — In order to take advantage of the high-performance, low latency characteristics of 5G, you must be able to move video processing and management to the 5G edge. The SDVP leverages ultra-low latency access to AWS compute and storage services enabled by AWS Wavelength at the Verizon 5G Edge to process huge amounts of UHD video and compress it for delivery to mobile devices.

2. Seamless Bonding of IP Networks — Accessing any radio network can introduce challenges of signal integrity, especially for mobile applications in areas with high interference. The SDVP can seamlessly bond across diverse signal paths, including redundant 5G access points, Wi-Fi and 4G LTE networks. This ensures the uninterrupted video delivery that is essential in live production and distribution workflows.

3. Network-Aware Adaptive Bitrate — The Zixi protocol is congestion- and network-aware, seamlessly adjusts to varying network conditions and employs patented, dynamic Forward Error Correction techniques for error-free video transport over 5G. Zixi’s unique ability to adapt the video quality to the available bandwidth makes it easy to maintain stream continuity for the optimal Quality of Experience, even as signal strength or traffic congestion changes.
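The behavior described here can be illustrated with a toy bitrate ladder and a simple up/down rule; the ladder and thresholds below are invented for the sketch and do not represent the Zixi protocol's actual logic.

```python
# Sketch of network-aware bitrate adaptation: step the ladder down when
# measured loss or RTT degrade, and step back up cautiously on recovery.

LADDER_MBPS = [35, 20, 12, 6]       # hypothetical ladder, 4K down to a safety rendition

def adapt(current_idx, loss_pct, rtt_ms):
    if loss_pct > 1.0 or rtt_ms > 150:          # degrade fast
        return min(current_idx + 1, len(LADDER_MBPS) - 1)
    if loss_pct < 0.1 and rtt_ms < 60:          # recover one step at a time
        return max(current_idx - 1, 0)
    return current_idx

idx = 0
for loss, rtt in [(0.0, 40), (1.8, 90), (2.5, 160), (0.05, 45), (0.0, 38)]:
    idx = adapt(idx, loss, rtt)
    print(f"loss={loss}% rtt={rtt}ms -> send at {LADDER_MBPS[idx]} Mbps")
```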

These innovations have been demonstrated to enable pristine-quality, ultra-low-latency live 4K video backhaul for real-time production and distribution, including for deployed customers such as BloombergTV on Verizon and AWS Wavelength Zones infrastructure. At a time that sees the normalization of remote working and a proliferation in the ways programs reach viewers, Zixi’s SDVP provides the agility and reliability to deliver broadcast-quality video securely from any source to any destination over flexible IP video routes.

161 Best of Show Awards 2023 | NAB Show
ZIXI
FOR MORE INFO

Zixi for FAST Channels

IP-based transport enables flexible, efficient workflows with lower fixed costs, provides better visibility into stream data and allows for greater regionalization of content. These capabilities enable broadcasters to expand to digital-first touchpoints, such as smart TVs, and to launch new FAST channel distribution to capture new monetization opportunities. Zixi IP channel distribution at scale simplifies complex live video workflows for FAST channel, international syndication and OTT requirements. Zixi’s software-defined IP transport facilitates cost-effective contribution and distribution to or from affiliates and partners utilizing any protocol, IP network, cloud or edge device.

Media companies looking to syndicate programming to an increasing number of target destinations face a range of major challenges, from compliance validation and content formatting to stream localization, delivery strategy and security. As a result, video distribution over IP has become an essential part of any modern, agile live video delivery strategy. Zixi offers advanced features and functionality to deliver high-volume live video distribution reliably and at scale over any network, with lower fixed costs, better visibility into stream data and greater regionalization of content.

Zixi’s Software-Defined Video Platform (SDVP) and Zixi-as-a-Service (ZaaS) are modular, cost-efficient and purpose-built video services designed specifically to securely manage highly complex, scalable and resilient live video routes over mixed IP networks. The SDVP integrates seamlessly into customer infrastructure environments, providing the flexibility to scale video processing, redundant video routes and sophisticated monitoring services across customer-managed infrastructure. ZaaS builds on top of the SDVP, adding video-optimized live video cloud infrastructure and cost-efficient video routes that can augment or replace existing public cloud deployments.

The solutions also deliver centralized management with orchestration and live visibility across the distribution process. End-to-end monitoring includes a complete live video monitoring suite, with robust network telemetry details, real-time audio/video analytics and live impairment detection. Operations teams can scale on demand, flexibly provisioning additional resources, increasing throughput and adding new delivery targets in real time and without disruption. Ultra-low-latency live transcoding, content switching, program mapping and live compliance validation automatically conform inbound source streams to meet the unique requirements of each target destination.

To take advantage of rapidly changing monetization opportunities with new and legacy content, Zixi for FAST has been developed and deployed in collaboration with major FAST channel ecosystem players. It provides visibility into stream data, allows for greater regionalization of content, and seamlessly and centrally orchestrates, monitors and manages live streams across the entire video supply chain. Amagi, Cinedigm and other ecosystem players are using these advancements, which lead an industry-wide wave of technology developments, including the virtualization of broadcast media infrastructures and the implementation of software-defined video networks supporting the business agility and cost-efficiency strategies the market demands.

This emphasis on bringing transformative cloud-based technologies to the widest possible audience is playing a major role in helping the video distribution sector to innovate. In today’s highly competitive broadcast and media landscape, this represents a crucial set of capabilities.

162 Best of Show Awards 2023 | NAB Show
ZIXI
FOR MORE INFO

ABM - Electromagnetic Field Meter

The new ABM is a broadband (wideband) electromagnetic field meter.

ABM is designed to measure and control field strengths in compliance with personal health and safety regulations/requirements in accordance with international limits (ICNIRP, IEC, IEEE). ABM allows accurate measurements in real time with minimum effort for the operator.

Main Features

• Broadband measurement (DC – 65 GHz)

• Interchangeable plug-and-play probes

• High measurement stability

• High dynamic range

• Multi-datalogger 24H (up to 2 million records)

• GPS receiver integrated

• Temperature and humidity sensors available on board

• Compact and light (300g only)

• Operation time > 5 days

• Rechargeable battery

• Anti-shock protective cover

ALDENA provides a full range of E-field/H-field probes covering different frequency ranges. Probes are plug-and-play, with individual calibration certificates.

In particular:

• New EP-8 wideband electric field probe (100 kHz – 8 GHz)

• New EWB-DIG narrowband probe for digital signals (5G)

163 Best of Show Awards 2023 | NAB Show
ALDENA
FOR MORE INFO

BROADCAST BIONICS Virtual Rack

Both broadcasters and their traditional technology partners have increasingly turned to container and virtualized technologies to deploy broadcast hardware more efficiently, scalably, securely and resiliently.

The degree of highly specialized IT and networking knowledge required has proved to be a steep learning curve, and deployment has been extremely challenging: early adopters have struggled to implement these technologies without considerable cost and complexity and, in many cases, significant frustration and disappointment.

In launching Virtual Rack, Broadcast Bionics has solved these challenges by offering a unique hardware and software solution that can be entirely managed in a web browser by any engineer. All of the complexity is entirely abstracted away, and traditional engineers can quickly, easily and flexibly build and change virtual racks filled with equipment by clicking on a simple application library offering software consoles, codecs, talkshow systems, mic processing and audio processing provided by a host of leading broadcast vendors.

Virtual Rack is a hardware appliance providing a perfectly optimized combination of hardware, processor, firmware and operating system designed for low-latency broadcast containers, delivering plug-and-play simplicity alongside rock-solid reliability.

This appliance is controlled by a revolutionary browser-based UI. Users never see Linux or a command line but instead work with a simple, visual, virtual rack of broadcast equipment, simplifying the deployment and management of your entire broadcast infrastructure into just a few mouse clicks.

Multiple Virtual Racks can be combined to provide unlimited scale or networked to enable resilience and failover between multiple appliances.

Broadcast Bionics is working with many of the world’s leading broadcast technology vendors to test and deploy applications covering every aspect of content creation and delivery.

164 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

DTS / XPERI

DTS AutoStage Broadcaster Portal

For the first time in the industry, through the DTS AutoStage Broadcaster Portal, broadcasters can see where their listeners are listening to their stations, when they are listening and what content they enjoy the most. From an easy-to-use dashboard, broadcasters can gain insights such as listener heat maps, day-parts from 24 hours ago, and the songs, ads and program segments listeners enjoy the most.

Free to broadcasters, the DTS AutoStage Broadcaster Portal provides new insights to help sales teams and program managers understand listener engagement and increase revenue. Broadcasters also have full control of brand and metadata while being able to make immediate changes in the listener’s car radio.

With millions of cars on the road using DTS AutoStage technology, radio can now be part of the world of “Big Data” to make an educated decision based on real-world usage. Whether the station owner has a single station or a thousand, the same analytics are available, allowing each owner to know more about their listeners and to make actionable, data-driven decisions.

Some insights available are:

• Broadcaster Control: Full control over all the metadata of each of their stations. Logos, slogans, genres, social media and contact information, just to name a few. Changes made by broadcasters are reflected in the DTS AutoStage vehicle immediately.

• Heatmap: A graphical representation of the stations' coverage based on the vehicles in that market that are listening to their station. The broadcaster can select different times of the month and week based on the different transmission systems they use (AM, FM, DAB, CDR or HD Radio). This map allows the broadcaster to see not just a bird's eye view of their coverage area but zoom into different areas, down to the neighborhood level, to understand their listeners better.

• Listening Charts: Broadcasters will be able to see what users are listening to, i.e., a music chart can show the most popular songs listened to, an ad chart can show what ads were listened to the most, and with program-level information, what were the most popular segments of the program.

• Dayparting: Not only can broadcasters see when users are listening, but they can see that information 24–48 hours later. If the broadcaster changed the station’s format, on-air talent or would like to see the results of sponsored events, they can get that information in this timeframe.

• Station Reach/Change: Station coverage map is based on the location of vehicles within the coverage area and who is listening, providing actionable data such as the percentage of total vehicles in the market listening to the station and how that percentage changes over time to improve programming decisions.

• Privacy: No personally identifiable information is used, so the broadcasters are compliant with all local, regional and country-based privacy protection laws. Broadcasters only see their data.

Stations enrolled in DTS AutoStage will have access to all metrics available through the Broadcaster Portal. The portal will serve all available metrics for their stations and act as the hub for all maintenance of their metadata.

165 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

DTS / XPERI Rapid

Rapid changes the way a station manages and delivers metadata for its broadcasts. Receivers today require more than just an audio feed ... they require a rich visual experience paired with the audio! Rapid's intuitive platform streamlines the creation and publishing of metadata across platforms for broadcasters. Rapid improves the accuracy of content delivered to receivers, allows stations to deliver real-time metadata during on-air programs and ensures a consistent experience for listeners.

Rapid is the first product to prioritize the delivery of metadata content while managing the complex ecosystem of broadcast delivery. Rapid brings FM/RDS, HD Radio, IP/web stream and mobile delivery into one environment, while not sacrificing control and accuracy of the metadata. It enables stations to manage all their outputs across all their broadcasts for their music, news, programming and station content. Rapid is also designed to be used by all station teams: programming, engineering, management and sales can all contribute to the delivery of station metadata.

With Rapid, stations can control their core station metadata to ensure optimal delivery across all of their platforms. In addition to station information, Rapid offers a dynamic on-air program scheduler, giving the station the ability to easily control the metadata for scheduled programs as well as for special programs.

Rapid also utilizes advanced logic to navigate and match the world’s leading music metadata sources to intelligently deliver the most accurate metadata possible. Rapid ingests a station’s feed to match music content with the proper song/artist titles as well as licensed album art. Understanding that music content is diverse and can have variations, Rapid allows a station to manually modify metadata for music according to preference. This gives the station full control over the visual representation of the content they are delivering.

Rapid also adds to the visual experience by enabling stations to maximize their visual content. Rapid allows a station to add editorial content along with their audio, promote upcoming station events or programming, fundraising or additional relevant track information. For example, a station can utilize Rapid to deliver real-time headlines as the announcer reads the news; a station could also send metadata to promote the afternoon drive show during midday broadcasts; or send along news/weather updates.

Research has shown that visuals paired with radio broadcasts can significantly improve the experience and recognition of content. Consumers today have come to expect basic items such as song/artist titles or imagery when consuming content. However, a tool to simplify the station's delivery of this hasn't existed until the creation of Rapid. Rapid streamlines the flow and improves the accuracy of metadata, while not limiting where a station can send its content. Rapid's broad range of delivery methods is changing how broadcasters think about metadata and improving the experience for listener and broadcaster alike.

166 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

ELENOS GROUP - BROADCAST ELECTRONICS

Quick Block

Quick Block is a next-generation transmitter platform based around ultra-high efficiency, scalability, simplified maintenance and a significantly reduced footprint. A high-efficiency, compact transmitter shouldn't mean you have to give up modularity and simplicity — with Quick Block you don't have to. Using the same simple building block, you can scale a transmitter from 1 kW to more than 50 kW for analog and HD Radio FM broadcasts. All Quick Block modules are compact, lightweight and easily shippable. In fact, Quick Block delivers 50 kW in a single 19-inch rack, an industry first that reduces the space required and the associated lease costs. Multiple modular and software-defined exciter options allow you to have complete redundancy and a simple upgrade path to HD Radio when you are ready. It is completely controllable from a color front-panel screen or an easy-to-use remote HTML-based web GUI — no Java or Flash. The extensive control system provides access to detailed information about the operation, and powers useful preventative maintenance algorithms that simplify ongoing operations.

Quick Block is designed for serviceability. Locating an issue is a snap: using either the powerful remote and local user interfaces or simply the multicolored front-panel LEDs, module status can be known in an instant. All controllers, power supplies, RF amplifiers and exciters are hot-swappable for a simple, rapid exchange that can be done by personnel with basic skills. Since all modules are the same from low power to very high power, you have fewer spare parts to stock, simplifying and reducing your ongoing operations.

Quick Block gives you the option of repairing any issue with a simple module exchange with the factory, or repairing it yourself, since Quick Block is designed for field repairability. You can even get a module test fixture to speed up field repair and make troubleshooting a snap.

Quick Block — reliable, modular, compact, energy-efficient, scalable and more. Quick Block doesn't force you to make tradeoffs. Quick Block equals no compromises.

167 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

RadioGPT

Futuri is revolutionizing the radio industry with the launch of RadioGPT — the world’s first AI-driven localized radio content solution. RadioGPT combines the power of GPT-4 technology with AI voice tech and Futuri’s AI-driven targeted story discovery and social content system, TopicPulse, to provide an unmatched localized radio experience for any market, any format.

RadioGPT uses Futuri’s patented Echo automation link and TopicPulse technology, which scans Facebook, Twitter, Instagram and 250K+ other sources of news and information, to identify which topics are trending in a specific market. Then, using GPT-4 technology, RadioGPT creates a script for on-air use, and AI voices turn that script into compelling audio.

Stations can select from a variety of AI voices for single-, duo- or trio-hosted shows, or train the AI with their existing personalities. Programming is available for individual dayparts, or Futuri’s RadioGPT can power the entire station. RadioGPT is currently available for all English-language formats in a white-labeled fashion, and several other languages were being added this spring. Stations can also use RadioGPT to generate social posts, website articles and other content for digital platforms. The available TopicPulse Instant Video add-on creates AI-driven short videos on hot topics for social use. By adding on Futuri’s POST AI-enabled podcasting system, stations can take broadcast audio and immediately publish it on-demand with POST’s auto-publishing feature.

“As early AI innovators in the broadcast space, it’s only natural that we’re bringing the incredible power of GPT-4 technology, paired with groundbreaking technology like TopicPulse, to radio,”

said Futuri CEO Daniel Anstandig. “The ability for broadcasters to use RadioGPT to localize their on-air content in a turnkey fashion opens up resources for them to deepen their important home-field advantages in new and unique ways. With RadioGPT, the possibilities are endless.”

Beta partners for Futuri’s

RadioGPT include Alpha Media in the United States and Rogers Sports & Media in Canada. Anstandig recently keynoted Radiodays Europe in Prague with a session on uses of AI for radio, focused on RadioGPT, and presented the keynote at the inaugural Radiodays North America with the same topic in early June 2023.

168 Best of Show Awards 2023 | NAB Show
FUTURI
FOR MORE INFO

Maxiva MultiD DAB Transmitters

GatesAir introduces its second generation of Maxiva MultiD Series of multichannel DAB/DAB+ radio transmitters. New at the 2023 NAB Show, the second generation brings new benefits such as improved power and bandwidth management, connectivity and monitoring and control functionality. Its compact chassis now supports up to four independent digital radio services across separate DAB/DAB+ channels, with selectable IP (EDI) and legacy (ETI) connections and versatile power output levels per channel.

The MultiD Series comes from the clever engineering minds of the GatesAir Europe team, which developed the series to reduce the costs and infrastructure of per-site multichannel DAB broadcasting. The original MultiD system integrates three separate transmitters within a single 1RU chassis, instead of requiring a separate transmitter for each channel plus an external combiner and auxiliary hardware.

The new MultiD system design adds capacity for a fourth DAB service, and removes the limitations of broadcasting all services within a single DAB channel band. This means that MultiD customers can now broadcast four independent DAB radio services across separate channels (such as 10A, 10B, 11A and 11B). This is useful for broadcasters that were not allocated all licenses within a certain channel, for example.

Power output per channel can also now be built to each customer’s specifications. For example, DAB broadcasters can now order a 1.5 kW MultiD system to support three 500W services, or a 1.2 kW system to support four 300W services. It is also possible to establish varied power levels across different DAB services. With such exceptional power output flexibility, MultiD systems are now configurable to serve virtually any combination of power levels for up to four DAB services.

The ability to divide system capacity is also useful for MultiD systems shared by different tenants — a common occurrence in road tunnels, where broadcasters can effectively consolidate resources to ensure consistent coverage of their most important broadcasts.

MultiD’s multicarrier architec-

ture provides the same high-efficiency benefits even when supporting more than one tenant or broadcasting across more than one DAB channel. That includes a sizeable footprint reduction by removing the need for external RF combining. MultiD internally combines low-level RF signals, and then generates and retransmits all independent DAB services through a single amplifier. Additionally, by splitting the energy use and power consumption of a single modulator across all channels, consumption per channel is substantially reduced. The result is an enormous efficiency advantage when it comes to reducing cost, footprint and maintenance.

The streamlined architecture requires only a single band-pass filter and RF antenna connector for transmission to listeners. Elsewhere, the ability to select between IP-based (EDI) and legacy (ETI) connections provides flexible signal transport options over IP and microwave. GatesAir has also added a secure HTML5 user interface to remotely monitor and analyze signal health and performance from any browser.

169 Best of Show Awards 2023 | NAB Show
GATESAIR
FOR MORE INFO

GATESAIR

Flexiva GX Transmitters

The Flexiva GX air-cooled FM solid-state transmitter family provides today’s FM analog broadcaster with an ultra-compact transmission platform. Introduced at IBC2022, the Flexiva GX line continues the legacy of the highly successful line of GatesAir FM transmitters and combines innovative RF amplification and software-defined exciter technology to take FM transmission to the next level. Broadcasters seeking to refresh existing analog FM infrastructure with cost, space and energy-efficient systems for large regional and national networks represent a key customer base.

Available today in 5 kW and 10 kW versions, the family gained 50W and 1 kW versions at the NAB Show, and GatesAir will continue to flesh out the line to serve lower power levels. The Flexiva GX family was also named an IABM BaM Awards 2023 shortlist entry in the Publish category.

The Flexiva GX Series is built for customers who are fully focused on the benefits of modern, high-efficiency solid-state technology. Using the latest LDMOS technology, GatesAir has packed exceptional power density into a compact 5RU chassis, providing broadcasters with powerful FM transmission solutions up to 10 kW that deliver a remarkable overall efficiency rating up to 76 percent. The engineering breakthroughs in power density, efficiency and footprint are made possible through the Flexiva GX's design, enhanced by GatesAir's third-generation PowerSmart high-efficiency transmitter architecture.

The Flexiva GX Series also carries traditional GatesAir solid-state design benefits forward, including modular, redundant transmitter designs with hot-swappable power supplies. The transmitters provide auto-switching inputs, with dual AES (including AES192), dual composite and analog left/right audio inputs. Flexiva GX transmitters support N+1 configurations, enabling large national network operators to build flexible and consolidated transmission sites that meet stringent uptime requirements.

Options include a GPS receiver to support SFN functionality, and GatesAir's new Intraplex IP Link 100e module. The latter integrates within Flexiva GX transmitters, enabling direct receipt of contributed FM content instead of requiring an external codec. This further reduces rack space requirements inside RF facilities with limited open real estate.

Flexiva GX transmitters also maximize on-air protection through several design attributes. The LDMOS-FET power amplifier device technology, coupled with GatesAir's innovative PowerSmart amplifier design, delivers a dramatic increase in power density. Redundant, rugged amplifiers and low-loss combiners provide protection against lightning, antenna system short-circuits and high VSWR while maximizing the transmitter's ability to stay on the air. This reduces operating and maintenance costs, lowering the total cost of ownership over the life of the transmitter. Flexiva GX transmitters can operate up to full rated power at up to a 1.5:1 VSWR, with proportional foldback into infinite VSWR.

170 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

541 FM Modulation Monitor

The 541 incorporates all the features needed for station setup, regulatory compliance and remote monitoring. The 5-inch LCD touch screen displays essential modulation data graphically on the front panel, and the same data can be viewed remotely via the web.

The 541 is Inovonics’ fourth-generation FM Modulation Monitor. It delivers a wealth of information about the transmitted signal in terms of the RF carrier and all subcarriers, the audio component defining the technical quality that the listener hears, and full decoding of RDS data and SCA audio.

The all-digital 541 combines detailed DSP signal analysis with a menu-driven touch-screen display, plus web server-based total access for remote operation, including measurements, graphical data and direct web-browser audio monitoring of the off-air program.

Some of the features of the 541 FM Modulation Monitor include:

• Intuitive, menu-driven setup from the front panel, or remote setup and operation with the built-in web server that may be addressed over any IP network by computer or mobile device. The 541 supports full SNMP remote control and monitoring.

• Graphic front-panel and remote display of all level metering; FFT spectrum analysis of IF passband, MPX baseband and program audio; oscilloscope display of program audio and stereo XY.

• Alarms for a range of signal faults, with tallies and SMS/text or email message dispatches to specific individuals for various alarm conditions. All alarms are logged chronologically as well.

• Analog, AES3-digital, HTTP/UDP web-streaming and independent AoIP-streaming program audio outputs, plus an FM composite/MPX baseband output.

• The BandScanner utility scans the FM spectrum and displays each station with its signal level, PI code and call sign.

• StationRotation mode enables automatic sequential monitoring of multiple station presets.

• Collects and logs a history-over-time of FM and audio signal parameters.

• Accurate program loudness measurement to the perception-based ITU-R BS.1770 ("LU") loudness specification (a minimal sketch of the underlying formula follows this list).

• Stays on-channel and retains measurement setups through signal and power losses.
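For readers unfamiliar with the BS.1770 measurement referenced in the list above, the following minimal Python sketch shows the core of the formula: loudness in LUFS is -0.691 plus 10 times the log of the weighted sum of per-channel mean-square power. It is purely illustrative and is not Inovonics code; for brevity it omits the K-weighting pre-filter and the gating that BS.1770-4 requires for an integrated measurement.

import numpy as np

def bs1770_loudness(channels, weights=None):
    # channels: (n_channels, n_samples) array of K-weighted audio samples
    channels = np.asarray(channels, dtype=np.float64)
    if weights is None:
        weights = np.ones(channels.shape[0])  # 1.0 for L/R/C, 1.41 for surrounds
    z = np.mean(channels ** 2, axis=1)        # mean-square power per channel
    return -0.691 + 10.0 * np.log10(np.sum(weights * z))

# Example: a 997 Hz tone at -20 dBFS on both stereo channels
fs = 48_000
t = np.arange(fs) / fs
tone = 0.1 * np.sin(2 * np.pi * 997 * t)
print(round(bs1770_loudness([tone, tone]), 1))  # about -20.7 without K-weighting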

171 Best of Show Awards 2023 | NAB Show
INOVONICS INC.
FOR MORE INFO

INOVONICS INC.

611 Streaming Monitor

The 611 is Inovonics’ second-generation dedicated hardware solution for uninterrupted monitoring of network streaming audio, such as online internet radio and other streaming applications.

The world of streaming has progressed over the years and the new 611 is designed to meet the technological challenges of today with greater processing power and advanced functionality.

There is no other dedicated hardware solution on the market that does what the 611 does. Like its predecessor, the 611 Streaming Monitor provides balanced analog and AES-digital outputs and self-logging alarms that constantly check for audio loss, stream loss and internet loss. If the stream is lost for whatever reason, the 611 continuously strives to reconnect. Online alarm notifications alert personnel with email or text messages.

Some of the advanced features available with the 611 Streaming Monitor include:

• Support for HTTP and HTTPS streams.

• Stream formats — Icecast/Shoutcast, HLS (Ras, MPEG-TS, fMP4).

• Stream Rotation — will rotate through preset streams sequentially, monitoring one stream at a time.

• Failover Support — Preset back-up streams with customizable failover triggers.

• Adjustable output levels for analog L/R and AES-digital

• Alarms and notifications via email or SMS for audio loss, stream loss, internet loss.
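As an illustration of the loss-detection and reconnection behavior described above, and not of the 611's actual firmware, a bare-bones watchdog for a single HTTP/Icecast stream might look like the Python sketch below; the stream URL, SMTP host and email addresses are placeholder assumptions.

import smtplib
import time
from email.message import EmailMessage

import requests

STREAM_URL = "https://example.com/live.mp3"            # hypothetical stream
ALERT_FROM, ALERT_TO = "monitor@example.com", "engineer@example.com"

def send_alert(reason: str) -> None:
    msg = EmailMessage()
    msg["Subject"] = f"Stream alarm: {reason}"
    msg["From"], msg["To"] = ALERT_FROM, ALERT_TO
    msg.set_content(f"{reason} at {time.ctime()}")
    with smtplib.SMTP("localhost") as smtp:             # assumed local mail relay
        smtp.send_message(msg)

def watch(url: str) -> None:
    alarmed = False
    while True:
        try:
            # stream=True keeps the connection open; iterating chunks proves data flow
            with requests.get(url, stream=True, timeout=10) as resp:
                resp.raise_for_status()
                for _chunk in resp.iter_content(chunk_size=4096):
                    alarmed = False                     # data arriving: stream is up
        except requests.RequestException as exc:
            if not alarmed:                             # alarm once per outage
                send_alert(f"stream loss ({exc})")
                alarmed = True
            time.sleep(5)                               # keep striving to reconnect

if __name__ == "__main__":
    watch(STREAM_URL)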

172 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

Nautel GV2: Just Add Audio

The GV2 transmitter is the first in the industry to integrate all HD Radio components inside the transmitter. The GV2 supports HD Radio with Xperi Gen4 Importer, Exporter and Exgine implementations for HD Radio encoding, station logo and artist experience, and is the first solution that locks FM and HD signals synchronously to eliminate HD FM blend drift. Omnia for Nautel covers all FM and HD Radio audio processing needs and provides Livewire AoIP inputs for all audio streams. No additional hardware is required.

Another key innovation of the GV2 architecture is an air-chain selector, which allows the GV2 to select one of many air-chain inputs (FM and all HD) and gracefully change from one feed to another, no matter whether the feed originates from the studio, the cloud or the internal platform; perfect FM/HD1 time alignment is always guaranteed through the blend lock function without the need for GPS antenna connections. All components can optionally be activated to suit each station's specific needs.

GV2 uses advanced transmitter technologies including new power supplies, solid-state storage, a new interface card and dramatic increases in internal computational capacity. A new generation Nautel Advanced User Interface (AUI) based on HTML5 enables the integration of new software components through virtualization technology.

The GV2 transmitter is ideal for any broadcaster looking for a simpler way to deploy HD Radio. It is our hope that the unprecedented simplicity and advanced digital radio functionality of the GV2 transmitter will encourage a fresh new wave of HD Radio adoption.

The GV2 transmitter is also ideal for broadcasters buying a transmitter for analog transmission today but wanting an easy and cost-effective upgrade path to HD Radio transmission in the future.

As long as there is an IP Audio feed available at the transmitter site, operation is Audio In and RF Out. FM stations benefit from this ultra-streamlined deployment as well as elimination of time alignment issues.

173 Best of Show Awards 2023 | NAB Show
NAUTEL
FOR MORE INFO

ORBAN LABS INC.

Orban 5950 FM/HD processor

The Orban 5950 is a clean-sheet-of-paper design using the Orban New Platform hardware (ONP). ONP is a flexible hardware design that covers a plethora of processing requirements. Supporting two to eight DSPs and multiple sub-processors, it is capable of providing AM/FM/FM DAB+ HD-1/AoIP with local audio playback and watermarking as well as other functions, all in a 1RU form factor. It's Orban's first totally new hardware design since the legendary OPTIMOD 8400.

The 5950 includes several proprietary technologies. The MX limiter decreases distortion, increases transient impact and delivers more HF energy. The multipath mitigator/phase corrector reduces multipath picket-fence static bursts and weak-signal blending in car radios. Its subharmonic synthesizer generates punchy bass. The OPTIMOD 5950 simultaneously processes one stereo program for FM and DAB+/HD Radio/Streaming. The settings can be coupled to make the blend between HD Radio analog and HD-1 smooth. Alternatively, the FM and DAB+/HD Radio/Streaming processing can be adjusted independently. This is valuable when the digital processing drives a channel that does not require blending, such as an internet stream.

Six Processing Structures: Five-Band, Low-Latency Five-Band, Ultra-Low-Latency Five-Band, Two-Band, Five-Band MX and Two-Band MX.

Window-Gated AGC: Intelligent twoband window-gated AGC controls levels unobtrusively.

RDS/RBDS: Onboard generator supports dynamic PS scrolling and IP access.

Factory Presets: The 5950 also comes with a variety of factory presets; Orban's exclusive "Less-More" control simplifies creating your station's signature sound.

AES67/SMPTE ST-2110: Two redundant network interfaces are available for AoIP networks supporting AES67, RAVENNA and SMPTE ST-2110. AES67 provides Dante and Livewire+ compatibility.

Remote Control/Monitoring: OPTIMOD 5950 can be configured/controlled via any HTML5 browser. It also supports the SNMP v2 and Ember+ protocols.

Audience Measurement: Two internal Nielsen or Kantar Encoders are optionally available, allowing the FM and the DAB+/HD Radio signals to be watermarked independently.

Streaming Monitor Output: The processed FM or DAB+/HD Radio signals can be monitored remotely via IP, allowing processor adjustment in locations where an off-air signal is unavailable.

Optional µMPX Interface

Internal Audio Backup: Provides two hours of linear or 12 hours of AAC, MP3 or OPUS encoded audio.

Internet Streaming Decoder: Can be used as a backup audio source.

Diversity Delay: An adjustable delay can be inserted in the FM and/or digital path to ensure time-alignment of the FM and HD Radio/DAB+ signals at the receiver.

“True Peak” Limiter: The “True Peak” limiter in the digital processing path anticipates and controls peak levels following D/A conversion.

ITU BS.412 Multiplex Power Control: For countries requiring the multiplex power to be constrained to a specified limit, it will ensure compliance while controlling MPX power smoothly.

ITU-R BS.1770-4 Loudness Control: Facilitates compliance with target loudness recommendations like EBU R-128.

Silence Detection: Programmable silence detector is available for all inputs. It can generate alarms and allows automatic switching to a backup input/ audio storage.

Dual Power Supplies: OPTIMOD 5950 is equipped with monitored dual-redundant power supplies.

Bypass Relays: The analog, digital AES3 and the composite audio inputs and outputs have defeatable safety bypass relays that operate in case of a hardware failure.

174 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

RADIO.CLOUD

Radio.Cloud

Radio.Cloud is the first full-time cloud-native radio automation system, and went on-air five years ago. The platform encourages remote collaboration from anywhere in the world, both for live and voice-tracked on-air shifts. It's important to note that this is a full-blown automation system, not just a disaster recovery platform. While Radio.Cloud can be utilized for disaster recovery, there are already more than 160 affiliates across the United States and Europe using it for automation.

Radio.Cloud is unique because it's the first cloud-native automation/playout system certified as a partner of Amazon Web Services (AWS). The cloud-native approach means we built the platform from the ground up for the cloud rather than what AWS refers to as a "lift and shift" approach, moving a terrestrial platform to the cloud. The way we take advantage of the power of cloud computing is by using various features and functions such as storage in Amazon S3, speech-to-text with Amazon Transcribe and, most notably, AWS Lambda. Also known as serverless computing, Lambda allows us to switch on and off specific functions (such as transcoding or transcribing audio) and only pay for the milliseconds that the function is active. This event-driven architecture is a cornerstone of a cloud-native infrastructure.
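As a hedged illustration of that event-driven pattern, and not Radio.Cloud's actual code, an AWS Lambda handler triggered by an S3 upload could start an Amazon Transcribe job as sketched below; the bucket layout, job-naming scheme and media format are assumptions for the example.

import urllib.parse
import boto3

transcribe = boto3.client("transcribe")

def handler(event, context):
    # S3 put-object events carry the bucket and key of the uploaded audio file
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])

    transcribe.start_transcription_job(
        TranscriptionJobName=key.replace("/", "-"),     # assumed unique per file
        Media={"MediaFileUri": f"s3://{bucket}/{key}"},
        MediaFormat="mp3",                              # assumed upload format
        LanguageCode="en-US",
        OutputBucketName=bucket,                        # write the transcript back
    )
    return {"status": "transcription started", "key": key}

Because nothing runs between events, compute is billed only for the milliseconds the handler executes, which is the serverless economy the paragraph above describes.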

Because of this, Radio.Cloud harnesses the true power of the cloud and is able to seamlessly marry network and local content with custom voice tracks. This allows any client, whether a small cluster of two stations or a large network with hundreds of affiliates, to sound completely local and independent of one another even if they're all running off the same broadcast clock.

Radio.Cloud’s model provides protection against hardware failure and power outages. All content is stored in the cloud in multiple locations worldwide. The only hardware on-premises for 24/7 affiliates is a small Edge Gateway, which serves as local storage for playback, keeping stations on air during internet outages. The small hardware footprint also encourages sustainability, as there’s no longer a necessity for large server rooms and expensive electric bills.

Our singular browser-based system is built for the modern radio landscape and allows personalities to shine through by streamlining the content production process. Using processes only available in cloud-native computing, Radio.Cloud has changed the way radio is produced around the world.

175 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

RADIO.CLOUD

Radio.Cloud Live Studio

The first and, to date, only fully cloud-native radio platform that allows live broadcasting using a web browser, Radio.Cloud is also the only automation system certified as a partner of Amazon Web Services (AWS). The software combines all the functions needed to broadcast live on the radio — terrestrial, digital or internet-only. No matter the location, hosts, co-hosts or guests are presented with a configurable layout of channels dedicated to playlist audio, hotkeys, IP and SIP phone control and mic channels. Guests can be invited via a system-generated email link. The software allows you to drag and drop audio directly from your library into the playlist or hotkeys, and to reorder the playlist. The system uses the global network of AWS Regions and Availability Zones, assuring maximum availability, reliability and durability.

Radio.Cloud’s Live Studio opens the door for talent to go live from anywhere at any time, with any level of equipment. All that’s needed is a microphone, a computer or tablet, and an internet connection to produce a high-quality broadcast. The true technological ad-

vancement is that milliseconds latency and high throughput performance for live broadcasting is now possible through a web browser, even if collaborators are around the world. Seamless communication between hosts, co-hosts, and guests is simple, in the same way that Zoom allows conversations between friends and colleagues. Radio.Cloud achieves this by using other proprietary technologies.

The construction of conventional radio facilities can incorporate virtual servers locally or in a different location. To date, this has been achieved through what cloud-native providers consider a "lift and shift" approach. With a cloud-native approach, which enhances efficiency, performance and intelligent functionality, all collaborators are on equal footing and have a shared experience. This allows the cloud to be the central point of access rather than connecting to various virtual servers depending on location. Additionally, multiple contributors can use the software simultaneously to produce collaborative talk breaks, which are recorded and saved directly into the playlist. Another factor that sets the Live Studio apart is that both live and prerecorded audio are in stereo.

This software is the ideal next step for an industry that’s changing its workflows every day. The ability to work remotely is necessary, whether for a local station, syndicated program or a voice-tracked shift on a station across the country. The virtual console works on a touchscreen interface but can also communicate with your studio hardware (Wheatstone, Telos, etc.) or small midi mixing device. In fact, Radio.Cloud is fully interactive with consoles that have motorized faders, allowing for complete remote control. So rather than going from a full studio to a cloud-native virtual console overnight, a parallel setup is possible where the software controls a piece of hardware, and vice versa. This lets stations rethink how remote broadcasts are conducted.

Instead of hauling equipment to various locations, hosts can pack a mic and laptop and have all they need for a light footprint, high-quality, on-air shift.

176 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

SOUND4 BIG VOICE .CL

SOUND4 BIG VOICE .CL is the first real-time virtualized voice processor for broadcast applications.

No more latency than the hardware version of BIG VOICE! Our revolutionary architecture opens up unlimited application domains: file processing at up to 60x real-time speed for voice tracking, podcasts and more; a VST3 plug-in; and, of course, a live application with the ASIO I/O driver. Thanks to this, journalists, for example, can get the same sound from their laptop as they get when they are live in the studio. YouTubers can get a crazy voice, too ... The light weight of the application allows numerous mics on the same computer.

For sound setup and overall control, BIG VOICE .CL is armed with a super-ergonomic, user-friendly HTML5 GUI. A second GUI for live applications will be available soon; in this version, it is only possible to recall a preset or mute a mic.

SOUND4 BIG VOICE .CL is available for Windows operating systems (W10, W11, Server 2019, Server 2022) in these versions: Virtual Driver, ASIO, VST3 and the native DLL engine for software integrations. On Linux, only the DLL is available for product integration for the moment. In the upcoming months, we will propose a macOS-compatible version.

An unbeatable efficiency:

• Run hundreds of instances on the same CPU.

• Our solutions are simple, efficient, optimized and finally super robust!

• No more latency than conventional hardware products. Yes, this is finally possible!

• Lower investment.

A rental service:

We are proposing our solution as a service invoiced monthly. According to needs, we can also invoice over different periods. At SOUND4SOFT, all is included; we will not ask you for extra support fees. Updates are, of course, included. Our goal is to keep our customers satisfied year after year.

Our unbeatable price:

• $15/month with no commitment

• $13.50/month with a two-year commitment

• $12/month with a three-year commitment

About our technology:

For five years now, we have been migrating our technology from hardware to software. We are well known as an innovative company. Virtualization at SOUND4 does not mean running conventional software on virtualized platforms. We are definitely not thinking like this!

We come from hardware environments where we used the strongest DSPs, coded only in assembly for maximum efficiency. Our approach to porting our application is the same: we use low-level instructions offered directly by the CPUs rather than relying on operating-system layers far removed from those low-level possibilities.

177 Best of Show Awards 2023 | NAB Show
SOUND4
FOR MORE INFO

SOUND4 IMPACT .CL

IMPACT .CL is the virtualized version of our big flagship called SOUND4 IMPACT, the Big One for HD and DAB applications.

The ingredients of this ultimate processing chain: the best ingredients at every processing stage. This is the secret of SOUND4 IMPACT .CL.

Armed with the 192 kHz HQ Sound engine, a SOUND4-patented technology, IMPACT .CL offers a two-band AGC (settable to wide-band), Stereo FX, Tone FX, a de-clipper, six-band processing, four-band EQ and a final stage with an astonishing brickwall limiter. From purist needs to extremely competitive ones, SOUND4 IMPACT .CL is comfortable with all product types and requirements.

Yes, a big product, but so simple to drive: Driving a SOUND4 IMPACT .CL is accessible to everyone. We have included a large factory preset list. Thanks to this, you will be able to get the sound you want in just a few minutes! There is no risk of getting lost in sound setup. Undo/Redo helps you go back in just a few clicks if needed. The ability to compare a "changed" preset with a "saved" one, or with the original values contained in the factory preset, is accessible in each processing function at any time. This is outstandingly simple and efficient.

R128 loudness management: SOUND4 IMPACT .CL is fully compliant with the loudness standard and respects true peak; users can set the loudness target wherever they want!

No more than 450 MHz needed: SOUND4 IMPACT .CL is the only big HD/DAB FM processor designed for high-density applications. Where our competitors can run just four to six instances on big and costly architectures, we can run 150! How can we do this? We do not rest on operating-system layers far removed from the low-level possibilities; we use instructions offered directly by the CPUs. Remember, we come from hardware products using the strongest DSPs, coded only in assembly for maximum efficiency. Virtualization at SOUND4SOFT does not mean running conventional software on virtualized platforms. We are definitely not thinking like this!

Applications: This product is perfect for HD Radio and DAB and, in general, for all digital applications. IMPACT .CL is available with Windows driver I/Os, connected to the ASIO Windows driver, VST3 and as a DLL for software integration. IMPACT .CL can also do file processing on the go at 10x real-time speed or more, according to CPU capacity.

OS-compatible: On Windows operating systems (W10, W11, Server 2019, Server 2022), IMPACT .CL is available with Virtual Driver, ASIO, VST3 and the native DLL engine for software integrations. On Linux, only the DLL is available for product integration for the moment. In the upcoming months, we will propose a macOS-compatible version.

A rental service: We are proposing our solution as a service invoiced monthly. According to needs, we can also invoice over different periods. At SOUND4SOFT, all is included; we will not ask you for extra support fees. Updates are, of course, included. Our goal is to keep our customers satisfied year after year.

178 Best of Show Awards 2023 | NAB Show
SOUND4
FOR MORE INFO

SUPER HI-FI Program Director

Introducing Program Director, the world's first operating system for building and scaling amazing radio.

Program Director is a modern suite of AI-powered, web-based tools that saves time and money by completely reimagining the workflow for curating, programming, scheduling and broadcasting radio, so you can deliver world-class listening experiences in minutes instead of months.

Produce World-Class Radio Experiences in Minutes

Program Director’s revolutionary approach to radio programming includes powerful playlisting tools, daypart scheduling, smart clock rotations and AI-powered automated production that will take you from concept to customer experience in no time.

Manage All Your Media in One Place

Program Director includes powerful tools for organizing and understanding your music and content catalog in a simple web-based interface, and integrates directly with the industry’s largest catalog providers to allow your programmers unlimited access to any song ever recorded.

Distribute and Monetize Your Streams Any Way You Want

Program Director lets you choose your codecs, quality levels, streaming format, locations and time zones, and generate monetized streams with localized branding and detailed reporting that sound completely amazing with the click of a button.

Do More With Less

Program Director powers your potential with an intelligent, modern and blazing-fast toolset that feels even more responsive than the music services you use every day.

It includes hundreds of innovative features like built-in ChatGPT and ElevenLabs generative content, one-click DMCA compliance and powerful programming rules that simplify all of your daily tasks.

It’s also completely modular, so you can integrate features and capabilities directly into your own custom software or workflow with our modern APIs.

179 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

TELOS ALLIANCE Omnia Forza

From the legendary Omnia team comes Forza, a brand-new approach to the multiband audio processor.

Forza’s all-new AGCs and multiband limiters breathe new life into the traditional five-band processor design, yielding a sonic profile that delivers a consistent and polished audio signature without sounding overly processed.

Forza debuts as a stereo processor optimized for HD, DAB and streaming audio applications. As an ever-increasing number of listeners move to online listening, proper audio processing becomes as essential for this platform as for the FM signal. Omnia's highly regarded Sensus codec conditioning for low bitrate streams and a new LUFS target-driven ITU-R BS.1770 loudness controller for compliance with streaming platform requirements make it ideally suited for the task.

Expertly crafted “launch point” presets and an intuitive yet powerful user interface empower users of all skill levels and ensure instant sonic excellence for listeners. Central to Forza's smart UI is its interactive processing logic, seamlessly maintaining harmony between “under the hood” controls and settings. Anyone can confidently drive Forza without a Ph.D. in processing, while professionals will love its powerful simplicity in crafting their unique signature sound.

Forza also marks a deployment turning point in how a “processor” is defined. It lends itself to existing Telos Alliance hardware and software offerings, appearing as a mid-tier processing option in Z/IPStream X/2 and R/2. It also leverages the power and flexibility of delivery by Docker container, a broadly adopted method of delivering software that is being used by an increasing number of new Telos Alliance products.

To learn more, please visit https://telosalliance.com/forza

180 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

TELOS ALLIANCE Axia Altus

What happens when you move the console out of the studio and into a virtualized environment? A world of new possibilities emerges.

Axia Altus, our new software-based audio mixing console, brings the power and features of a traditional console to desktop and laptop computers, tablets and smartphones running any modern web browser, inviting users to rethink where their content is created and produced.

Altus provides full-function mixing — including eight virtual auxiliary mixers and integration with Telos VX broadcast phone systems — for a distributed and remote workforce, allowing collaboration on recorded programs and live broadcasts.

Altus is also ideal for any situation where fast deployment is necessary, such as setting up a temporary studio, building a low-cost disaster recovery center, or quickly launching a remote broadcast.

The possibilities are limited only by imagination, which is why Altus captures that “I didn't know I could do that!” feeling.

Altus is delivered as a Docker container, a method of software deployment used extensively in modern IT environments that provides a high degree of flexibility on- or off-premises to meet your needs now and in the future using non-proprietary COTS (commercial off-the-shelf) hardware. It is available as a one-time buyout or as a subscription. To learn more, please visit https://telosalliance.com/Altus

181 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

Tieline’s MPX II codec delivers composite FM multiplex (MPX) codec solutions for real-time network distribution of FM-MPX or MicroMPX (µMPX) signals to transmitter sites. The MPX II can transport two discrete composite FM-MPX signals from the studio to transmitters with return monitoring. It supports sending the full uncompressed FM signal, or compressed µMPX to deliver high-quality multiplexed FM signals at lower bitrates. Transport analog MPX (BNC) or MPX over AES192 to deliver a wide range of flexible composite encoder and decoder configurations for many different applications. An optional satellite tuner supports decoding DVB-S/DVB-S2 signals.

The Benefits of Tieline MPX Solutions: Sending transmission-ready FM composite signals from the studio allows broadcasters to maintain audio processing and RDS data insertion at the studio. This significantly reduces capital and operational costs by eliminating processing equipment from transmitter sites, which reduces on-site power consumption, wiring and space requirements, as well as site visits for service and support. Composite MPX over IP signals can be easily replicated and distributed using multicast and multi-unicast technologies and take advantage of rock-solid redundancy features like redundant streaming, RIST, FEC and automated SD card file failover.

Encode MPEG-TS over IP to transmit UDP streams over DVB satellite connections. When the satellite signal is received at the decoding MPX unit, a satellite tuner card decodes the DVB-S or DVB-S2 signal and the MPX unit outputs MPX composite directly into the exciter.

Applications:

• Encode/Decode up to two point-to-point MPX/µMPX composite signals.

• Encode/Decode two point-to-point MPX/µMPX composite signals with Encoder or Decoder FM Monitoring: at the encoder, monitor the demodulated local MPX input or return feed; at the decoder, monitor the MPX output or the secondary MPX feed, which is either the local MPX input or the secondary stream.

Key Features:

• Transport uncompressed MPX or µMPX composite signals to sites with support for GPIOs.

• Monitor demodulated MPX at the encoder or decoder and configure return FM confidence monitoring.

• Use a single MPX II codec to multicast uncompressed MPX or compressed µMPX signals, or multi-unicast µMPX signals to reduce CAPEX and OPEX at the studio and TX sites.

• MPX II operates as an encoder or decoder.

• Both analog and digital MPX signals available to support analog transmitters as networks transition to newer all-digital setups over time.

• Redundant streaming (hitless packet switching), RIST and Forward Error Correction

• SD card file failover.

• Dual internal PSUs.

• Full remote control using HTML5 Toolbox Web-GUI, Cloud Codec Controller; comprehensive automated alarms and SNMP monitoring.

MPX Signal Distribution:

Connect over WANs like the internet at low bitrates with µMPX to expand the number of STL sites that can receive composite signals, reducing hardware requirements at many STL sites significantly. This is possible because the µMPX compression algorithm is specifically designed for FM and maintains perfect peak control, which eliminates the need for an expensive audio processor at each transmitter site. µMPX facilitates multipoint distribution via multicasting or multiple unicasting using a single encoder, similar to replicating baseband IP audio streams in audio codecs.
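The multipoint idea is easy to picture with a plain UDP multicast socket: the encoder transmits each packet once, and every transmitter site that has joined the group receives its own copy. The Python sketch below is illustrative only; the group address and port are invented, and it says nothing about the actual µMPX wire format or Tieline's implementation.

import socket
import struct

GROUP, PORT = "239.1.2.3", 5004        # placeholder multicast group and port

def send(payload: bytes) -> None:
    # Transmit one packet once; all subscribed receivers get a copy.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)
    sock.sendto(payload, (GROUP, PORT))

def receive() -> bytes:
    # Join the multicast group (e.g., at a transmitter site) and wait for a packet.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    data, _addr = sock.recvfrom(65536)
    return data

if __name__ == "__main__":
    send(b"example composite payload")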

182 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

Layers Software Suite, Stream Module

Wheatstone presented a cloud version of Layers Stream running on AWS at the 2023 NAB Show as the first practical use of cloud data centers for broadcast applications.

Layers Stream software includes stream provisioning, audio processing and metadata support. It is part of the Wheatstone Layers Software Suite, which also has software modules for running instances of FM/HD processing and mixing in cloud data centers such as AWS or on-premise servers.

For the NAB Show, Wheatstone demonstrated streaming instances running on AWS that can be brought up and torn down rapidly and controlled through a browser-like user interface. Layers Stream includes audio processing designed specifically for streaming applications and Lua transformation filters to convert metadata input from any automation system into any required output format, including Triton Digital, for transmission to a CDN server.
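Conceptually, a transformation filter of this kind maps a now-playing event from one system's field names into the schema a downstream endpoint expects. The Python sketch below is a loose illustration of that idea only: Layers Stream implements its filters in Lua, and the field names, endpoint URL and output schema here are invented for the example rather than Triton Digital's real format.

import json
import urllib.request

def transform(event: dict) -> dict:
    # Map an assumed automation-system payload to an assumed target schema.
    return {
        "type": "track",
        "title": event.get("SongTitle", ""),
        "artist": event.get("ArtistName", ""),
        "duration_sec": int(event.get("RunTimeMs", 0)) // 1000,
        "started_at": event.get("AirTimeUtc"),
    }

def publish(event: dict, url: str = "https://metadata.example.com/nowplaying") -> int:
    body = json.dumps(transform(event)).encode("utf-8")
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status

# Example input as it might arrive from an automation system:
# publish({"SongTitle": "Example", "ArtistName": "Artist",
#          "RunTimeMs": "214000", "AirTimeUtc": "2023-06-01T15:04:05Z"})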

Wheatstone’s Layers Software Suite also has a Layers Mix module for television. Layers Mix has a full-featured mix engine

and Glass virtual mixers for the laptop, tablet or touchscreen with routing, logic, automixing and full native IP audio integration with major production automation systems.

Layers modules can be used for remote or REMI applications between studios and cloud data centers or for extending studio failover redundancy across multiple cloud data centers.

Wheatstone demonstrated Layers Stream and other cloud applications in their booth at the NAB Show.

183 Best of Show Awards 2023 | NAB Show
WHEATSTONE CORP.
FOR MORE INFO

WORLDCAST SYSTEMS

Ecreso FM Transmitter — AiO series

Ecreso FM AiO Series is WorldCast’s next-generation FM transmitter with the lowest Total Cost of Ownership.

The new AiO series of the Ecreso FM transmitter range is proof that aligning cutting-edge innovation with the lowest Total Cost of Ownership is possible. In the form of a 2U transmitter, the Ecreso FM AiO Series (available in 100W, 300W, 600W, 1 kW) is a compact, yet powerful product designed to achieve the highest efficiency possible, ensure rock-solid operations, improve user experience, and substantially lower operating costs.

Its software-oriented design reflects a shift towards a “more software, less hardware” paradigm, therefore limiting costs associated with purchasing and maintaining peripheral equipment. Specifically, the Ecreso AiO transmitter enables broadcasters to replace outdated hardware at transmission sites via software integration of the Audemat RDS Encoder, multiband processing and the APT IP Decoder. The latter represents a major step forward with the coming together of Ecreso technology with APT codec technology.

This means broadcasters can now benefit from an embedded APT IP Decoder — a unique software feature that directly ingests Audio over IP to the Direct to Channel Digital FM Modulator. The integrated decoder is compatible with both APT’s SureStream software and APTmpX, the unrivaled MPX compression algorithm.

This innovation in transmitter design brings high-added value to broadcasters looking to modernize their network with a transition to an all-IP broadcast chain.

Another unique onboard software technology is SmartFM, a worldwide patented artificial intelligence. The Ecreso FM AiO already delivers up to 76% efficiency, but when SmartFM is activated, the broadcaster will benefit from additional energy savings of up to 40%. How? By implementing a field-proven shift in its mode of operations, from fixed RF power to dynamic RF power — and without impacting audio quality and coverage.

In addition to the range of innovative software described here, the Ecreso FM AiO transmitter also offers advanced hardware features. With a new RF planar design, a hot-swappable power supply and a fan removable from the front panel, it is robust and very easy to maintain.

For remote configuration and monitoring, the Ecreso transmitter also offers a user-friendly GUI and SNMP support. Natively supported by Kybio, the network monitoring solution from WorldCast, the Ecreso FM AiO transmitter comes with a free one-year subscription to the SaaS offer.

Overall, the Ecreso FM AiO Series is the new generation of Ecreso transmitters within the lower power ranges of 100W to 1 kW. They are built upon the company’s 60 years of expertise in FM broadcast and are designed to meet the needs of modern broadcasters who must face rising costs while adapting to increasingly complex infrastructures with non-extensible resources. The Ecreso FM AiO is a solution they can reliably count on to broadcast high-quality audio while helping solve this business challenge.

184 Best of Show Awards 2023 | NAB Show
FOR MORE INFO

ADDER TECHNOLOGY ADDERView CCS-MV 4224

Before the age of digitization, control room operators relied upon analog technology to monitor processes and procedures. The introduction of computers brought a deluge of more comprehensive information to operators. However, this data was still captured, processed and presented within a number of disparate applications, each designed to monitor and control one aspect of the operation.

To compensate for this disparity, operators initially increased the number of monitors on their workstation to match the number of sources, or applications, they wanted to see. This resulted in a chaotic and cluttered workstation with many monitors, keyboards and mice, which was extremely unproductive when needing to make fast, and often mission-critical, decisions.

The ADDERView CCS-Pro revolutionized control room efficiency by allowing operators to interact with multiple computers using one set of peripherals. With the introduction of the ADDERView CCS-MV 4224 (CCS-MV 4224), Adder is tackling the constraints of the modern control room while maintaining this level of efficiency. Operators can achieve real-time access to multiple PCs through consolidated monitors without compromising the user experience.

Part of Adder’s command and control portfolio, and designed to put users in total control, the CCS-MV 4224, a desktop multiviewer switch, delivers up to four different video, audio and USB signals to a single workstation in a user customizable window layout across one or two monitors.

CCS-MV 4224 delivers flexibility and choice for the user in the most demanding workspace environments. Users can instantly take control of the target computer by simply moving the mouse between windows, providing the experience of a single desktop.

As desktop real estate continues to be compromised in busy control room environments, with increasing numbers of computers and more and larger screens, the CCS-MV 4224 is designed to be a complete control room solution that eliminates desktop clutter and presents critical information the way the user needs it.

Designed by users, for users, the CCS-MV 4224 combats workplace pressures and eases mental load by combining flexible, dynamic user experiences, giving operators the freedom to use the product the way they need and to focus on the task at hand rather than the technology. The multiviewer switch empowers users to take total authority over the resources they need, when they need them, in 4K UHD resolution.

Designed with high user adoption of features and functions at its core, the multiviewer switch presents a fully configurable, color-coded window display onscreen, enabling users to quickly choose how data is presented. Simply by moving the mouse cursor between windows, users can switch between sources in real time and without latency, meaning they can focus on the mission and not the technology.


AJA VIDEO SYSTEMS

AJA Dante AV 4K-T and 4K-R

AJA’s new Dante AV 4K-T and 4K-R transmitter/receiver converters enable seamless transport and control of ultra-low latency, professional quality 4K/UltraHD/2K/HD/VESA Dante video and audio to/from 12G-SDI or HDMI 2.0 devices over a 1 GigE Dante AV network. Audio and video streams from the AJA Dante AV 4K-T and 4K-R can be easily managed using the popular Dante Controller software. Building on AJA’s OG-DANTE-12GAM 12G-SDI/Dante 64-channel audio embedder/disembedder, these new devices allow AV teams, professionals who manage video distribution in a networked facility and systems integrators to seamlessly incorporate high-quality, visually lossless video into AV-over-IP environments spanning stadiums, arenas, theaters, churches, schools, office complexes, hotels, conference rooms and more.

Harnessing Audinate’s new Dante AV Ultra high image-quality solution for standard 1 GigE networks, AJA Dante AV 4K-T converts 4K/UHD/2K/HD/ VESA SDI and HDMI to Dante AV Ultra signals while Dante AV 4K-R supports conversion of Dante AV Ultra signals to 4K/UHD/2K/HD/VESA SDI and HDMI. Both devices offer unprecedented low latency and time synchronization, ensuring perfect lip sync to in-venue screens and for external broadcasts and streams.

AJA Dante AV 4K-T and 4K-R also allow AV teams to leverage existing IP infrastructure and digital AV equipment across multiple physical locations for simple video and boardroom conferencing technology integration. Built-in networked control and flexibility make it easy to deploy and manage video walls and digital signage at a lower cost, with end-user routing to screens or speakers supported in-venue. AJA Dante AV 4K-T and 4K-R also pair well with professional baseband devices, including the AJA Ki Pro Ultra 12G and Io 4K Plus, and can be used with AJA KUMO SDI routers to easily route signals into or out of a Dante AV infrastructure.

AJA’s Dante AV 4K-T and Dante AV 4K-R are helping set the foundation for a new ProAV IP video standard, while delivering the ease of use, multivendor interoperability and integrated control experience that customers have come to expect from AJA, making the products a strong contender for a Future/Sound & Video Contractor Best of Show Award.


AVER INFORMATION INC. USA PTZ310UNV2

The PTZ310UNV2 packs powerful, intuitive features to support a variety of streaming and broadcasting applications. Notably, the PTZ310UNV2 is the first of its kind to integrate the latest edition of NDI, promoting low latency for live broadcasting and content sharing.

AVer’s PTZ310UNV2, the world’s first UHD NDI HX3 camera, features 4K resolution, 12X optical zoom and 8MP images at 60fps. Designed for broadcasting and streaming, the PTZ310UNV2 features the latest format of NDI, with latency reduced to less than 100ms while using only a fraction of the bandwidth of NDI high bandwidth. The PTZ310UNV2 enables simultaneous 4Kp60 output over HDMI, NDI and IP to provide the best image quality and video streaming capabilities. Featuring built-in AI functions, like SmartShoot, the PTZ310UNV2 provides an intuitive yet simplified user experience.

Featuring AI functionality, the PTZ310UNV2 optimizes camera controls for automatic content capture between preset areas and can create a multicamera feel across preset zones. The PTZ310UNV2 integrates framing technology, which instantly adjusts the field of view (FOV) to capture multiple people, fit everyone on-screen and easily record presentations, lectures and more with a simple touch. The PTZ310UNV2 features Zoom and Microsoft Teams video modes for a simplified user experience with popular video conferencing platforms. Additionally, the SRT protocol within the PTZ310UNV2 optimizes video streaming performance across unpredictable networks. Both cameras are available only in white, are NDAA- and TAA-compliant and are protected by the industry-leading AVerCare warranty.


BZBGEAR BG-ADAMO-JR

The next generation of PTZ cameras, the BG-ADAMO-JR, is unrivaled in its class in the live-stream broadcasting market.

The BG-ADAMO-JR auto-tracking PTZ camera is loaded with features, including a full complement of video connections and 1080p@60Hz resolution. The 3G-SDI connection enables long-distance cable runs without compromising image quality.

Advanced AI auto-tracking uses the latest human detection AI algorithms, providing the ultimate in convenience and efficiency without needing a camera operator or additional hardware.

The NDI|HX 3 and Dante AV-H models add versatility, using existing network infrastructure to deliver exceptional video over the network while keeping latency imperceptible.

Compose shots ahead of time using up to 255 programmable presets, 10 of which are accessible from the IR remote. With a Micro SD card slot capable of storing up to 1 TB of video footage, you can start recording on the fly when other connections are unavailable.

The lens is designed with an advanced auto-focusing algorithm that promptly snaps into focus with dependable accuracy and stability. The 3D noise reduction technology combined with the low-noise CMOS sensor ensures impeccable image clarity. Choose from the 12x optical zoom lens with a 70.3° wide angle, the 20x (60.04°) or the 30x (58.1°).

Packed with an array of video connections including HDMI, 3G-SDI, USB 2.0, USB 3.0 and LAN, the BG-ADAMO-JR boasts unparalleled functionality with sophistication and style. The dual-stream USB facilitates concurrent mainstream and substream outputs, while the HDMI, SDI and USB connections can transmit 1080p video and audio signals simultaneously.

Innovative Design

Available in classic white or black finishes, the chassis of the BG-ADAMO-JR is designed to be as functional and attractive as its formidable feature set. The high-stability substructure provides a solid foundation for the precision lens, which delivers 1080p video at 60fps. Eliminating the need for bulky external accessories, the control arms feature distinctive built-in tally lights that illuminate green or red with 360-degree visibility.

Control

Control the camera via RS232, RS422, the web GUI, the IR remote or BG-PTZ-Control, BZBGEAR's free proprietary PTZ control software for Windows, Mac and iOS (with Android available soon).
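For readers curious what serial PTZ control of a camera in this class typically looks like, here is a minimal sketch that recalls a stored preset over RS232. It assumes the camera accepts standard VISCA commands, which the description above does not confirm, and the serial port name and baud rate are illustrative only.

```python
# Minimal sketch: recall a PTZ preset over RS232 using the standard VISCA
# CAM_Memory Recall command (8x 01 04 3F 02 pp FF). The port, baud rate and
# the assumption that this camera speaks VISCA are all illustrative.
import serial  # pip install pyserial

def recall_preset(port: str, preset: int, camera_addr: int = 1) -> None:
    """Send a VISCA preset-recall command to a camera on a serial port."""
    packet = bytes([0x80 + camera_addr, 0x01, 0x04, 0x3F, 0x02, preset & 0x7F, 0xFF])
    with serial.Serial(port, baudrate=9600, timeout=1) as ser:
        ser.write(packet)
        reply = ser.read(16)          # ACK/completion bytes, if the camera answers
        print(f"Sent {packet.hex()}  reply: {reply.hex() or '(none)'}")

# Example: recall preset 3 on the first camera in the chain.
recall_preset("/dev/ttyUSB0", preset=3)
```

In practice the BG-PTZ-Control app or the web GUI hides this layer entirely; the sketch simply illustrates what the RS232/RS422 path carries.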

With 1080p@60 resolution, AI auto-tracking, flexible connection options and seamless IP streaming via NDI|HX 3 and Dante AV-H, the BG-ADAMO-JR is the ideal solution for those looking to add automation to their workflow. The BG-ADAMO-JR is a high-performance PTZ camera delivering exceptional functionality and value, with a starting MSRP of just $1,499.


BZBGEAR BG-COMMANDER-PRO

Demand perfection. Command with precision.

Introducing the BG-Commander-Pro, a PTZ camera joystick controller with an integrated 7-inch touch screen. It supports real-time image previews of connected cameras via their RTSP streams on the touch screen and can output up to a 3x3 video wall to an external display through the HDMI interface. This controller is designed to simplify video viewing and management. Built on Android 11, it supports H.265/H.264 decoding and easily handles up to nine cameras simultaneously.

With HDMI projection and total PTZ control including presets, focus, zoom and exposure, you can easily control your PTZ cameras for better broadcast production. The Commander Pro is also customizable with single-IP multichannel acquisition and ONVIF protocol support, meaning you can add up to 2048 devices.

The Commander Pro was designed to be user-friendly. It can be upgraded via a standard USB flash drive, used with an external mouse and keyboard for easier interface control, and can even record RTSP streams or capture screenshots on the spot using the available Micro-SD expansion storage slot. It also features four RS422/RS485 ports and one RS-232 control port, making it ideal for large-scale video projects.

The Commander Pro’s unique capabilities, such as support for a 3x3 video wall with up to nine camera inputs, mouse-and-keyboard control and Power over Ethernet (PoE), make for a clean, easy setup. All in all, the Commander Pro provides a simple UI that gives users access to professional-grade controls, management and editing tools.
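Because the controller previews cameras via their RTSP streams, a rough idea of what such an RTSP pull looks like on the software side is sketched below. This is a generic OpenCV example with a hypothetical stream URL, not the Commander Pro's internal code.

```python
# Generic RTSP preview sketch with OpenCV (pip install opencv-python).
# The URL is hypothetical; substitute the RTSP address of your own PTZ camera.
import cv2

STREAM_URL = "rtsp://192.0.2.20:554/stream1"

cap = cv2.VideoCapture(STREAM_URL)
if not cap.isOpened():
    raise RuntimeError(f"Could not open RTSP stream: {STREAM_URL}")

while True:
    ok, frame = cap.read()                  # pull the next decoded frame
    if not ok:
        break                               # stream ended or network hiccup
    cv2.imshow("PTZ preview", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):   # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```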


EPIPHAN VIDEO Pearl Family

The ultimate all-in-one video production systems, Pearl-2, Mini and Nano can stream, record, encode, live switch and more. Each model of Pearl was built with rugged components to withstand prolonged daily use. And though powerful, each Pearl model was designed to be accessible: the intuitive, one-touch streaming and recording functionality allows brand-new users to capture their content in exceptional quality, while veteran integrators can spend minimal time training new users and take advantage of the AV-over-IP philosophy to fit it seamlessly into existing infrastructure.

Each Pearl system is a HETMA-approved device, proving to be a versatile, reliable solution for lecture capture and class recording. With integrations with Kaltura, Panopto and YuJa video platforms, educators can register the systems to any CMS as soon as the Pearl is plugged in. Compatible with virtually every piece of hardware, from cameras to microphones to room controls to operating systems, the Pearl family of systems can fit anywhere and deliver a massive volume of HD-quality content.

Through HDMI, SDI, NDI or SRT, users have the flexibility to ingest their signals into Pearl systems. And, thanks to Epiphan Cloud, multiple devices can all be operated, monitored and maintained from a single centralized dashboard. Producers can now be anywhere in the world and ensure their Pearls are performing at a high level at a glance.

Whether the engine behind a brand-new venture or a complement to enhance existing infrastructure, Pearl-2, Mini and Nano provide exceptional quality, reliability, power and flexibility to achieve content goals.


EPIPHAN VIDEO AV.io+ Capture Cards

Capture you can count on. Portable, reliable and simple to use, AV.io+ capture cards make it easy to acquire video signals from SDI and HDMI sources with zero lag or latency. Ready to work right out of the box, they require no setup beyond plugging them in to capture exceptional HD video, making them an easy fit in any workflow. Compatible with Windows, Mac or Linux, there are no drivers to install or settings to configure to achieve incredible results. Connect the card to any device and it will automatically set the best scaling, aspect ratio and other settings based on the software in use, including when swapping between sources.

Though the AV.io+’s plug-and-play functionality is convenient, operators always have the option to tailor their devices to their specific needs. The Config Tool lets users set specific EDIDs, customize output resolution and label each device.

The AV.io+ line has also added 3.5 mm analog audio inputs. While some users may prefer to capture audio straight from their SDI or HDMI source, they now have the option of connecting and capturing professional-grade microphones and audio mixers on the fly. Once again, no driver installation is required.

Compact, compatible with any device and built to be tough as nails, AV.io+ allows users to capture content with greater convenience. Whether relying on autodetection or customizing the setup, these capture cards can fit into any workflow and be put to work immediately.


EPIPHAN VIDEO Epiphan Connect

Epiphan Connect turns Microsoft Teams meetings into a remote and hybrid video production asset. Add a Teams meeting URL to Epiphan Connect and producers are able to access isolated video and audio from each participant. These isolated assets are available via SRT and can be added to any SRT-enabled production tool for recording or streaming.

When Connect isolates Teams video, it takes the highest quality available, higher than a standard Teams call, up to 1080p. And because this operation is entirely cloud-based, there’s no strain on local hardware or networks to achieve higher quality.

Teams’ screen share function is also acquired through SRT. This persistent feature allows any participant to share their screen without disrupting the production’s layouts. Plus, participants are able to see how they look within the finished production through a virtual confidence monitor. An SRT return feed enabled in the admin panel activates the monitor and helps guests hit their cues.

Epiphan Connect bridges the gap between quality and convenience. By unlocking a powerful, accessible and familiar tool like Microsoft Teams, remote and hybrid video production is simplified at every level.
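For producers wondering how an isolated SRT feed from Connect is actually pulled into a workflow, the sketch below shows one common option: recording the feed to disk with ffmpeg. It assumes an ffmpeg build compiled with libsrt, and the SRT URL is a placeholder; in practice Epiphan Connect issues the per-participant URLs.

```python
# Minimal sketch: record an SRT feed to disk with ffmpeg.
# Requires an ffmpeg build with libsrt; the URL below is a placeholder.
import subprocess

SRT_URL = "srt://ingest.example.com:9000?mode=caller"   # hypothetical feed URL
OUTPUT = "participant_iso.ts"

subprocess.run(
    [
        "ffmpeg",
        "-i", SRT_URL,     # pull the isolated participant feed
        "-c", "copy",      # no re-encode, just remux into a transport stream
        OUTPUT,
    ],
    check=True,
)
```

The same URL can instead be added as an SRT source in any SRT-capable switcher or recorder, which is the workflow the entry describes.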


FOR-A MFR-3100EX Production Processor

Producing a live multimedia production that excites attendees doesn’t need to be complicated or require a ton of gear. FOR-A’s new MFR-3100EX all-in-one production processor with NDI support is a switcher, router, multiviewer, downstream keyer and streaming device in one powerful 4RU package. Other features include PTZ camera control, NDI/NDI|HX I/O and RTMP/SRT video streaming. This highly versatile unit is also an ideal choice for compact system building.

The MFR-3100EX configures a matrix of up to 64x72, with up to four inputs/four outputs for 8K signals or 16 inputs/18 outputs for 4K UHD. Multiple units can be used together, enabling matrix expansion and redundancy. All input channels may be monitored via web browser, and optional functionality like frame synchronization, AVDL and audio MUX/DEMUX is a simple matter of adding an expansion card to an input or output slot.

Users can send or receive video and audio, control signals, and tally control over IP. With NewTek NDI Tools installed, the MFR-3100EX supports meeting collaboration software such as Zoom, Microsoft Teams and Google Meet. The MFR-3100EX also enables users to upload video directly to YouTube, Facebook and other streaming services.

SSD storage enables recording and playing of video and audio. Material supplied via SDI or over the network can be encoded and saved as files. Saved files can be decoded for output via SDI or over the network. As core components, FOR-A routing switchers can incorporate redundancy to ensure nonstop operation in case of any issues.


G&D NORTH AMERICA INC.

KVM Extender VisionXS-CPU-Type-C-UHR

KVM extenders let you extend computer signals over IP or dedicated cables. This allows you to operate a computer from a workstation up to 10,000 meters away. Using KVM, you can easily separate computers and users to make your workstation more ergonomic and increase the security of your IT infrastructure.

The systems consist of a computer module (CPU) and a console module (CON). Together with a central module, they can build a powerful matrix system that allows for more flexibility and better performance. KVM systems help you operate complex IT infrastructures simply and intuitively.

The new VisionXS-CPU-Type-C-UHR is a matrix-compatible extender module that enables the integration of modern sources such as USB-C computers, notebooks, tablets and even smartphones, following the trend toward an improved and universal USB standard. Video, keyboard and mouse signals, as well as audio, can all be transmitted via one cable using the Type-C port. Additionally, even the module’s power supply can be provided via the connector. The Type-C variant transmits DisplayPort 1.2 signals in Alternate Mode and enables a maximum resolution of up to 4096x2160 @ 60 Hz.

G&D relies on its proven bluedec lossless compression for the VisionXS series, which enables authentic, pixel-perfect video experiences. In addition to the video signal and the keyboard and mouse signals, the computer module transmits embedded stereo audio as well as optional USB classes HID (Human Interface Device) and mass storage, all conveniently via one interface. This saves cabling effort and thus simplifies installation. The high-performance KVM extenders are matrix-compatible and offer a wide range of useful features for the optimal user experience.

With their significantly smaller size, the products can also be used in applications where space is a critical factor. In addition, there are new and efficient mounting options that require fewer screws. For the new VisionXS modules, G&D relies on a new click mechanism making it much easier to install the devices.

To maximize the reliability of the overall installation, additional redundancy concepts can be developed. Implementing a transmission redundancy solution usually requires the use of additional hardware. VisionXS series products provide built-in transmission redundancy, without additional hardware. The function can be easily enabled via a software feature key, which can also be activated at a later date. With this innovative solution, security and reliability can be increased in installations without incurring additional hardware costs.


IHSE USA Draco G-Flex KVM Matrix

IHSE expands the capabilities of the popular Draco Flex KVM Matrix Systems with the introduction of the G-Flex Matrix series. With an integrated Draco tera IP gateway card, the Draco G-Flex Series provides system designers with the ability to bridge multiple KVM matrix systems over an existing IP network. It combines the high levels of security and performance of the Draco tera KVM system with the flexibility and ease of connectivity inherent in IP-based communication. Therefore, it allows users to access remote computers and interact in real time with minimal latency and no visible artifacts.

The Draco G-Flex series can maximize efficiency by multiplexing up to eight full HD channels over a single duplex fiber networked connection between KVM matrix frames. This is extremely important where limited cable runs are available for adding more sources or workstations. For applications where both fiber and copper are specified, the Draco G-Flex option is the perfect solution where localized connections can be distributed on traditional copper or fiber connections and shared connections can be accomplished over an IP gateway.

The Draco G-Flex matrix starts with 16 physical ports and eight gateway ports in 1RU. The series can be expanded up to 152 physical ports and eight gateway ports in 4RU. Systems are available in 1G copper, 1G fiber, 3G copper or 3G fiber. For systems needing a mixture of fiber and copper, the Draco G-Flex can be customized to fit almost any type of hybrid fiber/copper requirement.

In addition to the high level of security for data transmitted throughout IHSE’s KVM switching and extension systems, the Draco G-Flex features IHSE’s Secure Core technology, which prevents direct access from the IP network to data within the KVM system. This maintains the integrity of the KVM system and serves as a countermeasure to potential cyberattacks.

With the Draco G-Flex series, you simply plug the desired extender unit into an open port on the matrix and the built-in control system will automatically recognize the type of device and assign it as a source or destination device. This is accomplished through IHSE’s proprietary Flex-Port technology, which provides instant switching capabilities for all the popular video formats and resolutions.

It is a simple operation to set up and configure with the on-screen display (OSD) or through tera tool, a free downloadable IHSE utility program for configuration and system management. For those who prefer a third-party control application, the Draco G-Flex can be configured to operate with many popular control systems using the optional IHSE API protocol package. Along with its compact design and low cost, the G-Flex incorporates features from the Draco tera enterprise series of switches, including SNMPv3, LDAPS, a multilingual on-screen display and encrypted communication for maximum security (for the API, the Draco tera configuration tool and Matrix Grid).

With a small footprint and ruggedized chassis design, the Draco G-Flex provides a space-saving solution where both centralized and remote access are desired. It is especially suited for mobile production, command-and-control systems, production studios and campus-wide classrooms.


IHSE USA Draco CON App

The Draco CON App is an easy-to-use KVM extender app that gives users complete access to an IHSE KVM Matrix System directly from any networked computer running Windows, Linux or MacOS. After a quick initial setup, the Draco CON App becomes an emulated KVM extender that interfaces with any IHSE control system. By leveraging IHSE’s secure IP gateway interface connected to the IHSE Matrix, users can scale their systems by simply connecting a laptop or PC to the system’s network.

The main goal of the Draco CON App is to provide easy access to a KVM system without the need for additional hardware or cabling. Users only need an Ethernet connection linked directly to the same network where the KVM matrix resides. With sufficient bandwidth, users can interface with any computer CPU connected to the matrix, maintaining the low latency normally found with a physical IHSE CON unit.

A major benefit for existing customers with IHSE Matrix Systems is the ability to expand their system using simple IP-based network switches or routers. This gives system administrators the flexibility to add users to the KVM system without sacrificing the secure core of the proprietary KVM switching network.

Other powerful features include the ability to quickly access multiple PCs across several KVM systems linked via the IP Gateway cards. In campus-wide environments where multiple KVM systems need to be linked for sharing resources, the Draco CON App provides the ability to quickly access any computer no matter which system it may be linked to. Technicians will appreciate this new feature, as it supplements the control and maintenance of any computer on the system for easy access when troubleshooting.


NETGEAR AV M4250 Network Switches

The M4250 switch series comprises 13 models, spanning desktop to rackmount form factors with a variety of port types and PoE capabilities for any application. These switches are the perfect product for any type of 1 Gb streaming application, including IPMX, with 10 Gig uplink and fiber ports on some models.

These switches have been certified by a number of manufacturers for quick, reliable setup when using NDI and Dante, for example. NETGEAR uses a powerful profile-based approach to switch configuration that lets you simply select the protocol or type of installation (audio, video, lighting control, etc.), select which ports are using that protocol and that’s it. We do the rest — no more sifting through online documents to find the right settings for your install!

The available models cover installs small and large, from new 10-port desktop models to a 48-port, 2RU model with a whopping 2,880W PoE budget. These models also offer multiple mounting options for both inside and outside of the rack. All rackmount models are designed with the ports in the back for a clean look in an AV rack, but include hardware for reverse mounting in an industrial rack, too. And threaded holes on the front and bottom allow VESA and alternative mounting options.

Planar Venue Pro VX Series

The Planar Venue Pro VX Series is a family of indoor fine-pixel-pitch LED video wall displays delivering exceptional in-camera visual performance for virtual production (VP) and extended reality (XR), as well as on-camera visual performance for broadcasters. The series combines high scan and refresh rates with high brightness and narrow pixel pitches, making it well-suited for LED XR stages in markets as diverse as film and video production, corporate, broadcast, rental and staging, and live events.

Designed to support hanging, stacked or wall-mounted installations, the Planar Venue Pro VX Series expands on the capabilities of the industry leader’s first solution designed to revolutionize the production of realistic in-screen and on-screen content, the Planar CarbonLight VX Series. With support for HDR-ready content, a wide color gamut including up to the DCI-P3 color space, and compatibility with a wide range of cameras, the Planar Venue Pro VX Series delivers the unmatched visual performance and deployment versatility today’s companies need to develop lifelike recorded, streamed or broadcast video content.

The release of the Planar Venue Pro VX Series bolsters the Planar Studios initiative and expands the company’s portfolio of visualization solutions designed to support VP and XR. The newest addition will be backed by Planar’s dedicated VP and XR team, which includes pre-sales and post-sales support from local experts. This reinforces the industry leader’s commitment to making such applications streamlined and available to mainstream markets.

The new Planar Venue Pro VX Series is designed to reduce the complexity of setup and teardown, featuring magnetically-attachable cabinets with quick locks for single-person installation. The series also includes mechanical features to suit both temporary applications and fixed installations.

The series is available in 1.9 and 2.5 millimeter pixel pitches and compatible with Brompton LED processors and LED controllers from Colorlight.


SHURE INC. UniPlex Cardioid Lavalier Microphone

A speaker’s voice is essential to conveying their message, especially during high-stakes broadcasts, presentations, conferences, educational seminars and more. Broadcast industry professionals and innovators are constantly in search of more effective solutions for protecting the speaker’s voice and message in these challenging environments.

Shure’s UniPlex is a 5mm subminiature cardioid lavalier microphone engineered to be the ideal, discreet solution for speaking applications where rejection of stage noise, audience noise and close-proximity presenters is pivotal to conveying the message.

Designed for corporate presentations and guest speakers in conference rooms, lecture halls, theaters and arenas, UniPlex’s UL4 unidirectional lavalier microphone delivers excellent, isolated audio capture, minimizing feedback with its custom-tuned cardioid element.

Previously, visually intrusive cardioid lavaliers were necessary to provide the gain-before-feedback and sound quality required in professional speaking and presentation situations. UniPlex provides the performance of larger lapel microphones in a significantly smaller design, simultaneously outperforming the audio quality of similarly sized, subminiature lavaliers. This makes UniPlex the perfect solution for speaking applications where sound-quality-for-size cannot be sacrificed, even in the noisiest environments.

UniPlex’s 1.6mm Shure Plex Cable technology ensures long-lasting durability and comfort for the wearer, while making it easy for crews and engineers to apply the mics effectively and efficiently. The cable is immune to kinks and memory effects, delivering unmatched performance thanks to an innovative spiral construction with redundant shielding. The cable is also fully paintable, a versatile solution suitable for any speaker or wardrobe.

Shure has developed the Plex lavalier lines, including the UniPlex and award-winning DuraPlex product portfolios, in conjunction with world-class wireless solutions like Axient Digital, ULX-D, QLX-D and SLX-D. These wireless solutions enable top audio professionals to execute events and performances flawlessly in the most complex setups and constrained spectrum situations on earth.


TV PRO GEAR Eco-Friendly Solar-Powered News Van

Currently, traditional news vans rely on satellite uplinks or microwave transmission, which can be expensive and have limitations such as delays and line-of-sight issues.

TV Pro Gear uses bonded-cellular technology to construct news vans that do not require satellite uplinks or microwave transmitters. These vans use solar cells on the roof and large lithium-ion batteries to power the equipment. Transmission is done through bonded-cellular technology, allowing for faster and more cost-effective delivery of video signals.

The use of bonded-cellular technology reduces the cost of building and operating news vans by one-third compared to traditional news vans. The use of solar cells and large batteries also provides an eco-friendly solution, reducing the need for generators and other fossil fuel-powered equipment. This technology also eliminates the delays associated with satellite transmission and line-of-sight issues with microwave transmission, allowing for faster and more efficient transmission of video signals.

Bonded cellular combines multiple 5G modems, even from different carriers, into a single aggregated signal, transmitting clean 4K video even in areas with poor network conditions. Sending a signal via bonded cellular is as easy as making a phone call and can be done from anywhere in the world. It provides 30 megabits of bandwidth with 22 Mb of headroom, ensuring reliable transmission in challenging locations.
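To make the bonding idea concrete, here is a simplified, purely conceptual sketch of how a target bitrate can be split across several modems in proportion to each modem's measured throughput. The modem names and throughput figures are invented for illustration, and this is not TV Pro Gear's actual bonding algorithm.

```python
# Conceptual sketch of bonded-cellular bitrate allocation: split a target
# bitrate across modems in proportion to each modem's measured throughput.
# Illustrative only; not the vendor's real algorithm.

def allocate_bitrate(target_mbps: float, modem_throughput_mbps: dict) -> dict:
    total = sum(modem_throughput_mbps.values())
    if total < target_mbps:
        raise ValueError("Combined modem capacity is below the target bitrate")
    return {
        modem: target_mbps * capacity / total
        for modem, capacity in modem_throughput_mbps.items()
    }

# Example: four modems on different carriers carrying a 30 Mbps program feed.
modems = {"carrier_a": 18.0, "carrier_b": 12.0, "carrier_c": 15.0, "carrier_d": 7.0}
print(allocate_bitrate(30.0, modems))
# Headroom is the leftover capacity: sum(modems.values()) - 30.0 = 22.0 Mbps,
# matching the 30-megabit-plus-22-Mb-headroom figure described above.
```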

They are quiet: Solar-powered vans use electric motors, which are much quieter than gas-powered engines. This makes them ideal for use in urban areas, where noise pollution can be a problem.

Additionally, the use of a heat pump for heating and air-conditioning, which requires half as much power as a normal HVAC unit, coupled with solar panels that charge lithium-ion batteries, eliminates the need for a gas generator.

Not only are these vans noise-free and eco-friendly, but they have also been equipped as “mobile studios on wheels.” Inside each van, there is a 70-inch LED monitor connected to a video server, which allows for remote interviews. The interviewee can sit in front of the screen with any background, whether still or in motion. A robotic PTZ (pan, tilt, zoom) camera is installed 7 feet away from the talent to capture their image. The camera’s feed is then transmitted via bonded cellular back to the station. With bidirectional bonded-cellular technology, the person in the van can see the news anchor in real time and respond to questions with minimal latency.

TV Pro Gear’s dedication to innovation and customer satisfaction has made us a leading player in the broadcast industry.

Specs:

• Six solar panels generate 925 watts

• Runs on lithium-ion batteries at night

• Uses bonded-cellular for transmission

• HVAC heat pump = 50% power savings

• 4K PTZ camera on electric lift

• Prompter with talent monitor

• Talent sits in front of 70-inch monitor

• Video server for backgrounds

• Requires only one person to operate

• No need for a generator

• No need for a microwave

• No need for a satellite uplink


VIZRT GROUP Flex Control Panel

Over the course of the last several years, the demand for high-quality distributed production has grown beyond measure. We recognize the need for operators to be able to adapt to any workflow — and the Flex does just that, with the power of NDI. Offering greater control and connectivity than ever before, it’s NewTek’s most flexible and powerful control panel yet.

NewTek’s Flex Control Panel offers unprecedented control from anywhere, with built-in innovative features that offer operators greater precision and flexibility. Now, operators can take direct control of PTZ devices, audio connections, audio and video mixing and talkback — all connected with NDI.

Flex addresses two major headaches for operators anywhere: the significant risk of live production failure and being tethered to a single location due to video switchers with fixed cable runs and complex setups. Flex Control Panel is the only NewTek control panel that connects simply to a network with NDI and can control any video switcher on that network. With Flex, operators can control any TriCaster on a network from anywhere, giving them ultimate control and production freedom.

By offering audio and video mixing, PTZ control and talkback built directly into the panel, the Flex Control Panel significantly reduces the margin for error, giving operators the peace of mind to achieve a high-quality production without depending on external connections.

“The Flex Control Panel is made for the convenience of operators — it’s a smaller device, with NDI audio inputs and outputs, and an NDI-based controller that allows the managing of any TriCaster on a network, from anywhere. Flex offers a tactile control of audio, PTZ cameras and virtual camera control. It has the connectivity, control and ease of use every busy producer needs,” said Richard Evans, senior content producer at NewTek.

Flex offers tactile control of audio. With the addition of audio I/O, Flex instantly expands the on-board I/O of any TriCaster with the option to add external sources directly to the NDI ecosystem. Not only can the panel connect to any switcher on a network using NDI, Flex also offers operators all they need for distributed productions by working with all current TriCaster models, mastering control of a TriCaster 2 Elite as easily as a TriCaster Mini X. At home, in the studio or on the road, the Flex Control Panel scales with any production, eliminating complexity to better support operators everywhere, from anywhere.


AVID Media Composer

Earlier this year, Avid delivered what is considered the workflow enabler media production teams have been asking for to significantly speed up the post-production process between picture and sound workflows: Media Composer interoperability with Pro Tools.

Avid collaborated with the 40,000-member global Avid Community Association to bring this innovation to market, and Media Composer now speaks the same language as Pro Tools. This interoperability makes the hand-off between editorial and audio post teams quick and easy. Post-production teams in TV, film and education can now simply choose “Pro Tools Session” from the Media Composer “Export to File” command and select the parameters and options they want to include in their export. Pro Tools users can then just double-click the resulting session file to open it in Pro Tools along with the exported video and all edit points. That simple!

Avid is uniquely positioned to streamline the process of exporting complex sequences with video, audio and metadata down to a single step, combining everything in one exported .PTX file that can be opened directly in Pro Tools. This capability enables teams to complete projects faster and eliminate costly, time-consuming mistakes, while accelerating content creation and delivery for post-production workflows. This is the first step in delivering ground-breaking collaborative workflows between Media Composer and Pro Tools, which was identified as a critical need during customer workshops and will allow Avid to introduce a differentiated set of capabilities not available in the market today.

Avid also delivered the ability to produce world-class audio within Media Composer by offering full support for Avid’s completely reimagined MBOX Studio audio interface for both Macs and Windows. Media Composer now provides editors with a powerful solution for recording, punch-ins and multichannel monitoring of sequences in up to 7.1 surround sound. Access to the exceptional preamps and audio converters in MBOX Studio also enables users to capture every sonic nuance of every performance with low-latency monitoring.


HP Z8 Fury

Z by HP innovation starts with the customer: workstations purpose-built for media and entertainment to deliver the performance benefits needed. We work with ISVs to test performance and make the right hardware recommendations, and offer certifications from Autodesk (Flame) and Avid (Media Composer and Pro Tools).

The latest Z8 Fury workstation desktop includes up to 56 CPU cores, four NVIDIA RTX 6000 Ada Generation professional GPUs and NVIDIA ConnectX-6 DX SmartNICs in the Z8 Fury G5.

HP’s innovation with the HP Z8 Fury starts with the customer, and it delivers the performance benefits needed for virtual production, visual effects, modeling and animation, editing, motion graphics, color grading or finishing.

A workstation like the Z8 Fury is behind it all. The Z8 Fury is suited to VFX artists — those using Autodesk Flame and Foundry Nuke — as well as color graders using software such as Blackmagic Design’s DaVinci Resolve, and CG artists using software such as SideFX’s Houdini.


SHURE INC.

AD600 Axient Digital Spectrum Manager

In today’s fast-paced and challenging RF environments, industry professionals seek solutions that enable comprehensive management and coordination. RF environments require real-time scanning for planning and managing frequency coordination across professional audio applications. Shure introduced the Axient Digital AD600 Digital Spectrum Manager to provide industry professionals the tools they need for real-time RF control.

The Shure Axient Digital AD600 Digital Spectrum Manager helps RF coordinators and audio professionals monitor real-time RF information to plan and coordinate frequencies in the most challenging RF environments. This includes highly televised broadcast media and entertainment like Super Bowl LVI and the international Tokyo-based athletic competition held in the summer of 2021.

The AD600 Axient Digital Spectrum Manager is the successor to the Shure AXT600 and continues to build upon Shure’s portfolio of next-generation technology that equips the industry with powerful and comprehensive RF coordination workflow tools. The AD600 boasts faster scanning that finds available frequencies and analyzes RF spectrum in real time, streamlining site surveys and spectrum management.

The AD600 Digital Spectrum Manager delivers wide-band spectrum scanning and monitoring from 174 MHz to 2.0 GHz, spectrum analysis, and frequency coordination in a single rack unit. Six antenna connections deliver multiple coverage options while Dante connectivity provides advanced audio monitoring of your network. Users can lean on the USB port to export, import or save backup scans, event logs and other important data. When used with additional Axient Digital solutions, AD600 users can benefit from interference avoidance features available with ShowLink, a feature unique to the Axient Digital ecosystem that enables real-time control and communication with all ADX transmitters.

With AD600, users can plan, scan and deploy frequencies to their wireless network, or dive deep for complete control in challenging RF environments thanks to the guided coordination features. These features have been put to the test at major events around the world, with RF coordinator Steve Caldwell trusting the AD600 and Axient Digital in some of the world’s most demanding RF situations.

Speaking of its use at the international Tokyo-based athletic competition in 2021, Caldwell said, “In my opinion, the best feature of the AD600 is its ability to sample up to six different antenna (or distribution network) sources concurrently. This allowed me to see comparable levels of four separate antenna inputs (the Axient Digital Quadversity distribution) and two localized wideband antennas. This ability to compare the six discrete antennas allowed quite accurate localization of any transmitter in the Tokyo stadium. As the six antennas were varied in both location and beamwidth direction, including two antennas on the opposite side of the stadium on an RF over Fiber (RFoF) network, the ability to locate a transmitter based purely on RSSI was remarkably accurate.”

The new AD600 Digital Spectrum Manager ensures RF coordinators and audio professionals have the most accurate RF information in the most demanding audio applications.


SOUND DEVICES

A20-Nexus

The Sound Devices A20-Nexus is an ultra-high-performance wireless microphone receiver in a compact half-rack-wide chassis. It features eight independent, true-diversity channels that can be expanded to 12 or 16 channels via software licenses available for purchase or rent from Sound Devices’ online store or certified resellers.

With Sound Devices’ SpectraBand technology, the A20-Nexus receiver offers an unprecedented global tuning range of 470–1525 MHz and exceptional RF filtering capabilities that allow it to utilize legal RF frequencies everywhere.

The A20-Nexus features Sound Devices’ proprietary NexLink remote control, an innovative concept in wireless microphone receivers. With NexLink, the A20-Nexus allows full remote control of transmitters via an integrated, long-distance RF link. The A20-Nexus also features an integrated RTSA (Real Time Spectrum Analyzer), allowing spectrum visibility while decoding audio.

The A20-Nexus can be mounted in a standard 19-inch rack or easily placed remotely near the microphone transmitters thanks to its Power over Ethernet+ (PoE+) capability, its Dante audio-over-IP networking and its built-in web server for control via phone, tablet or computer.

Additionally, the A20-Nexus can be docked to Sound Devices 833, 888 or Scorpio mixer-recorders using the A20-QuickDock accessory, which provides convenient power, audio and timecode connections with no additional cables. This accessory allows the A20-Nexus to connect to and disconnect from the 833, 888 or Scorpio in seconds with no tools, and to be remote-mounted when desired.

The A20-Nexus receiver is perfect for productions requiring full remote operation. Its combination of features allows it to replace multiple devices in any RF professional’s wireless toolkit, its flexible power and control options make it adaptable to any workflow, and its svelte form factor makes it equally at home in a bag, on a sound cart, or taking up minimal space in a fixed install.
