A SPECIAL GUIDE TO ALL THE NOMINATED & WINNING PRODUCTS FROM FUTURE’S BEST OF 2021 AWARDS AWARDED BY
WINNER 7Mountains DiNA DiNA is a complete cloud newsroom solution built for journalists and story creators who work across platforms. With DiNA you bring your staff from digital/online news and linear news together into one unified workflow and equip them to work from anywhere. When choosing core technology platforms for a modern news operation, whether for greenfield projects or for improving and modernising existing news operations, choosing lightweight tools that can run in the cloud is key. Cloud tools allow working from anywhere, speed up time to air, and free up resources otherwise spent on on-premises systems maintenance and support. DiNA has disrupted the traditional newsroom market by offering a journalist tool built from the ground up on modern web technologies and with the end user in mind. DiNA unifies storytelling across all publishing platforms, such as Facebook, Twitter and YouTube, as well as for scheduled/linear news and news for websites (CMS systems). With DiNA, the planning, creating and publishing of news for scheduled
TV shows becomes as easy as creating news for social media platforms, with all journalist teams collaborating across all news desks. With the cloud newsroom tool DiNA, broadcasters and media houses equip their staff to work creatively and efficiently from anywhere, covering the news where it happens with a true multi-platform workflow. Key to DiNA is that all planning, writing and publishing to all platforms is done within the same tool, breaking down the silos between news departments. DiNA is offered as a subscription-based Solution-as-a-Service tool. DiNA integrates with graphics systems, MAM systems, automation systems, booking and resource systems, and more, through its API architecture. With DiNA at the heart of all storytelling, users can prepare material for live broadcast and for any other publishing platform, and use built-in tools such as graphics, video and image search, news feed integrations, translations, captions, AI recognition, and more, all within one unified user interface.
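The multi-platform idea, one story prepared once and rendered per outlet, can be sketched in a few lines. This is an illustrative fan-out in Python, not DiNA's actual API; the adapter names and the `Story` shape are invented:

```python
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    body: str

# Hypothetical platform adapters: each formats the same story for one outlet.
def to_twitter(story):
    # Social post, trimmed to the platform's character limit.
    return f"{story.headline} - {story.body}"[:280]

def to_cms(story):
    # Website CMS payload.
    return {"title": story.headline, "html": f"<p>{story.body}</p>"}

def to_rundown(story):
    # Linear-news rundown item with a slug and a script.
    return {"slug": story.headline.lower().replace(" ", "-"),
            "script": story.body}

ADAPTERS = {"twitter": to_twitter, "cms": to_cms, "rundown": to_rundown}

def publish_everywhere(story, platforms):
    """Fan one story out to every requested platform from a single tool."""
    return {p: ADAPTERS[p](story) for p in platforms}
```

In a real system each adapter would call the platform's publishing API; the point is that the story is authored once and every outlet-specific rendering hangs off the same object.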
Best Of Show At IBC 2020
WINNER AJA Video Systems Diskover Media Edition AJA recently launched Diskover Media Edition. The new software is based on open source roots and lets media and entertainment professionals easily search, find, and analyze media asset data originating from on-premises, remote, and cloud storage, aggregating associated metadata into a unified global index. The solution enables users to make more informed data decisions, a key capability as the M&E industry is on track to create more data in the next three years than in the last three decades. AJA Diskover Media Edition is designed for a range of industry professionals, from executives to systems administrators, IT managers, operational personnel, creatives and beyond. The software allows users to index hundreds of petabytes of data to easily locate files, analyze them, and pinpoint misallocated resources. It ultimately saves companies time and expenditure by helping them to identify wasted storage space, aging and unused files, data changes, and more. With AJA Diskover Media Edition, metadata can be seamlessly harvested to add business context and insights to files to inform data decisions and business processes, and to streamline workflows. It includes custom plug-ins that address the specific needs of M&E professionals around the world, regardless of the physical or virtual platforms utilized to store media content. The software further enables users to search across multiple platforms simultaneously, and discover and present data in a master index, while generating reports and cost analysis for a range of roles across productions or enterprises. Features include:
• A single master index: Ensure all files across cloud, remote, and on-premises storage are up-to-date, and easily control data access and rights on a per-user level.
• Snapshots in time: Tap into a history of indexes to compare storage over time, project future growth demands, and identify areas of concern.
• Elasticsearch: Scale to any environment size and type with an open-source core that simplifies asset search and provides insights into associated storage costs. Easily integrate with external APIs and order management platforms.
• Tagging: Use tags to support workflow actions and approval processes, such as file deletion on an expired asset.
• Analysis: Advance media workflows and monetization efforts with access to robust technical metadata via a simple search. Gain insights that provide team members with in-depth information about each asset and where it landed for more informed decision-making.
• Support for hybrid environments: Seamlessly run Diskover across on-premises and cloud-based storage services.
• An intuitive web-browser UI: Access Diskover from anywhere with an internet connection and easily deploy multiple indexers globally at other facilities, all reporting into one common platform and master index.
• Security: Read-only access to the file systems and a web-based UI not directly connected to the storage prevent file corruption, deletion, or unwanted changes to assets.
• Global asset overview: Provide remote teams with a global view of assets for each given production, group of productions, or client, empowering their own relationship with the data.
• Cost analysis tools: Make more informed data-related decisions about time, resources, and investments to reduce operating costs.
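The "aging and unused files" analysis described above can be illustrated with a toy index. Diskover does store per-file metadata (path, size, access times) in Elasticsearch; the records, sizes and thresholds below are made up for the example:

```python
import time

DAY = 86_400
now = time.time()

# Simulated index entries, loosely modelled on the per-file metadata
# a Diskover-style crawler would record for each asset.
index = [
    {"path": "/projects/showA/cut1.mov",  "size": 120_000_000_000, "last_access": now - 400 * DAY},
    {"path": "/projects/showA/cut2.mov",  "size": 95_000_000_000,  "last_access": now - 10 * DAY},
    {"path": "/cloud/archive/b-roll.mxf", "size": 300_000_000_000, "last_access": now - 900 * DAY},
]

def aging_files(index, older_than_days=365):
    """Files not touched within the window: candidates for cheaper tiers or deletion."""
    cutoff = now - older_than_days * DAY
    return [f for f in index if f["last_access"] < cutoff]

def wasted_bytes(index, older_than_days=365):
    """Total size of aging files, i.e. reclaimable or re-tierable storage."""
    return sum(f["size"] for f in aging_files(index, older_than_days))
```

Run over a real index of hundreds of petabytes, this is the kind of query that surfaces misallocated resources and drives the cost-analysis reports.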
WINNER Amazon Web Services Nimble Studio Amazon Nimble Studio is a new service that empowers creative studios to produce visual effects, animation, and interactive content entirely in the cloud, from storyboard sketch to final deliverable. Launched in April 2021, the service enables customers to set up a content production studio in hours instead of weeks, with elasticity that provides near-limitless scale and on-demand rendering access. Customers can rapidly onboard and collaborate with artists from anywhere in the world, and produce content faster and more cost-efficiently using virtual workstations, high-speed storage, and scalable rendering across the globe. There are no upfront fees to use Nimble Studio, and customers only pay for the underlying AWS services used. Historically, studios relied on local high-performance workstations connected to shared file storage systems over low-latency, on-premises networks to create high-quality visual effects, animation, and other creative content. Costly infrastructure decisions were made up front, with studios aiming to balance capacity and demand in sourcing hardware used for several years and the space to house it. This limited studios to artistic talent located nearby (or those willing to move). With Amazon Nimble Studio, customers can use the cloud for remote and traditional studio setups. Studios can scale up compute resources and team size based on project demand, and shut down those resources after delivery. This flexibility is vital to studios as demand for premium content rises. Visual effects and animation, which are compute-intensive to render, feature in nearly every modern production. Keeping pace with ever-growing demand causes content production studios to over-provision compute, networking, and storage infrastructure for peak capacity, which proves expensive, difficult to manage, and hard to scale.
Amazon Nimble Studio leverages the power of the cloud to transform content production, making it much faster, easier, and less expensive to create content that consumers want to watch. Once set up on Amazon Nimble Studio, creative talent can instantly access high-performance workstations powered by Amazon Elastic Compute Cloud (EC2) G4dn instances with
NVIDIA Graphics Processing Units (GPUs), shared file storage from Amazon FSx, and ultra-low-latency streaming via the AWS global network. Nimble Studio lets content production studios start with as few resources as needed, scale up resources when rendering demands peak, and spin them back down once projects are complete. Content production studios can onboard remote teams from around the world and provide them access to just the right amount of high-performance infrastructure for only as long as needed, all without having to procure, set up, and manage local workstations, file systems, and low-latency networking. Amazon Nimble Studio supports both the Windows and Linux operating systems so that artists can work with their preferred third-party creative applications. Additionally, studios can use custom software applications. A transformative technology, Amazon Nimble Studio is well deserving of an NAB Best of Show Award this year.
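The economics behind that elasticity are simple to sketch: a farm provisioned for peak is billed whether busy or idle, while pay-per-use compute is billed only for node-hours consumed. The rates and utilisation figures below are invented for illustration and are not AWS pricing:

```python
def fixed_farm_cost(node_cost_per_hour, nodes_for_peak, hours_in_period):
    """On-prem style: provision for peak demand, pay for every hour, busy or idle."""
    return node_cost_per_hour * nodes_for_peak * hours_in_period

def on_demand_cost(node_cost_per_hour, node_hours_actually_used):
    """Cloud style: pay only for the node-hours a project actually consumes."""
    return node_cost_per_hour * node_hours_actually_used

# Illustrative month: peak demand of 100 render nodes, 720 hours in the month,
# but only 20,000 node-hours of rendering actually performed.
RATE = 0.5  # hypothetical cost per node-hour
fixed = fixed_farm_cost(RATE, 100, 720)      # pay for peak all month
elastic = on_demand_cost(RATE, 20_000)       # pay for work done
```

Under these made-up numbers the fixed farm costs 36,000 against 10,000 for on-demand, which is the over-provisioning gap the text describes.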
WINNER Appear NEO Series Appear has expanded its product portfolio with the recent launch of the NEO Series, a line of server-based compression products combining Appear's hardware pedigree with the utility of software-based technology. The NEO Series delivers all the advantages of server-based compression solutions with cost-effective software transcoding, free of the hassle of software deployment and operation. The first available product in the NEO Series, the NEO 10, is designed to meet the needs of operators launching OTT and IPTV services.

The issue
Spurred by modern CPUs becoming both cheaper and more powerful, software-based transcoding has expanded beyond predominantly OTT applications to facilitate more live content applications. Yet despite these gains in CPU technology, integrating, deploying and operating software on general hardware is a complicated and time-consuming process for most operators, especially if clusters of individual servers supporting different functions are required.

The solution
As software-based compression grows in popularity, rather than just patching over the problem, Appear wanted to create a product that actively made its customers' lives easier. To meet this challenge, Appear designed the NEO Series. It combines the efficiency and flexibility of software encoding, installed on dedicated high-performance hardware from Appear. The NEO 10 simplifies the typical operational model of combining clusters of servers with different roles into a stand-alone product with fully integrated management, transcoding and streaming functions. Customers deploying the NEO 10 do not require specialist knowledge of deploying and operating general-purpose server-based software compression solutions, which makes the NEO 10 a "plug and play" solution straight out of the box; no further software deployment, installation or integration is needed.
The details
The NEO 10 is targeted at live distribution environments where OTT and/or IPTV AVC transcoding is desired. The NEO software upgrade system ensures that all NEO products are future-proof, taking advantage of the fully integrated nature of the NEO Series to upgrade the server without worrying about software dependencies such as the operating system, drivers and supporting applications. Further key features of the NEO Series include:
• Extensive monitoring: The NEO 10 provides information on the transcoding process, as well as the health of the server (fans, internal voltages, temperatures, power consumption and disk health), all readily available from the user interface.
• Multiple format support: The NEO 10 supports IP UDP/RTP and SRT input, and IP UDP/RTP and packaged output.
• Simple and effective: Deploying the NEO Series does not require knowledge of deploying and operating servers. It combines Appear's vast knowledge of building innovative and effective live transcoding workflows on custom hardware (like its X Platform and XC Platform) with modern server technology.
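The threshold-based server health monitoring described above can be sketched generically. The metric names and limits here are illustrative, not the NEO 10's actual telemetry schema:

```python
# Allowed operating ranges for each monitored metric (invented values).
LIMITS = {
    "fan_rpm":     (2_000, 12_000),
    "voltage_12v": (11.4, 12.6),
    "cpu_temp_c":  (0, 85),
    "power_w":     (0, 750),
}

def health_alarms(telemetry, limits=LIMITS):
    """Return every metric whose reading falls outside its allowed range."""
    alarms = {}
    for name, value in telemetry.items():
        lo, hi = limits[name]
        if not (lo <= value <= hi):
            alarms[name] = value
    return alarms
```

A monitoring UI would poll readings like these and raise only the out-of-range metrics, which is what lets a non-specialist operator see at a glance that the unit is healthy.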
WINNER CGI Media Solutions NewsBoard 21 The new OpenMedia NewsBoard by CGI: the 'go-to' cross-media planning tool for journalists. With its modern and unified web user interface, OpenMedia NewsBoard enables journalists and editorial teams to organize their story production process from anywhere, at any time, providing:
Third-party integration Implement additional functionalities and workflows from third-party systems. In addition to the proven OMIS interface framework, NewsBoard offers a variety of integration interfaces in the front-end. OpenMedia NewsBoard provides the following features:
Ease and Flexibility
It features a fully customisable and open widget architecture that allows journalists in large teams to stay flexible, adapting views to their individual needs. Its modern, unified web-based user interface for journalists and editorial teams energises the collaboration process.

Story-Centric Approach
NewsBoard's dashboard solution helps journalists and editorial teams organize the story creation process from research to planning and distribution, all within its unified web-based user interface. Story-centric workflows and a state-of-the-art architecture allow it to scale from a few individuals to teams of thousands, always focusing on maximum transparency and collaboration around actual topics. All departments (television, radio and online) gain complete visibility into topics, including the status of tasks and planned products.

Powerful Research Tool via integrated agency wires
Keep information visible at all times. NewsBoard's widget-based dashboards enable the surfacing of breaking news from multiple online sources. From social media to agency feeds, NewsBoard makes sure your journalistic teams are on top of the story.

Modern architecture
OpenMedia NewsBoard is based on a microservice architecture to provide cloud-native scalability. Updates can be implemented faster, with less effort spent on user acceptance testing (UAT). Component-driven development speeds up the development of further features and possibilities.
• Web browser-based interface
• Story-centric approach
• Intuitive handling
• Open widget architecture
• Customizable dashboard
• Social media friendly
• Able to handle hundreds of concurrent users
• Stateless UI - New
• No user session handling in the backend - New
• Possibilities for load balancing and real-time health checks for all services - New
• Dedicated board and topic management - New
• Visual subdivision of research and planning by the Board Manager and Topic Manager - New
• Uses a standard usability framework (Google Material) - New
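A stateless UI with no backend session handling is what makes the advertised load balancing possible: any healthy service instance can answer any request, with no session affinity to preserve. A minimal sketch of health-aware round-robin routing, with placeholder backend names:

```python
import itertools

class LoadBalancer:
    """Round-robin over backends, skipping any marked unhealthy by health checks."""

    def __init__(self, backends):
        self.health = {b: True for b in backends}
        self._ring = itertools.cycle(backends)

    def mark(self, backend, healthy):
        # Called by a periodic health check against each service instance.
        self.health[backend] = healthy

    def pick(self):
        # Because the services are stateless, any healthy backend will do.
        for _ in range(len(self.health)):
            b = next(self._ring)
            if self.health[b]:
                return b
        raise RuntimeError("no healthy backends")
```

With session state in the backend this scheme breaks (requests must stick to one instance); removing it is precisely what the "Stateless UI" and "no user session handling" bullets buy.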
WINNER Glensound Minferno 3 In a new world where staying apart became the norm rather than getting together, the need for commentary and news units that could be used off-tube in a commentator's home was key. Glensound's Minferno 3 is a Dante-based commentary unit providing a simple headset interface to a remote commentator. Facilities include four outputs (PGM + 3 talkback) and four monitoring inputs, clearly laid out for non-technical commentators to use on their own. Using the Dante Virtual Soundcard allows multi-channel interfacing with the commentator's PC. Once the audio is on the PC it can interface with software-based remote links such as Unity Intercom, IPDTL, or Luci Live, giving multi-channel interfacing between the studio and the commentator. The Minferno 3 has a built-in web server that operates on a separate IP address to the Dante network. This means that an engineer at the studio can log into the remote web server of the Minferno 3 and control the unit, removing any worries or concerns from the user. The engineer can change system configurations, monitor levels, and most importantly adjust the gain of the unit's mic amp, ensuring that levels are maintained within the desired range. Combining this remote level adjustment with the high-specification mic amp on the Minferno 3, and the highly regarded Glensound Referee input management compressor, ensures that the commentator's audio is always clear, balanced and free from clipping. The Minferno 3 is a very simple unit that fits the brief for remote production of sports and news reporting eloquently, without fanfare, just getting the job done.
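The remote gain adjustment described above amounts to a simple correction: measure the commentator's peaks, compare against a target, and trim the mic-amp gain within its range. A hedged sketch with invented target and range values, not Glensound's actual control protocol:

```python
def trim_gain(current_gain_db, measured_peak_dbfs,
              target_peak_dbfs=-12.0, gain_range=(0.0, 70.0)):
    """
    Suggest a new mic-amp gain so the commentator's peaks land near the target.
    Illustrative only: the target level and gain range are assumptions.
    """
    correction = target_peak_dbfs - measured_peak_dbfs  # dB of trim needed
    new_gain = current_gain_db + correction
    lo, hi = gain_range
    return max(lo, min(hi, new_gain))  # clamp to the amp's physical range
```

An engineer (or an input-management compressor like the Referee) applies this continuously, so a quiet commentator gets more gain and a shouting one gets pulled back before clipping.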
WINNER JW Player Broadcast Live In Q4 of 2021, JW Player, a leading video software and data insights platform, unveiled its Broadcast Live solution. This integration of the former VUALTO solution into the JW Player platform provides broadcasters and other content owners with adaptable, scalable, secure and intelligent solutions for video orchestration and encryption. The newly released Broadcast Live is the industry's most flexible, robust and scalable video orchestration solution for Live, VOD, Live2VOD and VOD2Live. Developed specifically for media workflows, the new generation of Broadcast Live brings integration with encoders, streaming servers, workflow rules and DRM into a single set of APIs and GUI. The pluggable architecture also allows for easy integration with third-party and existing customer systems. With comprehensive channel configuration, event scheduling, monitoring, clipping and syndication, Broadcast Live enables premium viewing experiences with significant cost savings. The launch of this offering is the culmination of JW Player's acquisition of VUALTO in May 2021 to create a comprehensive video intelligence platform that empowers customers with independence and control in today's Digital Video Economy. The combined result is a single platform for broadcast-quality live and on-demand video delivery across mobile, web and OTT platforms; secure content delivery with industry-leading DRM services; and unique insights, intelligence and monetization features to help grow revenue. Customer success: Broadcaster and leading media and entertainment company ITV, which reaches over 40 million viewers every week, needed a solution to drive incremental views and provide increased exposure for its more niche events.
The broadcasting giant selected JW Player to deliver the necessary infrastructure to enable the spinning up of pop-up channels for the streaming of live events via its ITV Hub, which is available on 28 platforms and over 90% of connected televisions sold in the UK. JW Player's Broadcast Live solution provides ITV with dynamic event orchestration, allowing for the scaling up of resources for a live streaming event immediately before it begins and the ability to scale down once it is over. This enables the broadcaster to save on cloud-hosting costs that would otherwise accrue from having the service continuously running. Following a successful first test with ITV via its ITV Hub platform for the British Touring Car Championship, JW Player will deliver further support for a calendar of live event streams, including more niche sporting events. "JW Player's solutions and expertise have been invaluable in delivering the necessary infrastructure to ensure the commercial success of some of our more niche sporting events, and we look forward to the next stage of the project," said Vinay Kumar Gupta, Senior Architect at ITV Video Platform.
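The cost case for pop-up channels is straightforward arithmetic: an event-orchestrated channel is billed only around each event, while an always-on channel runs around the clock. The lead/trail times and event list below are illustrative:

```python
def billed_hours(events, lead_minutes=15, trail_minutes=5):
    """
    Event-based orchestration: a pop-up channel runs only from shortly before
    each event's start until shortly after it ends. Events are (start_h, end_h)
    on a common clock; all figures are invented for illustration.
    """
    total = 0.0
    for start, end in events:
        total += (end - start) + (lead_minutes + trail_minutes) / 60.0
    return total

# Three three-hour race broadcasts in one week, 14:00 to 17:00 each.
races = [(14.0, 17.0)] * 3
always_on = 24 * 7  # a channel left running for the whole week
```

Under these numbers the pop-up channel bills roughly 10 channel-hours against 168 for an always-on channel, which is the saving dynamic orchestration delivers.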
WINNER Litepanels Gemini 1x1 Hard RGBWW LED Panel Litepanels Gemini 1x1 Hard uses advanced lensing technology to create an RGBWW panel which casts intense white or richly saturated color further than any other 1x1 LED panel, with an outstanding output of over 3,000 lux at 10ft (3m) and a narrow 46-degree beam angle. This output gives operators the unique versatility to choose powerful hard light or diffused softer light from the same fixture. The incredible power of Gemini Hard expands the creative potential of a 1x1 panel. With more power to command and control, a wide range of diffusion and creative light-shaping tools can be used that are not possible with less powerful fixtures, combined with full RGB control of over 16 million colors, 300 gels and 11 customizable special effects. With CRI and TLCI ratings of 98, Gemini 1x1 Hard delivers consistently accurate and dependable white light in any CCT from 2,700K to 10,000K with no color shift or flicker at any framerate, shutter angle, or intensity from 100% down to 0.1%. Everything in the scene, from close-up candlelight to brilliant sunlight, appears true to life and the skin tones of on-screen talent are rendered perfectly. Tough enough to meet the rigorous demands of daily set life in the studio or on location, the compact and lightweight Gemini weighs just 13.2lbs/6kg for rapid rigging and easy transportation. Manufactured to industrial standards using durable, robust materials, Gemini is designed to protect your investment for years. It is technically challenging to create color-accurate and powerful LED lighting; Gemini 1x1 Hard delivers on both points. Like all LED panels in the Gemini range, Gemini 1x1 Hard produces highly accurate full-spectrum white light as well as full RGB output and a range of creative cinematic effects.
Its remarkably compact and lightweight form weighs just 13.2 lbs (6kg), including yoke and power supply, and has a maximum draw of just 200W, yet Gemini 1x1 Hard produces an impressive output 20% brighter than a 200W HMI. This enables productions to make savings in manpower and energy consumption, making them cheaper to run and cleaner for the environment. The exceptional output of the Gemini 1x1 Hard is possible because of Litepanels' advanced lensing technology. Individual red, green, blue, tungsten and daylight LEDs are tightly lensed to ensure that every bit of light emitted from each LED is captured and delivered forward. This also allows heat to dissipate easily, allowing more power to run through the LEDs for greater output. Gemini 1x1 Hard gives operators more options. The increased output is used by some as an opportunity for greater diffusion and light shaping, while others value the power to offset bright sunshine. It's a versatile production tool suitable for a wide variety of applications.
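The quoted output figure can be extrapolated to other throws with the inverse-square law, treating the panel as an approximate point source at these working distances:

```python
def illuminance(lux_at_ref, ref_distance_m, distance_m):
    """
    Point-source approximation: illuminance falls with the square of distance.
    E(d) = E(ref) * (ref / d)^2. A 1x1 panel is only approximately a point
    source, so treat this as a rough planning estimate, not photometric data.
    """
    return lux_at_ref * (ref_distance_m / distance_m) ** 2
```

So the fixture's stated 3,000 lux at 3m works out to roughly 750 lux at 6m and 12,000 lux at 1.5m, before any diffusion is added.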
WINNER LiveU Air Control LiveU is pleased to announce Air Control, a new orchestration solution designed to elevate any live production workflow. LiveU, the leader in live video streaming and remote production, launched the solution in the run-up to the planned IBC2021 timeframe. Air Control is a broadcast-grade solution, enabling production crews to collaborate in a virtual production booth and manage on-air talent and guests joining from any device without the pain points of using consumer video conferencing tools or apps. Guests and talent receive a single click-to-join link that works from their Mac, PC, or smartphone, while the production crew has a sophisticated, yet easy to use, web solution giving the behind-the-scenes staff in-person-like visibility and control over a production, virtually. It begins with the production staff building a show inside Air Control. Using a secure interface with granular permissions, the production crew creates the show, sets all the important information (the show's name, the start time and duration) and selects how the show's feeds should be used to display program and prompter, as fed by the LiveU Video Return service. Finally, the crew adds and invites the on-air talent and guests. These guests are stored in a private address book and can be invited to join a show without the crew ever having to leave the solution. Once it's time for the show to start, the crew launches into the Production Dock, a single-pane-of-glass experience with the ability to collaborate with other crew members in the production booth; monitor the production with an integrated multiviewer; see, listen, and speak to the guests and talent; and finally route broadcast-quality video to a LiveU output channel, be that a physical server like the LU2000 or LU4000 to output SDI or NDI, or a cloud channel outputting SRT or NDI. Meanwhile, the talent or guest interface is focused on simplicity.
After clicking the link they received, they are directed to the Air Control streaming client where they can see themselves and the program feed. That's it. They can hear any instructions spoken to them by the crew using the Production Dock and, once routed to a LiveU output channel, receive the mix-minus from the station-side audio mixer.
The use cases for Air Control range from simple on-air guest management, replacing consumer-grade video conferencing tools that are not optimized for the quality or rigorous coordination that define the industry, to large productions where the need to coordinate multi-national crews and talent is an ever-present challenge. With low latency and greater reliability, teams can get online faster and work smarter in one digital environment, no matter where they are. Air Control can be deployed on top of any LiveU infrastructure, providing a frictionless path to deployment. In our changed reality, Air Control makes workflows sustainable, enabling anyone in the production process to perform their job from anywhere, ensuring broadcasters can cope with new or unexpected challenges and reducing the environmental impact of shipping equipment or travelling to a studio to get reliable broadcast-grade live video.
WINNER M2A Media M2A CONNECT | Cloud Frame Rate Converter M2A Media, InSync Technology and Hiscale have worked in partnership to launch the first live, motion-compensated, pay-as-you-use frame rate converter orchestrated in the cloud. Integrated into M2A CONNECT, our cloud acquisition, aggregation and distribution system, M2A CONNECT | Cloud Frame Rate Converter enables scalable, hardware-quality conversion quickly and without the upfront costs associated with buying traditional kit. This is a transformative development, opening up a world of possibilities in dynamic content conversion without costly investments. As the global broadcast industry transitions to the cloud for all manner of activities, it is only natural that frame rate conversion also utilises this technology. Hardware-based solutions continue to serve organisations well but do not suit every workflow. Examples include OTT providers who operate delivery workflows almost entirely in the cloud, where on-premise hardware frame rate converters break the workflow, adding complexity and additional operational costs. Other examples include large-scale sporting events where large banks of hardware-based converters are purchased at great cost but then not used in volume for months at a time. Live, motion-compensated frame rate conversion in the cloud allows broadcasters to maintain their workflows in the cloud and gives them the flexibility to scale their capability on demand. The technology behind M2A CONNECT | Cloud Frame Rate Converter is FrameFormer from InSync, who have two decades of experience in both hardware and software conversion development. The combination of M2A CONNECT and FrameFormer is the change our industry needs. For too long broadcasters have been hamstrung by on-prem, equipment-based capabilities for short-term events, even more so when it comes to frame rate conversion. The joint product offering is genuinely market-leading on both the commercial and technical fronts.
Commercially, this is the only flexible, motion-compensated frame rate conversion solution available in the cloud with event-based, pay-as-you-use pricing. Technically, M2A CONNECT | Cloud Frame Rate Converter powered by InSync FrameFormer stands head and shoulders
above the competition. InSync Technology's attention to detail in transferring technical know-how and capabilities from hardware to software conversion has ensured a like-for-like experience between the two methods of implementation. Unlike its competitors, FrameFormer runs exclusively on CPUs. This gives users the ability to run the software on a greater range of more efficient cloud-based instances than GPU-based options on the market allow. This, in turn, affords broadcasters greater freedom when deploying their frame rate conversion capability via cheaper CPU instances on a pay-as-you-use basis, while freeing them from CAPEX investments and the limitations of hardware-based options. Broadcast customers can feel safe in the knowledge that M2A CONNECT | Cloud Frame Rate Converter provides a comparable output to hardware-based solutions while harnessing the ability to scale capacity. Integration with the M2A CONNECT product ensures that customer workflows are streamlined in the cloud alongside other acquisition and distribution requirements, putting customers in control of their operations. Our joint solution shows the power of the cloud in transforming broadcasters' operations when combined with M2A Media's orchestration capability, InSync's conversion technology and Hiscale's media processing capability.
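The core bookkeeping in any standards conversion is mapping each output frame back to the source timeline. The sketch below uses plain temporal interpolation to show that mapping; FrameFormer's motion-compensated processing builds new frames along estimated motion vectors rather than cross-fading, so this is a conceptual stand-in only:

```python
def source_blend(out_index, src_fps, dst_fps):
    """
    For output frame n at dst_fps, find the two source frames at src_fps that
    straddle the same instant, plus the blend weight toward the later frame
    (0.0 = exactly on the earlier frame, 1.0 = exactly on the next).
    """
    t = out_index / dst_fps    # output frame's timestamp in seconds
    pos = t * src_fps          # the same instant in source-frame units
    earlier = int(pos)
    weight = pos - earlier
    return earlier, earlier + 1, weight
```

Converting 25fps to 30fps, every sixth output frame lands exactly on a source frame and the five in between fall at fractional positions; it is the quality of the in-between frames that separates motion compensation from simple blending.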
WINNER Maxon Cinema 4D Cinema 4D is a professional 3D modeling, animation, simulation, and rendering software package. Its fast, powerful, flexible, and reliable toolset makes 3D workflows more accessible and efficient for design, motion graphics, VFX, AR/MR/VR, game development, and all types of visualization professionals. Cinema 4D produces stunning results, whether working on your own or with a team. Thanks to the friction-free experience afforded to users, Cinema 4D creations can be seen in almost every industry. From the next-generation electric vehicle UI in Cadillac's LYRIQ (https://territorystudio.com/project/cadillaclyriq/) to visualizing the history of financial trading for Interactive Brokers (https://www.maxon.net/en/article/who-says-finance-cant-be-visually-interesting) to Hollywood TV shows and films like WandaVision (https://www.maxon.net/en/article/maxon-congratulates-perception-design-lab-forwandavision-emmynomination) and Westworld (https://www.maxon.net/en/article/machine-dreams), it's hard to consume content without coming across Cinema 4D. With Cinema 4D R25, the best user interface has been made even better: this latest release features a new modern skin, user interface enhancements, and an expansive preset system for optimizing your workflow. An updated scheme and icon set offer a fresh, modern spin on Cinema 4D's classic look that intuitively communicates what's important and puts more focus on your artwork. Dynamic palettes power new layouts that make great use of space and ensure the tools you need are always close at hand. Tabbed documents and layouts make it easy to flow between multiple projects and workflows. All-new Spline Import options allow users to easily use Illustrator, PDF, and SVG vector artwork in their 3D scenes. Capsules allow anyone to tap into the power and flexibility of Cinema 4D's Scene Node system, with plug-in-like features directly in the Classic Object Manager.
Cinema 4D’s new Scene Nodes core powers a flexible system for procedural scene creation, and allows plugin-like functionality to be packaged and distributed as Capsule Assets. In Release 25, these Capsules can be directly used within Classic C4D as primitives, generators, and geometry modifiers, so any
user can tap into the power of Scene Nodes without sacrificing key Classic C4D features like MoGraph, Dynamics, and Volumes. The possibilities within Cinema 4D's new Scene Node core continue to expand with new Spline and Data Integration functionality, which can be used while building powerful new Capsule Assets. Beyond the innumerable use cases and applications for Cinema 4D, it is widely recognized as one of the easiest and most accessible 3D packages to learn and use. The recently announced Release 25 brings numerous improvements to the user experience for even faster creative workflows. Building on Cinema 4D Subscription Release 24 from April 2021, Cinema 4D Release 25 demonstrates Maxon's ability to enhance and increase user value on a regular basis, ensuring ongoing creative success for its wide and varied network of users.
WINNER MediaKind CE1 MediaKind's Cygnus Contribution solution provides high-quality, low-latency live contribution links via satellite or IP, including reliable ingest into public clouds, fitting neatly into a production landscape undergoing profound change. The transition to a software or cloud-native base is an acute focus for many media organizations today. Launched in October 2020, MediaKind's CE1 media contribution encoder has now been added to Cygnus Contribution. It offers a flexible, software-based encoding platform that provides secure, high-quality professional content contribution and exceptional time-to-market for new video offerings. Coupled with hardware acceleration, the CE1 facilitates highly immersive and compelling experiences. It strengthens Cygnus Contribution with enhanced IP interoperability and security over managed and unmanaged networks, accommodating the latest industry standards such as SMPTE ST 2110, Secure Reliable Transport (SRT), and BISS-CA. The enriched solution directly responds to anticipated changes in professional media contribution. Cloud, for example, forms the backbone of the delivery pipeline for nearly all digital services, including media and entertainment streaming companies. According to August 2021 predictions from Gartner, global spending on cloud services is expected to exceed $482 billion in 2022, up from $313 billion in 2020. Critically, CE1 enables broadcasters, operators, and service
providers to transition to all-IP and the cloud, integrate existing and future codecs and standards, scale and optimize services, and embrace new flexible business models. It is primed to handle demanding live event coverage, delivering HEVC or MPEG-4 AVC encoding of HD and UHD video content at low latency. A flexible and robust product, the CE1 provides a platform to support high-quality, 4:2:2 10-bit content contribution into the cloud, reducing the bitrate required to deliver first-class media experiences. Combined with MediaKind’s multi-codec and multi-service professional decoder, the RX1, the CE1 enables an end-to-end media contribution workflow that is more resilient and future-ready than existing professional media contribution processing and delivery solutions. The CE1 is available as a cloud-native deployment model and can integrate with public cloud providers to enable broadcast-quality contribution to the cloud. CE1 software is also available within a hardware platform, giving broadcasters the option to run their services on-premise. Combining CE1 and RX1 directly addresses the challenges of the most demanding live event coverage, and other use cases such as remote/at-home production. By utilizing the x86 server platform, the CE1 is a future-ready application that can accommodate new codecs and standards, typically released with an x86 software development kit (SDK).
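The bandwidth saving implied by compressed 4:2:2 10-bit contribution can be illustrated with some back-of-the-envelope arithmetic. The sketch below computes the uncompressed bitrate of a 1080p50 10-bit 4:2:2 signal; the 100 Mb/s contribution bitrate is an illustrative assumption, not a published CE1 figure.

```python
def uncompressed_bitrate_bps(width, height, fps, bit_depth, chroma="4:2:2"):
    """Raw video bitrate in bits per second for a given chroma subsampling."""
    # Samples per pixel: one luma sample plus chroma. 4:2:2 halves chroma
    # horizontally, giving one chroma sample per pixel on average.
    samples_per_pixel = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}[chroma]
    return width * height * fps * bit_depth * samples_per_pixel

raw = uncompressed_bitrate_bps(1920, 1080, 50, 10)   # 1080p50, 10-bit 4:2:2
print(f"Uncompressed: {raw / 1e9:.2f} Gb/s")

contribution = 100e6  # assumed contribution encode bitrate, 100 Mb/s
print(f"Compression ratio: {raw / contribution:.0f}:1")
```

Even at a generous contribution bitrate, the encoder is carrying roughly a twenty-fold reduction versus the raw signal, which is what makes cloud ingest over ordinary IP links practical.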
WINNER nxtedition nxt|cloud Built for storytellers, nxtedition opens the way to live production as it should be, where creativity, not engineering, is at the heart of every workflow. At the flick of a switch, the system can run an entire show, trigger media files, graphics, videos, camera switching, lighting, audio mixes, robotic moves, video walls, or anything else you might want. Everything can be done automatically, or manually if preferred, on premise or remotely. The unique design and agile microservices architecture make complex and time-consuming processes and workflows much simpler and more intuitive. Both playout and HTML5 graphics are built on the popular open-source Caspar CG platform: nxtedition has now implemented Caspar CG in a containerised Linux version to allow it to operate in the public cloud. The latest Caspar CG enhancement, nxt|cloud, allows nxtedition, which has typically been implemented as a private cloud running as containerised software on the broadcaster’s premises, to be deployed in the public cloud in exactly the same way, with exactly the same functionality, user interface and automation, and with outputs delivered as IP streams. Capability can be easily and rapidly spun up and down as needed. This adds infinite scaling and elasticity to the number of outputs nxtedition can deliver for live playout with graphics and subtitling, increased security of operation in an encrypted and authenticated environment, and replication from ground to sky for disaster recovery. The NRCS inside nxtedition will replicate all the user scripts, rundowns, media, graphics, and planning to the cloud, so in the event of a disaster the teams just log onto the cloud server and carry on where they left off. This rapid response ensures immediate recovery from any disaster affecting the primary playout centre, assuring business continuity and protecting advertising revenue as well as audience retention and brand loyalty.
With a playout engine based on the widely used, open-source Caspar CG platform, nxtedition now also offers a fully containerised, Linux version of Caspar CG, allowing all its sophisticated playout and graphics functionality to be deployed in the cloud or on-premise as a scalable and elastic
microservice. This consolidated approach combines the latest in web technologies with workflows designed for broadcast environments. The solution provides all the tools required to help creative teams easily move through the planning and writing of a story, content acquisition, media management, live on-air broadcast automation, channel playout and publishing to VOD, web, and social media, with secure business continuity plans. Colour correction can be applied to any video clip or image in the system to match footage from various sources, and outputs can be rendered in multiple formats. Ingest recording of SRT and NDI® feeds can be spun up in the cloud to be restreamed with graphics. Using nxtedition, creative teams can focus on what they do best: making unbeatable content. This ultra-flexible, content-centric approach allows clients to be fast, first, and accurate with the content they put out, ensuring they are the first notification on the viewer’s phone or the first to break a story live.
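The ground-to-cloud replication model described above, where every rundown write is mirrored to a cloud replica so teams can carry on after a failure, can be sketched as a toy example. The class and method names are illustrative only, not nxtedition's API.

```python
class RundownStore:
    """A minimal document store standing in for an NRCS database."""
    def __init__(self):
        self.docs = {}

    def save(self, doc_id, doc, replica=None):
        self.docs[doc_id] = doc
        if replica is not None:
            # Mirror every write to the cloud replica ("ground to sky").
            replica.docs[doc_id] = dict(doc)

on_prem = RundownStore()
cloud = RundownStore()
on_prem.save("evening-news", {"stories": ["opener", "weather"]}, replica=cloud)

# Disaster: the on-prem playout centre goes down. Teams log onto the
# cloud replica and carry on where they left off.
print(cloud.docs["evening-news"]["stories"])   # ['opener', 'weather']
```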
WINNER
Ross Video Ross Ultrix Acuity Hyper Converged Production Platform Ultrix Acuity combines the routing and AV processing capabilities of Ultrix with the creative capabilities of the Acuity production switcher. Ultrix Acuity takes routing, audio mixing, MultiViewers, trays of frame syncs and audio embedders/de-embedders – all solutions that have traditionally filled multiple equipment racks – and compresses them all down to a single 5RU chassis. Ultrix Acuity is therefore ideal for environments where size really matters, such as OB vans and mobile units. Add 2RU of rack-mounted redundant power and the result is a complete system in 7RU that can outperform packages requiring multiple racks, complex cabling and control system integrations. As with the current Ultrix solution, Ultrix Acuity is based on our Software Defined Production philosophy, ensuring that futureproofing is never a concern. The Software Defined Production Engine – SDPE – from Ross removes the need for costly ‘forklift’ upgrades by providing base hardware that
can grow via convenient and relatively inexpensive software licenses. Ultrix Acuity’s SDPE backbone will therefore reduce the uncertainty around meeting future creative or technical requirements. The flexible architecture of Ultrix Acuity means that format and connectivity challenges simply disappear. Transition from HD to UHD with a simple software license. Mix SDI and IP sources in the same frame transparently. Use sophisticated tie-line management tools to incorporate the system into a larger distributed routing fabric. In short, as your needs and requirements change, so the unrivalled flexibility of Ultrix Acuity can easily keep pace. Ultrix Acuity also provides excellent return on investment – expensive power, cooling, maintenance, and support costs are significantly lower. In addition, Ultrix Acuity can become one node of a larger distributed routing environment, reducing the incremental cost of adding I/O and further production switchers.
WINNER Simplestream App Platform App Platform is Simplestream’s out-of-the-box product, designed to streamline the launch of premium video services across multiple devices. It’s a powerful framework that allows broadcasters, sports, and entertainment brands to create feature-rich applications quickly and effortlessly, with the ability to distribute premium content offerings across all major platforms. More specifically, App Platform supports up to 13 platforms and devices — from desktop to iOS and Android, from Samsung TV and LG TV to PlayStation, and more. The demands of hungry audiences bring to the surface a number of challenges. App Platform is the best answer to the needs of OTT operators today — thanks to a reduced time-to-market, greater scalability, and enhanced opportunities for monetisation across a variety of business models. The ecosystem of apps is built with a design-first approach. Each project can count on a variety of out-of-the-box templates that contain all of the must-have features for any service, yet with freedom for personalisation within the given framework. At the forefront are capabilities for live streaming, on-demand, and automated Live-2-VOD workflows for the generation of catch-up content from live streams. Additionally available are single or multi-channel EPG views, as well as a host of features including options for content download to device, nomadic viewing, and in-app purchases. Recent projects, such as the Tokyo 2020 Paralympic Games delivered with Channel 4, saw App Platform supporting over 1,300 hours of live events, with up to 16 concurrent streams. The solution is also at the heart of the brand new suite of OTT services developed for GB News, the British free-to-air news channel launched earlier in 2021. Choosing the right technology is still one of the most crucial steps to take when ‘going OTT’ and wanting to build a brand new service.
End-to-end value propositions have so far been the most popular means of presenting a suite of services that satisfy the many needs of players in the market – with one bottleneck often being the difficulty of integrating solutions with existing online video platforms (OVPs). App Platform is a best-in-class solution
that can seamlessly integrate with any OVP on the market today, bringing clear benefits to our clients. By leveraging this OVP-agnostic approach, content owners can use services they are already trained on. Moreover, storage costs aren’t duplicated. This is the case with a project due to be publicly announced imminently, which integrates Simplestream’s proprietary solutions with an existing online video platform by Brightcove. Migration of content is not required, and existing configurations can remain unchanged. Finally, and most importantly, clients can keep working seamlessly with existing third parties, benefiting from utilisation of Media Manager (Simplestream’s proprietary content management system) as an orchestration layer. App Platform is pre-integrated with industry-leading analytics, advertising, and marketing tools. An environment that’s optimised for continuous audience monitoring and growth is integral to a successful OTT service. The versatility of the product and the ability to integrate with any third parties make App Platform a best-in-class, reliable, and scalable offering for any player in the OTT space.
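An OVP-agnostic design of this kind generally amounts to an adapter layer: the orchestration system speaks one internal interface, and each OVP gets a thin adapter behind it, so content never needs migrating. The sketch below is a generic illustration of that pattern; the class names and URLs are hypothetical, not Simplestream's actual Media Manager API.

```python
from abc import ABC, abstractmethod

class OVPAdapter(ABC):
    """One adapter per online video platform; the app layer never sees OVP details."""
    @abstractmethod
    def playback_url(self, video_id: str) -> str: ...

class BrightcoveAdapter(OVPAdapter):
    # Hypothetical endpoint: content stays in the existing OVP, unmigrated.
    def playback_url(self, video_id):
        return f"https://edge.example-brightcove.test/play/{video_id}"

class GenericOVPAdapter(OVPAdapter):
    def playback_url(self, video_id):
        return f"https://ovp.example.test/v/{video_id}"

def resolve_playback(adapter: OVPAdapter, video_id: str) -> str:
    # The orchestration layer calls the same interface regardless of OVP,
    # so swapping platforms means swapping adapters, not rebuilding apps.
    return adapter.playback_url(video_id)

print(resolve_playback(BrightcoveAdapter(), "ep-101"))
```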
WINNER Synamedia OTT ServiceGuard Using the intelligence provided by Synamedia’s operational security team about the methods used by streaming pirates, Synamedia has developed OTT ServiceGuard, the industry’s first solution to systemically address the inherent weaknesses that make it easy for pirates to steal premium content and even entire streaming services by gaining access to the service provider’s CDN. Not only is Synamedia OTT ServiceGuard the first solution to protect content across all open platforms - whether mobile, browsers, or smart TVs - it is also the industry’s first solution to extend protection to the service provider’s CDN. A quick Google search will take you into a world of organised crime: industrial-scale hackers and criminal technology experts with content aggregators, content wholesalers and content resellers conducting the biggest criminal heist the world has ever seen. With little to no acquisition or content costs, pirates have become the ultimate media super-aggregators. This is because current anti-piracy approaches - such as DRM, client hardening and concurrency restrictions - are simply scratching the surface of streaming piracy. Using its intelligence, and with access to pirates’ scripts, Synamedia has unearthed the root source of this problem: the OTT protocol is broken. The technology of OTT delivery makes it simple and cheap to set up as a pirate operator. Pirates don’t necessarily need to break the DRM to steal content. Using pirate servers and clients, pirates are hacking the OTT protocol to get the DRM license and redirect pirate clients to legitimate service and content providers’ CDNs.
The fight back
Hollywood studios and sports rights holders are justifiably frustrated with the amount of premium content being leaked from their services. Content owners invest substantial resources to fight piracy, but these efforts only focus on the symptoms.
Because Synamedia OTT ServiceGuard addresses the root causes of piracy, it is now possible to secure platforms and protect high value movie, TV and sports content.
Synamedia OTT ServiceGuard makes it possible to securely distribute content on open platforms by validating that only legitimate subscribers and applications are granted authorised access and receive content. It gives each client a unique identity that is not cloneable and allocates secure keys for signing service requests, ensuring all client messages are validated for their authenticity and origin. Available as a service, it is quick to deploy. It is easily integrated with existing OTT infrastructure without impacting the user experience or any existing application-service communications. It addresses the protection of all types of clients with a simple software library that can be integrated in the normal development pipeline. The solution does not require special expertise or support knowledge and adds zero overhead to release schedules or communications costs. It also supports any multi-DRM solution, including Synamedia’s own multi-DRM solution. Finally, with Synamedia OTT ServiceGuard, service providers can stop streaming pirates in their tracks.
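The request-signing idea described above, where each client holds a unique key and signs every service request so the backend can verify authenticity and origin, follows a standard pattern that can be sketched with an HMAC. This is a generic illustration of that pattern, not Synamedia's proprietary scheme.

```python
import hmac
import hashlib

def sign_request(client_id: str, path: str, key: bytes) -> str:
    """Sign a service request so the backend can verify who sent it."""
    msg = f"{client_id}|{path}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def verify_request(client_id, path, signature, key_store) -> bool:
    key = key_store.get(client_id)
    if key is None:                      # unknown client: reject outright
        return False
    expected = sign_request(client_id, path, key)
    # Constant-time comparison avoids leaking the expected signature.
    return hmac.compare_digest(expected, signature)

keys = {"stb-001": b"per-client-secret"}          # unique key per client
sig = sign_request("stb-001", "/license/abc", keys["stb-001"])
print(verify_request("stb-001", "/license/abc", sig, keys))   # True
print(verify_request("pirate", "/license/abc", sig, keys))    # False
```

Because the pirate client has no per-client key, it cannot produce a valid signature, and the CDN or license server can reject its requests before any content is served.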
WINNER Telos Alliance Telos Infinity® Virtual Intercom Platform First, we broke the matrix. Now, we’re putting intercom in the Cloud. Telos Infinity® Virtual Intercom Platform (VIP) is the first fully-featured Cloud-based intercom system. It delivers sophisticated comms virtually, making Cloud-based media production workflows available on any device—smartphone, laptop, desktop, or tablet. Users can even use third-party control devices, like Elgato’s Stream Deck®, to control Telos Infinity VIP. Now you can harness Telos Infinity IP Intercom’s award-winning performance, scalability, ease of integration, and operational/cost efficiencies anywhere—At Home, On-Prem, Site-to-Site, or in the Cloud.
Telos Infinity VIP:
• Cost-Efficient – Less Maintenance, Infrastructure and Space Required
• Scalable – Pay for Only What You Need
• Ease of Use – Virtual Panels on Familiar Devices (Smartphone, Computer, Tablet)
• Workflow Flexibility – At Home, On-Prem, Site-to-Site, In the Cloud
• Reliable, Proven Cloud Workflows
• Flexible Deployment Options
• TelosCare™ PLUS Service Option for Premium Service & Support
Deployment Options Meeting users where they are on the path toward virtualization, Telos Alliance offers several deployment options for VIP, which scales to suit users’ varying requirements, from a few remote smartphone VIP instances to an enterprise solution requiring hundreds of instances.
• On-Prem – Use the Telos Infinity VIP hardware appliance or your own server for on-prem installations.
• Integrated – For both On-Prem and Cloud versions, Telos Infinity VIP can be integrated with Telos Infinity hardware comms, or any third-party intercom or audio subsystem using AES67 or SMPTE 2110-30 connectivity.
• Cloud Server – Software for supported Cloud platform installations. A complete communications infrastructure in the Cloud with connectivity options for integration with third-party Cloud-based and On-Prem audio subsystems.
• Software as a Service (SaaS) – Various third-party Telos Alliance partners will offer a Telos Infinity VIP SaaS option, allowing users to lease it in a virtual environment.
Contact Us Today to Design Your Telos Infinity VIP Cloud-Based Intercom System: Inquiry@TelosAlliance.com
WINNER Teradek Teradek WAVE Teradek’s Wave is the only live streaming monitor that handles encoding, smart event creation, network bonding, multi-streaming, and recording – all on a daylight-viewable touchscreen display. Wave’s sleek form factor is compact and versatile. Users can simply set the device on tabletops with its leg stands, or mount Wave to cameras for on-the-go streaming. Its hot-swappable battery plates and USB-C connector provide continuous power for long productions, and its daylight-viewable monitor makes it easy to see what’s happening on the screen at any time of day, in any brightness. These features make Wave a highly adaptable device for streamers in any environment. A big draw for Wave users is the ability to set up an unlimited number of events ahead of time with Wave’s easy-to-use project workflow: FlowOS. This intuitive operating system guides users in creating their live streaming events in advance – from video and audio configuration to network connection and destination settings. With FlowOS, streamers can also monitor their video in real-time from Wave, and keep tabs on their stream settings and analytics, giving them the flexibility to prep and plan for a stress-free stream. When using Wave with its mobile app, streamers can step away from their Wave, and easily review their stream’s stats like bitrate and network status to ensure a stable stream from their mobile device. Wave users can take their streams two steps further by pairing their device with Sharelink, Teradek’s cloud service. Sharelink enables users to utilize network bonding, which protects live streams by splitting the video bitrate across multiple network connections including Ethernet, USB modems, and cellular hotspots. If one connection becomes unreliable, Wave load balances across the other connections – locking in a stable connection in challenging environments.
Sharelink’s secondary benefit is its ability to send streams to multiple platforms all at once, allowing viewers to tune in from whichever platform they prefer, and allowing streamers to grow their streaming audience. In the past two years, one of the fastest growing requirements for media that originates in the camera (broadcast, corporate,
educational, entertainment, etc.) has been for accessible streaming. Here’s what sets Teradek WAVE apart from all others in the field. Wave is the only live streaming monitor that handles encoding, smart event creation, network bonding, multi-streaming, and recording – all on a daylight-viewable touchscreen display.
• Wave is Teradek’s first monitor-encoder that allows users to view their video feed directly on the encoder itself – eliminating the need for an additional screen
• Using Sharelink – Teradek’s cloud service – Wave users can enable network bonding for highly stable streams, and broadcast to multiple streaming platforms simultaneously
• It features hot-swappable battery plates for continuous power, and a USB-C connector for universal power connectivity
• The monitor-encoder features a daylight-viewable 7” touchscreen IPS LCD display with 1,000 nits of brightness
• It encodes in H.264 up to 1080p60 to any RTMP destination while supporting simultaneous on-board recording
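Network bonding of the kind Sharelink provides can be reduced to a simple idea: split the total stream bitrate across the available links in proportion to each link's measured capacity, so a degraded link automatically carries less traffic. The following is a conceptual sketch of that rebalancing logic, not Teradek's actual algorithm; the link names and capacities are made up.

```python
def allocate_bitrate(total_kbps, link_capacity_kbps):
    """Split the stream across links in proportion to measured capacity."""
    cap_total = sum(link_capacity_kbps.values())
    return {name: total_kbps * cap / cap_total
            for name, cap in link_capacity_kbps.items()}

links = {"ethernet": 8000, "lte_modem": 4000, "usb_hotspot": 4000}
print(allocate_bitrate(6000, links))   # ethernet carries half of the 6 Mb/s

links["lte_modem"] = 500               # the LTE link degrades...
print(allocate_bitrate(6000, links))   # ...and is automatically given less
```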
WINNER ThinkAnalytics Super-aggregation with ThinkAnalytics As viewers subscribe to increasing numbers of streaming services – an average of four in the US, according to Deloitte – there’s a corresponding increase in frustration for users who have to open individual apps and remember where content is located. ThinkAnalytics enables service providers to become super-aggregators in their quest to continually boost viewer engagement. It ensures viewers quickly find something to watch from a huge, growing array of content libraries and OTT platforms/apps. This gives viewers a personalized user experience with universal search and recommendations across OTT streaming services and content catalogues. To date, this has been challenging to deliver. But now, with the cloud-native Think360 viewer engagement platform, ThinkAnalytics has broken through this barrier. Several operators, including Liberty Global and Tata Sky, have already gone live with super-aggregation using Think360 to allow viewers to quickly find compelling content to watch. In spring 2021, Tata Sky notched up a super-aggregation world first on its Binge streaming service with personalised recommendations across 11 streaming apps (including Amazon Prime Video and Disney+ Hotstar) plus the Tata Sky VOD catalogue and recently aired linear TV. The platform delivered 82% more average watch time per user than standard editorial rails, and 37% more total watch time. Pallavi Puri, Chief Commercial & Content Officer, Tata Sky: “ThinkAnalytics’ tools have helped us deliver personalised recommendations to every Binge subscriber across devices. Our shared vision and collaboration has accelerated viewer engagement and made Binge a valued product for subscribers.” Liberty Global is using Think360 for personalised content discovery - including voice search - across multiple streaming services, linear, VOD, catch-up, and recorded content, in 15 languages across seven countries.
Think360 features AI/ML, information science and the massive scalability needed to power universal search and personalised recommendations across content from multiple diverse, siloed sources in multiple languages. Its information science ontology
generates multi-dimensional tags covering plot elements, narrative styles and moods, and spans 35,000+ content features that support super-aggregation, allowing the engine to:
• understand a huge variety of content/content types, and categorise content across all catalogues in a consistent manner so that all viewing behaviour can be applied to all catalogues
• ingest and deliver search and recommendations across multiple catalogues
• learn from viewing across catalogues to build a single viewer profile
• support use cases that can mix and match which catalogues the search and recommendations are drawn from in real time.
Universal search is intuitive, as Think360 auto-completes search terms for the viewer and learns which searches are trending, reducing the time spent looking for content. Think360 runs across multiple AWS availability zones and data centres, minimizing costs by allowing each service to auto-scale up/down independently, including auto-scaling in advance of peak viewing periods to support hundreds of millions of viewers. The largest independent content discovery platform, ThinkAnalytics delivers content discovery and viewer insights to 80+ service providers, serving 350 million subscribers in 43 languages with 6 billion recommendations per day. Customers include Liberty Global, Tata Sky, Deutsche Telekom, Astro, DirecTV Latin America, Proximus, Rogers, Singtel and Vodafone.
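The auto-complete behaviour described above, completing a partial query while learning which searches are trending, can be sketched as a prefix match ranked by recent search counts. This is a toy illustration of the concept, not Think360's engine.

```python
from collections import Counter

class TrendingAutocomplete:
    def __init__(self):
        self.counts = Counter()

    def record_search(self, query: str):
        self.counts[query.lower()] += 1          # learn what is trending

    def suggest(self, prefix: str, k: int = 3):
        prefix = prefix.lower()
        matches = [(q, n) for q, n in self.counts.items() if q.startswith(prefix)]
        matches.sort(key=lambda qn: -qn[1])       # most-searched first
        return [q for q, _ in matches[:k]]

ac = TrendingAutocomplete()
for q in ["stranger things", "stranger things", "strictly come dancing", "star trek"]:
    ac.record_search(q)
print(ac.suggest("str"))   # most-searched titles matching the prefix come first
```

A production system would layer the content-feature ontology and per-viewer profiles on top of this kind of ranking, but the core loop, record searches and rank completions by popularity, is the same.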
WINNER TVU Networks TVU Channel TVU Channel - Going beyond playout. The 24/7 channel solution created for the world’s biggest broadcasters, available for everyone. Cloud playout, scheduling, live programming, and more. The easiest way to manage and launch your 24/7 live digital channel for live broadcasting over the air, on cable, OTT, apps, social media, and websites. Launch from your laptop in minutes. Schedule live and VOD programming with full SCTE support and set up multiple encoders for delivery to CDNs, OVPs or edge devices for traditional linear workflows from a simple web browser interface. Use it for traditional linear channel television playout, an OTT channel, a unique pop-up channel, or send directly to social media 24/7, all without required infrastructure. Live broadcasting and playout software can be complex. We made our cloud-based scheduling and playout actions intuitive. Some single-click playout features include:
• Break in live video or switch programs instantly at any time
• Breaking news auto-recording
• Schedule or manually insert graphic overlays
• Output Electronic Program Guides
• Instant-on or scheduled ticker/crawl
• Program interrupt for instant changes
No New or Additional Infrastructure Required
TVU Channel is a completely cloud-native solution that can be quickly deployed since it doesn’t require traditional infrastructure. There is also no complicated licensing or configuration needed in order to start using it.
Cost-Effective, Pay-As-You-Go Pricing
With TVU Channel, purchase only what you need and avoid unnecessary capital expenditures. Deploy as many channels as needed at a fraction of the cost of traditional playout. Spin up one, one hundred, or as many channels as you need with just the click of a mouse.
Remote Access from Anywhere
TVU Channel uses a simple browser-based interface which can be accessed through any smart device or laptop with Internet access. Log in from anywhere without being constrained to a physical studio location.
Easy to Use
TVU Channel was designed for fast setup without the need for extensive training to get started. Building and managing channels in TVU Channel is as simple as managing an ordinary website calendar.
Secure User Access
TVU Channel provides full control over the management of permission levels by individual users to schedule and operate channels.
Full Compatibility with Scheduling Programs
Are you using BFX, Wide Orbit or other major commercial programs? TVU Channel is compatible with the most popular third-party scheduling platforms.
Full Integration with PAM and MAM
Easily transition content and metadata from major PAM, MAM and editing tools into your playlists.
Full Integration with the TVU Ecosystem
TVU Channel works with the entire portfolio of TVU solutions and other edge devices for the ingest of live content via SDI/SMPTE 2110/NDI as well as output for traditional linear channel use cases.
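Scheduling of the sort described, a timed grid of programs with the ability to break in live at any moment, reduces conceptually to picking the active event. The sketch below is a toy model of that decision, not TVU's implementation; times are seconds from midnight for simplicity.

```python
def now_playing(schedule, now, live_breakin=None):
    """Return the item to air: a live break-in wins over the scheduled grid."""
    if live_breakin is not None:
        return live_breakin                  # single-click program interrupt
    for start, end, title in schedule:
        if start <= now < end:
            return title
    return "slate"                           # nothing scheduled: air a holding slate

grid = [(0, 3600, "Morning News"), (3600, 7200, "Weather Special")]
print(now_playing(grid, 1800))                                     # Morning News
print(now_playing(grid, 1800, live_breakin="Breaking: press conference"))
```

In a real channel the same decision point is also where SCTE markers would be emitted, signalling downstream encoders and ad inserters that the program has changed.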
WINNER Vizrt Live Production in the Cloud Vizrt Live Production in the Cloud is the first cloud-based production solution that fully exploits all NDI® capabilities, including connectivity across both WAN and LAN networks, running onsite and in the cloud on the same sync, audio-over-IP integration with digital audio systems, and the ability to bring in any camera with a browser as a source, from anywhere. Vizrt Live Production in the Cloud ensures broadcasters and media organizations always have access to the right resources to meet production needs no matter where they are located. With Vizrt Live Production in the Cloud, sources can be brought in from all around the world via NDI. Broadcasters are not tied to a physical location and can now use the best team for any production. The producer can monitor the production from a hotel in New York, the technical director can operate production automation or a switcher from an office in Atlanta, while the graphics artist delivers unparalleled quality Vizrt graphics from their living room in Los Angeles. Creating and sourcing global content engages audiences in innovative and immersive ways while also ensuring that the highest caliber talent is working together to produce an outstanding broadcast. Vizrt Live Production in the Cloud allows for experiential, agile, and flexible workflows to produce phenomenal productions around the world to captivate audiences. With the introduction of Live Production in the Cloud, broadcasters can simultaneously simplify, yet enhance live productions. Without compromising quality or incurring excessive costs, broadcasters and media organizations can leverage a suite of Vizrt’s flagship products, Viz Vectar Plus, Viz Trio, Viz Engine, and Viz Mosart into a cloud-deployable solution with unmatched performance in 4K video switching, real-time graphics rendering and playout, and studio automation for any production ecosystem whether it’s cloud, remote, local or hybrid. 
Furthermore, the Live Production in the Cloud offering from Vizrt is the first and only cloud production solution to utilize all the components of NDI, including transmitting video, audio, and data over NDI and audio-over-IP integration with digital audio
systems to achieve instant access to, and seamless interchange with, unlimited IP sources in real-time. Available for a low, forecastable monthly cost as part of Vizrt’s Flexible Access Plan, Live Production in the Cloud gets a production up and running without substantial initial costs – and allows unprecedented flexibility in experimenting and adjusting production capabilities based on need. For example, Sky used Live Production in the Cloud along with Flexible Access to deliver a cloud-first, remote production for the COP26 Summit that also had a reduced carbon footprint. With Live Production in the Cloud, broadcasters can size capacity up or down according to pace and needs, revolutionizing the traditional approach to a control room. Broadcasters can now build a production team of the very best, no matter where they are in the world, and create extraordinary, uncomplicated remote productions with unparalleled quality.
WINNER Zixi Zixi Software-Defined Video Platform Zixi is the architect of the Software-Defined Video Platform (SDVP), the industry’s most complete live IP video workflow solution. The SDVP is currently integrated into 300+ encoder, decoder, cloud multiscreen, and cloud playout partners. The SDVP enables media organizations to economically and easily source, manage, localize, and distribute live events and 24/7 live linear channels in broadcast QoS, securely and at scale, using any form of IP network or hybrid environment. Superior video distribution over IP is achieved via four components.
1) Protocols – Zixi’s congestion- and network-aware protocol adjusts to varying network conditions and employs forward error correction techniques for error-free video transport over IP. As a universal gateway, standards-based protocols such as RIST and the open-source SRT are supported, alongside common industry protocols such as RTP, RTMP, HLS, and DASH. Zixi supports 17 different protocols and containers – the only software platform designed for live video to do so.
2) Video Solutions Stack – Provides essential tools and core media processing functions that enable broadcasters to transport live video over any IP network, correcting for packet loss and jitter. This software manages all supported protocols, transcoding and format conversion, collects transport analytics, monitors content quality and layers intelligence on top of the protocols such as bonding and patented hitless failover across any configuration and any IP infrastructure, allowing users to achieve 5-nines reliability.
3) ZEN Master – The SDVP’s control plane, enabling users to intelligently provision, deploy, manage, and monitor thousands of content channels across the Zixi Enabled Network, including 300+ Zixi enabled partner solutions such as encoders, cloud media services, editing systems, and ad insertion and video management systems.
With such an extensive network of partner-enabled systems, Zixi ZEN Master presents an end-to-end view across the complete live video supply chain.
4) Intelligent Data Platform – A data-driven advanced analytics system that collects billions of telemetry points per day to
clearly present actionable insights and real-time alerts. The SDVP leverages cloud AI and purpose-built ML models to identify anomalous behavior, rate overall delivery health and predict impending issues. This fourth key component, accessible via the ZEN Master control plane, consists of a data bus that aggregates over three billion data points daily from hundreds of thousands of inputs within the Zixi Enabled Network, including over 300 partner solutions and proprietary data sources such as Zixi Broadcaster. This telemetry data is then fed into five continuously updated machine-learning models where events are correlated and patterns discovered. With clean, modern dashboards and market-defining real-time analytics, the Zixi SDVP enables users to focus on what’s important. Intelligent alerts and health scores generated by Zixi’s AI/ML models help sift through and aggregate data trends so that operations teams always have the insights they need without data overload. At a time of normalised remote working and a proliferation in the ways programs reach viewers, Zixi’s SDVP delivers the agility, reliability and broadcast-quality video needed to move content securely from any source to any destination over flexible IP video routes.
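Hitless failover in general (the style standardised in SMPTE ST 2022-7) sends the same numbered packets down two paths and reconstructs the stream from whichever copy of each packet arrives first, so a loss on one path never reaches the viewer. The sketch below is a simplified illustration of that general idea, not Zixi's patented implementation.

```python
def merge_redundant(path_a, path_b):
    """Rebuild a stream from two redundant paths, keyed by sequence number."""
    received = {}
    for seq, payload in path_a + path_b:
        received.setdefault(seq, payload)   # first good copy of each packet wins
    return [received[seq] for seq in sorted(received)]

# Path A loses packet 2; path B loses packet 4. The merged output is complete,
# with no retransmission and no visible glitch.
path_a = [(1, "I"), (3, "P"), (4, "P")]
path_b = [(1, "I"), (2, "P"), (3, "P")]
print(merge_redundant(path_a, path_b))   # ['I', 'P', 'P', 'P']
```

Because the receiver only needs one surviving copy of each packet, failover is instantaneous rather than waiting for a switch-over decision, which is what "hitless" means in practice.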
WINNER Adobe Speech to Text in Adobe Premiere Pro Captions are increasingly necessary in many areas due to the growth of social video, globalized content, and accessible content, but the captioning process can be tedious. Solving this painstaking process has the opportunity to greatly expand accessibility across all areas of video, from broadcast sports and news, to YouTube and social media, as well as television and film. Especially in the past year and a half, people have come to rely more and more on video to connect with friends and family, entertain themselves with a welcome distraction, and learn new skills. Video creators need tools to help them add captions to content, and audiences everywhere, especially those in the deaf community, need captions to unlock the verbal communication that drives most video content. The new Speech to Text feature set (currently in beta) in Adobe Premiere Pro, the world’s leading video editing software, will introduce an efficient way to add accurate captions to any kind of video content. More specifically, Speech to Text will enable video creators to automatically create a transcript from their video, then generate automatic captions on their editing timeline. Also embedded in this feature is Auto Captions, which uses Adobe Sensei, Adobe’s proprietary artificial intelligence software, to accurately mirror the pacing of spoken dialog and match it to the video timecode. While this can potentially be overlooked as a small element, ensuring that the cadence of captions is accurate is key to understanding and engagement. The Speech to Text feature set represents the first time that a professional editing software offers this kind of robust tool for captioning. Beyond providing accessibility, captions have many practical benefits for video creators, from improving SEO to boosting engagement rates and shares. Additionally, the new Speech to Text feature allows video editors to add creative spark and style to captions.
Captions have always been black and white - literally. Now, creators can add color, manipulate sizing and placement, and access additional stylization elements that open up creative pathways for accessible captions that have never existed before.
The Speech to Text workflow is designed to be intuitive and customizable to the user’s needs. The Captions workspace in Premiere Pro consists of the Text panel, which includes the Transcript and Captions tabs. To get started, the user auto-transcribes their video in the Transcript tab, which then generates captions. These captions can then be edited in the Captions tab and in the Program Monitor. Captions have their own track on the timeline. Lastly, editors can stylize their captions with the design tools in the Essential Graphics panel. As video continues to dominate communication and entertainment worldwide, Speech to Text will be welcomed by content creators and audiences alike.
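The transcript-to-captions step described above can be sketched in miniature: timed transcript segments become caption blocks whose timecodes mirror the pacing of the dialog. This is not Adobe's code, just the standard SRT rendering of a timed transcript.

```python
def to_srt_time(seconds):
    """Render seconds as an SRT timestamp, e.g. 00:00:01,500."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def transcript_to_srt(segments):
    """segments: list of (start_sec, end_sec, text) tuples -> SRT string."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, 1):
        blocks.append(f"{i}\n{to_srt_time(start)} --> {to_srt_time(end)}\n{text}\n")
    return "\n".join(blocks)
```

Keeping the segment boundaries tied to the spoken audio is what preserves the caption cadence the article highlights.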
Best Of Show At IBC 2020
WINNER AJA Video Systems BRIDGE NDI 3G AJA Video Systems recently introduced BRIDGE NDI 3G, a sleek, high-performance 1RU appliance that enables reliable, high-density, and high-quality conversion to/from SDI and NewTek’s NDI® video over IP protocol. The flexible, intuitive gateway device supports multichannel 4K and HD workflows and is designed to help broadcast, production and proAV professionals move seamlessly between various platforms, protocols, and connectivity types. The robust IP video device is a plug-and-play, standalone solution, designed for simple integration into facility racks, DIT carts, flypacks and anywhere that high-quality NDI conversion is required. AJA developed BRIDGE NDI 3G to help facilities integrate existing SDI infrastructure into a new NDI backbone. BRIDGE NDI 3G packs a punch with dual 10GigE onboard NICs for NDI I/O and remote control over the web, as well as high-density SDI connections for up to 16 channels of 3G-SDI I/O – offering up to four channels of 4K, 16 channels of HD, or a mixture of HD and 4K NDI encodes/decodes in a compact form factor. The device boasts an intuitive interface and system administration screen that make it simple to get BRIDGE NDI 3G up, running, and configured quickly and securely. Using a standard web browser, technicians, engineers, operators, and producers can access the interface remotely to view and manage content, including local monitoring preferences. Operators are also able to freely browse, favorite, label, and filter a large volume of NDI sources on the network, as well as label any SDI inputs or outputs, and see all I/O activity at any given time. BRIDGE NDI 3G can easily be used to convert SDI camera and playout sources into NDI streams, enabling simple integration into NDI-supported workflows, including virtualized productions leveraging NDI-based switchers.
Using a common network, these sources can be located anywhere within a facility, allowing seamless integration of various production islands into a unified workflow. Conversely, NDI streams can be converted back into SDI ecosystems via BRIDGE NDI 3G’s configurable I/O, allowing NDI signals to move back into SDI routing systems and traditional baseband workflows.
Device configuration and management are simple via a local interface, or remotely from a web browser interface or REST API. For NDI, the rackmountable appliance supports UYVY and UYVA 4:2:2 8-bit and P216; for SDI, YCbCr 4:2:2 10-bit. It simplifies the integration of graphics and/or 4K sources into workflows with one-click grouping controls for video-and-key, and for 4K via 3G I/O. BRIDGE NDI 3G includes dual power supplies for redundancy and AJA’s legendary 3-year warranty. BRIDGE NDI 3G is available now through AJA’s worldwide reseller network for $11,995 US MSRP. For more information, visit: www.aja.com/products/bridge-ndi-3g.
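To illustrate what REST-driven channel grouping might look like, the sketch below builds a request body for grouping channels into a video-and-key pair or a 4K quad. AJA does expose a REST API on the device, but the endpoint path, field names, and validation here are entirely hypothetical, for illustration only.

```python
import json

def group_channels(mode, channels, label):
    """Build a hypothetical channel-grouping request body.

    mode: "video_and_key" (2 channels: fill + key) or "quad_4k"
    (4 x 3G links carrying one 4K signal)."""
    if mode not in ("video_and_key", "quad_4k"):
        raise ValueError(f"unknown grouping mode: {mode}")
    expected = 2 if mode == "video_and_key" else 4
    if len(channels) != expected:
        raise ValueError(f"{mode} grouping needs {expected} channels")
    return json.dumps({"mode": mode, "channels": channels, "label": label})

# The body would then be POSTed to the device, e.g. (hypothetical path):
#   POST http://<device-ip>/api/v1/groups
body = group_channels("quad_4k", [1, 2, 3, 4], "CAM-A 4K")
```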
WINNER Amazon Web Services AWS Elemental Link UHD AWS Elemental Link UHD is an intuitive encoding device that connects a live video source, like a camera or video production equipment on the ground, to AWS Elemental MediaLive in the cloud, enabling broadcast-grade live video streaming of ultra-high definition (UHD) video with up to 10-bit color depth and high dynamic range (HDR) support. Launched in June 2021, the portable device improves the quality and reliability of UHD live video streams for production professionals in the field while reducing the cost and complexity of equipment needed to move live video signals from on-premises technology into the cloud. Link UHD ships fully configured to a user’s AWS account, offering an easy, cost-efficient way to transfer UHD video securely and reliably to MediaLive for delivery to viewers on a range of device types. Both HDR10 and HLG video outputs are supported. Using the device is as simple as connecting it to power, Ethernet, and a video source. It can be controlled remotely and monitored from anywhere with an internet connection using the AWS Management Console. Available for $4,995 USD per device (excluding customs clearance, duty, tax, and shipping), AWS Elemental Link UHD provides a more cost-efficient approach to cloud video contribution compared to traditional methods. The compact device has minimal power and cooling requirements and is easy to own and operate. With silent, fanless operation,
the device is also well-suited for low-noise environments like sporting and event venues, studios, or conference rooms. It also maximizes the quality of the UHD video sent to the cloud, adapting automatically to changes in network conditions. To deliver the best possible video, Link UHD devices encode using the HEVC (High Efficiency Video Coding) codec, which is up to 50 percent more efficient than the AVC (Advanced Video Coding) codec. For high-resiliency video transport, AWS Elemental Link UHD uses the Zixi delivery protocol, which combines content-aware and network-adaptive forward error correction with error recovery, while minimizing latency. Encoded video is encrypted using AES-128 and rotating keys from AWS Key Management Service (AWS KMS). The device also uses a network-aware adaptive bitrate algorithm, adjusting in real time to changes in network conditions. This closed-loop feedback system minimizes packet loss to keep the video signal stable, even if network issues occur. Link UHD is a valuable tool in live production and streaming environments, especially as companies try to minimize the amount of staff located on-site during a global pandemic. Even a non-technical person can plug in a Link UHD device on-site and then control the remainder of the stream configuration from the cloud.
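The closed-loop idea behind a network-aware adaptive bitrate controller can be sketched simply: back off quickly when packet loss is observed, recover gradually while the network is clean, and stay inside configured bounds. The thresholds and multipliers below are illustrative, not AWS's actual algorithm.

```python
def next_bitrate(current_kbps, loss_pct, min_kbps=5_000, max_kbps=80_000):
    """One step of a toy closed-loop bitrate controller."""
    if loss_pct > 1.0:          # significant loss: cut hard
        target = current_kbps * 0.7
    elif loss_pct > 0.1:        # mild loss: trim
        target = current_kbps * 0.95
    else:                       # clean network: probe upward slowly
        target = current_kbps * 1.05
    return int(min(max(target, min_kbps), max_kbps))
```

The asymmetry (fast decrease, slow increase) is the standard design choice for congestion response: it keeps the signal stable during trouble while still reclaiming quality afterwards.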
WINNER Canon Canon RF5.2mm F2.8 L Dual Fisheye Lens The RF5.2mm F2.8 L Dual Fisheye lens is Canon’s first interchangeable dual fisheye lens capable of shooting stereoscopic 3D 180° VR imagery to a single image sensor - streamlining the complexities of VR production for both seasoned and new filmmakers, photographers, and videographers. VR production has previously been viewed as tedious, but with this lens Canon has developed an innovative way to produce impressive VR imagery and meet the demand for high-quality content from viewers with VR headsets and VR streaming platforms. Designed to mount seamlessly on Canon’s EOS R5 camera with compatible firmware, the lens lets creators go from traditional shooting to 3D stereoscopic capture with a simple lens swap. The lens features a 190-degree field of view captured from two separate optical systems to deliver outstanding, high-resolution results for 180° VR viewing. With an interpupillary distance of 60mm, natural parallax closely resembling human vision is possible, producing a realistic VR experience. From a quality perspective, Ultra-low Dispersion glass minimizes chromatic aberration despite the incredibly wide view, while fluorine coating and dust- and water-resistant seals provide peace of mind even in challenging conditions. The L-series optics are engineered with Subwavelength Structure coating technology, offering impressive flare control in backlit conditions. The aperture range, from a bright f/2.8 to a deep-depth-of-field f/16, delivers versatile exposure control; with coordinated dual (left-right) electromagnetic diaphragms, control of aperture settings is familiar and easy, and no different from other RF lenses. Using the EOS R5 with the compatible firmware update, creators can enable a magic window UI overlay on the rear LCD screen that aids in the framing of their shot, whether delivering for an online platform like YouTube VR or for a headset.
The focusing capability allows the user to magnify up to 15x with MF Peaking and to confirm focus of each individual lens image separately. Canon’s EOS Utility and Camera Connect apps provide a remote live view image to help users compose and remotely record. This lens is an impressive tool
to capture engaging VR imagery when covering news stories, documentaries, or entertainment events for VR viewing. The Canon EOS VR System’s convenient workflow is a standout feature of this lens. By recording the left and right fisheye images to a single full-frame image sensor, this compact lens design solves many common VR stitching and syncing challenges, producing one single image file. Canon is currently developing two paid subscription-based software solutions to streamline the postproduction process. Canon’s EOS VR Utility offers functionality to convert clips from a dual fisheye image to equirectangular, with the ability to make quick edits and adjust the resolution and file format before export. With the EOS VR Plug-In for Adobe Premiere Pro, creators can automatically convert footage to equirectangular, and cut, color, and add new dimension to stories within Adobe Premiere Pro – which also supports in-headset editing. And as it is compact, lightweight, and portable, the lens is easily packed in a camera bag for the opportunity to tell unlimited VR stories.
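The fisheye-to-equirectangular conversion the EOS VR software performs rests on a standard projection mapping, sketched below for one eye. This is textbook geometry, not Canon's implementation: it assumes an ideal equidistant fisheye covering the lens's 190-degree field of view, and ignores the real lens's calibrated distortion profile.

```python
from math import pi, sin, cos, acos, atan2

FOV = 190 * pi / 180  # lens field of view in radians

def equirect_to_fisheye(u, v):
    """Map (u, v) in [0,1] over a 180-degree equirect frame to (x, y) in
    [-1,1] normalized fisheye-circle coordinates (None if outside FOV)."""
    lon = (u - 0.5) * pi               # -90deg .. +90deg
    lat = (0.5 - v) * pi               # +90deg .. -90deg
    # Direction vector; z is the optical axis
    x, y, z = cos(lat) * sin(lon), sin(lat), cos(lat) * cos(lon)
    theta = acos(max(-1.0, min(1.0, z)))   # angle off the optical axis
    if theta > FOV / 2:
        return None
    r = theta / (FOV / 2)                  # equidistant projection r ~ theta
    phi = atan2(y, x)
    return (r * cos(phi), r * sin(phi))
```

Resampling every output pixel through such an inverse mapping (per eye, then stacking left/right) is what produces the side-by-side 180° VR frame.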
WINNER Cobalt Digital, Inc. Indigo 2110-DC-01 The Indigo 2110-DC-01 is a factory add-on option to Cobalt’s 9904-UDX-4K and 9905-MPx cards. These cards include an advanced audio/video processing engine, capable of up/down/cross conversion, audio routing, color correction, 3D-LUT processing, as well as SL-HDR encoding and decoding. This option adds native SMPTE ST 2110 support for these cards, with multiple 25G Ethernet interfaces. With this option, all the advanced processing in these cards is now available with IP inputs and outputs, without the need for an external gateway. The Indigo 2110-DC-01 includes support for ST 2022-6 seamless redundancy switching, as well as IS-04/IS-05 NMOS for automatic discovery and configuration. The transition from SDI to IP has been happening for a few years now. However, in most deployments, this is achieved by using gateways.
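The IS-04/IS-05 support mentioned above means an NMOS controller can configure the card's IP streams without vendor-specific tooling. The sketch below builds an AMWA IS-05 staged-connection request body; the structure follows the published IS-05 schema, but the addresses are placeholders and real deployments typically add full RTP and ST 2022-7 redundancy-leg parameters.

```python
import json

def is05_staged_patch(dest_ip, dest_port, enable=True):
    """Body for PATCH /x-nmos/connection/v1.1/single/senders/{id}/staged."""
    return json.dumps({
        "master_enable": enable,
        "activation": {"mode": "activate_immediate"},
        "transport_params": [
            {"destination_ip": dest_ip, "destination_port": dest_port}
        ],
    })

# Placeholder multicast destination for illustration
body = is05_staged_patch("239.10.20.30", 5004)
```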
WINNER disguise disguise xR Have you ever read a book or watched a TV show and wondered what it would be like to be completely immersed in that world as if it were real? disguise xR is making this not only possible but easily achievable. disguise Extended Reality (xR) sits at the confluence of dramatic advances in game engines, LED screens, and graphics processing power. It combines leading LED technology, advanced camera tracking and real-time graphics rendering in a production environment to create a virtual world that is visible both live on set and directly in-camera. disguise xR represents the next generation of virtual production technology, set to replace the standard practice of filming against blue or green screens for film, television and broadcast. With disguise xR, talent is filmed against large LED screens that display real-time generated photorealistic virtual scenes. Talent do not have to be trained to interact with green screens; they can see the graphics on the LED displays around them and deliver a more natural performance. As the workflow delivers graphics in-camera in real time, the extensive chroma key compositing process is removed. Crew can also make real-time changes while shooting, saving significant time in post-production. Green spill is eliminated and lighting reflections on objects appear natural, as they pick up the ambient light and reflections from the graphics rendered on the LEDs. disguise is committed to pushing the technology further and widening the uses and appeal of shooting in xR, all while lowering its barrier to entry. Thanks to xR’s rapid spatial and colour calibration, its set extension feature can be switched on and the virtual scene rendered from the camera’s point of view far beyond the LED panels. Production teams no longer have to build large sets. Thanks to xR, they can have pixel-perfect virtual scenes whenever and wherever they want.
The impact of this is far-reaching, as more broadcasters aim to lower their carbon footprint by bringing remote teams together. Having shown consistent, profitable growth on a global basis, disguise has demonstrated the validity of its business model. It
has powered over 400 productions in the past 18 months since its beta launch, including several major broadcasts such as TV Azteca’s coverage of the 2020 Olympic Games, the 2020 MTV VMAs as well as ITV Sport’s coverage of the 2020 UEFA Euro Championships. Meanwhile, over 300 stages powered by the workflow have been set up in 40+ countries to meet the growing demand. disguise xR’s aim is to enable everyone to create the impossible. In April 2021, the workflow became publicly available as part of disguise’s core software, available to download for free. Since then, disguise also launched its free eLearning platform giving creatives around the world the opportunity to learn and master the workflow at their own pace.
WINNER EditShare Universal Media Projects Many traditional media production workflows function as a series of hand-off points. Different parts of the production use different tools for capturing, organizing, editing, reviewing and finishing the content. These hand-offs are often related to the non-real-time nature of most applications. Some technologies allow users to “lock” projects, but very few allow for fully real-time synchronization of relevant project data. This creates a waterfall style of workflow which puts pressure on those towards the end of the process. Different tools also require different data in different formats at different times or for different purposes. When used for editorial, something like DaVinci Resolve needs a different view of the project than when it is being used for color grading. When a Media Composer user is working with their own bins, it can be hard to interact with other users who might be using Adobe Premiere Pro. Pulling together a workflow that fits any particular production can be complex, especially with the varying openness of these systems. EditShare has developed a technology to build a universal view of media projects that can be interacted with in real time yet still work within the constraints of the various tools. Universal Media Projects, which ships as part of EditShare’s FLOW media management, brings DaVinci Resolve, Premiere Pro, and Media Composer together into a single workflow environment. From anywhere, the editor has real-time access to content and can collaborate freely in a mixed editing environment. Universal Media Projects seamlessly manages all the necessary project data - such as sequences, clips, bins, and markers - between editorial tools. All relevant information is available remotely through a secure web interface. It creates a metadata
store that models all the key common entities of a project including clips, subclips, sequences, etc., along with extended attributes to store NLE-specific data. It facilitates a continuous exchange of this data between different editorial tools. This innovation is especially important as the media industry moves towards hybrid and cloud-based workflows and remote workflows become standard for content creators collaborating on projects.
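The "common entities plus extended attributes" idea can be sketched as a small data model. This is a toy illustration, not EditShare's actual schema: the point is that shared fields (name, timecodes) stay common to every NLE, while tool-specific data lives in a per-tool bag that other editors can safely ignore.

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    clip_id: str
    name: str
    timecode_in: str
    timecode_out: str
    nle_attrs: dict = field(default_factory=dict)  # e.g. {"resolve": {...}}

@dataclass
class Bin:
    name: str
    clips: list = field(default_factory=list)

def set_nle_attr(clip, tool, key, value):
    """Store tool-specific data without touching the common fields."""
    clip.nle_attrs.setdefault(tool, {})[key] = value
```

Synchronizing only the common fields in real time, while round-tripping each tool's private attributes untouched, is what lets mixed Resolve/Premiere/Media Composer teams share one project view.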
WINNER Frame.io Frame.io Camera to Cloud (C2C) Throughout the past 100 years, there have been two fundamental shifts in filmmaking technology. First, there was the shift from film to tape. Then there was the shift from tape to digital files. But still, physical media needed to be copied and shipped for postproduction to begin. Frame.io Camera to Cloud (C2C) fundamentally changes the way video is created by eliminating the need for physical media to be exchanged, which removes the barriers of time and distance. All assets are securely stored in the cloud, and accessible to any authorized user. On-set production and remote post-production teams can now work together concurrently, allowing creatives and stakeholders to give vital feedback when it matters most - during the shoot - so changes can be made. The C2C workflow requires a Teradek CUBE655 to be paired with a C2C-compatible camera through Frame.io. Select ARRI, Panavision, RED, Sony, Canon, and Panasonic cameras are supported, and the list is growing. Once authenticated, high-quality H.264 proxy files with timecode and metadata matching the original camera files (OCF) are directly uploaded to Frame.io every time the camera is triggered, via LTE, 5G, or WiFi. Sound Devices 888 or Scorpio field recorders, as well as the Aaton Cantar X3, can be paired to record, encode, and send either proxy or full-bandwidth audio files to Frame.io, which automatically sync to the video - an industry first. Proxies are viewable on computers, iPhones, and iPads via the Frame.io web and iOS apps, and available directly in NLEs with Frame.io native integrations including Adobe Premiere Pro, Final Cut Pro, and DaVinci Resolve. This allows for immediate editing, and the proxies can later be swapped for “hero” dailies or be easily relinked for final conform and grading. Frame.io C2C changes the way anyone who shoots video works.
From news producers who need to deliver timely stories to sports teams capturing live game footage for fast distribution, to reality and scripted TV, feature films, documentaries, commercials, and more, the ability to compress schedules and work remotely is revolutionary.
At a critical time in the industry, when working hours and conditions are front and center, the fact that Frame.io C2C can engage an editorial team working in parallel with the production crew helps demonstrate to directors and cinematographers that a sequence is working. Therefore, filmmakers can confidently move on to the next setup or even wrap for the day. Not only does this help them stay in the moment creatively, but the impact also ripples throughout the entire production process because everyone can access media relevant to their role, including production, editorial, sound, script supervision, VFX, production design, dailies assists, DITs, and colorists. Anything that can demonstrably save time on a production leads to improved conditions creatively, financially, and logistically. Finally, the cost of using C2C is very low. C2C is free with a paid Frame.io account, requires no additional crew to operate, and requires only an internet connection, a Teradek CUBE655 and a C2C-compatible camera (which can be rented or purchased).
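The relink step the workflow depends on can be sketched simply: because each proxy carries the same timecode and reel metadata as its original camera file, a conform can swap proxies for hero files by matching on both. This is the general idea, not Frame.io's implementation, and real conforms also check duration and frame rate.

```python
def relink(proxies, ocf_files):
    """Match proxy clips to original camera files by (reel, start timecode).

    Returns (proxy_path -> ocf_path mapping, list of unmatched proxies)."""
    index = {(o["reel"], o["start_tc"]): o["path"] for o in ocf_files}
    linked, unmatched = {}, []
    for p in proxies:
        key = (p["reel"], p["start_tc"])
        if key in index:
            linked[p["path"]] = index[key]
        else:
            unmatched.append(p["path"])
    return linked, unmatched
```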
WINNER Glensound GTM The GTM is the world’s first truly dedicated broadcast-specification gamer’s interface for E-Sports, combining the facilities normally provided by three products into a single device. The GTM is the gamer’s interface: they can connect their gaming headset via a multipole 3.5mm jack socket, or via a traditional 3-pin XLR for mic input, with 6.35mm and 3.5mm stereo jack sockets for headphones, which can be separately addressed (more on that later). The top panel features three large volume controls: game audio, team talkback audio, and the gamer’s own voice. These adjust the stereo headphone mix for the gamer. The game audio can be derived from the following sources: 3.5mm stereo jack socket, SPDIF, USB, DANTE, or de-embedded from SDI. This allows compatibility with various tournaments using different gaming hardware. SDI is also looped out so that it can be further used for the gamer’s monitor. The GTM includes a built-in mixer to generate the talkback mix between the six gamers in one team, plus their coach. This mix is distributed via Dante and made available to each gamer and coach in their headphone mix via the TEAM pot. The final volume pot is the gamer’s own voice in their headphones. This is fed directly from their own device so there is no delay, allowing a more comfortable experience for the gamer when they can hear their own voice at their desired level. The gamer’s mic input is also sent out to the Dante network for further broadcast use if required. The GTM is also used by the referee of the gaming tournament. When a GTM is assigned as a referee’s device and the white push button on its top panel is pressed between games, all GTM units output white noise to all gamers’ headphones. This is a standard requirement in gaming, so that the gamers cannot hear stadium chat between games. The GTM can do this via a single headphone output.
As the GTM has two headphone amplifiers, if the tournament requires that the white noise should be on a separate over ear pair of headphones, then the white noise can be routed to just the alternate headphone output.
For further broadcast use, the gamer’s stereo headphone mix is sent back out onto the Dante network. There is a mic/talkback button that can be configured as latching, momentary, or always-on with cough. GPI on the rear allows external mic switching. Control of the GTM is remote via Glensound’s Glencontroller app. All local controls can be disabled if preferred by the tournament. Redundant networking is provided with primary and secondary Ethernet connections for the Dante link. Power is via PoE. The GTM has a minimalist look and design with a multi-coloured LED panel to suit a modern gaming environment. The GTM solves many problems presented at E-Sports tournaments and gives engineers a single device that can be used by gamers, coaches and referees. The GTM could also be used by tournament commentators.
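The TEAM bus and own-voice pot described above amount to a classic mix-minus: each player hears the game feed, their teammates (but not their own mic via the bus), and their own undelayed sidetone, each under its own volume control. A toy model of one player's mix (not Glensound's DSP):

```python
def headphone_mix(game, mics, me, game_vol, team_vol, own_vol):
    """Sum one sample of a gamer's headphone feed (toy mix-minus model).

    mics: {player_name: mic_level}. The team bus excludes the listener's
    own mic; their voice re-enters only via the direct sidetone path."""
    team_bus = sum(level for player, level in mics.items() if player != me)
    return game * game_vol + team_bus * team_vol + mics[me] * own_vol
```

Excluding the listener from the talkback bus is what avoids the delayed, doubled self-voice that makes networked talkback uncomfortable.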
WINNER JW Player Broadcast Live In Q4 of 2021, JW Player, a leading video software and data insights platform, unveiled its Broadcast Live solution. This integration of the former VUALTO solution into the JW Player platform provides broadcasters and other content owners with adaptable, scalable, secure and intelligent solutions for video orchestration and encryption. The newly released Broadcast Live is the industry’s most flexible, robust and scalable video orchestration solution for Live, VOD, Live2VOD and VOD2Live. Developed specifically for media workflows, the new generation of Broadcast Live brings integration with encoders, streaming servers, workflow rules and DRM into a single set of APIs and GUI. The pluggable architecture also allows for easy integration with third-party and existing customer systems. With comprehensive channel configuration, event scheduling, monitoring, clipping and syndication, Broadcast Live enables premium viewing experiences with significant cost savings. The launch of this offering culminates JW Player’s acquisition of VUALTO in May 2021, to create a comprehensive video intelligence platform that empowers customers with independence and control in today’s Digital Video Economy. The combined result is a single platform for broadcast-quality live and on-demand video delivery across mobile, web and OTT platforms; secure content delivery with industry-leading DRM services; and unique insights, intelligence and monetization features to help grow revenue. Customer success: Broadcaster and leading media and entertainment company ITV, which reaches over 40 million viewers every week, needed a solution to drive incremental views and provide increased exposure for its more niche events.
The broadcasting giant selected JW Player to deliver the necessary infrastructure to enable the spinning up of pop-up channels for the streaming of live events via its ITV Hub, which is available on 28 platforms and over 90% of connected televisions sold in the UK. JW Player’s Broadcast Live solution provides ITV with dynamic event orchestration, allowing for the scaling up of resources for a live streaming event immediately before it begins and the ability to
scale down once it is over. This enables the broadcaster to save on cloud-hosting costs that would otherwise be accrued from having the service continuously running. Following a successful first test with ITV via its ITV Hub platform for the British Touring Car Championship, JW Player will deliver further support for a calendar of live event streams, including more niche sporting events. “JW Player’s solutions and expertise have been invaluable in delivering the necessary infrastructure to ensure the commercial success of some of our more niche sporting events, and we look forward to the next stage of the project,” said Vinay Kumar Gupta, Senior Architect, ITV Video Platform.
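The cost-saving logic of a pop-up channel reduces to a scheduling rule: the channel only runs from shortly before the event until shortly after it ends. The sketch below is illustrative of that idea, not JW Player's orchestration code; the warmup and cooldown windows are invented defaults.

```python
from datetime import datetime, timedelta

def channel_should_run(now, event_start, event_end,
                       warmup=timedelta(minutes=15),
                       cooldown=timedelta(minutes=5)):
    """True while the pop-up channel should be provisioned: a warmup
    window before the event through a cooldown window after it."""
    return event_start - warmup <= now <= event_end + cooldown
```

An orchestrator polling this rule (or scheduling the two transitions directly) pays for cloud encoders only during the event window instead of 24/7.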
WINNER Mjoll Mimir Mimir is a native cloud production asset management and AI tool used by production companies, broadcasters, digital agencies, schools, organisations and companies worldwide. It launched in early 2019 as a tool for cloud storage and backup, for smart media search, and for automating metadata enrichment using integrated AI technologies. Today, Mimir is one of the most extensive native cloud Production Asset Management (PAM) tools in the market, with tight integrations to the Adobe and Avid platforms, for editors, and to cloud newsroom tool DiNA, for storytellers and journalists. The product represents a new breed of professional broadcast solutions that harnesses the power of the cloud and Artificial Intelligence. Mimir has gained the attention of media houses and broadcasters worldwide that are looking to move their media production workflows to the cloud and at the same time use AI technology to automate time-consuming tasks. Mimir enables journalists and editors to use AI as part of their everyday work to automate speech-to-text transcriptions and translations, to detect objects, persons, events, and more in videos and images, and to log all metadata automatically. With rich metadata logging, users can quickly and securely find what they need for their news stories, film production projects and more. With Mimir, content creators that work with video and images can easily find what they need for their stories and projects. Mimir has an easy-to-use search with built-in advanced search options. Mimir users can search for video titles, persons, objects, spoken words, translated spoken words, and any other logged metadata. Finding the content you need is fast and reliable, saving users both time and money. From an easy-to-use script editor, Mimir users can edit transcripts, highlight segments, and create subtitles.
Getting access to Mimir does not require any investment or on-premise installation. You subscribe to the platform and have the flexibility to decide how many users you need, what AI services to use, what data you want to analyse, and what to move to cloud storage. Mimir represents a new breed of software that is built from scratch on cloud technology. It has all the elasticity, flexibility and security that broadcasters and media houses require of a modern media management platform.
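The reason AI-logged metadata makes assets findable can be shown with a toy inverted index: every label (transcript word, detected object, person) points back to the assets that carry it, so a free-text query resolves instantly. Mimir's own index is of course far richer; this is only the underlying idea.

```python
from collections import defaultdict

def build_index(assets):
    """assets: {asset_id: [metadata terms]} -> term -> set of asset ids."""
    index = defaultdict(set)
    for asset_id, terms in assets.items():
        for term in terms:
            index[term.lower()].add(asset_id)
    return index

def search(index, query):
    """Return asset ids matching every query term (AND search)."""
    sets = [index.get(t.lower(), set()) for t in query.split()]
    return set.intersection(*sets) if sets else set()
```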
WINNER NDI NDI® 5 NDI® 5 turns the whole world into a studio. The latest iteration of NDI allows storytellers, from broadcasters to smartphone users, to connect to any device, in any location, anywhere in the world - enabling it to work with almost any video application across the globe. With NDI 5, physical studios can connect to ones in the cloud and remote production becomes local. Building on its predecessors, NDI 5 introduces a host of unique features and benefits for global end-users to create more stories, better told. Designed to harness the creative potential of software and networks, NDI 5 empowers anyone to create visually compelling stories, no matter their location, with any internet-connected device. It also has innovative features including NDI Bridge and NDI Remote. NDI Bridge forms a secure connection between any NDI networks, regardless of their whereabouts. NDI Bridge redefines the concept of ‘remote workflows’ by opening up a breadth of new opportunities for live video production. For example, Sky Germany used Vizrt tools for a recent Handball Bundesliga real-time 5G broadcast, all facilitated by NDI; specifically, the NDI Bridge feature brought the programme feed from the cloud back to Sky HQ in Munich. NDI Remote lets users contribute live audio and video to live productions via a URL and an internet-connected device. NDI Remote opens a limitless realm of possibilities for industry professionals by introducing content contributions from across the globe to live productions of any size. Additional features of NDI 5 include NDI Audio Direct, reliable UDP support, Apple support, and improved Adobe Premiere Pro and Final Cut Pro support for refined artistic opportunities for broadcasters and producers. These powerful additions to NDI Tools allow users to have a more streamlined workflow, creating and delivering content with higher quality and more efficiency than ever before.
NDI 5 allows users to explore professional-level productions in remote environments for industries that have previously faced barriers. Consider Guildhall School of Music and Drama as a prime example of this. When Covid-19 restrictions went into place the school needed to find a way to enable live ensemble performances that could be both safely performed
and effectively viewed. To make that happen, the Guildhall team utilized NDI 5 with a NewTek TriCaster live production system at its core. The system provides not just a Covid-19 workflow, but an entirely new avenue for collaboration and creativity. Further, it provides a level of confidence that the school can continue its work if social distancing guidelines are reintroduced at some time in the future. NDI 5 is essential for the broadcasters, visual storytellers, and creatives who want to succeed in the post-pandemic digital era we are entering. Viewers are craving engaging and stimulating content. Throughout the turmoil of the ongoing pandemic, content has become vital for viewers, and it will only continue to be crucial post-pandemic. With NDI technology, broadcasters and visual storytellers can rest assured knowing they’ve futureproofed their ability to create engaging productions and keep delivering for viewers - no matter what obstacles may arise.
MOVING VIDEO, MOVING THE WORLD
NDI® is the world’s most powerful video-over-IP technology. No matter how you use NDI® - from using NDI® Tools to expand video opportunities across your network, to using the Software Development Kit to equip your own systems with NDI®, or even integrating NDI® into your devices - NDI® 5 harnesses more power than ever before.
Seamless Integration
New plugins for Adobe Creative Cloud and Final Cut Pro create a full post-production workflow.
Easier Everywhere
RUDP transfer makes WAN and WiFi connections more resilient with less configuration required.
NDI® Bridge
Transport complete NDI® streams over LAN, WAN or public internet.
NDI® | HX Camera App
Turns any iOS or Android™ mobile device into an IP-ready live video source.
NDI® Remote
Share live video and audio using an internet-connected NDI® enabled device… just using a URL.
ARM Support; Portable NDI®
Billions of devices NDI® enabled through support for ARM.
WINNER NUGEN Audio Paragon Reverb As the world’s first 3D-compatible convolution reverb, unlike any other on the market, Paragon offers full control of the decay, room size and brightness via state-of-the-art re-synthesis modelled on 3D recordings of real spaces. Perfect for TV and film scoring applications, it also provides an unprecedented level of tweakability, with zero time-stretching – which means no artifacts. Additionally, Paragon features spectral analysis and precise EQ of the impulse responses (IRs). With purity of sound at the forefront of this plug-in, Paragon operates in up to 7.1.2 channels of audio, making it ideal for surround and immersive applications, including Dolby Atmos bed tracks. Further, it features individually configurable crosstalk per channel, unique technology for re-synthesis of authentic IRs, HPF and LPF per channel, and switchable LFE. Different from any other convolution reverb on the market, NUGEN Audio’s Paragon plug-in eliminates the need for enormous IR libraries. This technology not only enables users to reduce the sheer volume of recordings, it also encourages a greater level of creativity. In a recent software update, NUGEN Audio also implemented outdoor IRs, new presets and an improved browser, with “search,” “tagging” and “favorite” functions. These elements are especially important to people working on movies and TV shows with exterior scenes, which are found in nearly every production. Additionally, the new browser makes it easier for users to organize their presets, further expediting the creative process. Using state-of-the-art technology developed alongside the University of York’s Dr. Jez Wells, 3D Impulse Responses are analyzed, decomposed and re-synthesized to create new authentic spaces. This ensures a small digital footprint for the IR library and makes it possible to configure limitless combinations of spaces with just a few adjustments to the settings.
The IR panel also enables users to make changes to the frequency response of real spaces by EQ’ing the reverb model and altering the frequency-dependent decay rate. Unlike traditional convolution reverbs, Paragon does not use static IRs, which gives it a wider scope to transparently transform the sound of a space.
Additionally, Paragon’s crosstalk feature creates a sense of liveliness and interaction between channels and allows users to produce surround reverb from mono or stereo sources. It also offers the control and flexibility to determine how reverb from each channel interacts with other channels, increasing dialog intelligibility. Paragon has been incredibly well-received by the film and TV industry since its release. In addition to its Atmos application, NUGEN Audio’s Paragon reverb plug-in is well-suited to creating immersive reverb in mono, stereo and surround formats. It is ideal for recreating authentic sounds of real spaces and manipulating IRs while still maintaining true convolution characteristics.
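The principle underlying any convolution reverb can be sketched in a few lines: the dry signal is convolved with the impulse response (IR) of a space, so every input sample excites a scaled copy of the room’s decay. The toy sketch below uses a synthetic noise-burst IR purely for illustration; Paragon’s actual engine re-synthesises IRs measured in real 3D spaces rather than using a canned IR like this.

```python
import math
import random

def synth_ir(sample_rate=8000, decay_s=0.25, seed=1):
    # Toy IR: exponentially decaying white noise. (Illustrative only;
    # Paragon re-synthesises IRs captured in real spaces.)
    rng = random.Random(seed)
    n = int(sample_rate * decay_s)
    # envelope falls to roughly -60 dB (factor 1/1000) by decay_s
    tau = n / math.log(1000)
    return [rng.gauss(0, 1) * math.exp(-i / tau) for i in range(n)]

def convolve(dry, ir):
    # Direct-form convolution: each input sample triggers a scaled IR copy.
    out = [0.0] * (len(dry) + len(ir) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(ir):
            out[i + j] += x * h
    return out
```

A unit impulse fed through `convolve` returns the IR itself, which is the defining property of a convolution reverb.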
Best Of Show At IBC 2020
WINNER OWC The Jellyfish The Jellyfish by OWC product family consists of three models. Jellyfish Mobile: the first plug-and-play video workflow server that comes with a handle. Jellyfish Mobile was designed to be on-the-go or, at least, out of the server room. It excels with teams of four to six editors working with 4K media on the road or at the office. Jellyfish Tower: matches the ease of use and plug-and-play magic powers of the Mobile and is as powerful as something you’d find in your server room. It’s quiet enough to stand on its own in your edit bay and powerful enough to take on six or more editors working with 4K media and beyond. Jellyfish Rack: the most powerful plug-and-play solution in the lineup, intended to live alongside all your other fancy server room equipment. Jellyfish Rack is the preferred solution for ultra-high-bandwidth connectivity (25GbE/50GbE) and seamlessly merges into the most complex enterprise network environments.
WINNER Ross Video Ross Ultrix Acuity Hyper Converged Production Platform Ultrix Acuity combines the routing and AV processing capabilities of Ultrix with the creative capabilities of Acuity production switcher. Ultrix Acuity takes routing, audio mixing, MultiViewers, trays of frame syncs and audio embedders/de-embedders – all solutions that have traditionally filled multiple equipment racks – and compresses them all down to a single 5RU chassis. Ultrix Acuity is therefore ideal for environments where size really matters, such as OB vans and mobile units. Add 2RU of rack-mounted redundant power and the result is a complete system in 7RU that can outperform packages requiring multiple racks, complex cabling and control system integrations. As with the current Ultrix solution, Ultrix Acuity is based on our Software Defined Production philosophy, ensuring that futureproofing is never a concern. The Software Defined Production Engine – SDPE – from Ross removes the need for costly ‘forklift’ upgrades by providing base hardware that
can grow via convenient and relatively inexpensive software licenses. Ultrix Acuity’s SDPE backbone will therefore reduce the uncertainty around meeting future creative or technical requirements. The flexible architecture of Ultrix Acuity means that format and connectivity challenges simply disappear. Transition from HD to UHD with a simple software license. Mix SDI and IP sources in the same frame transparently. Use sophisticated tie-line management tools to incorporate the system into a larger distributed routing fabric. In short, as your needs and requirements change, the unrivalled flexibility of Ultrix Acuity easily keeps pace. Ultrix Acuity also provides excellent return on investment: power, cooling, maintenance and support costs are significantly lower. In addition, Ultrix Acuity can become one node of a larger distributed routing environment, reducing the incremental cost of adding I/O and further production switchers.
WINNER Telos Alliance Telos Infinity® Virtual Intercom Platform First, we broke the matrix. Now, we’re putting intercom in the Cloud. Telos Infinity® Virtual Intercom Platform (VIP) is the first fully-featured Cloud-based intercom system. It delivers sophisticated comms virtually, making Cloud-based media production workflows available on any device—smartphone, laptop, desktop, or tablet. Users can even use third-party control devices, like Elgato’s Stream Deck®, to control Telos Infinity VIP. Now you can harness Telos Infinity IP Intercom’s award-winning performance, scalability, ease of integration, and operational/cost efficiencies anywhere—At Home, On-Prem, Site-to-Site, or in the Cloud. Telos Infinity VIP:
- Cost-Efficient – Less Maintenance, Infrastructure, Space Required
- Scalable – Pay for Only What You Need
- Ease of Use – Virtual Panels on Familiar Devices (Smartphone, Computer, Tablet)
- Workflow Flexibility – At Home, On-Prem, Site-to-Site, In the Cloud
- Reliable, Proven Cloud Workflows
- Flexible Deployment Options
- TelosCare™ PLUS Service Option for Premium Service & Support
Deployment Options Meeting users where they are on the path toward virtualization, Telos Alliance offers several deployment options for VIP, which scales to suit users’ varying requirements, from a few remote smartphone VIP instances to an enterprise solution requiring hundreds of instances.
- On-Prem – Use the Telos Infinity VIP hardware appliance or your own server for on-prem installations.
- Integrated – For both On-Prem and Cloud versions, Telos Infinity VIP can be integrated with Telos Infinity hardware comms, or any third-party intercom or audio subsystem, using AES67 or SMPTE 2110-30 connectivity.
- Cloud Server – Software for supported Cloud platform installations. A complete communications infrastructure in the Cloud with connectivity options for integration with third-party Cloud-based and On-Prem audio subsystems.
- Software as a Service (SaaS) – Various third-party Telos Alliance partners will offer a Telos Infinity VIP SaaS option, allowing users to lease it in a virtual environment.
Contact Us Today to Design Your Telos Infinity VIP Cloud-Based Intercom System: Inquiry@TelosAlliance.com
WINNER TVU Networks TVU Channel TVU Channel - going beyond playout. The 24/7 channel solution created for the world’s biggest broadcasters, available for everyone. Cloud playout, scheduling, live programming, and more. The easiest way to manage and launch your 24/7 live digital channel for live broadcasting over the air, on cable, OTT, apps, social media, and websites. Launch from your laptop in minutes. Schedule live and VOD programming with full SCTE support and set up multiple encoders for delivery to CDNs, OVPs or edge devices for traditional linear workflows, all from a simple web browser interface. Use it for traditional linear channel television playout, an OTT channel, a unique pop-up channel, or send directly to social media 24/7, all without requiring infrastructure. Live broadcasting and playout software can be complex. We made our cloud-based scheduling and playout actions intuitive. Single-click playout features include:
- Break in live video or switch programs instantly at any time
- Breaking news auto-recording
- Scheduled or manually inserted graphic overlays
- Electronic Program Guide output
- Instant or scheduled ticker/crawl
- Program interrupt for instant changes
No New or Additional Infrastructure Required TVU Channel is a completely cloud-native solution that can be quickly deployed since it doesn’t require traditional infrastructure. There is also no complicated licensing or configuration needed in order to start using it. Cost-Effective Pay-As-You-Go Pricing With TVU Channel, purchase only what you need and avoid unnecessary capital expenditures. Deploy as many channels as needed at a fraction of the cost of traditional playout. Spin up one, one hundred or as many channels as you need with just the click of a mouse.
Remote Access from Anywhere TVU Channel uses a simple browser-based interface which can be accessed through any smart device or laptop with Internet access. Log in from anywhere without being constrained to a physical studio location. Easy to Use TVU Channel was designed for fast setup without the need for extensive training to get started. Building and managing channels in TVU Channel is as simple as managing an ordinary website calendar. Secure User Access TVU Channel provides full control over the management of permission levels by individual users to schedule and operate channels. Full Compatibility with Scheduling Programs Are you using BFX, Wide Orbit or other major commercial programs? TVU Channel is compatible with the most popular third-party scheduling platforms. Full Integration with PAM and MAM Easily transition content and metadata from major PAM, MAM and editing tools into your playlists. Full Integration with the TVU Ecosystem TVU Channel works with the entire portfolio of TVU solutions and other edge devices for the ingest of live content via SDI/SMPTE 2110/NDI as well as output for traditional linear channel use cases.
WINNER Adthos Adthos Platform The Adthos Platform was created with a single goal: to democratize the audio advertising industry. It was built to answer today’s most pressing challenges: changing consumer behavior, data utilization and the pace of development in the digital space. Step One: release of the Adthos Ad-Server. Offering first-of-its-kind ad-serving technology built specifically for radio, the ad-server was made available for free to download and use, because it’s our belief that everyone should have access to this technology. This lightweight yet powerful addition to any existing traffic or playout system can be installed without multiple integrations and with zero downtime. Radio stations gain the ability to easily sell, schedule and execute multi-platform campaigns with minimal intervention and many tasks completely automated. Instant reconciliation and intuitive customer interfaces provide campaign updates and insights at a moment’s notice, while the ability to replace spots in real time delivers incredible responsiveness.
Advertising that’s always a step ahead Adthos Creative Studio allows on-the-go spot creation and offers powerful targeting possibilities: dynamically using any webservice to insert content based on different datasets to produce timely, relevant advertising. Content can be drawn from web-based information such as weather pages, from geo-targeting for location-based advertising, or even from Excel files in the case of pricing catalogues for supermarket deals.
Then came Adthos Creative Studio From the opportunity to create broadcast-quality, targeted audio advertising on-the-go using human and synthetic voice, to the ability to generate thousands of creatives for a multi-national campaign in record time, Adthos Creative Studio is shaking up audio advertising. This first-of-its-kind browser-based multitrack editor allows broadcasters and advertisers to collaborate online, combining music, human and synthetic voice to produce real-time audio advertising.
The latest innovation for Adthos Creative Studio will change everything… Pre-produced, customizable audio adverts. This technology was recently put to the test through the creation of a pro-vaccination ad campaign, available to download and use for free. Using Adthos’ advanced text-to-speech and synthetic voice technology combined with geo-locations, Adthos generated more than 13,000 creatives, covering more than 6,500 cities, in 70 languages, all within a matter of hours! The possibilities this offers for large-scale and multi-national campaigns are extremely exciting.
How? Adthos Creative Studio harnesses the power of AI to produce natural-sounding, programmable audio. Its voice library of 40 US-English voices includes broadcast professionals and Emmy Award winners (with the ability to add more); the voices are brought to life by controlling intonation and speed and by applying reading rules for content such as phone numbers or emails. It also offers features commonly found in Digital Audio Workstations: equalisers, compressors and limiters, with an option to use professional plugins.
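To illustrate what a text-to-speech “reading rule” does, the hypothetical sketch below expands digit runs so a synthetic voice reads a phone number digit by digit rather than as one large number. This is not Adthos code, just the general idea of such a rule:

```python
import re

DIGITS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
          "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

def read_phone_number(text):
    # Toy reading rule: spell out each digit, and turn the hyphen
    # between number groups into a short spoken pause (comma).
    def expand(match):
        return " ".join(DIGITS[d] for d in match.group())
    return re.sub(r"\d+", expand, text).replace("-", ", ")
```

For example, `read_phone_number("555-0123")` yields "five five five, zero one two three", which a voice engine can then render naturally.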
In conclusion The Adthos Platform is expanding the boundaries of possibility for audio advertising, and it’s doing so while democratizing the industry. From the release of the free Adthos Ad-Server to the highly accessible pricing of Adthos Creative Studio (just $49.95 for the standard package), we’re putting control back in the hands of broadcasters and advertisers, creating a more even playing field within the digital advertising space and elevating the industry.
WINNER WorldCast Systems APTmpX APTmpX is the world’s first and only non-destructive MPX/composite algorithm, helping broadcasters to save network bandwidth (<900kbps) while keeping the highest broadcasting quality. With the launch of three new APTmpX versions, WorldCast Systems marks another industry milestone, allowing broadcasters to lower their transmission costs with bandwidths from <600kbps down to <300kbps. Listening tests have shown impressive transparency even at the lowest bitrates. The latest version of APTmpX, launching in December 2021, requires an impressively low bandwidth, making it possible to transmit the final MPX/composite signal - including over narrowband DSL connections - at <900kbps, but also at <600, <400 and <300kbps. Besides network savings, several broadcasters have managed to save hardware costs with APTmpX, since the majority of composite equipment at transmitter sites is eliminated with a centrally generated MPX/composite signal. Furthermore, APTmpX combines signal fidelity with the best latency performance and makes it easier than ever to guarantee a consistent sonic signature across the transmitter network. In terms of reliability, APTmpX offers the best resilience to packet loss. First, thanks to its non-framed compression, a packet loss only affects the signal for a very short, unnoticeable time instead of having to wait for the next keyframe. Second, APTmpX includes an OMC mechanism to lower the impact on the signal. For FM-SFN applications, APTmpX is also the best choice, as it keeps the highest signal transparency across multiple transmitter sites and is fully compatible with SynchroStream, the highest signal synchronicity technology. APTmpX is highly flexible, being able to process and transmit all existing signal types, be it analog or digital audio, or a combination of both.
And thanks to its low complexity and easy integration, it has proven itself as the ultimate solution to enable a 100% digital transmission chain, bridging the transition from digital studios to digital transmitters. “APTmpX not only enhances our portfolio of MPX solutions, but also marks a milestone in the transition to an MPX/composite environment. The user benefits from significantly lower hardware and distribution costs while maintaining the station sound,” says Hartmut Foerster, APT Product Manager at WorldCast Systems. FM is still the type of transmission that reaches the most listeners worldwide, and it remains a central technology in radio broadcasters’ efforts to bring valuable content and information to citizens across the world. Although it has been a well-established channel for decades, optimising the effort and cost of this type of distribution has remained a constant challenge. The creation of APTmpX was a paradigm shift in the industry, as it solved this problem with a first-of-its-kind MPX/composite algorithm that strikes the perfect balance between high quality and cost-effective transmission. The impact that APTmpX’s launch had on radio broadcasters is a true testament to WorldCast’s commitment to optimising processes and enabling cost savings in the industry; with the perfecting of this algorithm, a significant milestone for the industry has been achieved.
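For context, the MPX/composite signal that APTmpX compresses is the standard FM stereo multiplex: the mono sum (L+R), a 19 kHz pilot tone, and the stereo difference (L-R) on a 38 kHz suppressed-carrier subcarrier. A minimal sketch of one baseband sample follows; the scaling factors are the conventional ones for FM stereo, and APTmpX’s compression algorithm itself is proprietary and not shown:

```python
import math

def mpx_sample(t, left, right):
    # One sample of the FM stereo multiplex (composite/MPX) baseband:
    #   - mono sum (L+R) at baseband
    #   - 19 kHz pilot tone (~9% deviation)
    #   - stereo difference (L-R), DSB-SC on a 38 kHz subcarrier
    pilot = 0.09 * math.sin(2 * math.pi * 19000 * t)
    mono = 0.45 * (left + right)
    stereo = 0.45 * (left - right) * math.sin(2 * math.pi * 38000 * t)
    return mono + pilot + stereo
```

Because the pilot and subcarrier sit above the audio band, the whole composite occupies roughly 0-53 kHz, which is what makes transporting it over sub-megabit links non-trivial without compression.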
NOMINEE Autoscript Autoscript Voice Autoscript Voice uses revolutionary advanced speech recognition technology to free presenters and production staff from dependence on foot or hand scroll control devices. Using Voice for WinPlus-IP, presenters are empowered with reliable and accurate real-time control of their prompted script simply by speaking the words. It’s the most effortless way to control a prompter ever. Created in consultation with a major US television network, Voice utilises patent-pending IP technology to provide a bespoke solution designed for the next generation of live television production. Using active listening, Voice continuously monitors the production audio feed to automatically advance the script as the words are spoken. It will even pause for ad-libs and resume scrolling when the presenter is back on script, allowing them to comfortably read the script without the extra effort of manual scroll control. Autoscript Voice is designed to manage the complexities of live broadcasts and perform in the most demanding workflows. Automatic speech recognition combines with proprietary algorithms and advanced pattern matching to scroll and navigate the script in perfect synchronisation with the presenter. Voice automatically handles numerous script and show formats, supports multiple presenters, and can understand regional accents. Autoscript Voice enhances dynamic automated production environments, allowing redeployment of resources to areas of greater value. Voice controls WinPlus-IP simultaneously with other Autoscript scroll control devices, offering flexibility of control where required. Fully IP and Virtual Machine compliant, Voice can be anywhere on the network, delivering resource efficiency and added agility to productions. Voice is the result of an intensive three-year development project to create automation that broadcasters can rely on.
Working with a major US television network, including beta testing and shadowing live newscasts, Autoscript engineers have created the first fully featured voice control system fit for demanding real-world workflows. Until now, there’s been nothing that effectively leverages
speech recognition at broadcast scale, addressing the needs of everyone involved in the prompting workflow within a complex live broadcast operation. Autoscript Voice is the broadcast industry’s first solution to use speech recognition to streamline teleprompting in a live newsroom production environment. Autoscript Voice brings multiple benefits to broadcast operations.
- Talent is relieved of the burden of having to do their own prompting, or of reliance on others to control the scroll speed of their script.
- The control room can interact with Voice in the same way they’re used to interacting with current systems, using the same commands.
- Control room resource is released, enabling expenditure and focus on more strategic tasks.
- Productions can move smoothly from Voice to a human operator and back, as the software works as another controller within the system, participating in the same controller arbitration scheme as physical controllers.
Voice is a revolutionary automation technology that fits seamlessly into the existing production environment. It delivers the accuracy demanded of modern broadcasting, bringing benefits to every stakeholder and increasing efficiency and resource flexibility to suit modern production requirements.
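Conceptually, a speech-driven prompter advances a position pointer through the script as recognised words match, and holds position during ad-libs. The toy follower below illustrates only that idea; Autoscript’s pattern-matching algorithms are proprietary, and the `window` parameter here is a hypothetical look-ahead limit:

```python
def advance_script(script_words, heard_words, pos=0, window=8):
    # Toy script follower: advance the prompt position only when a heard
    # word matches a script word near the current position. Unmatched
    # speech (an ad-lib) leaves the position unchanged, so scrolling
    # pauses and resumes when the presenter is back on script.
    for heard in heard_words:
        for i in range(pos, min(pos + window, len(script_words))):
            if script_words[i].lower() == heard.lower():
                pos = i + 1
                break
    return pos
```

Feeding it `"good evening and welcome to the news"` against the heard words `Good, evening, folks, and, welcome` advances past four script words while the ad-libbed "folks" is ignored.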
NOMINEE Black Box Emerald® GE Gateway The newest addition to the Black Box Emerald® KVM-over-IP product family, Emerald GE gateway enables multiple users to connect simultaneously and control the same virtual machine (VM) just as they would a physical system. Virtually every KVM system is designed to allow multiple users to connect to the same system, but until now, it was simply impossible or too complex and expensive to achieve simultaneous connectivity for users working on VMs. Emerald GE is the industry’s first solution to leverage PC-over-IP (PCoIP®) and PCoIP Ultra technology to support VM sharing and to ensure a secure, high-definition, and highly responsive computing experience. The innovative solution from Black Box makes it possible for multiple users to connect simultaneously to a single VM or physical machine, with seamless access via any user console connected to the Black Box Emerald KVM platform. Unlocking collaboration across physical and virtual machines, Emerald GE
empowers broadcaster teams to work together more efficiently in remote production scenarios, unconstrained by their physical location or by the type of machine supporting their work. Emerald GE makes flexible, convenient access to VMs simpler and more affordable while also providing exceptional performance that ensures a user experience rivalling that of a direct connection. Eliminating the need for a thin client to access VMs, the new product reduces not only the complexity of accessing VMs, but also the cost. While support for VM sharing is unique to the Emerald KVM family, it’s not the only differentiator for Emerald GE. The Black Box system is also unique in making connectivity available on VMs via RDP/RemoteFX, PCoIP, and PCoIP Ultra with extremely low bandwidth requirements. Offering unprecedented compression efficiency, PCoIP Ultra ensures unparalleled performance that gives users the responsiveness they need in dynamic, fast-paced environments.
NOMINEE Boland Monitors X4K31HDR5-OLED The largest of Boland’s new X-4K OLED monitor series, this 31” reference-grade model features a true 10-bit panel and processor, with a dynamic 1,000,000:1 contrast ratio that guarantees ultra-deep black levels. 4K signal is delivered via 12G and 3G SDI (single or quad link), HDMI 2.0, and SFP (ST 2110) inputs. The next-generation X4K31HDR5-OLED offers numerous scopes
and audio meters, 3D LUTs, time code, markers, and multiple aspect functionality. All firmware updates are completed in-field using USB, and all X-4K Series models include VESA mount holes on the rear in addition to a desktop stand. Also available in 21” and 27” sizes.
NOMINEE Bridge Technologies VB440 with HDR functionality As broadcast technology develops, so too do viewer expectations, and with the rise of HDR-ready TVs, viewers are becoming increasingly discerning in relation to image quality. But this represents a problem in the field of broadcast production, because the ability to work with both SDR and HDR in tandem – particularly in live or remote contexts – is both challenging and potentially expensive. It is this challenge that the most recent addition to Bridge Technologies’ award-winning VB440 probe addresses: a suite of unique, innovative new HDR tools which allow content producers to adapt their workflows to accommodate HDR in an efficient, accessible, intuitive and accurate manner. Built as a monitoring solution for IP and SDI-encapsulated production environments, the VB440 delivers ultra-low-latency analytics of compressed and uncompressed data to provide creatives and technicians alike with the deep insight they need to ensure error-free delivery of live and recorded broadcast, from any remote location across the globe. In relation to the addition of HDR functionality, the VB440 starts from the point of being able to identify the type of coded stream coming in, be this HLG, PQ, S-Log3, or a number of other standards, either through manual setting or through automatic recognition from auxiliary NMOS signaling data. The user is then able to access any of the existing wide range of waveform scopes within the VB440 and apply them to this HDR stream. In addition, the graticule has been adjusted to accommodate the needs of HDR more comprehensively, including not only IRE but also NITs graticules, as well as an ability to adjust graticule sensitivity. Furthermore, an HDR-specific CIE Chromaticity scope has been added which demonstrates the full colour gamut of a given video and provides a number of options to suit the user’s need.
Of course, since HDR still represents a transitional standard that has not fully penetrated the market, the VB440 also facilitates data and image visualization according to SDR parameters. Whilst these data visualisations are key, what is most fundamental about the HDR capabilities of the VB440 is its ability
to give a visual preview of an HDR output image through a non-HDR-compatible browser. This is achieved by converting the specific codings of the HDR image into the sRGB colourspace of the browser, thus effectively ‘mimicking’ a localised preview of what the HDR output will be like for audiences. This is a unique and significant one-device-only capability that sets the VB440 apart from other technologies in the field, and provides unrivalled insight and control for creative production professionals. More than this, because it facilitates full insight through any HTML5 browser with ultra-low latency, the VB440 allows for full production capability to be achieved from anywhere in the world, in real time. The contribution this makes to outside, remote and distributed production cannot be overstated – both in terms of facilitating exceptional production standards, and in terms of cost reduction through eliminating the need to equip facilities with specific HDR-capable equipment.
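To illustrate the kind of transform involved in previewing HDR in a standard browser, the sketch below decodes a BT.2100 HLG-coded component to linear light and re-encodes it with the sRGB curve. It is deliberately simplified (per-component only, ignoring tone mapping and the BT.2020-to-sRGB gamut matrix) and is not Bridge Technologies’ implementation:

```python
import math

# BT.2100 HLG inverse-OETF constants
A, B, C = 0.17883277, 0.28466892, 0.55991073

def hlg_to_linear(e):
    # HLG-coded value in [0, 1] -> linear scene light in [0, 1]
    if e <= 0.5:
        return (e * e) / 3.0
    return (math.exp((e - C) / A) + B) / 12.0

def linear_to_srgb(l):
    # Linear value in [0, 1] -> sRGB-encoded value in [0, 1]
    if l <= 0.0031308:
        return 12.92 * l
    return 1.055 * (l ** (1 / 2.4)) - 0.055

def hlg_preview(e):
    # Chain the two: approximate sRGB code value for one HLG component
    return linear_to_srgb(hlg_to_linear(e))
```

A real converter would additionally apply the display OOTF and a 3×3 primary conversion; the point is only that each HDR code value is mapped through the inverse HDR curve and the browser’s sRGB curve.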
NOMINEE BT Media and Broadcast Vena The media and broadcast industry has found itself caught within the perfect storm. Consumer demand for content and evolving technology is driving transformation, and broadcasters and media companies need new tools to meet changing requirements. Providing the future now Launched in May 2021, Vena is the future of broadcasting. It combines the power of a managed broadcast network with the flexibility to provide on-demand services – a critical duo given the changing nature of content consumption and the new ways media organisations are working commercially and operationally. The low-latency smart network was developed around three key pillars – flexibility, futureproofing and usability – and has Digital 3&4 among its contracted customers. Combining best-of-breed technologies with BT’s own software-defined orchestration layer, it ensures content is where it’s needed and in the right format. It is able to pinpoint potential pathway issues, optimising routing to ensure seamless delivery of live video streams and content between venues, studios, production facilities, playout and broadcast infrastructure. Through its API, Vena will be compatible with additional services, existing systems and processes. Additionally, the entire content production and distribution ecosystem will be available to control from a single user interface. This combination provides unparalleled interconnectedness, offering unrivalled control over all operations to deliver lasting experiences. The ultimate reliability BT has been at the heart of broadcasting for decades. It’s helped deliver breaking news and unforgettable sporting moments. The only thing more consistent than BT’s presence in television history is the reliability of its network. Reliability flows throughout Vena, with the purpose-built network guaranteeing 99.999% availability.
It will also constantly evolve to meet new demands, with continuous upgrades able to be made seamlessly as functions are software-defined. Whatever the next evolution, from 8K video to object-based broadcasting, Vena will support it.
Being software-defined, Vena doesn’t require huge quantities of hardware. This reduces the room and power required to support technology stacks, as well as decreasing the need for on-site engineers, since it can be upgraded and maintained remotely. Both lower associated costs and the carbon footprint. Untold workflow flexibility Vena also enables remote production like never before. Where once hundreds of on-site technicians produced a major event, the development of Remote Operating Centres has greatly reduced the required number. Vena’s interconnectedness will accelerate things further. Fewer resources on-site means a reduced carbon footprint and a more diverse workforce, as technicians can perform roles remotely. Greater accessibility to talent pools is vital, particularly during the ‘great resignation’. Connecting the entire ecosystem, Vena provides the ultimate booking power. If an upcoming daytime programme needs multiple live feeds, traditionally those lines need to be manually booked. Vena enables seamless connection to those lines, all at the touch of a button. Vena’s influence will stretch far beyond the UK too. BT has global points of presence so content can be delivered to new audiences cost-effectively, enabling media providers to test regions before committing long term. This flexibility will unlock new experiences for global audiences.
NOMINEE Caton Technology Caton Transport Protocols (CTP) Caton is an industry leader in next-generation IP network transport solutions. Headquartered in Singapore with regional offices in Shanghai, Beijing, Los Angeles, Hong Kong, Taipei and Tokyo, Caton Technology enables advanced video encoding and data transmission over the Internet. Caton Transport Protocols (CTP) will be showcased at IBC 2021 and is entered for the first time in the TVBEurope Best of Show awards. Comprising a series of IP transmission technologies, CTP was developed to ensure stability, quality and security for video, media and other data transmissions. CTP utilises more than 30 in-built algorithms and deep-learning approaches to smooth and mitigate network challenges. Patented dynamic error correction to recover from data loss is another headline benefit of CTP, which is an ideal technology for live streaming of high-value content, including 4K and 8K content such as Premier League soccer matches and world-class sports games, where quality, security and real connections are paramount. CTP can seamlessly adapt to challenges caused by jitter and congestion in the network, making it a sturdy and reliable choice. Guaranteeing an optimal viewing experience in the harshest of network bandwidth environments is crucial, and CTP ensures this by efficiently delivering high-quality video and low latency
– every time. Beyond video, CTP is also finding favour with enterprises and service providers for fast file/data transfer, resulting in accelerated transmission speeds that are significantly faster than traditional solutions, including FTP. As CTP transports all forms and sizes of content over any IP network, the technology is scalable for your business needs as they evolve. CTP is fully interoperable and can integrate into existing networks with a comprehensive Network Monitoring System, so operators can benefit from cost savings while extending their services. Caton Technology understands the importance of security when it comes to high-quality video and data transmission. CTP fully encapsulates its data and encrypts the end-to-end connection with AES-128 and AES-256 encryption technologies. For an additional layer of security, designated devices can also be assigned for specified whitelisted connections. Delivery of high-quality live streaming today, across any platform, relies on optimal transmission reliability, exceptional video quality and fast transfer speeds for data files over any IP network. Caton Transport Protocols offer the most seamless workflows to enable the future of video and data transmission, which is why we think it is worthy of this award.
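Caton’s dynamic error-correction schemes are patented and not public, but the general principle of recovering lost packets from redundancy rather than retransmission can be illustrated with the simplest possible scheme: an XOR parity packet over a group, which rebuilds any single lost packet without a round trip:

```python
def parity_packet(packets):
    # XOR parity over a group of equal-length packets. If any single
    # packet in the group is lost, it can be rebuilt from the survivors
    # plus the parity. (Generic FEC illustration, not Caton's scheme.)
    out = bytearray(len(packets[0]))
    for p in packets:
        for i, byte in enumerate(p):
            out[i] ^= byte
    return bytes(out)

def recover(survivors, parity):
    # XOR of the surviving packets and the parity yields the lost packet.
    return parity_packet(list(survivors) + [parity])
```

Real FEC for media transport uses more sophisticated codes that tolerate burst losses, but the latency advantage over retransmission is the same: recovery happens from data already in flight.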
NOMINEE Dalet Dalet Pyramid Dalet Pyramid is the next-generation solution for Unified News Operations. Accommodating both digital-first and linear end-to-end news workflows, the cloud-native Dalet Pyramid solution enables the industry’s first collaborative Storytelling 360 approach to multiplatform production and distribution. Dalet Pyramid is a subscription-based solution that can run with a range of cloud providers, including AWS, on-premises, or a hybrid of both, offering the unprecedented mobility that supports the industry’s continued pivot to solutions designed to support remote productions. Dalet Pyramid builds on Dalet’s approach to Unified News Operations where planning, content creation, asset and resource management, playout and multiplatform distribution have been combined into one platform that enables production of fast-breaking, digital and live news, current affairs shows, and more. The agile architecture design facilitates collaboration at story-level with the industry’s first Storytelling 360 approach and enables a truly virtual newsroom. Dalet Pyramid can be deployed either as an extension to an existing Dalet Galaxy five installation or on its own running on-premises, in the cloud, or a mix of both. Users can contribute, produce and manage the full news story lifecycle from anywhere using smartphones, tablets and laptops connected to basic internet. The underlying Dalet asset management and orchestration engine facilitates content flow from ingest through planning to distribution and archive. The AI-powered Dalet Media Cortex, a standard offering at entry-level within Dalet Pyramid, provides speech-to-text features, including assisted captioning and translation, with innovative AI services that automatically index or recommend content to news professionals and storytellers. 
Robust APIs and panels, such as Dalet Xtend for Adobe Creative Cloud®, enable extensive custom integrations, allowing customers to tailor their workflows according to their needs. The agility of the Dalet Pyramid architecture and inherent benefits of SaaS enable customers to rationalize their productions across the entire operation and increase user productivity everywhere, reducing the overall TCO of the
platform. Combined with its digital-first and Storytelling 360 workflow capabilities, the solution will help transform the business of news production, setting a new bar for operational standards and efficiencies while opening doors to new revenue opportunities thanks to stronger digital and OTT workflows. With Dalet Pyramid, our estimate is that on-premises newsrooms will save on average 30% on their infrastructure costs over five years. In addition to the immediate benefits to the balance sheet, Dalet Pyramid brings an increase in business agility. With Dalet Pyramid’s approach to news production, it’s easy for broadcasters to incorporate new aspects into their business: new digital outlets, sponsors, and the ability to add hundreds of new users almost instantly. Overall, customers can expect as much as a 50% productivity boost from Dalet Pyramid. That’s because, from its multiplatform web-based interface, they’ll have instant access to all the source material they need to make optimized content for every digital platform at a marginal additional cost. Dalet Pyramid is the result of 20+ years of R&D, expertise, experience and collaborative innovation with customers. The result is a newsroom solution like no other, representing what the modern newsroom should look like.
NOMINEE FOR-A MV-1640IP The FOR-A MV-1640IP is the perfect compact, adaptable multiviewer for production and monitoring installations, supporting both SDI and IP inputs and outputs for hybrid environments. As the need for signal monitoring and quality checking has escalated at all points in the broadcast chain, the move is away from big, dedicated multiviewers to smaller products which can create highly tailored mosaics at the point they are required. The MV-1640IP is designed with precisely this requirement in mind; it is compact, affordable, flexible, readily implemented and meets the real needs of today’s system architecture in a transitional or hybrid environment. This solution incorporates all the functionality required for point multiviewers in a hybrid architecture, allowing monitoring of multiple signals whatever their source, format or resolution. It offers all the flexibility needed to design, select and modify layouts. Layouts can include any combination of image sizes as required by the application, with up to 25 windows on a single screen. Because units can be cascaded, very rich displays can be built up using simple control and with minimal latency. In IP environments, this solution conforms to all the elements of the SMPTE ST 2110 standard for video, audio, ancillary data and timing. Availability is high through two independent network connections under the ST 2022-7 redundancy scheme. The 1U appliance supports up to 32 inputs at up to 4K resolution: 16 on SMPTE ST 2110/ST 2022 streams and 16 with an optional SDI card. The network connection uses an SFP+ port and can support 10 Gb and 25 Gb Ethernet. Network ports can be duplicated under SMPTE ST 2022-7 to make the system resilient. Stream switching is by NMOS. The multiviewer output is a free arrangement of any of the inputs, from either IP or SDI, up to a 5 x 5 matrix.
Layouts can be defined in advance and selected from the front panel or a web application, or pre-programmed to switch at specific times. The MV-1640IP supports up to five outputs in two combinations: one 12G-SDI output or four 3G/HD-SDI outputs, plus one HDMI output; or one 12G-SDI output, three 3G-SDI outputs and one HDMI output. Images are re-sized as appropriate, seamlessly converting
between HD and 4K as required. Built-in signal monitoring detects frozen frames, supporting external control via SNMP. The unit can stream motion JPEG for monitoring on remote web-enabled devices. While the directly connected screens feature alarms, tallies and audio levels, a Windows application allows a clean streamed output, so it can be used for conformance recording or fault-trail analysis. A roving engineer could even monitor systems from a tablet or phone. Finally, MV-1640IP units can be cascaded to create larger displays with many more windows, all controlled from a single user interface.
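The MV-1640IP’s layout engine is free-form, but the arithmetic behind an even mosaic is easy to illustrate. This hypothetical sketch tiles a 3840x2160 output into the unit’s maximum 5 x 5 matrix (the function name and even-grid choice are assumptions for illustration only):

```python
# Illustrative only: compute (x, y, width, height) rectangles for an
# even cols x rows mosaic on a given output raster.
def grid_layout(out_w, out_h, cols, rows):
    tile_w, tile_h = out_w // cols, out_h // rows
    return [(c * tile_w, r * tile_h, tile_w, tile_h)
            for r in range(rows) for c in range(cols)]

# A 5 x 5 matrix on a UHD output yields 25 windows of 768x432 each.
tiles = grid_layout(3840, 2160, 5, 5)
```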
NOMINEE Hedge Postlab for Media Composer Postlab now has full support for Media Composer. The cloud-native platform brings collaborative remote editing workflows to Avid editors. Postlab for Media Composer becomes an extension to existing Nexis or other Avid-compatible NAS/SAN storage, or can be used on its own for an exclusively cloud-based facility. Eliminating the cumbersome access gateways and high latency typically associated with cloud-media workflows, Postlab for Media Composer provides a smooth and fluid editing experience, even with limited bandwidth. Flexible monthly plans allow post facilities and media businesses to use Postlab without upfront commitment, and to scale team sizes up and down. The platform is secure and doesn’t require slow, hard-to-configure VPNs, allowing editors to work from anywhere. It keeps workspaces in sync, ensuring on-premises storage (NEXIS and third-party NAS/SAN) is always synchronised with Postlab Drive. By retaining the familiar ‘workspace’ workflow, including Bin Locking, Postlab for Media Composer requires no training, and guarantees secure collaboration and productivity when working from home, on the road or in the local cafe. Postlab for Media Composer coordinates media and metadata via the cloud, allowing users across the globe to work together on projects in real time. For slower connections, relevant media is cached on users’ workstations for a responsive editing experience. The platform continually exchanges metadata in the background to keep all bins and media within the production in sync. It makes Avid’s useful Bin Locking feature work in the cloud, so users can collaborate on projects without overwriting each other’s work. Postlab for Media Composer makes it easy for facilities to predict costs, scale resources, and maximise the investment in their Avid NEXIS or NAS/SAN-based storage for a small incremental outlay. The pandemic impacted the media industry.
Employers have reduced office spaces, while creative professionals have embraced the freedom and flexibility of working remotely. Being able to edit inside your usual workspaces wherever you are, and with familiar tools, is now an absolute necessity.
Postlab for Media Composer gives editors that “Avid facility” experience everywhere. It delivers freedom and flexibility for editors and a way for facility owners to get more from their investment in on-premises NEXIS or Avid-compatible NAS/SAN storage. Avid editors have been bound to edit suites because of the way the Avid ecosystem was designed; collaborating remotely has not been an option, so Avid editors have been forced to go to the office to work, locking out creative professionals in other parts of the world. Democratising workflows has been a priority at Hedge from day one, and it brings an immediate benefit to Avid users. Postlab for Media Composer is a cloud-driven environment for Avid editors that’s easy to set up, fluid, fast and smooth to use, secure, and affordable, functioning as if it were a local shared workspace, with what feels like fast storage connected in your own facility. It also has all the collaborative functionality built in - exactly what’s needed for today’s unpredictable business environment.
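Avid’s actual Bin Locking protocol is proprietary; as a toy model only, the coordination idea - one editor holds a bin for writing while everyone else reads until it is released - can be sketched like this (the class and method names are invented for illustration):

```python
# Toy model of bin locking: not Avid's or Hedge's implementation.
class BinLocks:
    def __init__(self):
        self._owners = {}  # bin name -> editor currently holding the write lock

    def acquire(self, bin_name, editor):
        """Grant the write lock if the bin is free or already held by this editor."""
        owner = self._owners.get(bin_name)
        if owner in (None, editor):
            self._owners[bin_name] = editor
            return True
        return False  # someone else is editing; open the bin read-only

    def release(self, bin_name, editor):
        """Release the lock, but only if this editor actually holds it."""
        if self._owners.get(bin_name) == editor:
            del self._owners[bin_name]
```

In a cloud setting the lock table would live in a shared service rather than local memory, which is exactly the coordination role the text describes Postlab playing.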
NOMINEE Imagine Communications Nexio NewsCraft Imagine talked to news broadcasters around the world to understand today’s requirements. All agreed that the ability to get news to viewers as quickly as possible is paramount and adds real value to their operations. The story must engage the viewer, and the news output must be accurate, building trust and loyalty in the audience. Nexio NewsCraft™ is a new generation of news production system, designed precisely to meet these goals. Nexio NewsCraft provides a feature-rich and intuitive toolkit in a single, highly automated environment that streamlines news production from ingest and content management to playout and delivery over multiple platforms. Based on proven open standards throughout, it operates seamlessly in hybrid SDI/IP architectures and allows access to newsroom systems from the newsroom, in the field or from home. Built on Imagine’s long heritage, Nexio NewsCraft combines the field-proven performance and reliability of the company’s Nexio® production servers and IOX shared storage with the modern, easy-to-use web-based GUI and best-in-class media management capabilities of the EditShare FLOW family. Its MOS compatibility ensures Nexio NewsCraft can seamlessly interface with any existing newsroom computer system. This respects legacy capital investment and allows Nexio NewsCraft to be part of a technology transition at the pace set by the broadcaster. Multiple software tools give wide and fast access to content, empowering journalists by removing technical and operational constraints. Simple browse controls in the journalist’s workstation allow basic newsroom tasks like topping and tailing, shot-listing and logging, all through intuitive web user interfaces. Users can access these core tools from wherever they are via an internet connection. Journalists on location can prepare stories remotely in a cafe or hotel room; editors and producers working in the newsroom or from home can pick up and progress any story.
Where more sophisticated editing is required, Nexio NewsCraft is fully integrated with popular third-party tools like Adobe® Premiere® Pro and DaVinci Resolve for seamless flow and transparent sharing of projects between editors. Rights
access is protected with granular user and group permissions. Artificial intelligence tools like automated metadata extraction and video analysis can be added to increase productivity. Powerful searches on better metadata not only allow the newsroom to create more authoritative and engaging stories, but also open up the archive as a potential revenue stream. For users, workflows are clear and simple. Efficiency in design and daily use accelerates the time to air, minimising the potential for human errors. Accuracy is not just a legal requirement but a reputational issue: building brand loyalty by getting the right information at the right time. Nexio NewsCraft is fully scalable up to the very largest newsroom, delivering excellent efficiency and capable of being implemented on premises or in the cloud. It is UHD-ready, allowing broadcasters to transition when needed, while still retaining all of the value of existing HD workflows and archive material. Nexio NewsCraft represents a full-featured, elegant solution for broadcasters to deliver news content faster, building trust and loyalty in the audience at a competitive price point.
NOMINEE Iron Mountain Entertainment Services Smart Vault Smart Vault provides broadcasters, production companies and other media organisations involved in the production of content with a powerful one-stop solution for content management, storage and archive. It uses the latest cloud-native technology to enable easy access to content with total locational flexibility, and at any scale, via an in-built video player. As well as streamlining and bringing multiple efficiencies to the lengthy processes used in managing media assets, Smart Vault includes AI/ML capabilities that enable users to enrich the media stored in the platform. This allows users to leverage the value of their library content, providing monetisation opportunities that dovetail well with the current spike in content demand, which has been amplified by Covid. At IMES, we have been in the business of media asset management for decades. We are not only leaning on that heritage to provide media organisations with a helping hand in managing their content, but expanding what they are able to do with it. Covid has shone a light on the new importance of remote workflows, and we have built Smart Vault to enable these from the ground up. Additional functionality allows users to share media, contribute to the platform, and create custom workflows for media production and distribution, increasing agility throughout the entire video chain. As a result, Smart Vault empowers organisations and individuals to monetise mammoth amounts of content through one centralised and easily accessible platform — whenever they want, wherever they want. This is thanks to the detailed organisation of digital media assets that Smart Vault enables through collections and tags. Intuitive, efficient organisation and metadata entry facilitates logging, and provides the ability to annotate media files with notes for editors or directors to consult during post.
Audio transcription using speech-to-text technology allows further content-management flexibility through the ability to search, verify and edit transcripts, with the same technology also supplying frame-accurate captions at broadcast quality. Admins have complete control over who can upload media, download media, or edit metadata, and feedback and approvals
are enabled with time-stamped comments and threaded replies. A rough-cut assembly can be built and sent to Avid Media Composer or Adobe Premiere Pro as a sequence for immediate editing, while any selection of clips can be sent to the preferred edit suite as an enriched sequence. Our Smart Vault media asset management solution has been built for today’s smart, creative, and future-focused content owners. It leverages two main things: the flexibility and cost-effectiveness of a cloud-native MAM deployment, and the decades of expertise of the Iron Mountain Entertainment Services team in the secure managing, sorting, and archiving of users’ content. IMES and the Smart Vault solution are already trusted by some of the industry’s biggest brands. We strongly believe that when it comes to media assets, preservation matters, access matters, and monetisation matters. In summary, content matters, and Smart Vault enables it to be securely and seamlessly managed, processed, and distributed in new and innovative ways.
NOMINEE JW Player STUDIO DRM In Q4 of 2021, JW Player, a leading video software and data insights platform, unveiled its enhanced STUDIO DRM (formerly VUDRM) solution. This is part of the integration of VUALTO into the JW Player platform to create the industry’s most powerful platform for video orchestration and encryption. STUDIO DRM is a multi-DRM solution that makes content protection easy for broadcasters, sports OTT platforms and other premium content rights holders. It is highly scalable and uniquely flexible, allowing content owners to request DRM encryption keys on the fly. STUDIO DRM also supports the latest content protection standards, including CPIX, CMAF and CBCS. The solution has long been a leader in DRM innovation and was one of the first to implement support for PlayReady, Widevine and FairPlay, as well as ABR streaming with DRM encryption. Customer success: As a start-up in 2015, STARZPLAY, a regional streaming platform for the Middle East & North Africa region serving over 1.8m subscribers, was looking for an economical, unified API integration that would help to save both costs and time spent on deployments. After testing solutions from multiple DRM providers, the organization turned to JW Player and its digital rights management solution. Aside from the solution offering a much-needed update to STARZPLAY’s DRM setup, the streaming provider was immediately impressed by the support offered by the technical team, with a clear solution presented to meet the organization’s strategic requirements. As one of the first companies to offer a multi-DRM service, with a wealth of experience in the sector, JW Player earned STARZPLAY’s trust in the new setup with a unified API. During periods of significant traffic spikes experienced during lockdowns due to the COVID-19 pandemic in 2020, broadcasters were able to rely on fully integrated, scalable, and resilient content protection with STUDIO DRM.
Along with providing a single API for a multi-DRM managed service, STARZPLAY has benefited from massively improved uptimes compared to previous years, with 100% uptime in 2020. STARZPLAY saw increases of 50% in streaming hours per unique user, meaning unprecedented levels of consumer demand had to be met. With scalable infrastructure using a Kubernetes
container orchestration system on AWS, STARZPLAY was able to continue securely delivering low-latency live and video-on-demand (VOD) content to its audience throughout the UAE and Saudi Arabia, across multiple devices, retaining complete control of who watches its content and when. In 2020, STARZPLAY also began providing a technology solution for an OTT service in India, and this solution incorporates STUDIO DRM. “Wherever our business goes, the embedded VUDRM goes along with it. The partnership we have with JW Player is an established part of our tech stack, bringing us go-to-market speeds. I look forward to working closely with JW Player in the coming years to expand our offerings in other regions, helping to solidify our strong position in an ever-growing market,” said Faraz Arshad, Chief Technology Officer at STARZPLAY.
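STUDIO DRM’s own API is not documented here, but the idea behind requesting encryption keys on the fly can be sketched generically: mint a fresh content key and key ID per asset at packaging time, then hand them to the packager and license service. The helper below is hypothetical and uses only standard primitives (CENC content keys are 128-bit AES keys):

```python
import secrets
import uuid

# Hypothetical sketch only - not JW Player's API. Illustrates per-asset,
# on-the-fly key generation as used in common-encryption workflows.
def new_content_key():
    """Mint a fresh content key + key ID pair for one asset."""
    key_id = str(uuid.uuid4())     # identifies the key to the license service
    key = secrets.token_bytes(16)  # 128-bit AES content key (CENC key size)
    return key_id, key

kid, key = new_content_key()
```

In a real multi-DRM setup the same key pair would then be delivered to PlayReady, Widevine and FairPlay license servers, typically exchanged in a CPIX document.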
NOMINEE MediaKind MediaKind Engage The global streaming market was valued at USD 50.11 billion in 2020 and is expected to expand at a CAGR of 21.0% from 2021 to 2028 (Grand View Research). This market proliferation means streaming services must reach a level of technical maturity that can deliver the reach, scale, and reliability that meets consumer demand. Launched this year, MediaKind Engage is a new direct-to-consumer (D2C) solution for video production, streaming, and audience engagement. The solution is based on MediaKind’s longstanding experience and expertise in delivering robust and resilient broadcast solutions. The company’s engineering and product teams have leveraged this knowledge to develop a cloud-native technology service that acts as a foundation for modern video workflows and applies directly to today’s market challenges in enabling streaming delivery and monetization. From live productions to publishing and client applications, MediaKind Engage exposes advanced workflows built on MediaKind’s end-to-end portfolio, complemented with best-of-breed partner solutions. MediaKind Engage is designed for the many organizations that need to compete with the features and functionality that other streaming media giants offer, on a fraction of those giants’ budgets. It allows sports entities, broadcasters, channel originators, and content owners of all sizes to create and monetize new D2C offerings that increase fan engagement opportunities and enable consumers to curate and personalize content experiences. MediaKind Engage provides a fast and efficient service that drastically speeds up time-to-market and lowers setup costs. The solution guarantees agility, quality, and stability at scale while maximizing return on investment on a wide range of compelling, high-quality live and on-demand video and data services.
The technology will enable stability for all professionally delivered live-streamed events, something that has proved a significant challenge for many prestigious and major live events over the last 18 months. This stability has been factored into the development of MediaKind Engage to provide operators with a reliable and robust solution. There are four core pillars of the MediaKind Engage solution:
1. Broadcast-quality framework. MediaKind Engage handles all the fundamentals needed to deliver a streaming service. It provides broadcast-grade quality and availability, achieving reliability at scale, including the peak loads needed to cover some of the largest events.
2. Operational excellence. The service is supported 24/7 by an expert operations team from MediaKind. Customers can choose the level of technical care they require to integrate with or enhance their own operations team as needed.
3. Continuous improvement. MediaKind’s ethos is to embrace modern, agile, and DevOps models and act as an extension of its customers’ engineering and product teams.
4. Reaching consumers. It provides a gateway service and experience that can be enriched and built upon. It enables users to focus on the specific areas that apply to their marketplace and customer base, providing a far better service to their consumers and a far better ROI on their content.
With MediaKind Engage, media organizations can reach more consumers by monetizing existing content, opening up more resources to enhance fan engagement.
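For a sense of scale, compounding the quoted growth rate is straightforward arithmetic (illustrative only; the research firm’s own 2028 projection may use a different base year or methodology):

```python
# Compound the quoted figures: USD 50.11bn in 2020 at a 21.0% CAGR.
def project(value_bn, cagr, years):
    """Compound-growth projection: value * (1 + cagr) ** years."""
    return value_bn * (1 + cagr) ** years

# Eight years of 21% growth from the 2020 base lands around USD 230bn.
size_2028 = project(50.11, 0.21, 8)
```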
NOMINEE Mo-Sys Engineering VP Pro XR Mo-Sys’ VP Pro XR, the first purpose-built Cinematic XR server solution on the market, takes a radical new approach to delivering cinematic standards to on-set real-time virtual production using LED volumes. Designed for LED stages with or without set extensions, it can also be used with blue/green screens and enables traditional shooting techniques within an LED volume, with a focus on composite image quality. Typically, directors and cinematographers make continuous calculations to make the real and virtual elements match in postproduction - a costly and repetitive process. A pioneer in providing a platform for graphics rendering as well as camera and lens tracking to create higher quality virtual productions, Mo-Sys has made it even easier to produce seamless, high-end productions, combining the power and capability of professional systems with the easy operation of prosumer solutions and with affordable scalability. VP Pro XR offers seamless set extensions, confers a minimal XR delay and includes unique capabilities such as Cinematic XR Focus (see below, right). VP Pro XR, which supports Epic Games’ Unreal Engine 4.27, offers innovative capabilities including: An industry first, Cinematic XR Focus enables seamless interaction between virtual and real worlds, turning LED walls into more than just a backdrop, and allowing them to integrate with the real stage to give cinematographers the means to seamlessly rack focus deep into the virtual world for layered images. This methodology saves both time and money by enabling shots to be combined in a way that they will be familiar with. Mo-Sys uses the same wireless lens control system commonly used in filmmaking and is compatible with Preston wireless lens controllers (Hand Unit 3 and MDR-3). 
The lens controller is synchronized with the output of the Unreal Engine graphics, working with Mo-Sys’ StarTracker camera tracking technology to constantly track the distance between the camera and the LED wall. NearTime® is a fresh and unique workflow for virtual production that allows cast and crew to see the full effect of the shot on-set in real-time and delivers a higher-quality version of the shot, completely automated, and in a timescale which
matches the practical requirements of the production – ‘near-time’. An HPA Engineering Excellence award winner, this cost-effective solution draws on proven Mo-Sys expertise in camera tracking and live compositing, delivering a complete system in partnership with the AWS Media and Entertainment team. NearTime solves one of the key challenges of LED ICVFX shoots: balancing Unreal image quality while maintaining real-time frame rates. Currently, every Unreal scene created for ICVFX has to be reduced in quality to guarantee real-time playback. Using NearTime with an LED ‘green frustum’, the same Unreal scene can be automatically re-rendered at higher quality or resolution and used to replace the original background Unreal scene. While this takes longer, no traditional postproduction costs or time are incurred, plus moiré issues can be avoided completely! VP Pro XR also features an Online Lens Library, giving users access to a wide selection of cost-efficient lens distortion calibration tools.
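Mo-Sys has not published the Cinematic XR Focus algorithm; the following is purely a toy interpretation of the handover idea - when the pulled focus distance passes the tracked camera-to-wall distance, focus responsibility moves from the physical lens into the virtual scene (the function name and return shape are invented):

```python
# Toy interpretation only - not Mo-Sys' actual algorithm.
def split_focus(focus_m, wall_m):
    """Decide whether a focus pull lands on the real stage or in the
    virtual scene behind the LED wall, given the tracked wall distance."""
    if focus_m <= wall_m:
        # Target is in front of the wall: the physical lens handles it.
        return {"lens_focus_m": focus_m, "virtual_focus_m": None}
    # Target is "inside" the wall: park the lens at the wall and defocus
    # the Unreal scene by the remaining virtual distance.
    return {"lens_focus_m": wall_m, "virtual_focus_m": focus_m - wall_m}
```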
NOMINEE Net Insight JPEG XS Available now, Nimbra JPEG XS applications are rolling out to support the delivery of some of the world’s biggest live sporting events. Net Insight’s Cloud and IP media platform combines the power of zero-compromise video compression with its industry-leading, open-standard media delivery technology. Net Insight has, in partnership with intoPIX, the leading provider of innovative compression technology, developed cutting-edge JPEG XS-compliant solutions. intoPIX’s TICO-XS is fully compliant with the new JPEG XS standard and delivers pristine image quality and imperceptible latency within a highly portable software application framework. The JPEG XS content production codec is a lightweight image coding system that processes video at the microsecond level with line-based latency. Based on real-world testing and compression grades in the range of 4:1 to 12:1, Net Insight’s JPEG XS applications deliver visually lossless video while reducing typical network resource consumption by 90%. As JPEG XS is designed to scale, it fully supports native processing of UHD-4K and UHD-8K content. JPEG XS may be used wherever uncompressed video is currently used, including live and distributed production, AV over IP, VR, AR and eSports. The media industry has embraced remote working and its enabling technologies. By leveraging JPEG XS, we as an industry can further reduce our resource utilization and the number of processing steps. Software-based processing enables content editing and production workflows to eliminate the need to jump back and forth between different uncompressed interface standards and different codec formats. This breakthrough in video workflow processing accelerates the transition to high-quality distributed content creation. No longer is there a need to accept heavily compressed, artefact-filled workflows as the compromise for moving to distributed production.
By integrating JPEG XS into its Cloud and IP media platform, Net Insight adds yet another processing option for content producers and service providers to harness. The same application acceleration platform offers virtualized processing across IP, SDI, and mixed format environments. Customers
can reuse the same acceleration hardware to perform both media and network processing. The application list includes the IP Media Trust Boundary, lossless Ethernet switching, IP WAN aggregation, MPEG-4, JPEG 2000, and JPEG XS. Reusing the same hardware and loading new software applications when needed allows Net Insight customers to plan further ahead and adapt to the media industry’s shift to IP, secure IT and cloud services. The addition of JPEG XS video compression to the Nimbra 600 and Nimbra 1000 platforms represents a breakthrough for media network operators and distributed production workflows. Compared to uncompressed video, the new solution delivers imperceptible delay, lossless image quality and a massive reduction in networking and compute resources. Combined with Net Insight’s open-standard media delivery technology, this gives content owners and producers the ability to innovate without the risk of vendor-specific limitations. The standards-compliant JPEG XS platform update introduces yet another innovation in distributed live content production. This is the beginning of an exciting journey to provide cutting-edge, zero-compromise compression technology that will revolutionize live video experiences while minimizing financial and environmental costs.
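The quoted 90% network saving is easy to sanity-check with back-of-envelope arithmetic for UHD-4K at 60 fps in 10-bit 4:2:2 sampling (figures are illustrative, not Net Insight measurements):

```python
# Uncompressed video bandwidth: width * height * fps * bits-per-pixel.
# 10-bit 4:2:2 averages 20 bits per pixel (10 luma + 5 + 5 chroma).
def uncompressed_gbps(width, height, fps, bits_per_pixel):
    return width * height * fps * bits_per_pixel / 1e9

raw = uncompressed_gbps(3840, 2160, 60, 20)  # roughly 10 Gb/s
for ratio in (4, 12):
    # The stated 4:1 to 12:1 JPEG XS grades bracket this range.
    print(f"{ratio}:1 -> {raw / ratio:.2f} Gb/s")
```

At a 10:1 grade the stream drops to about 1 Gb/s, which matches the "90% reduction" framing in the text.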
NOMINEE NewTek NewTek NC2 Studio Input/Output Module To combat climate change, our world and the industries that keep it moving must become sustainable. Supporting 12G-SDI and 10 Gigabit Ethernet connections, the NC2 Studio I/O Module breaks down barriers to visual storytelling by bridging traditional SDI equipment and infrastructure with flexible IP networks. This provides a sustainable solution, ensuring current equipment doesn’t need to be replaced. Instead, the technology can be upgraded to future-compatible standards like NDI® and 4K connectivity. Eight channels of flexible I/O - including media file playback and recording - and a 1 RU chassis are backed by the innovative NDI® 5. The all-new NC2 Studio I/O offers far more than conversion of source types: it connects multiple video and audio formats, including NDI®, SDI, and other IP formats. The NC2 Studio I/O Module also provides a unified interface, including selectable multi-viewers and professional video scopes, offering users greater control of their signals through built-in precision color correction. This feature set makes the module a perfect drop-in solution for interconnecting video signal types with a complement of audio formats, creating adaptable workflows through Dante™, AES67, and ASIO/WDM software audio drivers*. Management of each channel can be handled locally through integrated NDI KVM technology, or via a web-based API. Combined with the NC2 Studio I/O Module, NDI 5 creates a game-changing future for producers seeking to expand productions. The union removes the restrictions of productions bound to a physical location by offering efficient, secure, and reliable workflows globally. With the NC2 Studio I/O Module, it’s not just possible to transcode SDI to NDI and vice versa, but also to create NDI streams from SDI Fill + Key and share them with remote locations using NDI Bridge as the transport.
As many companies look to reduce their environmental footprint, they may look toward investing in new, more eco-friendly technology. The NewTek NC2 Studio Input/Output Module not only provides a unique and powerful solution for video producers but also allows end-users to keep existing equipment and upgrade the software for current needs.
Quality hardware should last a lifetime, and creating hardware that can meet the growing demands of visual storytellers through software updates is a valuable asset for time, money, and the environment. If producers don’t integrate sustainable methods into their workflows, not only will their costs continue to increase but their impact on the environment will also grow. Consumers and audiences are conscious of the impact of the companies they interact with, and broadcasters and visual storytellers who don’t work to incorporate sustainable technology into their workflows may see that influence in the form of audience loss. Providing end-users with the NewTek NC2 Studio I/O Module and other sustainable technology for their systems will keep their electronics in service for years to come – helping reduce carbon footprints and combat climate change. *Virtual sound-card drivers may require separately purchased licenses.
NOMINEE Salsa Sound MIXaiR 2.0 Today, fans at home want to enjoy a live sports experience that is even better than being at the stadium, with personalised and immersive sound complementing the graphics and multiple camera angles for great storytelling. Broadcasters are looking to put fans closer to the action on the pitch and closer to fans in the crowd, and sound plays an important role in this. Surging demand for more (and better) content means the sound engineer’s job is becoming harder. Even as new technologies and formats come to the fore, the work involves more time-intensive, manual processes. The average live premium sports production already involves creating over 16 mixes across different formats; anything more puts stress on an already stretched workflow – or requires an increased workforce. UK startup Salsa Sound’s newly released MIXaiR™ 2.0 answers these challenges with its AI-based live audio mixing platform - giving broadcasters and sports organisations a more robust, intelligent way to create multiple mixes automatically and deliver stunning immersive sound. The system has been trained with hundreds of hours of content from English Premier League and Championship games to learn what sounds make up a great mix. It automatically recognises and mixes the significant audio moments in a game and can even automate the process of keeping profanity from pitch-side mics out of the broadcast. Using AI to automate some of the more mundane tasks of audio mixing, MIXaiR lets sound supervisors craft a mix rather than chase it - giving better, more immersive, and more customisable experiences to audiences. MIXaiR is a pure software, cloud-ready, AI-based automated platform for live audio mixing. Built on the company’s patented AI technology, this solution allows sound engineers to automatically create the best possible mix using standard microphone set-ups.
This ‘mics in, mix out’ approach ensures that amazing, immersive audio is no longer the preserve of top-flight clubs or Tier 1 broadcasters. By making use of existing infrastructure, MIXaiR opens up the power of AI to smaller clubs, niche sports
and even applications outside premium live sports broadcasting to offer viewers a next-level experience. MIXaiR creates automated spatial audio mixes for the ultimate listening experience, whether over headphones or loudspeaker setups, giving fans access to the most immersive and enhanced experience. Unlike other mixing systems, it requires no additional tracking or manual operation. Taking audio feeds from existing broadcast microphones, MIXaiR 2.0 uses AI algorithms that automatically detect, mix in and enhance the on-pitch sounds, and even triangulates their location. As a result, the sound engineer can easily create engaging real-time mixes without the need for additional kit. Designed to speed up audio workflows and make life easier for sound teams, MIXaiR 2.0 automatically and simultaneously renders mixes to multiple formats and multiple language versions/crowd flavours (e.g., home/away), with each mix automatically made compliant with the requisite loudness standards required for linear broadcast, VOD, OTT or social platforms. With MIXaiR™ 2.0 content creators can create more for less and do it better, offering every viewer the ‘best seat in the house’!
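The loudness-compliance step mentioned above boils down to simple gain arithmetic per delivery target. The sketch below illustrates the idea: given a mix's measured integrated loudness in LUFS, compute the gain offset needed to hit each platform's target. The target figures are published standards or common practice (EBU R128 for linear broadcast); the function names are illustrative, not Salsa Sound's API.

```python
# Illustrative loudness-normalisation arithmetic; targets are standard
# figures, everything else is a made-up sketch, not MIXaiR internals.
LOUDNESS_TARGETS_LUFS = {
    "linear_broadcast": -23.0,  # EBU R128
    "ott_vod": -24.0,           # ATSC A/85-style target
    "social": -14.0,            # common streaming-platform practice
}

def gain_to_target(measured_lufs: float, platform: str) -> float:
    """Gain in dB to move a mix from its measured integrated
    loudness to the platform's target loudness."""
    return LOUDNESS_TARGETS_LUFS[platform] - measured_lufs

def render_compliant(measured_lufs: float) -> dict:
    """One measured mix -> per-platform gain offsets, applied
    simultaneously the way a multi-format renderer would."""
    return {p: round(gain_to_target(measured_lufs, p), 1)
            for p in LOUDNESS_TARGETS_LUFS}

offsets = render_compliant(-18.0)  # a mix measured at -18 LUFS
```

A mix measured at -18 LUFS would be attenuated by 5 dB for broadcast delivery but boosted by 4 dB for a social platform, which is why each format needs its own rendered output.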
NOMINEE Singular.live UNO by Singular.live Singular.live launched three years ago and has been constantly evolving ever since. As a cloud-native, browser-based platform, Singular has seen significant growth over the last 18 months, and this has helped our mission to democratise live graphics overlays, making them available to everyone. In response to the wide range of people who have signed up to our platform, this year we released a new set of dynamic, mobile-friendly templates called UNOs. These highly innovative templates are designed to help anyone, no matter their experience, enhance their live content with customisable graphic overlays. They have been built to broaden accessibility and open up professional tools to a much wider and more diverse audience. UNOs are designed to do one thing brilliantly. We have spent the last few months designing and refining a wide set of different UNO templates that we have made available completely free for the duration of 2021. Current templates include a soccer clock and score, a tennis score bug, a baseline flipper, upper bugs and themes specifically for graduations, corporate communications, news and sport. Most recently we have received Google approval for a new Add-on that will allow us to release several more UNOs that connect directly to Google Sheets. This will enable users to populate data into a Google Sheet, which can then be visualised directly in UNO graphics. We are also about to release a new set of UNOs that are integrated with Sportzcast, enabling anyone with a Sportzcast device to connect their scoreboard data directly into Singular overlays from their mobile phone or tablet. This will include American football, baseball and basketball. In this next release we will also include some UNOs that are integrated with a Bible API, enabling houses of worship to connect any specific chapter or verse, which will then automatically be shown in the Singular graphic.
Support for other religious texts will be released shortly after. UNOs, combined with our Singular For Good program that gives Professional-equivalent accounts to schools and nonprofit organizations completely free of charge, remove barriers to entry. We hope this will help bring the next generation of professionals into our industry. It is also a further demonstration of how we are changing live broadcast graphics. Singular is the only live graphics platform accredited by the BAFTA-affiliated albert consortium, meaning UNOs represent a more sustainable way of adding graphics to live content. In addition to making live graphics easy, sustainable, more affordable and accessible, we are working to deliver a better experience for everyone from content creators all the way through to the end viewers.
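The Google Sheets integration described above amounts to mapping spreadsheet rows onto overlay field updates. As a purely illustrative sketch (the payload shape and field names are assumptions, not Singular's actual data format), rows fetched from the Sheets API arrive as a list of lists with a header row, and each record becomes one set of key/value updates for a graphic:

```python
# Hypothetical mapping of sheet rows -> overlay field updates.
def rows_to_overlay_updates(rows):
    """First row is the header; each following row becomes one
    key/value payload for a graphics overlay update."""
    header, *records = rows
    return [dict(zip(header, record)) for record in records]

sheet = [
    ["home_team", "away_team", "home_score", "away_score"],
    ["Reds", "Blues", "2", "1"],
]
updates = rows_to_overlay_updates(sheet)
```

In practice the platform would poll or subscribe to the sheet and push each payload to the live overlay; the transformation itself stays this simple.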
NOMINEE SoftAtHome VASP Service Aggregator Platform SoftAtHome presents a unique solution running on any OS, device, and screen to provide one consistent experience for profiled home services such as linear TV, SVOD, universal search, smart home, Home LAN management and app store services: the SoftAtHome VASP Service Aggregator Platform. This cloud gateway to third-party service platforms offers all of an operator's services, such as unified search, targeted advertising, an app store, home LAN management and voice control. The cloud platform eases the deployment and long-term sustainability of new services in a complex multi-screen environment. One experience on any OS and any device: running on Linux, RDK, Android TV, webOS, Tizen, Fire TV OS, MS Windows and iOS/Android, and relying on SoftAtHome's unified player, the platform also includes ImpressioTV NextGen, a cost-effective solution delivered from the cloud to service providers. This solution provides aggregated video and IoT services on tablets, smartphones, smart TVs, HDMI dongles and set-top boxes. With its VASP Service Aggregator Platform, SoftAtHome enables operators to become super-aggregators of services and content in a multiscreen environment. Its modern UI is customer-centric: end-users can customise it themselves, selecting the services and apps they would like to see first. When operators want to deploy a service in several territories, they are confronted with a variety of broadband networks, whether ADSL, fibre or 4G TDD. SoftAtHome's system automatically adapts to the available bandwidth to always propose added-value IP services. This solution has been designed to deliver super-aggregation in a multiscreen environment. Compatible with Tizen, webOS, Android TV, RDK-V or Linux OS, with a unified player to adapt to OS specificities, ImpressioTV delivers services on tablets, smartphones, smart TVs, HDMI dongles and set-top boxes.
The solution is based on a cloud platform, VASP, which offers several services such as unified search, targeted advertising, an app store or voice control. The cloud platform eases the deployment of new services in a multi-screen environment.
Become a Super Aggregator with SoftAtHome
OPERATORS CAN PLAY A CENTRAL ROLE BY BECOMING SUPER-AGGREGATORS. In a dynamic video industry with the proliferation of new premium video streaming application providers and the associated investment of billions of Euros, SoftAtHome sees room for operators to position themselves as super video-aggregators, in order to take the lead in video content distribution and deliver an outstanding experience to their subscribers.
SoftAtHome enables operators to:
- bundle live TV and on-demand videos from major content owners (Netflix, Disney+, Amazon Prime Video, YouTube, etc.);
- provide access to a multitude of rich content;
- offer a voice experience to simplify content discovery;
- allow content suggestions based on user behaviour;
- enrich metadata with the power of the cloud;
- bring personalised content with built-in user privacy.

If you wish to attend a digital face-to-face demo session, please get in touch with us: contact@softathome.com
More information: www.softathome.com Linkedin/company/softathome @SoftAtHome
NOMINEE Synamedia Synamedia Video Quality as a Service One of the most important tasks for anyone involved in compression is controlling the video quality. This requires objective measurements, such as SSIM, PSNR and VMAF, alongside subjective viewing, where quality testers examine the end result on different types of screens. To date, these tools have forced users to sit in front of an analyzer system which produces in-depth details that are notoriously difficult to understand. But no longer. Realizing that quality managers needed an extremely simple-to-use yet powerful tool, Synamedia has developed Video Quality as a Service (VQaaS). Because it is cloud-based, VQaaS can be accessed from anywhere, freeing anyone on the quality team to look at compression from their sofa if they wish. Importantly, the results are designed to be shared with and understood by colleagues who have no knowledge of compression theory. For the first time, users can see the actual impact on the video stream alongside the theoretical objective measurement. That means that when there is a drop in quality, the user can zoom in and select the frames where it is happening. This has huge advantages. It makes it easy to understand the impact of the compression, as it correlates the objective score with the visible result, bringing the theoretical measurement to life. It also saves time, as users no longer have to watch hours of video, rewinding when they see a potential problem and then rewatching carefully to confirm it. It supports SD, HD, and all the way up to 8K. As the first tool that makes it easy to analyze the quality of 8K, VQaaS has the potential to play an important role in the growth of 8K content. Because it can compare the quality of a piece of video when it is created with different encoders or software versions, VQaaS can also be used to determine whether to change encoders or upgrade. Users can now easily optimize compression to meet their video quality preferences and requirements.
VQaaS also allows users to understand the impact of a lower bitrate or a different profile ladder, which can result in cost reductions on the network and transport side.
How does it work? VQaaS analyzes video files, with recordings taken directly from the encoder or from non-linear assets, including VOD or cloud-recorded files. Synamedia vDCM customers can record a sequence directly from a live stream. VQaaS automatically processes the files for both an objective analysis and a subjective view of the video. The strength of the tool is its simplicity: it is very easy and intuitive to select and scroll through the different objective measurements.
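Of the objective measurements named above, PSNR is the simplest to state: the log-ratio of the signal's peak value to the mean squared error between reference and encoded frames. The sketch below computes standard PSNR for frames given as flat pixel lists; it illustrates the metric itself, not VQaaS internals.

```python
import math

def psnr(reference, encoded, peak=255.0):
    """Standard PSNR in dB between two equal-sized frames given as
    flat lists of 8-bit pixel values. Higher is better; identical
    frames give infinity."""
    mse = sum((r - e) ** 2 for r, e in zip(reference, encoded)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak ** 2 / mse)

ref = [128] * 16          # a flat 4x4 grey frame
enc = ref.copy()
enc[0] = 138              # one pixel off by 10 after "compression"
score = psnr(ref, enc)    # roughly 40 dB
```

Computing a score like this per frame is what lets a tool draw a quality timeline, so a reviewer can jump straight to the frames where the score dips instead of watching hours of video.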
NOMINEE ThinkAnalytics ThinkAdvertising Advertising on TV has fallen far behind online digital advertising because of the lack of accurate targeting affinity attributes available. There is huge, latent demand from new and existing TV advertisers for rich user profiles that let them reach valuable, measurable and highly segmented TV audiences. ThinkAdvertising brings the personalisation we experience with digital advertising on the web to TV. Launched in September 2021, the new version of ThinkAdvertising is a critical component in the TV fight-back, driving increased advertising revenue opportunities and reducing costs and time-to-market. It enables a TV operator or OTT service to rapidly take advantage of highly predictive targeting attributes. In a TV industry first, ThinkAdvertising blends comprehensive, dynamic, first-party behavioural data with a broad set of enriched metadata for the ultimate in hyper-targeted audience segmentation. Advertisers can now reach engaged, hyper-targeted audiences with TV campaigns that deliver a fast ROI, while video service providers can boost revenues by capturing more advertising dollars. ThinkAdvertising breaks new ground with the ability to create valuable attributes not available from third-party data, including intent to purchase. By tracking viewer behaviour and providing a sensitivity score that gives advance warning of potential purchasing intent, the solution dynamically captures and builds hyper-targeted audience segments with a specific interest in a particular purchasing category at a given time. ThinkAdvertising Household Composition data provides advertisers with up-to-date information on the composition of the household, such as a family with a toddler and a teenager, or a single-person household, plus languages spoken, so that the correct audio track always accompanies the advert.
In another industry first, ThinkAdvertising supports the Internet Advertising Bureau (IAB) taxonomy used by media buyers for digital ad campaigns, making it easy for advertisers to buy and run cross-media campaigns that include TV ads. ThinkAdvertising automates the creation of valuable audience
segments using more than 160 IAB audience affinities, for example people interested in luxury cars. The enhanced ThinkAdvertising also cracks the problem of identifying and reaching those important but elusive "light" TV viewer categories that are like gold dust to many TV advertisers. Light viewers watch significantly fewer TV hours per week than the average person, which makes it harder for big brands to reach them with mass-market ad campaigns. ThinkAdvertising helps advertisers extend the reach of their campaigns to this group. Delivering a breakthrough in consumer profiling and predictive behavioural analysis through a powerful combination of AI, information science and data science, ThinkAdvertising is kickstarting a new era of data-driven TV advertising. The results speak for themselves: ThinkAdvertising has been proven to boost viewer engagement with ads, thanks to improved ad relevancy within its hyper-targeted affinity groups. A cloud-based service offering rapid, cost-effective implementation, ThinkAdvertising is available as part of the Think360 suite or as a standalone solution. It can be easily integrated with other analytics platforms and ad decision services to support a new generation of transformative TV ad campaigns.
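The core idea of an affinity segment can be shown with a toy model. The sketch below counts how often a household watches content tagged with each IAB-style category and flags categories that cross a share-of-viewing threshold. The category labels and the 25% threshold are illustrative assumptions, not ThinkAnalytics' actual model, which combines many more behavioural signals.

```python
# Toy affinity segmentation; labels and threshold are made up.
def affinity_segments(viewing_log, threshold=0.25):
    """viewing_log: list of category tags, one per programme watched.
    Returns the categories forming at least `threshold` of viewing."""
    total = len(viewing_log)
    counts = {}
    for tag in viewing_log:
        counts[tag] = counts.get(tag, 0) + 1
    return sorted(tag for tag, n in counts.items() if n / total >= threshold)

log = ["Automotive/Luxury Cars", "Sports", "Automotive/Luxury Cars",
       "News", "Automotive/Luxury Cars", "Sports", "News", "Travel"]
segments = affinity_segments(log)   # categories >= 25% of viewing
```

Here the household qualifies for the luxury-cars affinity (3 of 8 programmes) but not travel (1 of 8), which is the kind of attribute an ad decision system would then buy against.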
NOMINEE Zero Density TRAXIS talentS TRAXIS talentS is an industry-first AI-powered markerless stereoscopic talent tracking system that can identify people inside the 3D virtual environment without any wearables. Deploying talentS makes the beacons or wearables that talents had to wear to track their movement obsolete, giving more flexibility and ease of movement in virtual studios. With cutting-edge AI algorithms and the power of Nvidia's GPU Tensor Cores, talentS extracts the talent's 3D location from the image with utmost precision. It sends the tracking data to engines to create accurate reflections, refractions, and virtual shadows of the talent inside the 3D space. Broadcasters and studio operators can enjoy hyper-realism in their virtual studio and augmented reality productions, with a perfect merge of the virtual and the physical. Designed by live production experts, talentS works 24/7 without interruption. It sends data through the industry-standard FreeD protocol and integrates with the Reality ecosystem and any other FreeD-speaking platform out of the box. Moreover, TRAXIS talentS can be used in applications beyond virtual studios, such as augmented reality in sports and live events. For example, with talentS, AR statistics graphics can be anchored above a boxer during a live bout. It can also enable robotic lights to automatically track a specific dancer during a live performance. Zero Density's disruptive approach to talent tracking unlocks a new level of freedom inside the virtual space and more. It frees individuals from external wearables, items, and beacons. Zero Density places the talentS system at the heart of the innovation and the future of live interactive production by harnessing the power of machine learning and taking advantage of advancements in GPU technology.
Additional Benefits:
- Easy setup and calibration: talentS comes as a pre-calibrated and preinstalled system. Going live with talentS takes only minutes after unboxing.
- Mission-critical hardware: live broadcast demands reliable systems. talentS works continuously with utmost precision until its settings are updated. It can also send data at every frame rate required by broadcasters or the production crew, from 24 to 59.94.
- Optional optics: talentS comes with special optic filters that help remove physical reflections that cause false detections.
- Hang it anywhere: talentS is a lightweight system that can be installed on a truss or mounted on a tripod. It comes with the necessary mounting kit and safety cables to assist in safe installation.
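The FreeD protocol mentioned above carries pose data in a compact fixed-size packet. The sketch below builds a simplified "type D1" packet following commonly published descriptions of FreeD (angles scaled by 32768, positions in millimetres scaled by 64, a (0x40 - sum) checksum, 29 bytes total). Treat it as illustrative only, not a validated implementation of the specification or of talentS output.

```python
# Simplified FreeD-style D1 packet; layout and scalings per commonly
# published descriptions, hedged as an approximation of the spec.
def _i24(value: int) -> bytes:
    """24-bit big-endian two's-complement field."""
    return value.to_bytes(3, "big", signed=True)

def freed_d1(camera_id, pan_deg, tilt_deg, roll_deg, x_mm, y_mm, z_mm,
             zoom=0, focus=0):
    body = bytes([0xD1, camera_id])          # message type, camera ID
    body += _i24(round(pan_deg * 32768))     # angles: degrees x 32768
    body += _i24(round(tilt_deg * 32768))
    body += _i24(round(roll_deg * 32768))
    body += _i24(round(x_mm * 64))           # positions: mm x 64
    body += _i24(round(y_mm * 64))
    body += _i24(round(z_mm * 64))
    body += _i24(zoom) + _i24(focus)
    body += b"\x00\x00"                      # spare / user bytes
    checksum = (0x40 - sum(body)) % 256      # sum of packet == 0x40 mod 256
    return body + bytes([checksum])

packet = freed_d1(1, pan_deg=12.5, tilt_deg=-3.0, roll_deg=0.0,
                  x_mm=1500, y_mm=-250, z_mm=1800)
```

Because any receiver speaking FreeD only sees packets of this shape, a tracker can feed the same data to a render engine, a robotic light, or an AR graphics system interchangeably.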
NOMINEE
Adobe Multi-Frame Rendering in Adobe After Effects Time and efficiency are of the essence when creating content for TV, film and entertainment. With the acceleration of content production, motion designers and video editors are under tremendous pressure to construct and deliver quality content under strict time constraints. As a result, precious time that could be reserved for creativity is often sacrificed, so these artists need tools that let them keep their creative control while stripping away the time-intensive tasks that distract and delay. Realizing the need for greater efficiency, Adobe launched new Multi-Frame Rendering capabilities in Adobe After Effects, the industry's leading motion graphics tool, to help minimize the time spent rendering and exporting video content. This helps post-production professionals keep their focus on the creative process of motion graphics work while still meeting and exceeding deadlines. After Effects' Multi-Frame Rendering accelerates the tedious tasks that come with exporting projects by taking advantage of the full power of your system's CPU cores when previewing and rendering. What's more, the new tool is accompanied by other features, such as the Composition Profiler, Speculative Preview and a reimagined Render Queue, that take advantage of Multi-Frame Rendering to further speed up workflows while also simplifying the export process. The Composition Profiler allows video editors to view which layers and effects in a composition are taking the most time to render relative to other layers and effects, while the reimagined Render Queue shows the average frame rendering time and the number of concurrent frames rendering. This new
feature includes a new progress bar with three colors: exported frames in blue, frames ready to be exported or already cached in dark green, and frames currently rendering in light green. To further complement the Composition Profiler and Render Queue, the Speculative Preview feature renders active compositions while the application is idle. This means users have the freedom to work on other items while leaving After Effects idly open, without fear of losing any progress when rendering their project. By simplifying workflows, raising the standard and speed of work, and increasing time spent on creative rather than tedious tasks, Multi-Frame Rendering notably improves the efficiency and overall experience of After Effects.
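The scheduling idea behind multi-frame rendering can be sketched in a few lines: instead of rendering frames one at a time, a pool of workers renders several concurrently and the results are reassembled in frame order. This is a conceptual illustration only, not Adobe's implementation; `render_frame` stands in for real compositing work.

```python
# Conceptual sketch of multi-frame scheduling across a worker pool.
from concurrent.futures import ThreadPoolExecutor

def render_frame(index: int) -> str:
    # Placeholder for per-frame compositing/effects work.
    return f"frame_{index:04d}"

def render_sequence(frame_count: int, workers: int = 4) -> list:
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves frame order even though individual frames
        # may finish out of order across workers.
        return list(pool.map(render_frame, range(frame_count)))

frames = render_sequence(8)
```

A real renderer would use OS processes or native threads pinned to physical cores, but the essential property is the same: output order is deterministic while execution is concurrent.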
NOMINEE AJA Video Systems BRIDGE LIVE v1.12 AJA recently introduced BRIDGE LIVE v1.12, a new software update for the turnkey multi-channel live video solution for remote production, contribution, collaboration, streaming, and delivery. BRIDGE LIVE is a powerful streaming solution that makes it easy to move UltraHD or multi-channel HD video between uncompressed baseband SDI and a wide range of streaming and contribution codecs, including H.264 (AVC), H.265 (HEVC) and H.262 (MPEG-2 TS), as well as an option for JPEG 2000. In response to the growing adoption of IP protocols, BRIDGE LIVE v1.12 introduces bi-directional NDI® (Network Device Interface) input, output and transcode, in addition to HLS output, video preview, and user interface updates for more intuitive configuration. Whether facilitating remote production, two-way interviews, live event streaming, multi-cam backhaul, field contribution, program return, confidence monitoring, collaborative production, or ABR ladder profiles for hand-off to OTT packaging, BRIDGE LIVE v1.12 delivers powerful new functionality. Bringing bi-directional NDI I/O and HLS output to BRIDGE LIVE offers professionals broader hardware and software integration for streaming workflows and a simpler, cost-effective alternative to deploying large teams of personnel and resources at remote locations. New bi-directional NDI support makes it easy to encode SDI inputs for NDI output to the network and/or to receive NDI for output as SDI. The ability to also transcode IP video streams to NDI and/or transcode NDI inputs to IP video streams enables a host of new workflow possibilities. For example, BRIDGE LIVE can now sit at the edge of an NDI event or facility network, enabling professionals to transport outbound NDI video as a streamable format, and/or return the stream to NDI for use at a remote NDI production destination.
Additional BRIDGE LIVE v1.12 feature highlights include:
- Bi-directional NDI-SDI conversion:
  - Receive NDI and decode to SDI
  - Input and encode SDI to NDI
- Bi-directional NDI-IP Video Streams conversion:
  - Receive NDI and transcode to IP video streams (i.e., H.265, H.264)
  - Receive IP video streams (i.e., H.265, H.264) and transcode to NDI
  - Integrate remote NDI and non-NDI equipment/facilities via RTP/UDP/SRT
  - Tap directly into the NDI network and provide a conduit to CDNs or other delivery mechanisms
- HLS Output (HD): input SDI sources and IP video streams into BRIDGE LIVE and encode them for sharing to widely used devices and software, or for remote screening on iOS and iPadOS devices via HLS
- Video Preview: with support for Video Preview thumbnails now available in the BRIDGE LIVE GUI for SDI inputs, receive visual confirmation of correct SDI input/content encoding, even if unable to "go live" to check the content for security
- UX improvements: more intuitive pipeline configuration and options such as "start detecting input" and "set as input" buttons, and enhanced responsiveness
- A new factory reset method: an alternative, rapid way to access factory reset directly from the boot menu

For more information, visit: https://www.aja.com/products/bridge-live
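An ABR ladder handed off for OTT packaging ultimately surfaces to players as an HLS master playlist (RFC 8216). The sketch below generates a minimal master playlist for a three-rung ladder; the bitrates, resolutions and URIs are illustrative values, not BRIDGE LIVE output.

```python
# Minimal HLS master playlist for an illustrative ABR ladder.
LADDER = [
    # (bandwidth bits/s, resolution, variant playlist URI)
    (5_000_000, "1920x1080", "1080p/index.m3u8"),
    (3_000_000, "1280x720",  "720p/index.m3u8"),
    (1_200_000, "854x480",   "480p/index.m3u8"),
]

def master_playlist(ladder) -> str:
    lines = ["#EXTM3U", "#EXT-X-VERSION:3"]
    for bandwidth, resolution, uri in ladder:
        lines.append(
            f"#EXT-X-STREAM-INF:BANDWIDTH={bandwidth},RESOLUTION={resolution}")
        lines.append(uri)
    return "\n".join(lines) + "\n"

playlist = master_playlist(LADDER)
```

A player fetches this playlist first, then picks the variant whose BANDWIDTH best fits the measured connection, which is exactly why an encoder produces the whole ladder rather than one stream.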
NOMINEE Boland Monitors X4K31HDR5-OLED The largest of Boland's new X-4K OLED monitor series, this 31" reference-grade model features a true 10-bit panel and processor, with a dynamic 1,000,000:1 contrast ratio that guarantees ultra-deep black levels. 4K signal is delivered via 12G and 3G SDI (single or quad link), HDMI 2.0, and SFP (SMPTE ST 2110) inputs. The next-generation X4K31HDR5-OLED offers numerous scopes
and audio meters, 3D LUTs, time code, markers, and multiple aspect functionality. All firmware updates are completed in-field using USB, and all X-4K Series models include VESA mount holes on the rear in addition to a desktop stand. Also available in 21” and 27” sizes.
NOMINEE Canon Canon Cine-Servo 25-250mm The CINE-SERVO 25-250mm T2.95-3.95 (CN10x25 IAS S) is a new CINE-SERVO cinema lens designed for use with 4K cameras. The lens provides cinematographers and broadcast operators with a compact, lightweight design (only 6.7 lbs.) using Canon optical elements, while offering outstanding performance and reliability in professional shooting environments. The new CINE-SERVO 25-250mm lens adds a great deal of versatility to the CINE-SERVO lens family. The new lens, which is available in both EF and PL mount, features 10x optical zoom, a built-in 1.5x extender and a powerful, removable servo motor drive unit, providing broadcast-friendly shoulder operation for ENG/EFP and documentary-style shooters. The lightweight design of the lens (6.7 lbs.) is remarkable given the zoom range and feature set, especially when compared with other lenses of similar focal lengths. This outstanding model, as well as the Sumire Prime lenses announced in 2019, has further strengthened Canon's robust lineup of cinema lenses. The new lens is fully 4K-ready, with high optical resolution and support for Super 35mm large-format cameras. An 11-blade aperture diaphragm helps ensure an artistic and beautiful representation of out-of-focus areas. The lens also features high 10x zoom magnification, a wide focal length range of 25mm to 250mm, and a 180° smooth-rotating focus ring. Acknowledging that broadcasters often need to control zoom, focus and iris/aperture in different ways than filmmakers, Canon has developed this zoom lens with full external servo control for drama, documentary and other broadcast productions. Similar to the existing award-winning 17-120mm and 50-1000mm lenses, the CINE-SERVO 25-250mm offers outstanding 4K optical performance thanks to its ultra-low dispersion glass and a large-diameter aspherical lens.
Combined with Canon’s unique optical design technology, these components work to help correct color fringing and limit chromatic aberration during operation. The lens features Canon’s renowned warm color science and an 11-blade aperture that produces a beautiful, smooth bokeh.
The new lens is ideal for cameras with a Super 35mm sensor. While the 10x zoom covers a focal range of 25-250mm, the built-in extender stretches that range to an impressive 375mm with an added benefit of allowing for full-frame sensor coverage with only a stop of difference in light loss. The servo drive unit included with the 25-250mm lens can be easily removed to allow for manual operation, and the gear pitch is compatible with standard cinema controls of zoom and focus. The EF mount version of the lens allows for the utilization of Canon’s proprietary Dual Pixel CMOS AF, which provides users with smooth AF operation and high-speed tracking performance, and the PL mount version supports Cooke/i Technology. In addition, like the 17-120mm, the 25-250mm lens also features a macro function to enable close-up shooting.
NOMINEE Canon Canon EOS C70 The EOS C70 4K Digital Cinema Camera is Canon's first-ever RF-mount Cinema EOS camera. The unique design of the EOS C70 puts a significant emphasis on operational convenience for the end-user. The small form factor, weighing only 2.6 lbs., allows the camera to be easily handheld and dramatically enhances a videographer's mobility, providing a seamless bridge between the EOS and Cinema EOS families for cinematic applications. Cleverly designed, the camera features a slim, motorized ND filter unit, with a mere 6mm depth, that is built into the short flange back of the RF mount. The motorized 10-stop ND filter provides users with the flexibility to control exposure while keeping the desired depth of field and capturing images that feature the desired level of bokeh. Thirteen customizable buttons allow users to select from more than 80 functions to be assigned based on individual preferences. The camera features Canon's innovative, next-generation Super 35mm DGO sensor, which further extends the high dynamic range and lowers noise levels by reading out each photodiode with two different gains. One gain prioritizes saturation, protecting detail in highlight areas, while the other suppresses noise in the shadows. The result is an image with up to 16-plus stops of total dynamic range, clean, rich shadows, and vibrant highlights in up to 4K/60p, or 2K/120p in Super 16mm crop mode. The EOS C70 also features Canon's recently developed DIGIC DV7 image processor, which collects the extensive information captured from the DGO sensor and processes it into exceptional HDR imagery, while offering choices between Canon Log 2 and 3 in addition to PQ and HLG gamma functionality. The C70 can also record 4K DCI or UHD up to 120fps and 2K DCI or HD up to 180fps, with important flexibility in the choice of codecs. The camera supports the XF-AVC format (in variable bit-rate), both Intra and Long GOP, with the MXF file format.
The intraframe format compresses the data after analyzing each frame separately, while Long GOP format compresses data at a higher rate, creating an even smaller file size. A secondary choice is Long GOP 10-bit 4:2:2/4:2:0 MP4/HEVC (a next-generation HDR video recording compression standard) with an MP4 file format
– a first in the Cinema EOS line. The camera’s independent air intake system is separated from the electrical systems to protect the sensor from water, sand, and dust. In addition, the camera also features two air outlet vents that allow uninterrupted recording for extended periods of time. The Canon EOS C70 camera was designed to satisfy and delight a variety of users on the search for a high-powered piece of video equipment. Versatility is key in a world of fast-moving filmmaking and content creation, and the EOS C70 provides a familiar form and feature set to a wide spectrum of imaging customers.
NOMINEE Canon Canon DP-V3120 Reference Display Canon's 31-inch DP-V3120 4K Reference Display is designed to meet the unique set of challenges that come with HDR production. Offering stunning image quality with industry-leading 2,000 cd/m² high luminance, a 2,000,000:1 contrast ratio, exceptional accuracy and consistency, a wide colour gamut, and extensive HDR monitoring assist functions, the DP-V3120 is the perfect reference display for professionals creating stunning high dynamic range content. The DP-V3120 delivers industry-leading 2,000 cd/m² full-screen brightness, supported by Canon's newly developed cutting-edge backlight system. This system includes highly efficient LEDs, with a precise LED control algorithm and advanced image processing, enabling the display to deliver a minimum black of 0.001 cd/m² and an outstanding 2,000,000:1 contrast ratio for accurate reproduction of shadow details and bright highlights. This backlight system incorporates a newly designed cooling mechanism, allowing the display to achieve high brightness continuously with quiet operation, making it an ideal tool for a grading suite. Additionally, Canon's innovative backlight system equips sensors across the entire unit, and intelligent auto-correction technology enables the display to sustain image accuracy during operation. The DP-V3120 exceeds the Dolby Vision required monitor specifications, including General Monitor Specifications and Grey Scale Reproduction, in order to meet the requirements of a Dolby Vision-certified post-production facility. With this achievement, Canon further proves its ability to support the efficient production of high-quality HDR visual content and meet the various needs of content production workflows.
Addressing the demand for excellence and efficiency in 4K HDR production workflows, the DP-V3120 features a range of advanced HDR monitoring functions that visualize HDR signal parameters such as HDR reference white, signal levels and image brightness for accurate signal optimization. Equipped with a 12G-SDI interface, the DP-V3120 can support a 4K image via a single SDI cable. With its four 12G-SDI interfaces, the display can handle four different 4K signal inputs, providing a four-screen split view, or can switch to a single 4K view, alternating the desired input. In addition, its four 12G-SDI interface terminals enable the handling of a single 8K signal input. The DP-V3120 also supports the latest Video Payload ID to identify the signal's transfer characteristics, and an Auto Setting function provides the ability to switch the display's Picture Mode settings automatically. The display can also be remotely controlled over a LAN connection, enabling access to full menu controls and settings, linking to other monitors, and access to display settings and signal information. Canon's 31-inch DP-V3120 4K Reference Display is engineered to provide the highest level of image quality and versatility for demanding professionals in the cinema and broadcast industries. Canon Reference Displays feature the built-in HDR Toolkit, which was awarded the Hollywood Professional Association's 2018 Engineering Excellence Award. These tools help to ensure a finished product that delivers beautiful and vivid HDR imagery.
NOMINEE Digital Nirvana MetadataIQ MetadataIQ offers off-the-shelf integration with Avid Interplay™ to automate the end-to-end process of generating speech-to-text and video intelligence metadata for Avid-based assets. The application uses advanced machine learning and AI-based content analysis to accelerate metadata generation. The result is better-structured, more detailed, and more accurate metadata and shorter content delivery cycles. Through this powerful, integrated video intelligence, MetadataIQ can provide logo detection, face recognition, object identification, and shot-change identification. MetadataIQ also integrates directly with Digital Nirvana's Trance platform to generate transcripts, captions, and translations in all industry-supported formats. Operators can automatically submit media for processing from within the existing workflow and can either receive the output as sidecar files or ingest it directly into Interplay as markers. They can also create and ingest different kinds of metadata, including speech-to-text, facial recognition, OCR, logos, and objects, each with customizable markers based on duration and color. And they can access all of it through Avid MediaCentral™. Editors simply type a search term within Interplay or MediaCentral, identify the relevant clip, and create content. For VOD and content repurposing, video intelligence metadata aids in product placement/replacement and in accurately identifying ad spots. MetadataIQ automatically extracts the original asset and creates a low-res proxy or audio-only version of the actual media file from the Avid watch folder. Upon completion of the automatic metadata generation, the metadata is returned to the on-premises application, which ingests it back into Avid Interplay as locators.
The low-res proxy files transcoded on-premises are transferred to the cloud, where they are temporarily stored in encrypted storage only as long as they are required to process the job, and then erased. From there they are converted into the formats required for submission to the Avid MAM, shared folders, or other systems based on customer-specific requirements.
MetadataIQ is a new metadata automation tool for content producers using the Avid media platform. A secure and scalable SaaS solution, MetadataIQ ensures that generating and ingesting pertinent metadata as timecoded markers into Avid is 100% automated. The platform is the first to offer on-premises transcoding and intelligent extraction of audio files to generate speech-to-text transcripts. Users aren't required to create a low-res proxy or manually import files into Avid MediaCentral. MetadataIQ automatically generates speech-to-text transcripts for file-based assets, in addition to streaming speech-to-text transcripts from growing audio assets in real time. Operators have the option of sending transcripts to Digital Nirvana's processing centers for high-quality, human-curated output, which is returned within Interplay. By completely automating the generation and ingestion of relevant metadata as locators into Avid, MetadataIQ helps editors accurately identify relevant content to save time and effort. In fact, users have reported that the process of creating new content has been reduced from 15 hours to just two. In addition, the platform replaces several traditional manual processes, from creating low-res proxies to submitting files to third parties for transcripts, captions, and translations, to increase the efficiency of production, preproduction, and live content creation.
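To make the marker workflow described above concrete, here is a minimal sketch of turning speech-to-text segments into timecoded marker records. The `Segment` structure, field names, and 25 fps frame rate are assumptions for illustration, not MetadataIQ's or Avid's actual data model.

```python
from dataclasses import dataclass

FPS = 25  # assumed project frame rate for this illustration

@dataclass
class Segment:
    start_s: float  # segment start, in seconds
    end_s: float
    text: str

def to_timecode(seconds: float, fps: int = FPS) -> str:
    """Convert seconds to an HH:MM:SS:FF timecode string."""
    total_frames = round(seconds * fps)
    ff = total_frames % fps
    ss = (total_frames // fps) % 60
    mm = (total_frames // (fps * 60)) % 60
    hh = total_frames // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

def segments_to_markers(segments, color="blue", track="V1"):
    """Turn transcript segments into timecoded marker records
    of the kind an NLE could ingest as locators."""
    return [
        {"timecode": to_timecode(s.start_s), "comment": s.text,
         "color": color, "track": track}
        for s in segments
    ]
```

In a real integration, records like these would be delivered as sidecar files or pushed through the platform's ingest path rather than built by hand.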
NOMINEE GeoComply GeoGuard The geo-piracy challenge Streaming services have seen record subscriber growth recently. However, this surge in viewing has an evil twin: an increase in VPN and proxy usage, as casual and for-profit pirates spoof their location to bypass territorial restrictions and illegally access content. Without effective detection and blocking of geo-piracy, OTT providers suffer revenue leakage and violate the contractual territorial licensing agreements they hold with rights owners. The GeoGuard difference Fortunately, there's an easy and affordable way to counter this threat. GeoComply's highly accurate VPN and proxy detection solution, GeoGuard, is used by leading broadcasters and streaming platforms, including the BBC, beIN Media and Amazon Prime Video, to combat geo-piracy. New functionality Increasingly, VPN providers are using more advanced techniques to bypass detection: hijacked residential IPs and targeted proxy-over-VPN attacks. In 2021, GeoComply countered these new threats with a major upgrade to GeoGuard, including advanced algorithms and new detection processes that enable online broadcasters and OTT services to stop these highly sophisticated forms of geo-piracy. Geo-piracy isn't new, but for too long the industry has accepted efficacy rates of around 70 percent from VPN and DNS proxy detection solutions. Allowing so much illegal access to slip through the net is unsustainable, given the rise in value and volume of premium content on streaming platforms. The enhanced GeoGuard is the industry's only solution to be independently rated by Kingsmead Security as 99.6 percent effective in detecting VPNs and DNS proxies. GeoGuard's effectiveness and low false-positive rate, thanks to frequent updates, have earned it the trust of Hollywood and major sports leagues. Benefits Dynamically tracks and flags compromised residential IPs.
Free VPN solutions and other malicious apps hijack domestic IP addresses from compromised devices and resell them to premium VPN providers. This enables users to appear as though they are legitimate viewers in the territory of their choosing. Solves the growing and ever-evolving threat from proxy-over-VPN attacks, an industry first. One OTT customer successfully blocked 87 percent of proxy-over-VPN attacks with GeoGuard, compared to only one percent previously. Reduces operational and infrastructure costs by removing the need to support illegal users. One customer started saving $500K a year on its CDN bill after implementing GeoGuard. IPv6 detection allows OTT broadcasters to remain fully compliant with their contractual obligations while continuing to allow users to access their services on IPv6-only connections. Reduces credential sharing and fraud, e.g., the sharing of a streaming password among family and friends. Using GeoGuard, one customer reduced credential sharing and fraud on its service by 66 percent. GeoComply has optimized the integration of GeoGuard with the two main CDNs (content delivery networks) for video streaming: Akamai and Amazon CloudFront. This enhancement allows streaming services to simply "turn on" VPN and proxy detection, giving them additional security through the detection of hijacked residential IPs and proxy-over-VPN attacks.
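As a generic illustration of how a streaming service might gate playback on a VPN/proxy detection feed, the sketch below checks a client IP against a small flagged-IP table. The table contents, score field, and threshold are invented for the example (the IPs are from documentation ranges); GeoGuard's actual data feed and integration API are not shown here.

```python
# Hypothetical detection feed: IPs flagged as VPN/proxy exits or
# hijacked residential addresses, each with a confidence score 0..1.
FLAGGED = {
    "203.0.113.7":  {"type": "datacenter_vpn", "score": 0.99},
    "198.51.100.4": {"type": "hijacked_residential", "score": 0.93},
}

def admit(ip: str, threshold: float = 0.9) -> bool:
    """Allow playback unless the IP is flagged above the threshold."""
    entry = FLAGGED.get(ip)
    return entry is None or entry["score"] < threshold
```

A threshold parameter like this is one way a service could trade false positives (blocking legitimate viewers) against leakage, per territory or per rights deal.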
NOMINEE Interra Systems BATON® Captions Captions have long been mandated by all major broadcasters, and now, with the rise in global consumption of online content, captions and subtitles give television viewers an easy way to watch and comprehend foreign-language content. However, captions can be tedious and expensive to produce, and when issues arise during transitions in the file-based workflow, caption delivery becomes more complex. Given the massive amount of content being created today, broadcasters and media companies need an efficient way to create high-quality captions, which are legally required in many regions of the world. BATON Captions is a new addition to Interra Systems' comprehensive automated QC platform, BATON, that simplifies this process and improves workflow efficiency using ML and automatic speech recognition technology. With this solution, broadcasters and media companies can ensure that when content is delivered at multiple video quality levels within OTT video streams, the captions maintain high quality. BATON Captions enables users to address all of their captioning needs, from caption generation to QC, auto corrections, review, and editing. Easily integrated with third-party tools, the application comes with a feature-rich review and editing platform with frame-accurate playback options, supporting a host of subtitle and closed caption formats so broadcasters and other media professionals can deliver content on a global scale with ease. Using this high-performance solution, broadcasters and media companies can dramatically expedite the caption creation and verification processes for both live and VOD content. BATON Captions tackles a critical industry challenge: how to
generate and distribute a high volume of content while assuring high-quality captions. What makes BATON Captions unique is its industry-leading performance and technology innovation. Through ML and state-of-the-art speech recognition technology, BATON Captions dramatically expedites the caption creation and verification processes for both live and VOD content. In doing so, it ultimately helps drive the globalization of content for broadcasters and other media professionals.
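To illustrate the kind of automated checks a caption QC stage performs, here is a minimal sketch that flags three common cue problems: excessive reading speed, cues that are too short, and overlapping cues. The thresholds (20 characters per second, one-second minimum duration) are illustrative defaults, not BATON's actual rules.

```python
def caption_issues(captions, max_cps=20.0, min_dur=1.0):
    """Flag common caption QC problems.

    Each caption is (start_s, end_s, text), sorted by start time.
    Returns a list of (caption_index, issue_name) pairs.
    """
    issues = []
    for i, (start, end, text) in enumerate(captions):
        dur = end - start
        if dur < min_dur:
            issues.append((i, "too_short"))        # cue shorter than minimum
        elif len(text) / dur > max_cps:
            issues.append((i, "reading_speed"))    # too many chars per second
        if i and start < captions[i - 1][1]:
            issues.append((i, "overlap"))          # starts before previous ends
    return issues
```

A production QC tool would add many more checks (positioning, format conformance, shot-change alignment), but the pattern of scanning cues against per-cue and inter-cue constraints is the same.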
NOMINEE JW Player STUDIO DRM In Q4 of 2021, JW Player, a leading video software and data insights platform, unveiled its enhanced STUDIO DRM (formerly VUDRM) solution, part of the integration of VUALTO into the JW Player platform to create the industry's most powerful platform for video orchestration and encryption. STUDIO DRM is a multi-DRM solution that makes content protection easy for broadcasters, sports OTT platforms and other premium content rights holders. It is highly scalable and uniquely flexible, allowing content owners to request DRM encryption keys on the fly. STUDIO DRM also supports the latest content protection standards, including CPIX, CMAF and CBCS. The solution has long been a leader in DRM innovation and was one of the first to implement support for PlayReady, Widevine and FairPlay, as well as ABR streaming with DRM encryption. Customer success: As a start-up in 2015, STARZPLAY, a regional streaming platform for the Middle East & North Africa serving over 1.8m subscribers, was looking for an economical, unified API integration that would save both costs and time spent on deployments. After testing solutions from multiple DRM providers, the organization turned to JW Player and its digital rights management solution. Aside from the much-needed update to STARZPLAY's DRM setup, the streaming provider was immediately impressed by the support offered by the technical team, with a clear solution presented to meet the organization's strategic requirements. With one of the first companies to offer a multi-DRM service, backed by a wealth of experience in the sector, STARZPLAY trusted the new setup with a unified API. During the significant traffic spikes experienced during lockdowns due to the COVID-19 pandemic in 2020, broadcasters were able to rely on fully integrated, scalable, and resilient content protection with STUDIO DRM.
Along with a single API for a multi-DRM managed service, STARZPLAY has benefited from massively improved uptimes compared to previous years, with 100% uptime in 2020. STARZPLAY saw increases of 50% in streaming hours per unique user, meaning unprecedented levels of consumer demand had to be met. With scalable infrastructure using a Kubernetes
container orchestration system on AWS, STARZPLAY was able to continue securely delivering low-latency live and video-on-demand (VOD) content to its audience throughout the UAE and Saudi Arabia, across multiple devices, retaining complete control of who watches its content and when. In 2020, STARZPLAY also began providing a technology solution for an OTT service in India, and this solution incorporates STUDIO DRM. "Wherever our business goes, the embedded VUDRM goes along with it. The partnership we have with JW Player is an established part of our tech stack, bringing us go-to-market speeds. I look forward to working closely with JW Player in the coming years to expand our offerings in other regions, helping to solidify our strong position in an ever-growing market," said Faraz Arshad, Chief Technology Officer at STARZPLAY.
NOMINEE
OWC Mercury Helios 3S + U.2 NVMe Interchange System Bundle For media and entertainment professionals seeking the fastest performance from U.2 NVMe SSDs, with easy drive-swap convenience in a protective, transportable carrier. Speed. Security. Savings. The holy trinity of data storage requirements in the production-to-lab workflow is transformed by the OWC U.2 NVMe Interchange System and its application with the OWC Mercury Helios 3S. By combining a locking drive bay and removable tray, the OWC U.2 NVMe Interchange System turns the Mercury Helios 3S PCIe expansion chassis into a time- and money-saving swappable U.2 NVMe SSD storage solution for high-performance film production requirements.
NOMINEE Prime Focus Technologies CLEAR Vision Cloud – AI-powered multi-frame-rate conformance Many companies in some geographies, like the Americas, create content at 29.97 or 23.976 FPS. When that content is localized for other geographies, like Europe, the frame rate is changed to 25 FPS. Then S&P edits are applied for the local geography, and dubs and subs are created. To prepare, store, manage and distribute this content as one global master package, the dubs and subs of the dubbed master have to be conformed to the source video master. This use case applies even with the same frame rate across source and regional masters. And, last but not least, forced narrations have to be identified and exported to a sidecar file in sync with the source video. Conforming audio and subtitles to a source video is a highly labor-intensive task. Forced narrations in the source have to be identified and translated by professionals before conformance. As much as these tasks are time- and effort-intensive, they are highly error-prone too. An AI-powered Video Comparator can quickly conform the audio in a regional master to the source video master. It can automatically identify gaps in audio and forced narration in the regional master compared to the source video master. With CLEAR Vision Cloud, flagged conformance issues are exported for a quick human QC/edit to finalize and publish. The timecodes where forced narration is required are exported to a sidecar file and lined up in the subtitle tool within CLEAR Vision Cloud, where a linguist performs a quick QC. The solution has been tested on hundreds of hours of content with a varied range of applied image edits. It demonstrates accuracy in the range of 99-100%. It does not need any training data, unlike other resource-hungry AI solutions. It works in real time and does not require GPUs (graphics processing units, the backbone of neural networks), making the solution economical.
In general, comparing a one-hour video with another hour of content at a resolution of 1920x1080 takes approximately 1 to 1.5x the content duration.
This enables AI to lead automation in the following use cases:
1. Automatically conform regional dubs to the source master
2. Automatically conform regional subtitles to the source master
3. Automatically identify areas where forced narration is required, mark them up, and line them up on a sophisticated subtitling tool for linguistic review and edits
4. Prepare global masters with one source video and conformed audio tracks
5. Identify edits to enable creation of an IMF package
CLEAR Vision Cloud has ensured near-100% accuracy even across frame rates, reducing the time, effort, and cost involved in conformance by leveraging a high level of ML-led automation to suit specific M&E requirements.
- Significantly reduces the time and effort involved and enhances accuracy across frame rates.
- The reduction in time and effort also optimizes the cost of conforming forced narrations.
- The high level of automation that ML will bring over time, as the machines pick up the logic, can help scale up the conformance activity for a large volume of content.
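The arithmetic behind cross-frame-rate conformance can be shown with a small sketch. When a 23.976 FPS master is released at 25 FPS via the classic film-to-PAL speed-up, each frame keeps its index but plays earlier in wall-clock time, so a cue time in the source maps to a different time in the regional master. This is a generic illustration of that mapping, not CLEAR Vision Cloud's algorithm.

```python
def conform_time(t_src: float,
                 src_fps: float = 24000 / 1001,  # 23.976... FPS
                 dst_fps: float = 25.0) -> float:
    """Map a cue time (seconds) in the source master to the regional
    master, assuming the regional version plays the same frames at
    dst_fps (the film-to-PAL speed-up)."""
    frame = round(t_src * src_fps)  # the frame index is preserved
    return frame / dst_fps
```

For example, a cue 10 seconds into a 23.976 FPS master (frame 240) lands at 9.6 seconds in the 25 FPS regional master, which is why dubs and subtitles drift out of sync unless they are conformed.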
NOMINEE
Spectra Logic StorCycle Storage Lifecycle Management Software Digital content is the lifeblood of media and entertainment organizations and must be preserved for the long run. Spectra's StorCycle software was developed to address the challenge of content storage and lifecycle management by identifying, migrating, accessing and preserving digital media assets for the entire lifespan of that data, be it short-term or forever. StorCycle identifies file attributes of unmanaged assets and moves less frequently accessed content to a secure nearline or Perpetual Tier, which can include any combination of cloud storage, object storage disk, network-attached storage (NAS) and object storage tape. StorCycle initiates true content storage and lifecycle management by automatically scanning and moving digital assets based on creation date, age, size, or last access. Broadcasters and post-production companies can manage archives remotely via a web UI, manually or automatically archive entire project-based directories, and make additional copies for data protection, while maintaining familiar and consistent access to copied or migrated assets. Through HTML links or symbolic links and a web-based search, data in the Perpetual Tier is easily accessible to users in a semi-transparent or transparent manner. StorCycle stores content in open formats, such as CIFS or NFS file systems, LTFS tape or native cloud formats (S3), so that data is always accessible through StorCycle or independent of it. Assets can also be migrated to the cloud for sharing or collaborative workflows. For best practices in media storage, StorCycle can also automatically
make additional copies of data for disaster recovery purposes, including on tape (local or remote) for physically separated, air-gapped copies that protect data from ransomware. The software was designed so that recurring and popular migrations/tiering can be automated for easy and seamless workflows in production environments. StorCycle also provides a simple API that feeds data analytics and intelligence to previously deployed applications, further optimizing intelligent storage and asset management. New features added to StorCycle in the last 12 months allow tiering and protection of cloud data, provide increased protection against ransomware attacks, and boost metadata searchability and accessibility for quick search and recall of assets remotely, among other benefits.
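The scan-and-migrate-with-symlink pattern described above can be sketched in a few lines. This is a generic illustration of age-based tiering with transparent access via symbolic links, under assumed defaults (90 idle days, last-access time as the criterion); it is not StorCycle's implementation.

```python
import os
import shutil
import time

def migrate_cold_files(source_dir, target_dir, max_idle_days=90):
    """Move files not accessed within max_idle_days to target_dir,
    leaving a symbolic link behind so existing paths keep working."""
    cutoff = time.time() - max_idle_days * 86400
    moved = []
    for name in os.listdir(source_dir):
        path = os.path.join(source_dir, name)
        if os.path.isfile(path) and os.stat(path).st_atime < cutoff:
            dest = os.path.join(target_dir, name)
            shutil.move(path, dest)   # migrate the cold file
            os.symlink(dest, path)    # transparent access via the link
            moved.append(name)
    return moved
```

A real lifecycle tool would also evaluate size and creation date, copy rather than move when making protection copies, and record the migration in a catalog for search and restore.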
NOMINEE SSIMWAVE SSIMPLUS® VQ Dial Current content delivery workflows do not offer any control over video experience. Streaming providers too often implement workflows that overshoot the optimal video experience, wasting resources, or undershoot it, driving subscriber churn. Video experience automation is required to achieve complete control over experience in a scalable fashion. SSIMPLUS® VQ Dial is built to achieve exactly that. Using the Emmy® Award-winning SSIMPLUS family of algorithms, SSIMPLUS VQ Dial drives content encoding infrastructure to achieve target video quality consistently without wasted bits, resulting in greater consumer satisfaction and a reduction in delivery costs of up to 50% (approximately $20 million in savings for a mid-sized D2C platform) when compared to content-aware encoding approaches. To see the possible cost savings, SSIMWAVE® has created a savings calculator (https://www.ssimwave.com/savings-calculator/) that translates bitrate reduction into delivery savings. Applications for VQ Dial include:
- Adaptation of encoding bitrate for targeted video quality and content attributes, such as resolution.
- Encoding optimization to reduce delivery costs by up to 50% compared to content-aware encoding approaches, equating to savings of millions of dollars per year for providers with 5 million subscribers or more.
- Delivery of consistent quality by adapting encoding behavior to the content, encoding performance, and display devices.
- Reduction of rebuffering events by 50% and a decrease in video start times of 10%.
VQ Dial can be deployed across all public cloud platforms to drive cloud-based encoders to achieve the target viewer experience, set by streaming providers, in an automated manner. The product supports all commonly used encoders, including AWS MediaConvert, Bitmovin, Google Transcoder, and more. All of the underlying capabilities of an encoder are leveraged to achieve the target
experience while meeting content decoding, delivery, and playback constraints. The key to VQ Dial's success is the use of the most accurate and complete measure of human perception for video experience assessment: SSIMPLUS. The metric measures video experience at any point in a workflow, supports all content attributes including high dynamic range, and works uniformly across all content types, such as animation and sports. In 2020, SSIMWAVE received a Technology & Engineering Emmy® Award from the National Academy of Television Arts & Sciences for Development of Perceptual Metrics for Video Encoding Optimization, recognizing its work on SSIMPLUS. The role of SSIMPLUS in enabling the development of processing-optimized compression technologies at Disney Streaming Services is recognized in this post: https://www.linkedin.com/posts/scottlabrozzi_disneys-2020-technology-and-engineering-activity-6767223821717004288-cvvg.
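The "drive the encoder to a target quality" idea can be sketched as a search over bitrate against a perceptual quality model. Everything here is an assumption for illustration: the stand-in logarithmic quality model is not SSIMPLUS, and a real system would score actual encodes rather than a closed-form curve.

```python
import math

def bitrate_for_target(target_score, predict_score,
                       lo=200, hi=20000, tol=0.1):
    """Binary-search the lowest bitrate (kbps) whose predicted
    quality meets the target, given a monotonic
    predict_score(bitrate) model. Assumes lo misses the target
    and hi meets it."""
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if predict_score(mid) >= target_score - tol:
            hi = mid   # mid is good enough; try cheaper
        else:
            lo = mid   # mid misses the target; spend more bits
    return hi

# Stand-in perceptual model: quality rises with the log of bitrate.
def demo_model(kbps: float) -> float:
    return min(100.0, 25 * math.log10(kbps))
```

The point of the search is the cost claim in the text: stopping at the lowest bitrate that hits the quality target, instead of a fixed ladder, is where the "no wasted bits" savings come from.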
NOMINEE TAG Video Systems TAG Video Systems: Media Control System Today's media organizations are data-driven businesses, using big-data analysis of consumer viewing behavior, enhanced by AI and ML, to drive recommendation engines, marketing, promotions and even programming and creative processes. Yet media operations are more complicated than ever: new formats and technologies, on-premises and cloud, traditional delivery and OTT, all of which make building a data-driven operation difficult. TAG's new Realtime Media Performance is an end-to-end solution for real-time deep monitoring and visualization across all these workflows and topologies. TAG's Multi-Channel Monitoring (MCM) system is a software-based end-to-end monitoring, deep probing, logging, and visualization solution. It monitors every type of signal in the ecosystem, from live production through OTT, supports the latest technologies such as JPEG XS, NDI 5, Dolby Atmos, and CDI, and provides real-time visualization and deep probing for critical analysis of signal health and transport networks, running on COTS hardware, in the cloud, or in a hybrid of the two. With the new TAG Media Control System (MCS), media organizations now have a single point of control and a centralized dashboard to manage TAG across their entire ecosystem. TAG MCS aggregates the data and then exposes it to open-source and third-party analysis and visualization tools like Elasticsearch, Kibana, Grafana and Prometheus, turning rich data into actionable, invaluable insights into the workflow.
By opening its data to users instead of creating proprietary tools, TAG is leading the industry toward transparent, open workflows, empowering media companies to choose best-of-breed solutions rather than be inhibited by legacy constraints. Additionally, TAG's Zero Friction™ licensing model enables users to take full advantage of their license for any feature in any workflow, including all future TAG innovations. TAG is thus creating the industry's first financially appreciating technology asset, one that improves utilization of technologies that would otherwise drift toward obsolescence, a persistent concern in the advancement of media technologies.
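Exposing monitoring data to tools like Prometheus usually means rendering it in the Prometheus text exposition format, which any scraper or dashboard can consume. The sketch below renders a dict of gauge values that way; the metric names, `tag` prefix, and data shape are invented for the example and say nothing about MCS's actual metric schema.

```python
def to_prometheus(metrics, prefix="tag"):
    """Render a dict of {name: (value, labels)} gauges as Prometheus
    text exposition format, ready to be scraped."""
    lines = []
    for name, (value, labels) in sorted(metrics.items()):
        label_str = ",".join(
            f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{prefix}_{name}{{{label_str}}} {value}")
    return "\n".join(lines) + "\n"
```

Publishing in an open wire format like this, rather than a proprietary one, is precisely what lets Grafana, Kibana, or any in-house tool consume the same data.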
NOMINEE Teradek Teradek WAVE Teradek's Wave is the only live streaming monitor that handles encoding, smart event creation, network bonding, multi-streaming, and recording, all on a daylight-viewable touchscreen display. Wave's sleek form factor is compact and versatile. Users can simply set the device on a tabletop with its leg stands, or mount Wave to a camera for on-the-go streaming. Its hot-swappable battery plates and USB-C connector provide continuous power for long productions, and its daylight-viewable monitor makes it easy to see what's happening on screen at any time of day, in any brightness. These features make Wave a highly adaptable device for streamers in any environment. A big draw for Wave users is the ability to set up an unlimited number of events ahead of time with Wave's easy-to-use project workflow, FlowOS. This intuitive operating system guides users in creating their live streaming events in advance, from video and audio configuration to network connection and destination settings. With FlowOS, streamers can also monitor their video in real time from Wave and keep tabs on their stream settings and analytics, giving them the flexibility to prep and plan for a stress-free stream. When using Wave with its mobile app, streamers can step away from the device and easily review stream stats such as bitrate and network status to ensure a stable stream from their mobile device. Wave users can take their streams two steps further by pairing the device with Sharelink, Teradek's cloud service. Sharelink enables network bonding, which protects live streams by splitting the video bitrate across multiple network connections, including Ethernet, USB modems, and cellular hotspots. If one connection becomes unreliable, Wave load-balances across the other connections, locking in a stable connection in challenging environments.
Sharelink's secondary benefit is its ability to send streams to multiple platforms at once, allowing viewers to tune in from whichever platform they prefer and streamers to grow their audience. In the past two years, one of the fastest-growing requirements for media that originates in the camera (broadcast, corporate,
educational, entertainment, etc.) has been accessible streaming. Here's what sets Teradek WAVE apart from all others in the field. Wave is the only live streaming monitor that handles encoding, smart event creation, network bonding, multi-streaming, and recording, all on a daylight-viewable touchscreen display.
- Wave is Teradek's first monitor-encoder that allows users to view their video feed directly on the encoder itself, eliminating the need for an additional screen
- Using Sharelink, Teradek's cloud service, Wave users can enable network bonding for highly stable streams and broadcast to multiple streaming platforms simultaneously
- It features hot-swappable battery plates for continuous power and a USB-C connector for universal power connectivity
- The monitor-encoder features a daylight-viewable 7-inch IPS LCD touchscreen with 1,000 nits of brightness
- It encodes in H.264 up to 1080p60 to any RTMP destination while supporting simultaneous on-board recording
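Network bonding as described above, splitting a stream's bitrate across several links in proportion to what each can carry, can be sketched as a simple allocation function. This is a generic illustration of the load-balancing idea, not Sharelink's actual bonding protocol, and the link names and capacities are invented.

```python
def allocate_bitrate(total_kbps, link_capacity_kbps):
    """Split a stream's bitrate across bonded links in proportion to
    each link's measured capacity; dead links (0 kbps) get nothing."""
    cap_total = sum(link_capacity_kbps.values())
    if cap_total <= 0:
        raise RuntimeError("no usable links")
    return {link: total_kbps * cap / cap_total
            for link, cap in link_capacity_kbps.items()}
```

Re-running the allocation whenever measured capacities change is what produces the failover behavior in the text: when one link degrades to zero, its share is redistributed across the remaining links.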
NOMINEE Zixi Zixi Software-Defined Video Platform Zixi is the architect of the Software-Defined Video Platform (SDVP), the industry's most complete live IP video workflow solution. The SDVP is currently integrated into 300+ encoder, decoder, cloud multiscreen, and cloud playout partners. The SDVP enables media organizations to economically and easily source, manage, localize, and distribute live events and 24/7 live linear channels at broadcast QoS, securely and at scale, over any form of IP network or hybrid environment. Superior video distribution over IP is achieved via four components.
1) Protocols: Zixi's congestion- and network-aware protocol adjusts to varying network conditions and employs forward error correction techniques for error-free video transport over IP. As a universal gateway, it supports standards-based protocols such as RIST and the open-source SRT, alongside common industry protocols such as RTP, RTMP, HLS, and DASH. Zixi supports 17 different protocols and containers, the only software platform designed for live video to do so.
2) Video Solutions Stack: Provides essential tools and core media processing functions that enable broadcasters to transport live video over any IP network, correcting for packet loss and jitter. This software manages all supported protocols, transcoding, and format conversion; collects transport analytics; monitors content quality; and layers intelligence on top of the protocols, such as bonding and patented hitless failover across any configuration and any IP infrastructure, allowing users to achieve five-nines reliability.
3) ZEN Master: The SDVP's control plane, enabling users to intelligently provision, deploy, manage, and monitor thousands of content channels across the Zixi Enabled Network, including 300+ Zixi-enabled partner solutions such as encoders, cloud media services, editing systems, and ad insertion and video management systems.
With such an extensive network of partner-enabled systems, Zixi ZEN Master presents an end-to-end view across the complete live video supply chain.
4) Intelligent Data Platform: A data-driven advanced analytics system that collects billions of telemetry points per day to
clearly present actionable insights and real-time alerts. The SDVP leverages cloud AI and purpose-built ML models to identify anomalous behavior, rate overall delivery health and predict impending issues. This fourth key component, accessible via the ZEN Master control plane, consists of a data bus that aggregates over three billion data points daily from hundreds of thousands of inputs within the Zixi Enabled Network, including over 300 partner solutions and proprietary data sources such as Zixi Broadcaster. This telemetry data is then fed into five continuously updated machine-learning models, where events are correlated and patterns discovered. With clean, modern dashboards and market-defining real-time analytics, the Zixi SDVP enables users to focus on what's important. Intelligent alerts and health scores generated by Zixi's AI/ML models help sift through and aggregate data trends so that operations teams always have the insights they need without data overload. At a time that sees the normalisation of remote working and a proliferation in the ways programs reach viewers, Zixi's SDVP delivers agility, reliability and broadcast-quality video securely from any source to any destination over flexible IP video routes.
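The hitless-failover concept mentioned above can be illustrated generically: send the same packets over two paths and reconstruct the stream from whichever path delivered each sequence number, so a loss on one path never reaches the output. This sketch shows only that general idea (familiar from redundant-stream schemes such as SMPTE 2022-7), not Zixi's patented implementation.

```python
def hitless_merge(path_a, path_b):
    """Merge two redundant packet streams of (seq, payload) pairs
    into one gap-free stream, taking each sequence number from
    whichever path delivered it. First arrival wins."""
    best = {}
    for seq, payload in list(path_a) + list(path_b):
        best.setdefault(seq, payload)  # keep the first copy seen
    return [best[s] for s in sorted(best)]
```

In the example below, path A drops packet 3 and path B drops packet 2, yet the merged output is complete, which is why the switch between paths is "hitless".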
NOMINEE Zype Playout 2.0 Zype Playout 2.0 represents a new generation of live-linear streaming capabilities that make it possible for anyone to curate a diverse range of content types and formats into a single linear channel. It allows users to easily build and monetize linear TV channels through streamlined, drag-and-drop programming of live and on-demand video into linear streams, lowering the time, cost and expertise required to grow viewership of linear content on OTT, mobile and social video platforms. Zype Playout 2.0 helps broadcasters, digital publishers and distributors take advantage of the growth in digital linear platforms to easily create a lean-back viewing experience of "always on" linear video channels, with real-time analytics to optimize video streams for maximum engagement and monetization. Accessible from any browser, Zype's cloud-based playout solution makes it easy to build and customize digital linear channels from anywhere in the world and deliver quality playout feeds to popular streaming platforms that can be quickly updated on the fly. With Zype Playout 2.0, content owners can find new life in existing assets and further monetize existing content libraries by building digital linear channels for streaming platforms. Playout 2.0 makes it simple to build and grow curated linear FAST channels, whether for always-on, pop-up, seasonal or programming-marathon use cases. Companies like VEVO, Spin Master and Night Flight have turned to Zype's playout solution to curate and distribute linear video channels. Zype's Playout 2.0 gives users one consolidated platform to ingest content, curate live or VOD programming into linear streams, monetize the content, and distribute to multiple digital endpoints.
Playout 2.0 provides intuitive tools designed to make operating playout a breeze, such as drag-and-drop scheduling of live or VOD content, automatic content ingestion, easy ad-break setup and insertion capabilities, and flexible distribution destinations, all accessible from a user-friendly interface. Within Playout 2.0, users can easily manipulate playout channels, group content into reusable program blocks, automatically fill program gaps, override scheduled content and loop assets, with the ability to export programming rundowns before publishing.
Customers can also integrate Zype's Playout Analytics API with their own data warehouse to get real-time updates that help key business users make smarter programming and business decisions. Differentiators of Zype Playout 2.0 include smart content organization tools, a horizontal timeline schedule UI, and tight integration with Zype's API-first digital infrastructure tools. Unlike other playout solutions, programming changes in Zype Playout 2.0 update in seconds, so making changes on the fly is efficient and optimizing programming in real time is easy. Combined with Zype's ability to sync content between its CMS and Zype Playout 2.0, its CRM and subscriber management solutions, its wide ecosystem of connectors, its best-in-class technical support and its enablement of first-party data collection, Zype offers an unbeatable solution for end-to-end video distribution.
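The "automatically fill program gaps" behavior mentioned above can be sketched as a pass over a sorted schedule that inserts filler wherever blocks leave daylight between them. The block representation and filler asset are assumptions for illustration, not Zype's scheduling model.

```python
def fill_gaps(schedule, channel_end, filler):
    """Given (start_s, end_s, asset) blocks sorted by start time,
    insert filler blocks wherever the schedule has a gap, out to
    channel_end."""
    out, cursor = [], 0.0
    for start, end, asset in schedule:
        if start > cursor:
            out.append((cursor, start, filler))  # plug the gap
        out.append((start, end, asset))
        cursor = max(cursor, end)
    if cursor < channel_end:
        out.append((cursor, channel_end, filler))  # trailing gap
    return out
```

A playout scheduler would typically loop or trim the filler asset to fit each gap exactly; the sketch only shows where the gaps are and how they are claimed.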
Best Of Show At IBC 2020
NOMINEE Telos Alliance Axia® Quasar™ SR AoIP Mixing Console Introducing the new Axia® Quasar™ SR AoIP Mixing Console - The Power of Simplicity. Axia Quasar AoIP consoles are the ultimate mixing machines, putting power at your fingertips for the best content creation in broadcast. Now with two models to choose from, XR and SR, the Quasar family offers broadcast engineers and less seasoned board operators alike boundless production possibilities, modularity, scalability, and workflow flexibility. Quasar SR is the direct replacement for Axia’s best-selling Fusion console and is comparable in both price and feature set, yet delivers all the power, ergonomics, industrial design, and star appeal of our flagship Quasar XR console. Quasar SR is not reserved for the most knowledgeable broadcaster but is approachable to any board operator thanks to its streamlined surface design. Quasar SR uses the same frame, power supply, and master module as Quasar XR, but the fader modules are non-motorized, and there are fewer, larger, and easier-to-reach buttons on each channel strip. The Pinnacle of Console Design Quasar SR delivers exquisite appearance and high-quality architecture, including scratch-resistant work surfaces and components rugged enough for a lifetime of use. All parts subject to wear are industrial-, automotive-, or even avionics-grade. The absence of an overbridge makes for easy desk installation, and the console is fanless for quiet operation, with redundant load-sharing power supply units. High-resolution color TFT displays and RGB pushbuttons are used throughout. With a sleek, easy-to-operate, industrial-grade 12.1-inch touchscreen user interface that feels instantly familiar, operators can master Quasar SR within minutes. Two types of UIs are available, Expert and Simplified, to cover all user workflow requirements. Quasar does not require an external display, although you can connect an external display to duplicate the touchscreen interface.
All Quasar consoles offer touch-sensitive Encoders, Faders, and User Buttons, providing responsive user interaction and bringing Quasar’s dynamic UI closer to your fingertips.
Remote Control & Monitoring Quasar Soft is an optional upgrade for Quasar SR that allows remote control of the console from your browser. You can generate up to eight HTML5 pages and configure them to display any of the 64 input channels, plus a small monitor section, or even the entire master section of the console. Included as part of the Quasar Soft license, Quasar Cast is a remote monitoring solution that lets you listen to any Livewire stream in the network through the same web browser. Quasar Engine The Quasar Engine provides bulletproof signal processing for Quasar consoles and is a must-have for the operation of your XR or SR console. Allowing you to pay only for the number of channels you need, the Quasar Engine is modular, starting at 16 channels and scaling up in blocks of 16 to a maximum of 64 channels. This 1RU native AoIP powerhouse includes 4-band fully parametric EQ, powerful dynamics processing and an automixer on every channel, four program buses, and eight auxiliary buses. Four Surface Layers and a Virtual Mixer (VMix) with 16 independent 5-channel V-Mixers extend the mixing capacity of your Quasar console far beyond its physical fader count.
NOMINEE Teradek Teradek WAVE Teradek’s Wave is the only live streaming monitor that handles encoding, smart event creation, network bonding, multi-streaming, and recording – all on a daylight-viewable touchscreen display. Wave’s sleek form factor is compact and versatile. Users can simply set the device on tabletops with its leg stands, or mount Wave to cameras for on-the-go streaming. Its hot-swappable battery plates and USB-C connector provide continuous power for long productions, and its daylight-viewable monitor makes it easy to see what’s happening on the screen at any time of day, in any brightness. These features make Wave a highly adaptable device for streamers in any environment. A big draw for Wave users is the ability to set up an unlimited number of events ahead of time with Wave’s easy-to-use project workflow: FlowOS. This intuitive operating system guides users in creating their live streaming events in advance – from video and audio configuration to network connection and destination settings. With FlowOS, streamers can also monitor their video in real time from Wave and keep tabs on their stream settings and analytics, giving them the flexibility to prep and plan for a stress-free stream. When using Wave with its mobile app, streamers can step away from their Wave and easily review their stream’s stats, like bitrate and network status, to ensure a stable stream from their mobile device. Wave users can take their streams two steps further by pairing their device with Sharelink, Teradek’s cloud service. Sharelink enables users to utilize network bonding, which protects live streams by splitting the video bitrate across multiple network connections including Ethernet, USB modems, and cellular hotspots. If one connection becomes unreliable, Wave load balances across the other connections – locking in a stable connection in challenging environments.
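The bonding behaviour just described, splitting a stream's bitrate across links and rebalancing when one drops, can be modeled with a short sketch. This is a simplified illustration under assumed link-capacity numbers, not Teradek's actual (proprietary) algorithm.

```python
def split_bitrate(total_kbps, links):
    """Divide a stream's bitrate across healthy links in proportion to their
    measured capacity (kbps). Links reporting zero capacity are treated as
    failed and excluded, so traffic rebalances onto the remaining links."""
    healthy = {name: cap for name, cap in links.items() if cap > 0}
    if not healthy:
        raise RuntimeError("no usable network links")
    capacity = sum(healthy.values())
    return {name: round(total_kbps * cap / capacity) for name, cap in healthy.items()}

# Example: the cellular hotspot has dropped out, so its share shifts
# onto Ethernet and the USB modem.
links = {"ethernet": 8000, "usb_modem": 3000, "hotspot": 0}
print(split_bitrate(6000, links))
```

Re-running the allocation whenever link measurements change is what "locking in a stable connection" amounts to in this simplified model.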
Sharelink’s secondary benefit is its ability to send streams to multiple platforms all at once, allowing viewers to tune in from whichever platform they prefer and allowing streamers to grow their streaming audience. In the past two years, one of the fastest-growing requirements for media that originates in the camera (broadcast, corporate, educational, entertainment, etc.) has been accessible streaming. Here’s what sets Teradek WAVE apart from all others in the field:
• Wave is the only live streaming monitor that handles encoding, smart event creation, network bonding, multi-streaming, and recording, all on a daylight-viewable touchscreen display
• Wave is Teradek’s first monitor-encoder that allows users to view their video feed directly on the encoder itself, eliminating the need for an additional screen
• Using Sharelink, Teradek’s cloud service, Wave users can enable network bonding for highly stable streams and broadcast to multiple streaming platforms simultaneously
• It features hot-swappable battery plates for continuous power and a USB-C connector for universal power connectivity
• The monitor-encoder features a daylight-viewable 7” touchscreen with an IPS LCD display and 1,000 nits of brightness
• It encodes in H.264 up to 1080p60 to any RTMP destination while supporting simultaneous on-board recording