Best of Show at IBC 2019





FREESPEAK EDGE

FreeSpeak Edge™, the latest addition to the industry-leading FreeSpeak™ digital wireless intercom family, is the most advanced digital wireless intercom system on the market, delivering the best audio quality and enhanced performance in some of the most complex live performance environments. The system gives the user more control and customization options, thanks to advanced frequency coordination capabilities and intuitive design features in the system's transceivers and beltpacks.

• Built from the ground up, FreeSpeak Edge is the result of extensive feedback from existing FreeSpeak II® power users. It incorporates recent advances in the underlying technology, including an all-new 5GHz chipset with an exclusive radio stack optimized for intercom. FreeSpeak Edge also leverages state-of-the-art audio-over-IP developments in its architecture, utilizing AES67 connections between the transceivers and the host intercom frame for exceptional flexibility in deployment.

• The 5GHz band is an ideal choice for large-scale communications: it can be managed with frequency coordination to reduce interference, and it offers the widest range of RF channels available for exceptional scalability. Its higher frequencies provide more bandwidth for data, which allows for finer control, additional audio channels, greater robustness, lower latency and better audio quality.

• FreeSpeak Edge leverages all the power of 5GHz technology to perform flawlessly in even the most challenging venues and high-multipath environments. The system takes advantage of Clear-Com's exclusive RF technology, which uses OFDM (Orthogonal Frequency-Division Multiplexing) to provide a robust transport layer that is immune to most forms of interference.

• The system delivers the clearest 12kHz audio quality with ultra-low latency, and is highly scalable, with the technology and bandwidth to support over 100 beltpacks and 64 transceivers to accommodate the largest productions. It can also be combined seamlessly with FreeSpeak II 1.9GHz and 2.4GHz systems, providing three bands across a single unified communications system.

• FreeSpeak Edge transceivers and beltpacks offer more customization and control than ever before to accommodate increasingly complex communication needs. Each device uses two high-throughput, low-latency 5GHz radios to provide for redundancy and future feature enhancements. The beltpack's ergonomic design includes asymmetrical concave/convex top buttons for at-a-glance identification and touch operation; eight programmable buttons; rotary controls on both sides; and a master volume control and flashlight on the bottom. Each transceiver supports 10 beltpacks and includes attenuation and external antennas for custom RF zones, as well as wall and mic stand mounting options. The system delivers the robustness and reliability that customers have come to expect from the award-winning FreeSpeak range.

• While some manufacturers are trying to incrementally add limited 5GHz capabilities to existing solutions, Clear-Com has leapfrogged right to the edge of what is possible with wireless intercom technology today, in readiness for tomorrow's possibilities.



2028 VOCAL MICROPHONE

A microphone that makes singers sound as though they are not using a microphone – that is what DPA achieved with the 2028 Vocal Microphone, which will debut at IBC 2019. Perfectly suited for live stage performances, broadcast and pro AV applications, this exceptional microphone is ideal for everyone, from indie artists to international touring singers, because its incredible clarity and amazing sound allows all types of vocals to shine. DPA's background lies in creating highly precise microphones for studio recording and measurement purposes. With the 2028, this legacy has been transplanted to the live stage, allowing vocalists to achieve studio-quality performances without fear of distortion, feedback or sound bleeding into their microphone from other instruments and band members. On a live stage, the 2028 delivers the same sonic qualities as other DPA microphones. Little or no EQ is required to make it sound as if you are standing next to the singer listening to their performance. This allows the singer to focus on their vocal performance as if they were not using a microphone, which in turn puts less strain on the voice. In addition, the transparency allows sound engineers to spend their time shaping the sound experience rather than covering up artifacts.

"We've designed this mic so that it sounds like the singer isn't actually using a mic," says René Mørch, product manager, DPA Microphones. "You get the full-on, natural sound of the artist's voice, not what the microphone 'thinks' the artist sounds like. This gives a lot of freedom to the sound engineer to be creative – he can work on crafting a unique sound for the performance based on a clean, natural vocal track." The 2028 has been optimized for the unique challenges of the live stage and cohesively designed to provide the same amazing sound as the legendary DPA d:facto vocal microphone. It features a brand-new fixed-position capsule, as well as a specially designed shock-mount and pop filter. It exhibits a supercardioid polar pattern with DPA's famous uniform off-axis response, which gives the microphone very high gain-before-feedback and makes it easier to handle bleed from instruments in close proximity, picking up sound in a natural way.

With the expected wear and tear that comes with live performances, both the outer grille and the inner pop filter of the 2028 can be detached and rinsed. It also comes in three variants to suit all performance styles: a wired XLR version with handle, and two wireless configurations compatible with the industry's most widely used wireless microphone systems – the SL1 adapter, compatible with Shure, Sony and Lectrosonics; and the SE2 adapter, compatible with Sennheiser. At only 500 Euros, DPA's new 2028 Vocal Microphone is a game changer – an overachiever that performs significantly better than anything else in its price range.



GENELEC MODEL 8361A

Genelec Model 8361A is the latest addition to the company's highly successful and award-winning range of products, The Ones. With this addition the range consists of four products. The 8361A features the following:

MAIN MONITORING
Enable extended listening distance, higher maximum SPL, and more headroom using the W371 Adaptive Woofer System with the 8341, 8351 or 8361.

POINT SOURCE
Midrange and tweeter drivers in the centre of a diffraction-free aluminium enclosure. Dual woofers hidden behind the waveguide. All drivers on the same acoustical axis.

DIRECTIVITY AND EXTENDING TO VLF
The Adaptive Woofer System increases directivity at low frequencies and offers a neutral, minimum-ripple in-room frequency response, typically continuing below 20 Hz.

FULL-SIZE WAVEGUIDE
Integrated waveguide without discontinuities, covering the entire front, for excellent directivity and precise imaging.

UNIQUE LF ADAPTATION
Novel in-room adaptation in LF and VLF, optimized using the latest GLM application, improves bass responsiveness and enables a neutral character.

HORIZONTAL OR VERTICAL
No sonic compromise in either orientation. IsoPod included for flexible tilt. Wide variety of mounting options available.

THREE-WAY COMPACT
The most compact three-way monitors with LF directivity control like in very large monitors. Spectacular industrial design by Harri Koskinen.

SETUP AND CALIBRATE
Management network for system-building and GLM auto-calibration. Analog and digital inputs, universal mains voltage power supply. Standard fixing points for flexible mounting.

LIGHT ENVIRONMENTAL FOOTPRINT
Sustainable production and use: made in Finland using renewable energy and recycled aluminium. Low power consumption and long life.

UP TO 72 CHANNEL SYSTEMS
Build systems from stereo to immersive. Use GLM as an integrated monitor control offering level calibration, mute, solo, and switching between setups.



DIVINE AOIP POE POWERED MONITOR
DANTE®/AES67 DIECAST NETWORK AUDIO POWERED LOUDSPEAKER

The new DIVINE DSP-controlled Powered Monitor is our new concept in PoE-powered network audio monitor speakers. Featuring up to four audio inputs, and powered by Power over Ethernet using just one cable for audio and power, this compact loudspeaker will enhance any configuration and work seamlessly with existing equipment: interfacing with other manufacturers' equipment within your AoIP infrastructure is completely trouble-free, as it supports both the Dante® and AES67 protocols. The DIVINE is the latest in full-range, compact, diecast loudspeaker design. Boasting a modern, low-distortion and low-noise 10 Watt Class D amplifier, and with up to 96kHz crystal-clear digital audio capability, the DIVINE is most at home in outside broadcast and production situations. The DIVINE is a general-purpose loudspeaker and usefully has four audio inputs from the network. Using the front panel source select switch, these audio inputs can be individually routed to the loudspeaker, or they can be mixed together. Additionally, and uniquely, the DIVINE has the facility to prioritise a single audio input over any of the others. This is particularly useful if you wish to monitor programme audio while simultaneously listening to important talkback from a director or producer. Alternatively, this facility could be used to prioritise a fire alarm signal over anything else.

As well as the source select switch, the front panel features four LEDs to indicate which source is currently selected. These LEDs are multi-coloured RGB devices and can also be set to show the level of their associated source. For robustness, and to prevent damage during demanding use, the DIVINE features a recessed volume control and recessed rear connectors and controls. A unique feature of the DIVINE is a small rear panel display allowing for setup and configuration of a vast array of functionality, including setting of EQs, source priority, power-saving mode, and disabling of the front panel controls. For increased flexibility, the DIVINE can be controlled by our Windows 10 application, GlenController. The application will have two main functions: to allow the volume and source of multiple DIVINEs to be synchronised, making stereo and multichannel working easier than ever before, and to facilitate easy updates across the network when even more features become available in the future. The DIVINE is aluminium diecast, powder-coated and portable, hard-wearing and robust in design and build quality, ensuring it is able to withstand the knocks and bumps that come with outside broadcast and production environments. The recessed knobs, switches and XLR-encased RJ45 mean that they are much less likely to become damaged during transit or in use. As well as being free-standing, it has a rear panel VESA mount for easy installation and a useful mic thread on the bottom, meaning it takes seconds to fit to a mic stand. A steel grille across the front of the loudspeaker, fitted as standard, affords extra protection for the drive unit beneath.
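The priority facility described above amounts to a ducking mixer: when the priority input carries signal, the other inputs are attenuated before summing. A minimal per-block sketch of the idea (the threshold and attenuation values are illustrative assumptions, not the DIVINE's actual DSP, which would also smooth the gain change over time):

```python
import numpy as np

def mix_with_priority(sources, priority_idx, threshold_db=-40.0, duck_db=-20.0):
    """Mix equal-length mono sources; when the priority source is active
    (peak above threshold_db), duck all other sources by duck_db."""
    peak = np.max(np.abs(sources[priority_idx]))
    active = 20 * np.log10(max(peak, 1e-12)) > threshold_db
    duck_gain = 10 ** (duck_db / 20) if active else 1.0
    out = np.zeros_like(sources[priority_idx])
    for i, src in enumerate(sources):
        out += src if i == priority_idx else duck_gain * src
    return out
```

When the priority input (say, director talkback or a fire alarm feed) falls silent, the gate releases and the remaining sources return to full level.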



To an audience, every single word is important. Whether a speaker is broadcasting live over the air or performing on stage in a theatrical production, it is essential for them to be heard naturally and reliably. TwinPlex™ – Shure's new line of premium subminiature omnidirectional lavalier and headset microphones – stands up to the toughest conditions to do just this. The dual-diaphragm, patent-pending capsule technology offers best-in-class sound in a compact, easy-to-conceal package for when professional vocal performance is a must. Designed for live performances, TV broadcasts and a variety of other use cases, TwinPlex supports those high-stakes audio moments – no matter the stage.



Scorpio is a 32-channel, 36-track mixer-recorder and the most powerful product ever designed by Sound Devices. With 16 mic/line preamplifiers, 32 channels of Dante in and out, AES in and out, 12 analog outputs and multiple headphone outputs, Scorpio is well-suited for any production scenario. A fully customizable routing matrix enables sound professionals to send any input to any channel, bus, or output. Up to 12 buses may be individually mixed. Thanks to its compact form factor, the Scorpio is equally at home over the shoulder or in a mobile rig. Scorpio incorporates Sound Devices' most cutting-edge technology. An ultra-powerful engine comprising three FPGA circuits and six ARM processors delivers the horsepower needed for the most complex tasks. FPGA-based audio processing with 64-bit data paths ensures the highest sound quality and reliability. The Scorpio also features Sound Devices' latest and best analog microphone preamplifier design. These preamps have the smoothest sound and lowest noise of any preamp in the company's 20-year history and include built-in analog limiters, high-pass filters, delay, 3-band EQ and phantom power. The Scorpio has an internal 256GB SSD and can simultaneously record to two SD cards for redundancy. For additional flexibility, sound professionals can send different files to their choice of media. The companion SD-Remote Android application allows control of the Scorpio via a large display. Great attention to detail has been paid to every aspect of the Scorpio's design. The most common menus are accessible with only one or two button presses, and many menu shortcuts can be reached with one hand. Scorpio features a built-in dual L-Mount battery charger and may be powered with L-Mount batteries or via the TA4 DC inputs using smart batteries, NP-1 batteries, or in-line power supplies. The ultra-accurate, fully featured timecode generator contains its own battery to hold timecode for up to four hours after power-off.

Dugan Automixing automatically balances the levels of in-use microphones and attenuates unused ones, making it ideal for multi-microphone applications. On Scorpio, up to 16 channels can be automixed at a time, and two separate groups can be mixed simultaneously. Scorpio's intuitive LED metering and sunlight-readable screen display accurate Dugan channel attenuation and gain distribution across all channels.
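The principle behind Dugan-style automixing is gain sharing: each microphone receives a gain equal to its share of the total signal energy, so the summed system gain stays constant no matter how many mics are open. A minimal sketch of that principle (an illustration of the published gain-sharing idea, not Sound Devices' implementation):

```python
import numpy as np

def dugan_gains(channel_levels_db):
    """Gain-sharing automix: each channel's gain (in dB) reflects its
    share of the total signal power, so the summed linear gain across
    all channels is always 1 and overall system gain stays constant."""
    powers = 10.0 ** (np.asarray(channel_levels_db, dtype=float) / 10.0)
    shares = powers / powers.sum()      # linear gains, sum to 1
    return 10.0 * np.log10(shares)      # per-channel gain in dB
```

With four equally loud mics, each channel sits about 6 dB down; when one talker dominates, that channel rises toward 0 dB while the idle mics are pushed far down, which is exactly the attenuation pattern the Scorpio's metering displays.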



A new component of Wheatstone's AoIP intelligent network, SwitchBlade is the first product of its kind to combine the power of AoIP logic control with SIP and codec bandwidth optimization, allowing transport of both high-quality audio programming and the control logic needed for full studio operation between distant sites. Previously, broadcasters were able to transfer high-quality programming across public or private links, but had to leave logic control of elements back at the studio with the AoIP system. When connected to a WheatNet-IP audio network, SwitchBlade offers 24 modules that show up as sources/destinations in WheatNet-IP Navigator software and are then available on every WheatNet-IP control surface and monitor station in the system. Each module carries ancillary signaling embedded in the audio streams for equipment activation and control, and each can be controlled, configured or even reset without interfering with the operation of the other 23.

SwitchBlade has two Ethernet connections: one for connecting to a SIP/VoIP service provider or SIP-enabled PBX phone system, and the other for connecting directly into the WheatNet-IP audio network. It comes with major codecs, including up to 512 kb/s stereo Opus and G.711, for high-quality program distribution between studios, networks, affiliates, news remotes, sports venues – anywhere you need to connect separate locations. Practical applications include:

• Consolidating program operations for several stations scattered across a region
• Live remote production, including high-quality programming and console/mic control between the home studio and sports or concert venues
• Multiple channels of IFB from remotes
• Sharing program and operating control between sister studios over an IP link
• One-to-many STL codecs between one studio and several transmitter sites; a 1RU SwitchBlade at the studio feeds two, four, six or more existing SIP-compliant codec units at each transmitter site
• Transferring high-quality music between two facilities, or from a cloud-based automation system over the common Internet

Wheatstone's Jay Tyler sums it up: "SwitchBlade is the missing link for connecting WheatNet-IP facility to WheatNet-IP facility, from city to city or across the world. Not only does it carry audio, it carries control – which means you can send and receive routing commands, automation control, and even fader levels across the two locations. This is a real game changer. SwitchBlade finally makes it possible to monitor and switch points of multiple local audio chains from different network operation centers around the world."

ADOBE AUTO-DUCKING FOR AMBIENCE

Audio professionals know that ambience is an important storytelling tool for sound design; ambience adds atmosphere, texture, and a sense of place to your content. This is true no matter the medium -- from film, TV, and online video to broadcast, podcasts, radio, and beyond. But the key to a successful sound mix is balancing your ambient sound with other elements, like dialogue and music. In typical production workflows, this can be a tedious task, requiring manual keyframes at each point where audio ducking is needed. With the introduction of Auto-Ducking for Ambience -- now available in Premiere Pro (NLE) and Audition (DAW) -- Adobe has made this process possible with just a few clicks. This new feature leverages the power of Adobe Sensei -- Adobe's own artificial intelligence -- to detect speech and automatically adjust the volume of ambient sounds below dialogue, music, or sound effects. Most importantly, users have full creative control over the results. To start, you have a handful of parameters at your fingertips before Auto-Ducking goes to work, including Sensitivity, Reduce By, and Fades. Once the results are generated -- almost instantly -- the adjustments made by Auto-Ducking are keyframed so you can fine-tune your mix. For this reason, Adobe's new Auto-Ducking for Ambience is a helpful tool for both audio novices and audio professionals. There are few production environments more demanding than broadcast. And with the need to push content to more platforms than ever before, time is tighter than ever. Whether Adobe's Auto-Ducking saves you minutes or hours, you'll have more freedom to focus on the creative and storytelling aspects of your sound mix.
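Mechanically, this kind of ducking applies a time-varying gain to the ambience track: full level where no speech is detected, reduced under speech, with short fades at each transition. A rough sketch of the idea (the speech-activity input stands in for Sensei's detection, and the parameter names are illustrative, not Adobe's API):

```python
import numpy as np

def duck_ambience(ambience, speech_active, reduce_db=-12.0, fade_len=2400):
    """Duck an ambience track under detected speech.

    ambience      : 1-D float array of samples
    speech_active : boolean array, same length, True where speech is present
    reduce_db     : gain applied to the ambience under speech ("Reduce By")
    fade_len      : fade length in samples; a moving average smooths the
                    gain steps into linear fades ("Fades")
    """
    target = np.where(speech_active, 10.0 ** (reduce_db / 20.0), 1.0)
    kernel = np.ones(fade_len) / fade_len
    gain = np.convolve(target, kernel, mode="same")
    return ambience * gain
```

The keyframes Premiere Pro and Audition generate correspond to the breakpoints of exactly such a gain envelope, which is why the result remains hand-editable afterwards.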

NUGEN AUDIO VISLM

NUGEN Audio's VisLM loudness meter is emblematic of the company's commitment to continually stay ahead of the curve, both with regard to technology and creativity. In the world of loudness management, standards are constantly evolving, and NUGEN Audio frequently updates its offerings to reflect the changing needs of the industry. VisLM supports up to 10 channels of audio, making it the first loudness meter to accommodate loudness management for 7.1.2 surround sound – the default format for the increasingly popular Dolby Atmos bed tracks. Previously, there had been no loudness management solution for 3D surround audio, and standard practice was simply downmixing to 5.1 or 7.1 prior to loudness measurement. VisLM was the first loudness meter to consider the vertical axis and to include a dialogue-gated LRA measurement, compliant with Netflix's Sound Mix Specifications & Best Practices document. In fact, NUGEN worked closely alongside Netflix in developing its loudness standards and specifications to ensure practical and straightforward application for content creators. When Netflix updated its loudness standards again in 2019, NUGEN Audio quickly followed suit with an update.

NUGEN's latest VisLM software release (September 2019) adds a new 'Flag' feature. This noteworthy update to the interface allows users to easily navigate between True Peak alerts, short-term/momentary loudness alerts and user-defined manual flags for other points of interest. This is a significant time-saving tool for correcting loudness and True Peak errors in a mix, particularly for longer projects. Those most recent updates notwithstanding, VisLM has practically revolutionised loudness management with its ReMEM and History functions. These give users the ability to save up to 24 hours of loudness data in a project, in addition to loudness overdub for minor adjustments, which eliminates the need for end-to-end remeasurement via a hardware loudness meter. It also allows for detailed loudness analysis of the full project timeline.

VisLM is available in AAX, VST, VST3, AU and AudioSuite formats in both 64-bit and 32-bit versions, as well as a 32-bit-only RTAS version. Additionally, VisLM is available as a standalone application for Windows and macOS for real-time monitoring. For Avid HDX hardware compatibility, the plug-in is available in a DSP version. The software supports ITU-R BS.1770, revisions 1, 2, 3 and 4, as well as all recommendations and guidance based on the international standard: ATSC A/85 (CALM Act); EBU R128; EBU R128 s1; ARIB TR-B32; OP-59; AGCOM 219/9/CSP; Portaria 354; DPP (BBC, ITV, C4, C5, S4C); and Netflix. Additional support for Leq(a) and Leq(m) measurement in both TASA and SAWA variants is also included. NUGEN Audio keeps on top of its customers' needs via regular face-to-face contact at conventions, which it uses as a basis to brainstorm the technologies, such as VisLM, that are most necessary to achieving high-quality results. The company remains personable and approachable, listening carefully to feedback and prioritising the needs of the customer, and has rolled out this latest version of VisLM as a direct result of user feedback.
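At the core of every meter in this space is the ITU-R BS.1770 measurement: K-weight the signal, take mean-square power over 400 ms blocks, then gate and average. A simplified single-channel sketch, valid at 48 kHz only and using the K-weighting biquad coefficients published in BS.1770-4 (VisLM's actual implementation is of course far more complete):

```python
import numpy as np

# BS.1770-4 K-weighting for 48 kHz: high-shelf stage, then high-pass stage
K_STAGES = [
    ([1.53512485958697, -2.69169618940638, 1.19839281085285],
     [1.0, -1.69065929318241, 0.73248077421585]),
    ([1.0, -2.0, 1.0],
     [1.0, -1.99004745483398, 0.99007225036621]),
]

def biquad(x, b, a):
    """Direct-form biquad filter."""
    y = np.empty(len(x))
    x1 = x2 = y1 = y2 = 0.0
    for n, xn in enumerate(x):
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y[n] = yn
    return y

def integrated_loudness(mono, fs=48000):
    """Integrated loudness (LUFS) of a mono signal: K-weighting, 400 ms
    blocks with 75% overlap, -70 LUFS absolute gate and -10 LU relative
    gate, following BS.1770-4 for a single channel."""
    for b, a in K_STAGES:
        mono = biquad(mono, b, a)
    block, hop = int(0.4 * fs), int(0.1 * fs)
    ms = np.array([np.mean(mono[i:i + block] ** 2)
                   for i in range(0, len(mono) - block + 1, hop)])
    lk = -0.691 + 10 * np.log10(ms + 1e-12)      # per-block loudness
    ms = ms[lk > -70.0]                           # absolute gate
    rel = -0.691 + 10 * np.log10(ms.mean()) - 10  # relative gate threshold
    ms = ms[-0.691 + 10 * np.log10(ms) > rel]
    return -0.691 + 10 * np.log10(ms.mean())
```

A full-scale 997 Hz sine should read close to -3.01 LUFS, the reference value given in the standard.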


The new Sennheiser XS Wireless Digital Series enables content creators to effortlessly step up their video production with wireless audio. XSW-D eliminates all the pain points that might make first-time wireless users shy away: licensing, time-consuming settings and complicated operation. Instead, XSW-D is instantly ready to record audio directly into the (video) camera, provides simple one-touch operation and works on 2.4 GHz for worldwide use without a license, making wireless audio as straightforward as possible. Added to this is a range of up to 75 m and antenna technology that ensures reliable transmission even if there is no direct line of sight. The transmitter and receiver work for up to five hours on a single battery charge, and can conveniently be recharged via USB. Creators can choose from four systems developed for typical usage scenarios, or combine their very own system from the individually available components and add mics of their choice. As all XSW-D components – including the ones from the music systems – are compatible, users can re-use and newly combine existing XSW-D gear as desired. Additionally, any existing lavaliers or handheld dynamic microphones can be re-used.

Audio for video systems at a glance

Videographers and content creators can choose between a Portable Lavalier Set complete with an ME 2-II clip-on microphone, a Portable Interview Set for use with an existing dynamic microphone, a Portable Base Set for use with an existing lavalier microphone, and the Portable ENG Set, which contains transmitters for a lavalier microphone (ME 2-II included) and an existing handheld dynamic microphone. All sets come with connection accessories for the camera receiver and USB charging cables.

XSW-D TECHNICAL DATA

Frequency range: 2,400 – 2,483.5 MHz
Transmission power (EIRP): max. 10 mW
Modulation: GFSK with TDMA
Codec: aptX® Live
Audio frequency response: XLR/3.5 mm jack: 80 – 18,000 Hz; 6.3 mm jack: 10 – 18,000 Hz
Audio output: max. 12 dBu
Signal-to-noise ratio: ≥ 106 dB
THD: < 0.1 %
Audio latency: < 4 ms
Battery pack: Li-Ion, 3.7 V nominal voltage, 850 mAh cell capacity
Operating time: up to 5 hrs
Charging time: typ. 3 hrs
Input voltage of USB-C charging interface: typ. 5.0 V
Charging temperature: 0 °C – 60 °C
Dimensions: XSW-D XLR FEMALE TX: approx. 102 x 24 x 28 mm; XSW-D MINI JACK TX: approx. 86 x 24 x 28 mm; XSW-D MINI JACK RX: approx. 86 x 24 x 28 mm

VERITONE ATTRIBUTE

Veritone Attribute gives radio and television broadcasters the ability to measure the effectiveness of advertising campaigns in near real-time. This AI-powered software product simply and intuitively makes connections between advertising campaigns and the actions the target audience has taken on the advertiser's website within the advertiser's chosen custom attribution timespan. Attribute collects data not only on pre-recorded spots, but also on live reads, organic mentions, and in-screen content, by leveraging Veritone's proprietary AI technology. Veritone Attribute enables broadcasters to quickly compare ad campaigns to the respective advertiser's website analytics, establishing a path to broadcast media attribution for website traffic and purchases. Sales representatives can monitor an individual customer's attribution analytics in a near real-time dashboard of top-level summary metrics as well as detailed web, daypart, placement, and creative analytics. This level of analysis allows sales teams to demonstrate ROI and optimize campaigns for greater effectiveness in activating their advertisers' desired target audiences. Additionally, data-driven branded reports can be shared with clients with minimal preparation in the form of auto-generated, branded PDF and interactive PowerPoint reports (data manipulation is not possible in the application, allowing for complete transparency).

The solution enables broadcast sales and campaign managers to track near real-time analytics for not one but all of the advertisers in their portfolio, while making the onboarding process quick and easy for their advertisers. For each advertiser, the campaign manager is guided through a simple, three-step setup: entering the advertiser's details, selecting the appropriate ad types by member station or channel, and inviting the advertiser to connect their Google Analytics account to Veritone Attribute via an automated email workflow. Veritone Attribute gives broadcasters and their advertisers tools to make connections between ad placement and the actions the target audience has taken on the advertiser's website. As a result, broadcasters can advise advertisers throughout their campaigns on how best to use their ad dollars to accelerate purchases.

Systematic ad verification. Automatically verify pre-recorded spots with playout log and audio fingerprint monitoring, as well as live reads through natural language processing (NLP)-driven watchlists, in near real-time.

Demonstrate ad efficacy. Measure online response to customer advertising campaigns with intelligent correlation of the advertiser's Google Analytics website data and broadcaster playout logs.

Measure organic response. Through Veritone's proprietary artificial intelligence technology, aiWARE, create watchlists to automatically monitor broadcasts for organic customer brand or product mentions, then measure their impact.

Simple customer reporting. Programmatically develop customer attribution reports with a near real-time advertising analytics dashboard organizable by advertiser or campaign view.

Optimize ad placement. Leverage programmatic campaign response data to empower customers to perform multivariate tests, optimizing creative, messaging, placement, duration, and daypart to drive greater customer ROI.
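The core spot-to-web correlation is simple to state: a web session counts as attributed if it begins within the advertiser's attribution window after any verified airing. A toy illustration of that matching step (the window length and data shapes are hypothetical, not Veritone's model, which works against Google Analytics data and playout logs):

```python
from datetime import datetime, timedelta

def attributed_sessions(airings, sessions, window_minutes=10):
    """Count web sessions starting within window_minutes after any spot
    airing. Both arguments are lists of datetimes."""
    window = timedelta(minutes=window_minutes)
    return sum(any(a <= s <= a + window for a in airings) for s in sessions)
```

Comparing this count against a baseline of sessions outside any window is what lets a dashboard express lift per campaign, daypart, or creative.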



SmartRadio is a cloud- and web-based radio-as-a-service platform. Today, media companies find themselves in transitional processes where they have to think from a brand perspective to create a richer content experience. The traditional technical back-end, however, does not allow for an optimal connection to their target groups, as consumers are embracing online more than ever. More often than not, there is a lack of the right personnel and funding to make this transition possible. It is made more cumbersome by the fact that consumers expect content from both traditional and new media. This dual way of thinking is where Broadcast Partners can help your organisation, with SmartRadio. SmartRadio is a 100% web- and cloud-based radio-as-a-service platform, consisting of newly developed microservices running in the cloud. All services are hybrid and prepared for all kinds of API connections, making the platform silky smooth, flexible, connectable and customisable. It comes in modular form, allowing you to scale up or down on a monthly basis. We can install it in your private cloud or in a public cloud environment. Our goal is to help existing and new media organisations to innovate and stay in contact with their audience without the need for large investments. SmartRadio enables a true data-driven production environment.

What makes SmartRadio unique is that its foundations lie in our 40 years of experience in broadcast technology, while the most recent development paths have been orchestrated by customers visiting our booths at IBC, NAB or Radiodays, and by our end users, dealers and partners. Rather than creating something that we hope the market wants, we have asked the market to show us what it wants and have either built it or sought out the best partner for the job. We now offer the best modular software solutions, not only from our own team of eight developers, but also from our technical partners, who are often market leaders in their specific fields. A connection with Radiomanager, for example, can be made to enable multichannel publication. Our connection with Eumedianet in the Netherlands makes production for larger networks more efficient by using their innovative new newsroom products. Another unique feature is the cloud-based modulation solution, called Smart Processing. Powered by Orban Labs, three options are available to the user: Basic, Medium and High-End. These options give you the chance to create your own unique sound for online and digital radio channels, making you sound better than your competitors. These are just three items from our long list of modules to improve or enhance your media company. In addition to the technical solutions, we also offer the consultancy required to use all these possibilities in a smart way. Working hard is great; working smart is better. For more information, check our website, feel free to ask for a demo or temporary licence, or visit the Broadcast Partners booth at IBC2019. We look forward to meeting you!




DEVA Broadcast’s latest DB4005 model is the ultimate monitoring tool. Firmly in a class of its own, this third-generation digital FM Radio modulation analyzer and monitoring receiver redefines the idea of accuracy and versatility, combining the dependability and top performance associated with all our products to date with the latest cuttingedge technology. The DB4005 uses sophisticated DSP algorithms to achieve all signal processing. Upon demodulation of the FM signal, the RF signal is digitalized by the SDR FM tuner, while the high precision of the powerful digital filters enables the FM signal to be accurately and repeatedly analyzed with each device. A revolutionary feature of the DB4005 is the MPX input, thanks to which you can monitor external composite signals, no matter if they are from a composite STL receiver/stereo FM encoder, or from an off-air source. With its immense processing power, it provides detailed readings of all the multiplex FM signal components, while all measurements are refreshed simultaneously and synchronously. Another impressive asset of the

DB4005 is the Loudness Meter – a great feature which allows for measurements to be shown as defined by ITU BS.17704 and EBU R128 recommendations, as the product supports both standards. With this tool, you can measure the important parameters of your own signal and also keep an eye on other stations. Flexibility of the remote connection and control of the unit are ensured by the USB and LAN communication interfaces. The DB4005 is the most cost-effective way for regular monitoring of the quality and continuity of your station and up to 50 other FM Radio Stations, and provides many features such asTCP/IP and GSM connectivity, audio streaming, and automatic alerts for operationoutside of predefined ITU-R ranges. If transmission fails, maintenance staff will beimmediately alerted via e-mail, SNMP, or SMS, so normal service can be restored as soon as possible. The DB4005 has an easy to read, high-resolution OLED graphical display and ultra- bright bar graph LED 60 segment indicators. The built-in oscilloscope represents the observed signal change over time and helps you visualize the most important
signals in the demodulation process and stereo decoding. In addition to the Oscilloscope mode, the Spectrum Analyzer mode allows for spectral analysis of the input signal. MPX power and all other level measurements are supported by measurement history data. Additionally, RDS information contained in the processed MPX signal is easily visualized and represented as RDS/RBDS data and detailed RDS/RBDS statistics. All the channel measurements and logs are saved in the internal device memory. The DB4005 boasts a built-in FTP system, which manages the files in accordance with an assigned schedule. All the collected information is centralized in a database and can be reviewed, played back, and sent automatically to the qualified staff. The interactive software-based Log Viewer tool allows very detailed control and analysis of any station from the list of monitored channels. The DB4005 takes the best features of its predecessors and builds on them with some spectacular new characteristics for a truly remarkable product of multifaceted quality and reliability.
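For background, the loudness figure such a meter reports is a mean-square level expressed in LUFS. The sketch below is only a simplified illustration of the BS.1770-style calculation — it omits the K-weighting pre-filter and the gating that the standard requires — and is not DEVA's implementation:

```python
import math

def momentary_loudness_lufs(samples):
    """Loudness of one 400 ms block, in LUFS.

    Simplified illustration: the K-weighting pre-filter and gating
    required by ITU-R BS.1770-4 / EBU R128 are deliberately omitted.
    """
    if not samples:
        return float("-inf")
    mean_square = sum(s * s for s in samples) / len(samples)
    if mean_square == 0.0:
        return float("-inf")
    # -0.691 dB is the calibration constant from the BS.1770 formula.
    return -0.691 + 10.0 * math.log10(mean_square)

# A full-scale square wave measures -0.691 LUFS.
block = [1.0, -1.0] * 9600   # 400 ms at 48 kHz
print(round(momentary_loudness_lufs(block), 3))   # -0.691
```

A real meter like the DB4005 additionally applies the K-weighting filter per channel and gates out silent blocks before integrating, which is what makes the standardized figure comparable between devices.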




GatesAir’s Intraplex® Ascent is a scalable, multichannel Audio over IP transport solution that addresses the convergence of broadcast operations with IT infrastructure. Intraplex Ascent offers a direct connection to traditional digital and analog audio interfaces, and is compliant with both the AES67 standard and today’s leading AoIP networking solutions (Ravenna, Livewire+ and Dante). Ascent is available in two form factors: a 1RU server, with configurable options for physical and AES67 channels; and a software-only solution that operates in a virtualized container. Both versions support up to 32 audio channels (AES3, AES67, analog) and are interoperable with most Intraplex Audio over IP codecs. The software-defined innovation is
GatesAir’s first Intraplex system to live on a COTS x86 server, and provides broadcasters with a highly scalable, redundant and cloud-based transport platform for multichannel contribution and distribution. The platform streamlines installation and management by removing the need for many separate codecs and auxiliary hardware components, freeing multiple equipment racks in enterprise-level facilities. Ascent adds further value by allowing users to manage many Secure Reliable Transport (SRT) streams on a centralized platform – an industry first in Audio over IP networking for broadcasters. SRT is a low-latency, open-source media streaming protocol which provides packet encryption and re-transmission capabilities for reliability and security.

To further strengthen stream robustness and reliability, Ascent integrates Dynamic Stream Splicing (DSS), an Intraplex technology that diversifies SRT data across redundant networks. The addition of DSS to SRT adds protection against certain types of packet losses and complete network failures. As an early innovator of Audio over IP codecs and networking solutions for broadcasters, GatesAir is taking the next logical step for our customers with direct integration into the IT infrastructure. With its Dynamic Stream Splicing application and SRT capability to fortify stream security and reliability, Ascent offers a simple platform to manage many channels coming in and out of the studio or headend, all on a single, highly redundant, multi-core server.
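The core idea behind splicing a stream across redundant networks can be sketched in a few lines: tag each packet with a sequence number, send a copy down every path, and de-duplicate on receive. This is a simplified illustration of the concept, not GatesAir's actual DSS implementation; plain lists stand in for UDP sockets so the sketch runs anywhere:

```python
import struct

def splice_send(payload: bytes, seq: int, paths):
    """Send one packet on every redundant path (lists standing in for
    sockets), each copy tagged with the same 32-bit sequence number."""
    packet = struct.pack("!I", seq) + payload
    for path in paths:
        path.append(packet)

def dedupe_receive(paths):
    """Merge the redundant paths, keeping the first-seen copy of each
    sequence number and restoring original order."""
    seen, out = set(), []
    for path in paths:
        for packet in path:
            seq = struct.unpack("!I", packet[:4])[0]
            if seq not in seen:
                seen.add(seq)
                out.append((seq, packet[4:]))
    return [payload for _, payload in sorted(out)]

# Path A drops packet 1 and path B drops packet 2, yet the merged
# output still recovers the complete stream.
a, b = [], []
for seq, chunk in enumerate([b"x0", b"x1", b"x2"]):
    splice_send(chunk, seq, [a, b])
del a[1], b[2]
print(dedupe_receive([a, b]))   # [b'x0', b'x1', b'x2']
```

As long as any one path delivers a given sequence number, the stream survives, which is why the technique protects against both scattered packet loss and a complete failure of one network.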



RADIOMAN 6 LIVE Jutel RadioMan 6 Live is the radio industry’s first complete virtual browser-based radio production and playout system in the cloud. The goal of the RadioMan 6 project was to deliver on the promise of “Radio as a Service” with a virtual browser-based radio production, editing and playout system, where the audio processing is done in the cloud so that no specific hardware is needed as part of the studio and playout infrastructure. The new RadioMan 6 Live offers new cloud-based tasks: 1. audio contribution streaming, on-air playout and production mixing in the cloud; 2. web-based audio editing with no browser add-ons needed. RadioMan 6 allows local studios to deploy a radio station with only two simple requirements: 1. an internet connection; 2. a room with decent acoustics. Modern cloud and web-based architectures provide a new-generation broadcast production and playout platform that supports workflows from simple remote broadcasts to complex studio-operated mixed programs. The RadioMan user interfaces use native HTML5 so that the system is browser agnostic. The browser-based front-end supports all radio playout, audio editing, planning and production tasks.

The system utilizes cloud services such as Amazon Web Services and uses web-native technology including HTML5, REST APIs, WebSockets, ActiveMQ messaging and WebRTC audio streaming. The audio streaming between the workstations and the back-end processes was implemented with WebRTC and Opus audio encoding. WebRTC was chosen over standard software-based broadcast codecs because no additional add-ons or external software modules need to be installed. The back-end services are deployed in AWS in a clustered configuration with a virtual load balancer between the browser front-end and back-end processes. The playout processes were extended with controllable audio processing managing level controls, mixing, autoducking and audio streams. ActiveMQ messaging enables fast control of back-end processes for playout. Radio stations deploy workstations
equipped with USB-connected microphones and loudspeakers or headphones. The HTML5 OnAir control screen is equipped with simple microphone and level controls. The feedback from the playout mix to the workstations supports N-1 configuration. The system architecture allows the back-end processes to be installed either on a dedicated local server, on virtual machines, or in the cloud environment. The playout server can be configured either as a stand-alone unit, on a virtual server with IP-audio out (AES67, Livewire, etc.) or implemented in the cloud. Positive results from RadioMan 6 system tests led to the launch of a live broadcast production with a major national broadcaster in June 2019. The launch was an overwhelming success, and the national broadcaster is planning additional deployments of the RadioMan 6 cloud solution. The launch has allowed the radio station to benefit from a streamlined web architecture that supports flexible browser-only studios, traditional studio implementations as well as touch-screen virtual studios with IP audio. This technological innovation will allow radio stations to streamline their broadcast operations across various locations, enabling them to cut costs and simplify their radio broadcast workflow, automation and distribution.
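The N-1 (mix-minus) return feed mentioned above is simple to state: each contributor hears the full program mix minus their own signal, so they never hear themselves echoed back with latency. A toy per-sample illustration (a hedged sketch, not RadioMan's actual mixing code):

```python
def n_minus_one(feeds):
    """Return the N-1 (mix-minus) feed for each source: the full mix
    minus that source's own contribution.

    feeds maps source name -> list of sample values."""
    full_mix = [sum(frame) for frame in zip(*feeds.values())]
    return {
        name: [mix - own for mix, own in zip(full_mix, feed)]
        for name, feed in feeds.items()
    }

# Binary-friendly sample values keep the arithmetic exact.
feeds = {
    "studio":   [0.25, 0.25],
    "remote_1": [0.5,  0.0],
    "remote_2": [0.0,  0.125],
}
returns = n_minus_one(feeds)
print(returns["remote_1"])   # [0.25, 0.375] -> studio + remote_2, no self
```

With many remotes, the advantage of computing `full_mix` once is that each return feed costs only one subtraction per sample instead of re-summing N-1 sources per contributor.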



NEOWINNERS PORTAL Collect your listeners’ consent to use their data (GDPR) and have them enter their information into your NeoWinners database themselves! NeoWinners Portal is a service provided by NeoGroupe that can be added to your NeoWinners and NeoScreener suite of applications. It lets you comply with the following GDPR rights when using your listeners’ personal data: •Right to list which data is stored. •Right to modify data. •Right to export data. •Right to delete their personal data. •Collection of their consent to use the data. All you have to do is let NeoWinners automatically send an email and/or an SMS to the listener. Initially, your staff only collects a phone number or an email address; all the rest is performed by the winners themselves. It saves time (your team does not enter data), and it makes you compliant. Your DPO will love it! NeoWinners / NeoScreener is the professional callers / winners / prizing / contests / scripts application used by hundreds of broadcasters worldwide since 2002. Please visit or call NeoGroupe at +33 9 72 23 62 00.



AXIA QUASAR AOIP CONSOLE With cosmic precision, otherworldly sound, and real star power, Axia’s new Quasar sixth-generation mixing console draws upon the Telos Alliance’s rich history as the inventor of AoIP for broadcast, with more than 9,700 AoIP consoles on-air worldwide. Axia has channeled all that experience into this new flagship console, consolidating its native AoIP architecture and refining it for the ultimate user experience with limitless production possibilities for radio and specialized TV applications. Sleek, Ergonomic Design Quasar features a sleek new look and extremely high-quality components, rugged enough for a lifetime of uninterrupted use. Designed based on extensive global customer feedback and ergonomic studies, Quasar has an easy-to-operate touchscreen user interface (no external display required!) that operators can also access remotely via any HTML5 device. The absence of an overbridge makes for easy desk installation, and the console is fanless and modular, with redundant load-sharing power supply units. Customizable and Easy to Use Quasar makes the operator’s job dramatically easier, with new Source profiles (for source-associated logic automation), automatic mix-minus, and automixing on all channels. Extensive metering is built into the surface right
where it needs to be—on every channel display and next to each fader. Users can customize their Quasar surface thanks to user-assignable buttons in the Master touchscreen module and in every channel strip. For TV applications, the powerful new Quasar Engine delivers 64 stereo input channels—all with robust DSP processing—and loudness metering on all outputs. Four programmable Layers allow the user to control all channels, including DSP, even on smaller surfaces. Mature, Reliable AoIP Technology Quasar gives operators confidence
with world-renowned Axia audio quality and reliability. The Engine’s native AoIP processing, based on a server-class hardware platform, ensures high-performance audio. The console’s sixth-generation technology is mature and sophisticated, offering extreme reliability, with system modularity minimizing single points of failure. With Quasar, Axia has unravelled the mysteries of the AoIP universe.




The Tieline Gateway heralds a new era in multichannel IP audio codec streaming. The Gateway is a compact and powerful multichannel IP audio transport solution for radio broadcasters. The Gateway codec streams up to 16 IP audio channels with support for AES67, AES3 and analog I/O as standard. The Tieline Gateway supports up to 16 mono channels or 8 stereo streams of bidirectional IP audio in 1RU to increase efficiency and reduce rack space requirements for engineers. It interfaces with legacy analog and AES/EBU digital sources, as well as newer broadcast plants with AES67 IP audio infrastructure. An optional WheatNet-IP interface is also available via a rear panel module slot. Tieline Gateway is perfect for STL, SSL and audio distribution applications, with support for multicasting and multiple unicasting technologies. It is also perfect for managing multiple incoming remotes at the studio and can simultaneously connect to up to 16 hardware codecs or
Report-IT Enterprise smartphone app users. Its feature-rich and compact design is interoperable with all Tieline IP codecs and compatible over SIP with all EBU N/ACIP Tech 3326 compliant codecs and devices. Featuring Tieline’s revolutionary SmartStream PLUS redundant streaming, multiple redundant streams can be configured for individual mono or stereo connections. The Gateway codec also supports Tieline’s Fuse-IP data aggregation technologies. Tieline specializes in high quality and low latency audio transport over IP. A comprehensive suite of algorithms is included and the codec is backward compatible with all Tieline IP codecs. The Gateway codec front panel features a colour LCD screen and keypad, as well as PPMs. It is also configurable using an embedded HTML5 Toolbox Web-GUI interface, which allows full device configuration and control. Plus, the Gateway can
also interface with the TieLink Traversal Server, which supports call groups for simpler connections between codecs. It is also fully compatible with the Tieline Cloud Codec Controller, delivering complete remote control of the Gateway codec from anywhere with an internet connection.



We wondered: Could we combine what we know about AoIP with what we know about on-air audio processing to improve the quality of streamed music? Could we give streamers a fuller sound, and bring out those crisp highs and deep bass that we’ve been able to generate with our audio processor designs? Finally, could we then push all that through a codec, whose job is to lose as many bits as the pipe can handle, often with no regard for which ones? It turns out that with the right amount and type of audio processing, you can play to the psychoacoustic characteristics of lossy codecs. Wheatstone’s new StreamBlade, introduced at IBC 2019, is a multi-stream appliance for our WheatNet-IP audio network that has selectable Opus, AAC and MP3 encoding, along with AGC, peak limiting and other processing tools specifically designed to optimize the sound quality of encoded audio content. How have we done this?

1) Aggressive compression adds intermod products, which the codec has to spend bits on instead of the signals that are actually part of the program. StreamBlade uses RMS density calculations in its five-band AGC design to invisibly smooth out processing artifacts, and instead feed the encoder a steady diet of consistent audio.

2) Overshoot is a brick wall; the encoder will not accept any signal over 0 dBFS, which is why you need to limit peaks. StreamBlade uses a two-band final limiter with sum/difference processing developed specifically for streaming applications. It’s not density control; it’s peak control, so you have input to a codec that is friendly, even at low bitrates.

3) Clipping creates harmonics that weren’t in the original program; the encoder doesn’t know any better and throws bits at them. But those byproducts can sound much, much worse once the codec algorithm gets through with them. The good news is that streaming doesn’t come with the pre-emphasis can of worms that got broadcasters into heavy clipping in the first place, so clipping isn’t necessary to keep streamed overshoots in check. A good limiter will suffice.

4) Big swings in L-R can trick the codec into disproportionately applying bits to L-R data rather than listenable program content. StreamBlade’s stereo width management produces a perceived stereo field without skewing the codec algorithm away from original content.

5) StreamBlade also has selectable high and low pass filters for removing noise, 4-band EQ for the best possible quality at a variety of bitrates, plus a two-stage phase rotator to correct voice asymmetry.

6) AoIP lets you take music programming straight from the playout system to air (or stream) without A/D/A conversions. A bump in program quality is one of the more important benefits of installing an AoIP system, so why not extend that benefit to your streams as well? Wheatstone’s new StreamBlade accepts eight input sources of native WheatNet-IP audio directly from a soundcard or AoIP driver, each capable of four outputs for a total of 32 output streams.
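Stereo width management of the kind described in point 4 is, conceptually, sum/difference (mid/side) processing: convert L/R to mid and side, shrink the side component, and convert back. A minimal sketch of that idea (an illustration of the general technique, not Wheatstone's implementation):

```python
def narrow_stereo(left, right, width=0.7):
    """Sum/difference (mid/side) width control: shrink the side (L-R)
    component so the codec doesn't burn bits on exaggerated stereo
    difference signals. width=1.0 leaves the image unchanged; 0.0 is mono."""
    out_l, out_r = [], []
    for l, r in zip(left, right):
        mid = (l + r) / 2.0          # sum (L+R) component
        side = (l - r) / 2.0 * width # difference (L-R) component, scaled
        out_l.append(mid + side)
        out_r.append(mid - side)
    return out_l, out_r

# Hard-panned material (pure L-R energy) is pulled toward the center.
l, r = narrow_stereo([1.0, 0.0], [0.0, 1.0], width=0.5)
print(l, r)   # [0.75, 0.25] [0.25, 0.75]
```

Because the mid component carries the "listenable program content" and the side component carries only the stereo difference, scaling side energy down directly reduces the L-R swings that would otherwise skew the codec's bit allocation.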



The Audemat DAB Probe is a complete QoS monitoring solution for DAB, DAB+ and DMB radio applications. Designed to monitor DAB signal quality and service continuity, at the transmitter site or in a coverage area, the Audemat DAB Probe enables remote monitoring of a set list of channels and allows users to verify the conformity of their DAB network with both legislation and their broadcasting needs. Installed in SFN or MFN networks (Single or Multiple Frequency Networks), the Audemat DAB Probe is feature-packed and ready to deliver optimal monitoring, measurement, and analysis performance. Some of its features include a user-friendly web interface, alarm notification by email or SNMP traps, a telemetry board (via ScriptEasy) and audio output connectors. The advanced software tools provide deep signal and content analysis with impulse response representation, TII, audio or ETI recording, and more. Also designed for optimal monitoring of the user experience (QoE), Audemat
DAB Probe includes visual slide-shows and Dynamic Label and Services (DLS) display to enable users to hear and see in real-time the same content as their audience of listeners. “We developed this solution to meet the needs of the growing DAB market. Amongst other monitoring solutions available today, our product stands out as a complete solution, packed with rich features, at a very competitive cost,” explains Philippe Missoud, Product Manager at WorldCast Systems. Recently improved to continuously meet customer needs, Audemat DAB Probe now includes the following important additions: •Decoding of FIG tables for a more detailed analysis of the streaming content received •Display of real audio and PAD bit rates so broadcasters can visualize the real audio quality and have the possibility to listen to radio remotely using native codecs (MP2 or AAC+) or with an MP3 compression of 8 kbps to 320 kbps
•Optional card for ETI output now available for a connection to analysis equipment or recordings •Management of telemetry inputs/outputs to monitor and control the on-site measurement equipment or probes Highly scalable, the product can continuously be enhanced with new software versions or options through a simple, remote upgrade. Audemat DAB Probe is the result of the company’s 25 years of expertise developing analog and digital signal monitoring solutions for radio & TV. The Audemat range is recognized worldwide for its level of quality, accuracy, and reliability.



DTS CONNECTED RADIO With vehicle connectivity increasing rapidly around the globe, the need for compelling and engaging user experiences in the vehicle has never been more critical. DTS Connected Radio is helping broadcasters meet this need by delivering a best-in-class global hybrid radio solution through the vehicle’s existing radio footprint. Leveraging vehicle connectivity, DTS Connected Radio’s innovative and seamlessly integrated technology enhances the over-the-air radio experience for listeners with new standout features that broadcast alone cannot deliver. DTS Connected Radio provides listeners with an all-new experience, with even more compelling ways to engage with their favorite stations and access to unique content for a truly differentiated listening experience. While offering this enriched and engaging experience for listeners, DTS Connected Radio also provides numerous benefits for broadcasters, including real-time broadcast metadata for all programming types, and the ability to gather actionable new insights into how listeners are engaging with broadcast content in the vehicle. Importantly, all of this is made possible while ensuring content protection and minimal impact on workflows, delivering one of the most exciting innovations in in-vehicle radio technology, with a clearly
differentiated way for their listeners to access the content they love and have now come to expect from their devices. At IBC 2019, Xperi previewed and demonstrated the competitive benefits of the DTS Connected Radio ecosystem, which is set to launch in 2020. The solution supports all global broadcast standards: analog, DAB+, and HD Radio.


RadioPix is a voice-automated integrated production system for visual radio applications. Ideal for radio studios using fixed or PTZ cameras, RadioPix features a dedicated user interface designed for easy setup and operation of dynamic yet automated video-follow-audio productions. Sophisticated software switches cameras automatically, eliminating the need for a live operator. “With more than 200 radio stations
in Europe already using Broadcast Pix Integrated production systems, we felt it was time to produce a dedicated product for visual radio featuring a complete toolset and a streamlined user experience,” said Tony Mastantuono, product manager, Broadcast Pix. Using Broadcast Pix’s advanced macros, RadioPix can roll clips and animations, add or remove titles, select camera presets, and even execute picture-in-picture compositions based on
the active microphone. Multiple macros can be assigned to each microphone, which allows the system to select between camera shots to create more dynamic productions. In addition, users can set conditions to avoid camera changes for coughs or one-word comments, and create dynamic camera moves within long discussions. The host can also override the automated system using a smartphone, tablet, or laptop.
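The hold-off behavior described above — ignoring a cough or one-word comment rather than cutting to that camera — boils down to a duration threshold on mic activity. An illustrative model of that decision logic (hypothetical, not Broadcast Pix's actual code):

```python
def pick_camera_cuts(mic_events, hold_ms=800):
    """Video-follow-audio with a hold time: cut to a mic's camera only
    once that mic has been active longer than hold_ms, so brief noises
    don't trigger a switch.

    mic_events: list of (mic_name, active_duration_ms) in time order.
    Returns the sequence of camera cuts taken."""
    current = None
    cuts = []
    for mic, duration_ms in mic_events:
        # Too short to be real speech: treat as a cough / interjection.
        if duration_ms < hold_ms:
            continue
        # Only cut when a different mic takes over the conversation.
        if mic != current:
            current = mic
            cuts.append(mic)
    return cuts

events = [("host", 2000), ("guest", 300),   # 300 ms cough: ignored
          ("guest", 1500), ("host", 5000)]
print(pick_camera_cuts(events))   # ['host', 'guest', 'host']
```

A production system would evaluate this continuously against live mic levels rather than pre-tallied durations, but the threshold-plus-state-change structure is the same idea.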

OMNIPLAYER TEXT-BASED AUDIO EDITING Wouldn’t editing radio items be so much easier if you could just read what was being said? Everyone knows how much hassle it can sometimes be to find the right audio fragments. The deadline is approaching. The producer is getting nervous. And you’ve got a whole 90 minutes of material to work through to create a short item. Speed and efficiency are essential. Several good soundbites during the interview caught your attention. But when were they said? Thanks to OmniPlayer text-based audio editing, these problems are a thing of the past. While you’re sipping your coffee as your recordings are being uploaded, OmniPlayer’s hard at work turning the words into text. So you can read everything that was said, select what you want to use and, voilà, all that’s left is to put it in the right order, fine-tune the whole thing and send it to the producer. Completely ready for broadcast. Naturally, you can search both text and audio to jump directly to the parts you’re looking for. And that’s not all. During a live broadcast, you can instantly go back and find that fantastic soundbite, clip it out and share it on social media. One of the great beauties of text-based audio editing is that it can also be used during recording. This revolutionary solution from OmniPlayer adds ease to a radio editor’s
daily life. The ingenious technology allows you to work faster, more efficiently and in a more targeted way. Text-based audio editing makes it possible for you, as a journalist, to select just the right clip from the text to use in your item. After that, it’s a piece of cake to complete your editing via OmniPlayer’s built-in SmartTrack audio editor. Thanks to the availability of faster, better quality Speech to Text technologies and smart API links, it’s now possible to compose radio items in real-time and have the text version integrated in OmniPlayer’s SmartTrack editor. The audio and the associated written text are now fully linked, with the selected text immediately visible on the audio track and vice versa. The intuitive user interface makes it very simple to mark parts of text to use, as well as marking the text corresponding with a selected section of audio. This has all
been achieved through Zoom Media’s integration of speech-to-text software, with Google providing the transcribing ability for English. Availability of the Dutch language in this new feature was announced for the first time at IBC 2019. More languages will follow. OmniPlayer’s text-based audio editing is unique in the radio world. Brilliant in its simplicity. A revolutionary use of technology. OmniPlayer is always looking for innovative ways to make creating radio simpler, more efficient and more innovative. Our motivation is fuelled by the high demands placed on us by stations, driven by the many changes in recent years. Thanks to text-based audio editing, the way radio is made will never be the same again. Are you ready to push the boundaries of radio?
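Under the hood, linking a text selection to an audio span relies on per-word timestamps returned by the speech-to-text engine. A minimal sketch of that mapping (illustrative only, not OmniPlayer's code; the transcript structure is an assumption):

```python
def clip_from_selection(transcript, start_word, end_word):
    """Map a text selection to audio in/out points.

    transcript: list of (word, start_sec, end_sec) tuples as a
    speech-to-text engine might return them, in spoken order.
    Selecting words start_word..end_word (inclusive indices) yields
    the matching audio span."""
    selected = transcript[start_word:end_word + 1]
    if not selected:
        raise ValueError("empty selection")
    return selected[0][1], selected[-1][2]

transcript = [("radio", 0.0, 0.4), ("will", 0.4, 0.6),
              ("never", 0.6, 1.0), ("be", 1.0, 1.1),
              ("the", 1.1, 1.2), ("same", 1.2, 1.7)]
print(clip_from_selection(transcript, 2, 5))   # (0.6, 1.7)
```

The same table works in reverse: given a playhead position, a lookup of which word's interval contains it highlights the corresponding text, which is what keeps the text and the audio track selections in sync.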

TELOS ALLIANCE HYPERSTUDIO In much the way the IT industry influenced Telos to create broadcast AoIP, we now march towards virtualization in the broadcast industry. Automation systems, call management software, streaming processing, system control software and more are offered with almost infinite flexibility. One could argue there’s no reason not to go virtual. Not to be confused with cloud computing, a virtual machine monitor (VMM), or hypervisor (host machine), is a software application that allows virtual machines (guest machines) to be quickly spun up on a single server to accommodate studio workflow requirements without the need for additional computer hardware. This is particularly possible today with the wide adoption of Audio over IP. Once the audio signal is IP, virtualization is the perfect next step. At the Telos Alliance, we have decades of experience building dedicated hardware and software to perform complex audio workflows (both real-time and file-based) in preparation for broadcasting compelling content to listeners around the globe. From our work in telephony and codecs, on-air processors, and the development of Livewire (and AES67) AoIP, including consoles, we’ve seen nothing quite like this. Our march to virtualization comes in the form of what we call the “Hyperstudio Experience.” We have virtualized streaming encoding and
processing, AES67 AoIP audio drivers, SIP phone hybrids, automation and logic control systems, mixing applications, and even mix engines to create a single Virtual Radio Production System that can be deployed on a local server, external server, or used in conjunction with the cloud. While the topic is virtual, our application in the field is not. Our earliest approach has been implemented by the BBC’s regional radio network in the UK known as Virtual Local Radio (ViLoR). This represents a significant shift in how various components of a broadcast environment come together, all virtualized using specialized server technology, creating a production and on-air environment that is scalable, adaptable, flexible, and future-proof, while enjoying all of the sonic benefits that the normal rack-mounted equipment the ViLoR project replaced was so well known for, within a business model that makes it affordable by organizations large and small. The Hyperstudio Experience is the Telos Alliance gateway into the future of broadcast facilities. Telos Alliance offerings available
today and coming soon as part of the Hyperstudio Experience include the following: •For Streaming: Telos Alliance Z/IPStream •For Processing: Telos Alliance Z/IPStream 9X2 •For AoIP AES67: Axia IP Driver •For Phone Hybrids: Telos Virtual VX •For Engineering: Axia Pathfinder CORE PRO •For Remote Control and Integration: Axia IP-Tablet •For Mixing VM-Hosted AES67 Streams: Axia iQx Console Telos Alliance is the only company in the world that can offer the entire system in a virtual world. That’s due in large part to the fact that Telos is the only company with the expertise needed in all the various parts of the broadcast workflow and studio ecosystem, including audio processing, stream encoding and processing, consoles, telephone hybrids, engineering, control, monitoring, AES67 drivers, and more.

TELSAT S.R.L. BSP - BROADCAST SMART PLATFORM Thanks to new digital technologies, all the hardware of a complete transmission site can now be integrated into a single new device. The BSP by TELSAT is the most compact solution, providing all the functionalities of a TV or FM broadcasting site in a single, mast-mounted, small-sized box. Low power consumption, low electromagnetic pollution, and quick & easy installation are the key features making the BSP the brand-new frontier for broadcast signal coverage. The technology developed by TELSAT and its valued partners (TRX Innovate and Plisch) makes it possible to set up totally self-sufficient transmission stations, using mono-directional satellite distribution, with power supplied by alternative energy sources. The cell-based network model allows vast territories to be covered in a smart way, by transmitting the signal only to the areas that need it, using low-power transmitters and avoiding the power and money wasted covering unnecessary areas. Moreover, it also allows scalable investments, whose economic return would be impossible to achieve with the traditional business model. A complete turnkey platform from a single provider. Revolutionary technology designed to optimise costs and performance and compatible with current standards. Flexible and adaptable solutions that can satisfy any specific requirement. Complete network management software included.

WHEATSTONE CORPORATION GLASS LXE Wheatstone’s LXE console is designed to let every knob, every button, and every display be programmed to perform virtually any audio mixing function you come up with. The LXE is our most modular console design ever. Simply group the modules into bays and connect them to your network with a single CAT6 cable. Full-color OLEDs reflect your programming, and the touchscreen GUIs let you interact with your audio in fresh new ways, to do everything from pinching and dragging EQ curves to setting up router crosspoints in your network. The LXE’s ConsoleBuilder software is an intuitive GUI-based app that allows you to program and configure your console’s control surface to suit your requirements. Simply put, there’s never been a more customizable way to handle broadcast audio workflow. Now: Imagine the power and flexibility of that console design residing behind glass—a 32-channel virtual audio control surface that brings all the features and functions of the LXE audio console to any touchscreen location. Wheatstone’s new Glass LXE is compatible with existing LXE physical control surfaces or can function without a physical console, operating with its own rackmount mix engine and integrating fully with the WheatNet-IP audio network. Engine and glass can be in separate locations: different rooms, or across continents. Control multiple remote consoles
with one central touchscreen. As Wheatstone’s Kelly Parker puts it, “You can run it on a laptop or on multiple PC screens from a cloud. Glass LXE gives broadcasters full audio and logic control of a 32-channel console with a familiar UI, anywhere that’s needed.” Glass LXE allows you to harness the full power of the WheatNet-IP intelligent network from anywhere with a touchscreen. Imagine what you could do with it.




4K is fast becoming the world’s visual standard. According to Statista (2018), it is predicted that by 2020 the global 4K display market could be worth $52 billion. But the use of 4K technology in broadcast workflows has been held back by constraints on 4K-ready KVM solutions. No more. Adder has removed the barriers to the adoption of 4K with the world’s first dual-head, high-performance, 4K IP KVM matrix over a single fiber connection: the ALIF4000 Series. Designed to meet the 4K needs of broadcast professionals, the ALIF4000 delivers an easy-to-install solution for organizations that want to add 4K functionality without a costly rip-and-replace of existing infrastructure. Full compatibility with the existing ADDERLink INFINITY range means the ALIF4000 can integrate seamlessly into an existing network, without disruption or
extended downtime. From top post-production houses to world-renowned TV networks and live sporting venues, high-pressure broadcast environments require the highest levels of resiliency and performance. The ALIF4000 relies upon its dual 10Gb network ports to provide redundancy and protect against network failure. Delivering pixel-perfect, color-accurate picture and video quality at 4K60 and USB 2.0 with fast switching, the ALIF4000 gives users all the tools they need at extended distances, without image degradation or added latency. Using the same encoding system as the other products in the ADDERLink INFINITY range, the digital video is spatially lossless, with 1:1 pixel mapping, ensuring the digital video users receive is the same as that leaving the remote computer. Users can also connect to any USB human interface device, from mice and keyboards through to graphics
tablets or grading tools. John Stevens, director of engineering at Californian post-production house The Foundation, said, “The launch of the ALIF4000 gives us the opportunity to take control and add 4K functionality to our existing infrastructure, as and when we need it, without having to rip and replace; meaning we can continue to meet our customers’ growing need for 4K output.” Moving to 4K couldn’t be easier with Adder’s ALIF4000 Series.



AWS ELEMENTAL MEDIACONNECT AWS Elemental MediaConnect is a fully managed, on-demand cloud service that transports live video into, out of, or within the AWS Cloud. Customers use MediaConnect for live stream cloud contribution, content sharing, and video replication. MediaConnect also serves as a source for video transformation workflows, including video processing and playout, as well as primary distribution, in which live video is distributed at scale to multichannel video programming distributors (MVPDs) or affiliate stations. Traditionally, broadcasters and content owners have relied on satellite networks or fiber connections to send their high-value live content into the cloud or to transmit it to partners for distribution. These approaches are expensive, take a long time to set up, and lack the flexibility to adapt to changing requirements. As a result, some organizations have tried solutions that transmit live video on top of IP infrastructure, but have struggled with reliability and security. MediaConnect launched in November 2018 as a secure, reliable, cost-efficient alternative to satellite and fiber networks for live video transport. Available for use on a pay-as-you-go basis, MediaConnect lets customers deploy transport workflows quickly, at lower cost, and

with greater flexibility than traditional approaches. Users benefit from the agility, accessibility, and economics of the public internet while avoiding concerns inherent with public IP networks. With MediaConnect, content owners can distribute high-quality video within the AWS Cloud, avoiding the costs and physical limitations of traditional approaches. Users simply grant access to their content to another user’s AWS account, so video can easily be shared with customers, partners, or other content providers. This includes transmitting national channels to local affiliates or distributing streams directly to MVPDs. Typically, these high-bandwidth, mission-critical workloads were served by satellite networks, requiring long lead times and significant costs to procure, deploy, and use. Now, with MediaConnect, these workloads can launch in minutes, with a few clicks. Using transport stream protocols

specially designed for live video, MediaConnect dynamically adapts to internet network conditions and applies error correction techniques in real time, protecting video traffic from network errors, packet loss, jitter, and other factors that can disrupt streams. High-value content is secured with industry-standard, end-to-end AES encryption, and optional whitelisting limits access to trusted sources. MediaConnect is fully integrated with AWS Secrets Manager for key storage and retrieval, and users can configure unique encryption keys for each destination as video leaves MediaConnect for hands-on control. Additionally, the service integrates with the Secure Packager and Encoder Key Exchange (SPEKE) content protection standard for key exchange with conditional access system (CAS) partners. Built-in monitoring supports broadcast-grade quality assurance as well as automated performance analysis and error recovery. The MediaConnect service supports a range of protocols for video delivery, including the Zixi protocol, Real-time Transport Protocol (RTP), and RTP with forward error correction. Customers can use MediaConnect as a standalone service or with other AWS services, including AWS Elemental Media Services.
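As a sketch of how a transport workflow like this is set up programmatically, the parameters below outline a MediaConnect flow using boto3 (AWS’s Python SDK). The flow name, CIDR and port are invented placeholders, and the API call itself requires valid AWS credentials, so it is left commented out:

```python
# Illustrative MediaConnect flow definition: an RTP-with-FEC contribution
# source, whitelisted so only trusted senders can push into the flow.
flow_params = {
    "Name": "studio-feed-1",                    # placeholder name
    "AvailabilityZone": "us-west-2a",
    "Source": {
        "Name": "primary-contribution",
        "Protocol": "rtp-fec",                  # RTP with forward error correction
        "IngestPort": 5000,
        "WhitelistCidr": "203.0.113.0/24",      # only trusted senders may push
        "Description": "Encoder at the venue",
    },
}

# import boto3
# client = boto3.client("mediaconnect", region_name="us-west-2")
# flow = client.create_flow(**flow_params)
# print(flow["Flow"]["FlowArn"])
```

Sharing content with a partner then comes down to granting an entitlement on the flow to their AWS account ID, rather than provisioning any physical circuit.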



Hello Sony 1” Sensor. Meet NDI. P4K is the real deal. A huge 1” Sony Exmor R CMOS back-lit sensor and 14.4 million effective pixels enable P4K to deliver stunning pictures in 4K resolutions with Full Bandwidth NDI. With excellent light sensitivity, P4K is perfect for all broadcast applications: sports, remote studios, newsrooms, houses of worship and any shoot where quality matters. P4K has a huge feature set, including:
•1” Sony Exmor R CMOS back-lit sensor
•Full NDI
•NDI HX2
•PoE
•6G SDI
•Triple output – NDI, SDI, HDMI all live
•Genlock
•10-bit
•Front and back tally lights
•NDI Return Feed Decode
•Audio intercom support with BirdDog Comms
•12x optical zoom Zeiss lens with SRZ (Sony Super Resolution Zoom) technology, providing an expanded 18x (4K) or 24x (HD) zoom range without compromising detail
•ePTZ function when used in HD mode
•Optional Fiber Optic module
•Optional HDBaseT module
•BirdDog Cloud compatible – control the camera from the other side of the world
•PTZ control with NDI, VISCA, Pelco, VISCA over IP and ONVIF IP
•4K H.264 and HEVC/H.265
•Extremely accurate and sensitive robotic movements
•Black level control



At this year’s IBC in Amsterdam, the BT Media & Broadcast stand bears witness to a series of record-breaking firsts. Among those firsts, BT’s Media & Broadcast division and BT Sport are hosting an event to demonstrate what is believed to be the highest-spec broadcast ever of a live English Premier League match, in 4K HDR with Dolby Atmos. In a stunning collision of the beautiful game and the cutting edge of livestream broadcast technology, 1.30pm CET on Saturday 14th September brings the Liverpool vs. Newcastle English Premier League game to the BT stand at IBC in the Netherlands, in real time. The demo comes as BT Sport continues the rollout this season of BT Sport Ultimate, the first commercially available, season-long HDR sports service in the market, and unique in Europe at present. This world-class sports-viewing platform provides a transformative new livestream service, offering the best viewing experience possible across platforms, transmitted via a market-leading global broadcast network. This is a very exciting moment for BT: streaming OTT in real time, in highest-quality 4K HDR video, combined with spectacular Dolby Atmos surround sound. The experience will be fully immersive, the first to use Dolby Atmos in this way, fully transmitting the feel and atmosphere of the live match taking place hundreds of miles away to the stand. BT is the first to offer Dolby Atmos sound globally, taking the meaning and feel of livestream to the next level. BT Sport Ultimate is a service in which

BT’s Media & Broadcast technological capabilities and BT Sport’s content come together in perfect harmony for the pleasure of the audience. The result of more than three years of research, innovation, investment and collaboration within BT, the event takes full advantage of the most advanced fibre and satellite networks ever at the service of live sports streaming globally, coming in over a network that guarantees “five nines”, i.e. 99.999% availability. This translates to the lowest-latency, most reliably live network in existence today. BT’s Media & Broadcast division has an extensive fibre outside-broadcast network connecting to 150 venues in the UK, and with the recent launch of UHD HEVC over satellite can extend this reach to Europe and across the world. BT’s UHD-HEVC

capabilities mean in-depth, immersive, omnichannel, multiplatform experiences beyond the reach of most networks. This Premier League match will stream over fibre from the UK to the BT stand, with no delay, showing just what kind of global outside-broadcast network delivery is now on offer from BT. The combined strategy, investment and technological innovation at play make the BT Sport Ultimate livestream event at IBC not only a key moment in the evolution of livestream sports broadcasting, but also a great example of the positive power of collaboration. Combining the capability of the Media & Broadcast division to deliver new formats such as HDR with BT Sport’s content innovation, BT Sport Ultimate can deliver the world’s most advanced viewing experience, bringing audiences a new and unique event.
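To put the “five nines” guarantee above into concrete terms, a quick calculation shows the maximum expected downtime it permits per year:

```python
# What 99.999% availability ("five nines") allows: total downtime per year.
availability = 0.99999
minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year

downtime_min = (1 - availability) * minutes_per_year
print(f"Max downtime at five nines: {downtime_min:.2f} minutes/year")  # ~5.26
```

In other words, the contribution network can be unavailable for barely five minutes across an entire season.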



WIDGLETS™ API FOR THE VB440 NETWORK PROBE Inherent value The Bridge Technologies VB440 probe monitors all traffic on an IP network, and provides user-configurable graphic views of that data. This allows network managers to quickly and intuitively obtain the insight necessary to identify and correct any anomalies, ensuring optimum quality of delivered content. The new Widglets API for the VB440 recognises that the data on the network – video, audio, waveforms – collected by the VB440 has significant inherent value that can, in effect, be extracted not only for monitoring and analysis of the performance of the network, but also for other users, workflows and applications that can leverage the availability of live data. Full-motion, colour-accurate video, for example, can be made available from any source to any application or user – anywhere in the world – such that a geographically dispersed team can work together on the same project. The data – previously only available to network managers – can be easily imported into any browser-based application, enabling, for example, a virtually instantaneous high-level judgement of production quality, thanks to the VB440’s close-to-zero latency. Freely available

Different people at different locations can view all this data, instead of splitting it by location or technician. As such, the new Widglets API for the VB440 has clearly positive implications for remote production, for example.

With the Widglets API, the VB440 repurposes freely available data and becomes a truly multi-functional tool, delivering even greater value to users. The VB440 centralises the gathering of network data – and the new Widglets API decentralises its availability, making it accessible in close to real time to enable better decision-making at the point of production, for example. Take a camera painter, for instance. Today, the number of cameras that can be analysed is typically limited to a very few – and with limited views available. More can be analysed, but only by acquiring and deploying multiple space-consuming boxes. With the Widglets API, a user can have multiple cameras with multiple waveform vectorscopes and streams via a single HTML5 video monitor.

Clear benefit This new product delivers clear user benefit in enabling more use to be made of existing data, creating new opportunities to leverage that data and allowing new, more functional applications to be developed. It represents value, because use of the Widglets API is included within the standard license fee for the VB440. The new Widglets API for the VB440 is innovative in recognising the inherent wider value of data that exists on the network – and provides a simple, easy-to-implement tool that enables users to leverage that value, with the potential to transform production. The new Widglets API for the VB440 should be considered for the Best of Show Award because it delivers something new, unique and valuable to the industry that is potentially transformational in production – and the user effort required to leverage its potential is minimal. Although the provision of a Widglets API for a network probe sounds simple, its implications are far-reaching.



CINEGY CAPTURE PRO - NO ONE SCALES BETTER One of Cinegy’s latest Capture PRO flagship deployments is for a major sports league that requires more than 200 channels of HD capture to be able to concurrently record (and edit) all of the games taking place in North America. This successful installation is further proof of the scalable nature of Cinegy’s software architecture and that Cinegy’s motto “Software Defined Television” is absolutely real. What originally started more than 20 years ago as a project to ingest the footage for the BBC’s Planet Earth documentary series has evolved into Cinegy Capture PRO today. The foundation was formed even then, with the ambitious goal of a scalable, enterprise-class solution that would be all software-based, use network-attached enterprise storage, and create multiple high-res and proxy qualities – all at the same time, in real time. The foundation for Cinegy’s software video codec development work was laid then as well. Today Cinegy is one of the few companies in the industry that has its own video codecs, format and media libraries and SDKs, covering both legacy and future industry requirements – ensuring control over performance, efficiency and core features. Three years ago, Cinegy showed entirely software-based 8K @ 59.94 fps video ingest using an Intel Core i7-5960X 8-core CPU with simultaneous

playback on the same machine using an NVIDIA GPU for real-time decoding – impressive technology that has become part of the shipping Cinegy Capture PRO software. A complete end-to-end 8K workflow at Cinegy’s IBC booth has live 8K video captured using a Sharp 8K broadcast camera connected via quad 12G SDI. The uncompressed 8K 4:2:2 10-bit video signal hits a PC equipped with a BMD DeckLink 8K Pro SDI card. The PC is running Cinegy Capture PRO, encoding the 8K signal into Cinegy’s own Daniel2 production codec (also 4:2:2 10-bit) and also HEVC in UHD. The video encoding is performed entirely by the installed NVIDIA RTX GPU using CUDA. The MXF files are written to a 10G-connected NAS, which then allows the connected edit stations – using Cinegy’s Desktop NLE or Adobe Premiere – to edit the 8K video while recording. The cost of the 8K capture solution using Cinegy’s Capture PRO software is a fraction of what other solutions would cost.

More importantly, a machine using faster or multiple GPUs could also record two or more 8K inputs. For most broadcasters 8K will remain irrelevant for years. But 8K is the equivalent of more than 16x HD channels – or 32x, given that most still do 1080/50i, and not in 10-bit either. Cinegy Capture PRO today can ingest 16 HD channels or four UHD HDR channels using a single rackmount server – which is relevant today for broadcasters and media production companies. Of course, Cinegy Capture PRO also creates formats such as Apple ProRes, Sony XAVC or XDCAM, and if required all in parallel. Add the fact that Cinegy Capture acts like a network appliance and can be controlled via web client or Windows app by multiple users simultaneously – and you have a value proposition like no other.
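The “16x HD” equivalence above is simple pixel arithmetic, and the 32x figure follows once interlacing is taken into account:

```python
# Pixel counts per frame: an 8K frame carries 16 full-HD frames' worth.
uhd8k = 7680 * 4320
hd = 1920 * 1080

print(uhd8k // hd)  # 16

# 1080/50i delivers half the pixel rate of progressive HD, hence the 32x figure:
print(uhd8k // (hd // 2))  # 32
```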



CINEGY AIR PRO - NO ONE SCALES BETTER Cinegy’s slogans have not always made industry friends – “SDI MUST DIE”, for instance. Another is “Software Defined Television” – exactly what Cinegy is about. For more than 10 years, Cinegy has been delivering on what today’s software- and cloud-focused world treats as a revelation: that almost anything once done using bespoke broadcast hardware can be done entirely in software on commodity PC hardware, given enough processing power. The trailblazer in terms of using massively scalable compute power is not the broadcast industry but the high-performance computing industry, and the developers at Cinegy have been involved with GPU-accelerated video processing for more than twenty years. From 3D-LUT-based real-time colour correction to GPU-based video codecs, Cinegy has developed a wide range of GPU-based technology, and has applied more and more of it to its flagship playout product, Cinegy Air PRO. Dealing with resolutions such as SD and even HD was still easily accomplished by using the proverbial bigger hammer – a faster CPU. With UHD this already started to become a problem: despite the famous Moore’s Law, CPU performance did not double every two years, and neither did PC system bus speeds. Going from HD to UHD while expecting the same number of channels per server was a challenge. Now it is 8K, and the

performance demand has more than quadrupled once more. HDR and higher colour fidelity add further bandwidth and computing requirements. Finally, in an ideal IP-based world the output should be readily encoded for cable, satellite and OTT delivery. Cinegy is showing 8K (4:2:2 10-bit @ 50 or 59.94 fps) playout and branding using its latest Cinegy Air PRO software at IBC2019. The system requirement to achieve this is an 8-core CPU and one of the latest mid-size NVIDIA GPUs, as well as PCIe SSD storage. Many new laptops meet these requirements, and Cinegy can show 8K playout from a Dell notebook. While 8K is not yet relevant for the majority of the broadcasting world, this translates into the following: less than one CPU core and a fraction of a new mid-size NVIDIA GPU per HD playout channel (including H.264 or HEVC encoding and streaming output). To put it into cloud terms: four channels of HD playout from the smallest AWS EC2

GPU instance. This brings the AWS price per HD channel down to unprecedented levels of $0.25/h, or $6 per day – without using any of the available discounts. This also means you can easily put more than 1000x HD channels into a single 19” rack if you want to stay on-premise. Hyperlocal ad insertion or program breakouts? Localised graphics for every head-end? The idea that everyone can have their own private TV channel – and we are not talking OTT here, but a proper TV channel with sophisticated branding and all the bells and whistles, which in turn could also feed an OTT live stream – becomes realistic. Cinegy Air PRO also has full SRT support for live signal inputs and IP output delivery, enabling disruptive business models – or completely new ones.
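The per-channel cloud cost quoted above works through as follows (the implied instance price is an inference from the four-channels-per-instance figure, not a published rate):

```python
# $0.25 per hour for one HD playout channel, per the figures above.
channel_cost_per_hour = 0.25

print(channel_cost_per_hour * 24)  # $6 per channel per day
# With four HD channels sharing one GPU instance, the implied instance price:
print(channel_cost_per_hour * 4)   # $1 per hour for the whole instance
```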



CLOUDIAN HYPERSTORE® XTREME, POWERED BY SEAGATE To claim the media and entertainment industries have always been devoted users of storage technology would be an understatement. Broadcasters and production houses used to shoot on tape or celluloid, which was typically stored in large libraries or vaults. Then came digitisation, which shifted the dynamics of media and entertainment towards new formats and resolutions such as 4K, 360-degree, and 60fps video. According to storage industry analyst Tom Coughlin of Coughlin Associates, the math is simple. Frame rates are increasing, from 24 frames per second (fps) to 48fps and 60fps, and eventually they may go as high as 300fps. Similarly, one hour of film content in standard definition (SD) equates to 112 gigabytes (GB) of storage space, whilst high definition (HD) adds up to 537 GB, and Ultra HD to 6,880 GB. With 4K in the market, 8K on the horizon, and talk of 16K in the next decade, post-production houses, visual effects teams, broadcasters and studios need to understand that traditional storage solutions are becoming cost-prohibitive and insufficiently scalable to meet the demand. What they need is high-capacity, cost-effective storage that allows them to scale seamlessly to hundreds of petabytes, without interruption or downtime. As the most widely deployed independent provider of object storage

systems that feature the industry’s most advanced S3 compatibility, Cloudian recently joined forces with Seagate Technology to deliver a new storage solution that enables media organisations to better handle the ever-expanding volumes of content, meet demand for anytime, anywhere access, and leverage AI/analytics to fully monetise or maximise the value of their data. Cloudian’s HyperStore Xtreme combines Cloudian’s object storage software with Seagate’s newest high-density storage systems to offer organisations the scalability and flexibility provided by the largest public cloud suppliers, but from within the broadcaster or production company’s own facilities, preserving full content control and security. Cloudian’s software-defined storage platform provides content owners and creators with the ability to store and manage massive video files within a compact footprint. For example, over 55,000 hours of 4K material in UAVC-

4K and Ultra-HD formats can now be managed in a solution that takes up just 12U of rack space. This represents a 75% space saving compared to an LTO-8 tape library with the same capacity, which is a notable benefit in an environment such as a broadcast centre where space is at a premium. Moreover, the solution is significantly faster than tape, as well as more cost-efficient when tape management expenditure is factored in. In addition, at less than ½ cent per GB per month, it delivers up to 70% savings when compared with either traditional enterprise storage or public cloud storage. Cloudian’s native S3 API implementation also offers the industry’s highest level of S3 interoperability, which means organisations deploying HyperStore Xtreme can capitalise on the rapidly growing ecosystem of S3-enabled applications whilst remaining compatible with public cloud platforms, including AWS, Google, and Microsoft.
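The storage-per-hour figures quoted above follow from a simple formula: bytes = bitrate × seconds ÷ 8. The exact codecs behind Coughlin’s numbers are not stated, but the 112 GB/hour SD figure corresponds to a stream of roughly 250 Mb/s, and the Ultra HD figure to roughly 15.3 Gb/s:

```python
def gb_per_hour(bitrate_mbps: float) -> float:
    """Storage consumed by one hour of video at the given bitrate, in GB."""
    return bitrate_mbps * 3600 / 8 / 1000

print(gb_per_hour(250))    # 112.5 GB/hour, matching the SD figure
print(gb_per_hour(15300))  # 6885.0 GB/hour, close to the Ultra HD figure
```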



The Cobalt® 9992-DEC-4K-HEVC is a software-defined broadcast decoding platform capable of meeting needs ranging from everyday broadcast to high-end at-home production environments. Housed in the award-winning openGear frame, the 9992-DEC-4K-HEVC supports ultra-high-density applications. The decoder is powerful, able to process multiple codecs including MPEG-2, AVC (H.264) and HEVC (H.265), resolutions up to 4Kp60, and a full complement of 32 channels of audio. The card can be run in either single-channel 4K mode or dual-channel 2K mode for higher-density applications that do not require 4K video. The flexible licensing structure allows customers to manage CAPEX while leaving room for future growth as their needs change. Rich output connections include two 12G-SDI connectors and two 3G-SDI connectors that allow for either single-wire 4K or quad-link 4K output. Inputs to the card include two ASI ports and two Ethernet NICs for complete redundancy. The 9992-DEC-4K-HEVC supports a wide range of protocols, including internet protocols. With full support for Reliable Internet Stream Transport (RIST), the card can reliably receive video and audio from a RIST sender over the open Internet with minimal latency. To help ensure accurate delivery of packets, the decoder utilizes seamless packet switching between a number of inputs to provide a glitch-free output. Now supporting the RIST Main Profile, the decoder is able to provide encryption and VPN tunneling for secure AV transport.
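The idea behind seamless packet switching between redundant inputs can be sketched simply: identical streams arrive on multiple paths, and the first copy of each sequence number wins, so packets lost on one path never glitch the output. This is the principle behind SMPTE ST 2022-7-style protection switching, though Cobalt’s actual implementation is not described here (a real device also reorders output through a jitter buffer):

```python
def seamless_merge(inputs):
    """Merge redundant packet streams, emitting each sequence number once."""
    seen = set()
    for seq, payload in inputs:  # packets interleaved as they arrive
        if seq not in seen:
            seen.add(seq)
            yield seq, payload

# Path A drops packet 2; path B delivers it, so the merged output is complete.
path_a = [(1, "a"), (3, "c")]
path_b = [(1, "a"), (2, "b"), (3, "c")]
merged = list(seamless_merge(path_a + path_b))
print([seq for seq, _ in merged])  # [1, 3, 2]
```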



Cy-Stem is a unique solution for integrating any specialty camera into professional live broadcast workflows. Designed for vision engineers frustrated by the sub-par performance of specialty cameras, this universal control system removes the technical barriers that prevent directors from achieving the shots they need. Starting with mini-cameras at events such as the 24 Hours of Le Mans, Cyanview is able to control a whole range of mini, PTZ, ENG and D-Cinema cameras from a single RCP panel that features physical knobs. The solution comprises all the necessary pieces to connect the cameras over IP or wireless networks and deliver quality pictures that match the main production cameras. The system is all IP-based; non-IP cameras are interfaced with modules that translate the cameras’ low-level protocols, such as serial, RS485 or LANC, over IP in an IoT-style architecture. As many specialty cameras don’t provide enough processing to match the main production, color correctors are usually added, and Cyanview’s RCP will control them as a natural extension of the camera, supplementing missing controls with the post-processing ones. Standard color correctors only implement primary corrections, though; that’s why Cyanview developed its own color corrector as a generic CCU that adds secondary corrections such as matrix, multi-matrix, detail or knee on top of the regular

white balance controls. This CCU delivers proper camera matching for any camera. With digital cinema cameras such as the Sony FS5/FS7, which have very few available controls besides exposure, the CCU implements all the features that are usually available in a system camera. To address space and cost constraints on bigger productions, one remote can control an unlimited number of cameras by synchronizing the RCP with the router protocols. Selecting a camera on the router panel for monitoring will also select that camera on the RCP panel. Our latest addition, called “RIO”, extends control over any kind of communication channel, mostly for wireless applications and remote production. RIO is the first universal

cellular control solution and the ideal companion to cellular video transceivers. It has already been used to cover cycling races, marathons and golf tournaments. Applications of this new technology are not limited to ENG cameras: control over cameras and gimbals fitted on drones has also been demonstrated recently. An internet service relays commands between camera and OB. This same system is now used for lower-cost remote productions using camcorders over remote networks or even over the internet. Cyanview now delivers a full package to cover typical specialty camera needs, as well as innovative new possibilities in cellular and remote production.
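Cyanview’s own module protocols are proprietary, but the kind of translation described above can be illustrated with one widely used low-level camera command set: Sony’s VISCA. The sketch below wraps a standard VISCA zoom-in command using Sony’s published VISCA-over-IP framing (a payload-type word, payload length, and sequence number), which is representative of how a serial command gets carried over an IP network:

```python
import struct

def visca_over_ip(payload: bytes, seq: int) -> bytes:
    """Wrap a raw VISCA command in the VISCA-over-IP header."""
    header = struct.pack(">HHI", 0x0100, len(payload), seq)  # 0x0100 = command
    return header + payload

zoom_tele = bytes([0x81, 0x01, 0x04, 0x07, 0x02, 0xFF])  # standard VISCA zoom-in
packet = visca_over_ip(zoom_tele, seq=1)
print(packet.hex())  # 8-byte header followed by the 6-byte VISCA command
```

The resulting datagram would typically be sent over UDP to the camera’s control port; the point is simply that a legacy serial byte sequence becomes an addressable IP message.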



DALET MEDIA CORTEX The new version of Dalet Media Cortex, the Artificial Intelligence (AI) powered media service from Dalet, puts content intelligence at the fingertips of media professionals. A Software as a Service (SaaS) platform that enables the orchestration of multiple cognitive services in a pay-as-you-go model, Dalet Media Cortex optimizes content production, management and publishing workflows by enriching content and automating processes. Adding integration with the Ooyala Flex Media Platform, acquired by Dalet in July 2019, and smart captioning capabilities enables both Dalet Galaxy five and Ooyala Flex Media Platform customers to produce and distribute more content faster, to more viewers in less time, as well as open doors to new markets. Providing the right insights, in the right toolset, with the right context, Dalet Media Cortex helps content producers, owners, and publishers across news, sports, programs, and radio operations make the most of their media assets. The service maximizes efficiency and effectiveness, and enables new business models. It augments productions with better content search and insights and automates mundane tasks and processes, bringing collaboration to a whole new level. New integrations, services and feature highlights shown at IBC: ●Ooyala Flex Media Platform: Populate

automatically indexed metadata to the Ooyala MAM for easy curation and distribution, increasing monetization opportunities for high-volume catalogs and archives. ●Smart captioning: Dalet Media Cortex can automate up to 90% of manual captioning work, accelerating the process fivefold with high-quality results. Its speech-to-text capabilities are available in more than 30 languages, opening up revenue opportunities in new markets. ●Improved news and editorial workflows: Automatically tag stories and wires. A new entity detection capability (locations, persons, organizations) with a smart assistant facilitates associated content recommendations for stories from planning all the way through production. ●Custom Dictionary: The custom dictionary allows users to add specific words that are relevant to an industry, job, or market, optimizing accuracy and results. ●Enhanced Dalet Media Cortex API:

Enables media companies to build versatile and extensive workflows leveraging a dynamic combination of microservices to optimize workflow performance. Dalet Media Cortex can be deployed within a very short period of time and is a fully managed service. Its smart metadata approach to classification automatically identifies asset type, defining key search words, sentences and terms, optimizing indexing and use of content across the organization. Seamlessly integrated within the Dalet Galaxy five and Ooyala Flex Media Platform workspace, all AI-curated data is presented in a contextual manner, such as captions or recommendations. Over the last 18 months, we’ve seen the use of AI transition from experimental trials to real-world deployments. Using AI services to automate repetitive processes, as well as to augment user workflows with contextual insights and timely recommendations, resonates with broadcasters and media companies who have to serve more platforms and more markets with more content than ever before. We created Dalet Media Cortex to help users be productive, dedicate time to creative work and collaborate effectively when they use AI-based tools and workflows. With it, audiences can be better served with higher-quality content and personalised multi-platform experiences.



DEJERO IRONROUTE Recently launched and currently being trialed by leading-edge broadcast organizations, IronRoute for media delivers dependable connectivity by blending broadband, cellular (3G/4G/5G), and satellite connectivity from Intelsat’s global network. Dejero’s Smart Blending Technology combines all available network connections to create a virtual ‘network of networks’ with the necessary bandwidth to deliver broadcast-quality content. IronRoute means the media customer doesn’t have to commit to certain types of connectivity and hope for the best, but can customise the best combination of connectivity based on business rules they decide. It is also about shaping the different types and amounts of connectivity that will be provisioned at these remote locations. Network connectivity is now scalable in terms of reliability and cost. The cloud-based solution simplifies the simultaneous IP distribution of content to multiple locations anywhere in the world, whether it be network affiliates, station groups, or other broadcasters and media organizations. Live and file-based content can quickly and easily be distributed point-to-point, point-to-multipoint, and multipoint-to-multipoint with a simple drag-and-drop interface in a web browser.

Connectivity at the content hub and the destination points is reinforced with the blended Dejero and Intelsat connectivity, managed and distributed in the cloud. For example, fixed broadband Internet might be available at the content-origination point and could be aggregated with cellular and/or satellite connectivity. The cellular or satellite connections are engaged when the main Internet connectivity degrades or suffers a service interruption. The cloud manages it all by looking at the connections at each location every 150 milliseconds and determining the right blend of connectivity for the video to travel from the origination point into the cloud. Each destination point in the network can also choose to secure connectivity to the network by deploying the blended broadband, cellular and satellite

solution. At each endpoint location, securing the last mile provides the enhanced confidence to receive content under adverse connectivity scenarios. The result is an end-to-end solution, where the cloud manages the blending of the first- and last-mile connectivity. Through Intelsat’s satellite footprint, the reach of IronRoute is extended into regions where terrestrial connectivity can be challenging. A small amount of cellular or fixed broadband can be added to the connectivity stack and supplemented with satellite to create an aggregated pipe. We believe there will be a huge market for IronRoute in locations that host regular sporting events, for example, with existing fixed infrastructure and an add-on of IronRoute as an aggregated pipe for remote-integration feeds or anything that might require higher throughputs. It will also appeal to full-time services, those that have installed MPLS circuits at 20 or even 100 locations. IronRoute places the cloud in the middle, observing each of the endpoints every 150 milliseconds and managing the best blend of connectivity at each location. The IronRoute for media solution is available to customers through both Dejero and Intelsat.
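The per-location decision described above can be sketched as a toy loop: every 150 ms, score the available connections and pick the mix that meets the required bandwidth. Dejero’s actual Smart Blending Technology is proprietary; the ordering rule here (prefer cheaper links, top up with costlier ones) is purely illustrative:

```python
EVAL_INTERVAL_MS = 150  # how often each location is re-evaluated, per the text

def blend(connections, required_mbps):
    """connections: list of (name, available_mbps, cost_rank), lower rank = cheaper."""
    chosen, total = [], 0.0
    for name, mbps, _cost in sorted(connections, key=lambda c: c[2]):
        if total >= required_mbps:
            break
        chosen.append(name)
        total += mbps
    return chosen, total

links = [("broadband", 20.0, 1), ("cellular", 12.0, 2), ("satellite", 8.0, 3)]
print(blend(links, required_mbps=25.0))  # (['broadband', 'cellular'], 32.0)
```

When broadband alone covers the requirement, the cellular and satellite links stay idle; when it degrades, the next re-evaluation pulls them into the blend.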



At IBC2019, EditShare will be showcasing EFS 2020, EditShare’s next-generation file system and management console, slated to hit the market in Q4 of this year. This new release is the culmination of three years of hard work and will deliver EditShare’s most secure and most performant storage platform ever. Powering faster EditShare storage nodes and networks, EFS 2020 enables media organizations to build extensive collaborative workflows, shielding creative personnel from the underlying technical complexity while equipping administrators and technicians with a comprehensive set of media management tools. To the end user, the latest release offers an improved, fast and flexible collaborative storage space with an increase in throughput performance of up to 20%. To the engineers who manage it, EFS 2020 offers a scalable, stable and incredibly secure media storage platform. EditShare’s fundamental ethos is to remove complexities from the creative workflow,

enable media professionals to focus on the artistic aspect of their work and optimize overall productivity. EditShare continues to deliver on this with its enhancements to the new EFS file system and management console. Featuring key advancements in speed and security, EFS 2020 enables EditShare customers to effortlessly manage the growing requirements for high-bandwidth, multi-stream 4K workflows and beyond. Core to the EFS 2020 system is a media-friendly architecture that banishes bottlenecks. Unlike generic IT storage systems, EditShare has written its own ultra-efficient drivers for EFS, for use with Windows, macOS and Linux. EditShare manages the entire EFS 2020 technology stack from the file system to OS drivers, which means enterprise-level stability and faster video file transfers. This is the industry’s only solution to provide no single point of failure all the way to client workstations and servers. Users can expect more real-time

video streams, without the bottlenecks caused by legacy network protocols. Due to the high value of content and the tremendous losses that can occur if content gets out through unauthorized channels, any new storage system must offer extreme levels of security and accountability. EFS 2020 File Auditing is the first and only real-time, purpose-built content auditing platform for the entire production workflow. With high-profile media content theft on the rise, security is top of mind for media professionals. Designed to track all content movement on the server, including deliberately obscured changes, EFS 2020 File Auditing provides a complete, user-friendly activity report with a detailed trail back to the instigator. An important and timely addition to the EditShare production ecosystem, file auditing is an industry-recommended, best-practice security measure that ensures accountability by answering “who did what and when.”
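The "who did what and when" question boils down to an append-only record of user actions against assets. The sketch below shows the shape of such an audit trail; the field names and query are assumptions for illustration, not EditShare's actual schema.

```python
# Illustrative sketch of a file-auditing trail: every action on the server
# is appended as an immutable event, so any asset can be traced back to
# the users who touched it, with timestamps.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEvent:
    user: str
    action: str          # e.g. "read", "rename", "delete"
    path: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class AuditTrail:
    def __init__(self):
        self._events = []            # append-only in this sketch

    def record(self, user, action, path):
        self._events.append(AuditEvent(user, action, path))

    def who_touched(self, path):
        """Trail back to the instigators for a given asset."""
        return [(e.user, e.action, e.timestamp)
                for e in self._events if e.path == path]

trail = AuditTrail()
trail.record("editor1", "rename", "/projects/promo/master.mov")
trail.record("intern2", "read", "/projects/promo/master.mov")
```

A production system would of course persist these events tamper-evidently rather than in memory; the structure of the answer is the point.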



EMVIRTU ALL-IP CORE INFRASTRUCTURE AND MEDIA PROCESSING PLATFORM When moving to media over IP, broadcast, media production and military/mission-critical environments must adapt to and seamlessly integrate into their workflows a wide range of signal types, formats, frame rates, timings and resolutions. This adaptation can present very real cost, time and space challenges, driving up CAPEX and OPEX considerably. To address these challenges, the Embrionix emVIRTU All-IP Core Infrastructure and Media Processing Platform is designed to provide an extremely high-density 1RU hub packed with virtualized IP signal processing services as a build-as-you-grow concept, adding microservices one at a time. Users are free to build up their processing power modularly as their needs grow and change, seamlessly adding or interchanging virtualized processing functionalities including frame synchronizers, down converters, up/down/cross converters, multiviewers, quad-link to single-link UHD, SDR-to-HDR converters and color processors. Designed to drastically reduce energy costs and space requirements during processing, the emVIRTU comprises up to sixteen (16x) 100GE aggregation links that connect directly to an IP core switch, with the 100GE links scaling with the number of microservices used. And, with passive connections between the IP-to-IP processors and aggregation ports, there is virtually no point of failure. The

platform is designed so that control, synchronization and data use the same interface, greatly minimizing the use of cabling. Aggregation bandwidth reaching up to 1600Gb/s allows high-resolution UHD, HD and 3G content to be efficiently produced without constraints. For UHD environments, the platform is designed to offer up to 64 IP processing functions from its small footprint, while for HD environments, the platform scales up to 256 IP processing functions. The Embrionix platform supports a modular approach, enabling the selective addition of virtualized processing functionalities such as frame synchronization, up/down/cross-conversion and multiviewer capabilities. Embrionix listened to the needs
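The headline numbers above are consistent with each other, which a couple of lines of arithmetic make plain (plain arithmetic only, no Embrionix API involved):

```python
# Checking the quoted emVIRTU figures: 16 x 100GE aggregation links,
# and 64 UHD vs. 256 HD processing functions in the same 1RU frame.
links = 16
link_gbps = 100
aggregate_gbps = links * link_gbps          # 1600 Gb/s, as quoted

uhd_functions = 64
hd_functions = 256
hd_per_uhd = hd_functions // uhd_functions  # 4 HD functions per UHD slot
```

The 4:1 ratio is what you would expect, since one UHD signal carries roughly four times the pixel payload of an HD signal.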

of their customers as they migrated to IP and designed the emVIRTU in response to the signal processing challenges their customers have shared. With the feedback they received from customers, Embrionix was able to create an extremely dense platform hosting software-defined IP processors. This powerful solution is packed in a small 1RU and offers all the processing power their customers’ need, while minimizing energy costs, space and cabling and providing a robust design with virtually no point of failure. Users can build their key processing power modularly, as they grow and change – one-by-one if desired. The emVIRTU’s low-footprint 1RU frame can fit virtually anywhere and its passive hub offers full resiliency.



HUB SHARED STORAGE SYSTEM The Facilis HUB Shared Storage system is an entirely new platform. This is the answer to customers’ requests for a more powerful compact server chassis, lower cost (SSD, HDD and hybrid options), built-in asset tracking, and integrated cloud and LTO archive features. Facilis HUB is the only shared storage network that supports both block-mode Fibre Channel and Ethernet, optimizing the transmission of video and audio data through IP-based Ethernet networks and delivering incredible speeds and block-level sharing options through Fibre Channel. It can connect through either method with the same permissions, user accounts, and software interface - concurrently if desired, for failover scenarios. This feature is truly differentiating because not every client workstation needs or wants the same speed and access. As one of the last advocates of Fibre Channel dedicated systems in post-production, Facilis not only offers the highest-bandwidth Fibre Channel link speeds, it is also able to saturate those links through dynamic file-level and volume-level locking. Facilis HUB Shared Storage is OS and application agnostic, cloud connected,

and not burdened by clunky, inefficient network file systems seen in NAS-based solutions. Facilis HUB Shared Storage also solves many of the problems that post facilities have faced for years with file permissions, ownership, and access control. Facilis has simplified this to the extreme, ensuring that operations like reading a file, saving a project to a folder, and copying content to a certain location will always succeed. Facilis Object Cloud, part of the HUB line, is a new integrated disk-caching system for cloud and LTO backup and archive that includes up to 100TB of cloud storage for one low annual cost. A

native Facilis Virtual Volume can now display cloud, tape and spinning-disk data in the same directory structure, on the client desktop. Facilis HUB Shared Storage has a unique way of creating virtual disks on demand that look and act like hard drives but are evenly distributed across a much larger drive set. In doing this, Facilis ensures that every virtual drive has the same performance, regardless of whether the drive set is completely empty or almost full. Administrators can easily change the size of the virtual drives to keep up with the needs of the project, using any space available. Traditional file/folder permissions are removed in favor of volume-level permissions, ensuring that volume access automatically gives users access to all internal objects. Object Cloud utilizes object storage technology to virtualize a Facilis volume. This allows access to files that exist either entirely on disk, entirely in the cloud or on LTO, or even partially on disk and partially in the cloud. Simply stated, Facilis HUB Shared Storage has raised the bar in network shared storage and represents the best value in collaborative, high-speed, high-availability storage systems in our industry.
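The tiering idea above can be sketched as a catalog that records which tier(s) hold each object, while the directory listing stays tier-agnostic. The catalog layout, tier names and preference order are assumptions for illustration, not Facilis's actual design.

```python
# Hypothetical sketch: disk, cloud and LTO objects all appear in one
# virtual-volume directory tree; reads resolve to the fastest tier
# that currently holds the object.

CATALOG = {
    "/vol1/ep01/a001.mxf":    ["disk"],
    "/vol1/ep01/a002.mxf":    ["cloud"],
    "/vol1/ep02/b001.mxf":    ["disk", "cloud"],   # partially cached
    "/vol1/archive/c001.mxf": ["lto"],
}

def list_volume(prefix):
    """Every object appears in the same listing, whatever its tier."""
    return sorted(p for p in CATALOG if p.startswith(prefix))

def resolve(path):
    """Prefer the fastest tier that holds the object."""
    for tier in ("disk", "cloud", "lto"):
        if tier in CATALOG.get(path, []):
            return tier
    raise FileNotFoundError(path)
```

The benefit to the user is exactly what the text describes: archived and cached material sit side by side on the desktop, and tier selection happens behind the scenes.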



INTRAPLEX ASCENT SRT VIDEO OVER IP GATEWAY GatesAir has added a new application to its Intraplex Ascent transport platform to support reliable and secure video over IP transport alongside audio. The new Ascent SRT Gateway is especially useful for broadcasters managing contributed video and/or audio content for point-to-multipoint distribution. The application’s video support also brings Ascent into the television market, with an exceptional value proposition for national broadcasters feeding many destinations. Ascent will now provide transparent support of any real-time, IP-based broadcast video protocol, with applicability to ATSC 3.0 and DVB-T/T2. GatesAir first introduced Intraplex Ascent as a next-gen Audio over IP platform. Intraplex Ascent represents an evolution in broadcast and IT convergence, and is GatesAir’s first Intraplex system to live on a COTS x86 server. Ascent provides broadcasters with a highly scalable, redundant and cloud-based transport platform for multichannel contribution and distribution. Designed for centralized control and maximum interconnectivity, Ascent is compliant with the AES67 standard and today’s leading AoIP networking solutions. Ascent offers two key differentiating factors for IP transport: an ability to manage many Secure Reliable Transport (SRT) streams on a centralized platform, and its integration of GatesAir’s unique Dynamic Stream Splicing (DSS)

software. The latter diversifies SRT data across redundant networks, and adds protection against packet losses and network failures. The Ascent SRT Gateway is an innovation to support reliable and encrypted transport of video content, both in point-to-point and point-to-multipoint configurations. Interoperability with Intraplex DSS software will support duplication of SRT streams with video and embedded audio over separate network paths, leveraging a single stream-splicing buffer for hitless protection against errors and failures. Since the Ascent SRT Gateway application supports stream splicing, it can effectively reduce latency for SRT retransmission, and optimize network redundancy for video distribution. This

is especially useful for broadcasters and network operators who are distributing real-time video and audio over microwave, fiber and IP connections, and transporting those high-bandwidth streams to multiple studios or transmitter sites. The Ascent SRT Gateway can scale to support more streams and more destinations as needed, with video bandwidth speeds of up to 200 Mb/s at the base level. That can include multiple 50 Mb/s streams, or a much larger single stream generated through video encoders or other headend systems. The Ascent SRT Gateway application scales seamlessly with its multi-core hardware platform, opening a broad diversity of media transport applications to support national TV or radio networks.
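The hitless-protection idea behind stream splicing can be sketched generically: the same sequence-numbered packets travel over two network paths, and a single buffer keeps the first-arriving copy of each, so a loss on one path is masked by the other. This is a generic hitless-merge sketch under those assumptions, not GatesAir's actual DSS implementation.

```python
# Illustrative sketch of splicing two duplicate packet streams, sent over
# separate network paths, into one loss-free output sequence.

def splice(path_a, path_b):
    """Merge two duplicate (seq, payload) streams; keep one copy per seq."""
    seen = {}
    for seq, payload in sorted(path_a + path_b):
        seen.setdefault(seq, payload)      # keep first-arriving copy only
    return [seen[seq] for seq in sorted(seen)]

# Path A drops packet 2; path B drops packet 4 -- the splice is still whole.
path_a = [(1, "p1"), (3, "p3"), (4, "p4")]
path_b = [(1, "p1"), (2, "p2"), (3, "p3")]
out = splice(path_a, path_b)
```

Because each path independently carries the full stream, the output survives uncorrelated losses and even a complete failure of one network, which is the "hitless" property the text describes.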



MAXIVA IMTX-70 INTRA-MAST TRANSMITTER GatesAir’s newest product, the Maxiva™ IMTX-70 Intra-Mast Transmitter, is a very compact modular multi-transmitter system that has been scaled to allow installation within typical hollow mast/tower structures or vertical poles. The tower structure itself provides complete protection from the outside environment, while allowing heat dissipation via convection, and forced-air cooling through the unit. The diminutive dimensions of the Maxiva IMTX-70 chassis are key to this unique design. Measuring only 230 x 485 x 320 mm, it allows installation into a wide variety of structures with access via a lockable and sealed door. Cooling air is provided by vertical air convection within the mast structure, and is complemented by small fans inside the chassis. Although extremely compact in size, the Maxiva IMTX-70 boasts many features normally found in full-size systems. This includes the ability to house up to six separate transmitter modules, each with up to 70W of average DTV power. Each module can be configured as a transmitter, translator, or on-channel gap-filler with echo cancellation, using optional input cards. A DVB-S/S2 satellite receiver option is also available, as well as GbE (TS over IP), ASI, T2MI, SMPTE-310M, and ETI inputs.

Digital modulations supported include ATSC, DVB-T, DVB-T2, and ISDB-T. For ISDB-T/Tb applications, an embedded ReMultiplexer and Layer Combiner/TS-to-BTS can be integrated. The RF output stage is a high-efficiency Doherty broadband amplifier for reduced energy consumption and minimal waste heat. For added redundancy, the Maxiva IMTX-70 includes dual redundant power supplies, each configurable as 2 x DC, 2 x AC or 1 AC plus 1 DC. Remote control includes SNMP and a Web interface. Key Features include: •Intra-mast transmitter/transposer or on-channel gap-filler •Extremely compact dimensions •Up to 6 separate UHF or VHF modules per chassis •70W average DTV output power per module •Cooling via internal structure convection, plus forced air through chassis •Each module is configurable as: -Transmitter -Transposer -Gap-Filler •Power supply: 2 x hot-swappable power supplies (2 x AC, 2 x DC, or 1 AC plus 1 DC)



DIVINE AOIP POE POWERED MONITOR DANTE® /AES67 DIECAST NETWORK AUDIO POWERED LOUDSPEAKER The new DIVINE DSP-controlled Powered Monitor is our new concept in PoE-powered network audio monitor speakers. Featuring up to 4 audio inputs, and powered by Power over Ethernet, using just one cable for audio and power, this compact loudspeaker will enhance any configuration and work seamlessly with existing equipment. Interfacing to other manufacturers’ equipment within your AoIP infrastructure is completely trouble free as it supports both Dante® and AES67 protocols. The DIVINE is the latest in full-range, compact, diecast loudspeaker design. Boasting a modern, low-distortion and low-noise 10 Watt class D amplifier, and with up to 96 kHz crystal-clear digital audio capability, the DIVINE is most at home in outside broadcast and production situations. The DIVINE is a general-purpose loudspeaker and usefully has 4 audio inputs from the network. By using the front panel source select switch, these audio inputs can be individually routed to the loudspeaker, or they can be mixed together. Additionally, and uniquely, the DIVINE has the facility to prioritise a single audio input over any of the others. This is particularly useful if you wish to monitor programme audio and also simultaneously listen to important talkback from a director or producer. Alternatively, this facility could be used

to prioritise a fire alarm signal over anything else. As well as the source select switch, the front panel features 4 LEDs to indicate which source is currently selected. These 4 LEDs are multi-coloured RGB devices and can also be set to show the level of their associated source. For robustness and to prevent damage during demanding use the DIVINE features a recessed volume control and recessed rear connectors and controls. A unique feature of the DIVINE is a

small rear panel display allowing for setup and configuration of a vast array of functionalities, including setting of EQs, source priority, power-saving mode, and disabling of the front panel controls. For increased flexibility, the DIVINE can be controlled by our Windows 10 application GlenController. The application will have two main functions: to allow the volume and source of multiple DIVINEs to be synchronised, making stereo and multichannel working much easier than ever before, and also to facilitate easy updates across the network when even more features become available in the future. The DIVINE is aluminium diecast, powder-coated and portable, hardwearing and robust in design and build quality, ensuring it is able to withstand the knocks and bumps that come with outside broadcast and production environments. The recessed knobs, switches and RJ45-encased XLR mean that they are much less likely to become damaged during transit or in use. As well as being free standing, it has a rear panel VESA mount for easy installation and a useful mic thread on the bottom, meaning it takes seconds to fit to a mic stand. A steel grill across the front of the loudspeaker, fitted as standard, affords extra protection for the drive unit beneath.
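The source-priority behaviour described above (talkback ducking programme audio, or a fire alarm overriding everything) amounts to a simple gain rule. The sketch below illustrates that rule; the gain values and duck factor are assumptions for illustration, not Glensound's DSP settings.

```python
# Hypothetical sketch of priority mixing: when the designated priority
# source is active, all other active sources are ducked to a low gain.

def mix(sources, priority=None, duck_gain=0.2):
    """sources: dict name -> (level, active). Returns per-source gains."""
    priority_active = priority in sources and sources[priority][1]
    gains = {}
    for name, (level, active) in sources.items():
        if not active:
            gains[name] = 0.0
        elif priority_active and name != priority:
            gains[name] = duck_gain       # duck non-priority inputs
        else:
            gains[name] = 1.0
    return gains

# Director talkback is active, so programme audio is ducked, not muted.
inputs = {"programme": (0.8, True), "talkback": (0.7, True)}
gains = mix(inputs, priority="talkback")
```

Ducking rather than muting is the useful design choice here: the operator keeps situational awareness of programme audio while the priority source stays clearly intelligible.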



GV STRATUS One provides a cost-effective, fully featured set of workflow tools, with ingest (SDI, web streaming or file-based workflows), asset management, editor integration, playout and social media publishing all in a single, 2RU, easy-to-deploy solution. This allows multiple users to collaboratively deliver content more efficiently. Apart from being a single all-in-one solution, one of the key benefits of GV STRATUS One is its ease of deployment. The unit is preconfigured for multiple workflows to allow for instant use. For mission-critical news operations, GV STRATUS integrates seamlessly with the EDIUS NLE platform, resulting in a highly optimized hybrid editing workflow so journalists can mix and move low- and high-resolution assets across multiple sites, and perform full editing in the field. Editing can be performed anywhere via an HTML5 web client, and collaborators can access material as it’s ingested (normally within 15-20 seconds), then begin editing it simultaneously. Because proxy files are used extensively and intelligently, there is no need for complex SAN configurations (although you can, of course, scale up at any time). Key features include: •4 Channels of Playout and Record •SDI Ingest and Playout (720p and 1080i) •GVRE Transcoding •16 TB (usable) Local storage (over 300 hours at 100 Mb/s) •10 Simultaneous Users (floating licenses)

•GV STRATUS Rules Engine: Simplifies the organization and delivery of content •Streaming server for frame-accurate web streaming •Growing File support – start editing whilst media is still being ingested •Social & CMS distribution/management GV STRATUS One is a product that belongs at the core of customers’ production workflows. The platform provides a cost-effective, fully featured set of workflow tools, with ingest, asset management, editor integration, playout and social media publishing all in a single, easy-to-deploy solution, allowing multiple users to collaborate to deliver content more efficiently. Customer benefits include: News Stations/Smaller Groups •Simplify management of social media •Allows for immediate deployment of content for stations with a limited amount of staff, space and equipment resources. Live Venues •Rules-based automation to clip, tag and deliver clips efficiently. •2nd Screen Platforms: In-stadium experiences, video monitors, venue-based

applications. Education •Cost-effective solution for broadcast programs to teach and train students on media production workflows. •Create, record, store, edit, distribute, and stream educational webinars or online programs •Turnkey solution for sports broadcasting programs. Houses of Worship •Simplify workflow of content for producers with minimal production experience. •Capture footage from multiple cameras during live services, transcode and edit content simultaneously, and distribute highlights online via website or social media channels GV STRATUS One is not limited to smaller facilities. Customers that already use GV STRATUS can deploy the system as a backup for disaster recovery (DR) operations. With consumer attention a hotly contested commodity, keeping services on air is critical. GV STRATUS One allows broadcasters to consistently serve their audiences from either a primary or secondary location until the main systems are back on air.



Haivision SRT Hub is an intelligent low-latency cloud media routing service built on Microsoft Azure for broadcast contribution, production, and distribution workflows. SRT Hub is the ideal solution for broadcasters seeking alternatives to costly satellite links, purpose-built fiber networks, or proprietary transport solutions. With SRT Hub, global media workflows connecting vendors and spanning multiple Azure datacenters and broadcast services can be defined, orchestrated, and launched within minutes. SRT Hub removes the complexity of spinning up regional cloud resources and determining the best path through the internet. Leveraging the latest Microsoft Azure cloud architecture, SRT Hub automatically provisions the processing resources (containers) required in any data center to intelligently scale and route media from source to destination. By removing these complexities, SRT Hub provides on-demand media routing with massive scalability. This kind of scalability is important. When news is breaking, broadcasters need fast, on-demand ways of getting their content from the field to production, even from unexpected locations worldwide. SRT Hub helps broadcasters and video service providers easily build live and file-based content

routing workflows on-demand, with security and reliability. SRT Hub combines the global reach of Microsoft Azure with the security and reliability of the SRT video streaming protocol. Microsoft Azure has the largest footprint of dedicated fibre of any cloud provider in the world and is available in 140 countries worldwide - this ensures that the first-mile hop to the cloud is as short as possible, no matter where the source is located. With this unprecedented reach, combined with the SRT protocol’s low-latency Automatic Repeat Request (ARQ) packet recovery system, broadcasters won’t need to worry about losing packets along the way. SRT Hub’s reach is increased by its ability to connect to production workflows. SRT Hub can automatically route media into and out of third-party broadcast systems through the use of connectors called Hublets. Hublets are designed to support live-to-live, live-to-file, and file-to-file workflows for

delivering content to cloud services/microservices or on-premises systems. On September 9, 2019, Haivision announced the SRT Hub Partner Program and the availability of the SRT Hub Software Development Kit (SDK) with initial integrations with products from Avid, Cinegy, Haivision, LightFlow, Microsoft, Telestream, and Wowza Media Systems. The SRT Hub Partner Program, supporting third-party vendor integration with the newly available SRT Hub SDK, provides an open and documented framework to develop input, output, and processing Hublets that can further extend the capabilities of SRT Hub. Visit our directory of SRT Hub partners: https://www.haivision.com/srthub-partner-ecosystem/ With its open and extensible partner ecosystem, SRT Hub is adaptable to specific workflow needs. Hublets are designed to support live-to-live, live-to-file, and file-to-file workflows for delivering content to cloud services/microservices or on-premises systems. These Hublets enable broadcasters to connect multiple solution vendors in a unified workflow. Whether delivering a live feed to cloud storage for collaborative editing or to production suites at the broadcast center, SRT Hub is the fastest and easiest way to bring content in from the field and get it to air quickly.
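The ARQ recovery that SRT relies on can be sketched generically: the receiver tracks sequence-number gaps and asks the sender to retransmit only the packets that went missing. This illustrates the idea, not the SRT wire protocol or its NAK packet format.

```python
# Generic sketch of ARQ-style packet recovery: the receiver notes gaps in
# the sequence numbers it has seen and builds a retransmission request
# (a "NAK list") covering only the missing packets.

class ArqReceiver:
    def __init__(self):
        self.received = {}
        self.expected = 0            # next in-order sequence number

    def on_packet(self, seq, payload):
        self.received[seq] = payload
        while self.expected in self.received:
            self.expected += 1

    def nak_list(self):
        """Sequence numbers to request again (gaps below the highest seen)."""
        if not self.received:
            return []
        top = max(self.received)
        return [s for s in range(self.expected, top)
                if s not in self.received]

rx = ArqReceiver()
for seq in (0, 1, 3, 4):          # packet 2 was lost in transit
    rx.on_packet(seq, b"data")
missing = rx.nak_list()           # receiver would send a NAK for these
```

Because only the gaps are retransmitted, recovery costs little bandwidth; the trade-off is a small receive buffer that holds later packets while the missing one is re-fetched, which is where the "low latency" tuning of SRT matters.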



InterDigital R&I’s Immersive Lab develops solutions for tomorrow’s interactive media environment, enhancing the immersive video experience. The Immersive Lab has developed a fully automatic “Digital Double” solution to quickly digitize a person’s face and upper body. This “avatar” creation technology enables users to quickly create a fully animatable virtual character. By streamlining 3D facial animation, the solution automatically develops digital doubles that look more realistic and life-like than what is seen today with similar automatic processes. Creating digital doubles for TV and film production has traditionally been a manual and tedious process in which production teams hire dedicated artists to create digital doubles. Citing photos and scans, the artists then use software to sculpt, model and paint each individual’s face until it is as realistic as possible. Not only is this a long, arduous and costly process, but it is often necessary to avoid producing avatars that lack the ‘human factor’: standard scanned avatars often lack the believable, life-like characteristics they could have, and real humans can perceive the difference.

InterDigital R&I’s Digital Double technology completely automates and simplifies the digital double creation process. Using a rig of 14 cameras, the technology computes a full 3D mesh of the face and upper body from the images to automatically create a digital double - in just 25 minutes. This digital double is unique because it is developed using images captured by the camera rig, creating precise facial expressions for animation and more life-like avatars than traditional tools available today. Once the digital double has been created, content producers can use it to create video content. As we enter the 5G era, InterDigital

R&I’s Digital Double tool will become increasingly important to content producers. With the ultra-low latency and high bandwidth characteristics of 5G, users will desire, and be able to enjoy, more immersive video experiences. The applications of the digital double technology are vast and largely untapped. With this technology, individuals could see themselves, or their friends, as a character in a film or TV show in real time. Or they could even virtually participate in a TV game show in which viewers see themselves alongside the TV presenter, contestants, and the audience on screen. InterDigital R&I’s Digital Double tool can also be applied to video gaming, where an individual’s digital double can be used and adapted to each context, or even dynamic news with a digitally created virtual television presenter. The digital double has the potential to enhance the immersive video opportunities of the future, but the technology has already been adopted by content producers today. Production companies like Disney and Paramount already use digital doubles in their films.



TICO-RAW CODEC Engineered at intoPIX, TICO-RAW is an innovative, lossless-quality, low-power, low-memory and line-based image processing and compression technology. Thanks to innovative processing and coding, the full power of the image sensor is preserved while reducing the bandwidth and storage needs. It offers high image quality and the capability to manage high-resolution, high-frame-rate and high-dynamic-range workflows. Offering unprecedented image quality, TICO-RAW is the world’s first codec that can offer this level of compression efficiency with such low complexity. Simply said, TICO-RAW creates files at JPEG sizes, with the full flexibility of an original RAW file. To put numbers on it: an 8K60 TICO-RAW stream can go down to 2 Gb/s (1 bit-per-pixel compression), with lossless quality and 0.1 milliseconds of latency - perfectly suitable for latency-critical environments, movies or live production. The technology is extremely low-power and tiny in an FPGA, and fast and powerful in CPU or GPU. This way, TICO-RAW can be implemented in any RAW video workflow, all the way from the camera to the final processing stage or storage. For video cameras and editing workflows, TICO-RAW enables: •Drastic power reduction •Simpler support of UHD, HFR and HDR in RAW workflows •Bandwidth reduction during real-time transmission over network infrastructures

without inducing latency •Efficient reduction of stored RAW imagery on storage media •Increased decoding speed while retaining the sensor data needed for complete control of the RAW processing pipeline

Unlike other camera-specific RAW codecs in use today, TICO-RAW is designed for full interoperability. It is therefore truly oriented towards the entire industry in order to simplify the way we all work with RAW image data.
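The 2 Gb/s figure quoted for 8K60 at 1 bit per pixel checks out with plain arithmetic (assuming 8K UHD at 7680 x 4320; no intoPIX tooling is involved here):

```python
# Verifying the quoted bitrate: 8K60 compressed to 1 bit per pixel.
width, height, fps, bits_per_pixel = 7680, 4320, 60, 1
bitrate_bps = width * height * fps * bits_per_pixel
bitrate_gbps = bitrate_bps / 1e9     # roughly 2 Gb/s, as the text cites
```

For comparison, the same stream at a typical 12-bit RAW sensor depth would be about 24 Gb/s, which shows why a 1 bit-per-pixel codec matters for moving RAW over ordinary network links.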



BITSAVE AI-POWERED ENCODING PLATFORM The BitSave AI-powered upscaling encoding platform by iSize is a first-of-a-kind, fully AI-powered encoding platform that takes advantage of AI and deep learning to enable up to 70 percent bitrate saving versus non-AI competing codec enhancement solutions. It is important to note that BitSave is 100 percent codec independent, which means that it is not beholden to any standard or format and can be applied to any application, platform, or workflow that has to move data quickly and efficiently. The proprietary AI technology that drives BitSave substantially increases the efficiency and performance of all the latest codec standards including AVC/H.264, HEVC/H.265, and VP9, thus ensuring seamless integration with existing media workflows. Speaking of standards, BitSave does this without breaking any standards because it is an add-on solution for an existing codec infrastructure rather than a replacement. Being codec-independent means that BitSave can reduce video delivery system bitrate requirements without waiting for the often-lengthy process of new codec standards to be developed and ratified. BitSave has already been proven to accelerate encoding by up

to 500 percent, primarily because of its intelligent approach to pre-coding pixels before they enter an encoder, thus providing the encoder with a smaller, easier, and faster-to-process footprint right from the start. When people stream video today, they basically choose a streaming recipe that has been pre-cooked according to the platform they want to stream over. That almost always means there is a massive loss of quality, and if you’re in an area with poor Wi-Fi, as many people around the world still are, a 4K stream will quickly get switched to a much lower quality stream. So iSize accepted that challenge. The feeling was not to just optimise for low-level metrics like signal-to-noise ratio. They aimed higher. Much higher, into the scientific realm of perceptual metrics. BitSave is essentially a “pre-coder”

that performs perceptual optimisation of the content - as well as finding the best possible resolution for the content according to the desired bitrate stream - before the pixels reach the encoder. That means that BitSave is a straight pixel-to-pixel engine that ingests and produces pixels that are pre-processed to optimise whatever comes next, no matter what encoder is being used or whatever standard is being worked in. This approach has several advantages. As mentioned earlier, BitSave doesn’t break anything. BitSave is also unconstrained by standards because it produces a pixel output, which means it can do things like use advanced neural networks and add multiple quality and loss functions that just can’t be achieved otherwise. By essentially reverse engineering the metrics, BitSave takes advantage of a massive amount of research that has already been done in the scientific community regarding the perceptual assessment of content. BitSave can take an input stream and produce an output that is fully optimised according to the accepted standards of the perceptual-quality-metrics academic community, delivering much better perceptual quality while using far fewer bits than required by standard encoders.
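The pipeline shape the text describes (a pixel-in, pixel-out stage ahead of any standard encoder) can be sketched as below. The block-averaging downscale stands in for BitSave's proprietary neural processing, which is not public; only the pre-coder-then-encoder structure is the point, and all names are illustrative.

```python
# Hypothetical sketch of a "pre-coder" pipeline: a pixel-to-pixel stage
# runs first, so whatever standard encoder follows is untouched and
# simply receives an easier-to-process frame.

def precode(frame, factor=2):
    """Downscale a 2-D list of pixel values by block-averaging."""
    h, w = len(frame), len(frame[0])
    out = []
    for y in range(0, h, factor):
        row = []
        for x in range(0, w, factor):
            block = [frame[y + dy][x + dx]
                     for dy in range(factor) for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def encode(frame):
    """Stand-in for any standard encoder (AVC, HEVC, VP9...)."""
    return f"encoded {len(frame)}x{len(frame[0])} frame"

frame = [[10, 20, 30, 40], [10, 20, 30, 40],
         [50, 60, 70, 80], [50, 60, 70, 80]]
result = encode(precode(frame))     # encoder sees a smaller frame
```

Because the pre-coder emits ordinary pixels, nothing downstream needs to know it exists, which is exactly the standards-compatibility argument the text makes.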



IRON MOUNTAIN INSIGHT Today’s media organizations often find themselves with large content libraries spread across a complicated ecosystem of diverse systems and data types, making it difficult and time-consuming to serve the right content to the right audience at the right time. A lack of standardization across departments also negatively impacts their ability to generate business insights from their physical and digital assets – which can impact future growth. Add in the fact that many modern M&E organizations simply lack the internal skills, systems and resources to realize the potential of their media assets, and it’s clear that they have a serious challenge on their hands. But what if this wasn’t the case? What if these media organizations could easily identify media assets such as photographs, video and film and the relevant information contained within them? What if they could protect their brand by quickly finding copyright and other IP infringements? What if they could identify celebrity individuals and associated brand logo placement surrounding them for contract negotiation with sponsors? The key to achieving all this is metadata. As the amount of rich media content continues to explode, applying metadata to describe and categorize this content is vital, particularly when it comes to providing better search visibility. The quality of the metadata directly affects an organization’s ability to find content efficiently and accurately – and therefore optimize content for specific audiences.

This is where Iron Mountain InSight comes into play. Powered by Google Cloud’s machine learning and artificial intelligence (AI) algorithms, the solution gives M&E businesses the power to unlock the potential of their media assets. Iron Mountain InSight’s machine learning technology rapidly analyzes vast amounts of unstructured content and enriches it with meaningful metadata, shining a light on ‘dark data’ and classifying different types of content stored on multiple repositories on-premises or in the cloud. This, in turn, lets businesses drastically increase their speed and effectiveness in accessing and assessing their media inventory – as well as reducing risk through automated information governance. Iron Mountain InSight, named Google Cloud Technology Partner of the Year for AI and Machine Learning in April 2019, also generates actionable insights and

predictive analytics to help M&E businesses monetize their valuable content libraries and drive future value for the organization. For example, film studios can use its simple, visual and intuitive user interface to easily search, find and ‘clipify’ content, while sport broadcasters can identify specific players and associated brand logo placement within minutes to support contract negotiations. Ultimately, it has never been more important for businesses in the media and entertainment space to maximize the potential value of their media assets. From enabling them to better service their customers, to driving operational efficiencies and opening up new revenue opportunities, Iron Mountain InSight gives M&E organizations the power to transform the way they manage content and monetize their content libraries for many years to come.
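As a toy illustration of the enrich-then-search loop described above — analyze unstructured assets, attach metadata, then query the inventory by tag — here is a minimal sketch. The classifier, tag names and functions are invented stand-ins, not Iron Mountain's or Google Cloud's actual API.

```python
# Illustrative sketch: enrich assets with machine-generated tags, then search
# the inventory by tag. classify() is a stand-in for a cloud ML service.

def classify(asset_name: str) -> list[str]:
    """Stand-in for an ML classifier: derive tags from the file name alone."""
    tags = []
    if asset_name.endswith((".mov", ".mxf")):
        tags.append("video")
    if "match" in asset_name:
        tags.append("sports")
    return tags

def enrich(inventory: list[str]) -> dict[str, list[str]]:
    """Attach metadata to every asset, turning 'dark data' into searchable records."""
    return {name: classify(name) for name in inventory}

def search(index: dict[str, list[str]], tag: str) -> list[str]:
    """Find every asset carrying a given tag."""
    return sorted(name for name, tags in index.items() if tag in tags)

index = enrich(["match_final.mxf", "interview.mov", "notes.txt"])
```

The point of the sketch is the shape of the workflow, not the classifier: once every asset carries metadata, search becomes a cheap index lookup instead of a manual review.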



MOTORSPORTS LIVE VIEWING AND DATA-DRIVEN MEDIA SOLUTION PLATFORM LiveU has been working with Griiip, a media solution technology platform for motorsport series, for the past two years. Griiip launched its G1 racing series in 2018, using LiveU cellular bonding technology as a compelling, cost-effective way to deliver a flawless HD live video experience – from inside the racing cars, around the tracks, and from airborne drones – without the need for any complex infrastructure. The solution has been successfully implemented in this year’s G1 Series across Europe, watched by fans around the world, including on ESPN in Brazil. At IBC 2019, we’re revealing a new motorsports live viewing and data-driven media solution platform, bringing viewers live action directly from the track, or other motorsport venues, and intensifying viewers’ experiences. The solution combines: •LiveU’s high-quality and reliable remote live broadcast technology, using the ability to mount high-gain antennas externally on a car’s rooftop for its LU300 and LU600 HEVC series, which is crucial for high-motion video during the race. •Griiip’s data content solutions for engaging motorsport programming, using collected race data to create compelling layers and insights for fans and drivers using AI and deep analysis. LiveU’s and Griiip’s joint products offer

a plug & play solution for collecting, editing and distributing live videos, combined with data-centric content for storytelling and enhanced viewer engagement, enriching the broadcasting of motorsports content like never before. This is all achieved at a price point that makes this a highly viable, end-to-end solution for motorsports of all types, creating an engaging video and data package and making racing content more compelling for racing series and, as a result, for broadcasters, streaming platforms and viewers. See this video clip: watch?v=ph4qCL97TJk&feature=youtu.be
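The bonding principle LiveU builds on — splitting one packet stream across several cellular links in proportion to their measured capacity, so no single carrier limits the stream — can be sketched roughly as follows. The weighted-credit scheme, link names and figures are illustrative assumptions, not LiveU's implementation.

```python
# Minimal sketch of cellular bonding: assign packets to links round-robin,
# weighted by each link's measured capacity. Figures are invented.

def bond(packets: list[bytes], link_kbps: dict[str, int]) -> dict[str, list[bytes]]:
    """Distribute packets across links in proportion to link capacity."""
    total = sum(link_kbps.values())
    shares = {name: kbps / total for name, kbps in link_kbps.items()}
    assigned = {name: [] for name in link_kbps}
    credit = {name: 0.0 for name in link_kbps}
    for pkt in packets:
        for name, share in shares.items():
            credit[name] += share          # each link accrues its fair share
        best = max(credit, key=credit.get)  # link most "owed" a packet
        assigned[best].append(pkt)
        credit[best] -= 1.0
    return assigned

# A link three times faster carries roughly three times the packets.
demo = bond([b"pkt"] * 10, {"carrier_a": 3000, "carrier_b": 1000})
```

Real bonding also handles reordering, retransmission and per-link latency; this sketch shows only the capacity-weighted split.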




There are HDR to SDR conversion tools which apply “global correction” across HDR content. However, for truly exceptional results that automatically adapt to changes in image composition and brightness levels encountered in production or live events, frame-by-frame processing is required. This processing can be accomplished in post-production, which is non-real-time, expensive and time-consuming. HDR Evie, powered by greenMachine, addresses these issues by offering automated, real-time, frame-by-frame HDR to SDR conversion. One difficulty in progressing from standard dynamic range (SDR) to high dynamic range (HDR) is the need to provide downward compatibility with the huge number of SDR television receivers already installed. The new generation of broadcast cameras can capture a wider scene contrast range than conventional displays can show, so converters are clearly needed to handle the different formats. HDR Evie was developed to enable live

automatic contrast compression, avoiding the artefacts inherent in classic tone mapping techniques. Statistical analysis is performed on every image, preserving as much of the scene contrast range as necessary while compressing as little as possible. For example, it is not expedient to set a very dark exposure to prevent a few small, irrelevant highlights from clipping while the main part of the image is underexposed. Based on models of the human visual system, especially the perception of and preferences relating to contrast, the algorithm calculates the amount of clipping, or rather contrast compression, required. HDR Evie is an automatic process which analyses the incoming HDR image on a frame-by-frame basis. Changes in luma also affect the color impression, which is why the chroma is automatically adjusted to the new luma. Temporal filtering is performed to avoid flicker. HDR Evie provides more than simply better image quality; it also affects the way in which content is produced. By optimizing the lighting balance and exposure, aperture setting becomes

less crucial. As long as the information in the captured scene is not absolutely crushed or burnt out, HDR Evie will try to map it into the final contrast range. As a result, controlling aperture is much less critical on the production side. In a typical televised football game, for instance, some areas of the image may be obscured by roof shadows while others are bathed in overly bright sunlight. The aperture would normally need to be ridden constantly, and the overall image would still likely be fairly poor. HDR Evie can deliver both a constant f-stop and a balanced image as long as there is no clipping at the camera output. Sensors that can deal with a higher dynamic range therefore work better. Of course, the f-stop still has to be adjusted if the lighting conditions change substantially; with a very good HDR camera sensor, a single cloud would not affect the result, but an overcast sky would require a new setting. Production flexibility can be increased by reducing the need to light a scene. Watch this reel: watch?v=ObEdUd3yU4A&t=14s.
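The per-frame pipeline described above — analyse the luma statistics of each frame, choose a clip point that sacrifices only small, irrelevant highlights, then temporally smooth the control parameter so cuts don't flicker — might be sketched like this. This is a deliberately simplified caricature, not the actual HDR Evie algorithm, which the text says is based on models of the human visual system.

```python
# Hedged sketch of frame-by-frame tone mapping in the spirit described above:
# percentile statistics choose a white point per frame, the frame is compressed
# against it, and the parameter is temporally filtered across frames.

def clip_point(luma: list[float], keep_fraction: float = 0.99) -> float:
    """Percentile-based white point: let only the brightest ~1% of samples clip."""
    ordered = sorted(luma)
    idx = int(keep_fraction * (len(ordered) - 1))
    return ordered[idx]

def tone_map(luma: list[float], white: float) -> list[float]:
    """Compress scene contrast into the 0..1 SDR range against the chosen white."""
    return [min(v / white, 1.0) for v in luma]

def smooth(prev_white: float, new_white: float, alpha: float = 0.1) -> float:
    """Temporal filtering of the control parameter, so exposure doesn't flicker."""
    return prev_white + alpha * (new_white - prev_white)
```

Note how the percentile choice encodes the trade-off from the text: a tiny specular highlight (the 10.0 sample below) is allowed to clip rather than darkening the whole frame.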



Mediaproxy is a leading provider of IP broadcast solutions specializing in software-based monitoring systems for logging and analyzing linear television and OTT production and distribution chains. Based in Australia but with international offices and users around the world, Mediaproxy has been developing specialized programs for broadcasters, on-demand services and streaming operators since 2001. LogServer is Mediaproxy’s flagship product and its entry for this year’s IBC Awards. It is a full suite of software that offers a comprehensive range of tools for Compliance, Monitoring and Analysis. These areas are increasingly necessary in a broadcast market that has become more complex through the growth and popularity of OTT services, as well as the proliferation of traditional TV channels and OTT platforms. Compliance ensures video material going into a playout system conforms to the specifications set by not only broadcasters but also regulators and governments around the world. Monitoring allows technicians to check that content continues to comply with regulations and tests for any faults through the distribution process across different platforms. Analysis identifies on-air problems during transmission. Since the original launch of LogServer, Mediaproxy has introduced regular incremental upgrades that respond to changes in broadcast technology and techniques as well as the needs of operational personnel.

Among these introductions is exception-based monitoring. This works on the concept of IP ‘penalty boxes’ that allow broadcasters and multiple system operators to efficiently monitor and analyze the large amount of video material and accompanying data involved in distributing programs. Another addition is a live source comparison feature, which is able to identify mismatched content in real time. LogServer also incorporates the latest standards and open protocols. These include the NewTek NDI network device interface and SMPTE specifications for distributing material using new technologies, specifically ST 2022-6 (transport of high bit rate media signals over IP networks) and ST 2110 (media over IP formats). Both of these have been deployed in the field to service the growing use of new uncompressed IP formats. LogServer also features integrated Ember+ control protocol capability, which further enhances monitoring automation and offers increased

redundancy. Another upgrade is the ability to work with Mediaproxy’s Monwall IP interactive multiviewer. In response to leading broadcasters now gearing up to launch 8K services, the latest version of LogServer is able to handle signals in the higher resolution. This is achieved by converting any input, including 8K, to a proxy resolution. The system is additionally capable of handling HEVC and TSoIP encoded streams. Broadcasting continues to react to and adopt emerging technologies. Mediaproxy is similarly looking at how best to harness the potential of developments such as artificial intelligence (AI). To this end the company will preview new AI tools for identifying content during IBC. Broadcasting today, on any platform, relies on accurate, efficient logging and analysis of media throughout the distribution process. Mediaproxy LogServer is the most comprehensive and up-to-date package of compliance software tools available.
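The 'penalty box' concept behind exception-based monitoring can be illustrated with a minimal sketch: healthy channels stay out of the operator's way, while any channel that raises an alarm is promoted to the box until it has been healthy again for a hold-off period. The class, hold-off rule and channel names are assumptions for illustration, not Mediaproxy's implementation.

```python
# Sketch of exception-based monitoring with a "penalty box" (names invented).

class PenaltyBox:
    def __init__(self, holdoff: int = 3):
        self.holdoff = holdoff  # consecutive healthy polls required for release
        self.boxed = {}         # channel -> healthy-poll streak while boxed

    def poll(self, channel: str, healthy: bool) -> None:
        """Record one monitoring poll for a channel."""
        if not healthy:
            self.boxed[channel] = 0          # enter (or stay in) the box
        elif channel in self.boxed:
            self.boxed[channel] += 1
            if self.boxed[channel] >= self.holdoff:
                del self.boxed[channel]      # stable again: release

    def to_watch(self) -> list[str]:
        """Only the exceptions - what the operator actually needs to look at."""
        return sorted(self.boxed)

box = PenaltyBox(holdoff=2)
box.poll("ch1", False)          # alarm: ch1 enters the box
box.poll("ch2", True)           # healthy: never surfaces
watch_before = box.to_watch()
box.poll("ch1", True)           # recovering...
box.poll("ch1", True)           # ...stable long enough: released
watch_after = box.to_watch()
```

The hold-off prevents a flapping channel from bouncing in and out of the operator's view, which is the practical point of the penalty-box approach when hundreds of streams are monitored at once.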



CYGNUS 360 EVENTS To overcome the challenge of delivering live 360-degree video content to different classes of devices, MediaKind developed its Cygnus 360 Events solution. Launched in April 2019, the solution provides a cloud-based workflow for live 360-degree video processing and multi-platform publishing which enables consumers to become fully immersed and engaged in their favourite live event. The Cygnus 360 Events cloud-based workflow combines Tiledmedia’s ClearVR streaming technology and MediaKind’s microservice-based HEVC encoding to process and publish viewport-adaptive, tiled 360-degree live video, while also simultaneously formatting and processing for publishing on social media platforms. Cygnus 360 Events minimizes the costs and risks associated with delivering live events through enhanced 360-degree video coverage instead of, or alongside, current TV production. This multi-platform publishing solution enables broadcasters and operators with content rights, especially in premium sport and eSports content, to access new monetization opportunities and reduce subscriber churn by providing an augmented and differentiated service that enhances the viewing experience for consumers. The ability to deploy this cloud-based solution on an event-by-event basis offers service providers scalability and cost-efficiency, while

offering viewers and subscribers better value for money, with the flexibility to watch live events on a pay-per-view basis or via advertising. By deploying Cygnus 360 Events, operators and content owners can efficiently capture and process high-quality live 360-degree video at up to 8K resolution and deliver immersive content to the consumer at delivery bitrates of between 10 and 15 Mbps. From a single high-quality contribution source, live 360-degree video can be processed and delivered in suitable resolutions and formats for simultaneous live publishing to an operator’s app over a public Content Delivery Network (CDN) as well as via social media platforms. From a consumer perspective, Cygnus

360 Events puts fans of live sports, esports and music in the producer’s seat, with the opportunity to self-curate and become more engaged in their viewing experience. This can be achieved through enhanced 360-degree live video coverage, either directly via a head-mounted device or viewed on a multiscreen device as an augmented and complementary experience to existing HD/UHD broadcast or streamed services.




The Ncam Mk2 is a camera bar that mounts freely on any camera, with Intel® RealSense™ technology that captures spatial data and feeds it back to the Ncam Reality server, enabling sophisticated augmented reality (AR) work. Ncam has partnered with the Intel® RealSense™ group to create the completely redesigned camera bar. The Intel RealSense technology has been adapted and integrated by Ncam to meet the requirements of the media and entertainment industry, while the latest Ncam Reality software runs on standard HP workstations that provide industry-trusted processing power with easily

accessible global support. This powerful mix of Ncam’s software and industry knowledge, Intel RealSense technology and HP’s trusted workstations combines to make the Mk2 an incredibly accurate, robust and flexible camera tracking product. The addition of the Intel RealSense technology has enabled Ncam to reduce the size and weight of the Mk2 to a fifth of its predecessor’s; the Mk2 weighs just 332.6g, with dimensions of 158mm x 39.1mm x 38mm. The weight and size reduction makes it much easier to use with handheld and stabilised rigs, which gives directors and camera

operators more options for shooting. It also provides easier and better balanced rigging. In addition, the Mk2 features more flexible mounting options with industry standard fixture points. The rugged casing is designed to be ‘set proof’, to protect the optical elements and ensure the bar can withstand the daily grind of production. The robust nature of the Mk2 also makes it more accurate, since everything is calibrated in manufacturing, meaning that users don’t have to re-calibrate on set. All existing Ncam customers will have an upgrade path to the Mk2 camera bar.



QXL – 25G IP ENABLED RASTERIZER The QxL, the world’s most flexible and most compact feature-rich 25G rasterizer, has been conceived from the outset to address the needs of professional broadcast media IP workflows. IP media interfaces are provided as standard, and SDI media interfaces with optional SDI Eye and Jitter measurement are available as a factory-fitted option. The flexible, user-friendly GUI provides up to 16 user-configurable windows with presets for rapid visualization of different traffic and workflow configurations. The QxL is fully 10G/25G IP-enabled, with support for JT-NM TR 1001-1:2018, ST 2110-20 (video), 2110-30 (PCM) and 2110-31 (AES transport) audio, and 2110-40 ANC media flows, all with 2022-7 Seamless IP Protection Switching (SIPS), and independent PTP slaves on both media ports for fully redundant media network operation. Simultaneous support is provided for 1 video payload, 2 audio payloads of 2110-30 (PCM) or 2110-31 (AES3 transport) at level C operation with automatic detection of audio flow configuration and Dolby™ audio formats, and one ANC flow. The Qx IP JT-NM TR 1001-1:2018 toolset provides support for default DHCP on all IP ports, unicast DNS-SD, AMWA IS-04 Discovery and Registration, IS-05 Connection Management, system resource, and Network Topology Discovery using the Link Layer Discovery Protocol (LLDP). The user interface and stereo monitoring bus are available as IP media flows as standard and can be switched into HDR

mode. The optional IP to single-link SDI and AES3 gateway provides up to 16 channels of embedded audio and 4 AES3 outputs as part of the base chassis. A suite of operator-level IP flow health and PTP monitoring is provided, with warnings and alarms. For detailed analysis and debug, the new IP-Measure toolset provides advanced engineering-grade information, including four 2022-7 Packet Interval Time (PIT) displays, media port network statistics, real-time measurements of flow-to-PTP relationships and latency, plus real-time measures of 2110-21 Cinst and Vrx. The QxL is among the first devices of its type for which SDI is an option – not part of the core. It therefore delivers an unparalleled price/performance entry point for the modern HD IP broadcast ops user

– with the flexibility to evolve the firmware for UHD over IP, QC, engineering and HDR on the same platform to meet the evolving needs of the business, future-proofing the investment. The QxL retains the same highly innovative and accessible user interface as the Qx, meaning that there are minimal re-training costs for existing Qx users – and that the complexities of ST 2110 and NMOS operation are presented to the user in an intuitive and accessible manner. The optional SDI interfaces still provide all of the advanced SDI connectivity and analysis that PHABRIX is famous for, with the optional, highly advanced development- and manufacturer-grade SDI-STRESS toolset already pioneered by its smaller cousin, the PHABRIX Qx.
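As a flavour of the engineering-grade measurements mentioned above, a Packet Interval Time display reduces to simple arithmetic over packet arrival timestamps: the gaps between consecutive packets of a flow, summarised as minimum, mean and maximum. The timestamps below are invented; this sketch is not PHABRIX's toolset.

```python
# Illustrative Packet Interval Time (PIT) statistics for one media flow,
# from a list of packet arrival timestamps in microseconds (invented values).

def pit_stats(arrivals_us: list[int]) -> dict[str, float]:
    """Min/mean/max inter-packet gap from consecutive arrival timestamps."""
    gaps = [b - a for a, b in zip(arrivals_us, arrivals_us[1:])]
    return {"min": min(gaps), "mean": sum(gaps) / len(gaps), "max": max(gaps)}
```

On a healthy, well-shaped flow the three values sit close together; a large max relative to the mean is the kind of burstiness that 2110-21 Cinst/Vrx measures are designed to catch.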



At IBC Pixit Media will show the latest version of PixStor, its leading software-defined storage platform. Purpose-built for demanding media requirements, PixStor dramatically improves workflows and reduces the cost of infrastructure and shared services. This latest release offers significant enhancements for users, including seamless deployment of cloud workflows and security services aligned to stringent industry standards. Fully-Automated Cloud Deployment Constantly faced with tight project deadlines and competing for new projects, creative organisations need to burst to the cloud to gain access to more compute power at peak times. PixStor users can now rapidly deploy cloud storage infrastructure and integrate on-premise resources without compromising the sustained performance and flexibility they expect from their PixStor install. Driven from guided menus offering a variety of sizing options, with simple push-button deployment and billing handled entirely within Cloud Marketplaces, customers are free to spin up PixStor at exactly the moment they need it. Every PixStor in the Cloud installation features Ngenea, Pixit Media’s flagship data transport mechanism, which allows the automatic transfer of only the data that is needed between on-premise and the cloud. Having this level of intelligence in the data transport helps users meet deadlines by allowing work to start faster, while reducing significant egress

and infrastructure costs through the elimination of unnecessary file copies. PixStor is now available on Google Cloud Platform (GCP) Marketplace. This solution enables creative organisations to rapidly expand on-premise render pipelines into Google Cloud – whilst still enjoying all the benefits of an on-premise PixStor deployment. PixStor in AWS is coming soon! Secure container services New PixStor Secure Container Services address industry demands for the highest levels of security and data separation to protect high-value assets. Containers

unlock true multi-tenancy from a single storage fabric. This enables media organisations to deploy audit-compliant media environments at scale, by hosting isolated instances of SMB and NFS across multiple networks with no performance overhead. The alternative would be to stand up an entirely dedicated infrastructure stack for each client and physically separate its networks - adding significant complexity, disruption and expense, which can make doing business cost-prohibitive. Since its launch in Spring 2019, a number of leading organisations have already deployed PixStor’s secure container services, including: * Red Bee Media: recently deployed PixStor as the central storage platform across its multi-regional managed playout services. Using secure containers, Red Bee can offer multi-tenant secure data services natively within the PixStor shared environment, with one infrastructure providing logical separation and isolation of the data. They can scale services - spin up and down - with no performance overhead or disruption to other tenants. * Jellyfish Pictures: uses PixStor as its central storage platform across its multiple sites in the UK. With PixStor secure containers, the company can deploy secure, TPN-accredited media environments for its A-list clients, including Netflix, and take advantage of the economies of scale of a centralised storage system without the spiralling costs and productivity hits associated with operating discrete stacks.
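The selective data movement attributed to Ngenea above — transferring only the data a job actually needs, rather than copying whole projects into the cloud — reduces, at its simplest, to a set operation. The job-manifest idea and function name are illustrative assumptions, not Ngenea's real interface.

```python
# A minimal sketch, assuming a job manifest lists the files a render needs:
# move only files that are required, held on-prem, and not already in cloud.

def plan_transfer(on_prem: set[str], cloud: set[str], manifest: set[str]) -> set[str]:
    """Files to move: needed by the job, present on-prem, absent from the cloud."""
    return (manifest & on_prem) - cloud
```

Everything outside the result set stays put, which is where the egress and duplicate-copy savings described above come from.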



AI FOR CRICKET - CLEAR VISION CLOUD Sports broadcasters and streaming platforms are always looking for new ways to engage fans and deliver immersive experiences that get them closer to the real-time action. PFT’s media recognition Artificial Intelligence (AI) engine, CLEAR™ Vision Cloud, offers a custom model for cricket that helps achieve this. Vision Cloud mimics human learning and wisdom to make sense of cricket. It recognizes cricketing actions ball-by-ball, reads on-screen score graphics, analyzes in-stadia sound and discerns commentary. The engine is trained to recognize an exhaustive range of elements including cricket ball segments, thumbnails and synopsis for each ball, runs scored per ball, type of shot, wagon wheel of the shots played, fours, sixes, replays, crowd excitement levels, celebrations, wickets, bowling type, batting type etc. with over 90% accuracy. Machine-cut highlights from Vision Cloud match human-edited highlights by more than 87%. It brings in AI-based audio smoothening during transitions, and identifies non-sporting actions and events like Toss, Match Summaries, Team Line-ups, Huddles etc. to stitch together a complete story in the highlights. This cricket machine learning solution has 3 layers – 1) an Instructional Learning layer that codifies cricket rules to form the basis of recognition; 2) a Machine Learning layer that synthesizes different

dimensions and facets using over 10 engines to analyse the content from specialised perspectives - compound objects, actions, score, sounds etc.; and 3) a Machine Wisdom* layer that combines both by validating all facts and cues, moving back and forth in time and tagging validated attributes for each ball. The highlights are created using learnable business rules, and the quality of highlights improves over time as we learn what works better for a highlight package and what does not. Vision Cloud solves specific pain points which off-the-shelf models cannot solve and provides flexibility to harness intelligence from PFT’s homegrown

models as well as the industry’s finest AI engines. The model has been built on the back of PFT’s decade of experience collecting, curating, and annotating content (400 million tags to date). Vision Cloud enables editors/producers to create and publish sports highlights on the fly for live matches in near real-time. With Vision Cloud, content administrators can get fast, accurate search results across archives. They can also get new insights from footage for use in live commentary and post-match analysis, resulting in improved storytelling. The engine also helps OTT platforms deliver a create-your-own-highlights experience and a powerful free-text search option for their end users. The solution’s unparalleled accuracy in cricket tagging and highlight creation, its ability to power out-of-the-box immersive viewing experiences, and its prowess in delivering multiple business benefits to sports content creators make it truly award-worthy. It has immense potential to help sports broadcasters and OTT platforms achieve unprecedented scale and speed in sports production. The model’s capabilities can be extended to other sports as well, and it can be built quickly using footage from content rights owners. *Patent pending - “A system and method for automatic tagging of cricket metadata”
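The three-layer design described above — codified rules, per-engine machine learning, and a validating "wisdom" layer on top — can be caricatured in a few lines. The events, confidence threshold and single rule below are invented, not PFT's model.

```python
# Toy rendering of the rules / ML / wisdom stack for tagging each ball
# (all values and rules are invented for illustration).

def rules_layer(event: dict) -> bool:
    """Instructional learning: codified cricket rules filter impossible tags
    (a single delivery scores at most 6 runs off the bat)."""
    return 0 <= event.get("runs", -1) <= 6

def ml_layer(event: dict) -> float:
    """Stand-in for per-engine confidence (score OCR, audio, action model...):
    the weakest engine's confidence bounds the combined confidence."""
    return min(event.get("confidences", [0.0]))

def wisdom_layer(events: list[dict], threshold: float = 0.9) -> list[dict]:
    """Keep only events that pass the rules AND that all engines agree on."""
    return [e for e in events if rules_layer(e) and ml_layer(e) >= threshold]
```

The point of the top layer is cross-validation: a tag survives only when the rules permit it and every specialised engine independently supports it, which is how spurious single-engine detections get rejected.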



ELASTIC DATA VIEWER FOR AI Primestream is bringing Artificial Intelligence to multiple industries by providing easy access to media with AI in the cloud. Primestream enables users to analyze and modify video metadata alongside AI data sets in a simple-to-use platform. Primestream’s mission is to lead the way in developing systems that optimize media creativity, to move toward a reality in which working with media is as intuitive as speaking your native language. To achieve this, Primestream has a comprehensive MAM + AI solution that offers: •Automated multi-language Transcription workflows that eliminate the need to send multiple copies of the same file to extract different languages within media. •A natively integrated AI workflow with full support for a large range of AI classification results including facial, topic and keyword recognition. •Data Viewers dedicated to reviewing and modifying AI metadata. The interface is designed with AI data in mind to make sense of the vast amount of information accessible to the user. •A centralized, web accessible MAM platform for video and media assets allowing organizations to organize large libraries of media content generated by multiple departments wherever they are.

•A workflow automation engine that simplifies and automates repetitive tasks, including AI processing, while keeping users notified via email on the progress of their workflows. Practical Considerations For decades, media content has been a hand-crafted product, relying on skilled craftspeople to use their time and talents for even the trivial, mundane, repetitive steps. Just as the industrial revolution ushered in manufacturing automation to physical factories, AI offers similar productivity opportunities to media “factories.” Until recently, media management was about collecting, tracking, and utilizing information about data files. Any information about the content in those files relied on some level of human interaction, and the time and potential for error associated with it. AI provides the opportunity to systematically learn about the content the data represents—

and use that knowledge to make better, more efficient decisions about workflows. At Primestream not only are we showing users the results of AI-generated visual and audio transcripts of content, but also providing a framework for using AI metadata to drive workflow activity. And the more we know about content, the more efficient we can make workflows. This is the greatest benefit of AI in media production: using computers to discover useful, interesting, or necessary moments in content, and freeing up humans to make better decisions about next steps. Primestream partnered with the Organization of American States (OAS), implementing AI to revolutionize the way the organization conducts its 35 member-country sessions. In this case, the AI integration was employed to recognize each representative speaker within any video generated from any session of the OAS. Concurrently, the audio content was transcribed from multiple languages so it could quickly be delivered to the member countries, which include every country in the Americas. This resulted in a significant improvement over the previous manual process—producing a finished product in a matter of days, rather than the approximately 30 days once required to accomplish the same tasks.
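Metadata-driven workflow automation of the kind described — AI metadata deciding the next processing step for an asset — can be pictured as a small rule engine. The rules, field names and step names below are hypothetical, not Primestream's actual configuration.

```python
# Hypothetical rule engine: inspect AI-generated metadata and choose the
# next workflow step for an asset (rules and step names are invented).

def next_step(metadata: dict) -> str:
    """Route an asset based on what the AI already knows about it."""
    if metadata.get("language") != "en":
        return "translate"            # non-English audio goes to translation
    if not metadata.get("transcript"):
        return "transcribe"           # no transcript yet: run speech-to-text
    if metadata.get("faces"):
        return "notify_editor"        # recognized people: flag for review
    return "archive"                  # nothing further to do
```

In the OAS example above, rules like these are what turn raw recognition results (speakers identified, languages transcribed) into automatic routing, replacing the manual triage that used to take weeks.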




Over the years, media and entertainment organisations have been very adept at using a variety of methods for storing media, including spinning disks, hard drives, magnetic tape, celluloid and film. However, as higher-resolution formats and faster frame rates enter the market, adding to the massive volumes of content in both size and density, more pressure is placed on media workflows and the storage they require. The industry is acutely aware that performance matters when dealing with UHD/4K, 360-video and, soon, 8K projects, and as such is realising the need for an ultra-fast, highly available storage array for editing, rendering, and processing of media content. However, there is one storage solution that stands above all others. If the Oscars gave awards for the optimal storage solution to tackle the advent of 4K and 8K, then NVMe flash-based storage would be taking home ‘best actor of the year’. In fact, organisations are seeing the opportunities in deploying NVMe flash-based storage, such as the ability to

accelerate ingest, transcoding, rendering and playout, while giving editors a more responsive experience for working with multiple streams of high-resolution content. Quantum is tapping into this with a high-performance, highly available and reliable line of non-volatile memory express (NVMe) storage arrays, called the F-Series. Designed specifically for cutting-edge media workflows, studio editing, rendering and other performance-intensive workloads, Quantum’s F-Series integrates with Quantum’s Cloud Storage Platform and its StorNext file system to deliver powerful end-to-end storage capabilities, driving better margins and enabling more customer value for post-production houses, broadcasters and other rich media environments. With the latest ‘Remote Direct Memory Access’ (RDMA) networking technology, the platform also provides direct access between workstations and NVMe storage devices, delivering predictable and fast network performance.

With the introduction of NVMe flash, this new technology can remove performance bottlenecks so that organisations can achieve the full benefits of flash-based storage. It can also help address key performance requirements, enhance storage resource usability, and improve overall storage economics. While the cost of flash-based storage has forced some organisations to limit flash investments or restrict flash implementations to very narrow use cases, NVMe storage offers an important opportunity to capitalise on the performance benefits of flash, while still controlling costs. An organisation can attain Fibre Channel–level performance using much more cost-effective Ethernet technology. Depending on the size of the organisation, savings made on networking by using Ethernet instead of Fibre Channel could run into the tens or hundreds of thousands of dollars, which can be offset against the investment in NVMe storage. In the end, the total budget could remain flat, while the total aggregate performance to clients increases.
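The budget argument above can be made concrete with a worked example. All figures below are invented for illustration: the point is only the arithmetic, where the network saving from choosing Ethernet over Fibre Channel offsets the NVMe premium, leaving the total flat while client-side throughput rises.

```python
# Worked example of the flat-budget argument, with invented figures.

fc_network_cost = 180_000    # hypothetical Fibre Channel switching + HBAs
eth_network_cost = 60_000    # hypothetical 100GbE equivalent
ssd_array_cost = 250_000     # hypothetical SAS flash array
nvme_array_cost = 370_000    # hypothetical NVMe array (premium over SAS)

old_total = fc_network_cost + ssd_array_cost    # FC + SAS flash
new_total = eth_network_cost + nvme_array_cost  # Ethernet + NVMe

network_saving = fc_network_cost - eth_network_cost
nvme_premium = nvme_array_cost - ssd_array_cost
```

When the saving equals the premium, the budget is unchanged even though the NVMe array serves far more aggregate throughput to clients.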



R&S is a cloud-based A/V OTT monitoring solution that broadcasters and content providers can deploy quickly and without dedicated hardware. With increasing reliance on either public or private cloud solutions, customers require virtual probes and OPEX-based deployment. Furthermore, as the media services offered to consumers change over time, customers want the flexibility of OPEX to track those demand variations. The R&S solution is designed to support these trends and allows customers to deploy cloud-oriented monitoring simply, easily and quickly, while changing their own cost model efficiently. R&S enables OTT providers to adapt their monitoring infrastructure, for example when peak loads during the transmission of large events require extended service. Thanks to the easy-to-use setup wizard, it only takes minutes to add new virtual sensors at different locations and connect them to the dashboard at any time. A live multiview function, automated analysis of A/V data and error assignment in real time make it possible to permanently measure the quality of service (QoS) or experience (QoE), store it in a cloud and visualize it in a timeline format on a web interface. R&S extends the

broadcaster’s view beyond their own premises and across their delivery ecosystem in an all-IP, end-to-end approach. In combination with the widely used on-premise R&S PRISMON monitoring solution, analysis data from physical and virtual sensors can be displayed together. By monitoring the content and its adaptive bitrate profiles continuously, R&S helps content providers to safeguard their QoS, and is entirely deployable from an internet browser on a single dashboard. End-to-end analyses reveal errors such as deterioration of video or audio quality or poor CDN performance as a cause of churn – with just one tool. Infrastructure at broadcast or media distribution companies requires monitoring, whether based on IT or baseband media system types. In live media delivery solutions, the consumer delivery path will inevitably include live production, management,
distribution or contribution links over either type. The cloud service integrates seamlessly with physical on-premise probes of the R&S PRISMON family and so provides a complete and in-depth overview. Transferring audio/video monitoring data from strategically located on-premise R&S PRISMON probes into the R&S dashboard delivers an integrated end-to-end view of your media streams, including terrestrial, satellite, cable and OTT networks:
•Place on-premise probes between process steps
•Locate R&S sensors at data centre locations close to your customers
•Combine on-premise and virtual sensor data in the R&S dashboard
The advantage of this cloud-based infrastructure and Rohde & Schwarz's constant development is that customers are always on the newest version. The flexible deployment and dynamic up/down scaling of monitoring resources guarantee OPEX savings. Live multiview, automated analytics of the content and fault detection that works in real time allow permanent tracking of the customer experience. All of that comes in a single GUI, without the need for any additional hardware investment.
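The combined on-premise/virtual sensor workflow described above amounts to merging time-stamped QoS samples from many probes into a single dashboard timeline and flagging faults in real time. A minimal sketch of that aggregation, with hypothetical probe names and a single packet-loss metric standing in for the real QoS/QoE measurements:

```python
from collections import defaultdict

def merge_probe_timelines(probe_reports):
    """Merge per-probe QoS samples into one dashboard timeline.

    probe_reports: {probe_name: [(timestamp, metric_dict), ...]}
    Returns a sorted list of (timestamp, {probe_name: metric_dict}).
    """
    timeline = defaultdict(dict)
    for probe, samples in probe_reports.items():
        for ts, metrics in samples:
            timeline[ts][probe] = metrics
    return sorted(timeline.items())

def flag_errors(timeline, max_loss=0.01):
    """Return (timestamp, probe) pairs whose packet-loss ratio exceeds the threshold."""
    return [(ts, probe)
            for ts, per_probe in timeline
            for probe, m in per_probe.items()
            if m.get("packet_loss", 0.0) > max_loss]
```

In a real deployment the per-probe samples would arrive continuously over the network; here they are passed in as complete lists purely for illustration.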



ROSS ULTRIX IP SOFTWARE DEFINED PLATFORM Ultrix-IP and its associated control innovations are the logical evolution of Ross' award-winning Ultrix connectivity platform. The Ultrix IP-IO board now allows customers to route and process signals via both IP and SDI transport streams agnostically, using industry standards, within one of the most powerful, integrated software-defined hardware platforms. Ultrix IP's UHD switching, integrated multi-viewers, full audio processing, frame synchronization, clean switching, and an architecture supporting continuing software feature progression uniquely allow customers to deploy complete broadcast solutions for SDI, hybrid, or even all-IP topologies, depending on their requirements. When combined with the Ultricore IP software license of the Ultricore BCS, users can build and operate their systems just as they do with today's control surfaces, lowering the barrier to transitioning to this technology. Ultrix allows users to optimize their system architectures with a fully integrated solution, minimize disruption to current workflows by presenting I/O independent of type within a seamless matrix space, and assist the transition to IP by providing an upgradeable platform and a complete integrated feature set – routing, multiviewing, processing and audio management. Consistent Workflow & Control

Utilizing Ultrix, Ultrix IP and Ultricore IP as a combined solution means no learning curve for operational staff: continue to use traditional control and workflows to switch, process, and monitor signals across systems, agnostic of I/O type. It's all about the little things: an innovative approach to air flow and thermal management, reversible rack mounting for tight installations, a creative BNC/IP connector layout for easy maintenance, and critical redundancy and operational access for professional environments.
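The "seamless matrix space" idea – presenting SDI and IP I/O to control surfaces as a single router – can be illustrated with a toy crosspoint model. This is a sketch only; the port names and the `take` verb are illustrative conventions, not Ross's actual control API:

```python
class RoutingMatrix:
    """Type-agnostic crosspoint router: SDI and IP ports share one matrix
    space, so control treats them identically (an illustrative sketch)."""

    def __init__(self):
        self.inputs = {}       # port name -> transport type ("SDI", "ST2110", ...)
        self.outputs = {}
        self.crosspoints = {}  # output name -> currently routed input name

    def add_input(self, name, transport):
        self.inputs[name] = transport

    def add_output(self, name, transport):
        self.outputs[name] = transport

    def take(self, output, input_):
        """Route an input to an output regardless of either side's transport."""
        if input_ not in self.inputs or output not in self.outputs:
            raise KeyError("unknown port")
        self.crosspoints[output] = input_

    def source_of(self, output):
        return self.crosspoints.get(output)
```

The point of the model: the operator's `take` looks the same whether the source arrives over SDI or ST 2110, which is what keeps existing workflows intact during an IP transition.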

Superior Performance Extending the tradition of Ultrix baseband, Ultrix IP continues to differentiate by providing the highest-performance integrated solution. Given its unique architecture and technology patents, Ultrix guarantees the highest signal integrity and processing in the industry.




Ross PIERO is an award-winning sports analysis tool. PIERO uses image recognition and state-of-the-art graphic overlays to augment sports content with visually engaging and informative effects. PIERO brings new angles to every game, on the screen and in the studio. PIERO systems provide cutting-edge graphics and analysis modules for over 20 different sports. The wide range of effects offered by PIERO includes data visualization, heatmaps, speed and distance, 3D replay, movable players and many more.


SINGULAR.LIVE SINGULAR.LIVE is a cloud-based digital overlay platform that is revolutionizing live video stream production. With a robust authoring environment, built-in control applications, integration with the industry's leading streaming software and devices, and an open API and SDKs for additional integration and customization, Singular is a complete platform for adding animated overlays to livestreams. Singular's overlays are entirely HTML-based and can be used in powerful, innovative ways. With Singular's local rendering, each viewer can have a personalized experience, with targeted ads, local date and time, customized color themes, and even different overlays or information from what a viewer on another device sees. Local rendering also means less equipment: there's no need for expensive render engines or licenses; any computer with an internet browser is all it takes. Singular also allows for interactive overlays, increasing engagement by letting viewers participate directly with the stream they're watching. Users can vote on polls and see live-updating
results or click to see specific stats and overlays during a sports stream. Singular overlays can also be authored in adaptive mode to automatically resize based on the viewing device: users on mobile phones and tablets will see different versions of overlays than users on laptops or smart TVs, so every viewer gets the best possible user experience. While Singular is a web platform – allowing for cheaper and greener remote productions without the need for travel or shipping equipment – it can also be utilized in SDI or NDI environments, allowing Singular to fit into both traditional and new broadcast workflows. What's more, Singular is free to use. With powerful and dynamic overlays, an enhanced user experience, simple tools for authoring and control, a small equipment overhead, and a wide array of partners and integrations, Singular ushers in a new age of broadcast and streaming possibilities at a fraction of the current cost. Singular is the new standard for digital overlays.



A.M.O.S. - AUTONOMOUS METADATA ORIENTED SCHEDULING Executive summary Stream Circle's A.M.O.S. is a revolutionary, rule-based AI engine which builds continuous play-lists fully automatically and autonomously using static and dynamic metadata. When connected to a content source, it will immediately create and broadcast a new channel, all at minimum cost. The growing cost of human labour, higher numbers of channels, and growing volumes of content available on the market, together with lower margins per channel, are impacting the broadcasting industry; the time to minimise office-related human labour and automate as much as possible is rapidly approaching. The A.M.O.S. solution developed by Stream Circle is based on the following major components:
•Metadata-rich and extendable content management – storage of all content-related metadata (technology, classification, availability, prioritisation, presentation, usage history, performance),
•Schedule-independent content-presentation preferences and timelines – the preferred way of presenting content (how to play it) and its presentation timeline (schedule-independent secondary events),
•Content-criteria-based planning (selection and prioritisation) – transmission rules which contain the content specification, prioritisation, volume restriction and presentation preferences, instead of specific content items,
•Template-based presentation rules –
primary and secondary event presentation rules for various scenarios and content preferences,
•Demand-driven autonomous content scheduling and presentation – "just-in-time" automated content scheduling and presentation, both generated by the automatic scheduler and driven by demand from the autonomous play-list controller,
•"Just-in-time" commercial content upload – commercial content uploaders and integration with programmatic sales solutions, ready to upload commercial content just before it is needed in the play-list,
•Play-list controller with continuous play-list extendability – an autonomous play-list controller running a continuously extendable play-list independently from the scheduler,
•Metadata-driven play-list events – primary-event and graphics-event processing with direct access to content metadata, with metadata-driven, rules-based and scripted graphics,
•Instant event as-run log and performance reporting – event-by-event as-run log, event reporting and instant feedback to the content metadata storage.
Content metadata is a crucial element for EPG, A.M.O.S. and other processes. Stream Circle now offers a new extendable metadata model, adding custom attributes as well as timelines to its content management. All of this metadata can be used for EPG presentations, dynamic graphics or automated primary and secondary event scheduling. A.M.O.S. is a revolutionary and groundbreaking technology which will bring maximum automated efficiency and increased profitability to the broadcasting industry. Among the numerous advantages of using autonomous schedulers / TV channels are: opening more channels with no additional human effort for planning, presentation editing and playout; using content extensively across various distribution channels; and the ability to save costs on the acquisition and production of more content. A.M.O.S. delivers an ingenious content-centric solution which utilises content metadata to produce continuous play-lists based on channel specifications and preferences without the involvement of human labour. The first version of the A.M.O.S. solution is available for customer scenario verification.
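The demand-driven, rule-based scheduling described above can be sketched in miniature: a transmission rule filters the content pool by criteria, and priority plus usage history decide which asset plays next, with the play-list extended just in time. The field names and the ranking policy below are simplifying assumptions for illustration, not Stream Circle's actual rules engine:

```python
def pick_next(content_pool, rule, history):
    """Choose the next play-list item: filter by the rule's criteria, then
    prefer the highest-priority, least-recently-played asset.
    content_pool: list of dicts with 'id', 'genre', 'priority' metadata.
    rule: simplified transmission rule, e.g. {'genre': 'drama'}.
    history: list of previously played ids (most recent last)."""
    candidates = [c for c in content_pool if c["genre"] == rule["genre"]]

    def last_played(c):
        # 0 = just played; larger = played longer ago; never played = largest.
        return history[::-1].index(c["id"]) if c["id"] in history else len(history) + 1

    # Highest priority first; among equals, the one played longest ago (or never).
    candidates.sort(key=lambda c: (-c["priority"], -last_played(c)))
    return candidates[0] if candidates else None

def build_playlist(content_pool, rule, slots):
    """Demand-driven loop: the play-list controller requests items just in time."""
    history, playlist = [], []
    for _ in range(slots):
        item = pick_next(content_pool, rule, history)
        if item is None:
            break
        playlist.append(item["id"])
        history.append(item["id"])
    return playlist
```

With two equal-priority dramas in the pool, the loop alternates between them automatically, because the usage history feeds back into the next selection, which is the essence of the autonomous, metadata-driven approach.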



OPTIQ MONITOR At IBC, Telestream will introduce a second OptiQ live service. OptiQ Monitor creates major efficiencies in CAPEX and OPEX while assuring optimum levels of quality of service and quality of experience for broadcasters, service providers and network operators. Successfully delivering monetized, rights-protected, high-quality events or channels requires more than just a world-class encoder or packager. It starts with a good knowledge of what the broadcaster or streaming service provider is delivering, gained through pervasive video monitoring and analytics. OptiQ Monitor targets customers that have already put in place the infrastructure required to support their live streaming channels but have no monitoring infrastructure, especially post-CDN: it enables users to integrate a superior level of video monitoring without needing to modify anything in their existing delivery chain. Building on this through OptiQ Channel, Telestream can provide all the necessary packaging, encoding and ingest
environments to help customers build high-quality live channels quickly and easily. A key feature of the OptiQ framework is the ability to deploy Telestream technology in any public cloud data center. With OptiQ Monitor, Telestream can now select any cloud data center, or as many as are required, and specify the types of monitoring probes that customers want to push into those data centers. Then the system architect hits 'go' and the entire monitoring network is automatically built up to perform robust QoS and QoE monitoring of a customer's live streaming channels, even if they are not using OptiQ Channel to create those channels. OptiQ Monitor allows users to observe how their CDNs are performing across multiple geographies. Also, they can
monitor the performance of video encoders across their entire distribution network. If this performance is suboptimal, Telestream has a fast and cost-efficient solution: OptiQ Channel delivers robust and efficient live streaming channels as a service in a completely cloud-deployed way, and OptiQ Monitor enables successfully delivered channels in highly efficient and cost-effective ways. Without effective monitoring, you don't have a channel. If you don't monitor extensively and have granular visibility of the channel across all the geographies it serves, and the devices and platforms you seek to leverage, then you can't be confident that you are delivering a high-quality channel. Good visibility of the health of a channel centres on the ability to monitor and analyse video data. Telestream's new OptiQ Monitor enables users to integrate this level of video monitoring without needing to modify anything in their existing delivery chain. Building on this through OptiQ Channel, Telestream can then provide all the necessary packaging, encoding and ingest environments to help customers build high-quality live channels quickly and easily. But always, video quality monitoring comes first!



PT-RE-2 (ROBOEYE2) ROBOTIC PAN/TILT/ZOOM CAMERA More fully featured, more reliable and higher performing than any PTZ camera on the market today, the new Telemetrics RoboEye2 (PT-RE-2) is an aesthetically pleasing indoor robotic camera system with a built-in lightweight 4K/HD broadcast camera, equipped with a 1"-type EXMOR R™ CMOS sensor and zoom lens, fully integrated with a Telemetrics compact pan/tilt head. It also features a Night Mode for low-light conditions and professional image stabilization for rock-steady image capture. The system's robotic servo controls, accessed via a web-based graphical user interface, leverage motors of ultra-high position and velocity accuracy to deliver smooth, steady, consistent on-air camera moves, which also makes the RoboEye2 the perfect choice for use with augmented reality or virtual systems – without any required peripherals. In fact, the system is VR-ready out of the box: all positional information can be sent to render engines for seamless AR/VR projects. A network interface allows users to access video and control online, while accommodating PoE (Power over Ethernet), HD-SDI and USB interfaces as well as IP/serial/WiFi/GSM 3G-ready control. The RoboEye2 is supplied with a universal mounting bracket that accommodates wall-, ceiling- and shelf-mounted installations. Other helpful features include: audio mic input (line-level stereo); universal lens control;
sync input (GL); and dual network interface cards. Future versions will include an embedded H.264/H.265 encoder for live streaming applications. The system's rugged design satisfies military applications as well as demanding commercial and industrial applications where high reliability and smooth, accurate camera performance are critical.

Custom colors are available to fit any décor. The RoboEye2's robotic servo controls and rugged design suit TV studios, government facilities and large-room surveillance, ensuring high reliability and smooth, accurate camera performance for a wide range of IP-based remote production control applications.



ThinkAdvertising debuts at IBC 2019 – the industry's most personalised addressable advertising solution ThinkAdvertising takes addressable advertising to a new level – far more granular than other solutions on the market, which work only at a household level. ThinkAdvertising injects deep behavioural insight and segmentation analysis into targeted ad ecosystems for more potent ad campaigns, boosting engagement and driving incremental operator revenues. Starting with household-level profiling, it builds a detailed picture of individuals' viewing behaviour in that household over time. It goes beyond understanding that a particular viewer is a fan of baseball and football, and learns which teams, players and competitions they watch and when. This results in advertising that is more personalised and localised than previously possible. It provides operators with multiple individual-level attributes for advertisers to select from, leading to more highly targeted campaigns. By combining ThinkAdvertising's insight with in-house customer and demographic data, operators can offer advertisers the ability to target precise consumer segments, creating incremental revenue streams and making it easier for brands and advertisers to focus on business outcomes. Developed in response to requests from several customers, ThinkAdvertising lets operators generate a wide mix of individual-level attributes that advertisers can pick and choose from as a basis for affordable, highly targeted, dynamic ad insertion for broadcast and streamed TV.

Operators can optimise the value of their ad inventory while opening up a gold mine of incremental revenue opportunities from existing brands and advertisers; they can also reach a new generation of TV advertisers looking to target specific audience segments without paying traditionally high national TV ad rates. ThinkAdvertising is backed by the deep learning of ThinkAnalytics' personalised user profiling, which understands the moods and emotions inherent in the content and indicated by viewer behaviour. This is crucial for advertisers, driving recognition and delivering a positive ROI. ThinkAdvertising's deep insights from individual viewing and behavioural data are supported by ThinkAnalytics' viewer personalisation and segmentation models. The firm's AI- and machine-learning-driven content discovery platform uses demographic data to deliver richer, more accurate profiling. Highly targeted advertising campaigns are proven to improve impact and effectiveness, including: a reduction in channel switching, increased enjoyment of TV advertising, increased ad engagement,
higher resonance of brand messaging, greater recall and higher purchase intent. ThinkAdvertising is available as a standalone solution and can be easily integrated with other analytics platforms and ad decision services such as Castoola; at IBC, ThinkAnalytics is showing an integration with Castoola. Building on ThinkAnalytics' leading position in content discovery and viewer analytics, ThinkAdvertising offers operators vast opportunities for incremental revenue streams, whether or not they are existing ThinkAnalytics customers. ThinkAdvertising is also available as part of the ThinkAnalytics suite, which includes the content discovery platform and the real-time analytics platform ThinkInsight. ThinkAnalytics is the leading content discovery solution worldwide, enabling TV operators to deliver personalised viewing experiences that result in significant uplifts in engagement, loyalty and ARPU. The firm serves over 80 TV operators with 250+ million subscribers in 43 languages, including Liberty Global, BBC, DAZN, Deutsche Telekom, Sky, Astro, Tata Sky and Vodafone.



IS-MINI 4K - REAL-TIME DIGITAL VIDEO COLOR PROCESSOR The IS-mini 4K, a real-time digital video color processor (4K/UHD color LUT box), is used for on-set camera preview during live event shooting, SDR and HDR broadcasting, and post production. The system supports standard SDI formats up to 12G-SDI, and offers 12G-SDI bypass output plus simultaneous 12G-SDI and HDMI 2.0 conversion output. In combination with WonderLookPro, the most powerful LUT creation and real-time color management software, the IS-mini 4K provides highly accurate color conversion and color management. The IS-mini 4K can be used simultaneously with different camera models to achieve a consistent look. One camera log source can be converted to HDR and SDR simultaneously by selecting the same parameters, and HDR/SDR grading can be done simultaneously as well, giving the visuals a consistent look. Color space conversion is accomplished by selecting the same parameters, and typical conversions are provided as presets applied with just one click. The IS-mini 4K can also be used for 4K HDR on-set grading: simply selecting the camera/mode and rendering begins the on-set grading process. Users can also achieve powerful color correction simply using a mouse and keyboard or tangent panels. Shots can be stored in groups in a library and easily accessed. A control panel allows metadata monitoring as well as camera control. A
waveform monitor allows live viewing and image analysis, and various conversions are provided as presets accessed with just one click. For post-production applications, the IS-mini 4K easily converts existing LUTs for other camera inputs and for other output color spaces in different file formats. Using the IS-mini 4K as a pattern generator, WonderLookPro can measure a monitor's color characteristics automatically and generate an error-compensated LUT for the monitor, which can then be matched with a mastering monitor. WonderLookPro, the most powerful color management software, is provided to all IS-mini users for free to create and deliver best-in-class unique 3D LUTs to IS-mini 4K hardware and 3rd-party
hardware such as the AJA FS-HDR in real time. WonderLookPro provides color space / camera color conversion LUTs with various preset renderings, plus your own looks created with various color correction methods, which can be used on a LUT box or camera in real time or saved as LUTs for use in various systems; WonderLookPro is an unrivalled solution for managing and creating color and looks. In addition to the IS-mini series, WonderLookPro also supports third-party hardware including BoxIO, Teradek, ASTRO's SB series, the Varicam series, Alexa/Amira and the AJA FS-HDR. The Varicam/EVA1 and Alexa/Amira can be controlled directly by WonderLookPro for recording start/stop, WB, EI and shutter. Metadata can be monitored and displayed on the UI or OSD.
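At the heart of any LUT box is the per-pixel application of a 3D LUT, interpolating between lattice points of the cube. A minimal pure-Python sketch of trilinear 3D-LUT interpolation follows; the LUT layout (a dict keyed by lattice indices) and the indexing scheme are illustrative only, not the IS-mini's internal format:

```python
def apply_3d_lut(rgb, lut, size):
    """Apply a size^3 3D LUT to one RGB triple (components in [0, 1]) using
    trilinear interpolation. lut[(r_i, g_i, b_i)] -> output (r, g, b)."""
    def lerp(a, b, t):
        return tuple(x + (y - x) * t for x, y in zip(a, b))

    coords, fracs = [], []
    for c in rgb:
        pos = c * (size - 1)
        i = min(int(pos), size - 2)   # lower lattice index, clamped at the edge
        coords.append(i)
        fracs.append(pos - i)
    r, g, b = coords
    fr, fg, fb = fracs

    def node(dr, dg, db):
        return lut[(r + dr, g + dg, b + db)]

    # Interpolate along blue, then green, then red.
    c00 = lerp(node(0, 0, 0), node(0, 0, 1), fb)
    c01 = lerp(node(0, 1, 0), node(0, 1, 1), fb)
    c10 = lerp(node(1, 0, 0), node(1, 0, 1), fb)
    c11 = lerp(node(1, 1, 0), node(1, 1, 1), fb)
    c0 = lerp(c00, c01, fg)
    c1 = lerp(c10, c11, fg)
    return lerp(c0, c1, fr)
```

A production LUT box does exactly this per pixel in hardware (typically with 17, 33 or 65 lattice points per axis); the sketch uses a tiny cube purely to show the interpolation.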



VERSION 6.0 OF VELA LUNA/ENCOMPASS SMARTLOGGERS Vela launches version 6.0 of its Luna/Encompass SmartLoggers at IBC 2019, introducing a series of exciting new compliance monitoring and logging innovations into an extremely feature-rich and comprehensive solution. It allows television broadcasters, MVPDs and media companies to turn what were previously capital cost purchases for the engineering department into operational and strategic investments that enhance competitiveness, deliver value throughout an organization, and produce a return on that investment. Available in the cloud and on-premise, Vela SmartLoggers include every advanced feature at no additional cost, so users benefit from a wide range of capabilities, including: compliance monitoring, confidence logging, multiviewing, QoS alerting, airchecks/ad verification, clipping of original or transcoded streams for repurposing back to air or to digital/social media, content matching with political and competitive analysis and reporting, OTT/retrans content comparison with notification when commercials/content vary, and much more. What makes version 6.0 unique? Version 6.0 contains extremely valuable enhancements and brand-new features, such as:
• New AI/ML tools that enable broadcasters to analyze content and recognize faces, voices, logos, objects, on-screen OCR, and audio/video patterns… and integrate that metadata with the
underlying video, plus perform speech-to-text and differentiate programming from advertising.
• New Content/OTT Compare – Vela has always allowed broadcasters to monitor and import content from any number and combination of input types, but now introduces unique capabilities to compare source content with many OTT/return feeds for automated detection of where commercials/content vary from the original, to confirm and enforce contractual agreements.
• Enhanced ad-sales lead-list generation through monitoring, commercial recognition, and analysis of competitive channels, to prioritize those advertising more on other channels and to improve close rate by providing details such as ratings comparisons.
• Enhanced Notes and Discrepancy Logging enable analysts to track and tabulate political party air time and report on matches for terms like 'Republican Party'. News directors can view, synchronize,
compare, and annotate media with voice/text notes and internal comments. Engineering staff can use the platform to communicate organization-wide about on-air issues and indicate things like 'Make Good Needed'.
• Additional standards support, including SMPTE 2022-6 and 2110, ATSC 3.0, OKTA, and DVB and SCTE-27 subtitles.
• Greater channel density, scalability and reliability with newer, more powerful GPUs/CPUs, enhanced cloud/virtual environment support, and improved fail-over redundancy.
US TV stations and broadcasters have recognized Vela as their preferred Volicon-replacement system: 150 stations have moved to Vela since NAB 2019, including all stations in three more of the top twenty station groups, making Vela by far the largest current US provider with about 250 US customers. Vela Luna and Encompass are the smartest, most comprehensive and feature-rich loggers, giving media companies the most intuitive and powerful integrated user experience. It is akin to when cell phones evolved into smartphones. These Vela SmartLoggers raise the bar, addressing traditional silos of functionality in a unified platform that satisfies multiple customer use cases that previously required multiple tools, each considered an expense. They help operators not only exceed their own engineering standards and comply with the latest regulations, but also increase their market share and produce an ROI as a valuable investment.



VIONLABS CONTENT DISCOVERY PLATFORM Today's consumers spend 25% or more of their screen time looking for something to watch: far too long. Currently, the most common content discovery recommendations rely on external metadata sources for information on the video content in an operator's catalogue, which results in poor recommendations. This in turn leads to viewers leaving a service because they are frustrated by the seemingly limited content choice. Vionlabs created its content discovery platform to solve this market challenge, providing consumers with hyper-personalised recommendations and boosting user engagement. The Vionlabs content discovery platform uses AI and machine learning to analyse each video in great detail and combine this with the viewer's watch history. While metadata still plays a role in content discovery, Vionlabs doesn't believe it is anywhere near enough by itself. AI has enabled Vionlabs to completely rethink the metadata paradigm, using a number of different neural networks to find patterns in colours, camera movements, objects, stress levels, positive/negative emotions, audio and many more features of the content. Vionlabs' AI engines have been trained
to analyse these variables and produce a fingerprint timeline throughout the content. The AI engine then learns what matters and how changes in these fingerprint timelines are connected to the content individual viewers enjoy. A key element of this is a technique Vionlabs calls content similarity analysis, which compares each content timeline with every other timeline to evaluate how similar each asset is to every other asset. The AI engine uses this content similarity database and the viewer's watch history to provide viewers with the most accurate and relevant recommendations. For example, if a viewer has just finished watching a horror movie, the platform will understand that some viewers may want to explore more films in that genre while others may want to watch a comedy to lighten the mood.
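The content similarity analysis described above boils down to comparing per-asset fingerprint vectors against each other. A toy sketch using cosine similarity, with invented feature values (Vionlabs' actual fingerprint features and distance measure are not public):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def most_similar(target_id, fingerprints, top_n=2):
    """Rank every other asset by similarity of its fingerprint
    (here flattened to one feature vector per asset) to the target's."""
    target = fingerprints[target_id]
    scored = [(cosine(target, vec), asset)
              for asset, vec in fingerprints.items() if asset != target_id]
    scored.sort(reverse=True)
    return [asset for _, asset in scored[:top_n]]
```

In the real system each fingerprint is a timeline rather than a single vector, and the similarity scores for all asset pairs are precomputed into the similarity database the recommendations draw on.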

An additional benefit of the Vionlabs content discovery platform is that it doesn't need any details from the viewer apart from their watch history. This makes individual user profiles redundant, because the AI can combine its understanding of a consumer's favourite content with data points such as device type, time of day, and chronological consumption patterns, so that operators can provide an accurate and relevant experience for all members of a household without the hassle of switching between profiles. At a time when consumers have access to more media and entertainment services than ever before, operators cannot rely on great content alone to attract and retain subscribers. Today's consumers want to be able to easily dive into their favourite programming as well as unearth hidden gems; if they cannot do this, they will desert a service. Vionlabs' content discovery platform helps operators overcome this challenge by using AI and machine learning to significantly improve the accuracy, reliability and relevance of personalisation and discovery for video services, which in turn leads to happier viewers who spend more time and money on a video platform.


X.NEWS INFORMATION TECHNOLOGY GMBH X.NEWS™ - CONCEPTR™ is the award-winning live monitoring, research, collaboration and verification application for media, corporates and public institutions. At IBC we are introducing our next-gen AI-based module, conceptr™, which dramatically increases efficiency and accuracy for each individual user. conceptr™ utilizes the latest machine learning technologies, which will keep us ahead of the curve in our aim to support each user in a fast, accurate and relevant news creation process on any internet-connected device. Personalized, AI-generated hot-topic maps as well as relevance-based results are just two of the features conceptr™ will offer.



THE ZIXI PLATFORM Zixi is an Emmy-winning cloud-based and on-premise software platform that enables broadcast-quality video delivery over IP. The Zixi Platform makes it easy and economical for media companies to source, manage, localize and distribute live events and 24/7 live linear channels with broadcast QoS, securely and at scale, using any form of IP network or hybrid environment. Zixi provides enhanced control in large, complex networks with ZEN Master, a cloud-based platform that provides visual tools to configure, orchestrate and monitor live broadcast channels and events across industry protocols. ZEN Master serves as a virtual master control, allowing content providers to manage individual servers at their source, into and across various cloud configurations, server clusters and different geographies. ZEN Master offers tools like in-stream un-decoded orchestration, alerts, analytics, and the ability to quickly conduct root-cause analysis. Through ZEN Master, stream throughput can be increased as needed while maintaining security, uptime and complete visibility of CPU, network and workflow-related attributes across the media supply chain. Developed with a keen understanding of content providers' operational workflow and virtualized infrastructure needs, ZEN Master now benefits the world's top media clients, like NBC Universal, Bloomberg and Warner Media. Because of ZEN Master, Zixi is trusted to stream some of the biggest live sporting events in the world, including the 2018 Olympics, the 2019 Super Bowl and, most recently, the 2019 Women's World Cup.

With ZEN Master's latest version 13, customers can take advantage of Zixi software enhancements such as: deeper integrations with leading cloud providers like AWS, Google Cloud Platform and Microsoft Azure; universal origination live transcoding in 4K; extended content and business analytics; machine-learning predictive analytics; sequenced hitless failover and bonded hitless failover over hybrid IP networks; customizable reporting; automation and scheduling capabilities; and more. There are many reasons why the Zixi Platform and ZEN Master should be considered for Best of Show at IBC 2019. Zixi stands apart as a leader in IP delivery. As one of the first software providers of video transport over IP, Zixi recognized the need for a broadcast stream management control plane – ZEN Master – that would enable complete confidence in the quality and
performance of content transported over IP. The Zixi protocol, with dynamic FEC and packet-sequenced hitless failover, delivers the most robust stream with the least transport latency of any protocol currently in production use. Sub-second worldwide transport latencies are practical and fully supported by the Zixi platform. Zixi has the unique ability to conduct universal origination live transcoding in 4K, and to package and deliver to CDNs, digital MVPDs, IRDs, social media and more, all while providing control through ZEN Master. Zixi provides a unified workflow with multi-cloud, multi-CDN options for transport and delivery. It has unprecedented interoperability with over 100 OEMs and service providers for a truly connected ecosystem. Zixi serves well over 500 global customers, representing most of the top media brands, with 10,000+ channels delivered daily, enabling the democratization of live video streaming around the world.
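Packet-sequenced hitless failover, mentioned above, can be sketched simply: two copies of the same sequence-numbered stream arrive over different paths, and the receiver emits each sequence number exactly once, so a loss on one path is invisible as long as the other path carried that packet. A toy model follows; a real receiver works from a jitter buffer rather than complete lists, and Zixi's implementation is proprietary:

```python
def hitless_merge(path_a, path_b):
    """Deduplicate two arrival streams of (seq, payload) packets by sequence
    number, then reorder; a packet lost on one path is recovered from the other."""
    seen = set()
    merged = []
    for seq, payload in path_a + path_b:
        if seq not in seen:
            seen.add(seq)
            merged.append((seq, payload))
    merged.sort()  # restore transmission order for playout
    return merged
```

With path A missing packet 2 and path B missing packet 4, the merged output is still gap-free: `hitless_merge([(1, "p1"), (3, "p3"), (4, "p4")], [(1, "p1"), (2, "p2"), (3, "p3")])` yields sequence numbers 1 through 4. This is the same principle standardized for IP media in SMPTE ST 2022-7 seamless protection switching.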



GO-PANEL Designed for broadcast news and other run-and-gun shooting situations, the Zylight® Go-Panel is an LED panel light that features Bi-Color operation, IP65 weather protection and Active Diffusion™ technology for electronic filter adjustments. It is no longer necessary to carry diffusion gels or panels with the Go-Panel, which features electronically adjustable diffusion. A fully dimmable white light with variable color temperatures from tungsten (3200K) to daylight (5600K) that can be adjusted on the fly, it provides a tight 26-degree beam spread for outdoor shooting against the sun. ± Green control makes matching any lighting source quick and easy. A single integrated friction hinge allows perfect placement at any angle. There is no bulky yoke to deal with, and the small hinge keeps the fixture slim for easy travel. Diffusion filters or removable panels are a thing of the past with the Go-Panel, as Active Diffusion makes it easy to adjust diffusion amounts on the fly from the rear of the fixture. This breakthrough technology allows incremental adjustments of diffusion levels without reaching for a gel or diffusion panel. The Go-Panel delivers flexibility in the field while producing almost no heat. It includes a worldwide AC power supply, but can be powered by a 14.4V camera battery through an attached battery plate. Plus, the Go-Panel features an innovative space-saving friction mount allowing one-hand tilting

adjustment along with IP65 weather protection to ensure reliable operation in extreme weather conditions. DMX and LumenRadio are also included for precise fixture control. For the ENG shooter, the Go-Panel

offers simple operation and variable color temperature for quality lighting inside or out on the street. Documentary shooters appreciate its durable construction and efficient battery use for long days in the field.

BEST OF SHOW AT IBC 2019 ADOBE CONTENT-AWARE FILL FOR VIDEO IN AFTER EFFECTS Content-Aware Fill for video in After Effects is leading the charge as an industry first. Powered by Adobe Sensei, Content-Aware Fill leverages intelligent algorithms to remove unwanted objects from video, using an intricate new algorithm to temporally harvest pixels from other video frames, incorporating optical flow, 3D tracking and additional techniques. Artists often spend countless hours creating clean plates for visual effects compositing, but Content-Aware Fill removes a significant pain point in their workflow by enabling them to cleanly remove visual elements quickly, often saving many hours of tedious manual work, sometimes frame by frame. One of the most obvious reasons to use Content-Aware Fill for video is to remove production equipment that has inadvertently been included in a shot, such as boom microphones or special effects wires. Additionally, this tool-set is indispensable for immersive VR projects, as there’s nowhere “off-camera” to hide crew, tripods or lights. There are also endless creative possibilities: creating clean video plates for compositing, removing visual distractions like a car driving through the background of a scene, or eliminating dust on a camera lens. Content-Aware Fill for video in After Effects works by estimating the motion and depth of masked objects throughout a video clip. Marrying dense motion tracking with pixel replacement, the algorithm finds areas of adjacent images and backgrounds and reveals those pixels throughout the clip, and subsequently removes the object by

replacing it with pixels that did not contain the object in the first place. For videos where the region behind the object is never seen, the system employs Content-Aware Fill to guess what is missing. Ultimately, the algorithm figures out how to replace the unwanted pixels with new ones that best match the appearance of the scene.
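The pixel-harvesting idea can be sketched in a few lines. This toy version assumes a static camera and ignores the optical flow and 3D tracking the real feature uses: each masked pixel is simply filled from the nearest frame where the background is visible at that position.

```python
# Toy sketch of temporal fill (real Content-Aware Fill adds optical
# flow and 3D tracking; here pixels are assumed static between frames).
# frames: list of 2D pixel grids; masks: True where the unwanted
# object covers the pixel in that frame.
def temporal_fill(frames, masks):
    filled = [[row[:] for row in f] for f in frames]  # deep copy
    for t, mask in enumerate(masks):
        for y, row in enumerate(mask):
            for x, hidden in enumerate(row):
                if hidden:
                    # harvest the pixel from the nearest frame where
                    # the background is visible at this position
                    for s in sorted(range(len(frames)),
                                    key=lambda s: abs(s - t)):
                        if not masks[s][y][x]:
                            filled[t][y][x] = frames[s][y][x]
                            break
    return filled

# One-row frames: pixel (0, 1) holds the object (9) in frame 0 and
# the background (1) in frame 1, so the fill borrows from frame 1.
frames = [[[5, 9]], [[5, 1]]]
masks = [[[False, True]], [[False, False]]]
assert temporal_fill(frames, masks)[0][0][1] == 1
```

When no frame ever reveals the background, a real system must synthesize pixels instead, which is where the still-image Content-Aware Fill step described above comes in.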

Whether you’re an editor or visual effects artist, tighter timelines and an increase in workload are driving the need for quick-turn post-production processes. Content-Aware Fill for video significantly reduces time spent on a task that almost everyone in the post-production community needs to do.

AGAMA TECHNOLOGIES AGAMA AI ANOMALY DETECTION SELF-LEARNING AI ANOMALY DETECTION – ENHANCING SITUATIONAL AWARENESS FOR VIDEO SERVICE PROVIDERS Agama Technologies, the specialist in video service quality and customer experience, will be showcasing Agama AI Anomaly Detection. Powered by artificial intelligence and machine learning, this extension to the Agama monitoring, analytics and visualization solution provides alarms with unprecedented precision. The result is enhanced awareness of real incidents and elimination of irrelevant alarms, which enables service providers to deliver optimal service quality to customers with greater efficiency. To deliver video services that meet or exceed customer expectations, service providers must act quickly when quality and usage Key Performance Indicators (KPIs) deviate from their normal range. What is normal, however, can change over time and vary greatly depending on the time of day or day of the week. If the number of active subscribers suddenly drops, it matters greatly whether it is early Friday evening or four in the morning on a Tuesday. These fluctuations limit the usefulness of alarms based on fixed thresholds. A more intelligent approach is the way forward. Agama’s AI Anomaly Detection automatically identifies anomalies based on information from every subscriber

and provides actionable alerts, clear visualization of detected anomalies and powerful interactive analytics. Because only anomalous situations are flagged, the number of false positives is reduced, making the NOC (network operations center) more efficient and faster in resolving actual incidents. “Separating actual anomalies from normal variations in KPIs is an excellent example of how AI and machine learning can be applied to video service assurance in a way that addresses real-world needs,” says Johan Görsjö, Director of Product Management at Agama Technologies. Agama’s AI Anomaly Detection employs automated self-learning to recognize patterns in video delivery networks. Acting on information collected in real-time from as many as several

million client devices, such as set-top boxes and OTT player applications, the algorithm predicts how each subset of the population, from whole countries down to individual neighborhoods, will behave based on past observations. With AI Anomaly Detection, service providers can quickly understand where in the delivery chain anomalies occur, what the current situation is, and what has happened before and after the detected anomalies. By detecting real anomalies and putting them into context, the Agama solution creates situational awareness that enables faster analysis and problem resolution. This means that service providers can assure optimal service quality and improve customer experience with greater efficiency and accuracy.
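The time-aware thresholding described above can be sketched as follows. This is an illustrative model, not Agama's algorithm: a per-hour baseline (mean and spread) is learned from history, and a KPI reading is flagged only when it deviates strongly from the baseline for that hour, so a low subscriber count at 4 a.m. raises no alarm while the same count in prime time would.

```python
# Minimal sketch of self-learning anomaly detection (illustrative
# assumptions, not Agama's algorithm): learn a per-hour baseline for
# a KPI, then flag values outside baseline +/- k * spread.
from statistics import mean, stdev

def learn_baseline(history):
    """history: list of (hour, kpi_value) observations."""
    by_hour = {}
    for hour, value in history:
        by_hour.setdefault(hour, []).append(value)
    return {h: (mean(v), stdev(v)) for h, v in by_hour.items()}

def is_anomaly(baseline, hour, value, k=3.0):
    mu, sigma = baseline[hour]
    return abs(value - mu) > k * max(sigma, 1e-9)

# Active subscribers: high on weekday evenings, low overnight.
history = [(18, 100_000), (18, 102_000), (18, 98_000),
           (4, 5_000), (4, 5_200), (4, 4_800)]
base = learn_baseline(history)
assert not is_anomaly(base, 4, 5_100)   # normal overnight level
assert is_anomaly(base, 18, 60_000)     # big drop in prime time
```

A fixed threshold would have to pick one number for both hours; the learned baseline makes the same reading normal at one time of day and anomalous at another.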

AJA VIDEO SYSTEMS KI PRO GO Ki Pro GO is a standalone, multi-channel H.264 recorder and player, offering up to four channels of simultaneous HD or SD capture to affordable, off-the-shelf USB media. Ki Pro GO is the next generation of AJA’s Ki Pro family of production-proven, file-based recording and playback devices, combining intuitive design and flexibility into a compact 2RU, half-rack width form factor. AJA’s family of Ki Pro recording devices has set the precedent and quality standards for ProRes and Avid recording in hardware, and Ki Pro GO leverages the same trusted technology while providing lower bandwidth, lower cost, multi-channel SD and HD video recording and playback for H.264 codecs. Versatile and portable, Ki Pro GO is targeted at a range of production scenarios where H.264 is the primary capture format, including live events, concerts, sports stadiums, corporate, medical, training and beyond. Genlock-free recording eliminates the need to synchronize four input sources, while redundant recording provides multiple backups in the field to ensure recorded video is protected. Four 3G-SDI and four HDMI ports ensure a high-quality source with standard cabling compatibility options. Video and audio input matrix channel mapping simplifies switching between physical inputs to the appropriate recording channel within the device’s UI from a mix of source inputs as needed. HDMI multi-channel monitoring during recording enables the device to display up to four channels of video as a matrix monitoring output on one HDMI or SDI monitor.

AJA developed Ki Pro GO to provide content creators with an affordable H.264 capture solution to meet increasing consumer demand for accessible, high-quality video. As audiences have become accustomed to viewing high-end film, television and digital content across a range of devices – whether at home, on-the-go, at venues or live events – content creators are tasked with meeting consumer demand and require a cost-effective solution for streamlining capture and delivery in H.264, the most widely used codec and industry standard for video compression. High quality H.264 recording via Ki Pro GO provides video professionals with a simplified capture chain and an efficient encoding path for a range of distribution devices and OTT viewing. Using Ki Pro GO, professionals can more easily capture high quality, low-bandwidth content, while also reaping the benefits of being able to record to affordable USB storage devices. Additional feature highlights include:
• Multi-channel H.264 recording up to 1080p 60
• 5x USB recording media slots, compatible with off-the-shelf USB 3.0 media
• Redundant recording of any or all channels
• Genlock-free video inputs
• 4x HDMI video inputs
• 4x 3G-SDI video inputs
• 4x 3G-SDI video outputs
• HDMI and SDI multi-channel matrix monitoring outputs
• Selectable VBR recording profiles, 4:2:0 8-bit
• Balanced XLR analog audio inputs, mic/line/48v switchable
• 2-channel embedded audio per video input
• 2x 4-pin XLR 12v redundant power inputs
• Easy-to-use web UI, compatible with standard web browsers
• Front panel button controls with integrated HD resolution screen
• Stand-alone operation
Ki Pro GO is backed by a three-year warranty and AJA’s industry-leading technical support.

AMAZON WEB SERVICES (AWS) SECURE PACKAGER AND ENCODER KEY EXCHANGE (SPEKE) Developed by Amazon Web Services (AWS), Secure Packager and Encoder Key Exchange (SPEKE) is an open API specification that makes it easier to protect video content by streamlining how Digital Rights Management (DRM) solutions integrate with encryptors (encoders, transcoders, and origin servers) in video processing and delivery workflows. SPEKE replaces hundreds of combinations of proprietary API integrations between multi-DRM vendor key servers and encryptors; provides video operators in media and entertainment with greater flexibility and choice of vendors; and supports multiple DRM schemas, as well as multiple packaging formats for different types of viewing devices. Previously, most integrations required a custom API for each DRM solutions provider and each encryptor, which proved costly and time-consuming, and often delayed the launch of new services for customers. SPEKE solves these challenges with a standardized method for key exchange between encryptors and DRM systems, and a way to use SPEKE-enabled key servers or encryptors in on-premises, cloud, or hybrid infrastructures. The API spec utilizes the DASH Industry Forum standard Content Protection Information Exchange Format (CPIX) to standardize the method for carrying

key and DRM information for encrypting and protecting video content, and adds specifications for authentication and other important behaviors on top of CPIX. CPIX enables operational efficiencies while reducing costs and time-to-market for OTT video services. SPEKE also incorporates AWS Identity and Access Management (IAM) roles to allocate flexible yet secure permission policies which may be delegated to users, applications, or services to securely enable key exchange between a multi-DRM vendor and a video transcoding or packaging vendor. Video operators may use IAM roles whether the key server and encryptor are running on AWS, on hardware in the operator’s headend or data center, a combination of the two, or even in environments where the key server and encryptor are running on different cloud infrastructure. With its comprehensive feature set, SPEKE can function as a single format

for MPEG-DASH, HLS, Microsoft Smooth Streaming, and future packaging technologies, and for multiple DRMs including Microsoft PlayReady, Google Widevine, Apple FairPlay Streaming, AES-128, and proprietary DRM solutions. It supports Apple HLS transport stream, fragmented MP4, and CMAF. SPEKE also supports static keys and key rotation. Eliminating complexity for media customers and technology vendors alike, SPEKE combines a single common API for any transcoder, packager, and key server; CPIX for MPEG-DASH and HLS; and authentication mechanisms. This combination delivers significantly faster integration time, greatly reduced test cycles, and an expanded ecosystem of integrated transcoders, packagers, and multi-DRM solutions, while also enabling operational tracing to troubleshoot issues. SPEKE’s rich ecosystem offers customers dozens of pre-integrated solutions, faster time to market, and greater flexibility to select combinations of video processing and multi-DRM solutions that meet their specific needs. Vendors already implementing SPEKE include: Axinom, castLabs, EZDRM, INKA Entworks, Insys Video Technologies, Intertrust Technologies, Irdeto, Kaltura, NAGRA, NEXTSCAPE, Verimatrix, Viaccess-Orca, VUALTO, WebStream and others.
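The key-exchange flow can be sketched schematically. The element names below are simplified illustrations rather than the exact CPIX/SPEKE schema: the encryptor posts a document naming the content and the DRM systems it needs keys for, and the key server returns the same document filled in with key material and license-acquisition data.

```python
# Schematic sketch of a SPEKE-style key request. Element and
# attribute names here are simplified illustrations, NOT the exact
# CPIX/SPEKE schema; the Widevine system ID is the real UUID.
import xml.etree.ElementTree as ET

def build_key_request(content_id, drm_system_ids):
    root = ET.Element("CPIX", {"contentId": content_id})
    key_list = ET.SubElement(root, "ContentKeyList")
    ET.SubElement(key_list, "ContentKey", {"kid": "key-1"})
    drm_list = ET.SubElement(root, "DRMSystemList")
    for system_id in drm_system_ids:
        # One entry per DRM system that must protect this key
        ET.SubElement(drm_list, "DRMSystem",
                      {"kid": "key-1", "systemId": system_id})
    return ET.tostring(root, encoding="unicode")

doc = build_key_request(
    "movie-123",
    ["edef8ba9-79d6-4ace-a3c8-27dcd51d21ed"])  # Widevine system ID
assert "movie-123" in doc and "DRMSystem" in doc
```

The value of the standard is visible even in this sketch: one document shape covers any number of DRM systems, so adding a vendor means adding an entry, not writing a new integration.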

AMAZON WEB SERVICES (AWS) ACCELERATED TRANSCODING FOR AWS ELEMENTAL MEDIACONVERT As multiscreen viewing proliferates, and as the range and resolution of consumer display devices continues to increase, so does video complexity and the processing time required to transcode content for on-demand distribution. These delays present challenges for premium content providers. For example, a full-length 4K feature in high dynamic range (HDR) could take more than a day to render using traditional video infrastructure for all of the video formats and players now in use. Some providers have strict service level agreements (SLAs) they have to meet to turn around assets, with penalties attached. Others lose revenue as time passes; the longer it takes to make live broadcasts available as on-demand content, the less valuable the content becomes as audience interest fades. Quality control may also uncover issues with processed content that must be fixed as quickly as possible. In April 2019, AWS solved these challenges with the introduction of Accelerated Transcoding. A new technology offered as part of the AWS Elemental MediaConvert file-based video transcoding service, Accelerated Transcoding takes advantage of the scalability, elasticity, and flexibility of cloud processing to render content up to 25

times faster than was previously available with the service, allowing video providers to meet requirements for transcode projects and quality assurance with time to spare. It works by dividing a video source into multiple, smaller segments for processing. The service determines the optimal number of segments to be processed and transcodes them in parallel. Segments are then combined into desired output formats in an adaptive bitrate (ABR) stack for delivery. Using Accelerated Transcoding is simple. A single checkbox within the AWS Management Console enables the feature, and AWS Elemental MediaConvert automatically performs the analysis and provisioning required to achieve the fastest transcode speed. The feature comes at no added charge as part of the MediaConvert on-demand professional pricing tier.
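The segment-and-parallelize approach can be illustrated with a small sketch, using a string as a stand-in for the media and lower-casing as a stand-in for the encode; the real service chooses the segment count automatically and runs the work on cloud infrastructure.

```python
# Sketch of the parallelization idea behind Accelerated Transcoding:
# split the source into segments, process them concurrently, then
# reassemble in order. The "transcode" here is a trivial stand-in.
from concurrent.futures import ThreadPoolExecutor

def split(source, n):
    """Divide the source into n contiguous segments."""
    size = len(source) // n
    parts = [source[i * size:(i + 1) * size] for i in range(n - 1)]
    return parts + [source[(n - 1) * size:]]  # last gets remainder

def transcode(segment):
    return segment.lower()  # stand-in for the real encode step

def accelerated_transcode(source, n=4):
    with ThreadPoolExecutor(max_workers=n) as pool:
        parts = list(pool.map(transcode, split(source, n)))
    return "".join(parts)  # reassemble in source order

assert accelerated_transcode("ABCDEFGH") == "abcdefgh"
```

Because `pool.map` preserves input order, reassembly is just concatenation; for real video the segments would be stitched into the adaptive bitrate (ABR) output stack described above.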

Accelerated Transcoding delivers significant benefits. For ongoing workflows, such as live-to-VOD conversion, customers can reliably meet SLA commitments, driving customer satisfaction and retention. By shrinking the time-after-event window, video providers can get fresh content in front of audiences more quickly to maximize viewership and monetization. For large-scale library conversions, projects that would have previously taken months to complete can now be finished in days, resulting in faster ROI and freeing resources for other priorities. Moreover, quality control efforts can be better supported by quickly re-transcoding assets after source content is corrected. Customers who apply Accelerated Transcoding continue to benefit from the quality and efficiency features of AWS Elemental MediaConvert. High Efficiency Video Coding is supported with Accelerated Transcoding, as is Quality-Defined Variable Bitrate control, an AWS technology that reduces video storage and delivery costs by up to 50 percent. Accelerated Transcoding dramatically reduces the time required to prepare on-demand video for delivery to audiences, enabling faster time-to-audience for premium content without additional expense.

APERI NAT & FIREWALL IP FLOW MANAGER APP – FOR SMPTE 2022-2/6 AND 2110 STREAMS The massive rise of video-over-IP content is exerting enormous pressure on a delivery mechanism not designed or optimized for bandwidth-demanding and latency-sensitive video. The internet is, by its nature, insecure. Guaranteed packet delivery with the utmost security and quality is essential for a media industry built upon reliable, low-latency and high-quality secure media flows. As content, service and network providers expand content distribution to the web and across devices beyond TV, all video formats, including HD and UHD/4K, must be rapidly and flexibly delivered in the highest quality with minimal latency. From live sporting events to concerts, the success of media companies is dependent on the video quality they provide. Aperi’s container-based software service manages broadcast functions as microservices, also known as apps, on generic hardware servers. These downloadable apps are location and hardware independent, moving the industry away from proprietary hardware. These microservices can be deployed rapidly and orchestrate vital functions such as encoding and decoding and packet generation. Metered usage licensing allows a flexible and advantageous pay-as-you-use cost model. Aperi’s latest Network Address Translation (NAT) and Firewall app is an

FPGA-based application designed for the microserver platform to address the challenges of the internet for secure, high-quality content delivery. The app has an integrated web server with a GUI easily accessed through the platform’s management port. Easily available and downloadable from the Aperi App Store, Aperi’s NAT Firewall provides seamless inbound and outbound re-routing of RTP streams within network infrastructures. Re-routing provides configurable, continuous source policy monitoring and on-the-fly modification of network parameters for each RTP stream, including SMPTE 2022-2/6 and 2110. RTP stream health monitoring automatically shuts down streams that deviate from a specified policy, preventing communication with potentially dangerous devices on the internet. Stream replication allows for two distinct output streams with independent IP addresses, ports,

and VLANs. Failover can merge multiple source streams, selecting the highest quality single stream and automatically switching to a perfect replication should the original stream degrade in quality or deviate from policy. Ensuring secure content delivery, the app supports the SMPTE 2022 standard for transporting both compressed (2022-2) and uncompressed (2022-6) media flows over IP. The NAT app provides an open bandwidth budget across 10 GbE interfaces and can route varying numbers of compressed (2022-2) streams depending on size, and up to six uncompressed (2022-6) streams per SFP port. It also provides support for the SMPTE 2110 standard, and can route video (2110-20), audio (2110-30), and ancillary data (2110-40) flows independently through networks. It also provides an open bandwidth budget across 10 GbE interfaces and can route up to 64 independent media flows per port. It delivers stream duplication for media flows via SFP, and A/B failover for flows outputting through SFPs. Aperi’s NAT Firewall is the only truly virtualized, multi-format video-over-IP flow manager packed with all critical flow management functions. It combines flexibility and intelligence to meet evolving standards and today’s quality-driven and bandwidth-demanding high-resolution content.
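The policy-driven stream handling described above can be sketched as follows; the policy and stream fields are illustrative assumptions, not Aperi's actual configuration schema. A stream that deviates from its source policy is shut down rather than forwarded.

```python
# Hedged sketch of policy-based stream health monitoring (field
# names are illustrative, not Aperi's schema): each stream is checked
# against its policy; deviating streams are shut down, not forwarded.
def check_stream(stream, policy):
    """Return True if the stream conforms to its source policy."""
    return (stream["source_ip"] == policy["allowed_source"]
            and stream["bitrate_mbps"] <= policy["max_bitrate_mbps"])

def filter_streams(streams, policy):
    active, shut_down = [], []
    for s in streams:
        (active if check_stream(s, policy) else shut_down).append(s["id"])
    return active, shut_down

policy = {"allowed_source": "10.0.0.5", "max_bitrate_mbps": 3.0}
streams = [
    {"id": "cam-1", "source_ip": "10.0.0.5", "bitrate_mbps": 2.5},
    {"id": "rogue", "source_ip": "203.0.113.9", "bitrate_mbps": 2.0},
]
active, shut = filter_streams(streams, policy)
assert active == ["cam-1"] and shut == ["rogue"]
```

In the real app this check runs continuously in FPGA hardware per RTP stream, so an unexpected source is blocked before it ever reaches the media network.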

AVID AVID NEXIS | CLOUDSPACES Avid NEXIS | Cloudspaces is a SaaS storage offering that leverages the power of Microsoft Azure to provide news, sports, and post-production teams an easy way to safely store and park media in the cloud, freeing up local space for uninterrupted production. In today’s fast-moving media landscape, news, sports and post-production teams need scalable and reliable storage solutions to grow collaboration between teams and facilities, optimise the value of their media assets and create high-quality media more quickly and easily than ever before. Avid NEXIS is the world’s first software-defined storage platform that enables true storage virtualisation for any media application. It delivers unrivalled media storage flexibility, scalability and control for both Avid-based and third-party workflows. Avid NEXIS | Cloudspaces brings the power of the cloud to Avid NEXIS, giving organizations a cost-effective and more efficient way to extend Avid NEXIS storage to the cloud for reliable backup and media parking. It takes the stress out of media management and storage availability, and gives users instant access to more storage when needed. With Avid NEXIS | Cloudspaces, users can start storing media in the cloud today and benefit from Avid NEXIS’s robust workspace management and user access controls with the flexibility and scale of the cloud.

Users can sync or park on-premise Avid NEXIS workspaces to Avid NEXIS | Cloudspaces with commonly used tools – including Marquis Workspace Tools, Windows File Explorer, and macOS Finder – while optimized media workflows provide better efficiency and security over risky manual processes. Avid NEXIS | Cloudspaces provides the simplest way to move content to the cloud with complete peace of mind. Instead of resorting to NAS or external drives when budgets are tight, Avid NEXIS | Cloudspaces makes it easy to offload projects and media not currently in production. And because Avid NEXIS manages cloudspaces alongside the workspaces, users can spend less time searching for and wrangling media, and more time creating. Avid NEXIS | Cloudspaces offers one-step activation and auto-provisioning from the Avid NEXIS Management Console, with activation from an Avid Link account, and leverages Avid NEXIS user access controls for end-to-end security. Avid NEXIS | Cloudspaces eliminates the time wasted manually shuttling media to and wrangling media from USB drives, so users can keep production moving and easily park projects and media. With a variety of subscription plans available, users can match their storage expense to their business needs. By extending Avid NEXIS storage directly to Microsoft Azure, Avid NEXIS | Cloudspaces enables customers to easily benefit from Azure’s hyper-scale storage capacity, global distribution and industry-leading storage and networking capabilities.

AVID MEDIACENTRAL | PUBLISHER POWERED BY WILDMOKA Tactics to get people to tune into a broadcast or watch content via a network’s website are changing in today’s connected and digital world. Sports teams and broadcasters are seeking new ways to engage fans on multiple platforms. And as people rely more on their digital devices for breaking news, news broadcasters are in a race to be first to deliver news to social platforms. According to the *Pew Research Center, 68% of Americans get their news on social media sites like Facebook, Twitter, Instagram, YouTube and LinkedIn. Avid’s MediaCentral® | Publisher, a new SaaS offering powered by Wildmoka, enables news and sports organizations to create, brand and publish breaking news and highlights fast, increasing viewership and engagement across social media and digital OTT platforms. For news and sports organizations, creating buzz across social media relies on getting the story out first across multiple platforms. With MediaCentral

| Publisher, content producers can be first to deliver news on digital platforms, enabling them to improve audience engagement on social, drive consumers to digital platforms, and increase digital revenue. As part of Avid’s MediaCentral 2019 media workflow platform for news, sports and post-production operations, MediaCentral | Publisher adds powerful tools for delivering content to social and digital properties fast. MediaCentral users can now log content, search for

and access media, create a highlight sequence in the timeline, add graphics, and then publish across social and digital platforms using MediaCentral | Publisher. MediaCentral | Publisher will be showcased at Avid’s IBC stand (7.B55) and available in late September. *Pew Research Center, News Use Across Social Media Platforms 2018: news-use-across-social-media-platforms-2018/

AVIWEST STREAMHUB Today broadcasters need to quickly, easily, and simultaneously share high-quality live content with multiple affiliates or other broadcast facilities. However, today it’s typical for each third-party transmitter to be connected to its own receiver with its own platform and management system. This results in major TV channels having several web interfaces and/or screens to receive and distribute feeds coming in from third-party devices. In the past, broadcasters set up dedicated links between multiple premises in order to share live video feeds with affiliates. Today, video professionals are increasingly using the internet to distribute live video streams because it’s more scalable and low cost. But they need a way to interconnect several third-party live transmission tools. By streamlining their production workflow and optimizing costs with the iconic AVIWEST StreamHub transceiver, broadcasters can offer an increased amount of live video content and boost viewer engagement. Deploying a scalable and tailored live video solution that manages and shares all live streams coming in from all transmitters via a single interface is important. Using only the StreamHub screen, video professionals can interconnect to numerous third-party live transmission tools at one time, saving time and money.

Knowing that broadcasters are often on the go and working from remote locations, the StreamHub offers a web-based user interface with a video thumbnail view. This tool allows them to build a live video multi-view composed of all input streams, which can be fed to affiliates. The multi-view can be enriched with overlay information for each source, such as audio level and transmitter name. The main advantage is reduced data usage while feeding affiliates with multiple streams, allowing affiliates to choose a main stream from among them. Through the StreamHub’s intuitive web user interface, broadcasters can easily control and oversee a fleet of remote devices, and optimize and monitor the video transmissions, all through a single tool.

During a live event, broadcasters need to ensure there is no delay. Viewers do not want to miss a second of the action, especially for live sports. Choosing the AVIWEST live video solution with state-of-the-art HEVC encoding will ensure high-quality video at minimum bitrates, with low latency. The StreamHub receiver implements the Emmy® Award-winning SST (Safe Streams Transport) protocol that reliably combines multiple algorithms (FEC, ARQ, and VBR rate control) to provide a high quality of service. Video professionals are often using a variety of different types of transmitters in the field. Therefore, it’s also essential to choose a system that is totally agnostic and universal in receiving transmissions from various types of transmitters; that’s what the StreamHub offers. With the StreamHub, broadcasters have the ability to manage multiple streams and distribute to multiple affiliates from a single point of control. This reduces the cost of live content distribution as there is less equipment required. It supports a variety of streaming protocols, including RTMP, RTSP/RTP, HLS and TS/IP to ensure that broadcasters can freely distribute video content over the public internet and virtually any IP networks, including 3G, 4G and 5G.
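Of the mechanisms SST combines, ARQ (Automatic Repeat reQuest) is the easiest to sketch: the receiver detects gaps in the arriving sequence numbers and asks the sender to retransmit only the missing packets. This is a toy model, not AVIWEST's implementation.

```python
# Toy ARQ sketch (one mechanism that SST-style protocols combine
# with FEC and rate control): the receiver tracks sequence numbers
# and requests retransmission of any gaps it detects.
def find_gaps(received_seqs, highest_expected):
    """Return sequence numbers to request again (a NACK list)."""
    seen = set(received_seqs)
    return [s for s in range(1, highest_expected + 1) if s not in seen]

def receive_with_arq(received, sent):
    """received: list of (seq, payload); sent: sender's payloads."""
    missing = find_gaps([seq for seq, _ in received], len(sent))
    retransmitted = [(seq, sent[seq - 1]) for seq in missing]
    ordered = sorted(received + retransmitted)
    return [payload for _, payload in ordered]

sent = ["f1", "f2", "f3", "f4"]
received = [(1, "f1"), (3, "f3"), (4, "f4")]  # packet 2 was lost
assert receive_with_arq(received, sent) == ["f1", "f2", "f3", "f4"]
```

ARQ repairs losses exactly but costs a round trip per gap, which is why protocols like SST pair it with FEC (which repairs small losses with no round trip) and rate control under a latency budget.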

BIRDDOG BIRDDOG CLOUD BirdDog Cloud | Encrypted NDI delivery Send NDI anywhere in the world with SRT encryption. Cloud changes everything. Now you can expand outside your local network to distribute your NDI content to anywhere in the world. Utilising SRT, Cloud has 128/256 bit AES encryption all the way through the pipeline so you can send over the public internet with total peace of mind that your streams are secure. The SRT video transport protocol also ensures reliable streams even over lossy networks. A scalable and modular platform means you only pay for what you need, and with support for Tally, BirdDog Comms, and PTZ Control, Cloud integrates seamlessly with all BirdDog hardware and software solutions. Production Everywhere. Production Anywhere. BirdDog Cloud allows for remote production from anywhere in the world. All you need is an internet connection and you can instantly and remotely access any NDI enabled device. Cloud has complete integration with all the tools you need for a real-time remote production with support for NDI Tally, BirdDog Comms, and full PTZ control. Can you feel the need for speed? Cloud is blazingly fast. The SRT integration means that even over lossy connections you can access any NDI content anywhere in the world at under ½ a second delay.

Total control. You have the power. PTZ Control, Tally, and BirdDog Comms (free audio intercom software) are all totally integrated within Cloud. You can remotely control PTZ Cameras from the other side of the world, have complete talk back with your camera operators and Tally integrated into your production. Total encryption. Total Security. Cloud uses the SRT video transport protocol, a 128/256 bit AES encrypted stream, to ensure your content is protected from contribution to distribution. Whether it’s a single point-to-point delivery or a single stream to multiple sites, it’s 100% encrypted. Scalable and Modular.
•Endpoint Core – everything you need to get started for a single point-to-point connection. Includes Tally, Comms, Video, Audio, and PTZ Control, all over a secure and encrypted connection.

•Endpoint Multi – a multiple-stream licence that is only limited by how many streams your computer can process; the faster the machine, the more streams. This enables multiple single point-to-point connections.
•Multistream – enables a single point to multiple destinations whilst maintaining sync. Stream from Oslo to New York, Paris, and Melbourne simultaneously and in sync.
•Multiview – multiple NDI sources in a user-defined Multiviewer window. Multiview uses a lower-bandwidth instance of NDI, so you can send many more streams than you can of full-bandwidth NDI.
•Alpha – adds support for sending remote graphics or other sources with alpha over the SRT link.
•Record – adds recording for both the source and destination.
•WebRTC – converts NDI to WebRTC with very low latency, as a Program out back channel to ENG or other remote productions.
Portable monitoring. NDI in your pocket. With the WebRTC module you can monitor your NDI streams on an iPad, tablet or other portable device with very low latency.


MCX is the next-generation AV-over-IP solution from Black Box. Serving as a single-network solution that enables truly converged networked AV, MCX delivers up to 4K 60Hz 4:4:4 video uncompressed over 10GbE (or higher) infrastructures with the lowest latency and switch times — so fast they’re imperceptible — available in the AV marketplace. It is also the first commercially available AV-over-IP system to incorporate Dante® audio transport, thereby eliminating the need for an external audio encoder. The Black Box system eliminates any compromise between low latency, high bandwidth, and high video quality, as well as the need to maintain dual networks to support AV and IT data independently across the enterprise. It empowers users to overcome source-to-display latency with glass-to-glass encoding and decoding that happens in real time (0.03 milliseconds). MCX also allows users to switch between video sources in fewer than 100 milliseconds

with no artifacts or screen blink. Delivering up to 4K 60Hz 4:4:4 plus up to 10-bit HDR, MCX allows users to take advantage of fully uncompressed 4K UHD technology. MCX makes it possible to handle video walls, video extension (pointto-point and point-to-multipoint), and digital signage on an IT network, with intuitive control over how content displays on every screen. The Black Box system brings versatility to video wall deployments with advanced video scaling options such as multi-view, picture-in-picture, split screen and more. As a result, users can control the entire audio/video system using the MCX Controller and touch panel or third-party party control platform of their choice. MCX can be deployed on 10G networks over Ethernet, on fiber, or on both, and users have the option of connecting to MCX through a variety of ports: discrete RS-232, IR, secondary audio channel, plus a separate 1-GbE

connection. Because MCX uses Software Defined Video over Ethernet (SDVoE) technology to distribute AV signals over IP networks, integrators and IT teams can create dramatically new architectures and user experiences at the speed and quality that pro AV applications demand. As a software-controlled system, MCX brings not only flexibility, but also infinite scalability that allows for unrestricted expansion. Delivering audio, video, and data over an IP network, MCX eliminates distance limitations in getting live feeds from cameras to the control room and to video screens. For sports, concerts, esports, and other live events, MCX gives users the ability to manage the network, audio and video signal distribution, and the display of visuals on stage or big screen with zero latency. Made for modern network infrastructures, MCX delivers the video quality high-end AV applications require to display eye-catching content.
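The bandwidth claims above can be sanity-checked with simple arithmetic. This sketch (assuming standard UHD resolution, 10-bit 4:4:4 sampling, and ignoring blanking and protocol overhead) shows why fully uncompressed 4K60 video calls out "10GbE (or higher)" infrastructure: the raw payload rate of the most demanding format exceeds a single 10GbE link.

```python
# Back-of-envelope data rate for uncompressed video.
# Assumes 3 samples per pixel (4:4:4) at 10 bits each, no blanking overhead.

def raw_gbps(width, height, fps, bits_per_sample=10, samples_per_pixel=3):
    """Raw video payload rate in gigabits per second."""
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e9

# 4K UHD at 60 Hz, 4:4:4, 10-bit — far beyond 1GbE, and above a single
# 10GbE link, hence the "(or higher)" qualifier in the MCX description.
print(round(raw_gbps(3840, 2160, 60), 1))  # 14.9 (Gbps)
```

The same function shows why 1080p60 (about 3.7 Gbps at 10-bit 4:4:4) fits comfortably inside 10GbE while full-rate UHD does not.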

BEST OF SHOW AT IBC 2019
BRIGHTCOVE
BRIGHTCOVE LIVE
Brightcove Live enables every media company to deliver and monetise live video at a low cost with the flexibility to scale as needed. Brightcove Live is a broadcast-grade, cloud-based live streaming solution with wide device reach and integrated monetization capabilities using server-side ad insertion. Broadcasters, publishers, and marketers alike can originate live events using our globally distributed architecture and deliver a high-quality experience to viewers with minimal delay across multiple platforms and devices. Brightcove has made substantial incremental changes to the Brightcove Live product, adding advanced broadcast features to receive live feeds in the MPEG-2 Transport Stream format. Live TS streams may be delivered to Brightcove Live via the Real-time Transport Protocol (RTP), including support for SMPTE 2022 Forward Error Correction (FEC). TS over RTP + FEC is well known and widely used throughout the broadcast market, and this facilitates the integration (or adoption) of Brightcove Live for broadcast customers. Brightcove has also deployed support for live stream ingest via the Secure Reliable Transport (SRT) protocol, becoming the first SRT Ready OVP with the feature available to customers across its global points of presence. The Emmy award-winning SRT protocol enables robust low-latency delivery of live broadcasts over unpredictable and challenging network conditions. The newly enabled protocols (in addition to the existing RTMP capability) allow Brightcove to ingest and process in-band SCTE-35 control plane information for advertising placements, unlocking precise ad insertion through Brightcove’s Server-Side Ad Insertion (SSAI) product. Furthermore, Brightcove is continuing to bridge the gap between linear broadcast and digital live streaming experiences for customers with our recently launched Ad Metadata API. The Ad Metadata API allows customers to integrate data from their existing broadcast workflows or include additional targeting data to boost the value of ads and pass specific data for ad targeting. This data can describe anything about the ad opportunity, including details about the content or the user, so relevant targeted ads are served. Lastly, Brightcove has made improvements to ease the workflows for those looking to add live streams directly to their social channels right from their Brightcove dashboard. Users can now stream live on Facebook and YouTube through the Brightcove platform. An end-to-end live streaming platform, Brightcove Live enables customers to streamline workflows to make live streaming easy and achievable for companies of all sizes and in all geographies. Brightcove Live is supported by an award-winning support team to ensure each event is successful and provides the best experience for viewers.
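The SMPTE 2022 FEC mentioned above is built on XOR parity across groups of RTP packets. The toy sketch below shows the core idea only; the actual standard defines row/column parity over a packet matrix with its own header format, and the 4-byte "packets" here are invented.

```python
# Toy illustration of the XOR-parity idea behind SMPTE 2022 FEC: one parity
# packet per group lets a receiver rebuild a single lost packet.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets):
    parity = bytes(len(packets[0]))  # all-zero seed
    for p in packets:
        parity = xor_bytes(parity, p)
    return parity

def recover_missing(packets, parity):
    """Rebuild the single packet marked as None from survivors plus parity."""
    missing = packets.index(None)
    rebuilt = parity
    for i, p in enumerate(packets):
        if i != missing:
            rebuilt = xor_bytes(rebuilt, p)
    repaired = list(packets)
    repaired[missing] = rebuilt
    return repaired

group = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = make_parity(group)
lossy = [b"pkt0", None, b"pkt2", b"pkt3"]   # packet 1 lost in transit
print(recover_missing(lossy, parity)[1])    # b'pkt1'
```

Because XOR parity can only repair one loss per group, protocols like SRT complement FEC with retransmission for burstier network conditions.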

BRIGHTCOVE
BRIGHTCOVE DELIVERY RULES
Delivery Rules is a feature of Dynamic Delivery that allows media organizations to leverage just-in-time manifest generation on a more granular basis than ever before. Customers can create rules to customize the media that is delivered in order to fulfill their specific business or technical needs. Delivery Rules are defined as “if” conditions that trigger certain behaviors and a series of “then” parameters that define how the manifest is modified when the condition is met. This new feature enables organizations to deliver content more efficiently in response to end-user device type, bandwidth, and geographic region. By tailoring video renditions to devices, mobile users with typically lower bandwidth will be served with faster load times, while those viewing on big screens will experience typically higher quality from the initial viewing. This feature is designed to help media companies of all sizes and in all geographies deliver content in a more efficient and cost-effective way. Geography-Optimized CDNs: While most CDNs are optimized for worldwide distribution, certain areas are better served by a region-specific CDN. With Delivery Rules, customers can create a rule that automatically detects where the viewer is located (the “if”) and uses a region-specific CDN for those requests and a global CDN for all others (the “then”). Device-Optimized Renditions: With consumers viewing media on a variety of screen sizes, it’s less than ideal to deliver the same set of renditions to each device. With device rules you could create a rule that detects if the device is a phone (the “if”) and excludes 1080p renditions (the “then”). You could then have a complementary rule that detects if the device is an OTT device (the “if”) and excludes renditions below 360p (the “then”). Delivery Rules is an expansion of Brightcove’s other just-in-time packaging solutions such as Dynamic Delivery and Context Aware Encoding. Context Aware Encoding defines the highest manifest quality, Dynamic Delivery packages the renditions, and Delivery Rules carries out powerful delivery settings like rendition filtering and ordering. Not only does Brightcove’s Delivery Rules fit into its encoding capabilities, but it works with Brightcove’s other products including Video Cloud, OTT Flow, Live and more. Delivery Rules can support custom rules around monetisation models such as AVOD and SVOD. Delivery Rules fits into existing workflows, offering a holistic solution. Media owners want to deliver their content in the most efficient and cost-effective way while optimizing quality for their audience. Brightcove’s Delivery Rules enables media owners to achieve this goal by setting custom rules for how content is delivered based on geography, device or CDN.
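The “if”/“then” model described above can be sketched as data plus two functions. This is a hypothetical illustration only: the real Brightcove rule schema, field names, and rendition ladder differ, and the device labels here are invented.

```python
# Hypothetical sketch of the Delivery Rules "if"/"then" model: a condition on
# the incoming request selects "then" actions that filter the renditions
# served in the manifest. Not the actual Brightcove schema.

RENDITIONS = [270, 360, 540, 720, 1080]  # vertical resolutions in the manifest

RULES = [
    # "if" the device is a phone, "then" exclude 1080p renditions
    {"if": lambda req: req["device"] == "phone",
     "then": lambda rs: [r for r in rs if r < 1080]},
    # "if" the device is an OTT box, "then" exclude renditions below 360p
    {"if": lambda req: req["device"] == "ott",
     "then": lambda rs: [r for r in rs if r >= 360]},
]

def apply_rules(request, renditions, rules):
    """Apply every rule whose condition matches the request."""
    for rule in rules:
        if rule["if"](request):
            renditions = rule["then"](renditions)
    return renditions

print(apply_rules({"device": "phone"}, RENDITIONS, RULES))  # [270, 360, 540, 720]
print(apply_rules({"device": "ott"}, RENDITIONS, RULES))    # [360, 540, 720, 1080]
```

The same shape extends naturally to the geography example: an "if" on viewer region selecting a region-specific CDN hostname as the "then".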

BROADPEAK
BKYOU: BROADPEAK’S AD INSERTION AND ANTI-AD-SKIPPING MANAGEMENT SOLUTION
Many video content providers and operators with ABR streaming services center their business model around ad viewing. Yet advanced ad-blocking technology today enables viewers to block ads, impacting the revenue of content providers and their capability to invest in producing high-value content. Broadpeak’s BkYou personalized ad insertion and ad-skipping management solution is perfect for operators and video content providers that want to eliminate ad blocking for VOD content and manage ad-skipping policies with a high level of flexibility. Ad blockers typically rely on one of the following techniques to trigger ad blocking: •detect and intercept an event sent to the player that is in charge of inserting the ad •analyze the naming of chunks to detect the ad and block it. Broadpeak’s ad insertion solution is robust against both techniques. Being a server-based solution, it does not require catchable events to be sent, and it hides the names of the chunks, making it impossible to differentiate the video content from the ad content. Supporting content in a wide range of adaptive bitrate streaming formats (including HLS, MPEG-DASH and CMAF), Broadpeak’s BkYou maximizes monetization opportunities for operators and content providers, enabling them to be more competitive in the video streaming environment. Broadpeak’s ad insertion solution is a game changer for the video streaming world, offering several unique features and benefits for operators and video content owners that increase their monetization and streamline the delivery of ads within ABR content: •Server-side ad insertion ensures better resistance to ad blockers: Broadpeak’s solution is based on the manipulation of playlists on the server rather than the behavior of players. In addition to allowing the use of various players without specific implementations, this approach provides superior resistance to ad blockers compared with purely player-based solutions that need to react to detectable events (i.e., events that are used by ad blockers to detect the ads to skip). •Innovative obfuscation technology for blending content: Broadpeak’s solution implements an obfuscation method to blend video and ad content, preventing detectors from blocking ads based on the chunk names. •Advanced ad tracking: Broadpeak’s solution features ad tracking capabilities that not only identify the completion rate for ad viewing but also can stop streaming if abnormal behavior, such as the usage of an ad blocker, is detected. •High scalability, fast start-up time: Broadpeak’s solution is highly scalable and does not introduce any delay, offering a short start-up time for end users compared with solutions that rely on content re-encoding, another method for blending video and ad content. •Simplified ad management: Broadpeak’s solution offers a flexible approach for managing ad skipping. At the server level, it determines whether ad skipping is permitted based on the content, the portion of the ad already watched or the viewer profile (whether it is a premium account, for example).
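The two server-side techniques described above can be sketched together: splice ad segments into the content playlist on the server, then rename every chunk with an opaque token so ad and content segments cannot be told apart by name. This is an illustrative sketch, not Broadpeak's actual algorithm; the secret and naming scheme are invented.

```python
import hashlib

# Illustrative server-side ad insertion on an HLS-style segment list, with
# chunk-name obfuscation so ad and content segments are indistinguishable.
# Invented secret and token format; not Broadpeak's implementation.

SESSION_SECRET = b"per-session-secret"

def obfuscate(name):
    """Map a telltale segment name to an opaque, per-session token."""
    digest = hashlib.sha256(SESSION_SECRET + name.encode()).hexdigest()
    return digest[:16] + ".ts"

def splice_and_hide(content_segments, ad_segments, at):
    """Insert the ad break at index `at`, then obfuscate every chunk name."""
    spliced = content_segments[:at] + ad_segments + content_segments[at:]
    return [obfuscate(name) for name in spliced]

playlist = splice_and_hide(["show_001.ts", "show_002.ts"], ["ad_001.ts"], at=1)
print(len(playlist), "ad_001.ts" in playlist)  # 3 False
```

Because the mapping is keyed per session, the same ad segment gets a different name for every viewer, defeating name-based blocklists shared between clients.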

CENTURYLINK
CENTURYLINK CDN MESH DELIVERY
CDN Mesh Delivery allows rapid scalability and resiliency even during peak traffic loads, helping OTT content providers to significantly improve user experience for popular live events and content with massive audiences, and in hard-to-reach areas. CenturyLink CDN Mesh Delivery combines: •a massive global CDN with 40+ Tbps of total edge capacity across six continents in more than 100 major cities, and •a software-based P2P mesh network of end-user devices. The platform offers an unprecedented level of integration, allowing it to intelligently acquire video segments from the source that provides them most quickly (either the CDN or the local peer-to-peer network). It also optimises delivery by leveraging variables such as user location and ISP. This cuts round-trip time, reduces buffering and enables higher bitrates. This allows OTT content providers to: •cost-effectively increase scalability without affecting reliability or end-user experience. More devices mean a more powerful network with the flexibility and capacity to cope with unpredictable spikes in demand. •improve Quality of Service with higher bitrates and less rebuffering. This lets providers offer a high level of user experience even during the most demanding traffic spikes, such as popular live events and sports championships. •access difficult-to-reach areas around the world, regardless of their proximity to the CDN, by using the mesh network as a remote delivery network. CDN Mesh Delivery’s unique integration and massive scalability were proven during a live sporting event in 2018 that set historic streaming records. Content providers around the world faced high demand on digital platforms as millions of people tuned in on laptops, mobile devices and connected TVs. This made delivering a quality viewing experience a huge challenge, particularly during large traffic spikes. To scale to meet this immense growth, rights-holding content providers across Europe and Latin America turned to the mesh network to deliver uninterrupted, high-quality service to end users. Implementing CDN Mesh Delivery allowed these companies to scale with the demands of the growing audience, as it provided a flexible and resilient mesh network to cope with more and more viewers tuning in to the live games. Multi-sourcing from the CDNs and a mesh network of devices allowed for greater bandwidth and enabled content providers to scale as viewing demand increased. During the event, the mesh network powered over 40 million sessions across Europe and Latin America. Throughout the semifinals and finals of the sporting event, 62% of overall traffic in Europe was delivered using the technology. In addition to supplying the flexible capacity so critical for the event, the peer-to-peer network also helped improve the overall quality of many of the live streams. User location, ISP, network topology, device, type of content and bitrate profiles were all leveraged to determine the fastest and most efficient connections for each individual viewer. Those using the peer-to-peer technology experienced up to 37% less rebuffering in Europe and up to 63% less rebuffering in LATAM.
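The multi-sourcing idea above, fetching each segment from whichever source is currently fastest, can be sketched as a simple selection over measured round-trip times. This is a hypothetical illustration: the real client logic also weighs location, ISP, topology, device, and bitrate profile, and the source names below are invented.

```python
# Hypothetical sketch of multi-source segment fetching: for each video
# segment, pick the source (CDN edge or local peer) with the lowest
# currently measured round-trip time. Source names are invented.

def pick_source(rtt_ms):
    """Return the source key with the lowest measured RTT in milliseconds."""
    return min(rtt_ms, key=rtt_ms.get)

measurements = {
    "cdn-edge-fra": 28.0,    # regional CDN edge
    "peer-10.0.0.7": 9.5,    # nearby peer on the same ISP
    "peer-10.0.0.9": 41.0,   # more distant peer
}
print(pick_source(measurements))  # peer-10.0.0.7
```

Re-running the measurement per segment lets the client fall back to the CDN instantly when peers leave the swarm, which is how a mesh can add capacity without hurting reliability.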

CINEGY GMBH
CINEGY MULTIVIEWER
Cinegy Multiviewer is a scalable monitoring and analysis solution for on-premise and cloud deployments, supporting SDI, NDI and IP. It provides a GPU-accelerated live video analysis, monitoring, notification and deep-analysis engine based on services and agents, with deep integration into Elasticsearch and Grafana to streamline historic and comparative analysis and to visualise and recognise trends. Performance can be scaled by adding up to four NVIDIA GPUs, allowing simultaneous monitoring of up to 128 HD signals, 32 UHD signals or eight 8K signals. SDI, NDI and various IP formats (including SMPTE 2022-6 and 2110, DVB and ATSC) are supported and can be mixed. When used in the cloud, the choice of input formats is limited to cloud-compatible IP formats. In its newest incarnation, SRT is supported for secure reception of IP signals or delivery of the output over unreliable networks like the Internet. Alarms can be defined for all types of audio and video error conditions, triggering all types of notifications. The output can be displayed locally on one screen, tiled across multiple screens, or streamed. Cinegy Multiviewer is a pure software-based solution divided into functionality blocks for signal decoding, scaling, visualisation, visual output, and analysis and alerts. The biggest obstacle when building a large multiviewer solution, e.g. one that is supposed to monitor 64x H.264 or HEVC 1080p signals at a cable company and put them on a single display, is that the traditional hardware-based ways of doing it are expensive. Such solutions will not work in the cloud, be it private or public, nor can they be virtualised in a data center, something not only telcos and cable companies prefer to do these days. Cinegy Multiviewer, being purely software-based, has no problem running in a VM environment or in cloud environments. To be able to scale performance, Cinegy has been using NVIDIA GPUs for more than 15 years. Decoding, scaling, filtering, colour space conversion, and many more functions have been moved into an all-GPU pipeline. Only when SDI or SMPTE-flavour uncompressed IP is used will the uncompressed data be moved as-is into the GPU, unless Cinegy Encoder, our SDI/SMPTE-IP gateway product, is used to move these signals into H.264/HEVC or our own lightweight Daniel2 compression format first. Supporting up to four GPUs allows scaling the compute performance dramatically, and therefore also the number of signals. Cinegy Multiviewer handles 8K signals via SDI using AJA, Blackmagic Design or Deltacast SDI cards, and via IP using HEVC, Daniel2 or uncompressed inputs. This feature alone is a USP for the moment, which other companies will surely try to match eventually. The flexibility of the wide variety of audio and video signal types (and subtitle formats) that Cinegy Multiviewer supports will be difficult to match. The integrated streaming video output, the cloud capabilities and the deep analysis features make it even more compelling. We have tier-one customers monitoring their playout centers from their smartphone in the pub. Anyone who wants to move to the cloud or to a virtual environment needs Cinegy Multiviewer.
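The three capacity figures quoted above (128 HD, 32 UHD, eight 8K on four GPUs) are mutually consistent: they all describe the same decoded pixel throughput, as this quick check shows.

```python
# The quoted four-GPU capacity figures (128 HD, 32 UHD, 8 x 8K) all work out
# to the same total pixel throughput per frame.

def total_pixels(width, height, streams):
    return width * height * streams

hd  = total_pixels(1920, 1080, 128)  # 128 x 1080p
uhd = total_pixels(3840, 2160, 32)   # 32 x UHD
k8  = total_pixels(7680, 4320, 8)    # 8 x 8K
print(hd == uhd == k8, hd)  # True 265420800
```

Each step up in resolution quadruples the pixels per stream, so the stream count drops by a factor of four at every tier, which is exactly the 128 / 32 / 8 progression in the text.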

COMMSCOPE
COMMSCOPE SMART MEDIA DEVICE (SMD)
Introducing the CommScope Smart Media Device (SMD). This category of devices combines the functionality of the most important gadgets in the connected home to create a single touch point for consumers’ digital lives. It’s a set-top, smart speaker, visual smart assistant, IoT hub, voice and video conferencing unit, and remote control, all in one. This combination allows the device to offer a more personalized, connected, and convenient way to enjoy all the media, services, and applications in the digital home. For operators, it’s the perfect opportunity to reclaim the service experience in the connected home. The smart set-top market is expected to reach $2B (1.77B euros) by 2024, growing at a CAGR of 8% during 2018-24, according to market research firm Arizton. By combining the experiences that consumers already love, and are willing to pay for, with a smart and sophisticated device, service providers can carve out a new market for unique services that increase ARPU and improve stickiness. Service Provider (SP) Benefits • By becoming a ‘super aggregator’, SPs may increase ARPU by offering differentiated services that consumers are willing to pay for, ranging from e-health, education, home security, and utilities management to entertainment management and productivity tools • Just as the smartphone became a device that people cannot live without, SMD products, with their high-quality audio experience and the additional services they empower SPs to create, are set to transform the traditional set-top into a much-loved device • This could enable SPs to lease these products at a higher price in future • This stickiness and ‘love’ factor reduces the risk of consumers switching to a cheaper service from a competitor SP. Consumer Experience • With the core functionality of a 4K set-top, plus added hardware including far-field microphones and speakers and supporting software in the framework, SMDs bring visual smart assistant and Bluetooth technologies that integrate seamlessly with familiar household services. For example, if you use different smart assistants for different things – say Google for search or Amazon for shopping – an SMD can provide access to all of them in one device, so you can choose the one you want, when you want it • With SMD products, you can interact with each smart assistant visually, as well as by voice, through your TV screen, giving you a much richer experience • SMD brings all your connected devices together centrally in one elegant, premium home hub. It reduces device clutter, giving you easy access to all your entertainment, education, health, security, home services and productivity applications • You can tailor the SMD service to suit the family member using the large-screen TV. Additionally, SMD products are inclusive, allowing you to watch entertainment or shop easily together as a family • SMDs allow traditional entertainment sources – such as Netflix, BBC iPlayer and Amazon Prime – to merge with visual skills. In the future, this will allow users to consume entertainment content whilst engaging with other IoT services in their home at the same time, from the same screen.

CONNECT
KYBIO MEDIA
KYBIO Media is a unified end-to-end Monitoring and Control (M&C) platform serving TV, radio, satellite, telco, and other vertical markets in media and broadcast. It is a simple yet powerful platform enabling users to oversee an entire ecosystem, centralize data, and streamline the management of IP-enabled, industry-specific or commodity gear and technology. Highly scalable and open, KYBIO Media plugs into any third-party or in-house technology with open protocols and APIs, and operates on an open-driver policy. Whether broadcasters need to monitor a small or highly complex infrastructure, KYBIO Media offers scale and resiliency. Its unique combination of modules enables users to - maximize their equipment uptime thanks to real-time alarms, notifications, time-based reporting and root cause analysis - save time for operations with time management features, event resolution tracking, and advanced control for remote actions over connected equipment with industry-standard protocols - derive actionable insights by aggregating data from multiple devices and locations, then transforming that data into comprehensible, visual insights and reports.

With the 3.9 release at IBC, KYBIO Media will include a new generation of advanced drivers, enabling the stacking of protocols for communication, control and management. With this new generation of drivers, users will have the flexibility to choose from a wide variety of protocols and even combine several of them to be even more efficient in the management and control of their devices (e.g. basic monitoring through SNMP, provisioning over SOAP, control over Modbus TCP, etc.). A new API will also be released as part of this version, further enhancing interoperability with existing customer ecosystems. This new API will come as a license add-on and will enable more advanced workflows and integration capabilities with enterprise-grade systems (ERP, CRM, MAM/PAM, workflow engines, etc.).

Why KYBIO Media? The technical world is teeming with data, and the broadcast segment is no exception. Broadcasters have more services, content and technologies to manage, and typically fewer resources available. In this context, collecting and analyzing data is key to maintaining successful operations. KYBIO Media aggregates the data and presents it to users in a form they can easily understand and translate into a course of action. This is key to effective use of the massive quantity of data available through network connections, IoT implementations and communication with devices of all kinds. Data can be filtered and organized by time, site, equipment, duration and other parameters, enabling root cause analysis and insight-driven decision making. KYBIO Media is an incredibly simple yet powerful monitoring and control platform that will help users get their job done more efficiently and proactively, with multiple operational and strategic benefits. Overall, it helps broadcasters meet the critical challenges of capturing, producing, and distributing quality content around the globe.
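The protocol-stacking idea above, one device driver routing each management function over a different protocol, can be sketched as a simple dispatch table. This is a hypothetical illustration, not KYBIO's driver model; the transport callables are stand-in stubs for real SNMP, SOAP, and Modbus TCP clients.

```python
# Hypothetical sketch of protocol stacking in a device driver: each management
# function (monitor / provision / control) is routed over its own protocol,
# mirroring the SNMP / SOAP / Modbus TCP example in the text. The lambdas
# below are stubs standing in for real protocol clients.

class StackedDriver:
    def __init__(self, transports):
        # e.g. {"monitor": snmp_get, "provision": soap_call, "control": modbus_write}
        self.transports = transports

    def perform(self, function, *args):
        """Dispatch a management function over its configured transport."""
        return self.transports[function](*args)

driver = StackedDriver({
    "monitor":   lambda oid: f"SNMP GET {oid}",
    "provision": lambda op: f"SOAP call {op}",
    "control":   lambda reg, val: f"Modbus write {reg}={val}",
})
print(driver.perform("monitor", "1.3.6.1.2.1.1.3.0"))
```

Keeping the transports behind one interface is what lets a platform swap or combine protocols per device without changing the monitoring logic above it.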

CONVIVA
CONVIVA CONTENT INSIGHTS
We are launching Conviva Content Insights at IBC, expanding our streaming media intelligence portfolio to deliver a 360° view across content, social media, quality of experience and advertising. Built specifically for streaming video, Content Insights introduces a new industry standard for content strategy, promotion and monetization decisions, informed by the world’s most comprehensive, trusted and accurate streaming media data set. In an increasingly crowded streaming media market, identifying content that pulls viewers in and holds their attention is crucial, as is understanding where, when and how streamers consume that content. Engaging and retaining those audiences requires the ability to continuously act on that intelligence. With Content Insights, streaming media providers can effectively do it all. Because streaming equates to viewers on the move, it’s not easy to accurately attribute consumption to households. Conviva has solved this with Content Insights, which analyzes every second, screen and stream, accurately mapping these factors at a household level. The resulting high-fidelity Household Consumption Graphs give Conviva customers a complete picture of how households engage with their content, whether sitting in front of the big screen together or watching on devices in different locations. In other words, with Content Insights, the industry now has a way to understand the consumption patterns of streaming viewers across time, locations, content, apps and devices. Conviva’s Household Consumption Graphs also enable the creation of behavioral segments and pathing visualization for unparalleled comprehension of the viewer journeys of bingers, sports watchers, series loyalists and more. With these insights, users can improve in-app content recommendations, increase conversion of tune-in campaigns and refine content strategies based on a deeper understanding of viewer preferences and behavioral patterns. Content Insights is designed to be easy to use and to address analytics needs across the organization. Standardized and validated metrics across all devices mean consistent data for every business group, including content acquisition, marketing and promotions, research and analytics, and more. And with the introduction of Conviva’s first mobile app, users are able to track content performance anywhere, anytime. Leading organizations are already leveraging these new capabilities to improve everything from content selection and marketing performance to pricing and packaging. Among the first to adopt Content Insights are Cartoon Network, CNN, DirecTV Latin America and truTV. “Content Insights is an exciting addition to Conviva’s portfolio. With it, we have a much deeper understanding of what, where, when and how our viewers are streaming,” said Robert Jones, VP of Data Platforms & Strategy Operations, WarnerMedia. “Knowing what will keep them engaged, and staying on top of the trends that matter, is integral to delivering the personalized experiences we envision for every streaming viewer worldwide.” Why Conviva Content Insights: Graph Streaming Media Consumption by Household: Accurately map streaming viewer consumption patterns across time, locations, content, apps and devices. Understand Viewer Journeys with Behavioral Segments and Pathing: Gain unparalleled comprehension of the viewer journeys of bingers, sports watchers, series loyalists and more. Leverage the Most Accurate Data Organization-Wide: Integrate insights across the business to drive critical decisions in content acquisition, marketing and research.
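The household-level aggregation described above amounts to rolling per-device sessions up to households before segmenting. The sketch below is purely illustrative: the session fields, the mapping of devices to households, and the "binger" threshold are all invented, not Conviva's methodology.

```python
from collections import defaultdict

# Illustrative household-level roll-up: per-device viewing sessions are
# aggregated by household, the basis for consumption graphs and behavioral
# segments. Fields and the segmentation threshold are invented.

sessions = [
    {"household": "H1", "device": "tv",     "minutes": 95},
    {"household": "H1", "device": "phone",  "minutes": 40},
    {"household": "H2", "device": "tablet", "minutes": 12},
]

def minutes_by_household(sessions):
    totals = defaultdict(int)
    for s in sessions:
        totals[s["household"]] += s["minutes"]
    return dict(totals)

totals = minutes_by_household(sessions)
# Toy behavioral segmentation on the aggregated totals.
segments = {h: ("binger" if m >= 120 else "casual") for h, m in totals.items()}
print(totals, segments)  # {'H1': 135, 'H2': 12} {'H1': 'binger', 'H2': 'casual'}
```

The point of aggregating before segmenting is that H1 looks like two modest viewers at the device level but one heavy-viewing household once its TV and phone sessions are combined.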

CONVIVA
CONVIVA AD INSIGHTS
Advertisers must follow their audiences. As viewers make the transition to streaming, ad money must follow suit. But everything about streaming ads is different from linear or digital display ads, including how they are engaged with, purchased and placed. Streaming ads are also predicated on a much more complex delivery system than ads for other mediums. In 2019 we launched Conviva Ad Insights, an intelligence solution purpose-built for streaming video ad analytics, to address the multidimensional factors unique to streaming that determine ad effectiveness and the ability to monetize inventory. Ad Insights identifies deficiencies traditional ad measurement can’t, with real-time monitoring, automatic AI alerting and diagnostics for visibility and control over the quality of the entire ad experience, across every second, every stream, and every screen. In developing Ad Insights, we identified and now address 11 potential failure points unique to streaming advertising: ad blocker management, empty ad response, ad request timeouts, wrapper ads, fallback ads, invalid ad responses, incorrect ad media rendition, media load timeouts, invalid media file, VPAID errors, and media playback errors. Many failures are also the result of delays, including rebuffering, that traditional beacon-based measurement cannot capture. For example, after just a 5-second delay, 13.6% of viewers gave up and abandoned content. Only continuous measurement of every second of the ad can identify these issues. Across the ad delivery chain, from ad request to decisioning and selection, creative delivery to playback, errors and delays cause missed ad exposure and revenue, resulting in as much as 47% of expected ad opportunities going unfilled because of tech failures. In other words, nearly half of streaming ads don’t deliver correctly. Conviva makes it possible for customers to understand which half, identify root causes in real time, immediately act to address problems, take steps to maximize viewer satisfaction and protect the billions in revenue that are on the line. Ad Insights also enables publishers to understand the impact that factors like ad length, frequency and placement have on engagement. When viewers tune out, the effects are amplified across subsequent ad breaks, putting both overall viewer retention and inventory optimization at risk. Only Conviva can deliver insights into the full technical quality of ad creatives, lost impressions directly impacting monetization, how viewers engage with ads, lost yield due to slates, and the holistic view of ads in the context of content. “Ad Insights is instrumental in monitoring our video ads,” said Jarred Wilichinsky, VP Video Monetization and Operations at CBS Interactive. “With this visibility, we’ve been able to resolve issues quickly and provide the best possible viewing experiences.” Why Ad Insights: Measure the Full Technical Quality: Know what’s happening with every viewer, every second, every screen to reduce churn, avoid inventory waste and prevent under-delivery. Identify Lost Impressions that Impact Monetization: Unique visibility across 11 points of failure, from ad requests to ad delivery and decisioning to ad creative delivery and creative playback. Understand Ad Engagement: Maximize revenues by aligning strategies for streaming video ad breaks, duration and placement to viewer characteristics and expectations.
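The unfilled-opportunity arithmetic above comes down to tallying delivery outcomes per ad slot. This sketch (with invented sample data; the category labels follow the 11 failure points listed earlier) shows the calculation in miniature.

```python
from collections import Counter

# Illustrative tally of streaming-ad delivery outcomes: each expected ad
# opportunity either played or failed at some point in the delivery chain.
# The share that did not play is the unfilled-opportunity rate. Sample
# events are invented; the failure labels echo the list in the text.

events = ["played", "played", "empty_ad_response", "played",
          "ad_request_timeout", "media_playback_error", "played"]

counts = Counter(events)
opportunities = len(events)
unfilled = opportunities - counts["played"]
rate = unfilled / opportunities
print(f"{unfilled}/{opportunities} opportunities unfilled ({rate:.0%})")
```

In this toy sample 3 of 7 slots fail; continuous per-second measurement is what lets a real system attribute each failure to the right stage (request, decisioning, creative delivery, or playback) instead of just counting the shortfall.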

BESTOFSHOWATIBC 2019 CONVIVA CONVIVA EXPERIENCE INSIGHTS We are launching a completely reimagined Conviva Experience Insights at IBC, as we expand our streaming media intelligence portfolio to deliver a 360° view across content, social media, quality of experience and advertising. The next generation of Experience Insights has been rearchitected from the ground up for unprecedented levels of ad hoc multi-dimensional analysis, responsiveness, data accessibility and granularity. Long recognized as the industry benchmark for data accuracy and real-time quality of experience (QoE) optimization, it now features a redesigned, easy-to-use interface optimized for common workflows, to make it easy for users across organizations to better understand, visualize and perform individual queries and get instantaneous feedback on the literally hundreds of millions of combinations of metrics. Other enhancements include second-by-second reporting at minute granularity and on-demand filtering based on the user’s choice of dimensions. These dramatic improvements deliver the visibility and responsiveness publishers need to provide the next-level entertainment experiences their viewers expect. Providers today must ensure every viewer’s streaming experience is exceptional. Unlike cable hardwired to TV sets, the Internet was not built for video. Between transmission from the

server to ISP to CDN to the viewer’s location, application and device there are potentially 100 million problems that can impact the viewer’s experience. Providers must understand and be able to address each viewer’s experience. Experience Insights removes critical blind spots and provides real-time intelligence to enable churn reduction, increased viewer engagement, new monetization models, ROI growth and more. “Hulu’s rapidly scaling business requires a deep and detailed view of our video and customer systems,” says David Baron, Hulu’s VP of Content Business Operations and Digital Supply. “Through our partnership with Conviva, Hulu is able to gather granular insights in real-time across our live and on-demand services that help keep us at the top of our game.” “To stay ahead in the streaming TV market, you have to keep pushing the

boundaries of technology innovation and building for what’s next. Sling has done that many times over,” says Christine Weber, Sling TV’s Sr. VP of Engineering. “Conviva has repeatedly set the standards for real-time streaming media intelligence and does it again with the latest generation of Experience Insights.” “As a rapidly expanding global sports broadcaster, data forms a central role in our continued success,” says Nick Grimwood, DAZN Operations. “With each new market launch, it’s crucial that we understand the territory-specific nuances in real-time. To deliver the world-class service that our passionate subscribers expect of live sport streaming, we choose Conviva as our key strategic partner.” Why Experience Insights Reimagined:
•See the Full Picture: Dive into granular single-session details or see the big picture at a glance to understand the complete viewer experience.
•Shorten Resolution Time: Get intuitive, ad-hoc analysis and incredibly responsive, on-demand filtering to pinpoint and rapidly resolve issues that impact engagement.
•Benchmark What Matters: Inform critical decisions with expansive historical data and granularity from Conviva’s unmatched 100B streams/year, 1 trillion data events/day, 180 countries.
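For readers less familiar with QoE metrics of the kind Experience Insights reports, here is a minimal, illustrative sketch (not Conviva's implementation) of how two common metrics, rebuffering ratio and average bitrate, fall out of per-second session samples:

```python
# Illustrative only: derive rebuffering ratio and average playing bitrate
# from per-second session samples of the form (state, bitrate_kbps).
def qoe_metrics(samples):
    playing = [b for s, b in samples if s == "playing"]
    buffering = sum(1 for s, _ in samples if s == "buffering")
    rebuffer_ratio = buffering / len(samples)          # fraction of time stalled
    avg_bitrate = sum(playing) / len(playing) if playing else 0
    return rebuffer_ratio, avg_bitrate

# A 60-second session with 2 seconds of buffering at a steady 4 Mbps.
samples = [("playing", 4000)] * 58 + [("buffering", 0)] * 2
ratio, avg = qoe_metrics(samples)
print(round(ratio, 3), avg)  # 0.033 4000.0
```

Real platforms compute these continuously per viewer and aggregate across millions of concurrent sessions; the value lies in the real-time aggregation, not the arithmetic itself.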

BEST OF SHOW AT IBC 2019 CONVIVA CONVIVA SOCIAL INSIGHTS Conviva Social Insights, launched November 2018, enables premium video publishers and brands to drive incremental revenue and create better video content across all major social media platforms. Social Insights delivers actionable intelligence by effortlessly aggregating social video, story, post, and audience data, so our customers can save time and increase the efficiency of social media across their organization. With Social Insights, we set out to solve a fundamental problem. Every social media platform measures video differently. Because of this fragmentation and limited reporting across data sources, historically up to 80% of time is spent collecting data and only 20% analyzing and acting on it. With Social Insights, we put an end to the drudgery of data collection, cleansing and reporting by automating the workflow and aggregating cross-platform account data into a single consistent view. We do this through platform partnerships and by measuring all of the major social platforms: Facebook, Facebook Watch, & Facebook Live, Instagram & Instagram Stories, Snapchat & Snapchat Stories, Twitter, and YouTube. And because we know what matters to publishers, we continually play a direct role in defining the analytics for nascent platforms like Instagram Stories and Snapchat.

Social Insights delivers unified measurement, validated data, consistent KPIs, automated tagging and automated reports. The comprehensive capabilities of Social Insights enable our customers to reduce time spent on data collection to 20%, freeing up 80% for analysis and action. With Social Insights, customers can identify target audiences across platforms, generate rate sheets based on past campaign performance, and optimize ROI by analyzing historical performance. And connecting social media content with owned-and-operated platforms delivers a truly 360-degree view of the viewer’s journey. Social Insights delivers innovative capabilities that enable publishers to enhance their content with data-backed insights. Our bespoke leaderboards provide competitive video intelligence and trending content insights across all major social media platforms. Tag Manager makes it easy to search for

and automatically group videos and post content using keywords, hashtags, and other metadata. Tagging tools let publishers automatically and quickly group videos and posts for deeper cross-platform analysis and to easily generate and schedule reports at their convenience. And publishers use the Branded Content module to identify, optimize, and report against branded content or native advertising opportunities to maximize the value of their social video inventory. “Conviva Social Insights has been an absolute game changer for us,” says Alex Parker, Director of Marketing, Miami Dolphins Football. “We credit a lot of the success of our strategy to the insights we get from Social Insights. The data we are pulling from Social Insights, we are using every single day.” Why Social Insights:
•Accelerate Time To Insight: Save time building reports with aggregated data from all the major social media platforms in one intuitive dashboard.
•Create More Engaging Content: Identify top-performing posts, videos and stories across social media to bring data into your content strategy.
•Drive Incremental Revenue: Activate target audiences and content series across platforms to win and retain sponsors.
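The core normalization problem described above (every platform defines a "view" differently) can be sketched as a single mapping step. The per-platform thresholds below are assumptions for illustration, not the platforms' official definitions:

```python
# Hypothetical normalization: map each platform's own "view" definition onto
# one consistent KPI before aggregating. Thresholds (seconds watched) are
# illustrative assumptions, not official platform specifications.
PLATFORM_VIEW_SECONDS = {"facebook": 3, "instagram": 3, "youtube": 30}

def qualified_views(platform, watch_times):
    """Count watch sessions meeting the platform's own view threshold."""
    threshold = PLATFORM_VIEW_SECONDS[platform]
    return sum(1 for t in watch_times if t >= threshold)

sessions = {"facebook": [2, 5, 40], "youtube": [10, 45, 31]}
total = sum(qualified_views(p, ts) for p, ts in sessions.items())
print(total)  # 4 (2 qualifying Facebook views + 2 qualifying YouTube views)
```

Without a step like this, summing raw "view" counts across platforms compares incompatible quantities, which is exactly the fragmentation problem the product targets.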


Densitron’s new HMI 16.3” Full Surface 2RU is a core module in our Modular-Hybrid solutions for Broadcast applications. It is based upon TFT display technology with capacitive touch. On this transparent surface “mechanical touch” or tactile control objects can be

mounted for precise user interaction. Its applications include signal and picture monitoring, or use as a control surface with multi-touch PCT, mechanical touch, tactile objects, haptics, or a combination of any or all. Our Full Surface display offers a high

resolution (1920 x 285), which allows multiple video picture monitors and audio level displays side-by-side. With the addition of Aurora™ SBX embedded computing, a complete HMI solution is available for a multitude of operational and engineering applications.

BEST OF SHOW AT IBC 2019 DIGITAL NIRVANA DIGITAL NIRVANA’S MONITORIQ 6.1 AND MEDIA SERVICES PORTAL Market Requirements Reliable, scalable, and effective compliance logging and monitoring is an essential tool for today’s broadcasters for quality assurance and to meet regulatory and compliance requirements. The best compliance logging solutions give operators a comprehensive, efficient, and easy-to-use mechanism for collecting and using valuable insights about aired content. As such, these systems have become standard equipment in broadcast infrastructures of all sizes. Broadcasters have come to rely on several baseline capabilities from their legacy logging systems, including:
•Robust tools for capturing, recording, storing, and streaming video, audio, and data
•Easy access to archived content
•The ability to record compliance-related metadata
•A simple and intuitive, web-based interface
•Enterprise-scale reliability
The Next Generation: Digital Nirvana’s MonitorIQ 6.1 Offered by Digital Nirvana, the leader in metadata creation, enhancement, and compliance, MonitorIQ 6.1 takes the foundational requirements of compliance logging and enhances them with next-generation capabilities including cloud storage, processing, and robust artificial intelligence (AI). These capabilities include: A reliable, scalable, and expandable architecture that enables on-premises, cloud, or hybrid implementation. MonitorIQ was

designed to run locally as a full turnkey solution or in a virtual environment, using local customer hardware or residing in the cloud or in a hybrid configuration. MonitorIQ also provides an extensible list of open APIs for easy integration into broadcast workflows or third-party systems. Linux OS to drastically reduce viruses and malware threats, as well as the need for emergency security updates. Advanced analytics and reporting with the ability to deep-dive into the full SCTE message for analysis and display SCTE messages from multiple points. Seamless integration to AI-based cloud microservices that can be spun up very quickly as needed and run on top of the compliance system. Microservices include:
•Automated speech-to-text for closed caption generation and transcription
•CC/Teletext conformance and correction to meet streaming services’ standards (Hulu, Netflix, Amazon, etc.)
•Powerful text and video-intelligence functions for detecting faces, logos, images, and onscreen text
•Ability to detect ads in competitive

programming and identify the category of the ad and advertisers. Extending the Closed-Caption Ecosystem: Integration With Media Services Portal MonitorIQ 6.1 provides tight integration with Digital Nirvana’s Media Services Portal, a suite of smart self-service tools which empowers broadcasters to apply AI capabilities to broadcast workflows. These microservices include speech-to-text for automated closed-caption generation, quality assessment for post-production closed captioning, and transcription of recorded assets. A video intelligence engine provides logo, face, and object recognition. Benefit Summary The integration of MonitorIQ and Media Services Portal opens up a whole new world of AI-enhanced possibilities for broadcasters to record, store, and repurpose content. Through intelligent and immediate logging and feedback on content quality and compliance, broadcasters are better positioned to meet regulatory, compliance, and licensing requirements for closed captioning, decency, and advertising monitoring. About Digital Nirvana: Founded in 1996, Digital Nirvana, with its repertoire of innovative solutions, specializes in empowering customers worldwide with knowledge management technologies. More information about Digital Nirvana and its products and services is available at www.
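To make the caption conformance-and-correction idea above concrete, here is a minimal sketch of the kind of rule such a service might enforce. The limits used (42 characters per line, 2 lines, 20 characters per second) are common streaming-style constraints adopted here as assumptions, not any one platform's published specification:

```python
# Illustrative caption-conformance check. Limits are assumed, generic
# streaming-style values, not Hulu/Netflix/Amazon's actual specs.
def check_caption(text_lines, duration_s,
                  max_chars=42, max_lines=2, max_cps=20):
    issues = []
    if len(text_lines) > max_lines:
        issues.append("too many lines")
    for line in text_lines:
        if len(line) > max_chars:
            issues.append(f"line too long: {line!r}")
    cps = sum(len(l) for l in text_lines) / duration_s  # reading speed
    if cps > max_cps:
        issues.append(f"reading speed {cps:.1f} cps exceeds {max_cps}")
    return issues

print(check_caption(["This caption fits comfortably."], 2.0))  # []
```

A production conformance pass would additionally validate timing overlaps, positioning, and format-specific (CEA-608/708, Teletext) constraints.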

BEST OF SHOW AT IBC 2019 DISGUISE GX 2C The disguise technology platform enables creative and technical professionals to imagine, create and deliver spectacular live visual experiences at the highest level. With a focus on combining real-time 3D visualisation-based software with high-performance and robust hardware, disguise enables the delivery of challenging creative projects at scale and with confidence. The disguise workflow tool provides 3D simulation, collaborative sequencing and video content playback in a single tool that allows designers and technical teams to collaborate to achieve the best creative results possible in a fast-paced environment. The latest gx 2c media server was designed to meet the ever-increasing creative demands and technological advances in the world of broadcast, esports and live events around the globe. The gx 2c is an innovative design born out of user and project needs for more processing power to drive projection and video content playback, and is the most powerful server on the market for real-time generative content. The gx 2c has been designed to scale and integrate as part of a bigger media server system, to go beyond the power of a single-server solution into distributed systems. This allows users to maximise GPU and memory for greater

real-time content performance. Creatives can build environments with richer scenes at higher resolutions and smoother frame rates to push the boundaries of what is achievable in esports, live events, and studio environments. The gx 2c can also capture 8x 3G-SDI inputs, which unlocks new workflows so creatives and technical teams can utilise more feeds than ever before. The video inputs allow users to capture 2x 2160p60 (2x 4K@60), four times the capability of the predecessor. With double the amount of media storage as well, thanks to a fitted 4TB NVMe drive, creatives are free to work with better-quality codecs. Additionally featuring two VFC slots for disguise’s unique Video Format

Conversion technology, the gx 2c allows users to output HDMI, SDI, DVI or DisplayPort without changing the system, as well as mix signal formats and resolution types in the same project. The gx 2c media server is the result of the drive behind disguise’s in-house teams, who are constantly striving to develop their products to meet the creative and technological demands of their users and be at the forefront of the industry. The new capabilities of the gx 2c make it the most powerful server available for real-time generative content, and it is beginning to open exciting doors for broadcasters and live event creatives all around the world. The gx 2c is the technology powering the disguise booth, as well as the White Light SmartStage - an immersive video environment which replaces the traditional green screen element of a virtual studio and allows presenters and the audience to see and interact with the content around them. SmartStage demonstrates creating virtual real-time interactive content, building virtual set extensions and aligning the real and the virtual to perfection with #SmartStage technology, and won the IBC2018 Innovation Award for its use in Eurosport’s coverage of the PyeongChang 2018 Olympic Winter Games.

BEST OF SHOW AT IBC 2019 DRAKA COMTEQ GERMANY GMBH & CO. KG DRAKA IP MEDIALINE FIBER BU Multimedia Solutions of Prysmian Group presents its new IP fiber optic cables for high-speed data transport. The new Draka IP MediaLine Fiber, based on the SMPTE ST 2110 standard, comprises the central loose tube cable with 2 to 24 fibers. With its FireRes® sheathing, the non-metallic, gel-filled cable is ideal for indoor installation. The CPR Cca cables with very high flame-retardant performance feature dielectric glass yarn armouring for rodent resistance and high waterproofness. The new IP MediaLine Fiber cable range also includes the MFC OS2 fiber optic cable designed for mobile use. The tight-buffered 9/125 fiber cables are equipped with the patented BendBright® technology, making them very robust and resistant to bending. BendBright® combines three features: low macrobending sensitivity, the new Draka Colorlock XS coating and tight glass geometry. Together they create the ideal fiber for all patch cord, interconnect and jumper applications, offering companies measurable technical, economic and environmental benefits. More and more, fiber optics replaces copper. Like many other sectors, the media and event industry is facing a digital transformation. This development requires changes in the broadcast infrastructure. High-resolution cameras and data-intensive additional services such as 4K UHD, 3D streams and interactive media are making ever higher demands on signal quality, transmission range and maximum bandwidth for live

broadcasts. The number of connected endpoints also increases continuously. Conventionally used analogue and digital copper technologies are reaching their limits. Broadcast technology is therefore developing more and more in the direction of fiber optics. A switch to fiber optic systems is unavoidable. Optical fibers overcome the rigid length restrictions of copper systems and transport almost unlimited quantities of audio and video data over very long distances, in accordance with specifications and without signal loss. In addition, fiber optic links offer excellent resistance to temperature fluctuations, dirt, moisture and tensile load. This also makes them suitable for use in the field in harsh environments, for example at festivals.

BEST OF SHOW AT IBC 2019 EDGEWARE STREAMPILOT It’s no secret that a shift is currently taking place within the media industry. Providers of streamed media increasingly do not own their own distribution networks and are thus turning to third-party CDN services to distribute their TV content. Such a multi-CDN arrangement provides assurances around reach, redundancy and peak traffic offload. Spend on CDN services is therefore expected to grow from just under $11 billion in 2018 to nearly $16 billion in 2023 (Ovum), as having a choice of available CDNs becomes critical to ensuring streams start fast, play buffer-free and generally meet consumers’ increasingly high standards. However, multi-CDN environments come with some significant challenges, particularly the ability to control and orchestrate delivery from the different CDN service providers, as well as getting access to instant metrics on clients’ quality of experience (QoE). The content provider’s means to control delivery when streaming from third-party CDN services are limited, with control largely in the hands of the CDN service provider. Furthermore, visibility of QoE metrics is non-existent, offering no opportunity to impact the TV service in real time. This makes it hard to optimise QoE during an ongoing session if problems occur. This is where Edgeware’s unique cloud-based StreamPilot session control platform makes all the difference. Sitting in the control plane between the client and the CDN, StreamPilot enables real-time, in-session and per-segment-based (patent

pending) delivery control in multi-CDN environments, independent of client, CDN and video formats. Along with the ability to switch sessions between CDNs in real time and during ongoing sessions, the platform provides a new and unparalleled level of granularity in the control of TV delivery. Content providers benefit from visibility and insights into viewers’ QoE, all without having to integrate with users’ numerous client devices. StreamPilot provides key session data - instantly - such as bitrate, the timing between segment requests, the type of device used, the video format (HLS, DASH, VoD, Live), the geographical location of the client and performance data from all delivering CDNs. The data is presented in a single dashboard with an open API to enable automation and easy integration with other systems. The real-time session data provides unique insights into the

viewers’ QoE, enabling providers to manage the session and optimise the QoE should problems occur. StreamPilot’s session data also provides valuable insights into how viewers consume streamed media, which can be monetized through new TV service offerings and campaigns. One example is A/B testing when rolling out new audio or video codecs. As consumer habits continue to evolve, the ability to analyse and manage multiple CDNs to deliver the best quality experiences cost-efficiently will be an important strategy for a growing number of OTT providers. Edgeware’s StreamPilot is offered as a SaaS solution and is the first session control platform to function independently of CDN (i.e. it does not require an Edgeware CDN system), client and media format, or any other component in the media delivery chain.
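The per-segment switching idea described above can be sketched conceptually. Scoring CDNs by recently measured throughput is an assumption made here for illustration; StreamPilot's actual decision logic is proprietary and not described in the text:

```python
# Conceptual sketch of per-segment CDN selection in a multi-CDN session.
# Hostnames and the throughput-based scoring rule are illustrative only.
def pick_cdn(throughput_mbps):
    """throughput_mbps: recently measured delivery rate per CDN."""
    return max(throughput_mbps, key=throughput_mbps.get)

def segment_url(cdn_hosts, cdn, path):
    """Rewrite the next segment request toward the chosen CDN."""
    return f"https://{cdn_hosts[cdn]}{path}"

hosts = {"cdn_a": "a.example-cdn.com", "cdn_b": "b.example-cdn.com"}
measured = {"cdn_a": 18.0, "cdn_b": 42.5}
best = pick_cdn(measured)
print(segment_url(hosts, best, "/live/seg_001.ts"))
# https://b.example-cdn.com/live/seg_001.ts
```

Because the decision is re-evaluated per segment rather than per session, a degrading CDN can be abandoned mid-stream without the client noticing, which is the key distinction from DNS- or session-start-based CDN selection.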

BEST OF SHOW AT IBC 2019 EVERTZ SCORPION MIO-BLADE SCORPION MIO-BLADE: Smart Media Processing Platform with evEDGE services Broadcast facilities have been highly dependent on fixed-function processing hardware such as video and audio processors, frame synchronizers and up/down/cross converters for day-to-day operational tasks. However, with this approach, operational workflows are limited by the functionality of the equipment that is available. This type of operational architecture, while functional, is not flexible or particularly cost-effective in dynamic environments. Evertz evEDGE virtual IP media services support a comprehensive selection of processing functions which can be provisioned based on evolving workflow requirements. The flexibility enabled by evEDGE allows highly efficient and adaptable workflows for every application. Unlike fixed-function processing hardware, evEDGE services can be run on agile hardware platforms. Processing functions are no longer permanently coupled to the underlying hardware platform. evEDGE supports virtual IP media services over agile hardware platforms including FPGA compute blades, x86 COTS servers and the Cloud. SCORPION, Evertz’ Smart Media Processing Platform, has quickly become a fundamental building block for any IP facility. The versatile and extensive

SCORPION I/O options for video, audio, and data signals are important for any facility, whether it is 12G/3G/HD-SDI, compressed over IP (JPEG2000), or uncompressed over IP (ST 2110 over 10/25/100GbE). The SCORPION mini I/O (MIO) modules provide media companies the flexibility to aggregate multiple video/audio/data signal types into one platform. The new SCORPION MIO-BLADE combines the SCORPION platform and evEDGE services to allow media companies to take advantage of flexible compute engines to run video/audio processing functions as dynamic software services. The MIO-BLADE is the latest addition to the Evertz evEDGE compute family to help media companies transition from single-function hardware to the more robust “software as a service” (on generic hardware). With the additional support of contribution encoding/decoding (such as JPEG2000), SCORPION and MIO-BLADE can be used in many applications that include: media transport over WAN, remote production, video assisted referee (VAR), and IP Gateways.

BEST OF SHOW AT IBC 2019 EVERTZ DREAMCATCHER™ BRAVO DreamCatcher™ BRAVO is the latest addition to the DreamCatcher™ Production Suite that allows producers to maximize revenues from live broadcasts. BRAVO unifies all the DreamCatcher™ tools to allow users to produce high-quality live content at lower cost for more platforms. BRAVO’s innovative platform is ideal for live sports, gaming, and entertainment production. As a uniquely collaborative production system, BRAVO allows more live events to be produced more efficiently. BRAVO is an application on the DreamCatcher™ architecture that provides all the features required for a multi-camera production, including: video/audio mixing, dynamic graphics, support for external graphic engines, replays, and highlights. DreamCatcher™ BRAVO leverages the patented DreamCatcher™ network architecture to allow for any scale and size of production. The DreamCatcher™ network architecture treats all the DreamCatcher™ ingest/capture and playout nodes as a cluster of compute, similar to a software-defined data center (SDDC). This unique approach allows media companies to use the DreamCatcher™ cluster in many different configurations for different applications. BRAVO is a software client on the DreamCatcher™ cluster, which leverages the scalability of the cluster and has access to all inputs. This provides a producer with unlimited flexibility to handle any size of production with any number

of operators. For example, media companies can use the DreamCatcher™ cluster to produce multiple small multi-camera events with small teams at the same time, or produce a large multi-camera event with a large team. DreamCatcher™ BRAVO, with its intuitive interface, can be customized for one-, two-, or multi-operator productions. BRAVO builds on a DreamCatcher™ core feature of a customizable interface for modern touch surfaces for efficient workflows. The flexible interface uses Evertz’ VUE technology to adapt to the type of production and present operators with the controls they require. BRAVO allows media companies to cost-effectively capitalize on rights and create top-tier content for multiple platforms, from traditional linear to social media channels.

BEST OF SHOW AT IBC 2019 FRAME.IO FRAME.IO V3.5 Frame.io is the market driver for professional video review and collaboration, designed to define and modernize the cloud-based creative process. Natively integrated into all popular NLEs, the platform, with the addition of our new App Marketplace and public API, is uniquely positioned as the hub of the video creation process, enabling teams to organize and share project assets, and to connect the tools they use across their entire organization, regardless of location. New for IBC 2019 is the launch of Frame.io v3.5, another significant update to the platform that media organizations like VICE have said they can’t live without. Serving as the connective tissue for freelancers, brands, facilities, and media conglomerates of all shapes and sizes, v3.5 boasts enterprise-grade security features that enable content creators to keep heavily guarded media content safe from leaks. Security controls are customizable so that highly sensitive content can be easily protected with access controls and permissions across organizations, teams, or individuals. Frame.io is also the only video collaboration platform to have voluntarily completed TPN (Trusted Partner Network) assessment, a joint venture between two of the industry’s most important associations, the MPAA and the CDSA. Video creators incorporate myriad task-specific apps into their process. That’s why we created the App Marketplace. A hub from which users can access more than 1,000 apps and easily integrate them

into their workflow—whether uploading content, tracking projects, or publishing video—has made it possible to automate numerous processes that save time and boost productivity. By using integrations built by our world-class partners (such as Kyno, LumaFusion, and Hedge), it’s possible to view dailies within minutes after a director calls “cut,” to transcode and back up files, and to edit from an iPhone in the field or on location. Additionally, the Developer Platform is designed to empower users to create flexible and scalable workflows. Custom integrations built by leveraging our API can enable a host of automated features, such as:
- Push uploads directly from an in-house DAM to Frame.io
- Automatically generate review links for clients (effectively turning Frame.io into their client front-end on top of their own workflow/CMS backend)

- Pull periodic metrics to create unique analytics dashboards
Most recently, Blackmagic Design developed a Frame.io integration in DaVinci Resolve 16 Studio. The integration is built on top of the same public APIs we use to build our own software, and woven natively into Resolve’s UI and timeline. This enables an unprecedented quality of integrated user experience, from the way Frame.io’s cloud media appears as a local drive, to seamless media import, and synced comments on the Resolve timeline. From dailies to delivery, Frame.io is the core of media workflows at organizations ranging from 1 to over 1,000 users. As the demand for video content increases and deadlines shorten, Frame.io enables real-time feedback for quick responses and iterations, allowing distributed teams to work together from anywhere on the planet.
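As a hedged sketch of what an API-driven integration like the ones listed above might look like, the snippet below builds (without sending) a request to a Frame.io-style review-link endpoint. The endpoint path, payload fields, and project id are assumptions for illustration; consult the current Frame.io developer documentation for the real API shape:

```python
# Hedged sketch: constructing a POST to a Frame.io-style review-link
# endpoint. Path, payload, and ids are illustrative assumptions.
import json
import urllib.request

def build_review_link_request(token, project_id, name):
    url = f"https://api.frame.io/v2/projects/{project_id}/review_links"
    return urllib.request.Request(
        url,
        data=json.dumps({"name": name}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )  # caller would pass this to urllib.request.urlopen()

req = build_review_link_request("TOKEN", "proj_123", "Client cut v3")
print(req.full_url)  # https://api.frame.io/v2/projects/proj_123/review_links
```

The request is only constructed here, never sent, so the example stays runnable without credentials; a real integration would handle the HTTP response and error codes.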

BEST OF SHOW AT IBC 2019 GATESAIR MAXIVA PMTX-70 OUTDOOR POLE-MOUNTED TRANSMITTER GatesAir’s new Maxiva™ PMTX-70 is a unique product with multiple configuration options. The unit can be configured as a transmitter, transposer, or on-channel gap-filler, and it is completely sealed by a telecom-grade weatherproof enclosure, ensuring continuous operation even in the harshest outdoor elements. For the utmost reliability, the Maxiva PMTX-70 contains no moving parts, cooling fans, or air filters and mounts directly to a metal tower or pole, which serves as an efficient heatsink. A variety of mounting plates and adapters ensure that it can be mounted on virtually any structure. The highly flexible Maxiva PMTX-70 can be configured as a VHF-TV/DAB+ or UHF-TV transmitter, for either analogue or digital operation, and supports all major worldwide modulations. With a rated output power of up to 70W rms, or 130W analogue, it can be used for a wide variety of low-power applications including single-frequency network (SFN) applications. Transmitter inputs are varied, including optional DVB S/S2 satellite receivers (up to four), ASI, ETI and GbE (TS over IP). When configured as a transposer or on-channel gap filler, an off-air receiver option is available. An on-board remux option provides the capability to combine different programs from multiple satellite receivers or an off-air receiver into a new multiplex. For the output stage, a high-efficiency broadband Doherty power amplifier is combined with state-of-the-art real-time adaptive correction to ensure unprecedented levels of RF performance and stability in this power class. The same housing also includes an RF output filter to ensure compliance with mask requirements. Control and monitoring for the PMTX-70 can be achieved via a built-in SNMP/Web interface (Ethernet RJ45) or an integrated 4G modem.

Key Features include:
•Outdoor, pole-mount unit
•No mechanical parts, fans or filters; 100% sealed unit
•Heat is dissipated to tower leg/pole
•Output power (after integrated mask filter): 70W average DTV, or up to 130W analogue
•Configurable as:
-Transmitter
-Translator (Transposer)
-Gap Filler
•Power supply: DC, 36-72V (positive or negative)


GB Labs has introduced 25 Gb Ethernet (25 GbE) connectivity as standard across its entire FastNAS range, in place of 10 GbE. This more than doubles the existing industry-standard 10 GbE offering. This is highly significant because most types of uncompressed 4K demand far more than 10 GbE can deliver. Clocked by the AJA speed test at over 2.3 GB/s, FastNAS with 25 GbE is now significantly faster to the desktop. 25 GbE connectivity is not new, nor is it a GB Labs invention per se, but it is an emerging Ethernet speed option that, although a significant advance on traditional 1 GbE, 10 GbE or higher options, is woefully underutilised because, until now, there has been no NAS technology capable of truly pushing it to its full potential. For example, most SAN systems using Fibre Channel typically run at either 8 Gb or 16 Gb. 25 GbE, on the other hand, represents a significant step-change upwards from what most

organisations are running with Fibre Channel today, i.e. it is potentially significantly faster, but only if there is storage technology behind it that is designed to drive it. GB Labs’ professional range already delivers up to 100 GbE, but requires special cabling and other infrastructure considerations. However, GB Labs has found a way to exploit the full power of 25 GbE by making it integral to its intelligent, high-performance FastNAS storage range. The method by which GB Labs has implemented 25 GbE connectivity in FastNAS has resulted in professionally validated speeds of up to 2.3 GB/s from server to desktop, which more than doubles what would typically be experienced from a 10 GbE connection. What’s more, a 25 GbE FastNAS can use existing LC fibre cabling infrastructures. If a SAN user wants to join the 21st century with a 25 GbE FastNAS product, they needn’t change much of anything to get it done cable

wise. Existing cabling, usually embedded into walls, can be left right where it is to deliver the blistering performance now offered by FastNAS. This convenience not only represents a substantial cost saving, but a vast reduction in unwanted disruption. And this is not a case of GB Labs doing 25 GbE “because it can”. It’s about applying experience and expertise to utilise 25 GbE to the fullest extent of its performance capability for real-world benefits. In short, there’s no point investing in 25 GbE fibre if you don’t have the hardware to drive it to its maximum benefit. Now, with FastNAS, both can be done at the same time without having to tear down and rebuild the walls, pipework, and flooring. By equipping its entire FastNAS range - shipping from IBC 2019 - GB Labs has brought the speed, power, and productivity benefits of 25 GbE to the masses.
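As a sanity check on the figures above, the raw line rates convert from gigabits to gigabytes per second as follows. Protocol and encoding overheads vary, so real-world throughput such as the quoted 2.3 GB/s will always sit somewhat below the raw rate:

```python
# Back-of-envelope arithmetic: raw Ethernet line rates in GB/s.
# Overheads (encoding, TCP/IP, SMB/NFS) are ignored here deliberately.
def line_rate_gbytes_per_s(gbits):
    return gbits / 8  # 8 bits per byte

for link in (10, 25):
    print(f"{link} GbE raw: {line_rate_gbytes_per_s(link):.3f} GB/s")
# 10 GbE raw: 1.250 GB/s
# 25 GbE raw: 3.125 GB/s
```

The measured 2.3 GB/s against a 3.125 GB/s raw line rate corresponds to roughly 74% link utilisation, which also shows why uncompressed 4K workflows saturate a 10 GbE link (1.25 GB/s raw) so easily.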

BEST OF SHOW AT IBC 2019 IBM ASPERA ASPERA ON CLOUD IBM Aspera on Cloud provides the fastest way for media companies to securely and reliably move content across on-premises and multi-cloud environments. It provides seamless access to data wherever it’s stored, enabling users to collaborate in a secure environment that tightly controls access to content and application functionality. Large files and data sets are transferred across the storage environment using Aspera’s patented FASP® protocol, which overcomes the limitations of FTP and physical disk shipments to move content at maximum speed regardless of network conditions, physical distance between sites, and file size, type, or number. With full visibility and control over the Aspera transfer environment, users can:
•Manage transfer activities, storage usage, and digital packages.
•Monitor activity logs and service alerts.
•Manage membership in workspaces, user groups and shared inboxes.
•Easily create a uniquely branded web presence.
New automation functionality will enable users to quickly build and configure event-driven transfer workflows using an easy-to-use graphical workflow designer. Organisations will be able to streamline content delivery workflows by automatically triggering transfers with an action such as a submission to a shared inbox or folder, or an API call. Workflows will also be able to issue notifications of a transfer. While media organisations are increasingly

adopting hybrid-cloud workflows that use a combination of public cloud, private cloud, and on-premises storage and compute resources, moving content has become more challenging. For example, files are often stored in multiple clouds and on-premises systems. Traditional transfer technologies are slow and unreliable, and physical disk shipments are impractical, often exposing data to unnecessary security risks. IBM Aspera on Cloud overcomes these challenges by allowing media companies to securely and reliably move content across on-premises and multi-cloud environments at unrivalled speed. Users can securely collaborate with anyone around the world, as enterprise-grade security protects content as it’s shared and exchanged. The service authenticates users upon login, encrypts data in transit and at rest using strong cryptography, and verifies data integrity to protect against

man-in-the-middle attacks. Aspera on Cloud is available in numerous cloud data centres across the world, and in several languages. It’s available to order online as a pay-as-you-go option, or as an annual pre-committed volume rate plan. Major film studios, post-production companies, and broadcasters rely on Aspera on Cloud to reduce their production cycles while securely delivering high-resolution media worldwide, to provide consumers with their best content, faster and more efficiently than ever. Supporting information: • watch?v=xTsp4bMakrI • Datasheets/AsperaonCloud_DS_2018.pdf • storage/2159-bitcine-speeds-pre-releasecontent-delivery-with-aspera-on-cloud
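Aspera’s FASP transport and its built-in integrity checks are proprietary, but the underlying idea of confirming that content arrived intact can be sketched with standard cryptographic digests. The helpers below are a hypothetical illustration of the principle, not Aspera’s API:

```python
import hashlib


def file_digest(path: str, algo: str = "sha256", chunk_size: int = 1 << 20) -> str:
    """Compute a cryptographic digest of a file, reading it in chunks
    so that arbitrarily large media files never need to fit in memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_transfer(source_path: str, dest_path: str) -> bool:
    """Confirm the destination file matches the source byte-for-byte.
    A mismatch indicates corruption or tampering in transit."""
    return file_digest(source_path) == file_digest(dest_path)
```

Comparing digests computed independently at each end is the standard way any transfer service can detect corruption or tampering regardless of file size.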

BEST OF SHOW AT IBC 2019
IMAGINE PRODUCTS
TRUECHECK - A FILE ANALYSIS APP
TrueCheck is a dynamic multi-tool that specializes in verification activities beyond copying. This file analysis application is perfect for comparing, reporting and managing files. TrueCheck helps organize and preserve the integrity of the most valuable part of the media and entertainment industry: the media assets. In this ever-evolving industry, data integrity and workflow management are now more important than ever. Customer feedback is always a central factor when Imagine Products creates a software application. TrueCheck is a combination of comparison tools from an older app and feature requests from ShotPut Pro, the company’s well-known offload application. There is no other software tool on the market that can do what TrueCheck offers. The ease of the user interface is ideal for professionals on any level, as well as students and novices breaking into the industry. TrueCheck’s power lies in comparing files and volumes for sameness or differences, creating detailed reports or verifying checksums without copying. Its ability to do deep, smart searches of entire hard drives based on more detailed criteria than Finder, and to verify previously sealed MHL reports, makes it indispensable for anyone who needs to manage, organize or maintain data integrity. Folders and volumes can be compared based on user-specified criteria such as checking folder structure equality, showing folders with commonalities

and the most robust and detailed comparison: comparing the files inside a folder. This app is ideal for teams that need to stay organized across multiple hard drives. Compare one hard drive to another to ensure an exact mirror image. Results are displayed in an easy-to-understand interface with thumbnails and metadata for each file. Once that comparison is complete, generate a PDF report with thumbnails and metadata to begin the audit trail. Multiple report options are available and can be run off any attached media. Choose from MHL, PDF, CSV and TXT, and choose whether or not to include

checksums in any report. TrueCheck was created for the media and entertainment industry, and because of that, its search options include criteria like camera manufacturer and file extension. This tool will revolutionize many workflows and improve the organization and safety of countless assets. Imagine Products has been creating software for the media and entertainment industry for almost 30 years. They are known for being thought leaders and workflow experts and are dedicated to making powerful solutions that simplify processes and maintain data integrity.
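TrueCheck’s comparison engine is not publicly documented, but its core technique - comparing two folder trees for sameness by checksum, without copying anything - can be sketched in a few lines. This is a hypothetical illustration of the general approach, not Imagine Products’ code:

```python
import hashlib
import os


def tree_checksums(root: str) -> dict:
    """Map each file's path (relative to root) to its SHA-256 digest."""
    sums = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            h = hashlib.sha256()
            with open(full, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            sums[rel] = h.hexdigest()
    return sums


def compare_trees(a: str, b: str) -> dict:
    """Report files present in only one tree, and shared files whose
    contents differ - the basis of a mirror-image verification."""
    sa, sb = tree_checksums(a), tree_checksums(b)
    return {
        "only_in_a": sorted(set(sa) - set(sb)),
        "only_in_b": sorted(set(sb) - set(sa)),
        "different": sorted(k for k in set(sa) & set(sb) if sa[k] != sb[k]),
    }
```

The resulting dictionary is exactly the kind of structured result that can then be rendered as a PDF, CSV or TXT report for an audit trail.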

BEST OF SHOW AT IBC 2019
IN2CORE
SCREENPORT SDI
MILESTONES
April 2019 - The ScreenPort SDI prototype was introduced for the first time during the NAB Show and was selected by NAB Show Live! field experts as one of the Favourite Picks of the show.

BACKGROUND
Today’s tablets and smartphones feature amazing displays that often exceed the quality of dedicated portable/on-board monitors. In addition to wide-gamut colors, low power consumption and ultra-low weight, they offer a superior touchscreen interface and incredible computing power - but they lack video input.
WHAT MAKES IT SPECIAL?
ScreenPort SDI is a unique device that turns your iPad, iPhone or Mac into the smartest SDI monitor and recorder. Why buy another dedicated monitor when you can just take advantage of a device you already own? It makes all field monitors look obsolete: ScreenPort SDI makes your iOS/macOS device the only video monitor that also allows you to communicate, surf the web or use any other app. Just imagine that, by utilizing iOS’s multitasking feature, you can now answer emails while monitoring the work on set. It is a perfect companion for run-and-gun video assist, script supervisors or video engineers. It is lightweight and can run on batteries. When connected over USB it will even charge the iPad (or use USB to power itself from a Mac). If a wireless connection is preferred, it can stream video to your device over Wi-Fi.

In addition to timecode and the record flag, ScreenPort SDI can read camera and lens metadata embedded in the SDI signal. This includes clip name, camera speed, shutter, focal length, focus distance and other useful information. All extracted metadata is available during live monitoring and is recorded with each clip so it remains available during playback. ScreenPort SDI comes with a free ScreenPort app that provides monitoring, recording and playback, color calibration (X-Rite ColorTrue), custom frame lines, video scopes, focus peaking, 3D LUT support, etc. Optionally, the powerful QTAKE applications can be used to provide real-time compositing, editing and color correction tools, cloud-based streaming and project collaboration, hand-drawn frame-based notes and much more. More information at

August 2019 - Shipping has started.
September 2019 - ScreenPort SDI will be presented for the first time at IBC 2019, in booth 12.G33 (Ovide/QTAKE).
ABOUT THE COMPANY
ScreenPort SDI is manufactured by IN2CORE, developer of software and hardware tools for filmmaking professionals. The company is known worldwide for development of the most advanced video assist software, QTAKE, as well as related hardware devices (QOD+, MetaCoder) and software applications (QTAKE Server, QTAKE Monitor, QTAKE Cloud and others). Since its launch in 2009, the QTAKE ecosystem has become an industry standard and has been used on the Avengers, Mission: Impossible and James Bond film series, the Game of Thrones TV series, Oscar-awarded movies such as Bohemian Rhapsody and A Star is Born, and many others.

BEST OF SHOW AT IBC 2019
INTERRA SYSTEMS, INC.
INTERRA SYSTEMS’ ORION-OTT WITH OCM
The streaming media market is growing rapidly and ever-evolving, putting complex requirements on OTT service providers. Today, service providers need comprehensive video insights and a streamlined file-based workflow approach in order to successfully deliver high-quality OTT offerings. Recently, Interra Systems added a new ORION Central Manager (OCM) to its industry-leading ORION-OTT end-to-end monitoring solution, providing users with an aggregated, real-time view of linear and VOD OTT services to ensure fast and efficient resolution of issues. OCM offers enterprise-wide visibility by enabling central management of multiple ORION setups, whether they are located in the same or diverse geographic locations. Giving unprecedented visibility, with a real-time view of linear and OTT services, alerts, and QoE information for each video, OCM enables service providers to adopt a proactive approach to service monitoring and improve the quality of their OTT offerings. Unique features and benefits of ORION-OTT with OCM include: Enterprise-wide visibility: OCM simplifies control and monitoring of OTT services by providing operators with a centralized view of errors occurring from multiple monitoring probes. As content is routed through many systems

such as encoders, transcoders, origin servers, and CDNs, OCM helps in error isolation and correlation by aggregating alerts from multiple probes in the video streaming workflow into a single location. Comprehensive OTT monitoring: ORION-OTT is the most comprehensive monitoring solution for OTT services, offering monitoring support for improving the quality of VOD content and live event streaming with thorough ABR manifest file validation, audio-video checks, and real-time alerts. Support for the latest standards in closed captions and ad insertion offers broadcasters and service providers the necessary tools for compliance and monetization. User-friendly: The intuitive user interface of ORION-OTT allows service providers to drill down into monitoring runs to identify the most important issues, their location and occurrence(s) in an asset, along with more contextual information for debugging the issues. The web-based interface of ORION-OTT allows remote monitoring through any

browser-enabled device. OCM takes the drill-down features of ORION-OTT one step further by allowing OTT service providers to drill down into specific probes to view in-depth monitoring information. Unlike other monitoring systems that only support broadcast or OTT, live or on-demand content, OCM provides real-time monitoring information on all content. This greatly speeds up debugging and resolving reported issues. Flexible: Based on a distributed architecture, where probes can be deployed at multiple points in the workflow, including the origin server, CDN, or at the edge, OCM increases flexibility and ensures efficient bandwidth management for operators by providing the most accurate depiction of quality of experience and quality of service. Interra Systems’ ORION-OTT monitoring solution with OCM deserves to win this award because it is the only comprehensive, web-based, enterprise-level solution that centrally monitors the health of probes, collects monitoring data from probes, and provides a global alert summary across probes to cable operators for both broadcast and OTT services. This provides unsurpassed visibility and troubleshooting tools — a capability that is critical for today’s complex workflows.
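OCM’s probe protocol is proprietary, but the central aggregation idea described above - collecting alerts from many distributed probes into one global summary - reduces to a simple fold over per-probe reports. The report shape below is assumed purely for illustration:

```python
from collections import Counter


def aggregate_alerts(probe_reports: list) -> dict:
    """Merge alert lists from many monitoring probes into one summary.

    Each report is assumed (hypothetically) to look like:
      {"probe": "cdn-edge-2",
       "alerts": [{"service": "news-hd", "severity": "critical"}, ...]}
    """
    by_severity = Counter()
    by_service = Counter()
    for report in probe_reports:
        for alert in report["alerts"]:
            by_severity[alert["severity"]] += 1
            by_service[alert["service"]] += 1
    # Counting by service across probes is what lets an operator
    # correlate one root cause seen at many points in the workflow.
    return {"by_severity": dict(by_severity), "by_service": dict(by_service)}
```

Grouping the same alerts by service across probes is what turns scattered per-probe errors into a single correlated view of a failing channel.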


The Linear Acoustic LA-5291 Professional Audio Encoder is an essential tool for television broadcasters creating live content in Dolby Atmos immersive audio.
Supporting Today’s Dolby Atmos Workflows
The LA-5291 provides decoding, encoding, and transcoding to and from PCM and select Dolby® coded formats for up to 16 audio channels. “The amount of content produced in the Dolby Atmos format continues to grow, reaching viewers via satellite, cable, and streaming services, and the required workflow can be complex,” says John Schur, President, Telos Alliance TV Solutions Group. “The LA-5291 is designed with every stage of the immersive audio process in mind.”
Developed in Partnership with Dolby

The LA-5291 is the result of a close partnership between Linear Acoustic and Dolby, and is designed to maintain the functionality provided by the Dolby DP591 Audio Encoder while offering additional features and capabilities.
Equally at Home for Contribution and Distribution
The LA-5291 earns a place throughout the immersive audio journey, starting with OB trucks where it can encode multi-channel PCM audio to Dolby ED2 or Dolby Digital Plus Atmos for distribution. Further downstream, it can transcode Dolby E and ED2 directly to Dolby Digital Plus and Dolby Digital Plus Atmos and encode PCM and PCM with PMD to Dolby Digital Plus and Dolby Digital Plus Atmos for final distribution.

Ready for SMPTE ST 2110-30
The LA-5291 features 16 channels of bidirectional AES67 I/O to support users moving toward or currently using SMPTE ST 2110 facilities. SDI I/O includes two independent 3G-SDI inputs with de-embedding and re-embedding for up to eight audio pairs, and optional quad-link 3G-SDI for 4K workflows. Four AES-3 inputs/outputs are standard, while 64 channels of MADI I/O are available as an option. Dual Ethernet ports are included, one for AES67 and another for networked remote control via an intuitive, platform- and browser-agnostic web-based user interface. Dual internal redundant auto-ranging power supplies are standard, as is a Telos Alliance 2-year limited parts and labor warranty.

BEST OF SHOW AT IBC 2019
MARSHALL ELECTRONICS
MARSHALL CV506 HD MINIATURE CAMERA
The Marshall CV506 Full-HD Miniature Camera offers performance, flexibility and value in a tiny form factor, with simultaneous 3G-SDI and HDMI outputs. It is designed to capture detailed shots while maintaining an ultra-discreet miniature point-of-view perspective without sacrificing versatility or convenience. The CV506 offers a powerful professional video solution for users working across multiple broadcast applications. As a frontrunner of Marshall’s new line of POV cameras and an upgrade to the popular Marshall CV505, the CV506 incorporates new technology including a 30 percent larger sensor for better picture, color depth and low-light performance, along with an improved housing design. Users will notice a true step up in color and clarity, as well as improved signal strength and ultra-low-noise output. The CV506 delivers ultra-crisp, clear progressive HD video up to 1920x1080p at 60/59.94/50fps and interlaced 1920x1080i at 60/59.94/50fps. With interchangeable lenses and remote adjustability for matching with other cameras, the CV506 is suitable for a range of professional workflows as it can capture detailed shots while maintaining an ultra-discreet point-of-view perspective without sacrificing versatility or convenience. The CV506 offers all frame rates in

one model. In addition, newly added stereo audio inputs and 23.98 fps for TV and cinema production are now standard on all models. The design of the CV506 demonstrates Marshall’s commitment to providing high-quality solutions for users calling for discreet yet powerful image capture capabilities. Measuring in at just a few inches, the CV506 allows for camera placements that many traditional broadcast cameras simply cannot offer. It features a new body style with locking I/O connections, and the addition of white clip and pedestal control with remote image adjustment capability. It is also field-upgradeable as new firmware is released, for added value. Performance-wise, the CV506 continues

Marshall’s trend of providing top-notch image quality. Featuring a next-generation 2.5-Megapixel sensor, the CV506 delivers ultra-crisp, clear progressive Full-HD video. The CV506’s threaded M12 lens mount offers a wide range of prime and varifocal lens options, allowing users a versatile set of shot angles. The CV506 also comes with an impressive array of picture adjustment settings including white balance, gain control, white clip, exposure, gamma and more, all of which can be adjusted through an OSD menu joystick. Sporting full-sized BNC and HDMI outputs and a locking I/O power connection, its Hirose breakout cable serves triple duty as a power, control and stereo audio input port. The camera’s rear panel offers easy and reliable connections while maintaining a concise housing unit. The CV506 features industry-leading low power consumption and ultra-low-light technology to offer the lowest noise ratio on the market today, allowing users to capture sharp, vivid color images in all sorts of light conditions. It also offers focal length and field-of-view flexibility with interchangeable lenses, remote control over RS485, and the convenience of full-size 3G/HD-SDI and HDMI outputs to maintain broadcast-level standards for POV camera applications.

BEST OF SHOW AT IBC 2019
MARSHALL ELECTRONICS
MARSHALL CV503-WP ALL-WEATHER MINIATURE CAMERA
Marshall Electronics’ CV503-WP All-Weather HD Miniature Camera offers the ideal professional video solution for users working in a range of outdoor broadcast events and Pro AV workflows in which weather, dust and moisture become important factors. Marshall Electronics has seen to it that no conditions are too extreme for pristine video capture. Produced as a successor to its award-winning CV502-WPM and tailored to the specific needs of industry professionals, the CV503-WP reflects the versatility and dependability of all Marshall products and presents users with all-new features that make this camera a must-have for video professionals. The Marshall CV503-WP offers full protection from the elements through a compact IP67-rated weatherproof housing, giving it the edge it needs to remain dependable while delivering professional-quality results even in the most unforgiving conditions. While most weatherproof cameras on the market provide adequate protection from such conditions, the CV503-WP offers exceptional durability in a compact piece of equipment. This makes the CV503-WP ideal for capturing exciting angles in tight spaces while allowing for an ultra-discreet presence. The new CV503-WP utilizes a larger sensor

and more powerful processors to offer better picture performance, true color, reliability and versatility. Built around a next-generation 2.5-Megapixel, 1/2.86-inch sensor and ultra-low-noise signal processors, the CV503-WP delivers ultra-crisp, clear progressive video up to 1920x1080p at 60fps and interlaced up to 1920x1080i at 60fps. The CV503-WP offers a flexible 10’ (3m) weatherproof cable that carries HD video, control, and power to a full-sized 3G/HD-SDI (BNC) output, RS485 connection, and a locking 12V power connector. The CV503’s threaded M12 lens

mount with wider weatherproof cap offers a wide range of prime lens options for greater flexibility in obtaining the right shot in the field. Lens interchangeability and compact design provide a versatile range of applications for the CV503, from live broadcast productions, sports and newscasts, reality television, concerts, corporate events, government and military applications, houses of worship and many more. The CV503-WP also comes with an impressive range of picture adjustment settings including white balance, color, gain control, exposure, gamma and more, all of which are delivered via common RS485 (VISCA) commands and can be selected via an OSD menu joystick on the back of the camera. Remote adjustment of a wide range of picture settings via RCP or camera control software makes it easy to match other cameras in the workflow from back in the truck or at the control panel. The CV503-WP features incredible image quality, interchangeable prime lenses, remote-adjustable settings, and protection from the elements. It can capture clear and crisp shots from impossible angles in a range of harsh outdoor applications while maintaining an ultra-discreet presence.
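Marshall does not publish its exact command set, but the RS485 (VISCA) control mentioned above follows the standard VISCA framing convention: an address byte, a command payload, and a 0xFF terminator. The sketch below builds such a frame; the payload bytes shown are the widely documented generic VISCA white-balance command, used here for illustration rather than as Marshall-specific values:

```python
def visca_packet(payload: bytes, camera: int = 1) -> bytes:
    """Frame a VISCA command: 0x8n address byte (n = camera 1-7),
    the command payload, then the mandatory 0xFF terminator."""
    if not 1 <= camera <= 7:
        raise ValueError("VISCA addresses cameras 1-7")
    return bytes([0x80 | camera]) + payload + b"\xff"


# Generic VISCA payload for "white balance: auto" (01 04 35 00),
# shown as an example of the command structure.
wb_auto = visca_packet(bytes([0x01, 0x04, 0x35, 0x00]))
```

In practice such a frame would be written to the RS485 serial port at the camera’s configured baud rate, with the camera acknowledging each command in its own framed reply.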

BEST OF SHOW AT IBC 2019
MATROX
MATROX M264 S4 HARDWARE CODEC CARD
Broadcasters have come to expect their PC-based production servers to handle multiple channels in HD. Using PC-based video production equipment is highly desirable in broadcast due to the inherent flexibility of CPUs. However, CPUs alone cannot meet these workflow demands in 4K—especially with the highly efficient, but computationally expensive, H.264 mezzanine codecs (XAVC and AVC-Ultra). Designed specifically for high-density encoding/decoding (Baseline Profile to High 10 Intra Profile up to Level 5.2), M264 S4 cards provide the highest density and quality possible when it comes to 4K UHD and HDR implementations. M264 S4 can encode/decode up to four streams of Sony 4K XAVC Class 480 @60fps, 4K XAVC Long200, or Panasonic 4K AVC-Ultra in real time. The game-changing capabilities of this card are far superior to anything else currently available in the market - namely “high-end” multi-socket PCs that struggle to encode just one 4K/UHD stream. This type of density, quality, and performance from a single-slot card provides OEMs with unprecedented flexibility to deliver unique multi-channel UHD/HDR solutions—with the same ease and integration process as they do with HD-based platforms. Furthermore, broadcast organizations running a mix of HD and UHD/HDR workflows can still benefit from industry-leading HD performance, as the M264 S4 card can encode/decode up to 40 streams of AVC-Intra 100, 40 streams of HD AVC LongG25, or 64 streams of HD at 4:2:0 8-bit. With onboard multi-channel, motion-

adaptive deinterlacing and up/down/cross scaling, the M264 S4 can repurpose content into any resolution before encoding or after decoding, avoiding costly transcoding—particularly beneficial for real-time production workflows. As a result, Matrox M264 S4 cards are the ideal H.264 accelerators for 4K instant replay systems, channel-in-a-box systems, video servers, broadcast graphics systems, multiviewers, high-density transcoding systems, and other broadcast media equipment.
Technological Impact
Matrox M264 S4 can help broadcasters set themselves apart by delivering high-quality, high-resolution, and high-frame-rate content to their audiences. Matrox M264 S4 takes the hassle out of the transition from HD to UHD/HDR—a natural next step for broadcasters—by providing an instantaneous H.264 quality and density boost, and offering the pristine quality needed for broadcast contribution, production, and distribution.

Real-World Value & Applications
As one of the few ingest, playout, and replay solutions on the market—and the only one to offer fully PC-based, customizable, multi-channel 4K/HD, and multi-purpose codec support—the Matrox M264 technology is revolutionizing 4K live production workflows. Broadcasters are now equipped with a means to easily meet any encoding/decoding density requirement for the most demanding real-time production workflows, all possible from a single, cost-effective PC platform. With its superlative production and post-production technology, the Matrox M264 S4 codec card helps PC-based solutions thrive in major live production events. The Matrox M264 family of codec cards has been employed by broadcast organizations worldwide: in major professional sports leagues, for major global sporting events and competitions such as the PyeongChang 2018 Olympic Winter Games, for internationally renowned cultural celebrations such as Chinese New Year, and more.


BEST OF SHOW AT IBC 2019
MATROX
MATROX MONARCH EDGE
Encode and decode live, multi-camera video streams at resolutions up to 4K or quad HD with confidence using the powerful Matrox® Monarch EDGE encoder and decoder. These compact, robust, and low-power remote production appliances make producing live, multi-camera events more affordable than ever by keeping talent in-house. Monarch EDGE securely encodes and decodes REMote Integration (REMI) streams to your master control room with remarkably low latency. Allowing for 4:2:0 8-bit or 4:2:2 10-bit encode/decode, the Monarch EDGE encoder and decoder pairing is ideal for demanding REMI broadcast workflows, or for providing backhaul feeds to your broadcast network. Key features and benefits:
Exceptionally Low Latency
With an exceptionally low 50ms of latency between video capture and stream output, the Monarch EDGE encoder paired with the Monarch EDGE decoder achieves some of the lowest glass-to-glass latencies on the market, ensuring that live productions appear synchronized and professional.

Flexible Protocols
There are a variety of streaming protocols available to Monarch EDGE users. On closed networks, MPEG-2 TS or RTSP streams can be selected for decoding. For IP-based deliveries, or when the network is congested, SRT may be more appropriate. SRT is an open-source protocol that provides the reliability of RTMP while reducing latency, for use on open networks. SRT streams can also be encrypted if security is a concern.
Comprehensive SDI and IP Connectivity
Both the Monarch EDGE decoder and encoder offer flexible, future-proof connectivity with 3G SDI, 12G SDI, and SMPTE ST 2110 over 25 GbE network connections. Inputs are auto-detectable and allow for a wide range of connectivity to devices such as cameras, switchers, vision mixers or routers. Additionally, audio can be selected from two channels of embedded audio per video input, or from balanced XLR connectors.
Simple, Easy-to-Use Tally and Talkback
The Monarch EDGE decoder provides tally signals and talkback audio channels

to facilitate bi-directional communication between on-site camera operators and in-studio personnel. These provisions on the device help reduce the amount of equipment required on site when paired with the Monarch EDGE encoder.
Convenient, Centralized Control
Monarch EDGE Control Hub is a powerful application that provides remote management and configuration of all Monarch EDGE units on the network. This convenient software provides authorized users with high-level views of all devices on the network, and enables full access and control from a single, easy-to-use interface.
Localized Preview
Allowing up to four simultaneous input previews on a single desktop monitor, Monarch EDGE’s DisplayPort output ensures that videos are valid and ready to use. Monarch EDGE Control Hub allows users to effortlessly configure how they would like to preview the audio sources of each input. From the DisplayPort and line out, users can choose to monitor one audio input at a time, or mute all.
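SRT endpoints such as those described above are commonly addressed with a query-string URL, and encryption is enabled by supplying a passphrase and key length. The helper below sketches that convention using parameter names from the open-source libsrt tools; the function itself is hypothetical, not a Monarch EDGE API:

```python
from typing import Optional
from urllib.parse import urlencode


def srt_url(host: str, port: int, passphrase: Optional[str] = None,
            mode: str = "caller") -> str:
    """Build an SRT URL using common libsrt query parameters.

    'mode' selects caller/listener/rendezvous; supplying a passphrase
    together with 'pbkeylen' enables AES encryption of the stream.
    """
    params = {"mode": mode}
    if passphrase:
        params["passphrase"] = passphrase
        params["pbkeylen"] = 16  # AES-128; 24 or 32 select longer keys
    return f"srt://{host}:{port}?{urlencode(params)}"
```

A caller-mode encoder and a listener-mode decoder configured with the same passphrase is the typical pattern for a secure contribution feed over the open internet.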

BEST OF SHOW AT IBC 2019
MEDIAKIND
AQUILA STREAMING
With the increasing rise of live video content, particularly delivered on OTT platforms, today’s operators require cost-effective, flexible and highly compatible solutions that deliver advanced offerings (such as UHD content) with low delay and high availability. Today’s dynamic media industry demands video solutions that are able to spin channels up or down as needed and respond directly to escalating costs in live production - without compromising on quality or best-of-breed features. Operators therefore require ‘one-stop shop’ solutions that differentiate their services and quickly turn innovations into end-user services. Launched in September 2019, MediaKind’s Aquila Streaming is a convergent, cloud-based OTT and broadcast headend solution that enables content to be received, transcoded, multiplexed, packaged, encrypted and managed. It has been designed with three different business models which can be tailored to suit an operator’s existing infrastructure or digital transformation program: as an on-premise solution, as a channel as a service, or as a cloud-native offering. The solution enables tier-one and tier-two operators to easily manage and operate live convergent OTT and broadcast headends, as well as to deploy services rapidly with added scalability and without compromising on the end quality to the

consumer. In short, the solution facilitates the delivery of quicker, more affordable and seamless live content offerings. Aquila Streaming incorporates MediaKind’s award-winning technology and products, including Encoding Live media processing, offering tightly integrated workflows for the smooth delivery of seamless consumer experiences. Using cloud-native technology, Aquila Streaming ensures efficient, optimal-quality processing of assets for all screens, at any time, anywhere, addressing multiple audience needs and viewing habits. This new solution benefits from operability on MediaKind’s chosen public cloud provider, Google Cloud. This cloud-based architecture enables media operators and service providers to scale quickly, spin channels up with ease and adapt to the demands of changing consumer consumption habits and trends. Aquila Streaming enables operators to easily overcome the increasing complexity

of video solutions without compromising on quality. This solution is built to handle multi-format requirements - SD/HD/UHD - without compromise on delay to the end-user. It delivers a highly efficient, broadcast-grade service and enables innovative consumer experiences with flexible deployment and operating choices. In addition to faster time-to-market operations, customers also benefit from a scalable, unified platform for live ingest with Secure Reliable Transport, enabling them to deliver content to end users with low delay in the Common Media Application Format. This flexibility removes the need for dedicated single-purpose products, which come at a very high total cost of ownership. This includes the price of the hosting facility, the need for highly skilled resources to operate and monitor proceedings, the regular cost of updating hardware, and additional infrastructure and licensing costs. Customers can launch temporary services for events; configure their own channels while plugging in new features; access disaster recovery and enable content replacement; and have the option to deliver highly personalised advertising and time-shifted content services. It also benefits from automated hardware refresh and is supported by the deep expertise of the MediaKind services teams.

BEST OF SHOW AT IBC 2019
NET INSIGHT
NIMBRA EDGE
5G promises to put a phenomenal set of capabilities into the hands of public and private enterprises across industry verticals. The 5G ecosystem will give organizations unprecedented opportunity, flexibility and choice in the networking tools to drive their digital operations. For broadcasters and media companies, 5G deployment is correlated with an increasing need for higher-quality as well as personalized content to be distributed to end consumers. As the market for internet contribution and distribution reaches the technology and quality maturity needed to move live content over public and private cloud infrastructure, customers can move workflows to the cloud and leverage the elasticity, robustness and price benefits that cloud infrastructure brings compared to traditional broadcast and media models. As an answer to this market shift, Nimbra Edge is an elastic and interoperable cloud transport platform for professional live media contribution and distribution, designed for the modern broadcaster. It focuses on Primary Distribution and Sports and News Interconnect and offers a revolutionary approach to networking where the operational aspects typically associated with networking are automatically managed under the hood. Nimbra Edge relies on a sophisticated cloud-based orchestration engine. The system automatically scales to network demands, while simultaneously providing a real-time view of all services and resources. By utilizing an elastic, micro-services

architecture, Nimbra Edge drastically simplifies network operations by offloading otherwise complex tasks such as scalability and redundancy to an automated orchestration engine, letting customers focus on what really matters - their content. The content can be tailored to the requirements of each specific endpoint to meet the cost and quality requirements of modern media applications. Moreover, the inherent complexities normally associated with networking are aggregated and monitored in an interactive and intuitive user interface designed to focus on the core value of the task at hand - the content itself. Nimbra Edge is based on a fundamentally open architecture, enabling media companies to seamlessly exchange content

with one another without having to worry about video formats, technologies or protocols. Media companies can flexibly ingest content from anywhere in a variety of video formats, using any of the popular transport protocols such as RIST and SRT. This open and flexible platform maximizes content reach and accelerates time-to-live for new endpoints and services. Cloud-ready and deployable anywhere, Nimbra Edge can be hosted on-prem, on any of the large public cloud providers, or on a combination of the two for the ultimate flexibility between cost and reach. A selection of customers is going live with Nimbra Edge during the first half of 2019, with mass-market deployments planned for the second half of the year.

BEST OF SHOW AT IBC 2019 NEVER.NO BEE-ON It has become harder than ever to effectively engage audiences across an ever-growing number of platforms, devices and programmes. Broadcasters, content producers and brands are continuously competing for viewers, so the need to create captivating content and bridge the gap between the audience and the content itself is key to extending the lifespan of a campaign or programme and growing revenue through monetising content. Innovations within streaming and broadcasting are enabling content producers to push those boundaries and look beyond the traditional delivery limitations of linear TV production. Formerly known as STORY, Bee-On - Never.no's audience engagement platform - helps content producers enhance and personalise content with socially interactive programming, bringing a level of engagement to content not previously offered. Bee-On is a cloud-based platform that provides advertising agencies, broadcasters and brands with the tools to engage with their audiences in real time. This drives increased viewership, engagement, brand awareness and purchase intention, enhancing traditional advertising campaigns by seamlessly integrating real-time data, social media engagement, and dynamic and captivating graphics into broadcast, digital and OTT content. Bee-On has been utilised by the likes of Channel 4 and MTV in the UK, SBS Australia and CBS in the US, delivering social content - including comments, pictures and videos - into live and pre-recorded programming for Crufts(1) and Eurovision(2). It also recently

powered a social media ad campaign for Coors Light 'That's Cold'(3), broadcast on Channel 4, featuring an interactive poll that influenced the final ad content. Bee-On's easy-to-use tools manage online and socially driven competition formats, filtering results and publishing under strict voting and compliance regulations. The integrated Real-Time Social Graphics - used by MTV for Madonna's music video premiere(4) - enables content producers to moderate and display graphics outputs as a web source. The graphic overlay is rendered directly in a web browser, meaning the user's computer becomes the graphics engine, making it an effective solution for managing digital signage and on-set event displays without the need for expensive equipment. In addition to the platform's ability to work with traditional broadcast graphics engines, Never.no's solution provides an extra layer of flexibility to traditional TV broadcasters, as well as enabling those who use streaming services such as Facebook Live or YouTube. Never.no's platform was used by Grabyo and Livewire Sport for the 'Wimbledon Coffee

Morning'(5) show, multi-streamed across Facebook Live, Twitter and YouTube in line with the live broadcast of Wimbledon's official sports programming - reaching one million views. Bee-On is a fully integrated platform that powers audience engagement across traditional broadcast and digital channels. Delivering audience-generated content from social platforms, Bee-On enhances the viewer's experience with personalised advertising and programming, enabling the audience to interact with and influence the content like never before.
(1)
(2)
(3) winner-coors-light-rap-battle-will-be-decided-during-ad-break-using-live-social-media
(4) madonnamtv
(5)


The Artisto software audio engine recognises that, in a broadcast world in which IP is rapidly establishing itself as the standard, users value the enormous benefits that true interoperability brings - and Artisto is designed to bring the highest level of interoperability to any audio workflow. It will be publicly demonstrated for the first time at IBC. Artisto simplifies the automation and the integration of any workflow involving sound. It eliminates the frustrations inherent in complex hardware infrastructures, and dispenses with the need for outdated, insecure control protocols. Artisto is fully configurable and controllable via a simple, open web API. From simple monitoring and detection of audio levels for visual radio to advanced workflows for cross-platform media diffusion or automated video productions, Artisto covers every audio processing and synchronisation need. Because Artisto is modular, it can

be precisely tailored for the specific requirements of any audio application. Artisto can be flexibly configured with an extensive library of processing blocks such as routing, level measurement, equaliser, dynamics, web streaming, recorder, player, SIP/RTP transport and so on. These can be virtually wired together to build a processing pipeline for the desired workflow. It also means that expensive DSP cards in specialised racks, or the constant need to deal with dongles and converters to get one system to communicate with another, can be a thing of the past. Artisto runs on standard physical or virtual servers. Artisto also responds to user requirements for AES67, Dante or MADI inputs and outputs, for example. It is compatible with all major computer sound card vendors and is planned to support NewTek NDI out of the box.

Artisto is also the perfect audio companion for software-based video mixers such as OBS or vMix. Artisto deserves to be considered for the Best of Show Award because it is highly innovative - the first solution of its kind to be entirely virtualised and based on future-oriented web technologies - and because it solves real-world, tedious problems. By simplifying the automation of many routine, time-consuming processes, it removes the complexity of interoperability, making a huge contribution to increased productivity and reduced cost.
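The "processing blocks virtually wired together" model lends itself to a compact sketch. The `Pipeline` class, block names and wiring style below are illustrative assumptions (the real engine is configured via its web API), but they capture the idea of chaining gain and metering blocks into one workflow:

```python
# Illustrative sketch of a modular audio pipeline: processing blocks are
# plain callables wired in order. Not Artisto code; the names are invented.
class Pipeline:
    def __init__(self):
        self.blocks = []          # ordered chain of processing callables

    def wire(self, block):
        self.blocks.append(block)
        return self               # allow fluent chaining

    def process(self, samples):
        for block in self.blocks:
            samples = block(samples)
        return samples

def gain(db):
    factor = 10 ** (db / 20)      # convert dB to a linear factor
    return lambda samples: [s * factor for s in samples]

peak = {"value": 0.0}
def level_meter(samples):
    peak["value"] = max(abs(s) for s in samples)   # measure, pass through
    return samples

pipe = Pipeline().wire(gain(-6.0)).wire(level_meter)
out = pipe.process([0.5, -0.8, 0.25])
```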

BEST OF SHOW AT IBC 2019 OWC OWC THUNDERBLADE™ With time and performance critical to creative productions, the super-fast ThunderBlade Gen 2 external SSD offers data transfer speeds of up to 2800MB/s, capacities up to 8TB, dual Thunderbolt 3 ports to daisy-chain up to five additional ThunderBlades (for up to 48TB of capacity), rugged portability, and nearly silent operation - on set or in the editing room. The newest ThunderBlade is stackable and runs cooler than the previous iteration, while still maintaining a near-silent environment with its fanless operation - all factors that can be critical for an operation on the go or in a tight space. With dual Thunderbolt 3 ports users can daisy-chain up to five additional ThunderBlades, and by connecting more than one drive with SoftRAID, speeds can be increased up to 3800MB/s. ThunderBlade Gen 2 runs up to 32%

cooler during peak performance, and transfer times have been cut by almost 20%, with the unit able to complete a 1TB content transfer in under four and a half minutes. The ThunderBlade is currently in use at a number of production houses and by creative teams across the globe, with great success. With the fastest transfer speeds OWC has ever produced, creative professionals and prosumers can

count on ThunderBlade to help them get their dailies daily, providing safe creative content management and huge time and money savings in production budgets. The ThunderBlade is a natural fit in the OWC line of products. Across the board, from production-grade SSDs and external hard drives to expansion products and enterprise storage, OWC strives to deliver perfectly tailored workflow solutions for every creative project. In everything we do at OWC, we believe in making a better world where technology inspires imagination and everything is possible. For a demo of the ThunderBlade and to experience the speed to create, please stop by stand 5.C10 and see for yourself.
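The transfer-time claim above can be sanity-checked with simple arithmetic (taking 1 TB as 1,000,000 MB): at the quoted SoftRAID speed of 3800MB/s, a 1TB copy does land just under four and a half minutes, while a single drive at 2800MB/s takes about six:

```python
# Back-of-the-envelope check of the quoted ThunderBlade transfer figures.
def transfer_minutes(size_mb, speed_mb_s):
    return size_mb / speed_mb_s / 60

single = transfer_minutes(1_000_000, 2800)   # one ThunderBlade
raided = transfer_minutes(1_000_000, 3800)   # SoftRAID across two drives
print(f"single: {single:.1f} min, RAID: {raided:.1f} min")
```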

BEST OF SHOW AT IBC 2019 PROVYS PROVYS VOD BRIDGE Every broadcaster today has to be aware of new market opportunities and business models, which require modern and future-proof workflows. From simple web content presentation to sophisticated paid VOD services, the labour costs and related effort demand substantial efficiencies in order to improve the bottom line. PROVYS, experts in the field of linear broadcasting management, have developed a ground-breaking solution called VOD BRIDGE which fully automates the workflows associated with launching and operating multiple non-linear platforms. Continuously increasing market demand for more and more content, coupled with the ever-growing costs of human labour, broadcast distribution and the management of complicated rights, is leading broadcasters to depend more heavily on non-linear services. But these do not come free of charge, carrying with them their own labour and operational costs which frequently spiral out of control, resulting in inadequate net revenues. The time is clearly right for a software solution which takes all the advantages from the world of linear broadcasting and transfers them automatically to non-linear platforms. Such features include, for example, the use of rich metadata, full utilisation of rights, catch-up and time-shift options, and compliance with geographical restrictions. PROVYS are pleased to announce they

have recently developed and perfected such a software solution, which has now been tried and tested, with great success, in several installations in Europe. The principle behind the software is that it uses the information available from the linear schedules, valid rights and content libraries, and automatically generates a wide range of offers on multiple non-linear platforms. In many broadcasting stations, whole teams are still required, at great cost, for these operations. PROVYS VOD BRIDGE is estimated to save 30% on personnel costs by reducing the manual labour requirement. Additionally, the time required for new VOD platform on-boarding is estimated to be reduced by 90%, delivering a massive benefit in time and cost efficiencies.

To the great advantage of broadcasters, PROVYS VOD BRIDGE can be integrated with industry-standard MAM solutions. In combination with access to the rights management database, VOD BRIDGE spans all the different non-linear services a broadcaster offers. Another great advantage of the solution is financial management of the value of non-linear content, including different amortisation strategies, cash flow management, predictions and multi-currency support. VOD BRIDGE supports many other advanced functions for non-linear planning, such as a page planner, cross-platform promotion and mini-playlists.
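The principle described above - deriving non-linear offers from linear schedules, rights and content data - can be sketched in a few lines. The data shapes and field names below are invented for illustration; they are not PROVYS's actual schema:

```python
from datetime import date

# Illustrative sketch (not PROVYS code): cross-reference the linear schedule
# with the rights database and emit catch-up offers only where a valid
# non-linear right exists and the geographical and window checks pass.
schedule = [
    {"asset": "news-0901", "aired": date(2019, 9, 1), "territory": "CZ"},
    {"asset": "film-4411", "aired": date(2019, 9, 1), "territory": "CZ"},
]
rights = {
    "news-0901": {"vod": True, "territories": {"CZ", "SK"}, "catchup_days": 7},
    "film-4411": {"vod": False, "territories": {"CZ"}, "catchup_days": 0},
}

def vod_offers(schedule, rights, today):
    offers = []
    for slot in schedule:
        r = rights.get(slot["asset"])
        if not r or not r["vod"]:
            continue                                  # no non-linear right
        if slot["territory"] not in r["territories"]:
            continue                                  # geographical restriction
        if (today - slot["aired"]).days > r["catchup_days"]:
            continue                                  # catch-up window closed
        offers.append({"asset": slot["asset"], "window_days": r["catchup_days"]})
    return offers

offers = vod_offers(schedule, rights, today=date(2019, 9, 3))
```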

BEST OF SHOW AT IBC 2019 QUMULO CLOUDSTUDIO CloudStudio: end-to-end production in the cloud accelerates productivity and brings remote talent closer together. Editing a film is essentially an art of what can't be seen. It's hard enough to stitch together a beautiful visual story. But editing under pressure is four times harder! Imagine a creative person editing a clip against the clock. What if they are asked to add in some last-minute extra graphics, or a special sound design that will really lift the piece, but they are not in the office? The issue: legacy solutions require being in the office, local to the content, to do visual effects, and often don't have the performance scalability needed for a quick-turnaround render job. Qumulo CloudStudio solves these issues, revolutionizing the production workflow. CloudStudio allows design talent to work on projects from anywhere in the world with uncompromising speed, flexibility and security, letting artists focus on their creative ideas: cut by cut, choice by choice, on the path to a coherent, impactful result. No longer are restrictions on proximity to the studio or other artists a limiting factor, nor do users need to worry about not having enough performance or capacity to get the job done. Customers can deploy CloudStudio in Amazon Web Services (AWS) or Google Cloud Platform (GCP) in seconds, allowing artists to connect from the hardware of their choice while experiencing seamless 30+ frames per second (FPS)

video playback. CloudStudio gives administrators peace of mind about security, as all assets are securely stored in Qumulo’s distributed file system where data access can be seen, managed, and controlled in real-time. Because it allows film and animation studios to burst rendering and other production workflows to the cloud, CloudStudio frees them from “big bet” capital investments in advance of potential revenue growth. Cloud-based infrastructure such as CloudStudio allows users to avoid “boom-or-bust” cycles in their business by growing and shrinking production capacity to match the exact needs of a project. It can also be run and billed on a per-minute basis. Facing super-tight deadlines has never been easier. Qumulo CloudStudio enables users to render workloads entirely in the cloud. It’s easy to spin up thousands of

CPU and GPU instances in AWS or GCP, and turn render time into an adjustable variable. Qumulo CloudStudio works seamlessly with many of the major software tools used in post-production today – Adobe Premiere, After Effects, Cinema 4D, 3ds Max, Nuke, Maya, and others. Qumulo’s file storage also supports NFS and SMB protocols, allowing artists to use their preferred applications, and focus on what they do best – producing, editing and delivering great content. CloudStudio in AWS also enables studios to more easily collaborate across geographies. With Teradici’s remote PCoIP workstations providing low latency across geographically-dispersed artistic teams, and Qumulo’s high-speed distributed file system, real-time global collaboration on a project becomes a practical reality.
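Treating render time as an adjustable variable, as described above, is ultimately a sizing exercise. This hypothetical back-of-the-envelope helper (the per-minute rate is made up, not a Qumulo price) shows how a deadline translates into an instance count and a per-minute-billed cost:

```python
import math

# Hypothetical burst-render sizing sketch: given a frame count and per-frame
# render time, how many parallel cloud instances meet a deadline, and what
# does per-minute billing cost? The rate below is an invented example figure.
def burst_plan(frames, sec_per_frame, deadline_min, rate_per_instance_min=0.05):
    total_min = frames * sec_per_frame / 60           # total instance-minutes
    instances = math.ceil(total_min / deadline_min)   # parallel workers needed
    cost = total_min * rate_per_instance_min          # billed per instance-minute
    return instances, round(cost, 2)

instances, cost = burst_plan(frames=10_000, sec_per_frame=90, deadline_min=120)
```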

BEST OF SHOW AT IBC 2019 QVEST MEDIA QVEST.CLOUD Qvest.Cloud is the first multi-cloud management platform built specifically for the media industry. Qvest.Cloud converges Qvest Media's excellence in media systems integration into an easily accessible cloud management platform, enabling seamless integration of best-of-breed services for the media industry, with workload orchestration across on-premises, hybrid or multi-cloud infrastructures, powered by business intelligence. It addresses the particular needs of media and broadcast clients when moving business processes into the cloud, and sets up any media company for the integration and operation of media services and third-party applications using modern web technologies, while breaking down the barriers between installations in local data centres, private clouds and public clouds. Based on an open interface design, third-party IT and media applications can be easily aggregated in the back end and operated in the front end via one single portal, eliminating the need to manage separate user accounts for every individual application. Integral components of Qvest.Cloud, which has been designed from scratch as a cloud-native application, are cross-application functions such as workflow orchestration, cloud automation, user management with single sign-on, access management, monitoring and measuring, cost control and comprehensive IT security. Qvest.Cloud is available in two versions: Qvest.Cloud Ultimate and Qvest.

Cloud Go!. Qvest.Cloud Ultimate is Qvest Media's enterprise architecture solution and the key to complete system integration in the cloud. Media groups and broadcasters can implement their end-to-end workflows and system architectures through Qvest.Cloud Ultimate. Providing the perfect management platform for fully integrated multi-cloud architectures, Qvest.Cloud Ultimate is the center of a constantly growing ecosystem of cloud solutions from major vendors and technology leaders. Qvest.Cloud Ultimate controls and monitors these third-party applications via connectors which are part of an Integration Package. While Qvest Media will provide more and more pre-built workflow adaptors for well-known media applications over time, custom connectors can be added on a project basis. The

integrated partner applications can be browsed in the Qvest.Cloud portal's application catalogue and are provisioned automatically with a few mouse clicks. Qvest Media bundles best-of-breed partner solutions into comprehensive packages which address clearly defined business needs, using the foundation of Qvest.Cloud Ultimate and its expertise in systems integration. The Qvest.Cloud Go! packages are available as a SaaS model that allows flexible use and scaling. The initial five ready-to-use packages are Q.Live for live event production, Q.Create for easy and scalable post-production, Q.Archive for archiving and managing metadata and content, Q.Safe as a toolbox for disaster recovery securing linear playout in the cloud, and Q.Air for content distribution of pop-up and online channels.

BEST OF SHOW AT IBC 2019 ROHDE & SCHWARZ GMBH & CO. KG R&S BSCC BROADCAST SERVICE AND CONTROL CENTER The Rohde & Schwarz solution for the first complete 5G Broadcast transmission consists of the core network functionality provided by the new R&S BSCC broadcast service and control center and R&S Tx9 transmitters, which support FeMBMS. This all-in-one solution covers all entities from the content provider to the consumer and is fully compliant with 3GPP Release 14. It enables broadcasters to contribute their valuable assets for efficient distribution of high-quality video to the future 5G ecosystem. In recent years, media consumption has shifted significantly toward smartphones and other portable devices. Reaching the billions of smartphones around the world will be the future of broadcasting. 3GPP Release 14 enables this future through FeMBMS (Further evolved Multimedia Broadcast Multicast Service). As part of the Bavarian research project 5G TODAY, Rohde & Schwarz is investigating large-scale TV broadcasts in FeMBMS mode. Using high-power high-tower (HPHT) transmitters allows broadcasters to distribute video over 5G networks in downlink-only mode with all the advantages of classic broadcasting. This

provides the high quality levels known from HDTV broadcasting, low-latency live content as well as enormous spectrum efficiency and wide coverage. There is no need for a SIM card in the mobile device. To help utilize all these benefits, Rohde & Schwarz is the first technology vendor worldwide to offer a complete 5G

Broadcast solution for network operators. This solution covers all entities from the content provider to consumers. R&S Tx9 transmitters provide FeMBMS on a unique, purely software-based signal processing platform, which gives network operators great flexibility and ideally prepares them for future signal processing requirements. At IBC, Rohde & Schwarz is launching its new 5G Broadcast core network element, the R&S BSCC broadcast service and control center, which enables Rohde & Schwarz terrestrial transmitters to deliver LTE/5G Broadcast content. This element provides core network functionality so that network operators can distribute content over LTE/5G Broadcast. The main benefit of the R&S BSCC is that it reduces the complexity of the 3GPP core network entities to meet broadcasters' needs for a straightforward LTE/5G Broadcast experience. The overall Rohde & Schwarz solution makes it very easy to configure network parameters. Multiple Rohde & Schwarz FeMBMS transmission sites can be centrally configured in the R&S BSCC core network element.


SkyDolly utilizes advanced Ross Furio Motion Camera Systems technology to unleash the creative potential of ceiling-mounted cameras. Ceiling-mounted cameras can now be used as primary cameras, contributing smooth, fast, accurate on-air-quality shots and movements suitable for live, virtual and augmented reality productions. SkyDolly's Ross Furio roots ensure that it delivers smooth, stable performance - even while carrying a full payload, including a full-sized camera, lens, teleprompter and other accessories. This allows Furio SkyDolly to deliver much more than just beauty shots, also serving as the primary camera in a production, providing maximum flexibility and practicality.

The Furio Dolly has been completely redesigned to ensure that the SkyDolly provides the most stable platform possible. A three-wheeled base (TriDolly) eliminates the possibility of instability or loss of traction resulting from an imperfectly levelled track. The wheel base has been lengthened by 60% and the track widened by nearly 40%, virtually eliminating the possibility of the dolly rocking on the tracks. Unlike most other "hanging" dolly systems, the Furio SkyDolly does not actually hang below the rails. The entire weight of the dolly sits above the rails, keeping the overall center of mass as close to the rails as possible - minimizing the pendulum effect that creates unwanted swaying in other systems.

The rail frame supports up to five times the maximum payload without flexing. Low-noise cable trolleys mounted to inverted safety rails allow SkyDolly to be used on-air without disturbing talent or the audience. Absolute encoders on the track axis avoid the homing or re-targeting required by other hanging rail systems and ensure reliable, accurate camera tracking for virtual set and augmented reality productions. Cameras can be tethered to the head and heads can be tethered to the dolly, ensuring the utmost safety for staff and equipment. SkyDolly replaces cranes and jibs for primary cameras, removing set design restrictions and improving the production/audience experience.


Ross Ultritouch is a touch-based, adaptable system control panel for innovative Ross and third-party applications. Ultritouch adds a powerful new control option to Ross Video's award-winning Ultrix connectivity platform, and, thanks to its integration of DashBoard, it also controls a host of other Ross equipment and third-party signal processing products - a full system control solution limited only by your imagination! Ultritouch is a forward-thinking, powerful system control panel from Ross Video. The panel is a 2RU rack-mountable touchscreen that combines traditional functionality with workflows adapted for users fluent with smartphones. Ultritouch not only comes with an array of fully integrated, configurable control templates with paged operations and intuitive control workflows, but also allows users to develop fully customizable virtual hard-panel solutions for any application across any equipment, using DashBoard custom panel support and extensive interoperability. The Smart Touch feature integrates traditional functionality with modern workflows, allowing you to build your panel around what you want to do and how you like to do it, not based on the number of buttons you have.

BEST OF SHOW AT IBC 2019 SCALITY ZENKO Scality's Zenko provides a unified interface to manage, orchestrate and search data across on-premises storage and public clouds. Zenko allows application developers to write against a single API while supporting any type of storage. Through customizable metadata search and an extensible workflow engine, Zenko enables media companies to create sophisticated media data workflows, from creation to asset management to AI and machine learning to archive storage. Zenko provides: •Lower cost of creation, processing, distribution and archival by intelligently placing data across clouds, resulting in reduced cloud egress and network costs. •Integration with external systems for asset movement, tagging and workflow, enabling the integration of on-premises commercial or custom applications and cloud services for fully automated, end-to-end data workflows. •Support for any cloud for storage and/or value-added services, spanning on-premises and cloud services for the widest choice. Zenko provides a number of technological advances to meet today's media workflow requirements: •Enterprise and open-source editions, with an open-source community of over 1,000 members. •A unified interface across disparate storage locations via an Amazon S3-compatible API. •Data stored in the open, readable format of

each storage location, so it can be accessed by any cloud service. •Global metadata creation and search, supporting both system and user-defined metadata. •Extension of the Amazon S3 CRR functionality from 1-to-1 to 1-to-many, enabling replication of data to multiple locations at the same time, which reduces cloud costs and increases application performance. Zenko can be operated either programmatically (via APIs) or through a graphical interface. Applications and users interact with storage locations and cloud services through Zenko, and Zenko handles the workflows and complicated interface translations automatically. For example, Zenko can enable an application to copy media files from on-premises storage to Azure Video Indexer, which can automatically generate advanced metadata through AI and then return that metadata to Zenko, where it can be leveraged for intelligent content distribution, search, targeted advertising and other value-added services. The Internet, mobile devices, public

cloud, and social media have transformed the way media companies create, distribute and monetize content. Media companies must contend with the challenges of distributing content to a global, on-demand customer base; with increased expectations for creation and delivery speed; and with competition from non-traditional media companies. To address these challenges, media companies are combining on-premises infrastructure with cloud services to enable sophisticated data workflows. Building on the use of cloud storage as a long-term archive, media companies are now reaping additional value from cloud services that enable active workflows such as encoding, transcoding, indexing, content distribution and team collaboration. Zenko orchestrates data seamlessly between on-premises media solutions and cloud media services. Zenko enables the combination of legacy systems with advanced cloud services to create content faster and more efficiently, distribute content globally at lower cost, and protect content both on-premises and in the public cloud.
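The user-defined metadata search described above can be illustrated with a toy, in-memory stand-in for Zenko's unified namespace; in the real system the objects sit behind the S3-compatible API and queries go through Zenko's search interface, so everything here is a simplified sketch:

```python
# Local sketch of Zenko-style user-defined metadata search. A plain dict
# stands in for the unified namespace; keys are object paths, values are
# user metadata. Object names and fields are invented for illustration.
objects = {
    "clips/match-final.mxf":  {"codec": "prores", "reviewed": "true"},
    "clips/interview-03.mxf": {"codec": "prores", "reviewed": "false"},
    "masters/promo-en.mov":   {"codec": "h264",   "reviewed": "true"},
}

def search(objects, **criteria):
    """Return keys whose user metadata matches every criterion."""
    return sorted(
        key for key, meta in objects.items()
        if all(meta.get(k) == v for k, v in criteria.items())
    )

hits = search(objects, codec="prores", reviewed="true")
```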

BEST OF SHOW AT IBC 2019 SIGNIANT JET The Media & Entertainment industry is an inherently global and collaborative space. Today, media companies are exchanging larger files over longer distances than ever before, both between their own locations and with partners around the world. Signiant is dedicated to improving the performance and security of the global media supply chain, and Signiant Jet is the company's latest advancement. Jet is a cloud-native SaaS solution for automated high-speed transfers of large data sets between locations around the globe. Built on Signiant's innovative SDCX SaaS platform, Jet offers powerful, enterprise-grade capabilities to any size business that regularly moves data between its own locations or with partners, customers and suppliers. Signiant Jet provides a number of key benefits to media companies, including: •Speed: Jet employs Signiant's fastest transport technology yet and is capable of multi-Gbps transport speeds. •Ease: Within Jet's intuitive visual interface, administrators can easily deploy and monitor transfers and configure alerts. Customers can be up and running in a day. •No Limits: Jet can handle any size file quickly, and never imposes limits on the amount of data transferred or constraints on bandwidth use.

Signiant is a recognized security leader across the M&E industry, and has earned the DPP 'Committed to Security' mark. •Advanced Next-Generation Transport: Jet employs a new intelligent transport mechanism that adapts to network conditions to provide optimal throughput in every real-life situation. •Checkpoint Restart: Any transfers that are interrupted are automatically restarted from the point of failure, which is critical for automated, large file transfers. •Visibility: In the fast-paced, security-conscious world of media, it's important to have a reliable record of when and where files were transferred. Jet provides customers with easy access to this data. •Inter-Company Transfers: Companies that regularly exchange data with partners will benefit from Jet's simple, secure handshake mechanism, in which two parties agree to send and receive files from one another. •Security: Following defense-in-depth design principles, Jet incorporates multiple layers of security controls. •SRE Team: The Jet solution is built on Signiant's multi-tenant, auto-scaling, load-balanced cloud control layer, managed 24/7 by the company's professional Site Reliability Engineering (SRE) team. •Simple Pricing for Every Size Company: Jet's SaaS implementation scales to the needs and budget of every size media company. Jet's simple pricing model is based on the number of locations that need to exchange data, without limits on file size, amount of data transferred or users. Jet's lights-out automated file transfers bridge the gap between time zones and locations, allowing fast and reliable file transfers across any distance and easing the pressure of today's fast-paced content creation and distribution cycle. Jet replaces FTP scripts, rsync and other legacy tools, offering the industry's highest security and performance standards, including Signiant's fastest transport mechanism yet. Jet's unique model brings advanced acceleration and automation to any size business, furthering Signiant's mission to connect the global media supply chain.
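Checkpoint restart, as described in the bullets above, is a general technique worth a brief sketch: persist progress as chunks complete, so a retry resumes from the last durable offset instead of byte zero. This illustrates the concept only, not Signiant's implementation:

```python
# Generic checkpoint-restart sketch: record the byte offset as each chunk
# completes, so an interrupted transfer resumes from the point of failure.
def transfer(data, dest, checkpoint, chunk=4, fail_after=None):
    offset = checkpoint.get("offset", 0)        # resume from last checkpoint
    sent = 0
    while offset < len(data):
        if fail_after is not None and sent >= fail_after:
            raise ConnectionError("link dropped")  # simulated interruption
        dest.extend(data[offset:offset + chunk])
        offset += chunk
        sent += 1
        checkpoint["offset"] = offset           # durable progress marker

data = list(range(10))
dest, ckpt = [], {}
try:
    transfer(data, dest, ckpt, fail_after=2)    # first attempt fails mid-way
except ConnectionError:
    pass
transfer(data, dest, ckpt)                      # restart resumes at the checkpoint
```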

BEST OF SHOW AT IBC 2019 SIMPLESTREAM CLOUD TV Simplestream's Cloud TV Platform is an end-to-end, white-label solution that enables the rapid deployment of next-generation TV Everywhere services across multiple screens and territories - including support for unlimited live channels, automated catch-up with a backwards-scrolling EPG, and comprehensive user and entitlement management built in. Our recent work with Nova in Iceland demonstrates the power of our solution. As the leading telecommunications operator in the region, Nova wanted an OTT partner who could offer a world-class end-to-end streaming service, with all the features and functionality of a high-quality TV experience, to its growing customer base - making available local free channels, with the option of additional pay-TV channels payable via the mobile contract, as a way to attract new customers and retain existing ones. Functionality includes over 40 live channels, with catch-up available immediately after transmission, a sophisticated reverse EPG, a quick channel changer supporting free and pay options, plus pay-per-view for live events.

Specific features:
Project delivery time - 12 weeks.
Channel capture - feed ingest and satellite downlink from ThorV, plus encoding on premises at playout in Reykjavik.
Live streaming - unlimited channel capacity with support for DRM.
Automated catch-up creation - 28-day catch-up for selected channels.
EPG - sophisticated backwards-forwards scrolling EPG with fast channel change.
Content rights management - AES-128 encryption for free-to-air and studio-approved DRM for premium channels.
TV-like viewing experience - industry-leading UX/UI and design frameworks across desktop, mobile, tvOS, Android TV and Amazon Fire.
Comprehensive user and entitlement management - free local channels with thematic packaging to cater for a wide range of customers.
Future-facing, continually evolving roadmap.

Nova TV features a wide array of over 40 Icelandic free-to-air and subscription channels including RUV, RUV2, Sjonvarp Simans, Stod 2, Stod 2 Sport, Tonlist, Hringbraut and N4, with automated catch-up functionality up to 28 days, so viewers can access their favourite programming at a time of their choosing. The service is currently available on Apple TV, iOS and Android mobile devices and on the web. Following the success of the launch, there are plans to add more channels and make them available across additional devices.

BEST OF SHOW AT IBC 2019 SPECTRA LOGIC CORPORATION SPECTRA® RIOBROKER Spectra® RioBroker is tightly integrated software that brings efficiency and agility to a transforming market, helping users harness the full value of their content. It front-ends Spectra’s BlackPearl® Converged Storage System, an object-based platform that moves media assets seamlessly and economically onto disk, tape and cloud. Acting as a data mover to speed file transfers, Spectra RioBroker streamlines workflows and facilitates scaling out BlackPearl to accommodate growing volumes of digital assets – innovating media storage through greater performance, parallelism, scalability and ease of implementation. The advancement of BlackPearl via Spectra RioBroker offers a simplified RESTful API that reduces the effort to support, integrate and certify a comprehensive range of applications. By providing Partial File Recall and FTP capabilities, Spectra RioBroker enables easier client development for partners, accelerating the integration process. It also decouples BlackPearl API code changes from the applications, eliminating the need for customers to recertify as enhancements are released. Spectra RioBroker also enables clustered and scale-out access to BlackPearl, offering higher, consistent performance across applications and environments. It offloads data transfer jobs from the application and load-balances between nodes, so that further performance and capacity are achieved by simply adding servers and BlackPearl nodes. Moreover, Spectra RioBroker maintains its own job queue and status information, presenting a single namespace for search and restore across all storage types within multiple BlackPearl systems and locations.
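The broker pattern described above – a data mover that owns the job queue and spreads transfers across storage nodes instead of making each client move data itself – can be sketched in a few lines. This is a simplified illustration of the general pattern; the class, node names and job shape are invented for this example and are not the actual RioBroker API.

```python
from collections import deque
from itertools import cycle

class BrokerSketch:
    """Minimal sketch of a broker-style data mover: it keeps its own job
    queue and load-balances transfers round-robin across storage nodes."""

    def __init__(self, nodes):
        self._nodes = cycle(nodes)   # rotate across BlackPearl-style nodes
        self._queue = deque()        # broker-owned job queue
        self.completed = []          # (file, node) pairs once "transferred"

    def submit(self, file_uri):
        """Clients queue a transfer job rather than moving data themselves."""
        self._queue.append(file_uri)

    def drain(self):
        """Dispatch each queued job to the next node in rotation."""
        while self._queue:
            uri = self._queue.popleft()
            node = next(self._nodes)
            self.completed.append((uri, node))  # stand-in for a real transfer

broker = BrokerSketch(["node-a", "node-b"])
for clip in ["clip1.mxf", "clip2.mxf", "clip3.mxf"]:
    broker.submit(clip)
broker.drain()
print(broker.completed)
# [('clip1.mxf', 'node-a'), ('clip2.mxf', 'node-b'), ('clip3.mxf', 'node-a')]
```

Because the broker owns the queue and the node rotation, adding capacity is just a matter of adding entries to the node list – which mirrors the scale-out claim above.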

The solution is suited to environments that require resiliency and consistent uptime, as failover in its multi-node cluster configuration is seamless and allows for redundancy without extra overhead. Finally, Spectra RioBroker’s migration toolkit provides the tools to non-intrusively migrate assets, including metadata, from legacy middleware and asset management systems. The solution eases and automates migration from older, often proprietary systems – supporting the conversion of assets into open standard formats and reconstituting them on other storage media (tape, disk or cloud) with a simple change of storage policies. During this process, Spectra RioBroker automatically redirects restore requests to the legacy software to grant users immediate access to content that has not yet been migrated. These capabilities offer non-disruptive, automated or semi-automated background migration of assets to a hybrid, open and modern storage platform that guarantees

perpetual access and is adaptable to market changes – all while the system remains operational and in production. Content is the lifeblood of media and entertainment organizations, and it is of the utmost importance for those assets to be protected and preserved. Spectra RioBroker should win this award because it significantly improves media management workflows and the way users leverage their storage architecture, innovating content management and storage at large. Its modern approach to media lifecycle management includes a multi-tiered, policy-based object storage platform that can easily adapt to changing workflows and seamlessly support new storage media as they enter the market. Spectra RioBroker, in combination with BlackPearl, transforms how users store, access and share media assets, enabling greater workflow efficiencies and customization, increased cost savings, and superior content protection across multiple storage targets.

BEST OF SHOW AT IBC 2019 TELEMETRICS, INC. PT-CP-S5 COMPACT ROBOTIC PAN/TILT HEAD As production studios continue to streamline their operations with device automation and robotic camera systems, they are looking for the most cost-effective way to accomplish this. Increasingly, this has included the use of compact 4K/HD cameras that offer high performance at a low price – in contrast to the variety of standard PTZ cameras that are now widely available but not quite up to the challenge of daily production use. To address this new demand, Telemetrics has introduced the PT-CP-S5 Compact Robotic Pan/Tilt Head, which overcomes the feature, performance and robustness limitations of standard PTZ systems. This helps customers lower the cost of highly precise camera robotics systems while giving them the freedom to choose their preferred camera and lens, or camcorder. There’s nothing like it on the market today, and it’s all from a trusted name in Telemetrics. More fully featured than any PTZ system in its class, the new PT-CP-S5 is specifically designed to support the new generation of smaller, high-performance professional 4K/HD cameras (and lenses) weighing up to 15 pounds. It enables users to mount the latest

compact 4K/HD camera plus lens of their choice from such leading manufacturers as Blackmagic Design, Canon, Fuji, JVC, Panasonic and Sony. Like the other products in its pan/tilt portfolio, the new Telemetrics PT-CP-S5 features high-quality robotic servo controls that leverage robust and reliable mechanical motors to deliver fast, ultra-high positional and velocity accuracy for smooth, steady, consistent on-air-capable motion. This also makes the PT-CP-S5 the perfect choice for use with augmented reality or virtual systems.

Other features of the PT-CP-S5 include PoE (Power over Ethernet) inputs, full servo flexibility, and a web-based GUI that makes it easy to operate remotely. The system also offers multiple configuration choices to accommodate a broader range of applications with space limitations. The processor unit can be mounted on the cradle, below the head, or detached completely from the PT-CP-S5 to minimize the unit’s physical footprint. “The PT-CP-S5 answers the call for lower-cost robotic studio operations without compromising the high quality or operational efficiency found in our entire line of pan/tilt heads,” said Michael Cuomo, Vice President of Telemetrics. “We’ve identified a whole new generation of production operations looking for value and the ability to pick their camera of choice. We think this will be a big hit with broadcasters and others around the world cost-effectively producing video with the latest compact 4K/HD cameras.” The PT-CP-S5 system’s robotic servo controls and rugged design accommodate TV studios, government, and large-room surveillance while ensuring high reliability and smooth, accurate camera performance for a wide range of IP-based remote production control applications.

BEST OF SHOW AT IBC 2019 TEKTRONIX PRISM Tektronix Video’s PRISM is the industry’s first hybrid IP/SDI media processing platform, bridging the gap between traditional SDI and IP media networks. PRISM incorporates SMPTE 2022-6 and SMPTE 2110, supporting video, audio and data networks for live IP production. Originally developed to work in 10GE networks, a new mixed-media card enables support of 25GE networks. This approach means that existing products can be easily upgraded to become 25GE-capable. The addition of the 25GE interface within PRISM provides support for a variety of media networks from SD to HD, 3G and 4K, enabling the transmission of the new generation of super-high-resolution 4K/UHD video and audio signals. This 25GE capability allows users to move these 4K signals, as well as HD, 3G and even SD video, audio and data, through the same media network infrastructure, and enables PRISM to monitor streams within the higher-bandwidth segments of the data network. Typically, a media network originates in the HD domain with a 10GE infrastructure for the various ports in its configuration, behind which sits either a 40GE or a 100GE network in the spine of the media architecture. By moving towards these higher data rates of 25GE today and 100GE in the future, we can look at the flows within the network media infrastructure and monitor from the edge to within the fabric of the network. With this advance, PRISM transforms from an edge device monitoring 10GE sites to a device deployed centrally within

the media infrastructure, looking at these higher-data-rate media streams. Even as PRISM has evolved, the traditional monitoring of audio and video levels can still be performed by operators, and the wide variety of measurement displays gives engineers the ability to troubleshoot issues within IP and SDI networks. During the transition from SDI to IP media networks there is a need to monitor traditional SDI signals with tools that are familiar in that space. These tools should allow monitoring of the network by IP engineers and SDI video engineers alike as they make the transition to an IP media network. PRISM provides the familiarity of established, reputable industry tools, being able to decode and view the signals and

monitor them whilst providing additional syntax analysis and data stream analysis, helping users troubleshoot problems within the network quickly and easily. To highlight a few points: We offer the 25GE option in both of our PRISM form factors – the 3RU 9” full-touchscreen MPI2-25 as well as the 1RU rasterizer MPX2-25. So even if users need only 10GE for HD broadcasting at this stage, they are ready to start 4K broadcasting with PRISM whenever they want. The proven IP measurements we have offered through the 10GE models are also available with the 25GE model. All HDR/WCG features, such as Stop display, CIE chart and false color, can be used for 4K content creation delivered over 25GE.
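To see why 25GE matters for UHD flows, a back-of-envelope calculation helps. The sketch below counts only active uncompressed video at 4:2:2 10-bit (20 bits per pixel), ignoring audio, ancillary data and RTP/IP overhead, so real per-link capacity is somewhat lower; the frame rates chosen are illustrative.

```python
def essence_gbps(width, height, fps, bits_per_pixel=20):
    """Active-picture bitrate in Gb/s; 20 bpp corresponds to 4:2:2 10-bit."""
    return width * height * fps * bits_per_pixel / 1e9

hd = essence_gbps(1920, 1080, 50)    # ~2.07 Gb/s per HD 1080p50 essence
uhd = essence_gbps(3840, 2160, 50)   # ~8.29 Gb/s per UHD 2160p50 essence

for link in (10, 25):
    # floor-divide link capacity by per-stream rate to count whole streams
    print(f"{link}GE: {int(link // hd)} x HD50 or {int(link // uhd)} x UHD50")
```

On these assumptions, a 10GE edge port carries only a single uncompressed UHD essence, while a 25GE port carries three with headroom – which is the practical motivation for moving monitoring points to 25GE and beyond.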


TSL Products’ new SAM-Q-SDI audio monitor can now monitor 128 MADI channels by way of an optional software license. The SAM-Q-SDI is the first audio monitor to realise the benefits of TSL’s new SAM-Q audio platform, designed to provide a new approach to audio monitoring. SAM-Q users, regardless of their audio knowledge and experience, can finally choose how they wish to control and visualize their audio content based on application, environment or personal preference. Key features that differentiate the SAM-Q platform from traditional audio monitoring solutions on the market include customizable system behaviour, system settings protected by personal PIN code, and expandable capabilities with optional software licenses. Designed for customers operating with SDI infrastructures, the SAM-Q-SDI demonstrates exactly how customers can maximize operational efficiency and reduce operational error by simply choosing their preferred method of

interaction. With no extra hardware required, SAM-Q-SDI customers can add MADI functionality at any time, enabling SDI, AES, analogue and MADI sources to be monitored and mixed simultaneously. SAM-Q’s agile approach to audio monitoring, coupled with the ability to add extra licensed functionality, has been designed to address customers’ changing technical requirements and help future-proof their investment. Also new to the SAM-Q line is the addition of new modes of operation that go beyond the operator-focused modes with which the platform initially launched. The SAM-Q is the first audio monitoring platform of its kind with the ability to expand its functionality based on the requirements and preferences of the user. These new modes are especially beneficial for sound engineers who need to accurately measure and guarantee audio delivery. These new modes will include audio phase measurement, audio peak value and

peak latch monitoring, and the ability to perform loudness measurement on multiple program sources. This all serves to make the SAM-Q the ‘go-to’ tool for deeper-level audio measurement. Audio phase and peak value modes will be available as free-of-charge software upgrades, with customers able to purchase the loudness probe modes as a licensed option. The SAM-Q platform is the only audio monitor that can be configured specifically to address the needs of different applications, skill sets and workflows. Using a PIN code, engineers and supervisors can restrict specific audio sources, operational modes and front-panel control functions to help speed up operation, reduce user error, and meet the personal preferences of operational staff. The ability to add further operational and engineering modes to the SAM-Q provides customers with a flexible and powerful audio monitoring platform whose feature set can grow and adapt over time.
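The peak-value and peak-latch measurements mentioned above are simple to illustrate: a peak meter reports the loudest sample in each block in dBFS, while a latch holds the highest value seen until reset. This sketch works on normalized float samples and is a generic illustration of those metering concepts, not TSL's implementation.

```python
import math

def peak_dbfs(samples):
    """Instantaneous sample peak of a block, in dBFS (0 dBFS = full scale)."""
    peak = max(abs(s) for s in samples)
    return -math.inf if peak == 0 else 20 * math.log10(peak)

class PeakLatch:
    """Holds the highest peak seen across blocks until reset, as a latch meter does."""
    def __init__(self):
        self.held = -math.inf

    def update(self, samples):
        self.held = max(self.held, peak_dbfs(samples))
        return self.held

latch = PeakLatch()
latch.update([0.1, -0.5, 0.25])        # block peak 0.5, about -6.02 dBFS
print(round(latch.update([0.05]), 2))  # quieter block; latch still holds -6.02
```

Loudness measurement proper (as in the licensed loudness probe modes) additionally involves frequency weighting and gating over time, which is why it is a distinct, heavier measurement than peak metering.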


Veritone aiWARE provides media and entertainment organizations with an extensible software infrastructure and low-code tools enabling fast-to-market AI deployments at scale that don’t require AI development teams or a bespoke IT infrastructure. With aiWARE’s patented approach to leveraging an expanding ecosystem of cognitive engines through a single platform architecture, organizations can easily access over 320 proprietary, market-leading and curated niche AI engines to transform audio, video and other data sources into actionable intelligence in real time. The technology employs multi-cognitive processing across its engine ecosystem for greater breadth and accuracy, providing results superior to single-engine AI solutions. The Veritone aiWARE platform launched in April 2015. In March 2019, Veritone released aiWARE 2.0, offering a series of new capabilities further driving enterprise AI adoption, including a real-time processing framework, expanded cognition capabilities, advanced

configuration options, as well as multiple industry-specific turnkey applications. The Veritone aiWARE enterprise-grade AI platform enables organizations to deploy both custom and turnkey end-to-end, AI-powered solutions fast, without machine learning expertise: Broad Data Support. Leverage both structured and unstructured data types to generate insight from complex analyses. Customers can leverage Veritone applications, or developers can create their own applications that analyze and correlate sources such as public and private databases with unstructured files (photos, video, audio) for greater insight. Real-Time Framework. Extract insight from data in real time with aiWARE’s real-time processing framework. Unlocking results within a few seconds, users can accelerate decision-making, streamline workflows, and address compliance requirements to improve efficiency and reduce costs. Cognitive Engine Ecosystem. Harness a cognitive engine ecosystem of over 320 engines across 29 different cognitive capabilities to extract actionable insight

from data. Individually and working in concert, these cognitive capabilities further expand the universe of potential use cases for Veritone customers. Whether enhancing new or existing business solutions, these engines can be easily trained for specific use cases to achieve greater accuracy. Custom AI-Enabled Applications. Create solutions customized to unique business challenges with a self-service development sandbox, low-code customization tools, and application components for easy extension of cognitive capabilities and fast time to market. Prototype AI-enabled solutions quickly, reducing risk and investment in innovation projects for internal and customer applications. Turnkey AI-Enabled Business Applications. Access turnkey media and entertainment applications designed to tackle prevalent industry challenges without the need for AI expertise. These applications include Veritone Attribute, Veritone Discovery, Veritone Core, and Veritone Digital Media Hub.
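The multi-cognitive idea above – running several engines against the same media and combining their outputs for better accuracy than any single engine – can be sketched as a confidence-weighted vote. The engine names, labels and scores below are invented for illustration; this is a generic ensemble pattern, not the aiWARE API.

```python
from collections import defaultdict

def fuse(results):
    """results: list of (engine, label, confidence) tuples.
    Returns the label with the highest summed confidence across engines."""
    scores = defaultdict(float)
    for _engine, label, confidence in results:
        scores[label] += confidence
    return max(scores, key=scores.get)

# Three hypothetical engines tag the same video moment with different confidence.
detections = [
    ("engine-a", "goal", 0.80),
    ("engine-b", "goal", 0.70),
    ("engine-c", "foul", 0.90),
]
print(fuse(detections))  # "goal": 1.50 combined, beating "foul" at 0.90
```

The point of the pattern is that two moderately confident engines agreeing can outweigh one very confident outlier, which is where the accuracy gain over a single-engine solution comes from.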

BEST OF SHOW AT IBC 2019 VERIZON MEDIA THE VERIZON MEDIA PLATFORM At IBC2019, Verizon Media is debuting advanced capabilities for delivery optimization, ad performance visibility, content personalization, and blackout control on its Media Platform. These enhanced offerings enable Verizon Media to deliver ultra-personalized, TV-like experiences to viewers on the largest and most reliable streaming network. Smartplay Stream Routing. Smartplay Stream Routing delivers video traffic over multiple CDNs, giving viewers faster startup times and reduced rebuffering, resulting in a high-quality viewing experience. New for IBC 2019, Smartplay Stream Routing uses Verizon Media’s server- and client-side performance data from across its global network and device distribution to dynamically deliver traffic. If one CDN experiences an outage, traffic can be automatically re-routed, protecting broadcasters against catastrophic network issues. This enables broadcasters and content providers to more confidently deliver the best-quality video to viewers wherever they are in the world. Smartplay is entirely CDN-agnostic, so decisions about how to route traffic are made purely on performance metrics. Ad Server Debug. Verizon Media’s enhanced Ad Server Debug empowers content owners with end-to-end visibility into the ad insertion

process, highlighting errors, timeouts, and tracking issues. The technology automatically collects and stores data on every ad transaction, including response times and timeouts from third-party ad servers. Comprehensive, session-level data can be stored for up to 14 days, so there’s never a lack of data for in-depth troubleshooting and analysis. Fragmented and evolving industry standards around OTT advertising have made it difficult to get a clear view of what’s happening during the ad insertion process. Ad Server Debug changes this by delivering far greater transparency and insight into how ads are delivered, enabling service providers to improve the quality of experience and deliver personalized streams to millions of viewers worldwide. Smartplay Content Targeting. Smartplay Content Targeting delivers TV-like quality video streams that are ultra-personalized via manifest manipulation technology. Blackouts require content distributors to restrict content based on

a viewer’s location or device type. When a blackout occurs, broadcasters must deliver alternative content to keep audiences engaged. Within Verizon Media’s streamlined, updated UI, customers can easily schedule blackouts ahead of time and plan the distribution of personalized content to better control the viewer’s experience. Within the UI, you create audiences, build rulesets, and then apply these criteria to the assets that matter. The Verizon Media Platform also supports scheduling through its API or by uploading a CSV. To extend legacy broadcast workflows, content replacement and audience management can be automated for any workflow using ESNI, saving time and extending reach while creating harmony within existing broadcast workflows. Smartplay Content Targeting is intelligent enough to discern a viewer’s location, device or environment and deliver an experience optimized just for them. This ensures approved content is delivered to the right audience at the right time. OTT personalization hinges on the performance of the manifest server to generate a unique playlist of content, ads, and playback instructions for every subscriber. Users can build audiences and rulesets through Smartplay that are enforced on every asset, for every viewer who presses play, anywhere in the world.
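The performance-based multi-CDN routing described for Smartplay Stream Routing can be sketched as a scoring function: pick the CDN with the best recent metrics, and skip any CDN flagged as down so failover is automatic. The metric names, weights and CDN names below are invented for illustration and are not Verizon Media's actual algorithm.

```python
def pick_cdn(metrics, down=()):
    """metrics: {cdn: {'rebuffer_pct': float, 'startup_ms': float}}.
    Returns the healthy CDN with the lowest (best) combined score."""
    def score(cdn):
        m = metrics[cdn]
        # Lower is better; rebuffering is weighted more heavily than startup
        # time, an arbitrary choice for this sketch.
        return m["rebuffer_pct"] * 100 + m["startup_ms"] / 10

    candidates = [c for c in metrics if c not in down]
    return min(candidates, key=score)

metrics = {
    "cdn-a": {"rebuffer_pct": 0.4, "startup_ms": 900},
    "cdn-b": {"rebuffer_pct": 0.9, "startup_ms": 600},
}
print(pick_cdn(metrics))                  # cdn-a: score 130 vs cdn-b's 150
print(pick_cdn(metrics, down={"cdn-a"}))  # cdn-b once cdn-a has an outage
```

Because the decision is made only from the metrics dictionary, the routing is CDN-agnostic in the same sense the text describes: any CDN can win the next request if its numbers are best.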


Strata 32 is Wheatstone’s newest television control surface, a compact IP audio console with dedicated faders for eight subgroups/VCAs (mono, stereo or 5.1 surround), two masters, 16 aux sends, routable per-channel MXMs, 16 dedicated MXM busses, and 32 physical faders that can be layered to 64 channels, all fitted into a 40” wide countertop mainframe, making it equally at home in newsrooms, remote vans or sports venues. Fully compatible with the WheatNet-IP intelligent network, Strata 32 provides easy access to and control of network resources through a powerful touchscreen interface, with intuitive menus for adjusting EQ and dynamics,

setting talkback, configuring mix-minus feeds and bus matrices, muting mic groups and managing sources and destinations. Per-channel full-color OLED displays show all relevant editing and operating functions at a glance. With all I/O managed through the WheatNet-IP audio network, this compact board has no fixed connection limitations, having full access to all network sources and destinations using any preferred audio format, whether HD/SDI, AES, MADI, AoIP, analog or TDM. The console’s IP Mix Engine is capable of 1024 channels of simultaneous digital signal processing (including 4-band parametric EQ, hi/lo-pass filters, compressor/limiter,

expander gate and programmable delay) and provides additional advanced specialty features, such as a new automixer with four separate automix groups and on-screen weight control. Also new this IBC is the optional 4RU StageBox for extending console I/O, providing 32 mic/line inputs, 16 analog line outputs, and 8 AES3 inputs and 8 AES3 outputs, as well as 12 logic ports and two Ethernet ports. (StageBox works with all WheatNet-IP audio networked consoles.) The Strata 32 interfaces with all major automation systems and is AES67- and SMPTE 2110-ready. Price is under $75,000, including StageBox.
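The automixer mentioned above can be illustrated with the classic gain-sharing principle: each mic receives a gain proportional to its share of the total input energy, so the sum of gains stays constant and open idle mics are pulled down automatically. This is a generic sketch of that principle with illustrative weight handling, not Wheatstone's implementation.

```python
def automix_gains(levels, weights=None):
    """levels: linear input levels per mic; weights model a per-channel
    weight control that biases the share toward favored mics.
    Returns each mic's share of the total (the gains sum to 1)."""
    weights = weights or [1.0] * len(levels)
    powers = [(level * w) ** 2 for level, w in zip(levels, weights)]
    total = sum(powers) or 1.0  # avoid division by zero on silence
    return [p / total for p in powers]

# One active talker and three idle mics: the talker takes nearly all the gain,
# so the idle mics contribute almost no noise or room tone to the mix.
gains = automix_gains([1.0, 0.05, 0.05, 0.05])
print(round(gains[0], 3))  # ≈ 0.993
```

Keeping the gain sum constant is what lets an automix group stay on-air-stable: total loop gain does not rise as more mics open, which is the practical benefit for panel shows and newsrooms.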
