TM Broadcast International 78, February 2020





ISE 2020: Goodbye Amsterdam!




WRC (World Rally Championship)


Extreme production to become remote in 2020. Interview with Florian Ruth, Director of Content & Production, WRC Promoter GmbH.

The best of IP is yet to come, by Grass Valley

Achieving next-generation media experiences with High Dynamic Range

HDR: analyzing PQ and HLG


Editor in chief Javier de Martín

Creative Direction Mercedes González

Key account manager Susana Sampedro

Managing Editor Sergio Julián

Translation Fernando Alvárez

Administration Laura de Diego

Test Zone Calrec Brio 36


TM Broadcast International #78 February 2020

TM Broadcast International is a magazine published by Daró Media Group SL Centro Empresarial Tartessos Calle Pollensa 2, oficina 14 28290 Las Rozas (Madrid), Spain Phone +34 91 640 46 43 Published in Spain ISSN: 2659-5966

EDITORIAL

After the renowned creation of Vizrt Group (Vizrt, NewTek and NDI) and the acquisition of Ooyala by Dalet, we have recently witnessed a new, interesting movement: the integration of Embrionix within Riedel. All these changes point towards one reality: the hyper-specialization that seemed the way to go at the beginning of this millennium, with the appearance of a plethora of companies, has been gradually fading to give way to common solutions. Companies aim to provide alternatives throughout their clients' whole workflow, making processes more efficient through this unification of resources. These powerful alliances, which will keep taking place over the coming years, point towards a significant change of paradigm. We will follow it closely.

Setting the focus on projects, we can see that the industry is keeping up a busy level of activity at the beginning of 2020. Good examples of this are the huge integrations taking place in the United States and Europe, driven to a great extent by the inevitable transition to IP. However, we must not lose track of the technological evolution currently taking place in Asia and Africa. An increasing number of initiatives are being carried out in these territories, which can be regarded as an alternative for those willing to expand their operations.

The broadcast market is global, like the international shows that are set in our calendar. These meetings energize the sector, multiply opportunities and showcase important technical news. We are already getting ready to attend NAB 2020 (19-22 April, Las Vegas Convention Center). Before that, a wide team from TM Broadcast International will visit the latest edition of the reference AV show, ISE (Integrated Systems Europe), in order to tell you about common issues and the latest innovations in these two closely related worlds. In fact, many of the significant firms in our scope will not miss the event to be staged at RAI (Amsterdam).
In addition to a "preview" of this event, in our pages you will find other contents of interest, such as a comprehensive analysis of intercom and talkback tools and an exclusive interview with Florian Ruth, Director of Content & Production, WRC Promoter.



Ross Video and Van Wagner Sports & Entertainment took a fundamental part in last Super Sunday

Ross creative and technical personnel and partners Van Wagner Sports & Entertainment powered the game-day experience that entertained fans fortunate enough to have tickets for Miami's 54th Super Bowl.

The Hard Rock Stadium, home to the Miami Dolphins football club and NCAA Miami Hurricanes football team, has been a long-time Ross customer. In March of 2017, they undertook an upgrade to their video production control room and LED display control systems and chose a Ross Unified Venue Control System. This system integrates the control room production and LED content management system. The solution is comprised of several Ross products, including the flagship Acuity production switcher, XPression Studio 3D graphics render engine, XPression Tessera 3D graphics render engines (LED control) and the DashBoard Control System.

For this year's event, additional Ross solutions were added to the production, including the XPression real-time Augmented Reality (AR) system, to display scoring drive summaries and in-game player statistics, and the Piero Sports Analysis tool, that uses image recognition or encoded camera lens/head data to overlay tactical graphics on sports content.

Commenting on the production horsepower deployed for this year's game, Kevin Cottam, Ross Video's Director of Sports & Live Events, highlights the importance of giving the fans in the stadium an experience to remember. "Last weekend's game was one of the world's most widely-viewed sporting events, and everyone involved wanted to ensure that the fans were entertained and engaged. We've partnered with Van Wagner Sports & Entertainment on a number of high-profile sports productions over the years and this was our 4th Super Bowl working together, so we were confident we would entertain fans throughout the game."


LiveU collaborates on two additional pan-European EU 5G projects

LiveU has increased its collaboration with leading European partners on 5G Infrastructure Public Private Partnership (PPP) projects, testing and validating content contribution and media production use cases over advanced 5G Release 16 testbeds. The projects' goals are to provide the broadcast community and other verticals with insights into 5G performance in real-world scenarios. The projects are funded by Horizon 2020. LiveU is a technology and use case partner in 5G-Tours and in 5G-Solutions. Both projects test and analyze 5G performance KPIs, including media use cases related to LiveU 5G bonding technology. The core 5G technologies being validated include network slicing, New Radio (NR), low latency, edge/cloud computing, SDN/VNF,

service orchestration and more, bringing the 5G vision closer to realization. As part of these tests, LiveU's field units are being used in live events, for example, the Turin festival, where orchestral music in an auditorium will be synchronized live with street players (5G-Tours), and the Patras festival, where multiple cameras will be transmitted from a crowded street (5G-Solutions). The 5G-Solutions ICT-19 RIA project "5G Solutions for European Citizens" aims to prove and validate that 5G provides prominent industry verticals with ubiquitous access to a wide range of forward-looking services with orders of magnitude of improvement over 4G, thus bringing the 5G vision closer to realization. The project

provides validation of more than 140 KPIs for 20 innovative and heterogeneous use cases that require 5G performance capabilities and that are expected to have a high future commercialization potential (EU's Horizon 2020 research and innovation program, grant no. 856691). The 5G-Tours ICT-19 RIA project "SmarT mObility, media and e-health for toURists and citizenS" involves advanced 5G validation trials across multiple vertical industries, deploying full end-to-end trials to bring 5G to real users for thirteen representative use cases. The project will provide efficient and reliable close-to-commercial services in three different types of cities: Rennes, Turin and Athens.


NYSE uses LTN Network to improve media organizations’ access to live news and floor activity

LTN® Global is providing transmission services for video covering opening and closing bells, floor activity, and other financial news taking place at the New York Stock Exchange.

Global broadcasters and business news channels can also book live shots for their reporters on the exchange floor and then use the LTN Portal and LTN Network to route feeds to the appropriate destination for broadcast. "By working with LTN, we've made the NYSE more accessible as a source for highly sought-after financial news content throughout the business world," said Ian Wolff, manager, Broadcast/AV Services at the NYSE. "LTN's fully managed network and intuitive client portal make it easy for broadcasters everywhere to book NYSE content and provide it to their viewers."

LTN gear installed on site supports the multicast of the NYSE channel to any broadcaster across LTN's worldwide network of media organizations. Broadcasters that cover the activities of the NYSE can simply request LTN through the NYSE master control room and manage IP-based transport of their own live feeds through the LTN Portal.

"Our unique, patented technology leverages the power and flexibility of IP-based transport to support agile transmission services without sacrificing convenience, quality, or reliability," said Chris Myers, executive vice president and chief revenue officer at LTN Global. "We're excited to offer media organizations a new cost-effective transmission solution from such a newsworthy institution as the NYSE."

LTN's fully managed, IP-based media delivery service offers the NYSE and other customers guaranteed SLAs for high-reliability, low-latency, professional broadcast-quality transmission. The LTN Network is built on thousands of directly connected sites, and it is trusted around the world by major broadcasters, affiliates, and studios for full-time and occasional-use contribution and distribution over linear and digital networks.


M for Media installs GB Labs FastNAS intelligent storage

GB Labs, innovators of powerful and intelligent storage solutions for the media and entertainment industries, today announced that M for Media, an Abu Dhabi-based media production house that produces TV shows and adapts international formats for the GCC and MENA region, has installed GB Labs' FastNAS shared storage system.

M for Media's Andre Aouad said, "I first heard about GB Labs from the creator of MIMIQ for Avid systems, who forwarded some information that he thought would interest me. It did, particularly because, at the time, we had no existing storage and it was rapidly becoming a problem because of the volume of work we were taking in. You can't keep running back and forth with arms full of hard disks. It's unproductive, and it was beginning to show. However, GB Labs soon came up with the ideal solution."

Supplied and installed by partner MediaCast, the GB Labs FastNAS F-16 Nitro system installed at M for Media can be accessed by up to 10 users of Avid Media Composer, simultaneously if required. As leaders in intelligent storage solutions, GB Labs recently introduced even more acceleration for FastNAS storage systems with the introduction of new 25 GbE connectivity. FastNAS media production storage systems are available in a range of configurations, including FastNAS F-8 Nitro Studio; FastNAS F-8 Nitro; FastNAS F-16; FastNAS F-16 Nitro and FastNAS F-16 Nitro MAX.

Broadcast Solutions delivers additional HD OB trailer to Belarus

Broadcast Solutions GmbH has delivered another OB truck to a public broadcaster in Belarus. After the company completed a major order for Belteleradiocompany last year that consisted of four HD OB Vans (including support trucks), an additional vehicle now follows. ONT, the second public broadcaster in Belarus, has received delivery of an OB Van developed by

Broadcast Solutions. Broadcast Solutions supplied an HD OB Trailer with 8 system cameras (prewired for 18 cameras) and the broadcaster plans to use the new production vehicle to enhance its coverage of events in Belarus and abroad. The station will focus on cultural, political and entertainment programs. ONT will use the new OB truck for concerts, shows and political and national

events such as city festivals, elections and military parades. With the new OB Van, the largest vehicle in the broadcaster's fleet, ONT is massively increasing its capacity. The OB truck is currently equipped with eight Grass Valley LDX 82 Premiere HD cameras, but is geared up for eight more plus two wireless cameras; the full technical capacity of the truck will be used in 2020.


The OB truck is designed as a trailer with one extension. Besides the Grass Valley cameras, ONT is relying on further Grass Valley equipment inside the vehicle. A GV Kahuna 2 M/E Vision Mixer, GV MV820 Multiviewer, a GV Vega Router with 180x180 crosspoints and additional GV Glue equipment are in place. For the audio area ONT

trusts in a Calrec Brio 36 audio console, TC Electronics audio effects and RTW Audio Measurement. A Riedel Artist 128 system is used for communication within the truck and with on-site teams. An IHSE Draco tera KVM system secures access to all systems. For operation, control and workflow management, the ONT OB

Van uses the hi (human interface) control system developed by Broadcast Solutions. hi controls all production-related areas in the truck, such as the video router, multiviewer, audio router and tally. Intuitive touch interfaces and hi hardware control panels simplify daily production operations in the OB Van and provide access to all important functions.


Broadpeak completes "world's first" unified packaging and encryption of DASH and HLS formats

Broadpeak, provider of content delivery network (CDN) and video streaming solutions for content providers and pay-TV operators, has performed the "world's first" unified packaging and encryption of DASH and HLS formats. Using the latest version of its BkS350 origin packager, Broadpeak successfully delivered DASH and HLS video fragments using the same chunks (encrypted with CBCS) and container (CMAF) for both protocols, "a unique capability that will optimize storage costs for OTT service providers", according to the press release.

Until CMAF was developed, HLS and DASH needed to be delivered in different containers. In addition, Apple FairPlay and Google Widevine used different encryption schemes (CBCS and CTR, respectively), creating the need for a different chunk for each streaming format even with the new CMAF container. Now that Widevine allows CBCS encryption, it is possible to encrypt one CMAF fragment for both HLS and DASH formats. Broadpeak's BkS350 origin packager offers this capability.

"Today, OTT delivery can be costly in terms of network bandwidth and storage due to the multiplication of streaming formats. Service providers need to deliver video content in two entirely different packaging formats and two different encryption schemes in order to reach all devices," said Jacques Le Mancq, CEO at Broadpeak.
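To illustrate why a single CMAF chunk set encrypted once with CBCS matters, here is a rough back-of-the-envelope sketch of origin storage under the two regimes. All figures and function names are illustrative assumptions for this example, not Broadpeak data.

```python
# Illustrative storage arithmetic: one CMAF/CBCS chunk set serving both
# HLS and DASH roughly halves origin storage for the media segments,
# since the bulky media chunks no longer need to be duplicated per format.
# All numbers below are hypothetical.

def origin_storage_gb(titles, gb_per_title, unified):
    """Media storage needed at the origin for a VOD catalog.

    unified=False: separate chunk sets per protocol (HLS + DASH).
    unified=True:  one shared CMAF chunk set, CBCS-encrypted once.
    """
    chunk_sets = 1 if unified else 2  # one copy per streaming format
    return titles * gb_per_title * chunk_sets

catalog = 5_000   # titles in the library (assumption)
size = 20         # GB per title across the full ABR ladder (assumption)

separate = origin_storage_gb(catalog, size, unified=False)
shared = origin_storage_gb(catalog, size, unified=True)
print(separate, shared)  # 200000 100000
```

The small per-format manifests (`.m3u8`, `.mpd`) still differ, but they are negligible next to the media chunks, which is why the unified approach is pitched as a storage-cost optimization.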


PlayBox Neo CIAB Playout goes live at SoundView Broadcasting, New York

SoundView Broadcasting, a global broadcast service provider, has invested in a PlayBox Neo multichannel playout system for its headquarters in New York City. "The new installation includes the latest-generation AirBox Channel-in-a-Box servers in content ingest, master playout and backup configurations," says SoundView Broadcasting Director of Operations

Sarmad Zafar. "They are straightforward to install, configure well and perform very reliably. We also have fast access to support from PlayBox Neo USA if needed. The PlayBox Neo user interface is very operator-friendly, easy to learn and displays exactly the information needed at each stage of the operation from ingest right through to transmission." "We've provided three of our full-size 3U servers

for this project," explains PlayBox Neo US Director of Operations Van Duke. "Each is configured with AirBox Neo-19 playout software and TitleBox Neo-19 channel branding software. Servers 1 and 2 include ANSI/SCTE 35 insertion cue messaging. Servers 2 and 3 are additionally loaded with PlayBox Neo MultiBackup Manager. Server 3 has been installed in an offsite disaster recovery facility."

Playout control suite at SoundView Broadcasting in New York



MBC produces the 2019 Indian Ocean Island Games live with Bluefish444 IngeSTore Server and Avid

The Mauritius Broadcasting Corporation (MBC) is the national public broadcaster of the Republic of Mauritius, including the islands of Mauritius, Rodrigues and Agalega. MBC is based in Moka with another station on the island of Rodrigues. Broadcasting in 12 languages, MBC provides 23 television channels and 7 radio channels across the country.

MBC has worked with South African-based systems integrator, Jasco, for over 20 years. Jasco provides end-to-end solutions across broadcasting, telecoms, communications, security and fire sectors, incorporating intelligent technologies including power and renewable energies and data centres. MBC was looking for a way to ingest live feeds of the 2019 Indian Ocean

Island Games (IOIG) in Mauritius and approached Jasco to recommend and install a solution that could record material live from up to eight HD-SDI feeds. With MBC being a user of Avid products for over 10 years, including Avid NEXIS network storage and Avid Media Composer, the recording solution needed to integrate easily into their Avid-centric workflow. Having integrated Bluefish444 products in the past, Jasco suggested MBC install Bluefish’s IngeSTore Server for their requirements. “The Bluefish IngeSTore Server is an amazing piece of technology, so it was a simple choice to make when we had to propose an ingest solution,” said Jonathan Smith, Technical Manager of Jasco. 


Reynolds Journalism Institute upgrades its workflow with Quicklink TX

The University of Missouri School of Journalism's Reynolds Journalism Institute recently used Quicklink TX for an on-air interview with C-SPAN on the Senate presidential impeachment trial. Staff from RJI utilized Quicklink TX to broadcast Missouri School of Law Professor and author Frank Bowman for a live, one-hour interview on C-SPAN's morning program, Washington Journal. For this set-up, RJI

established a two-way call with "high quality, ultra-low" delay. "Because of Quicklink, we're able to sell our small-scale facility to others as a site from which to do these national and international broadcasts. It's like we're able to play with the big boys without owning a microwave truck or satellite truck," said Travis McMillen, senior media producer at RJI. "I've found that Quicklink TX provides a

powerful, yet straightforward experience for both the far site and the near site.” RJI also uses Quicklink TX on a weekly basis during Global Journalist, a “virtual roundtable” program featuring guests from anywhere in the world. During the show, three journalists join the in-studio moderator as guests to discuss a topic from their point of view as the journalist covering the story. 



Scott-Oliver Lührs joins Qvest Media

As principal, Scott-Oliver Lührs will be responsible not only for product strategy, but also for the international business development of Qvest.Cloud. In this position, he will capture clients' technology requirements and incorporate them strategically into the further development of Qvest.Cloud. Lührs, who has comprehensive experience in strategic, conceptual and operational product management from the IT and media industry, will be closely involved with the decision-making processes for the development of Qvest.Cloud in order to play a key role in its shaping. In this newly created role, Lührs will report directly to Qvest Media's top management. Scott-Oliver Lührs looks back on more than 30 years of product and IT expertise and previously worked at BFE as Division Manager for KSC Controller Systems, amongst other roles. Prior to joining BFE, Lührs spent ten years with Avid in product design as a product owner, as well as a Global Solutions Consultant.


Vitec strengthens its broadcast contribution and remote production vertical with the acquisition of IPtec Inc.

Vitec has announced the strategic acquisition of IPtec Inc., a developer and manufacturer of solutions for low-latency transfer of telemetry and video over IP networks. This acquisition is the third the company has made within this vertical in the last 18 months, continuing Vitec's growth in broadcast contribution and remote production. Effective this month, the acquisition will add IPtec's product families to the Vitec lineup. This includes the VNP Series video network processors, which support full-duplex encode-decode; H.264, AVC-I, and MPEG-2 video standards; and AAC and MPEG-1 Layer II audio standards, in a redundantly powered ½ RU package. The current IPtec VNP Series will be maintained as its product roadmap merges into Vitec's HEVC portfolio offering. Vitec will also add the TNP Telemetry Network Processor Series, which addresses the need for transmitting timing-sensitive data signals over IP networks in telemedicine, defense, and industrial applications.


EditShare partnership with Annex Pro increases sales and professional services reach across the Canadian media market

EditShare® has announced that Annex Pro, Canada's premier system integrator and reseller for the creative broadcast, production, post-production, animation, VFX, film and finishing markets, has joined the EditShare Channel Partner Program. Designed for resolution-intensive productions, EditShare's innovative technology can manage high-volume, high-bandwidth content with granular detail. Straightforward workflow orchestration automates redundant and complex tasks, while embedded editorial production tools, hybrid cloud configurations and comprehensive metadata tracking with AI-accelerated enrichment bring the practicality

desired to manage media across diverse needs and at scale. "EditShare and Annex Pro share the same core values: empathy, pursuing mutually beneficial solutions and up-leveling customer deployments to ensure success. This makes EditShare not only a great business partner, but also creates an exciting and fulfilling work environment," says Kerry Corlett, CEO, Annex Pro.

With access to the full product portfolio of EFS media engineered shared storage and Flow media management tools, including SaaS and perpetual licensing models for on-premise, hybrid, and cloud configurations, Annex Pro can service wide-ranging workflow and technology needs across all markets.




Integrated Systems Europe (ISE) has succeeded in its own right in becoming one of the major reference events in the audiovisual world, a crown contested only by its competitor InfoComm in the American market. Very few could foresee that such a small event, which kicked off back in 2004 with just 120 companies and 3,489 visitors, would grow only 15 years later to reach 1,300 exhibitors and 81,268 visitors. By Sergio Julián

After this long trajectory, an inescapable reason for recognition, the initiative hosted by Integrated Systems Events LLC, a joint venture between AVIXA and CEDIA, has made it to the next level. This is because, as you well know, ISE 2020 will be the last edition of the show to be hosted at RAI Amsterdam (The Netherlands). It has not been an easy decision: it took 18 months of debate to clarify whether keeping the event in the Dutch city was feasible. However, annual growth of nearly 10% in the number of both exhibitors and visitors was the final sign unmistakably hinting that a move to a new location

was required: the Gran Vía premises of Fira Barcelona. Located just a few minutes away from the core of the Spanish city, these facilities will enable ISE to strengthen its global leadership and to boost, rather than hamper, the growth of an unstoppable event. We will gradually get to know more about this new venue; for the time being, the organizers have already unveiled the addition of a new area, "Live Events & Lighting". But this is the future. Let's focus on what's available now, which is exciting enough as it is. The biggest ISE show in history is awaiting us. Let's say farewell the way it deserves.

Sharing knowledge and experiences

ISE will once again be a cross-cutting proposal that will not only host the latest technological developments, but will also work as a forum for sharing opinions, knowledge and experiences among profiles of all kinds. This convergence of knowledge, boosted by the willingness to go digital shown by a large majority of companies throughout the world, will be further driven by the thirteen conferences held alongside the event. Among the novelties are a new Control Room Summit ISE plus two events from CEDIA:


ISE 2019.

Cybersecurity Workshop and Design & Build Conference. These events join other classics of the show: Smart Building Conference ISE, AudioForum, AVIXA Higher Education AV Conference, XR Summit ISE, Digital Signage Summit ISE, AVIXA Enterprise AV Conference, Digital Cinema Summit ISE, Hospitality Tech Summit ISE by HTNG, AGORA and AttractionsTECH by Blooloop. ISE, by its very concept, is endless. It is physically impossible, during the four days it lasts, to cover all the stimuli on offer: the conferences, the meetings with companies, or simply time spent enjoying the latest technological innovations, for instance the video mapping to be projected at the RAI Elicium Centre. However, such versatility conceals a well-kept secret: its ability to attract professionals from all kinds of sectors.


Amongst the hottest trends, we will be witnessing yet a further episode of the increasingly fierce battle between projectors and screens.

The sector's technological future at stake

Digitalization is no longer a choice or a strategy for the future, but a need. It is a way of streamlining workflows, improving productivity, impressing or being impressed; it lies at the present and future core of nearly all companies in the world and will inevitably permeate the global business network. This massive technological need has driven the mushrooming of technological resources, both hardware and software, being made available to companies,

installers, consultants and end users. As a result, efforts made in R&D are causing the technical developments made by corporations to evolve at great speed. Many of them will be available for testing in 15 showrooms at RAI Amsterdam. Amongst the hottest trends, we will be witnessing yet a further episode of the increasingly fierce battle between projectors and screens. On the one hand, projectors keep evolving with innovative products, adaptability solutions and brimming proposals that take lumens to record figures and resolution levels able to

support the industry's top creative needs. Against this backdrop, and in part as a result of decreasing costs, LED technology is gaining ground thanks to its scalability and versatility. Quite a few auditoriums are choosing solutions based on these panels; in a similar way, companies are seeing an increasing offering of LED micro-panel packs for easy fitting in corporate spaces. This battle is also being fought on a pitch as broad and diverse as the educational sector. We will tell you everything about the latest innovations being showcased on Dutch soil. On the other hand, we


will be closely following the space monitoring and management scene. Process automation was very much the main focus of the latest edition of InfoComm, held in Orlando, and this year seems poised to repeat the trend.

Manufacturers creating full ecosystems to solve every need relating to the management of spaces, the monitoring of technical elements or the automation of processes can be counted by the tens. The path towards

digitalization, as we pointed out, does not allow a single step back. As usual, the challenge lies in achieving full stability of systems and a smooth coexistence with more traditional models up to full implementation of these new paradigms. ISE will help clarify how close we are to achieving this goal, which fortunately enough is attainable.

A meeting point between the broadcast and AV worlds

Many of the major manufacturers of the broadcast world will be present at ISE, either with products that are exclusive to the AV sector or by showcasing the adaptability of broadcast systems to the needs of global corporations. Amongst a number of areas, we will closely follow novelties in video capture resources, focusing our attention on PTZ solutions, an increasingly relevant technology in audiovisual environments. Also present will be


those systems whose features are best adapted to shows as well as to novel and ambitious TV productions. At present, a significant number of studios are using resources intended for the AV world, and we are convinced that a broad range of players from the broadcast industry will be present at ISE with a view to adapting these systems to their daily workflows. We should not forget that showiness is a top priority for millions of viewers, and the AV scene is certainly a gold mine for anyone aiming to be ready for the future. A further element providing a meeting point is IP technology. Nowadays both industries are relentlessly working towards the standardization of transmission formats and the renovation of infrastructures. These challenges are faced by means of shared technologies, which will be present at RAI in the form of hybrid solutions, converters, controllers and audio and video solutions, amongst many other fields. We will also closely follow

innovations in this area, which will in turn inevitably be replicated and enhanced at the next major meeting for the broadcast market: NAB 2020 in Las Vegas. Many other areas having a direct impact on our world will be awaiting us at ISE 2020: translation systems, audio processing tools, wiring solutions, conferencing and collaboration systems, novelties relating to screens, interactive display tools, innovations regarding projection, wireless communication solutions, and augmented and mixed reality creations, all of which can be implemented in our environment. A broad team from our magazine will closely examine each proposal in order to provide you with a comprehensive summary of trends and technologies in the next issue of TM Broadcast International. And now, you can learn about a selection of the latest efforts from major exhibitors that will be showcasing at ISE 2020. See you in the corridors of RAI!


Yamaha highlights solutions for entertainment and enterprise

At February's Integrated Systems Europe exhibition in Amsterdam, Yamaha will be highlighting its support for both the entertainment and enterprise markets with its range of high-end, integrated solutions. ISE takes place at the Amsterdam RAI from 11-14 February 2020, with Yamaha exhibiting on Stand 3-C95. Yamaha's digital mixing systems will be on show, including RIVAGE PM and the latest V5.5 firmware for CL/QL series digital mixers, which supports ProVisionaire Control/Touch; the latest Version 3.6.0 enables complete control over the entire network chain, from mixers through to

Yamaha PC-D amplifiers


amplifiers (including Yamaha’s PC Series and the NEXO NXAMP4X4MK2), processors and speakers. The new white models of the flagship DZR/CZR series loudspeakers and DXS XLF/CXS XLF subwoofers will also be shown in Europe for the first time, following their launch at the NAMM Show in January. The powered DZR range comprises eight full-range models and four DXS XLF subwoofers, including ‘D’ versions that feature onboard Dante IN/OUT capability. Sharing the same cabinet and speaker components as the DZR/DXS XLF models, the passive CZR/CXS XLF models are matched with Yamaha’s all new PC-D range of amplifiers to deliver “very high powerhandling and focused, professional sound”.

All DZR/CZR and DXS XLF/CXS XLF models will be available in white as well as the existing black finish, providing more options for a greater variety of uses, such as hotel banquets, houses of worship and auditoriums. Yamaha will also be highlighting its Audioversity education content at ISE, with a variety of training sessions available on the stand. These will cover a variety of topics, including overviews of the RIVAGE PM systems, hands-on with CL/QL consoles and integrated AV control using ProVisionaire software. The company is also presenting in the series of AVIXA Flashtracks short seminars at the Flashtracks stand 13-N110. At 12 noon on Thursday 13 February, Holger Stoltze,


senior director of technical sales and marketing at Yamaha Unified Communications, presents “AI Is Improving The UC Experience”. Meanwhile Andy Cooper, pro audio application engineering manager at Yamaha Music Europe, asks the question “AV Networks: Which Topology Works Best?” at 11.30am on Friday 14 February.

Marshall CV380-CS Compact UHD Camera

Marshall Electronics showcases miniature and compact cameras

Marshall Electronics, provider of professional cameras, monitors and accessories for professional AV and broadcast production, will feature its CV506 HD Miniature Camera and CV380-CS Compact 4K Camera at ISE 2020 (Stand 11-D150).

Marshall recently upgraded its miniature and compact HDMI/3GSDI cameras with larger HD sensors, next-generation processors and improved industrial design. POV camera users will notice a step-up in color and clarity, as well as improved signal strength and ultra-low noise output with stereo embedding ability (mic or line level) on all models. The updated line features a new body style with locking I/O connections, remote-adjustable settings, and an expanded selection of output frame rates and formats. New models also include a micro-USB port for field-upgradable firmware updates as new features are added.

A frontrunner of Marshall’s next-generation HD camera line is the CV506 HD Miniature Camera, which offers simultaneous 3GSDI and HDMI outputs. The new CV506 serves as an upgrade over the popular Marshall CV505, with a 30 percent larger sensor “for better picture, dynamic color depth, and low light performance, along with an improved housing design”. The CV506 delivers “ultra-crisp, clear” progressive HD video up to 1920x1080p at 60/59.94/50fps and interlaced 1920x1080i at 60/59.94/50fps. With interchangeable lenses and remote adjustability for matching with other cameras, the CV506 is suitable for a range of professional workflows, as it can capture detailed shots while maintaining an “ultra-discreet” point-of-view perspective. Marshall Electronics will also feature its latest 4K camera at ISE 2020: the CV380-CS Compact 4K Camera. The CV380-CS uses


an 8.5 Megapixel sensor to capture video images over 6G/3G/HD-SDI and HDMI formats. Simultaneous 6GSDI and HDMI outputs offer the ability to plug into multiple workflows with CS mount lens options. The CV380-CS is designed for POV camera applications where a small camera is needed to fit into unique locations for compelling angles and viewpoints. The CV380-CS features a CS/C lens mount where fixed prime or variable focal length lenses can be used to enhance customization. Special attention has been given to improving the durability of the CV380-CS in the field with the addition of new structural “wings”, designed to give greater protection to the rear connectors during heavy use.

Magewell will debut a new multiprotocol live stream decoder Magewell, developer of innovative video interface and IP workflow solutions, has announced a new conversion product that brings together the worlds

of IP-based streaming and baseband AV presentation equipment. The new Pro Convert H.26x to HDMI multi-protocol, SRT-compatible streaming media decoder will make its first public appearance on stand 8-G475 at the ISE 2020 exhibition in Amsterdam. While earlier Pro Convert models transform professional AV signals to and from NewTek's NDI® AV-over-IP technology, the new Pro Convert H.26x to HDMI decodes a standard H.264 (AVC) or H.265 (HEVC) compressed video stream into a high-quality HDMI output for connection to baseband monitors, projectors and switchers. Supporting a wide range of streaming protocols, the low-latency decoder targets applications including multi-site video distribution between corporate, educational and church campuses; remote production; surveillance monitoring; digital signage and more. "While dedicated AV-over-IP technologies get a lot of attention and are ideal for many applications when used on

Magewell Pro Convert H.26x to HDMI

robust networks, a lot of customers are already generating live H.264 or H.265 streams in distribution-friendly protocols and wish to incorporate them into their presentation and display workflows," said James Liu, VP of Engineering at Magewell. "The Pro Convert H.26x to HDMI complements our NDI encoders and decoders, which are often used in conjunction with internal networks, by enabling users to leverage common streaming protocols internally or over the public internet." The Pro Convert H.26x to HDMI is Magewell's first product to support the SRT (Secure Reliable Transport) open source protocol. Enabling “secure”, low-latency


video delivery over “unpredictable networks”, SRT “ensures high-quality streaming experiences even over the public internet”. Other supported protocols include RTSP, RTMP, UDP, RTP and HTTP streaming. The Pro Convert H.26x to HDMI decodes streams up to 2160x1200 at 60 frames per second for output over its HDMI 2.0 interface. Built-in, FPGA-based video processing enables the device to automatically up-convert HD or 2K source streams to 4K for viewing on Ultra HD displays. The plug-and-play decoder features DHCP-based network configuration and can detect the video and audio characteristics of the target display device via EDID metadata, automatically optimizing output parameters or providing the user with a range of compatible choices. The device can be powered via an external adapter or Power over Ethernet (PoE).

Users can specify source stream URLs and control the decoder's settings through a browser-based interface; a wired or wireless keyboard or mouse; or using two on-device buttons that overlay an intuitive menu on the HDMI output. Eight channels of AAC or MP3 audio are supported in the input stream with user control of audio gain, sample rate, channel selection and on-screen VU metering. Additional integration features include image flip for inverted projector installations; safe area controls; and aspect ratio conversion. The decoder can be paired with Magewell's Ultra Stream encoders or third-party hardware or software encoders. Owners of Magewell's Pro Convert for NDI to HDMI decoders can also add multi-protocol H.264

and H.265 decoding capabilities to their existing devices with a free firmware upgrade. The new firmware update is available immediately on the Magewell website, while the new Pro Convert H.26x to HDMI hardware is slated to ship in March.

disguise to showcase integrated solutions Returning to the world’s largest exhibition for AV and integrated systems on 11-14 February 2020 in Amsterdam, disguise will present its renowned solutions. Attendees visiting the stand will have the opportunity to witness the different features of disguise’s solutions in action, working together at scale to deliver one fully integrated system. It will also feature an “eye-catching” projection-mapped kinetic sculpture with real-time generated video content. Various demo stations on the stand will also present how disguise’s solutions are used on huge global projects and productions: from enhancing the stages


of music festivals and live events, such as Eurovision Song Contest 2019 and Marco Borsato live at De Kuip Stadium, to powering architectural facades around the world. disguise’s hardware and software also sit at the heart of fixed installations, enabling designers, consultants, and systems integrators to transform environments and create visual experiences on building facades, like the Burj Khalifa, in theme parks and cruise ships, as well as in museums and retail spaces. disguise will also be showcasing how its solutions bring “a unique offering” to production teams in esports, allowing them to make use of content that responds to gameplay and integrates seamlessly with other systems. On-booth demos will show how the platform is being used to create experiences for both online and live audiences, with recent applications in live tournaments and studio installs around the world including PUBG Global Invitational, FACEIT Major and Gfinity Studio projects

in the UK and Australia. Finally, disguise will be presenting its latest hardware: vx 4, which has been engineered to play back video “at the highest quality and resolution possible”; and the gx 2c, which combines “unparalleled output and processing power”, according to the press release. This year, ISE attendees will also be treated to a visual experience as they enter the exhibition centre, as disguise is partnering with Novaline and Carbon Black to create a multidimensional holographic installation. The installation will combine industry-leading technologies, including disguise’s gx 2c, to “shape a new era of visual language” and celebrate the next chapter in ISE’s global story.

Muxlab will present its new signal transmitter At ISE, Muxlab will premiere its newest signal transmitter, which uses an IP-based Ethernet network to stream AV. Because it

supports the H.264/H.265 video codec setting, the HDMI over IP H.264/H.265 PoE Transmitter, 4K/60 (model 500764) can stream content from the Internet with “near zero latency”, delivering it locally, remotely or across the world. Each transmitter connects a source via an Ethernet switch to a MuxLab receiver, such as the HDMI over IP H.264/H.265 PoE (model 500762) receiver, which delivers AV to a connected display. According to Muxlab, upscaling to accommodate multiple sources and displays is “exceedingly easy”. The technology also has the


ability to support potentially “hundreds of displays” with content delivered in a multitude of combinations. In addition, integrators can create user-configurable video walls supporting multiviews; Motion JPEG (MJPG) is supported for low-latency applications; and two-channel audio can be delivered alongside HDMI. Also making its debut at ISE is the new DigiSign Plus CMS. DigiSign Plus now supports HTML5 and RSS feeds, and the user interface has been renewed to create a “much more pristine appearance” for the end user.

Bluefish444 to demonstrate KRONOS developer video cards and IngeSTore live recording appliance Bluefish444, manufacturer of professional video industry uncompressed 4K SDI, ASI, Video Over IP & HDMI I/O cards and mini converters, will be showcasing its KRONOS K8 Developer video I/O cards and IngeSTore Server multi-channel capture appliance on stand 15-T290 at Integrated Systems Europe 2020. The features of the K8 make it suitable for developers in the professional AV industry, offering eight independent, bidirectional channels of 3G/HD/SD and multichannel 4K/UHD SDI I/O at its core. The K8 also features 12-bit video processing quality, backed by 20 years of continuous cross-platform SDK driver development and hardware “reliability”, “making it perfect” for third-party developers of live production, live events, display, presentation, interactive media, archival

and virtual or augmented reality solutions. Bluefish will also demonstrate its four-channel video recording appliance, IngeSTore Server, supporting multiple video interfaces including SDI, HDMI and NDI inputs. IngeSTore Server can record multiple simultaneous channels and encode to a host of production, archival and streaming formats, create proxies, and integrate with Adobe and Avid NLEs to live edit the files while recordings continue. The IngeSTore Server features a flexible API allowing the full functionality of the IngeSTore software to be integrated into existing solutions and infrastructure. IngeSTore Server can save to local storage or be connected to network storage, where files can be recorded and shared for collaboration before being broadcast, streamed or archived. The IngeSTore Server recording appliance is a multi-channel, multi-format real-time recording device featuring a programmable interface. It has been


developed for corporate, educational, medical, government, security, military, house of worship, TV station and live event users that need extensible multi-channel recording capacity. Bluefish will be exhibiting both the KRONOS K8 video I/O card and the IngeSTore Server multi-channel capture appliance on stand 15-T290 at Integrated Systems Europe 2020.

NewTek to demonstrate software-defined AV technology NewTek, manufacturer of IP-based video technology and part of the Vizrt Group, will be demonstrating its software-defined AV technology at the Integrated Systems Europe exhibition in Amsterdam, Stand 15-M255. NewTek products take advantage of IP-based production and delivery with the power of NDI®, the “world’s most adopted” and free-to-use video over IP technology. Among the NDI-enabled products on display at the NewTek booth at ISE are: • NewTek VMC1 – Digital media production system with 44 source inputs. • TriCaster TC1 – Production system with 16 source inputs. • TriCaster Mini – Now in 4K with 8 source inputs. • TalkShow® VS4000 – Multi-channel video calling, broadcast-quality production control.

Tricaster Mini 4 HD

• MediaDS™ – Stream direct to viewers with or without the Internet.

Matrox KVM: Its new “Aggregator Mode” is now available Matrox’s Aggregator Mode, a multi-system control feature, is now available with Matrox Extio™ 3 high-performance IP KVM extenders. This mode of operation permits an Extio 3 receiver unit to


Matrox Extio 3’s Aggregator Mode enables users to control multiple systems from a multi-display station using a single keyboard and mouse.

aggregate multiple video streams coming from multiple Extio 3 transmitter units. Multiple computer desktops can then be viewed and controlled simultaneously from an Extio 3 receiver unit with a single keyboard and mouse. Moving the mouse across a multi-screen desktop of up to four displays shifts control “instantly” from one computer to the next. Matrox will show Aggregator Mode in action at Integrated Systems Europe (ISE) at stand #11-D120, showcasing its products

for a wide range of control room applications including process control, industrial automation, military and defense, broadcast, and more. Extio 3’s Aggregator Mode supports computers with multiple video outputs, including independent or stretched desktop mode, while remote user-station setups can be arranged in various configurations such as 4x1, 3x1, 2x1 or 2x2. Guest connections are also available for collaborative environments, where different users can

connect to the same source system simultaneously, with or without USB control. Aggregator Mode is built into Matrox Extio 3 IP KVM extenders. It does not require any additional software to be installed on the source computer system, ensuring that all system certifications remain intact for a truly simplified KVM installation. Aggregator Mode is now available as a free Extio 3 firmware and Matrox Extio Central Manager software upgrade.


“Control room operators are often required to monitor and access information from multiple computers to complete their daily tasks, and Aggregator Mode helps streamline their workflows by providing seamless control of multiple systems,” said Caroline Injoyan, business development manager, Matrox. “This efficiency-boosting functionality eliminates unnecessary distractions so operators can focus completely on the task at hand.”
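The multi-system behaviour described above — the cursor sliding off the edge of one display and control shifting "instantly" to the next computer — can be illustrated with a small, purely hypothetical sketch. The function and layout names below are ours for illustration only and are not part of any Matrox API; they simply show how a receiver might map a cursor position on a combined 4x1 or 2x2 desktop to one of the aggregated source systems:

```python
# Hypothetical sketch of the edge-crossing logic behind a multi-system
# KVM layout: the receiver arranges source systems in a grid and routes
# keyboard/mouse input to whichever system the cursor currently sits over.

def system_under_cursor(x, y, cols, rows, screen_w=1920, screen_h=1080):
    """Map a cursor position on the combined desktop to a source index."""
    col = min(x // screen_w, cols - 1)  # which column of displays
    row = min(y // screen_h, rows - 1)  # which row of displays
    return row * cols + col

# 4x1 layout: four displays side by side, one source system each.
assert system_under_cursor(100, 500, cols=4, rows=1) == 0
assert system_under_cursor(2000, 500, cols=4, rows=1) == 1  # crossed one edge
# 2x2 layout: moving below the first row hands control to the third system.
assert system_under_cursor(100, 1200, cols=2, rows=2) == 2
```

The point of the sketch is only that no software runs on the source computers: the routing decision lives entirely in the receiver, which matches the article's note that system certifications remain intact.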

Atlona ships the second generation of its Velocity AV control platform and announces a new scheduling touch panel Atlona, a Panduit company, has announced the immediate availability of Velocity 2.0, the next generation of its fully IP-enabled platform for AV control, room scheduling, and asset management. It features a new architecture to support various room and application requirements, several of which Atlona will demonstrate within its

ISE 2020 stand (5-T50). The new release integrates several new capabilities at no extra cost, including room scheduling, unlimited BYOD usage for AV control, integration with leading soft codec platforms (Zoom Rooms, Cisco WebEx), AV asset management, and remote monitoring and management from the cloud. At ISE 2020, which marks the public debut of Velocity 2.0, Atlona will showcase these and other benefits in multiple, live demonstrations across the stand: • A standalone environment that highlights Velocity AV system control configurations, room booking and scheduling applications, and integration with the Atlona Management System (AMS) for configuration and management of Velocity and IP-connected AV devices. • Integration with Atlona OmniStream AV over IP, Omega presentation and collaboration products,

and Opus HDR distribution and matrix products with a focus on video wall management, source routing, and ease of use for system users and integrators. “These demonstrations will communicate the speed and accessibility that represents Velocity’s ease of use when managing AV operations over IP networks,” commented David Shamir, Director of Product Management, Atlona. “They will show how installers and end users


can further simplify the transition to all-IP AV control, enable holistic management of AV components, and execute fast, conflict-free room bookings in meeting spaces and venues.” The updated Velocity System platform includes new hardware (VGW-HW) and software (VGW-SW) server gateways to serve AV control installations over the network. Atlona has also announced the AT-VSP-800 8” room scheduling and AV control touch panel, which will ship in early February. With integrated bezel LED lighting, the panel has been designed for lobbies, hallways, or meeting space entrances that require a quick and easy way of visualizing room availability at a distance in scheduling applications.

Absen miniLED

Alternatively, the panel can be used within huddle spaces, conference rooms, classrooms, lecture halls and other spaces that require AV control. In these cases, the lights on the outer frame of the touch panel can be used to reinforce awareness of AV control functions, such as AV mute or divisible room state. Systems integrators and tech managers can configure the AT-VSP-800 for either room scheduling or AV control within the Velocity System Gateway, eliminating the expense of purchasing and managing separate software platforms.

Absen to unveil the next evolution of MiniLED LED display manufacturer Absen will

return to the RAI in Amsterdam to present its latest range of MiniLED products for the fixed installation and rental markets, as well as introduce a large range of new and enhanced products. Absen will occupy 324 sqm of floor space, representing the company’s largest ever presence at the show. Spread across two neighbouring stands, these will respectively be dedicated to fixed installation (12-C50) and rental solutions (12-E80), though the theme of “Evolution. On Display” and the focus on MiniLED will remain consistent across both sections. Absen’s MiniLED displays are ultra-fine pixel pitch LED panels, incorporating 4-in-1 pixel configuration (IMD) and common cathode technology. According to Absen, demand for Absen MiniLED products has been very high since the launch of the CR series in 2018, and intensified significantly with last year’s launch of the Aries (AX) series, a sub-2mm


MiniLED rental display. Absen MiniLED now comes as standard in four core product lines: the Aries (AX), Acclaim (A27 Pro) and Control Room (CR) series, as well as the newly launched HC series for mission-critical environments. The A27 Plus has been Absen’s best-selling fixed install product, seeing over 300% growth in 2019. The Acclaim series is Absen’s flagship display for indoor fixed installation, predominantly in corporate and retail environments. This year’s ISE will see the series become available as a MiniLED range – the A27 Pro series. Meanwhile, since its launch at ISE 2019, the AX1.5 has become “the best-selling” 1.5mm MiniLED rental product on the market. “Due to popular demand”, it will now be available in 1.2 and 1.9mm varieties. Finally, the CR series will now be available in 0.7mm pixel pitch, which represents the company’s smallest pixel pitch and highest resolution display. As well as this improvement, Absen will

also present the HC series, a new 0.9mm MiniLED display for the high-end control room market. Absen will also launch a new product for the Digital Out-Of-Home (DooH) advertising market. The AW series is Absen’s new outdoor fine pitch LED for light box displays, bus shelters and other types of urban street furniture. The 5,000nit AW series will also act as a “smart” pole display to cater for the growing trend of LED display and 5G technology integration. Absenicon, the meeting room solution unveiled at ISE last year, is also back with a slimmer frame, a flatter surface and better ease-of-use. It will be available in four standard sizes. In addition, Absen’s stand at ISE will also be showcasing existing products, including Polaris (PL), a rental series for indoor and outdoor applications, the N-Plus and K-Plus series, which are suited for retail and corporate applications, as well as various LED solutions for the outdoor and sports market.

Absen is also due to partner with several providers at ISE, with 7th Sense supplying its Delta Media Server – Infinity – to drive its primary booth at ISE, whilst Green Hippo and Notch will be showcasing an interactive large-format LED wall on the rental booth using Absen’s multi-award-winning Polaris PL2.5 Pro and Green Hippo’s Hippotizer Media Server. Absen products will also be on display on the Novastar booth (8-G160).

Digital Projection wants to “change the game again” Digital Projection will return to the RAI in Amsterdam this February to give the first public EMEA showing of its Satellite Modular Laser System (MLS) – in Hall 1, F90. The company is also due to show two of its MultiView setups on the ‘VR at ISE’ section of the tradeshow, located in Hall 14. As promised, the Satellite MLS will make its full EMEA debut at this year’s ISE. This new technology offers a small


number of building blocks that allows users to address a wide range of applications, from single projector installs to complex, multi-channel domes, caves and simulators. The cornerstone of the system is the separation of the light source, with its associated power and thermal management, to a remote location, which enables a small, compact projection ‘Head’ that only contains the minimal optical and video processing, and consumes “very little power”. By separating the projection Head from the light source and linking the two by fibre-optic cables up to 100m long, Digital Projection’s latest innovation offers new options, particularly where space and access are restricted. The Satellite MLS will be developed around WUXGA, Native 4K and 8K resolutions and will also incorporate Digital Projection’s ‘MultiView’ technology. Given the

modular design of the system, there can be a one-to-many, or many-to-one relationship between the light sources and the projection Heads. A new feature for ISE is the VR area in Hall 14, Stand B100, which will allow attendees to explore the latest in virtual collaboration, as well as experiencing an interactive VR ride. Digital Projection is set to demonstrate two of its MultiView VR systems, which are based around the company’s INSIGHT 4K HFR 360 projectors. With MultiView 3D projection, a single projector, with fast frame rates, can accommodate multiple viewers, each being tracked and each having a view of the image that remains appropriate to their changing position. This allows the users to see and interact with each other. The INSIGHT 4K HFR 360 offers both native 4K and high frame rate.

One MultiView system will enable attendees to walk around and manipulate a virtual object in a number of ways, while the other will demonstrate how the technology can be used in the visitor attraction space by allowing people to virtually explore ancient artefacts and buildings. Finally, Digital Projection E-Vision Laser projectors will provide the visuals as part of an interactive VR theme park ride. This attraction is designed and run by Lightspeed Design and makes use of DepthQ technology to plunge users into an immersive aquatic environment. The E-Vision projectors will be teamed with a VIOSO Panadome screen at Stand B120 in Hall 14. ISE attendees will be able to witness a variety of


other Digital Projection technologies up close, including a giant 4K LED wall.

Panasonic wants to offer users “The Freedom to Create” Panasonic will show its latest and widest range of AV technologies at Hall 1, Booth 1-H20 at ISE 2020. The company will divide its booth into several areas. In “The Freedom to Entertain”, Panasonic will build a live eSports arena. Its show will combine live entertainment solutions and industry innovations, such as a live IP-based switching solution, an 8K ROI camera, a 4K switcher, 4K laser projectors and monitors. Furthermore, the company will showcase its 50,000-lumen 4K laser projector and a preview of the upcoming “super compact” 30,000-lumen projector. Inside “The Freedom To Engage”, Panasonic will recreate the interactive real-time digital museum exhibit that was already part of ISE 2019. The installation will combine Panasonic’s laser projectors with the

“world’s first” zero-offset Ultra Short Throw lens with zoom and shift, and high-end 4K displays. Moving on, “The Freedom To Innovate” will feature the brand’s unified AV technology solutions. Attendees will be able to explore Panasonic’s comprehensive range of projectors, lenses and professional displays. Education will also play a fundamental part at Panasonic’s booth. “The Freedom To Educate” will show how Panasonic’s visual and ProAV technologies can enhance learning, freeing educators to build student-centered, active environments (scale-up classrooms). In addition, “The Freedom To Collaborate” will feature interactive meeting rooms with the company’s latest innovations.

AV Stumpfl to showcase new PIXERA v1.6 and projection screen systems AV Stumpfl®, Austrian AV technology manufacturer, will present their latest projection

Panasonic PT-RZ21K.

screen products and a v1.6 preview of their next-generation media server system PIXERA, at ISE 2020 in Amsterdam (#1-H5, #1-H10). PIXERA is a 64-bit system for real-time media processing, compositing and management. It combines a “powerful” render engine with a “revolutionary” GUI approach. Some of the highlights of the new and upcoming PIXERA features that will be presented at ISE include: 1. Live Preview Editing: This feature lets users edit timelines in the preview window while the output shows content from a different section of the


timeline. This allows changes to running shows to be previewed by the operator and then blended into the output on the fly. 2. Game Engine Integration: PIXERA will be able to natively host both Unity and Unreal game engines. This gives users the ability to use projects they have created with these authoring and rendering environments. 3. Dynamic Softedge: A softedge blend can be calculated automatically per frame, using the projector and screen information. This leads to a quick setup time for static surfaces, as well as giving users the ability to use blends on moving surfaces. 4. Direct-API Tracking Support: A new area of the PIXERA API gives more direct access to objects as they are rendered by the engine, making it possible to realize advanced tracking scenarios.

In addition to showing exciting new PIXERA software features, AV Stumpfl will also present the new PIXERA two RT hardware, as well as PIXERA mini, PIXERA one and PIXERA two. Built to fit into the same compact chassis as the PIXERA two server, the new PIXERA two RT server offers more processing power for demanding realtime graphics projects. Its rendering performance combined with NVMe (non-volatile memory express) read speed of up to 10GB/s make it “the fastest comparable media server on the market”, according to the press release. The PIXERA two RT allows for the playout of six simultaneous uncompressed 4k60 8-bit streams or four uncompressed 4k60 10-bit streams. ISE visitors will be able to enjoy a short video

design and projection mapping show at the AV Stumpfl stand, courtesy of the Portuguese creative studio OCUBO, who created the content especially for ISE 2020. PIXERA will be used as the stand's mapping and playout system. AV Stumpfl will also show ISE visitors a variety of their projection screen systems, including 4K and 8K projection setups. In addition, AV Stumpfl will showcase its AT64-SHIFT screen system leg. With the AT64-SHIFT, it's possible to assemble a large mobile projection screen “in minutes” and to adjust its height “in a matter of seconds”. No extra measuring is needed to ensure that the frame is level, thanks to a marked height scale. Based on the same principles as the T32-SHIFT, the AT64-SHIFT projection screen legs can be used for much larger mobile projection screens and are compatible with all mobile AV Stumpfl screen systems.
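The playout figures quoted for the PIXERA two RT above can be sanity-checked against the stated 10GB/s NVMe read speed with some back-of-envelope arithmetic. Assuming full-frame 3840x2160 packed RGB with no chroma subsampling — 24 bits per pixel for the 8-bit streams and 30 bits per pixel for the 10-bit streams (actual pixel formats and packing in the product may differ) — the aggregate read bandwidth stays just under that ceiling:

```python
# Back-of-envelope bandwidth check for the quoted PIXERA two RT figures.
# Assumptions (ours): 3840x2160 frames, packed RGB, no chroma subsampling.

W, H, FPS = 3840, 2160, 60

bytes_8bit = W * H * 3          # 24 bits/pixel -> ~24.9 MB per frame
bytes_10bit = W * H * 30 // 8   # 30 bits/pixel -> ~31.1 MB per frame

six_8bit = 6 * bytes_8bit * FPS    # six uncompressed 4K60 8-bit streams
four_10bit = 4 * bytes_10bit * FPS # four uncompressed 4K60 10-bit streams

print(six_8bit / 1e9)   # ~8.96 GB/s
print(four_10bit / 1e9) # ~7.46 GB/s; both fit under a 10 GB/s read speed
```

Under these assumptions, both quoted stream counts sit comfortably within the claimed storage throughput, which is consistent with the press release's framing.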






Whether in the hills of Monte Carlo, on the mountains of Mexico, in the snow-covered expanses of deepest Sweden or in New Zealand's wilderness, WRC always succeeds in offering a complete production, one that in 2018 reached an audience of 825 million viewers across 155 TV markets. This job is not free of challenges, all the more so considering the promise of the All Live OTT service, which gives viewers access to full coverage of each stage, the onboard cameras and all images taken from the helicopter. We talked with Florian Ruth, Director of Content & Production, WRC Promoter GmbH, a few days before the start of the WRC 2020 season to discover all the details of the challenging broadcast behind this exciting championship.

FIA WRC is a really special motorsport competition, especially when it comes to broadcasting. In your opinion, what makes it so special? What is the big deal about WRC? The challenge we have at the WRC is the overall logistics of the races. In comparison to other motorsports which happen at a circuit, we go to races where there is usually nothing, so we have to build all our setup from scratch. The biggest challenge is that our race is not at a circuit where you have three fiber cable


camera positions and you know exactly the shot

beforehand. We have to cover a huge terrain. The


rally is spread across a wide area: the terrain we use can be up to 200 kilometers away from the service park where we produce the feed. So we have to cover all areas with our setup. Sometimes, the stages go one day 100 kilometers north of the service park and the next day 50 kilometers south. We have to deliver the same quality and the same specs as in circuit racing, where you have to cover 4 kilometers of racetrack. That is the difficulty of what we are facing at the WRC.

You have worked as Director of Content and Production at WRC Promoter for the past four years. How has the production evolved during this time? When I joined the production, it was a very good production, but very static. Three years ago it was produced in exactly the same way as it was during the last decade. Mainly, we had three crews that covered positions, brought back the footage for highlights programs and then produced between one and three live stages.

Those were produced in a classical, traditional way: they drove a huge OB van to the end of the stage, wired a few kilometers, covered some parts of the track and covered the rest of the track with helicopter and onboards. What I figured out… I have done a lot of different sports productions in my career, in all kinds of sports. For example, I produced many Red Bull events, like the Red Bull Air Race. For me, in terms of the development of these media, it was all about real-time communication and


instant communication. Fans out there, when something happens, want to see it right away. In total, this championship has about 4 million on-site spectators, so when something happens there is someone recording it with an iPhone and posting it on YouTube. And we had the problem in our production that, even when we had a camera

there or received the footage from a fan, we had to bring it back to base to capture, edit and upload it. This sometimes simply took too much time. Since I came to the championship, I have been pushing for mobile and dynamic production. The real question was how we could capture the entire rally live, or as close to live as possible, in order to

deliver the "WOW moment" to the fans: the things that have happened, in the best quality and in the best way. Consequently, in the last three years, I have completely changed the production from a pretty static production to a very mobile and versatile one. All our production now happens completely wirelessly. Everything happens more or less via RF (Radio Frequency). The crews we have at the stages, our helicopters, the live onboard cameras… everything is linked to a relay plane circling above the stage and sending all those links back to the service park, where our main hub is located. That is where we produce the feed. Wherever the rally goes, we follow it: we are mobile, we are flexible. So, if the rally goes 100 kilometers to the north, we fly there with the plane, with our helicopter, with our whole setup.




In addition, two years ago we launched, with this new setup, a product for our OTT platform WRC+ called WRC+ All Live. And "All Live" really means everything live. Rally is quite a complex sport; it is not like soccer, where you have 90 minutes and you need to process those 90 minutes and maybe two interviews before and after. The rally starts Thursday evening and ends Sunday midday, there is constantly something going on, and we capture everything. So, at every race, on our OTT platform, we broadcast between 25 and 30 hours live. It is a lot of live content. Across these feeds we capture more than 350 hours of live content.

We would like to talk about planning. How do you prepare for each event? What about when it is a brand-new rally? Do you study the orography and connection specifications of every stage? We have a so-called "Recce Team". The drivers and the manufacturers

recce the stages before the races and we do the same. We go to every race about three months before. Right now, for example, our recce team is in Rally Mexico, which takes place in March. They have to look at the service park and then drive every stage and assess every stage. They measure RF connectivity and distance to our service park. With

our RF links and the plane we can cover a distance of up to 100 kilometers. If we go beyond 100 kilometers, we need a hop in between: we downlink somewhere in the middle and then re-uplink the feeds. They also check the 3G/4G availability in those locations because, obviously, we also work with 3G/4G/LTE backpacks. We work with


LiveU technology. With every stage, they define each position. For example, they define the position of the cameras and mark it all down on maps based on GPS positions. Then, they write all necessary data: RF availability to our service park, 3G-4G availability... In addition, they also are in charge of the access roads. The rallies can

sometimes be very long. We also have the support championships: WRC2, WRC3, Junior WRC… Sometimes, there are more than one hundred cars going through the stages. That means that when we have crews at the stages, they cannot leave after the top ten cars. They have to wait until the whole rally passes… and this takes

hours. The rally is very complex and difficult in terms of planning, because there are so many factors you have to consider.

You are currently working on a standard 1080p configuration. Are all your feeds in 1080p? Do you plan to adopt technologies such as HDR or 4K? Maybe in the future, but not at the moment. Working with 4K would currently be too difficult due to those distances. Obviously we work with codecs, such as HEVC, but instead of transporting 4K signals we prefer to transport four HD signals. I would rather have four different HD angles from the car, so in each car I have up to four cameras. That is why we still work with HD, and we will probably continue working with HD for the next two years… Then we will see how the technology has developed.

Regarding data transmission, do you think that 5G could be a solution for the WRC? It will definitely help


everyone in the broadcasting industry, but for us 5G is not the solution to everything. Even if 5G comes, the question is how good 5G would be in the mountains of Monte Carlo, how good it would be on an island such as Sardegna, how good it would be in the mountains of Mexico. That is the big question. I still do not have the answer and I think no one has it, because nobody knows how 5G will be rolled out globally in the next couple of years. I think that RF technology is, and will remain, the only reliable means of content or signal delivery for the next two years.

What is the standard of production of a WRC rally? In total, we produce with around 75 to 80 cameras. A huge chunk of them are the onboard cameras in the cars. Each WRC1 car has a minimum of three onboard lipstick cameras. We equip all 15 cars, which makes a minimum of 45 cameras. They are not bigger than your finger. We can build

them in anywhere in the car: in the wheel arch, on the wings, outside, inside… We like to capture every possible angle inside and outside the car to get a variety of nice shots. We also have regular OB van cameras: we operate a full OB van with between 10 and 15 action cameras. We also shoot high-speed shots with Sony cameras and, obviously, we use GoPros as well. In the service park, we have some more RF broadcast cameras, studio cameras for the TV studio and remote-controlled Bradley cameras. We also work with a helicopter carrying a Cineflex camera.

How are all those onboard feeds ingested into the live feed? It is all RF technology. We have boxes from our production company, NEP. We send the content from an antenna on the roof of the car directly to our relay plane, and from the relay plane we receive those RF signals directly at our TV compound in the service park.


Other motorsport TV production companies are already working with remote production. Are you moving to remote production soon? The step will be made. Midway through last season, together with our production partner NEP and our fiber provider TATA Communications, we started performing all our postproduction remotely in our production office in London: all our highlights programs, web clips, TV news, etc. And in mid-2020 we will start producing the live program completely remotely. Our main gallery, our main MCR, will be in London and will no longer be on-site. We will bring all the individual signals back to London and mix the world feed there. We will also produce the WRC+ All Live product in London. Most of the largest global productions are working with remote production at the moment, and that is exactly what we are doing too. For us, to be fair, it is quite a challenge. Also, as I said before, all our events are


very remote. We are not in big stadiums; we are not always close to big cities. It will be quite a big challenge for our fiber provider, TATA Communications, to bring high-speed fiber to those remote locations. So yes, in 2020 we are planning at full speed to go remote with our live production.

Do you think anyone has ever done a remote production like the one you are planning? I do not think anyone has done this before. This will be one of the most challenging remote productions ever made.

What about the sound? How do you handle this important part of the transmission? What communication system does the crew use and how do you manage this crucial aspect of coverage? Obviously, the key sound of the championship is the sound of the car. At our live TV stations, where we have our OB van and our action cameras, we also have external microphones with which we really record the sounds of the car. We also record through the cameras. A bigger challenge for us in terms of sound is the other parts of the stages. The average stage is between 15 and 20 kilometers long; with the cameras, perhaps we can capture only two or three kilometers of this. The other parts are captured by the helicopter and the onboard cameras. We have external microphones built into the car and into our recording device, the box placed in the car. Every car carries a "broadcast pod" that includes microphones as well as all the recording devices for the onboard cameras. In addition, we record the car's telemetry there: speed, gear, brake, handbrake and all that stuff. Also very important for the sound in rallying is the communication between the driver and the co-driver. Therefore, our sound mix consists of natural sound, general sound, the engine sound of the car and the

communication between the co-driver and the driver, which is mainly the co-driver reading out the pacenotes. That is a very essential part of the sport. With regard to communication between teams, we use regular radio communication. We also offer a radio communications service for the teams through our plane. As I said, we are in some remote places where there is no cellphone connectivity, so radio is the only reliable means of communication we have at those distances and in those terrains. For communications between our production hubs, between the on-site production hub and the remote production hub, we use IP communication.

What graphic system are you currently using? Do you plan to increase the use of AR graphics? Yes, absolutely. At the moment, we work with a Ventuz graphic system. I am very happy with the performance of Ventuz. We will use AR technology for our mobile on-site

studio. We have already conducted several tests this year and we have a partner that develops systems with us and is working out how to make AR really fit our use cases during the rally. The service park is quite a big area: there is a media zone, hospitality, etc. We are very mobile in this area, so we will use AR to show results, to show parts of the car, to show parts of engines, to show drivers when we talk about them… This is only possible with AR and I think it is the most creative and state-of-the-art technology we can use today in studios.

How has your OTT platform WRC+ evolved? Did you meet your goals?


We launched the platform six years ago. When WRC+ was launched, the main idea was to give fans the opportunity to rewatch all the onboard cameras. Rallying has a huge community of hardcore fans who want to see all these onboard cameras from every stage and from every angle, and on this platform we basically offer them the opportunity to do so. During the rally, they can analyze all the cameras of their favorite driver, or of the other drivers, to compare them and so on. That was the main purpose in the beginning, but the OTT platform has since developed further. As I said at the beginning of our interview, we have launched the All Live program, which was the game changer for our OTT platform. Now the platform is basically a completely live platform with 25 to 30 hours of live rally coverage. You can watch everything that happens on the platform.

How will the 2020 FIA WRC season technologically surprise us? Obviously, we will launch some new graphic elements. We will have more line cut technologies at the stages to offer fans more sequences of the car. We will also work on new camera angles… And, in terms of technology, as I said, in the middle of the season, we will completely switch to a remote live production, which is the biggest challenge for us in 2020.

What’s the biggest challenge you ever had to face when broadcasting FIA WRC? The biggest challenges for sure are issues that we cannot influence, such as weather, for example: if there are bad

thunderstorms or just really challenging weather conditions, the helicopter may not be able to fly. In the very worst case, not even the plane can fly, and the plane is the backbone of our production. If that occurs, we have to become very creative to still produce our programs, although they will look different. When we lose the plane, we lose all our RF links, so we are only left with the ground cameras, which are connected to the OB van at the stages and only cover a very small part of the track. Another challenge we have faced is when something happens on the track and the race is delayed: you do not know when the race will continue and you still have to fill the broadcast with an appealing program. This, for sure, has been one of the challenges we have had, but we have survived in a very good way.


ANALYZING PQ AND HLG At TM Broadcast we have already published a number of articles about HDR (High Dynamic Range), although always within the context of UHD (Ultra High Definition). Today we would like to delve a bit deeper into HDR and, in particular, into the different gamma curves existing at present: PQ and HLG.

By Yeray Alfageme, Service Manager Olympic Channel

As TV has progressed, large changes in both standards and technologies have dramatically improved image quality. After the momentous introduction

of color, a series of developments followed, such as the switch from analogue to digital, from 4:3 to 16:9, and from SD to HD. With each further step, broadcasters have

had no alternative but to cope with challenges in workflows and compatibility issues with existing equipment. As modern broadcasters have been switching from HD to


UHD and from SDR to HDR, this change has become even more significant. As happens with any new development, several approaches have been adopted for HDR production. Two main formats exist: PQ (Perceptual Quantization) and HLG (Hybrid Log Gamma). Being very different concepts, each of them offers unique advantages for different applications. PQ is capable of representing a higher luminosity range for a given bit depth (10 bits, for instance) by using a non-linear transfer function designed to adapt as closely as possible to the human visual system. The goal is a system free of harmful visual effects, such as quantization errors. There are several PQ coding variants, such as HDR10, which uses 10 bits and static metadata, and its elder sibling HDR10+, which adds dynamic metadata that improves the representation of tones in each image. The main manufacturers supporting HDR10 are Panasonic, Hisense and Samsung. On the other side we have Dolby Vision, a proprietary implementation supporting 10-bit and 12-bit representations in addition to static and dynamic metadata. This format is licensed by most manufacturers, both for TV sets and Blu-ray devices, and many producers have adopted it as a cinema production standard.
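To make the "perceptual quantization" idea concrete, the PQ transfer function standardized in SMPTE ST 2084 and ITU-R BT.2100 can be sketched in a few lines of Python. The constants below are the published ST 2084 values; the function names are our own for illustration. The EOTF maps a normalized signal value to absolute display luminance in nits (cd/m²), which is precisely why PQ is screen-referenced:

```python
# PQ (SMPTE ST 2084) transfer function: non-linear signal value E' in [0, 1]
# <-> absolute display luminance in nits (0 to 10,000 cd/m2).
# Constants as published in ST 2084 / ITU-R BT.2100.

M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_eotf(e: float) -> float:
    """Map a normalized PQ code value (0..1) to luminance in nits."""
    ep = e ** (1 / M2)
    y = max(ep - C1, 0.0) / (C2 - C3 * ep)
    return 10000.0 * y ** (1 / M1)

def pq_inverse_eotf(nits: float) -> float:
    """Map luminance in nits back to a normalized PQ code value."""
    y = nits / 10000.0
    ym = y ** M1
    return ((C1 + C2 * ym) / (1 + C3 * ym)) ** M2
```

A full-range signal value of 1.0 corresponds to the 10,000-nit PQ ceiling and 0.0 to black; the steep curvature at low code values is what concentrates code words where the eye is most sensitive.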

On the other hand, HLG is based, at least in part, on the transfer curve of the older CRT monitors. As that curve is not optimized for the human visual system and some coding areas are not considered (especially the darker parts of the image), a lower brightness peak is available. Therefore, HLG cannot achieve the same dynamic range as PQ, although it is easier for existing TV sets to handle at a standard luminosity level.

HLG also attempts to provide backward compatibility with HD-SDR and 4K-SDR. One of its biggest advantages is that no metadata accompanying the images is required for representing the image properly. Carrying metadata through traditional linear broadcasting channels (either terrestrial or satellite) or through OTT platforms is not an easy task, not to mention the issues associated with changing content, as is the case when switching to commercials, overlaying graphics and similar items.

Another major difference between PQ and HLG is the fact that PQ is referenced to the screen, while HLG is referenced to the image. But what in the world does this mean?

The difference between a screen-referenced transfer function and an image-referenced transfer function Under SDR (709), the signal that used to be carried (and is still carried for the few remaining SD broadcasts) was referenced to the brightness of the image, which results in an image-referenced transfer function. Under PQ, however, the signal


Graphic 1

carrying the image's luminosity information is referenced based on the screen in which the image is to be displayed and this obviously results in a screen-referenced transfer function. This is because the PQ gamma curve is referenced to the luminosity being perceived by viewers and it is optimized for this purpose. The human eye is capable of adapting to the image’s luminosity. We do not see in an identical way in daytime or during the night and the same occurs with the PQ gamma curve transfer function as well.

HLG inherits the image-referenced transfer function, as the SDR (709) standard already did, this being one of the reasons for its backward compatibility. This is not optimal for the human visual system, but it is agnostic with regard to the level setup each viewer may have on their own home set, which allows an HDR image to be displayed correctly in any environment.

Some clarifications on graphic 1:


Graphic 1 is a theoretical representation of the conceptual differences between the two standards (PQ and HLG): the former is screen-referenced while the latter, as was the case with SDR (709), is image-referenced.

• OOTF (Optical-Optical Transfer Function): the transformation that takes place from the camera to the screen.
• OETF (Opto-Electronic Transfer Function): the transformation that takes place between the camera and the signal.
• EOTF (Electro-Optical Transfer Function): the transformation that takes place between the signal and the screen.

In a PQ environment, if content has been mastered on a 2,000-nit monitor and is finally viewed on a 500-nit OLED TV set, that content must be adapted from the source's 2,000 nits to the target's 500 nits, which is awkward and costly in terms of processing. This is one of the main reasons why HLG has become so popular in live production environments. The other reason is that under HLG there is no need to generate two simultaneous signals, one SDR and one HDR (regardless of definition, of course): a single HLG signal is also valid as an SDR signal. The only thing that needs to be done is to ignore the luminosity ranges outside the 709 standard and, voilà, we have our SDR image.
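This "just clip the extended range" behaviour follows from the shape of the HLG curve as defined in ITU-R BT.2100: below the crossover point the OETF is an ordinary square-root gamma, very close to what an SDR display expects, and only the upper half of the curve switches to the logarithmic segment that carries the HDR highlights. A minimal sketch in Python, using the published BT.2100 constants (the function name is ours):

```python
import math

# HLG (ITU-R BT.2100) OETF: scene-referred linear light L in [0, 1]
# -> non-linear signal value E' in [0, 1].
A = 0.17883277
B = 1 - 4 * A                  # 0.28466892
C = 0.5 - A * math.log(4 * A)  # ~0.55991073

def hlg_oetf(l: float) -> float:
    """BT.2100 HLG opto-electronic transfer function."""
    if l <= 1 / 12:
        return math.sqrt(3 * l)          # square-root (SDR-like) segment
    return A * math.log(12 * l - B) + C  # logarithmic (HDR) segment
```

The crossover sits at L = 1/12, where both segments meet at E' = 0.5: everything below mid-signal behaves like a conventional gamma curve, which is why an HLG feed degrades gracefully on an SDR display.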

Format conversions One of the issues with the biggest impact on the final look of an image is the conversions it undergoes between production and display. Most current production environments use 8-bit or 10-bit systems. In a 10-bit environment, coding errors in the image remain hidden behind the electronic noise generated by the cameras, so in practice 10-bit production environments are regarded as adequate for HDR images. In theory, 12 bits are required for generating an HDR image as set forth by the BT.2020 standard, but hardly any production facility or mobile unit is built for this. As happens between HD and SD, and much more evidently in this instance, using gamma conversions between PQ and HLG and vice versa is not recommended, as image artifacts can be inadvertently created. Just a couple of conversions are enough to create obvious coding errors, especially in the darker and lighter areas of the image. This forces us to choose our production standard from the outset and keep it throughout the entire production chain in order to avoid problems. Color implications are left aside for now, as a UHD image with WCG (Wide Color Gamut) forces a remapping of colors for conversion to 709, but we will deal with this some other time.

Conclusions The PQ standard is much better in terms of image quality and the luminosity range that can be displayed. However, if the whole broadcast chain, from camera to final display, is not properly monitored, maintaining a proper representation of the image becomes complicated and costly. Its main representative, Dolby Vision, is a proprietary standard requiring the payment of royalties, which does not help promote widespread use. In HLG, the gamma curve is based on the older SDR (709) standard, extending it in order to display broader luminance ranges. This is not optimal for the human visual system, but it offers backward compatibility and simplicity as far as production is concerned, especially in live events, which has boosted its adoption by broadcasters and producers all around the globe. To sum up, PQ is ideal for environments such as cinema or fiction production such as series, while HLG offers a much simpler adoption model for live productions and general broadcasters.


INTERCOM & TALKBACK TOOLS By CARLOS MEDINA Expert and Advisor on Audiovisual Technology

Most of us have heard some time or other references being made to the theory of communication


throughout the various stages of our education. On this occasion we would like to recall the six elements that are required for communication to take place: the issuer, who issues the message; the receiver, who receives the information;


the channel, or physical means, through which the message is issued and/or disseminated; the code, the system of signals, rules and signs used for conveying the message's content; the message, the content in question (data, orders, words…) through which issuer and receiver contact each other with the aim of sharing it; and the context, which relates to all conditions external to the communication process itself, such as situation,

environment, place and time, but also to the topic or background of what is being communicated. The main purpose of this article is to deal with the channel. Nowadays, the physical medium used for communication between issuer and receiver is clearly shaped by the technology within reach. In fact, society increasingly routes citizens' interpersonal relationships, and thus the exchange of information, through a technological device: the smartphone. An interaction with others is thus created, be it a spoken conversation by means of a phone call, or written conversations, with icons or drawings, thanks to smartphones and the various applications developed around social media.

If these communication factors are taken to a production/business environment, thus establishing an inter-professional relationship, the same six factors are identified. The difference is that hardly any errors can be allowed in the communication process, for there is much at stake, and two things in particular: the economic impact due to the value of the information content being conveyed, and the security inherent to the message itself, given the risk of occupational accidents for the workers and agents involved in the work environment, most especially when citizens, audiences or spectators are present. On the professional side of matters, each of the six factors must be examined: training to foster efficient communication, with an impact on issuer, receiver and message; technological assurances in the channel and in the code being used; and precise, customized adaptation with regard to the context, given that each job has its own peculiar features, varying communication


needs and mixed work environments. A vast number of jobs base their daily routines on a nearly perfect communication process in order to satisfactorily achieve their goals and missions: fire brigades, hospitals, the construction industry, airports, the police, hotels, air, land and sea transport, the army, security agencies, space travel, and countless other work environments. Success is also necessary in the communications, audiovisual, leisure and entertainment worlds. There are many quite different job profiles behind the shooting of a film, a mega concert, a musical, a play, an award ceremony, a mapping, lights and sound show, amongst others. Decision making and the dissemination of communications and operating orders have an impact on individuals and/or groups of people, which can be classified as follows:

• Management professionals (production, security and protocol teams, content

directors and editors, agencies, representatives, authorities…),
• Professionals with more technical profiles (lighting staff, stage managers, sound and video operators, rigging crew, camera operators),
• The arts team (musicians, singers, presenters, actors and actresses, choreographers, dance crew…),
• And, most especially, with the presence and participation of an audience, the spectators attending shows, inauguration events,


ceremonies and live events.

At present, all professional environments and the human teams involved in them have one thing in common: their dependence on technology. This means they all have a bond with the tools, devices and codification used to enable the various professionals involved to properly perform their duties and/or comply with their responsibilities.

A very popular saying goes: information is power. This is undoubtedly quite revealing, but nowadays the proper operation of the technology used for establishing communication, in terms of emission, transmission and dissemination, is at least as important as what the information is or means.

The presence of the mobile phone in our lives has both caused and enabled each of us to be connected and available twenty-four hours a day for everything relating to our personal lives, but also at a professional level. The mobile phone is the channel, the most successful tool in terms of technology used for interpersonal communication. But is it valid in inter-professional work environments? Are all variables under control in terms of efficiency and security when using a mobile phone? Where and when can personal communications (a call, an SMS, an icon…) interfere with or taint professional communication?

The answers to these and other questions lead us to ponder the use of

mobile phones as the only devices, and it therefore becomes necessary to find other technological solutions in order to meet very specific requirements, get work done within budget, act within very short response times and solve situations under very demanding security levels. The solution: intercommunication systems, also known as intercom / talkback. These are equipment and solutions for the transmission of orders, of an internal nature, that is, to be used exclusively by the human team involved in the relevant project, job and/or event. At a first communication level within this type of equipment we find the well-known walkie talkies, popular and frequently used for two main reasons: low financial cost, both at purchase and during the communication process, and enormous ease of use, which translates into a short learning curve.



A walkie talkie is a radio device that may be used both as transmitter and as receiver (this being the reason why it is known as a 'two-way radio'). It was designed for military use, with the purpose of setting up portable communications between communication stations and the various units of the armed forces deployed in other places. All walkie talkies are based on what is known as PTT (Push To Talk), a method of initiating speech on half-duplex communication lines that enables one-to-one and/or one-to-many (group) calls. In the walkie talkie market we find the PMR446. Its use has spread quite a bit in amateur/home environments, as these are very basic pieces of equipment, but they are also a resource used in work environments, as there is an intermediate range of walkie talkies that can be used. Both basic and mid-range devices use a technology known as PMR446 and so they are


named in stores and catalogues. PMR446 stands for Personal Mobile Radio 446 MHz, an open portion of the UHF radio spectrum which can be used with no need to pay a licence fee (fully legal in Europe). There are analogue PMR446 units (NFM) and digital TDMA (DMR) units.
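As an illustration of how the band is laid out: analogue PMR446 occupies 446.0–446.2 MHz with 12.5 kHz channel spacing, giving 16 channels (the original 8 sat in 446.0–446.1 MHz before the band was extended). A small sketch, assuming channel 1 is centred at 446.00625 MHz as in the European channel plan (the function name is ours):

```python
# Analogue PMR446 channel plan: 16 channels, 12.5 kHz spacing,
# channel 1 centred at 446.00625 MHz (446.0-446.2 MHz band).
CHANNEL_1_MHZ = 446.00625
SPACING_MHZ = 0.0125

def pmr446_channel_mhz(channel: int) -> float:
    """Centre frequency in MHz for analogue PMR446 channels 1..16."""
    if not 1 <= channel <= 16:
        raise ValueError("analogue PMR446 defines channels 1 to 16")
    return CHANNEL_1_MHZ + (channel - 1) * SPACING_MHZ
```

Channel 8 therefore sits at 446.09375 MHz, the top of the original allocation, and channel 16 at 446.19375 MHz, just below the 446.2 MHz band edge.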

Their usage is similar to that of the US FRS, but as they operate on different frequencies, the transmitters are not compatible. They feature a shorter communication range, highly dependent on terrain and weather conditions (1.5–3 km, up to little more than 8–10 km), great ease of use and low weight. They have clear sound and dual power (NiMH rechargeable batteries or AA alkaline/lithium batteries), and can be equipped with a charging base and a backlit screen in an attractive, compact design.

PMR446 units can be bought individually or in packs of 4 or 6 (with full compatibility amongst them). Some may include a hands-free kit as an accessory for better handling and increased functionality, a SCAN function and IP protection.

They operate on 8 or 16 channels plus 121 privacy codes (subtones). These kinds of walkie talkies allow a maximum emission of 0.5 W (500 mW) ERP. A more professional range of walkie talkies is available to cater to the most demanding work environments. This range requires a licence in order to use a VHF and/or UHF frequency code (for example: the police, fire brigades, ambulances, security companies, civil protection…). They do not allow intrusion by undesired third parties into communications, which offers higher security, assurance and privacy levels; longer range over greater distances thanks to the use of relays; a higher number of channels; and very high sound quality. Given that the analogue and digital worlds live side by side, there are both types of walkie talkies in the professional range. Amongst them, worth noting are devices featuring a high degree of specialization such as



ATEX (for explosive atmospheres like air mixed with mist, vapours and gases) and TETRA (featuring more advanced encryption and protection systems). These types of walkie talkies have a high IP protection, customizable buttons for speedier, programmed communications, universal charging and battery systems and CTCSS and DCS tones enabling us to choose what signals will be heard and which will not. Radio communications under licence may take us


to the fascinating field of radio amateurs, a service provided by the International Telecommunication Union. Although this matter is outside the scope of this article it is advisable to provide information on their existence as it allows open access to all bands provided by the IARU (International Amateur Radio Union). Radio amateurs have been allocated several segments within the radioelectrical spectrum. These are the so-called Radio Amateur Bands or Bands

allocated to the Amateur Service. Suffice it to say that an operator diploma, a radio amateur approval from the Administration and a station licence are the requirements for operating one's own communication station. There is also the so-called Citizen Band (CB), or Local Band, an unlicensed two-way radio communication service available to all citizens. This band is also known as the 11-metre band (the wavelength corresponding to the 27 MHz frequency) and is located, within the radio-electric spectrum,


within frequencies between approximately 26,900 kHz and 27,400 kHz. The Citizen Band is divided into channels, or fixed frequencies, from channel 1 (26,965 kHz) up to channel 40 (27,405 kHz). One of the trades making the most intensive use of CB27 equipment -with handheld, mobile and base devices available- is long-distance truck driving. Another way to use our mobile phone -although hardly implemented at present- is to turn it into a walkie-talkie through the PoC (Push-to-talk Over Cell

phone) service, which makes use of telephone networks through standard VoIP protocols. And finally, the most relevant and widespread option within the audiovisual industry is the system generically known as the intercom, or talkback. This highly popular solution is normally based on wired installations combining a base station with external stations (remote desktop stations, also known as desk stations) that in turn provide connectivity to the various communication points, also called beltpacks. The base station, together with the external

stations, is the communications hub in fixed installations (such as a TV set, a theatre, an auditorium...). Party-line intercommunication (two-wire, shared) may be found, as well as systems using a digital matrix or equipped with IP technology, turning the unit into a smart remote station. As for beltpacks, these are autonomous units (wired or wireless) equipped with a microphone, a headset, volume controls for listening and talking, and a call-alert LED. There are multiple types of headsets: open or closed; for one or two ears; and with a condenser or electret microphone. The huge advantages of intercom-talkback



systems are: full-duplex (two-way) conversation; immediate, simultaneous, continuous communication without any licensing or airtime costs; operation with several different channels enabling the creation of groups (with the possibility of choosing between talk, broadcast-to-all or mute per channel); HD voice (7 kHz wideband); two-wire and four-wire operation; broadcast (XLR-3) connectivity; 16-bit DSP technology; 48 kHz audio processing at 24 bits/sample; and externally monitored listening. Furthermore, IP-based intercoms feature setup and monitoring software along with compliance with the AES67 standard for audio streaming and AoIP sources (such as CobraNet, Dante, Ravenna, Livewire or Q-LAN), the SMPTE ST 2110 protocol, and balanced or unbalanced digital input (S/PDIF). As for wireless intercom systems, both analogue and digital types exist at present, though manufacturers and users are increasingly choosing


the latter. In this regard, digital wireless systems can operate at 1.9 GHz, on the so-called ISM band ranging from 2.4 GHz to 2.48 GHz (three RF channels), or on the unlicensed 5 GHz UNII band (up to fifteen RF channels). Very important is the capability of the base station to extend coverage through the number of repeaters to which it can be connected. Quoting American actor Chris Hardwick: “We are not in the information age any longer. We are in the information management age instead.” All the systems and technologies described above are essential channels in the communications scene. Very often we rely on the proper operation of every single one of them, but we cannot omit the fact that the way they are used matters as well. We all know that technical failures or interference may occur in the channel, so we must make sure that the other factors intervening in the communications process

will be efficient. A key one is the need to raise awareness in both sender and receiver about the message, the context and the way of using the technical means -the channel- through some advice aimed at efficiency from the outset:
• Only messages understood on the first try are efficient.
• Use brief, specific messages.
• Use standard orders and protocols approved by the team of professionals, thus avoiding the customization of messages (communications tuning).
• Intercommunication systems and devices are not toys for personal use, and their proper operation is a necessity; issuers and receivers should know all the functions and capabilities documented by the relevant manufacturers.
• Intercommunication means that everything is being heard, so we must avoid any messages outside the professional context giving rise to the communication.
• Conversations must be simple, clear and decisive; no hurry whenever the legibility of the message sent from talker to listener is crucial.
• Avoid disseminating orders that are mistaken or simply made up, which may cause doubts and/or false communications.
• Do not transmit any information that may hurt the sensitivity of the various users being interconnected. Avoid conflictive topics (politics, religion, sports, gossip, trade-union matters...).
• Be careful with the tone of communication. Always adopt a normal, soft, conciliatory tone that conveys calmness and confidence.
• Know how to listen and bear in mind who will receive the message, as many times there will be several receivers. This entails not interrupting communication processes.
• Do not leave communication lines open if absent. Always take care to let everyone else know we are no longer present or using the communication system.
• Always reply if required to do so in the communication process, in accordance with the message and coding protocols in place.
In sum, the quality of communication is everyone's responsibility: we must remove or avoid any elements, phenomena, variables and situations that do not favour a constant, clean and efficient flow. This is known as noise (both technical and communicational). Workflows succeed thanks to communication that is adapted to the technical means known to be required. And let us not forget some basic recommendations: check the power supply of the intercom and talkback tools (batteries or mains), check buttons, microphones and listening devices, adjust parameters such as power and volume, carry out prior tests, and determine the role of each participant in the communication process.
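The band-name arithmetic mentioned earlier is easy to verify: wavelength is simply the speed of light divided by frequency, which is why the 27 MHz Citizen Band is called the "11-metre band". A quick sketch:

```python
# Free-space wavelength check for the CB ("11-metre") band.
C = 299_792_458  # speed of light in m/s

def wavelength_m(freq_hz: float) -> float:
    """Free-space wavelength in metres for a frequency in Hz."""
    return C / freq_hz

# CB channel 1 (26,965 kHz) and channel 40 (27,405 kHz), per the article:
for label, khz in [("CB ch 1", 26_965), ("CB ch 40", 27_405)]:
    print(f"{label}: {wavelength_m(khz * 1e3):.2f} m")  # both close to 11 m
```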



Calrec Brio 36: Small but a real killer


By Yeray Alfageme, Service Manager, Olympic Channel

Today at TM Broadcast we analyze the Calrec Brio 36, a console targeted 100% at broadcast that features exceptional processing power and connectivity for its size and price. Let's get down to it.

One of the first things that surprised me when I switched on and started operating the Brio 36 is the absence of a monitoring control. In fact, an external monitor needs to be connected -through DVI in this instance- in order to have a monitoring display. This means that, in order to operate the console, both the console and an external screen are required. This could be seen as a drawback, but thanks to this circumstance the console remains really compact and lets us place level monitoring flexibly in our workplace. Once a display is connected it is fully customizable, of course, both in regard to channels and to metering. From typical level meters to loudness, peak meters, and even combinations of channels, buses and masters, everything is possible, because this is a fully customizable, flexible interface.

The second thing that strikes the eye is the faders: their quality, and a small detail we review below. In the Brio 36, faders are not touch-sensitive, meaning the active channel does not change just by touching a fader control; but we must remember this is a console for broadcast. In my opinion, touch-sensitive faders are a requirement for live-sound consoles, as in that environment reaction time is critical and the number of channels to handle will typically be higher than in broadcast. That makes it a must to save every second of operation, whether live or in prior rehearsals. In broadcast, however, where both the number of channels and live improvisation are somewhat more limited, this feature is not a real must. It is true that bigger consoles offer the option even for broadcast, but let's bear in mind the range in which the Brio 36 falls and its approximate price, well below that of the consoles found in large English or US studios yet adequate for 90% of day-to-day productions.

Aside from this, the Brio 36 has 3 sections of 12 faders each, two in the main area and the third on the upper side, adjacent to the touchscreen, next to the USB connections and other general functions. Right below this screen sit the physical controls relating to the display; still today some nostalgic operators -myself among them- prefer to turn a knob or push a button rather than swipe a finger across the screen or pinch to pan a channel.

Each fader module is really easy to operate, featuring AFL (After-Fade Listen) and PFL (Pre-Fade Listen) buttons, an ON/Cut button (not to be confused with a MUTE button), two user buttons, and the "Access" button for channel selection. Each channel has a small screen that displays channel information and even some degree of metering, but only for information purposes; I do not think it can be used as a real monitor. The one thing I really loved about the fader -


some may think this is nonsense, I know- is the slight detent that can be felt (and disabled, if preferred) when the fader passes over zero. I am a bit of a purist and I like to operate with levels always set to zero and then adjust gains, especially in broadcast. When dealing with live music I tend to be "looser", so discovering that I can really have all channels in my production set to zero was something I liked and did not expect in a console within this range.

A welcome surprise, no doubt.
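The zero-detent workflow described above -faders parked at unity, level adjusted at the input gain- comes down to simple decibel arithmetic; a minimal sketch:

```python
def fader_to_gain(db: float) -> float:
    """Linear amplitude gain for a fader position in dB; 0 dB is unity."""
    return 10 ** (db / 20)

# A fader at 0 dB passes the signal through untouched, so the mix stage
# stays transparent and all level trimming happens at the input gain:
assert fader_to_gain(0.0) == 1.0
print(fader_to_gain(-6.0))  # roughly 0.5: -6 dB halves the amplitude
```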

Hands-on, audio and network model

The screen on the Brio 36 is really well judged in its touch response: it is neither as hard as on some live consoles, which must withstand the weather or even the occasional spilled beer, nor as soft as a home tablet, too delicate for my taste. Its size is commensurate with that of the console and I do not find it small at all. Furthermore, the bundled software is really well adapted to the screen and to operation.

Let's get our hands on this device and see how it really performs. Figure 1 shows the internal logical structure of the console and provides an overview of its possibilities at a glance. To the credit of the console's designers, it appears straightforward at first sight -simple solutions to complex issues are normally the best approach- but underneath this apparent simplicity,

Figure 1



we can notice, on a second look, that this console is full of features and allows us to tweak signals and buses almost any way we please. This has a good side and a bad side, as usual. The good side is that expert operators can really do wonders with this console and make it work as if it belonged to a higher range. However, less-experienced operators might be overwhelmed by the possibilities on offer and even find it hard to carry out an initial setup and get started. My recommendation: if you are in the second group, set up the console from the initial "demo" configuration provided rather than starting from scratch. If you love playing with signal routes and squeezing every bus, wipe the console clean with a "Factory Reset" and enjoy getting everything to your own taste. Another good move by Calrec is the inclusion -already in this range of consoles- of its Hydra 2 network, as shown in Figure 2.

Figure 2.

Calrec's Hydra 2 network enables expanding the console not only with its own twins, but also with larger consoles and shared network resources. To this purpose, the network creates a virtual patching panel that can be operated from any control surface, enabling us to use any signal available on the network from any of the consoles that form part of it. Obviously, this can be restricted in order to prevent one operator from treading on another's toes, but in a well-coordinated environment in which permissions are properly configured, the potential of the Hydra 2 network is enormous.

This allows us to see the Brio 36 as yet another element within a much bigger audio network. So this console is not as small as it first seemed, right?
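The permission-controlled network patching described above can be pictured as a routing table guarded by per-surface rights. This is purely illustrative: Calrec does not publish Hydra 2 internals, and every class, method and name below is invented for the sketch.

```python
# Toy model of a networked virtual patch panel with per-surface permissions.
class PatchPanel:
    def __init__(self):
        self.routes = {}   # destination -> source currently patched to it
        self.allowed = {}  # surface -> set of destinations it may patch

    def grant(self, surface, destination):
        """Allow a control surface to patch a given destination."""
        self.allowed.setdefault(surface, set()).add(destination)

    def patch(self, surface, source, destination):
        """Route a source to a destination, if this surface is allowed to."""
        if destination not in self.allowed.get(surface, set()):
            raise PermissionError(f"{surface} may not patch {destination}")
        self.routes[destination] = source

panel = PatchPanel()
panel.grant("brio-36", "studio-bus-1")
panel.patch("brio-36", "mic-42", "studio-bus-1")   # allowed
# panel.patch("apollo", "mic-42", "studio-bus-1")  # would raise PermissionError
```

In a real installation the interesting part is exactly this separation: any surface can see the whole network, but only patch what it has been granted.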

Operating the Brio 36

The first thing to be done is to allocate a channel to a fader, and a physical input -or a virtual one, if within a Hydra network- to said channel, in order to have the required signal available on the fader. This intermediate notion of a channel, sitting between input and fader, may seem confusing, but it is actually very powerful once properly grasped, as it allows not only


selecting inputs as signals, but also buses and other internal console signals using the same concept, thus avoiding limitations in the processing of any signal available at the event. For each input we find the typical controls for adjusting gain, phantom power (available per channel on stereo inputs) and a tone generator on the selected input. Beyond the typical features, source and target protection is something really necessary when operating consoles like this one, with such a flexible setup, as it avoids common errors like using the same source twice or modifying parameters for a target already in use by another bus or fader.
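The input-to-channel-to-fader indirection and the source protection just described can be sketched as a small data model. This is not Calrec's API; the class and the choice to raise an error (rather than warn, as a console would) are assumptions made for the illustration.

```python
# Illustrative sketch of the input -> channel -> fader indirection, with a
# "source protection" check that flags a source already feeding a channel.
class Desk:
    def __init__(self):
        self.channel_source = {}  # channel number -> source name
        self.fader_channel = {}   # fader number -> channel number

    def assign(self, fader, channel, source):
        """Patch a source (input, bus, or internal signal) to a channel
        and put that channel on a fader."""
        owner = next((ch for ch, src in self.channel_source.items()
                      if src == source and ch != channel), None)
        if owner is not None:
            raise ValueError(f"source {source!r} already feeds channel {owner}")
        self.channel_source[channel] = source
        self.fader_channel[fader] = channel

desk = Desk()
desk.assign(fader=1, channel=10, source="mic-1")
desk.assign(fader=2, channel=11, source="bus-aux-3")  # a bus works as a source too
```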

Signal processing – let's play

All per-channel controls -note, per channel, not per input- are the usual ones, such as equalization, dynamics, automation and group controls such as VCAs. The side-chain configuration for dynamics deserves special mention: it allows the dynamics parameters of a given channel to be controlled by the level of a different channel in a very simple way, just by linking the parameter with the channel. That's all.
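The idea of one channel's gain following another channel's level can be sketched in a few lines. This is a deliberately minimal ducker, not the console's dynamics engine: real processors add attack/release smoothing and proper level detection, and the threshold and reduction values here are arbitrary.

```python
def sidechain_duck(main, key, threshold=0.5, reduction=0.25):
    """Attenuate `main` samples wherever the `key` signal exceeds threshold."""
    return [m * reduction if abs(k) > threshold else m
            for m, k in zip(main, key)]

music = [0.8, 0.8, 0.8, 0.8]
voice = [0.0, 0.9, 0.9, 0.0]         # presenter speaks on samples 1-2
print(sidechain_duck(music, voice))  # the music dips while the voice is active
```

A classic use is exactly this: background music keyed from the presenter's microphone so it dips automatically whenever they speak.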

Calrec's automixes are already well known, and good nonetheless. They are really useful in broadcast environments as they ensure the same kind of mix throughout the whole event in a consistent manner, allowing operators to focus on what is really important. Automixes are really efficient on recorded sources, as these are triggered at specific moments and always at the same level, which results in really good performance. On voices or other microphone sources I would not use an automix: in my opinion it is too risky to let the machine decide the mix for us in unknown scenarios.

Another really interesting automated function is the autofaders, which allow automating the rise and fall of levels as if with a noise gate, but executed directly on the fader. A good thing is the 100% graphical configuration, which makes setting them up a very intuitive task.

The Calrec Brio 36 is a bigger console than its size suggests. It has features and capabilities typically found in much bigger consoles, both in size and price. Some things must be learnt from scratch to make use of them, such as signal routing or the initial setup of faders and the external monitoring display, but it is precisely this flexibility that provides enormous possibilities. The compact size requires the use of an external monitor for control, for instance, but its high-density control surface and software optimized for the screen size mean that no controls are missed in daily operation.


Undoubtedly, a very good option for our daily broadcast productions.


The best of IP is yet to come

By Kim Francis, Applications Specialist - Infrastructure, Grass Valley

The success to date of the adoption of IP by the broadcast industry has been largely thanks to work by industry bodies like SMPTE, AIMS and AMWA, which has pushed the development of IP-based solutions forward enormously in a relatively short space of time. Four years ago, it became very clear that there was a need for a standards-based approach to IP migration across the industry. This common goal encouraged collaboration between broadcasters and vendors to drive open, standards-based IP forward. As a result, the SMPTE ST 2110 specification, which addresses IP stream interoperability and ensures functionality comparable to SDI environments, was born. The widespread adoption of these standards by vendors across the content production chain has driven pricing down and

contributed to the accelerated deployment of IP-based solutions. Today, most of the major broadcasters, OB and production companies -from RTE and Televisa to NEP and Mobile Television Group (MTVG)- have IP infrastructures in place. IP-based live production workflows support premier events on the global sporting calendar, from English Premier League (EPL) soccer and the NFL to the Alpine


World Ski Championships and Wimbledon. As broadcasters and content producers strive to meet consumer appetite for high-quality content and stunning images more efficiently, they need infrastructures that are agile and scalable. IP meets this need. The technology can handle much higher bandwidth than legacy SDI environments, supporting more live streams from an event and opening up new, smarter approaches to content creation. As an industry, we have passed a critical milestone for IP with SMPTE ST 2110, which has driven enormous momentum and acceptance globally. We've reached a tipping point where customers can see the clear benefits

of the improved flexibility, workflow efficiency and the format/resolution agnosticism that open, standards-based IP delivers. As they strive to stay ahead of the curve in a fast-moving mediascape, IP-based infrastructures allow broadcasters and media organizations to build successful, future-ready media businesses.
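The bandwidth argument above is easy to put numbers on. A back-of-the-envelope sketch, counting active picture only and ignoring blanking and ST 2110-20 packetization overhead:

```python
# Rough uncompressed video bit-rate arithmetic behind the IP-vs-SDI
# bandwidth argument (active picture only; no blanking or packet overhead).
def video_bitrate_gbps(width, height, fps, bit_depth=10, sampling=2.0):
    """Uncompressed rate in Gb/s; sampling=2.0 means 4:2:2 (2 samples/pixel)."""
    return width * height * fps * bit_depth * sampling / 1e9

print(round(video_bitrate_gbps(1920, 1080, 50), 2))  # 1080p50: 2.07 Gb/s
print(round(video_bitrate_gbps(3840, 2160, 50), 2))  # 2160p50: 8.29 Gb/s
```

A single 2160p50 stream already exceeds the 3 Gb/s of a 3G-SDI link, which is one concrete reason high-bandwidth IP fabrics scale where point-to-point SDI does not.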

Systemizing change

The advent of SMPTE ST 2110 has undoubtedly propelled the industry forward – customer demand for open-standards solutions means all the major vendors now have ST 2110-enabled solutions on the market. Flexibility and scalability are a given with IP, but the

real potential lies in the technology's capacity to transform the way production teams work – especially in live environments; it can open up smarter ways of working, regardless of where staff are located. Remote/at-home production is just one way that IP can radically shift broadcasters' approach, and we are beginning to see how IP is changing the way content is produced. We're still very much in the early stages of IP migration and have only seen the tip of the iceberg. The biggest challenge facing the industry is not to rest on our collective laurels; the belief that SMPTE ST 2110 takes us far enough will halt real progress. Both vendors and broadcasters must



look ahead and continue to build on the work achieved to date. Making IP easier to deploy and maintain is the next stepping stone. Broadcasters are looking beyond the cost of ownership; they want systems that are easy to maintain and grow. Customers also tell us they want to choose best-of-breed systems without having to worry about complex integration. They want to be able to build seamless multi-vendor systems within a single IP ecosystem.

The Joint Task Force on Networked Media (JT-NM), whose members include the Advanced Media Workflow Association (AMWA), the European Broadcasting Union (EBU), the Society of Motion Picture and Television Engineers (SMPTE) and the Video Services Forum (VSF), is picking up the baton from SMPTE ST 2110. Aimed at making it easier to adopt and maintain IP infrastructures and workflows, JT-NM TR-1001-1 outlines practical approaches for easily connecting IP systems beyond the

transport layer and streamlining deployments. In order to leverage the as-yet untapped benefits of IP adoption, the momentum we have gained to date must continue. Vendors need to invest in JT-NM TR-1001-1 across their product portfolios, rather than stopping at SMPTE ST 2110. Upcoming testing rounds will help spur this on, as will growing demand from customers.

The shape of things to come

The momentum behind IP will continue to drive the software-centric future of our business, with workflows hosted in the cloud -or virtualized on commodity hardware- and applications orchestrated using software. As customers find it easier to deploy, maintain and leverage IP-based solutions, these will become more prevalent across the broadcast chain, moving beyond simple IP I/O to areas like IP processing. The long-term success of IP centers on lowering the pain of adoption for customers. The next big thrust in development will be solutions that make the systemization of IP more straightforward. As a frontrunner in driving the industry transition to IP, these areas are a priority for Grass Valley.

Crucially, we are also moving beyond viewing IP as a like-for-like replacement for SDI; IP is changing the way we think about what constitutes a broadcast facility. Shared resources, like control rooms that can be allocated where required, are already a reality, and we're seeing remote/at-home production become more widely adopted -particularly for live sports production- delivering significant time and cost savings. The next logical step -and certainly what our customers are telling us they need- is the wider distribution of resources. Allowing staff to be located anywhere, and enabling multiple locations to be utilized thanks to high-bandwidth connectivity, means leveraging the very best directors, operators and editors that the industry

has to offer, regardless of where in the world they are. We are not too far away from a scenario where a technical director can work on several events in a single day, regardless of where in the world they are taking place. Similarly, on-air talent will be able to cover multiple games without having to travel. Looking ahead, IP will underpin the virtualization of more functionality, giving media companies the flexibility and agility they need to achieve rapid speed to market and stay ahead of a fast-evolving mediascape. We have yet to see the full power of IP and are still just scratching the surface of what is possible. As we continue to drive momentum, the impact that IP-based solutions will have on the industry will be far-reaching, changing the way we work, create and deliver content.


Achieving next-generation media experiences with High Dynamic Range

By Matthew Goldman, SVP Technology, MediaKind

Watching television, be it live sports, movies or TV shows, has continually redefined how we experience and enjoy the world. Today's consumers are thinking bigger and reimagining the possibilities of video content. They not only want the classic 'window to the world' – they want their viewing experience to completely immerse them in the content they are watching;

a feeling of 'better-than-being-there'. The popularity of 4K Ultra High Definition (UHD) content has thus played a major part in pushing the boundaries of live and on-demand media experiences, by increasing the image resolution to four times that of 'Full HD' (1080p50/60). 4K UHD is now the assumed viewing experience for most forms of content, be that live or linear. Yet, while this technology has been paramount in delivering the immersive, high-quality experiences we are familiar with today, 4K UHD is about far more than just spatial resolution.

Realizing the potential of HDR for immersive viewing experiences

High Dynamic Range (HDR) technology first came to market as a second phase of the full 4K UHD feature package. An 'HDR system' consists of three technologies used in combination under the name 'HDR': an HDR transfer function, Wide Color Gamut (WCG) and 10-bit sample depth (quantization). This rollout gave consumers a taste of the next generation of viewing experiences, which also bore witness to other major developments in broadcast technology, such as 10-bit HEVC (aka H.265) encoding and more advanced 3D audio, to


create a highly immersive media package. The combination of UHD and HDR technology is driving the immersive experiences we know today and fueling expectations industry-wide. Content owners, broadcasters, service providers and consumers alike share a desire to provide, deliver and receive content in this format. There is a discernible difference in picture quality and color definition between traditional TV viewing (what we now call standard dynamic range, or SDR) and HDR. The importance of having an HDR-supported service was made evident with the broadcast of the Game of Thrones episode 'The Long Night', shot entirely at night. The episode was so dark that viewers watching on a non-HDR-capable TV were unable to make out much of the action happening on screen. Those viewing on HDR-capable TVs, however, had a dramatically improved experience, enabling the image of a well-lit area to

'pop out', while also being able to discern details in the low-light areas. In a live production setting, HDR technology is also adept at handling the varying contrast of bright, sunlit scenes. In soccer matches, HDR can easily adapt to the hugely varying contrast of a ball traveling through a sunny section of the pitch without washing out details, as would normally happen in an SDR production. The BBC has now established HDR as a core part of its live production workflows, having deployed it for the 2019 FA Cup final, Wimbledon and a Royal Wedding. HDR creates far more realistic colors and digital image details that bring a real 'sizzle' to the viewing experience. It enables far more image realism and, most excitingly, reproduces much brighter specular highlights through far higher peak white levels, while simultaneously revealing details in areas of shadow through more discernible black levels. This creates the

impression that the picture really 'pops': a much better real-world reproduction. There is now a fair amount of 4K HDR content available on Netflix, Amazon and other streaming services, and the market for 4K UHD TVs with HDR has risen in tandem. From around 12.2 million units sold in 2017, IHS Markit forecasts HDR TV shipments to rise nearly 300%, to 47.9 million, in 2021.

The current HDR situation

Currently, there are more than a half-dozen different HDR systems present in the media industry, including PQ10 (PQ being perceptual quantizer), HLG10 (HLG being hybrid log-gamma), HDR10 (PQ10 plus static metadata), Dolby Vision, Samsung HDR10+ and SL-HDR1 (the latter three being PQ10 plus different forms of dynamic metadata). Despite the industry buzz surrounding HDR, having such a plethora of formats is hampering deployments and the advancement of


such an outstanding enhancement to the TV viewing experience. The Ultra HD Forum has made useful progress in driving the industry to realize HDR, creating and publishing important guidelines on methods for future implementation. Yet the industry remains conflicted as to which format will produce the most compelling HDR quality and represent the best long-term investment. This indecision has consequently led to a relatively delayed HDR deployment. The live mixing of content produced in more than one dynamic-range system (SDR and/or one of the various HDR systems) adds another challenging layer to the widespread rollout of this technology. In a live production environment, it is paramount for everything to work flawlessly in real time, as there are no retransmission options for live production. There are obvious bandwidth challenges when it comes to producing and delivering 4K content too – a situation that is further

exacerbated when HDR is added to the mix.
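For reference, the PQ transfer function at the heart of the PQ10/HDR10 systems mentioned above is standardized as SMPTE ST 2084; its EOTF maps a normalized code value to absolute luminance up to 10,000 cd/m². The constants below are the published ones; the function is a direct transcription:

```python
# SMPTE ST 2084 (PQ) EOTF constants, as defined in the standard.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_eotf(e: float) -> float:
    """Display luminance in cd/m² for a normalized PQ code value e in [0, 1]."""
    p = e ** (1 / M2)
    return 10_000 * (max(p - C1, 0) / (C2 - C3 * p)) ** (1 / M1)

# Full code value reaches the 10,000 cd/m² peak; mid-range code values map
# to far lower luminance, which is where PQ concentrates its precision.
print(pq_eotf(1.0))  # 10000.0
```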

So, are we HDR ready?

Solutions are in place to help make HDR a reality. We can already deliver live 'color volume' mapping: converting SDR to HDR (via inverse tone mapping), HDR to SDR, and converting between different light levels of HDR. We are also able to deploy end-to-end UHD to mix live and pre-produced content. This means content producers can choose native formats, and broadcasters can perform any necessary conversion to ensure all content conforms to a uniform format. The question, again, is: 'which format should I use?' HDR has seen a slow takeoff due to a lack of content, as was the case with 4K. Yet the marketability of HDR is quite substantial given the noticeable difference in the viewing experience, meaning it certainly has the potential to be successfully monetized. In fact, it's arguably the best improvement to television


since color displaced black and white. Broadcasters are increasing the amount of content in 1080p50/60 HDR to address the issue of bandwidth limitation. 4K displays can upconvert these images to 2160p and, if these sets are HDR capable, we can deliver a viewing experience that is very close to native 4K HDR in most consumer viewing environments.
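The other principal system, HLG (hybrid log-gamma, standardized in ITU-R BT.2100 and the approach behind broadcast deployments such as the BBC's mentioned earlier), takes a different route: its OETF is camera-gamma-like for low scene light and logarithmic in the highlights, which is what keeps an HLG signal broadly watchable on SDR displays. A sketch using the constants from the recommendation:

```python
import math

# HLG (ITU-R BT.2100) OETF: square-root below the crossover, log above it.
A = 0.17883277
B = 1 - 4 * A
C = 0.5 - A * math.log(4 * A)

def hlg_oetf(e: float) -> float:
    """Non-linear signal in [0, 1] for normalized scene light e in [0, 1]."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return A * math.log(12 * e - B) + C

print(round(hlg_oetf(1 / 12), 3))  # 0.5: the gamma/log crossover point
```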

Enabling next-generation immersive media experiences with HDR

It's not uncommon to witness major developments in broadcast technology arrive at blockbuster international events such as this year's Olympic Games. These events often ignite a big leap forward in the way live content is captured, moved and produced. The 2020 Olympics will be the first to see 4K HDR deployed as a mainstream, end-to-end workflow. While there will be content at the

Tokyo Games shot and produced in 8K, and this will certainly capture the media's attention, the main impact for practical viewing today will be the ability to establish 4K HDR throughout the TV ecosystem. This is an important year for 4K HDR, and the immediate challenge to overcome in realizing its potential is the need to collaborate and converge around one long-term HDR format. Improvements to encoding standards, notably the rollout of Versatile Video Coding (VVC), as well as the promise of 5G for media backhaul, will greatly ease bandwidth requirements and allow broadcasters to develop and deliver a richer, more lifelike picture quality. As the popularity of immersive media experiences continues to surge, we will see a rapid increase in next-generation immersive UHD HDR services coming to the market.
