TM Broadcast International #87, November 2020


Summary


Discovering Yle: On the verge of a comprehensive transformation


Cloud services: From production site to home sofa

Live X: Challenging the limits of streaming


Dagmar Weaver-Madsen: Passion, creativity and technique

PTZ Cameras playing key role in IP-based television production

Understanding content key to success in streaming wars

Editor in chief Javier de Martín

Creative Direction Mercedes González

Key account manager Susana Sampedro

Administration Laura de Diego

Managing Editor Sergio Julián

Top 10 things to consider when migrating your VOD library to AWS


Enterprise-grade intelligent and accelerated file transfer on the high seas


Test Zone: AEQ CrossNET



TM Broadcast International is a magazine published by Daró Media Group SL. Centro Empresarial Tartessos, Calle Pollensa 2, oficina 14, 28290 Las Rozas (Madrid), Spain. Phone: +34 91 640 46 43. Published in Spain. ISSN: 2659-5966

EDITORIAL

We are used to finding the word 'revolution' in countless communications throughout the broadcast industry. In many instances, this is a result of the drive shown by some market players towards a radical transformation of workflows. It is true that we must look to the future. IP technologies are arriving to offer decisive alternatives to the industry, and they do so by means of service-based business models warmly welcomed by a broad share of the market. However, we should not lose perspective. When looking at bright headlines and promising futures, a closer look should also be taken at the equally exciting present. And there are two major concepts emerging here: transition and adaptation. The broadcast world has succeeded in making the most of all the tools within reach. Manufacturers have responded with versatile solutions. The arrival of 100% IP infrastructures is unstoppable, but let us allow it to develop without rushing things in order


to ensure flawless service to TV, radio and digital platforms. TM Broadcast International has set its sights this month on companies that, in an ever-changing world, find their place to deliver content in a brilliant fashion. For instance, the Finnish Yle, a public corporation that aims to move its entire production and distribution to the cloud in just six years; or Live X, a streaming service company that builds complex, evolving architectures for each individual project. We complete this issue with other exclusive content, such as an in-depth feature article on cloud services by one of our main experts, and an interview with Dagmar Weaver-Madsen, a director of photography on series such as ‘Dare Me’ (Netflix), who brings her passionate vision to technology. Innovation is a promising horizon, but present times are no doubt very interesting. We invite you to discover them with us.


InSync Technology and FOR-A release MCC-4K-A

InSync Technology and FOR-A have announced the latest in their range of motion compensated UHD frame rate converters, the MCC-4K-A, developed by InSync Technology Ltd. and exclusively distributed by FOR-A. Meeting the needs of today’s live UHD workflow demands, MCC-4K-A is an advanced motion compensated frame rate converter that also provides workflow tools such as HDR/SDR management, closed caption handling, and multi-channel audio.

MCC-4K-A builds on the MCC-4K converter, with entirely new motion estimation algorithms that provide more accurate motion processing, enabling MCC-4K-A to deliver “stunning” results when converting between frame rates. Furthermore, new format conversion tools ease moving between HD and UHD and between SDR and HDR. “I’m delighted to have yet another high-quality product within the FOR-A video processing lineup,” said Matt Soga, Senior General Manager,

Marketing Division of FOR-A. “Our ongoing collaboration with InSync Technology continues to generate fruitful innovation which is world-class.” “Bringing this level of innovation to the market is both exciting and challenging,” added Lee Hunt, Head of Hardware at InSync Technology Ltd. “MCC-4K-A owes its performance to a highly skilled and dedicated team of engineers at InSync and to the tremendous support and collaboration we have had with FOR-A.”


Ross launches LUCID – Virtual Solutions Control Made Clearer

The LUCID platform replaces the previous UX solution from Ross as the primary configuration and control interface for all Ross virtual studio and augmented reality applications. Customers will immediately be struck by the brand-new user interface, and LUCID

is the first Ross product to feature the new ‘Aura’ visual language that will be a key aspect of many future solutions from Ross. Highly dynamic and offering impressive new levels of flexibility, this new interface enables operators to design their own customized layouts,

as well as save and recall layouts based on personal preference or production stage.


Black Box expands KVX Series of Dual-Head KVM Extender Kits with new plug-and-play solutions for remote computer access

Black Box has introduced six new KVX Series KVM extender kits, each providing control over a remote server or computer from a local workstation with one or two screens. In addition to extending KVM signals over distances of up to 328 feet (100 metres) over CATx cable or 18.6 miles (30 kilometres) over fibre with an SFP connector, the plug-and-play KVX extender kits from Black Box are unique in allowing users to connect a display at a remote location for local monitoring. The new KVX extender kits boast pricing and functionality that address the requirements of design, graphics and video production, as well as software development. Allowing users to control systems that are remote or physically inaccessible due to location,


environment or security measures, they serve as a convenient solution for office connectivity, server management, remote monitoring, process control or medical and industrial device control. Black Box will release three copper and three fibre KVX Series extender kits to provide support for DVI, 4K HDMI, and 4K DisplayPort connections. DVI KVX models support 1920 x 1200 at 60 Hz video and legacy VGA interfaces via an adapter. DisplayPort KVX models support up to 3840 x 2160 at 30 Hz UHD video. The company will also introduce a new dual-head KVM extender that includes DisplayPort/HDMI mixed inputs to support dual HDMI displays at 3840 x

2160 UHD video. EDID emulation supports different resolutions and eliminates display compatibility issues. While the copper kits (KVXLC-200, KVXLCH-200, KVXLCHDP-200) enable remote computer access up to 330 feet (100 metres), the fibre kits (KVXLCF-200, KVXLCHF-200, KVXLCHDPF-100) facilitate remote computer access up to 18.6 miles (30 kilometres) using SFP modules. Along with extension of USB 2.0, analogue stereo audio (speakers and microphones) and serial signals, all six extender kits feature two additional video ports on the transmitter to enable monitoring of the computer in a data centre or equipment room.


intoPIX introduces a new range of 8K TICO-XS IP-cores

intoPIX, provider of compression solutions for audio-visual applications, has announced the release, with direct availability, of 8K TICO-XS encoder and decoder IP-cores. Supporting the JPEG XS ISO standard, the cores can target mid-range

FPGA families – from Intel & Xilinx – or ASIC with low gate count. Furthermore, the cores are visually lossless, as defined in the JPEG XS standard, and support the complete UHDTV2 specifications. More features include encoding/decoding performed in less than

one millisecond in total, light line buffers handled in the internal RAMs, an embedded 2-level downscaler in the decoder, and manageable streams, as an 8K compressed flow can be transported over limited data interfaces – down to 2.5Gb or 10Gb Ethernet, for example.


BCE streams the European Regional Hub – Luxembourg during the 59th ICCA Congress 2020

The International Congress and Convention Association (ICCA) selected Broadcasting Center Europe (BCE) to cover and ensure the live stream of the two stages of the European Regional Hub – Luxembourg during the 59th ICCA Congress 2020. ICCA, “a global association leader” for the international meetings industry, specializes in the international association meetings sector, offering


data, education, communication channels, and business development and networking opportunities. For the 59th edition of the ICCA Congress, the event was transformed into a hybrid experience that took place in key regional hubs: Cape Town, Kuching, Latin America, Luxembourg, Malaga, North America, Riyadh and Seoul. BCE ensured the coverage and live streaming of the

European regional hub in Luxembourg. BCE’s production team was on site with its HD (High Definition) Outside Broadcast vehicle to cover the two studio sets with camera crews, video and videoconference engineers, 4K cameras, lights and high-quality audio systems. Prior to the event, the infographic team prepared all the templates and titles to ensure the branding of the event according to the ICCA


Congress. Due to the pandemic, the studio sets were in communication with speakers located in remote locations. The interaction between the studio and the external participants was orchestrated by BCE’s producer. “Producing major media events, our teams are able to manage large-scale events,” explains Xavier Thillen, Head of Production & Digital Media Operations

at BCE. “With our numerous references in live streaming digital conferences, we can deliver high-quality programs to the world.” For the live streaming, BCE relied on its subsidiary, Freecaster. Its extended CDN (Content Delivery Network) with strong partners made possible the availability and quality of the event for all the viewers. In addition, the adaptive bitrate of the

player allowed viewers to follow the live stream. “With its global footprint and prestigious speakers, it was paramount for the ICCA to find a partner that could deliver high-quality and seamless services. Our experience, media professionals and state-of-the-art production means, combined with our rock-solid streaming platform, are the perfect match for the ICCA,” concludes Xavier Thillen.


High school students dive into media production with help from Digital Resources and AJA Gear

For the last few decades, U.S. school curricula have centered on college preparation, but in recent years, parents, teachers and administrators have begun advocating for the inclusion of more professional skills-based training. Their vision is quickly becoming reality as more high schools invest in the technology and instructional staff to support more career-minded education, ranging from 3D animation to masonry, robotics and more. According to Texas-based AV systems integrator Digital Resources, one such field that’s drawn interest from the education community is news production and live event AV. Year on year, the company continues to see new clients requesting sophisticated on-campus studio builds that incorporate real-world media production equipment from companies like AJA Video Systems. Schools then harness the


facility and equipment to train students on how to deliver daily newscasts from start to finish, with students serving as the production crew, news anchor team, director/technical director and floor director. In addition to learning the tools and roles of the trade, they’re taught how to write story scripts; achieve the right voice quality, diction and timing; and master performance and pacing techniques. Prosper Rock Hill High School in Frisco, TX, is one of the many high schools embracing this methodology, and with help from Digital Resources, recently built a cutting-edge TV studio that its students use to produce and broadcast a live daily news show campus-wide. Similarly, Northwest Independent School District (ISD) in Fort Worth, TX, is using its Creative Media Production Academy, built by Digital Resources, to

train students in 3D animation, live event production, broadcast, and more. Some students from the program have gained such valuable experience that they’ve gone on to land positions assisting pro AV management for collegiate athletics and professional sports productions post-graduation. With educational outfits like Prosper Rock Hill and


Northwest ISD continuing to demonstrate the value that early media production training can provide, Digital Resources expects more districts and campuses to follow in their footsteps. Digital Resources takes a carefully curated approach to each project, selecting each piece of gear through the design and build process to ensure it meets the school’s needs. “Whether a student plans to go to college or enter the workforce after high school, there’s value in getting on-the-job experience, and there’s no better way to learn the tools of the trade

than in a real-world setting,” shared Tim Bock, director of marketing and sales, Digital Resources. “We’re helping schools bring that broadcast studio experience to the classroom, including all the standard production equipment, and AJA gear is always a part of our designs. It’s a staple in most modern studio environments, easy for students to learn quickly, and highly affordable, which is ideal for schools with tight budgets.” A number of factors influence each of Digital Resources’ designs,

including space, budget, instructor knowledge level, and existing architecture and infrastructure. Although each setup varies, all studio blueprints feature a range of cameras, lighting, teleprompters, and custom set designs. A majority of the equipment is housed in an on-campus control room outfitted with Gigabit Ethernet and fiber. There, students can monitor every part of the production, from graphics to credits and switching. As most schools currently produce and deliver in 1080p, Digital Resources incorporates two AJA Ki Pro Ultra 4K/UltraHD and 2K/HD recording devices into each facility plan – one for master recording and playback and another for ISO recording. An AJA HELO H.264 streaming and recording device is also standard for streaming content to the school’s website and social platforms like Facebook Live, while an AJA KUMO 1616 router is included for routing signals in and around the studio without any degradation.


London’s Nu Boyana Film Studios opens two DaVinci Resolve Studio suites

Blackmagic Design has announced that Nu Boyana Film Studios has added two DaVinci Resolve Studio color correction suites at a new London facility. Owned by one of the longest-running independent studios in Hollywood, Nu Image, Nu Boyana has serviced hundreds of big budget blockbusters from its headquarters in Sofia, Bulgaria. They include Hellboy, The Hitman’s Bodyguard, 300: The Rise of an Empire and The Black Dahlia. The first feature to be realized through the new space is an upcoming action comedy, “The Hitman’s Wife’s Bodyguard,” starring Salma Hayek, Ryan Reynolds and Samuel L. Jackson. The film was lensed by Terry Stacey and graded by colorist Vanessa Taylor. Paula Crickard, head of post production at Nu Boyana UK, says, “Alongside CEO Yariv Lerner, we made the decision to expand into London as many of our



projects are filmed here in the UK, and we wanted to be able to service films in a big physical space that allows directors freedom to choose their post production talent.” She continues: “In building this new facility, we chose to reflect the workflows that have afforded solid and consistent results time and time again at our HQ in Sofia, led by head of post Jivko Chakarov. Running the same system also allows the opportunity to collaborate remotely across the two locations.” The main grading theater features a DaVinci Resolve Advanced Panel, Eizo

monitor and 4K Christie projector, and is powered by the latest generation Mac Pro hardware. A secondary online grading suite is built to the same specification but features a DaVinci Resolve Mini Panel. Paula adds: “Our theatre is quite unique, featuring a 24ft cinema screen, 7.1 and 5.1 surround sound, with room for 16 socially distant attendees. It’s a big space with a big screen, and we find it helps when working with blockbuster action sequences.” All grading work feeds into two G-Technology G-SPEED Shuttle SSD devices.


Daystar Television Network upgrades its comms with Riedel’s Artist and Bolero

Daystar Television Network, a 24-hour faith-based network with over 5 billion viewers worldwide, has completed a massive upgrade of its communications capabilities using Riedel Communications’ Artist digital matrix intercom and Bolero wireless intercom. The new Artist/Bolero intercom solution enables communications for Daystar crew members throughout the network’s studio complex in Dallas. Omega Broadcast, a full-service broadcast and cinema equipment provider based in Austin, Texas, provided the Riedel solutions and installation services.

The Daystar installation consists of an Artist-128 mainframe equipped to support AES67, VoIP, Dante, and analog intercom signals; 28 SmartPanel app-driven user interfaces; and 10 Bolero antennas to support 20 intercom beltpacks. A separate Bolero antenna in standalone mode will support remote ENG shoots.

“The Riedel Artist series and Bolero wireless intercom packs have greatly improved our floor operators’ ability to communicate with mix positions and technicians in a timely manner. With Bolero’s expanded performance range, we’re able to respond faster to constantly changing needs throughout our campus,” says Doug Leake, Post Audio and Audio Systems Engineer, Daystar Television Network.

Since both Artist and Bolero support AES67 signals, the entire system is IP-ready. This will be important for the next phase of the project, an IP link to facilitate communications between Daystar’s Dallas broadcast center and a separate studio in Jerusalem that is expected to come online in June.


CTV selects Phabrix Qx and Sx for new ST 2110 OB truck

CTV, a provider of OB trucks and facilities, chose PHABRIX’s Qx advanced rasterizer to provide test and measurement for IP-based workflows, including the company’s brand-new OB12 32-camera truck, which is currently being used on the European Golf Tour. CTV also invested in an Sx TAG handheld hybrid IP/SDI unit with ST 2110 and ST 2022 10GbE options for use with flypack systems. During the development of CTV’s IP workflow, the company needed test and measurement solutions that would cater for IP, SDI, UHD, HDR and audio as it moved forward with building new IP-ready trucks, flypacks and remote systems.

“The test and measurement solution we were looking for had to be suitable for all stages of IP development, with POC testing, the feature set required to integrate a real system, and it had to be easy to use for daily operations of the tour. We were also looking for a manufacturer with a strong development roadmap and good customer support,” said Richard Morton, Head of Projects, CTV. “PHABRIX was one of the first manufacturers to produce a useful and working instrument – Qx – with all the required functions, such as PTP vs video reference alignment. The Sx TAG portable unit is also very convenient and feature-rich in both IP and SDI. The products offer good value for money considering the strong feature sets, and confidence in the PHABRIX engineering is a great asset to our workflow.”

Last month CTV launched OB12, a new fully IP, ST 2110 32-camera truck comprising two large expanding production trucks and a third separate data centre truck. The Qx system is installed in the OB12 data centre for critical ST 2110, PTP and network measurements, and can be accessed from any area of the three IP-connected trucks using a KVM system. NMOS enables the broadcast controller, Cerebrum, to route any video, audio or ancillary flow to the device for analysis.


Dalet Galaxy xCloud will support France Télévisions remote workflows

France Télévisions has subscribed to Dalet Galaxy xCloud to enable critical news workflows from home when needed. Hosted by Dalet, the SaaS-based offering is a full-featured version of the latest Dalet Galaxy five platform that leverages the cloud to facilitate end-to-end remote news production workflows. Once fully rolled out, Dalet Galaxy xCloud will give France Télévisions the flexibility to support the remote work requirements of its 300-plus journalists. The solution will facilitate news production workflows across national channels and the franceinfo rolling news channel, as well as support the newsroom editing applications Adobe Premiere Pro and Dalet OneCut. “Dalet Galaxy xCloud offers us a path to news production with remote editing that is connected to our central Dalet Galaxy systems. Once

integrated within production workflows, it will extend the content and the workflow tools that we use day in and day out to deliver the news beyond the physical newsroom,” comments Emmanuel Gonce, Head of Engineering for the Newsrooms, France Télévisions. “The integrated editing will allow us to be very efficient as well as connected to each other no matter where we’re reporting from.” A cornerstone of the Dalet Galaxy xCloud offering, the remote editing framework features integration with the on-premises Dalet Galaxy five system. It supports advanced proxy editing workflows, mixing of central and local content, with proxy content cached locally to eliminate disruptions due

to limited bandwidth. Rendering is offloaded to the Dalet Galaxy xCloud system, optimizing performance. France Télévisions will first use Dalet Galaxy xCloud running entirely on the AWS cloud, connecting high-resolution file exchange with the on-premises Dalet Galaxy five system. In a second phase, the broadcaster will move to a hybrid configuration that utilizes the Dalet Galaxy xCloud on-premises gateway component to manage heavy media processing, such as creating proxy files and rendering projects, on-premises. Only proxy content will be sent to the cloud, optimizing media transfer between the on-premises system and the cloud. The Dalet Galaxy xCloud hybrid implementation will also support low-latency workflows such as editing growing files.


Rive Esports uses Zero Density Reality Engine to power their live Twitch shows with AR

In April 2020, the esports channel Rive Esports (originally a web TV channel) began broadcasting online via Twitch. Their studio is equipped with four PTZ cameras (Panasonic AW-UE150) with a built-in tracking system. Reality Engine powers the augmented reality elements that the program regularly uses to visualize trophies, game characters and battle locations. The project kicked off in March 2020, at the height of the coronavirus situation, when international travel was halted. This situation accelerated remote services development at Zero Density, and the team finished its first successful remote training with the Rive Esports team. Zero Density’s regional partner Studio Service assisted the client with the Reality installation and configuration, also remotely.

“The workflow is very simple, which allows the client to get started without prior preparation and with minimal experience in graphic environments and still achieve maximum photorealism in the created scenes. Another advantage is the free Unreal library, with which you can create virtual studios, combining a set of elements obtained from this library, and quickly prepare high-quality content for broadcast,” said Anastasiya Ushakova, Technical Director at Studio Service. The combination of these factors allowed Rive to go on the air using augmented reality objects the very next day after the completion of commissioning and training.


Republic TV implements Grass Valley’s iTX integrated playout platform

Republic TV, an Indian free-to-air news channel, has decided to update its playout and production with Grass Valley’s iTX integrated playout platform. The production side of Republic TV’s operation is now standardized on Grass Valley’s Karrera K-Frame and Kayenne K-Frame video production centers; NVISION router; and LDX 82 Series and Focus 75 Live camera chains. As it drives to meet the demands of a fast-evolving market, Noida-based Republic TV, which provides 24/7/365 news content through cable networks and mobile platforms such as JioTV and Hotstar, needs an end-to-end solution capable of handling integrated, multiplatform playout. Grass Valley’s iTX solution gives the Indian broadcaster an integrated playout platform that combines IP and SDI support for future-readiness. The end-to-end workflow tools also deliver “greater process automation and lower OPEX”, according to the press release. With their modular design, Grass Valley’s Karrera and Kayenne video production centers are scalable and allow the Republic TV team to simplify workflows while creating content in multiple formats, including 1080p and 4K UHD. The LDX 82 Series cameras are easily upgradable to HDR via a straightforward software upgrade. Both the LDX 82 Series and the Focus 75 Live cameras are flexible enough to handle work out in the field or smaller studio-based applications. Also forming part of the deployment is a range of Grass Valley infrastructure and network-attached storage solutions. The project was completed in partnership with Cineom.


TVC Solutions is now part of the Broadcast Solutions Group

With the acquisition, which was completed on 22 October 2020, Lithuania-based TVC Solutions UAB, with Managing Director Ramunas Dirmeikis, became a wholly-owned subsidiary of the Broadcast Solutions Group.

Left to right: Stefan Breder (CEO Broadcast Solutions), Ramunas Dirmeikis (CEO TVC), Wladislaw Grabowski (COO Broadcast Solutions)

TVC’s focus areas overlap only slightly with those of Broadcast Solutions and, to a large extent, complement the activities of the German systems integration company. In the Baltic region and the CIS countries in particular, Broadcast Solutions is now strengthening its presence with the takeover of TVC, thus being able to offer an even more extensive array of solutions as well as further reinforcing service and support. Although fully integrated into the Broadcast Solutions Group, TVC Solutions UAB will act as an independent entity under its given name. Stefan Breder, CEO of the Broadcast Solutions Group, comments: “TVC’s business strategy, with approx. 60% fixed media installations and 40% OB vehicle construction, perfectly complements and mirrors the development and strategy of the Broadcast Solutions Group. We welcome Ramunas Dirmeikis and the entire TVC team as part of the Broadcast Solutions Group, and we are looking forward to successful cooperation.”

NOA partners with Marquis Broadcast

Together the two companies have crafted the Medway-mediARC connector, designed to facilitate bidirectional integration with Avid Interplay. NOA and Marquis Broadcast successfully implemented this new technology at Sharjah Broadcasting Authority (SBA) in May 2020. As part of the United Arab Emirates media house’s efforts to streamline its AV archiving workflow, the broadcaster integrated Avid Interplay into its already installed NOA mediARC Archive Asset Management (AAM) system at several of its regional stations. With bidirectional Avid integration, SBA managed to organize its archival assets through a WAN infrastructure that is interconnected at multiple locations. Through NOA mediARC, the media house is today driving the workflow from several sites in parallel and centrally storing all content within its AAM.


SGO becomes an official LAPPG sponsor

“It is our pleasure to support LAPPG, as we believe their activities are a vital part of the Hollywood and Los Angeles creative community. LAPPG builds a crucial bridge between technology creators such as SGO and filmmakers, and being closer together will help us react quicker to their needs and provide added value to post-production workflows,” shared Geoff Mills, Managing Director at SGO. “With a rapidly growing Mistika Technology user base, we look forward to introducing our differentiating solutions to LAPPG members, with latest developments that include a completely overhauled color GUI and many new interactive grading tools, combined with other useful features, that truly set Mistika apart and give users a competitive creative edge.” Executive Director of the LAPPG, Wendy Woodhall, explains that LAPPG strives to bring their members the latest information on outstanding technology from around the globe.

“We are honored to bring SGO’s Mistika products, leading high-end software post-production solutions, to our diverse membership. With so many of the standard workflows and pipelines currently being disrupted, our members are searching for superior tools to deliver their inspiring content. We are grateful that SGO is entrusting the Los Angeles Post Production Group with the opportunity to promote their finishing and workflow solutions to the filmmaking community at large.”

Peter Wharton joins TAG Video Systems as Director, Corporate Strategy

Wharton, an industry veteran with deep roots in transformative cloud-based media operations, joins TAG on a permanent basis following two years working with the company as a consultant. Kevin Joyce, Chief Zer0 Friction Officer, revealed details of the appointment, noting that Wharton’s technical capabilities combined with his market knowledge played a pivotal role in opening TAG’s platform to new applications and increasing the company’s international market share. In his role as Director, Corporate Strategy, Wharton will shape corporate strategy by evaluating business and technical opportunities and identifying areas of growth. He will work with senior management and executive leadership to structure solutions that add value to customers’ financial, technical and operational business units.



On the verge of a comprehensive transformation

Established in 1926 and offering TV service since 1958, Yle (the Finnish Broadcasting Company) has reached the present day offering a full public service under one key premise: “To produce content and services for audiences in equal conditions regardless of their residence status, wealth, age or gender”. This mission is directly related to its motto: “For all of us, for each of us”. Funding comes exclusively from the so-called "Yle Tax", which enables the company to remain "politically and financially independent". Contrary to what many might think, this has no impact on its ambition: throughout their programming we find, beyond the indispensable news service expected of public corporations, a wide range of first-class fiction content and a strong commitment to digital platforms. In fact, Yle Areena is nowadays the country’s most appreciated VOD platform. We wanted to find out more about their technological strategy, the way production is managed, and their short-term and mid-term goals. To answer these questions, we contacted Janne Yli-Äyhö, CTO/CIO at Yle.

By Sergio Julián Gómez, Managing Editor at TM Broadcast International





INTERVIEW WITH JANNE YLI-ÄYHÖ, CTO/CIO

What’s Yle’s understanding of broadcast technology? Do you constantly bet on technological updates? Do you consider technology just a means for your public service?

Media technology is going through major

YLE IN NUMBERS

Yle covers a total of four television channels (Yle TV1, Yle TV2, Yle Fem and Yle Teema), six radio channels of national scope (Yle Radio 1, Yle Radio Suomi, Yle Puhe, YleX, Yle Vega, Yle X3M, Yle Mondo), a complete selection of online services (Yle Areena – Yle Arenan; Yle ID; Uutisvahti – Nyhetskollen) and produces content in twelve languages: Finnish, Swedish, three Sámi languages, sign language, plain Finnish and plain Swedish, Romani, Karelian, English and Russian.


changes. This has also happened before. Think about radio, television, colour television, HD, Internet, social media and so on. As we want to stay in the market we need to update the technology platform continuously. Technology and content development go hand in hand. Creative people push for new technologies and new technologies enable new kinds of content. We’re constantly creating and looking for better opportunities to maintain and develop the dialogue between the tech people and the creative


people but there’s still quite a lot to achieve.

What is broadcasting like in a country like Finland? What are the main challenges you face in reaching the whole country? Finland is a big country and, as a public service provider, we need to cover it as a whole, both with our content and from the distribution perspective. This means that we have a number of regional offices

to cover all aspects of Finland. From the distribution point of view we have to cater to all kinds of receiving needs. This is not easy.

Yle has lots of in-house TV contents. What are your main production centres? What kind of content is produced in each of them? We have three major production centres. Helsinki is the biggest one

covering the whole spectrum. Tampere covers children's programs, drama and regional TV news. Vaasa is the place for Swedish-speaking programs.

Are all these centres technologically unified? I mean, will we find the same model of cameras in each one of them? The same monitors, the same mixers? We have a fairly uniform architecture in all these





places. This brings us cost advantages and makes things easier from the competence perspective. It is easier for the support personnel to maintain similar structures.

Are any of these production centres IP ready? What's your vision on these environments? Is the broadcast world moving towards IP environments? To IP or not to IP has been a hot topic in the media industry for a while. Now the answer seems clear and IP technologies are maturing. We started our IP implementation with the radio studios in Helsinki. The cost of IP technology is decreasing rapidly and, at the same time, we have increased our own competence in this area. This means that when the next renewal projects come we will be ready for the transition.

In addition, what's your vision on virtualized production? Are you comfortable with tools such as production systems as a service rather than as an on-premises solution? Our slogan is "Everything in the cloud 2026", as Yle turns 100 that year. In short: yes, we believe that more and more of the traditional media systems will be provided as a service.


Do you have external production companies collaborate with you in the production of these TV formats, or do you prefer to do all of this on your own? We work extensively with external production companies. This is done both for creativity and for capacity.

What is your current emission standard? Are you broadcasting in HD? Do you consider moving to 4K + HDR in the next few years? Do you think we're heading towards a paradigm in public service television in which UHD will be a European (or global) standard? We started nationwide terrestrial HD distribution in June 2020. SD distribution will continue for at least another 2-3 years. At the moment we have no plans to distribute widely in 4K or HDR. These standards will be used in internet distribution.

Are any of your sets ready for virtual productions, including the inclusion of AR or MR environments? What graphic system are you currently implementing? VR/AR has been used in various sports programs and for example when reporting on elections. Vizrt is the main platform for us. In Tampere we have had some very

interesting pilots on virtual production using game engines as graphic tools.

What about outside broadcasting? Do you implement mobile video transmission systems? Do you opt for DSNG? And 5G? Will it be a solution for Yle? Mobile is strongly present especially in journalistic work. Some weeks ago our mobile


journalism concept received an EBU award. 5G will replace some of the older-generation communication systems. We are also following with interest the development of private 5G networks. That could be a solution when setting up sports productions in remote places.

Let’s move on to your MAM system. What has been your choice? Does

it integrate remote workflows, commonly needed nowadays? Today we use Avid tools for MAM. In our future vision, all the feeds from cameras and microphones should go directly to the cloud, be processed and stored there, and be taken out only when the consumer uses the media. We are not quite there yet.

VOD platforms are defining a new model of TV consumption. What is Yle's stake in this? Do you consider that the days of linear television are ending? Linear TV will go on for a while; we have a 10-year deal for terrestrial distribution. However, the trend towards the internet is clear. Our Yle Areena playout service is the most widely used video streaming service in Finland. We believe that Yle Areena will be much bigger in the future.

What's the future of Yle? Do you plan to update any of your technological areas in the foreseeable future? We need to renew the whole technological platform during the next five years. It will be based on cloud, IP and IT. This will bring down the cost of present services and enable us to meet all necessary future needs, both on the content-production side and on the consumer side.


From production site to home sofa. One of the tenets voiced over and over during the past few months is that the pandemic has sped up the 'digitalization' of businesses, thus bringing about in a matter of months the changes that were envisaged for years to come. In our industry, this has mainly applied to cloud content-creation solutions. We shed some light on this in the lines below.

By Yeray Alfageme, Service Manager Olympic Channel

Cloud services, compression and bandwidth Ingesting, editing and broadcasting in the cloud is nothing new. Since the appearance of YouTube and similar platforms, content creation in the cloud is something that any 'millennial' knows and in fact uses naturally on a daily basis. However, implementing these kinds of solutions in professional environments, in which the quality of the content created must meet certain standards demanded by viewers, is not that simple.




Not very long ago, the amount of information required for producing high-quality multimedia content would force one to resort to specific storage and processing systems hooked on to the same network. Earlier in this century, 100-150 Mbps would be required to get an editable HD image that could be produced properly. This called for high-capacity storage systems and LAN networks featuring at least 1 Gbps and, therefore, working from specific production sites. With the appearance of cloud services such as Amazon Web Services (AWS), Microsoft Azure or Google Cloud, high-capacity storage came to be offered at reasonable rates, nowadays even more affordable than physical systems, and available anywhere. These providers gradually introduced specific services for the audiovisual industry, the most remarkable example being the acquisition of Elemental by AWS. This move gave Amazon a media processing and encoding portfolio much more advanced than any available from competitors such as Microsoft or Google. The latter also have solutions for media workflows, although not as flexible and advanced as Amazon's. Another driver making remote production of high-quality content possible is the increase in compression rates available nowadays in video codecs. On the audio side, since information needs are much lower, working with high-compression codecs is not as important. At present it is possible to use high-quality HD content with just 30 Mbps, a third of what was needed not so long ago. The latest innovation, and the key to adoption of these kinds of workflows, is the availability of large-capacity home Internet access. In Europe, for instance, 94% of the population has Internet access at a speed of at least 30 Mbps, with speeds between 100 Mbps and 300 Mbps very frequent in all mid-size and large cities. By combining cloud services, high-compression encoding and widespread availability of ample bandwidth, we get the perfect environment for working with media workflows from any place.
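As a back-of-the-envelope check of the figures above, one can compute how many contribution-quality feeds a given home connection supports. A sketch: the 30 Mbps editable-HD figure comes from the article, while the 25% headroom reserved for other traffic is an assumption.

```python
# Back-of-the-envelope feed capacity check. The 30 Mbps editable-HD
# figure comes from the article; the 25% headroom is an assumption.
def max_concurrent_feeds(link_mbps, feed_mbps=30, headroom=0.25):
    """How many contribution feeds fit on a link, reserving a fraction
    of capacity for protocol overhead and other household traffic."""
    usable = link_mbps * (1 - headroom)
    return int(usable // feed_mbps)

print(max_concurrent_feeds(100))  # 2 feeds on a 100 Mbps connection
print(max_concurrent_feeds(300))  # 7 feeds on a 300 Mbps connection
```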


Remote vs. cloud vs. hybrid solutions What we know as the production centre, where all equipment is centralized in a single building (or several buildings, in the case of large organizations with extensive production sites), would be an on-premises solution. No one located outside the premises can get access, either to systems or to content; a connection to the local area network hosting those systems is required.

An initial approach to a work-from-anywhere workflow would be to make these systems accessible from outside the premises where they are physically located, that is, to remote them. This entails not only connecting to the production site from outside, but also being able to operate the units (an editing station, for instance) by connecting to them, while the media flow remains within the local system. The user experience on machines in these kinds of environments is far from ideal, and it is only a solution when not enough bandwidth is available for using the content from outside. The opposite approach would be to have all production systems decentralized, that is, not in a single specific location (sometimes entirely unknown) but accessible from anywhere. This would be a 100% cloud-based system. This

solution has many advantages, as production is not localized and the physical risk is nearly zero. Additionally, as far as costs are concerned, the system is operated on a pay-per-use basis. The only thing we must budget for is the extra storage that will remain unused for a time, or the inactive processing capacity that is only there to be available when the need arises. On the other hand, the risks lie in the fact that all our content is in the hands of a third party, the provider of cloud services, which is something to worry about, sometimes quite rightly, because of reliability and availability issues, especially at critical times. Also obvious are security concerns: how secure is our content, lying there in the cloud, accessible from anywhere? These issues have been largely mitigated nowadays, but remain something to take into account.


Then we have hybrid systems as an in-between solution. In these systems there is a specific location, the production site, in which the whole content is stored securely. This site is connected to the cloud, either through the Internet or by means of a


private connection offered by the service provider, in order to keep a mirror image, partial or full, of said content, thus enabling access to it from anywhere. Furthermore, processing capacity is also shared between the on-premises systems and the cloud systems, making it possible to

streamline the on-premises systems and avoid unnecessary costs from inactive systems or capacities, as cloud services can be used whenever the workload requires. The hybrid approach is the most widespread solution nowadays for mid-size and large production sites. Prices for cloud services are becoming ever cheaper and these


solutions are more reliable and secure than ever, so a 100% cloud-based solution is not at all something unheard of or overly risky. In fact, nearly all (probably all) current manufacturers provide cloud solutions as an alternative to their traditional offerings. The only difference is the kinds of services offered.

IaaS, PaaS and SaaS When it comes to choosing and purchasing cloud services, there are three different options available: Infrastructure as

a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS). IaaS means replicating, in an interconnected external data centre, equipment identical to that available at the production site. Everything from storage to processing power and even network infrastructure is replicated in an environment that is more secure and better connected than the original single location. This option only adds connectivity to our systems; it does not provide increased flexibility, savings or

capacity with respect to the solution located on the premises. It only offers somewhat more reliability, as an external data centre normally has better power, conditioning and infrastructure facilities than our private site. There are no providers offering these kinds of systems any longer. If we need a system like this for legal reasons (sometimes the information must be located in a specific place and meet certain physical security requirements), we must contact the storage provider and install the equipment on their premises. The following step would be Platform as a Service (PaaS). In this instance we are not purchasing a specific infrastructure; in



fact, said deployment is invisible to us and managed by the service provider. With this solution, some storage capacity, a few user licences and some processing power are acquired, and the necessary systems are automatically resized and scaled for the chosen capacities. Although this is a great difference compared to an IaaS system, it is still not 100% optimal, as we could be paying for unused capacities; this could be streamlined by means of a SaaS system. Even so, there are occasions in which acquiring a PaaS system is necessary. If we want to make sure that we will be able to meet any peak workload, a PaaS system is the right choice, even at a somewhat higher cost. Besides, in a PaaS environment we know what we will pay, as it cannot be resized indefinitely. It is true that in SaaS systems limits can be set, but there

are cost variations that the financial side dislikes. The last and most advanced step is Software as a Service (SaaS). In this case, not only are the underlying physical systems transparent to us and operated by the service provider, but the entire internal architecture of the systems, at application and configuration level, is outsourced as well. This ensures that the system will have the highest availability possible, based on what we are willing to contract, of course, in addition to being completely scalable and adaptable to our needs. When contracting SaaS systems, only used capacity is paid for and the cost is fully variable, but also fully tailored to our working requirements at any given time. A certain capacity can be booked, either for storage or processing, and kept idle though available. Yet there are tools that can streamline such capacities and make them really efficient.

Many engineers like me might be scared by the fact that a third party operates all our systems while we just use and manage them at a high level. But from a business standpoint, what we really want is to produce content in a secure, efficient manner, not to swap hard disks or wire the whole building for that purpose.

The cloud applied to broadcast Nowadays there are systems, either PaaS or SaaS, for nearly any known media production system. From pure cloud storage featuring varying security, availability and access-capacity levels, to full-fledged cloud-based content production and editing systems with processing or rendering capabilities, even for direct broadcast from the cloud. Something very frequent nowadays are distributed PAM and MAM systems, which can be found in hybrid PaaS solutions with local equipment as deep storage, and processing and editing performed in the cloud. The best-known manufacturers of editing systems also have SaaS solutions that can be used to edit content by just accessing a web platform from any PC or location. Likewise, MAM systems accessible through the web are common and offer capabilities far beyond local systems, in some instances much cheaper.

Finally, there are full content broadcast systems in the cloud. These systems offer the greatest synergies and advantages for purely digital broadcasts via streaming, as we never leave the digital environment: no baseband video signals are generated at any point, from capture to the final broadcast of content. The truth is that in many instances baseband linear video systems were used because there was no other choice, although, if we stop to think about it, this did not make much sense as a concept.

Conclusions This year 2020 has made us rethink a lot of things and the way of producing

content has been one of them. We have had cloud services for some time, as well as the necessary encoding technology and the available bandwidth to make everything work, but we were missing a final push. We should not limit ourselves to IaaS systems in which we just replicate in the cloud what we had at home; we should also go for more advanced services such as PaaS and SaaS, in order to reap the benefits these concepts bring to our industry. Because it is not only about doing from our sofa the same things we do from a production site, but about making the most of this revolution in order to produce more and better content and do away with the restrictions imposed by a specific location. And viewers do not care where we produce content from, provided it is of good quality, right?


Top 10 things to consider when migrating your VOD library to AWS By Brian Bedard, Sr. Solutions Architect for AWS Elemental



There are lots of things to consider when moving your VOD (Video On-Demand) content library to the cloud. In this article, we cover the top things to keep in mind when migrating media files to the AWS Cloud.

1. Should I migrate my currently encoded files or should I re-encode my library? The easiest way to move content to AWS is to simply upload your existing encoded files into Amazon Simple Storage Service (S3) and continue using them without reprocessing. If your workflow involves AWS Elemental MediaPackage to handle packaging, DRM, or time delay, then this is the easiest path. For example, if you are only delivering HLS and have no plans to alter your bitrate ladder, you can use Amazon S3 as your origin and deliver video using the Amazon CloudFront CDN.
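The "upload as-is" path described above can be sketched in a few lines of boto3. This is only a sketch: the bucket name, key prefix and file path are hypothetical, and the upload call is guarded behind a flag so the helper can be exercised without AWS credentials.

```python
# Sketch of the "upload as-is" path: push existing renditions to S3
# unchanged so the bucket can serve as a CloudFront origin. Bucket,
# prefix and file path are hypothetical; the boto3 call only runs
# when dry_run=False.
import os

def s3_key_for(local_path, prefix="vod/originals"):
    """Map a local media file to its S3 object key."""
    return f"{prefix}/{os.path.basename(local_path)}"

def upload_rendition(local_path, bucket="my-vod-origin", dry_run=True):
    key = s3_key_for(local_path)
    if not dry_run:
        import boto3
        from boto3.s3.transfer import TransferConfig
        # Multipart upload kicks in above 64 MiB, sensible for media files
        cfg = TransferConfig(multipart_threshold=64 * 1024 * 1024)
        boto3.client("s3").upload_file(local_path, bucket, key, Config=cfg)
    return f"s3://{bucket}/{key}"

print(upload_rendition("/media/show_ep01_1080p.mp4"))
# s3://my-vod-origin/vod/originals/show_ep01_1080p.mp4
```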

You may decide to add an additional rendition to your existing adaptive bitrate ladder, often at the top end, given the ever-increasing bandwidth and resolution of end-user display devices. You can do this by using AWS Elemental MediaConvert to transcode additional renditions and then update your existing manifests to include the new renditions. The biggest reason to re-encode your content library is to reduce storage and delivery costs. If your content is encoded with AVC, migrating to a new codec such as HEVC will increase efficiency and lower your costs, even if only some of the players you deliver content to support this codec. You can also encode source videos with MediaConvert using a more intelligent rate control mode such as QVBR (Quality-Defined Variable Bitrate). QVBR can reduce the size of your encoded files by up to 50% over CBR mode. Plus, QVBR is simple to use, works with any type of source content, and encodes at the highest video quality using the fewest bits. QVBR works with either the AVC or HEVC codec and is available at no additional cost.
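The QVBR mode above maps onto a small fragment of a MediaConvert job. A sketch follows: the field names track the MediaConvert `create_job` API, while the quality level and bitrate ceiling are illustrative values, not recommendations.

```python
# Sketch of the QVBR rate-control fragment of a MediaConvert output.
# Field names follow the MediaConvert create_job API; the quality
# level and bitrate ceiling here are illustrative, not recommendations.
def qvbr_codec_settings(codec="H_265", quality_level=7, max_bitrate=5_000_000):
    """Build the CodecSettings block for a QVBR-encoded video output."""
    key = "H264Settings" if codec == "H_264" else "H265Settings"
    return {
        "Codec": codec,
        key: {
            "RateControlMode": "QVBR",
            "QvbrSettings": {"QvbrQualityLevel": quality_level},
            "MaxBitrate": max_bitrate,  # QVBR still requires a bitrate ceiling
        },
    }

# This fragment slots into Settings > OutputGroups > Outputs >
# VideoDescription of a boto3 mediaconvert.create_job(...) request.
print(qvbr_codec_settings()["H265Settings"]["RateControlMode"])  # QVBR
```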

2. Which codec should I choose, AVC or HEVC? Many legacy devices don't have the ability to decode HEVC, so if your target audience relies heavily on those, you'll likely need to retain your AVC renditions. Since the Apple HLS specification does not allow for mixed AVC and HEVC stacks, you can't simply add your HEVC output. If you want to target devices that support AVC and HEVC, then you will need to generate two separate ABR ladders. If your target devices support both AVC and HEVC, then it's always worth the up-front investment in HEVC. Assuming your model is mass distribution, the cost savings on your distribution and storage quickly surpass the added cost of using the AWS Elemental MediaConvert professional tier encoding, which is required for HEVC outputs. To that point, even if you still require AVC for some devices, adding HEVC to your workflow, coupled with QVBR, is a good investment in the long run.

3. Which rate control mode should I use, CBR or VBR? As mentioned above, you should actually consider using QVBR. For most iOS and Android devices, using QVBR with quality level 7 works well. For large TV displays, using quality level 8 or 9 will give

viewers better results. With QVBR rate control, it’s really that simple. Unless you’re targeting a legacy playback device that can’t handle fluctuation in bitrates or technically requires CBR, using QVBR will save you money while maintaining excellent video quality.

4. Should I use HLS or MP4 as the origin source format for my CDN? Because AWS Elemental MediaPackage can ingest either HLS .m3u8 assets or .smil assets, you can simplify your workflow by using MP4 files as your


origin container. MediaConvert will generate fewer files using this process, and the final outputs can still match what you could generate if you were to export HLS from MediaConvert. This method also lowers S3 GET requests when packaging and delivering to Amazon CloudFront. Another important benefit of MP4 files as your origin source is that even if the HLS specification is updated (which it often is), nothing needs to change on your end to take advantage of the updates once MediaPackage incorporates the additions.

In this way, you're future-proofing your HLS delivery. If your end goal is to generate "simulated live" linear channels through AWS Elemental MediaLive using your VOD content, then keeping the files as MP4 also allows you to take advantage of dynamic inputs.

5. When should I use encryption to protect my content?

Depending on the security requirements of your video assets, you may opt to forgo encryption altogether. Many customers who do require encryption leave their source assets in an S3 bucket with encryption at rest handled server-side by SSE-S3. If you encrypt the source asset before storing it in S3, then anytime you want to view the asset you must decrypt it with the matching key. This adds a layer of complexity to any QA or internal workflows where you need to examine the source asset. Since AWS Elemental MediaPackage allows you to handle just-in-time encryption during the packaging process, this becomes a more suitable point for the encryption to occur. With MediaPackage handling encryption, multiple DRM providers can be used and multiple output formats (HLS, DASH, MSS) can be generated at any time.

6. How many renditions should I generate for my ABR output?

Making resolution and bitrate decisions for your ABR outputs is critical. Even if you use QVBR, you still have to choose the resolutions and quality-level settings for your content. You don't want to pay for transcoding, storage, and delivery of outputs that are not going to be viewed by your end users. For the same reason, if all of your users' devices can decode HEVC, you wouldn't want to pay for AVC transcodes. We often point customers to Apple's HLS specification, which includes the base ABR ladder recommendations. Gathering metrics about


how end users are viewing content becomes critical in determining if some renditions are needed or if they can be removed. The key is finding the right balance: handling bandwidth fluctuations without having your users switch bitrates too often during viewing. Bryan Samis wrote a great blog post about detecting devices and serving only the necessary renditions. If you pair this type of functionality with metrics around buffer time, viewing duration, and bitrate-switch occurrences, then you could automate the removal of less performant outputs.
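The pruning idea can be sketched as a simple filter over playback metrics. Everything here is hypothetical: the rendition names, the view counts, and the 2% view-share threshold; a real policy would also weigh buffer time and bitrate-switch counts as described above.

```python
# Sketch of automated rendition pruning from playback metrics. The
# rendition names and the 2% view-share threshold are hypothetical;
# a real policy would also weigh buffer time and bitrate switches.
def prune_renditions(view_counts, min_share=0.02):
    """Return the renditions whose share of total views is at least
    min_share; everything below the threshold is a removal candidate."""
    total = sum(view_counts.values())
    return sorted(name for name, views in view_counts.items()
                  if views / total >= min_share)

views = {"1080p": 52_000, "720p": 31_000, "432p": 16_500, "234p": 500}
print(prune_renditions(views))  # ['1080p', '432p', '720p'] - 234p dropped
```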

7. What settings should I be using for Group of Pictures (GOP) and slices? Generally, the more I-frames in your video, the higher the bitrate, so choosing the right number of frames in your GOP is critical to maintaining a target bitrate while keeping quality high. When setting up outputs in MediaConvert, we generally recommend 2-second GOPs. This fits nicely with Apple's 6-second segment recommendation for HLS and works well with DASH-ISO packaging in MediaPackage. The segment length you choose must be divisible by the GOP length or the player can have issues. The settings for slices are similar, but apply within the frame itself. If you try to encode a 4K (UHD) output with 1 slice, you're going to end up with a lower quality output than if you set it to 4 slices. If your output is 720×480 (SD resolution), 1 slice works just fine.
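The divisibility constraint above is easy to verify programmatically. A sketch, with example durations and frame rate:

```python
# Quick checks for the GOP/segment constraint described above.
# Durations and frame rate are examples.
def valid_segmenting(segment_seconds, gop_seconds):
    """A segment must contain a whole number of GOPs: the segment
    length must be an integer multiple of the GOP length."""
    return segment_seconds % gop_seconds == 0

def gop_size_frames(gop_seconds, fps):
    """GOP length expressed in frames (MediaConvert's GopSize can be
    given in frames or seconds via GopSizeUnits)."""
    return int(gop_seconds * fps)

print(valid_segmenting(6, 2))  # True: 6 s segments hold three 2 s GOPs
print(valid_segmenting(6, 4))  # False: a 4 s GOP doesn't divide 6 s
print(gop_size_frames(2, 30))  # 60 frames per GOP at 30 fps
```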

8. What about storage? When it comes to storage, you should take advantage of simple cost-saving features. First, when your transcodes and QC process are done, you should move your master (source) video files to Amazon S3 Glacier. Second, assuming Amazon S3 is the origin for your transcoded assets, you can take advantage of S3 Intelligent-Tiering, which will automatically move assets between S3 tiers based on their utilization. Finally, don't forget to take advantage of S3 Events to create a watch folder that kicks off MediaConvert jobs using AWS Lambda.
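The two cost-saving moves above (masters to Glacier, transcodes on Intelligent-Tiering) can be expressed as a single S3 lifecycle configuration. A sketch: the key prefixes and the 30-day archival delay are hypothetical, and the configuration would be applied with boto3's `put_bucket_lifecycle_configuration`.

```python
# Sketch of an S3 lifecycle configuration implementing both moves:
# masters to Glacier after QC, transcoded outputs on Intelligent-
# Tiering. Prefixes and the 30-day delay are hypothetical.
def vod_lifecycle_rules(masters_prefix="masters/", outputs_prefix="outputs/"):
    return {
        "Rules": [
            {
                "ID": "masters-to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": masters_prefix},
                # Archive masters 30 days after upload (your QC window)
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            },
            {
                "ID": "outputs-intelligent-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": outputs_prefix},
                "Transitions": [
                    {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
                ],
            },
        ]
    }

# Applied with, e.g.:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-vod-origin", LifecycleConfiguration=vod_lifecycle_rules())
```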

9. How do I monitor my transcoding in AWS? We recommend using Amazon CloudWatch Event Rules to monitor when MediaConvert jobs complete. A simple snippet of code can monitor the success or failure of each job, and

the code can easily trigger Lambda or SNS to keep a workflow moving along.
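The monitoring pattern described above can be sketched as an EventBridge/CloudWatch Events rule plus a minimal Lambda handler. The event pattern follows the MediaConvert "Job State Change" notification; the routing actions ("publish"/"alert") are hypothetical placeholders for your own workflow steps.

```python
# Sketch: an EventBridge/CloudWatch Events pattern that matches
# MediaConvert job completion or failure, plus a minimal Lambda
# handler. The routing actions ("publish"/"alert") are hypothetical.
MEDIACONVERT_JOB_PATTERN = {
    "source": ["aws.mediaconvert"],
    "detail-type": ["MediaConvert Job State Change"],
    "detail": {"status": ["COMPLETE", "ERROR"]},
}

def handler(event, context=None):
    """Lambda target: route on job status to keep the workflow moving."""
    status = event["detail"]["status"]
    job_id = event["detail"]["jobId"]
    if status == "COMPLETE":
        return {"action": "publish", "jobId": job_id}  # e.g. next step via SNS
    return {"action": "alert", "jobId": job_id}

# Abridged shape of the real MediaConvert notification:
print(handler({"detail": {"status": "COMPLETE", "jobId": "1234"}}))
# {'action': 'publish', 'jobId': '1234'}
```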

10. How much will this cost? Given all of the above, there are going to be some calculations involving cost and ROI when re-encoding your content, especially if you are considering moving from AVC to HEVC. Stay tuned for a future post with some pricing examples for VOD transcoding, storage, and delivery using AWS.

Conclusion Hopefully, this article gets you thinking more about some of the considerations you’ll face

when porting your content library to AWS. Once your content is uploaded to Amazon S3, there are lots of possibilities. As a next step, try exploring the Media2Cloud solution, where you can gain more insight using media analysis tools and make your content even more valuable. You can also get a jump start with the Video on Demand on AWS solution, which automatically deploys a workflow that can ingest metadata and source videos, encode videos for playback on a wide range of devices, and deliver videos to end users through Amazon CloudFront. Get started today!


Live X Challenging the limits of streaming 18 years of experience, 7,800 shows broadcast via streaming, 1 million app downloads and 41+ million unique viewers are just four figures from an impressive track record by Live X. This New York-based company offers wrap-around services: design, production, and broadcasting of live events for some of the world's leading organizations. Its clients include the USGA, the Global Climate Action Summit, Verizon and Adidas. And what about projects? Among many others, the Times Square New Year's Eve and the latest Democratic National Convention are worth highlighting. We uncovered more details about Live X and its most recent work with Corey Behnke, Lead Producer & Co-Founder.

Corey Behnke


An interview with Corey Behnke, Lead Producer and Co-Founder First, could you tell us about Live X and what has been your story?

I left as Head of Global Production & Services in 2015 to focus on best-of-breed in live streaming and broadcast. We opened our doors in late 2015 with a strong focus on large event &

content streaming, experiential, remote production and technology services. Today, we will focus on your production and live streaming.


Why is Live X different from other companies in the industry? I think what makes LiveX different is our focus on building products for the services we offer. Many times when clients hire us for a project, they are getting the benefit of the products that we have built specifically for their use case. In addition to broadcasting high profile events, we also provide managed services from white label platforms to many unique system integrations.

Image 1.


Live X has over 18 years of experience and has broadcast nearly 8,000 events. What have been the milestones in your career? What projects are you most proud of?

This will be my 17th year working on New Year's Eve in Times Square. I started as a Production Assistant and for the last 11 years have had the amazing opportunity to create & produce the official worldwide webcast seen by millions of people around the globe. I am also very proud of the fact that we have been doing live remote broadcast production for the last 3 years. Probably the best example is our production for the United States Golf Association and the U.S. Open (Image 1).

You also have this amazing 4K broadcast streaming studio in NYC. Could you tell us more about it? Yes, thank you! We built our small yet highly flexible studio & master control room in 2017. In


Image 2.

addition to having a 4K 12G-SDI infrastructure, it has all of the communication, fiber & IP tools one would expect in a major network broadcast facility.

Delving deeper into live streaming, Live X had a relevant role in the DNC 2020. In addition, you personally served as Streaming Architect. Could you tell us more details about this event? What role took Live X? This was our second DNC having helped support the live streaming from the 2016 event in Philadelphia. I started working on this year’s event in mid-2019 and of course it eventually became very different than what we originally

conceived, due to the Covid-19 pandemic and the subsequent restrictions and protocols put in place across the country. There's a tweet that is a great encapsulation of what we accomplished technically (Image 2). My role as streaming architect was to support the convention teams with solutions & backups for all video, audio, communications and streaming technologies


that needed to be brought online. This included the remotes for the broadcast, as well as IPTV, distribution, IP transmission and also what ended up becoming the show's live virtual audience. The entire LiveX team helped facilitate pre-records from our NY MCR and REMIs for weeks


before the show so that the broadcast & post teams could work with talent and get their content for the broadcast. Our LiveX team in Milwaukee focused on the sub-control for remotes, encoding and final distribution for broadcast as well as coordinating between the hundreds of


latency streaming platform LiveX Director allowed producers, tech managers & lighting designers to have even faster access to sites as they were developing.

What specific technology has been involved in the production of DNC 2020?

United States Golf Association Project.

people on the teams serving digital, communication, broadcast & IT. We also helped to manage and deploy the live virtual audience seen on broadcast and in locations by speakers onsite. Our custom built technology cloud product Virtual Video Control Room was used to facilitate every remote stream by allowing producers, designers, stringers, and coordinators to monitor feeds in near real-time. Our streaming application Rivet (iOS, Android, MacOs, PC) was used to send us additional feeds from stringers, advancers and event participants. Our ultra low

So much technology. Our Rivet app, VVCR, tons of Amazon Web Services, and Blackbird. The experience with Blackbird has been great since we first deployed it this summer. For the DNC we primarily used Blackbird as a multi-user live trimming tool. It has the ability to edit captions live after the fact, which is pretty powerful, not to mention you are editing super fast in a browser. Its flexibility also allowed us to give access to several post teams so they could make any needed packages post-event. Being able to do both, in real time and quickly afterwards, with one tool and everyone completely remote is incredible. The publishing tools for the various social platforms help cut down on the tedium that often comes with publishing quickly to multiple social sites as well.

What are the main challenges when working on an event such as the DNC 2020? The biggest challenge is having the people who have worked on remotes teach everyone the new expectation of what remote production means now, how it looks and what the new challenges become. Many times in the past, broadcasters at a high level would typically throw money and/or labor at a problem or challenge as it arose. With Covid and forced remote workflows, everyone has to adapt to their new reality. My job is to help all of these amazing directors, production coordinators, designers, producers and staff communicate with each other, and to help everyone use the available tech to contribute their unique talent at their particular positions.

New Year's Eve Times Square.

What are the latest innovations the industry has developed to provide the equipment needed to work on streaming events? What technology does Live X use on a recurring basis for these kinds of events? There are so many great innovations happening in our industry today. The capabilities to perform and execute high-level live streaming broadcasts completely remotely over the internet exist, and there are many options to choose from. We have built a lot of our current technology on SRT (Secure Reliable Transport), which is an open-source transport protocol. I like it for its hardware and software interoperability and the fact that it has wide adoption; we have been successfully using it in professional production for over 36 months.
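For context, SRT endpoints are usually addressed with srt:// URLs whose query string carries socket options. The sketch below is illustrative only: the host, port and latency values are placeholders, and exact option names and units vary between SRT tools (ffmpeg, srt-live-transmit, hardware gateways), so they should be checked against the tool's documentation.

```python
# Illustrative sketch: assembling an srt:// URL with typical SRT options.
# Host/port are placeholders; option names and units vary between tools.
from urllib.parse import urlencode

def srt_url(host: str, port: int, latency_ms: int = 1000,
            passphrase: str = "", mode: str = "caller") -> str:
    """Build an srt:// URL carrying common SRT socket options."""
    params = {"mode": mode, "latency": latency_ms}
    if passphrase:
        # A passphrase enables SRT's built-in AES encryption
        params["passphrase"] = passphrase
    return f"srt://{host}:{port}?{urlencode(params)}"

print(srt_url("ingest.example.com", 9000, latency_ms=2000, passphrase="s3cret"))
```

A higher latency value buys the protocol more time to retransmit lost packets, which is the usual tradeoff on long or lossy internet paths.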

What are the future prospects for Live X? Could your services be applied in future events of the 2020 presidential election? Hopefully many! I believe that this current crisis has only accelerated the natural progression that was happening with remote broadcasting, and we have seen this acceleration first hand and continue to do so. What is compelling to me about this moment is that we are forced to confront the challenges and problems together. Regardless of scope or affiliation, everyone’s need to connect is only growing stronger, and whether in politics, sports or entertainment, we want to be at the forefront of creating and supporting those bonds.


Passion, creativity and technique

Chatting with Dagmar Weaver-Madsen allows one to feel at each moment the creative immensity of the director of photography’s universe. Amongst references and experiences, she is capable of accurately conveying all the keys to such a vital area in today’s TV offering. Throughout her career, the American artist has worked in the worlds of advertising, music videos, cinema and TV, the latter being the field in which she has developed two of her most relevant projects: “Dare Me”, a brave proposal for Netflix in which she shows her command of atmospheres; and “High Maintenance”, a display of naturalism and ingenuity for HBO. We provide you with a full account of our interview with Dagmar, who reviewed with TM Broadcast International her influences, her way of understanding technology and some of her most challenging takes, amongst other interesting topics.




First, we would love to hear more about your story. How did you get involved in cinema for the first time? What attracts you the most about the DoP area? When I was a little girl, my sister and I would steal our father’s VHS camera and make little movies. I would film her and her friends. At first, we set them up as plays in front of a static camera and then we learned we could make things appear and disappear, like Méliès. We kept exploring the form and progressing, and I was always behind the camera. I had been very interested in writing and storytelling, and over time I realized how one can use a camera like a pen: it is another storytelling tool. You are shaping someone’s experience of a story in such a visceral way. It is a tool for empathy and shared experience. You can guide someone to experiencing different emotions and journeys. I think that is what draws me most to it. I believe that empathy is the most important aspect of our

humanity and I also think that perhaps today more than ever we need more of it. I think that storytelling and cinematography can open us up to different viewpoints and help bridge gaps in understanding. It can also provide needed escapes and releases when the world can seem too bleak, which I think we can also all appreciate right now. I think if used properly and with conscience it can do a lot of good for our societies. I love being a DoP because you are the right hand of the director helping them to bring their vision to the screen. Helping to ensure the audience member connects with the story and characters and is emotionally affected. I love collaborating with the different departments and bringing together a group of talented artists to create something bigger than any one person could make. I value the collaborations with my gaffers and key grips and how the placement of a

good shadow can really change the mood of a scene. How the angle of the light can make something ethereal or anxiety inducing. I like that it is a job that dictates a need for intense planning and preparation but also requires the flexibility of improvisation and intuition, and the ability to let go of all the planning when a new idea inspires and changes the course. I like that it is ever changing and though you draw on previous experience with each job, no two challenges are ever quite the same. It stays exciting and fresh. I also love that it is a job where you get to experience a lot of sunrises and sunsets alongside people you respect.

What is the “Dagmar” signature when it comes to cinematography? Is there a particular clue to identify your style? I think that often cinematographers try to be chameleons and adapt to whatever the story needs. I think it’s


Photo by Ramona Flume.



important to try and hold that above any personal style. That being said, here are some things that I really love when they seem right for a project. I love when I can incorporate a frame within a frame. I like having a motivated backlight and then just passive bounce or fill, so the character is defined but moving in shadow or soft light. I love when I can experiment with withholding information and when to reveal things to an audience member, either with lighting or framing. For example, in a scene playing with letting a character leave frame and staying with another and hearing the rest of the conversation offscreen, or having them enter back into frame at another point and continuing the shot. I like using that as a tool to make the viewer focus on something and guiding what they should be learning or feeling at that point in the scene. Also, I love using an edit or a cut to reveal a reaction or piece of information at the start of the next scene to

weave two scenes together. I also love nature and trees, so I am drawn to using the natural landscape to mirror or speak to something in the story anytime I can; again, it has to be the right kind of project. I really love when the shots and cinematography make the viewer feel like the character does rather than seeing them objectively. Making them feel the experience the character is going through. The movie that made me learn

Photo by Ramona Flume.

the word cinematographer and want to be one was “Blade Runner”, so I guess I’ll always be chasing that dream and pushing towards there. “Dare Me” brought me a little closer to it with all of our night and fog, and it was so fun to play with those elements. Again, the thing I love the most though is empathy and trying to get the viewer to have it for our characters whether they are “good” or “bad”, so perhaps that is truly the style I’m most trying to have.


High Maintenance, HBO. Photo by Cameron Bertron.

You’ve been moving between cinema and television for the past few years. Do you have a preference between the two worlds? I love aspects of both. I love that television has in many ways moved towards becoming like chapters in a very long story. Each episode is like a section of a super long movie, and this allows for lighting and styles of scene more traditionally thought of as “cinematic”. That kind of TV is exciting to me, and it was so rewarding to watch the fans engaging with “Dare Me” as the longer story unfolded. I like that you get into a long-term relationship with a viewer and take them on a continuing journey each week (or a whirlwind, intense, consuming binge period, depending on their style of consumption). We referenced a lot of movies in “Dare Me” and it often felt like we were making one rather than “TV”, whatever that means.

I have also worked on “High Maintenance”, where it is more of an anthology vignette style and we are building a world, style and characters from the ground up in each small story and then rarely returning to them. It is almost like shooting many small TV pilots/short films. That also gives you a lot of freedom to play and experiment. I always approach what I’m shooting very focused on the story and what is the best way to communicate it and affect the viewer with it; how to make them feel as though they are experiencing the story for themselves. There are, of course, different politics in the two worlds but, at their core, the projects I have worked on have all prioritized story and have had the camaraderie that a good movie crew would have as well. I love when you create a little family of artists, technicians and craftsmen and help build and grow each other’s


ideas and visions, and that happens in both worlds, or at least the ones I have been lucky enough to be part of.

Does current DoP technology limit you or give you infinite possibilities? How do you feel about that? I think that there are so many possibilities with current technology. I also feel that technology can become a crutch and take the place of really thinking through story and storytelling. We have to be careful not to let gear and fancy toys get in the way of making sure that the shots are effectively communicating with the audience member. I have always felt that no matter the size of the shoot I am on, there are infinite possibilities, as we can be creative no matter what we are working with. When I was shooting “10.000 Km” I loved the way light looked when it was broken up by some plastic trash I had found. I attached it to an open frame and put it in front of the HMI, and the texture it

gave the light looked like light going through old leaded glass. It was unique and beautiful, and it was just trash. From the outside, it looked a mess, but what it did to the light was very special and just what I needed at that

DARE ME. Photo by Rafi.

moment. I’m sure someone can make a product to do something similar and that will be fantastic too (and easier to re-create) but it’s good to remind yourself to think outside of the box and be creative with resources. I


guess the glass is half full for me. But also I won’t lie, when I got to do wet downs every night on “Dare Me” and had the “tube of death” pumping in tons of fog along with big lights aloft to illuminate it, I was giddy,

and so happy to have those tools to play with and create the right mood for the scenes. I had so much fun with the material and the artistry that my teams brought to the table. Especially my Key Grip, Mark

Manchester and my gaffer Tom Henderson. It was wonderful to have the technology available to us to do what we wanted to do artistically on “Dare Me”.

In our opinion, “Dare Me” has an outstanding feel and helps create an eerie atmosphere, especially at night. What were the main challenges of this project? Thank you so much. I loved Megan Abbott’s book so much that I wanted very much to do right by it as we brought it to the screen. Also, I was picking up the baton from Zoë White, who is a genius, and I wanted to continue in the beautiful footsteps she laid out for us. I think the biggest challenge was just wanting to honour the previous work of these two women. They both had made such beauty and we wanted to continue working at that level for the entire season. We wanted to explore what modern noir is. We hoped to reference some classic noir and cinema,


Through You. Photo by Cameron Bertron.



and also the photography of Todd Hido and Gregory Crewdson. Megan, Gina and I traded music early on in the process, including chipmunkson16speed‘s “Call Me”, and when I shared it with Megan she responded that it was “such sublime driving-on-misty-nights music. Like a Todd Hido photo set to music”. When I saw that I was excited, as I have long loved Todd Hido’s work and I was excited to nod to it in our exploration of night in “Dare Me”. Our night had to be eerie, haunting, lonely and yet incredibly seductive. There is exciting danger in our night. My gaffer Tom Henderson and I decided to continue pushing our nights from traditional blue to cyan and, in our interiors for some characters, like Addy, we moved into their mood and went with purple, bringing those tones into our shadows and fill. We looked for contrasting colours to counterpoint these and make a rich image. One of my favourites is the grungy

yellow playing opposite the cyan, as the coach and Addy leave the back of Will’s apartment building. It’s dirty beautiful, just like their characters. It was fun to play with colours for what makes modern noir and evoke emotions with them. We were also not afraid of darkness on the show and I was very supported by Megan and Gina, and everyone, to explore darkness and shadow. The showrunners really knew that mood was important for “Dare Me” and had an excellent command of it. I felt very empowered to explore that seductive, dangerous, exciting, mysterious energy cinematically. One of our other challenges on the show was to show Cheer for what it actually is, rather than the stereotype it has been relegated to. Cheerleaders are incredibly strong athletes who push their bodies to perform gravity defying gymnastic feats. We wanted the audience to feel up close and personal in the routines to feel part

of them, to feel the dedication, strength and hard work these young women put into it. We wanted to see the sweat and feel the competition and pressure. During our regionals routine, we wanted to feel the organized chaos of being in the middle of a complex routine, almost like a war zone. Sasha Moric, one of our operators, did an amazing visceral pass, using the Venice in its smaller Rialto mode, covering the dance from inside and following Beth’s emotional journey inside the routine (she hears something devastating just before the routine), and our cheer coordinator and choreographer Amy Wright came up to us after the take and told us she had never seen cheer filmed that way. That was what it felt like to be in a routine: like being in a battle. I was very proud of that and Sasha. We also used the Phantom to show the beauty of their technique and in one episode we slowed down their tumbling practice until they looked like


astronauts floating weightless through space. The show allowed us to meditate on things, to spend time examining the artistry of their movements in high speed. And then come back to the pressures of the scene. All of this worked towards creating the mood, which is so key to “Dare Me” and needs to permeate everything.

What was your camera + lenses package for “Dare Me”? The pilot had been shot on Alexa with Master Primes, but we just missed the launch of the Mini LF, so we couldn’t continue with Alexa, as we had a 4K deliverable requirement to meet now that it had been ordered to series. I did some tests and decided that the Sony Venice was really interesting and gave us something unique for the project. Everyone was very supportive of the decision. I kept the Master Primes and then played with using different filtration for different levels of softening, often using

either Pancro Mitchell’s or Glimmer Glass. We almost entirely shot with primes. I liked the combination of the masters and Venice, especially for what it did with people’s eyes. The book and show are so much about who is watching who and how little looks and glances can carry so much meaning, and I loved how much detail we could see in people’s eyes. Also, our cast had fantastic eyes and my gaffer Tom Henderson is a king of eye lights, among other things. However, the limits of slow motion at the time in Venice meant that we would use an Alexa Mini and a Phantom for our slow motion work. So we were often switching between bodies. Our camera department headed up by Rob Tagliaferri made it seamless and efficient.

You also have been involved in “High Maintenance”, which looks completely different. What is it like to adapt to a predefined cinematic style? Were

you able to show your signature on this project? “High Maintenance” is a very different kind of storytelling than “Dare Me”. It is rooted in reality vs. mood/tone. It never wants to have a look or style that could in any way make the audience member feel separated from the every-day interactions in the story. The viewer should feel that they could have this same encounter on the street in New York. It’s

DARE ME. Photo by Rafi.


constructed naturalism, while “Dare Me” is surreal and moody. On “High Maintenance” it shouldn’t look as if we are lighting even when we are. I worked on “High Maintenance” from its web series days and then onto HBO. When we moved to HBO, the DoPs assumed that we needed to change the style; we felt the pressure of those three letters. But that would have changed the feel of the show and how people interacted with it. We

ended up pushing the look a little, but largely stayed true to our original roots and what people loved about the show to begin with, and that was the best choice for the material. I was happy to flex on “Dare Me” and show that I am capable of other styles and storytelling, because I think often people want to put people into categories. Oh, they make this kind of work. I was getting a lot of calls for soft naturalism and though I like working in that style and often the

stories it complements, it’s not the only thing I like doing, nor the only thing I am capable of. I also love a bold style, as long as it serves the story. The only thing I don’t want to make is something where the look comes at the expense of the story.

You also worked on “Through You”, a VR experience. What is it like to work on a project like this? What were the main difficulties? “Through You” was a very interesting project to


DOP IN TV SERIES Photo by Ramona Flume.

work on. We wanted to do some things that had not been tried at the time in VR. We wanted to have a lot of cuts and camera movement along with the dancers, so that the viewer was part of the energy and dance and not passively watching it play out around them. We also had a small budget. We used a rover for one section of the film (in the diner) where the movement was more linear and the camera could be remotely driven forward and backward with the actors (if you look in the diner you can see me and the operator hidden in costume in a booth driving the rover), but for the rest of the piece we wanted more complex and human movement. But how do you operate the camera when it can see 360 degrees? We came up with the idea of using the latitude of the Jaunt camera to our advantage. We knew that at the time there was not a lot of detail in the blacks, so we put down a black carpet over the entire set. Then we donned all-black head-to-toe suits, not unlike the

ones people think of for green screen. The directors and I inched along flat on the floor and were able to wheel and operate the camera and then pass it to each other in different parts of the room while staying flat to the ground. What small detail of us you could see was then crushed out and removed in post, and we blended into the floor and the camera danced on its own. During the early edits I sort of liked looking down and seeing the ghostly puppeteer operators moving the tripod, and I joked that we should keep us in, but it wasn’t quite right for what we were trying to achieve, so ultimately we were removed. I was quite proud of this idea that let us create a new feeling of movement in a simple and cost-effective way. We were able to use a very new technology but pair it with an ancient theatrical tradition, hidden puppeteers, and create something unique and new.

What technology would

you like to see developed to include in your future projects? I continue to be impressed with all the new advancements in lighting tools. I think that is the department I am most excited to continue to advance. I would love to have more big lighting units that use less power, but still have the same qualities as their


forefathers, so that we could decrease our power consumption without sacrificing quality. I think that would make me feel less guilty when I call for big units.

What’s your vision regarding postproduction? Are you actively involved in the process? I am always involved in the final colour grade, as that is the last work of the cinematographer. It’s the plating of the food that you have all worked together to make, and it’s where you can do any tiny finishes and sweetenings that you were unable to do simply and efficiently on set. It’s critical to finish out projects so they are finalized the way you intended. I have also been very fortunate to often be included in the edits of a number of the projects that I have shot. I love giving notes and feedback when requested. I think because I care so much about the story, different showrunners and directors have shared edits with me and asked for feedback. On “High Maintenance” I ended up becoming an associate producer over the seasons because I would help with feedback on scripts and edits in post-production. It’s an honour to help with this last formative stage of a project.

What’s next for Dagmar?

I’m hoping to continue to find projects with stories that excite me and that hopefully will let audience members feel empathy and connection with something or someone outside of their normal lives. I’ve also been raising my daughter and I’m excited for a time when I can bring her to set and show her all the big toys I use to create different times of day and moods for the stories I’m working on.


PTZ cameras playing key role in IP-based television production By Paul W. Richards, Director of Marketing, PTZOptics

Robotic cameras have been making their way into television production studios more frequently in recent years. Historically, TV productions have required traditional broadcast cameras because of their high visual quality, along with a one-to-one camera-operator relationship. PTZ cameras have now become extremely appealing alternatives that rival standard broadcast cameras, offering connectivity and remote-control options that provide studios with both cost savings and flexibility. As television studios increasingly adopt IP-based production infrastructure, the ability to easily deploy IP-connected PTZ cameras will often make them a preferred choice.

IP connectivity has become a trend sweeping television production, and its adoption in PTZ cameras is no different. PTZ cameras that are network connected can provide IP camera controls for pan, tilt, and zoom along with the ability for operators to remotely configure those operations. IP camera configuration allows producers to toggle aperture and shutter speed on the fly remotely from one central location. Similarly, a single camera operator located anywhere can quickly control an array of multiple IP-connected cameras in order to tweak exposure, white balance, color, and much more. Along with remote color correction and configuration, advanced

PTZ control options are also available. PTZ cameras generally support both manual speed controls and automatic presets in which the speed at which the camera moves can be saved for future automated use. Another ideal aspect of using PTZ cameras for television is their ability to be deployed in almost any position, as they do not need additional space to accommodate a camera operator. Placing PTZ cameras inside of


teleprompters has become a new trend to save space and gain additional functionality. Other ideal mounting locations for PTZ cameras include tripods, walls, and ceilings. Whatever mounting location is chosen, a PTZ camera should have a clear view of the intended capture area. For distance shots, many PTZ cameras feature optical zoom that allows them to capture

high-quality, close-up views of areas from far away. Generally, optical zoom comes with a tradeoff between the maximum zoom magnification and the field of view the camera offers, so it is important to select the right camera for your application. For example, the PTZOptics 30X-NDI camera (with a 30x optical zoom lens) can capture a head-and-shoulders view

of a subject up to 22 meters away. The 12X-NDI PTZOptics camera in the same line offers 12x optical zoom but provides an additional 12 degrees of field-of-view coverage – ideal where cameras need to be placed closer to areas of interest.
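The tradeoff between reach and field of view follows directly from basic trigonometry and can be sketched as below. The horizontal FOV angles used here are rough illustrative assumptions, not published specifications for any particular camera.

```python
# Sketch of the zoom vs. field-of-view tradeoff using simple trigonometry.
# The FOV angles below are illustrative assumptions, not published camera specs.
import math

def coverage_width(distance_m: float, hfov_deg: float) -> float:
    """Horizontal width (m) covered at a given distance for a given FOV."""
    return 2 * distance_m * math.tan(math.radians(hfov_deg) / 2)

def max_distance(target_width_m: float, hfov_deg: float) -> float:
    """Farthest distance at which target_width_m fills the frame."""
    return target_width_m / (2 * math.tan(math.radians(hfov_deg) / 2))

SHOULDERS_M = 0.8  # rough width of a head-and-shoulders frame
for fov in (2.3, 6.9):  # assumed narrow (long zoom) vs. wider telephoto FOV
    print(f"{fov}° tele: head-and-shoulders up to {max_distance(SHOULDERS_M, fov):.0f} m")
```

The narrower the telephoto FOV, the farther away a tight shot can be held, at the cost of overall coverage, which is the selection criterion the article describes.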

IP Infrastructure for PTZ Cameras A solid network infrastructure is important

Broadcast Beat Teleprompter Install



to successfully deploying IP-connected cameras that are designed to distribute video over IP. While many television studios already enjoy IP-based camera control and remote configuration, they may still use SDI or HDMI cable connections for signal transport along with traditional, hardware-based video switchers. For those moving to an all-IP video workflow, careful consideration of networking infrastructure is important. For example, NDI®, the IP video production protocol developed by NewTek, recommends that users have a minimum of a Gigabit network infrastructure. As the popularity of NDI has increased, NewTek has introduced two unique NDI modes that can be used to distribute video over IP. NDI|HX is the latest version of NDI that provides “High Efficiency” IP video connectivity over a LAN (Local Area Network). NDI|HX complements “full” NDI, which features increased quality and higher bitrates but requires additional bandwidth. (See Figure 1)

Figure 1

Figure 2

NDI|HX PTZ cameras generally allow users to select an NDI quality mode, choosing from low, medium, and high-quality settings. Figure 2 outlines the bandwidth usage of an example Gigabit Ethernet IP network at a television

studio. Deploying PTZ cameras on IP infrastructure is further simplified by Power over Ethernet (PoE). Network switches with PoE support can now easily power PTZ cameras over a single Ethernet connection, eliminating the need for a dedicated power supply at each camera.
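A quick capacity check of the kind the figures imply can be sketched in a few lines. The per-stream bitrates below are assumptions for illustration; actual full-NDI and NDI|HX rates depend on resolution, frame rate and content.

```python
# Rough bandwidth budget for IP video streams on a Gigabit link.
# Per-stream bitrates are illustrative assumptions, not measured NDI figures.
GIGABIT_MBPS = 1000
HEADROOM = 0.75  # keep ~25% of the link free for control, audio and bursts

def fits_on_link(streams_mbps, capacity_mbps=GIGABIT_MBPS, headroom=HEADROOM):
    """True if the summed stream bitrates fit within the usable capacity."""
    return sum(streams_mbps) <= capacity_mbps * headroom

full_ndi = [125.0] * 3  # assumed ~125 Mbps per full-NDI HD stream
ndi_hx = [12.0] * 3     # assumed ~12 Mbps per NDI|HX stream
print(fits_on_link(full_ndi + ndi_hx))  # three of each fit comfortably
```

Budgeting with headroom like this is one reason a mix of full NDI and NDI|HX streams can coexist on a single Gigabit segment.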


PTZOptics 20X NDI Camera.

PTZ over IP in Practice Broadcast Beat is a production studio outside of Miami, Florida, that recently upgraded its studios to support IP-based video production with NDI using a NewTek TriCaster TC1 video switcher. From inside the central Broadcast Beat production control room, a single operator can run the entire system when needed. The TriCaster TC1 combines the ability to switch video with integrated PTZ camera controls, allowing the Broadcast Beat production team to multitask with ease.

The Broadcast Beat studios include three unique, pole-mounted teleprompters that feature PTZOptics 20X-NDI cameras inside. These Autoscript teleprompters provide on-camera talent with their upcoming lines, while also acting as the main cameras for productions. The three camera setups surround the studio set, and each is able to produce four to six preset pan, tilt, and zoom camera views. This gives producers 12 to 18 camera inputs to switch between in the TriCaster, all with just three physical cameras. Producers can also take remote control of the cameras’ operations to perform manual zooming and panning shots as desired, and they prefer the responsive iOS app when performing manual control rather than a traditional PTZ joystick controller. “Operating the PTZOptics cameras with the TriCaster TC1 could not be easier,” said Broadcast Beat founder Ryan Salazar. “Deploying

PTZ cameras in our studio has helped us increase the production value by allowing us to use PTZ presets on the cameras. We can set up the various shots in our TC1 and simply click to recall a saved camera preset before we transition to the live video feed.” Broadcast Beat’s experience exemplifies how pan, tilt and zoom cameras can play an increasingly important role in IP-based broadcast production infrastructures. Using PTZ cameras in an IP-based setting can improve production workflows by eliminating the need for multiple camera operators and costly production space. This not only saves money, but can also increase safety during a time when many productions have shifted their in-person staff to being remote. Maintaining this level of flexibility will be an important consideration for television production well into the future.
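Under the hood, preset recall on many PTZ cameras is driven by VISCA-style byte commands. The sketch below is hypothetical: the 81 01 04 3F 02 pp FF layout is the standard VISCA memory-recall command, but the UDP transport and any host/port a given camera expects are placeholders that vary by vendor.

```python
# Hypothetical sketch: recalling a saved PTZ preset with a VISCA-style packet.
# The byte layout follows the standard VISCA memory-recall command; the UDP
# transport and any host/port are placeholders and vary by camera vendor.
import socket

def visca_recall(preset: int) -> bytes:
    """Build a VISCA memory-recall packet (81 01 04 3F 02 <preset> FF)."""
    if not 0 <= preset <= 0x7F:
        raise ValueError("preset number out of range")
    return bytes([0x81, 0x01, 0x04, 0x3F, 0x02, preset, 0xFF])

def send_recall(host: str, port: int, preset: int) -> None:
    """Fire the recall packet at the camera over UDP (placeholder transport)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(visca_recall(preset), (host, port))

print(visca_recall(3).hex(" "))  # → 81 01 04 3f 02 03 ff
```

Switchers and joystick panels that "recall preset 3" are, in effect, emitting a small packet like this to the camera.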


Juan Sebastián de Elcano ship



Enterprise-grade Intelligent and Accelerated File Transfer on the High Seas By Michael Darer Photos Courtesy of Telefónica

Somewhere off South America’s coast, tucked below deck in an early 20th-century topsail schooner, is a 21st-century solution to an age-old problem: military morale. The third-largest tall ship in the world and a Royal Spanish training vessel, the Juan Sebastián de Elcano boasts four masts reaching nearly 160 feet in the air, supporting 21 massive sails, and an enterprise-grade accelerated file transfer solution. In a novel merger of sea and sky, the Spanish multinational telecommunications company Telefónica deployed Signiant software to improve morale, security, and communication across ships and outposts. The

Juan Sebastián de Elcano was just the first of over a dozen vessels on which Signiant embarked. Elena Andres, head of marketing and business development for Telefónica, noted the challenges the Spanish Ministry of Defence faced leading to this development. “There were several problems. One was the lack of connectivity in several important locations. The second one was the security issues exacerbated by the lack of connectivity. Additionally, we wanted to improve the wellbeing of the troops and military people that work in those locations, in part by allowing them to consume content as if they were at home,” Andres says.

To achieve these goals, Telefónica needed to find a media file transfer solution that could prove reliable, even when connectivity was less than ideal. They needed something that offered speed, security, flexibility, and reliability under even the most challenging network conditions. They chose Signiant’s Manager+Agents.

Telefónica and Signiant Founded nearly a century ago in Madrid, Telefónica is one of the largest telephone operators and mobile network providers in the world. With over 113,000 employees and 337 million customers spread across Europe, Asia, and the Americas, the


company has routinely shaped the modern telecommunications industry and the many other industries it touches. When Telefónica first reached out to Signiant in order to deploy their solutions for the Spanish Ministry of Defence, they clearly laid out the seriousness of the challenge. The Spanish Ministry of Defence had found that transferring files, entertainment videos, and other personal communications among naval bases and ships was problematic. The ministry employed many different technology solutions at various locations, and the inconsistency among tools impacted morale and security alike. Juggling this technology bundle was challenging enough on land but even more so for those at sea. As a result, it was essential for the Ministry of Defence to find a single vendor that could streamline its technology stack. Luckily, Signiant was perfect for the job. Not only did


Manager+Agents make it possible to move all the content in question, but-since Manager+Agents-would be one part of a larger technology stack, the power and flexibility of the tool ensured that the Royal Navy wouldn’t once again find itself scrambling to integrate solutions which were antagonistic to each other and their goals.

But there was more to the problem than just a lack of consistency and efficiency. The struggles faced by naval operations were further compounded by understandable problems with connectivity conditions for vessels out of port. If file transfers were dropped between a base and the ships, and they frequently were, that could damage productivity and disrupt important workflows. Even the inability to effectively move personal communications could prove to be a threat to morale and to the ability to manage large numbers of personnel working on vessels and on land.

The challenges Spain's Ministry of Defence was trying to solve were the same as those of any film, broadcast, or VOD enterprise: how do they move numerous, large digital assets that are incredibly valuable, with speed and reliability, across long distances, and among multiple parties? In this case, there were challenging network conditions with high-security requirements. Though what was moved may have been personal-growth education (such as language courses) or important personal communiqués rather than game builds or television dailies, the end game remained the same: important content reaching its destination in the most efficient way possible. Telefónica leveraged Signiant Manager+Agents in a novel application.

“One of the most important issues in this project is trying to manage bandwidth all the time, especially on the ships,” says Jaime Vidal López, Telefónica's project manager. “We have to use a very small bandwidth to share not only the media files but also communications. On top of that, you have all the applications that the staff wants to use, like YouTube, which further takes up bandwidth.” Starting with land-based installations, the first among them in Koulikoro, Mali, the Ministry of Defence and Telefónica quickly saw the efficacy of Signiant's Manager+Agents and moved to deploy it on


ships, beginning with the Juan Sebastián de Elcano. “Content is provided by Telefónica (including entertainment like films, series, programs, etc.) and by the Ministry of Defence,” Andres continued, explaining the workflow. “For VOD content, everything is sent to the Telefónica Data Center and, from there, using Signiant and the network, we send all of this content to each location, whether on land or at sea, so that it can be consumed at each base from the soldiers' devices, through the new OTT platform we set up as part of this project.”

No files lost at sea

Signiant's technology takes those bandwidth challenges and efficiently moves content through narrow pipes and high congestion. For Telefónica and Spain's Ministry of Defence, Manager+Agents solved those challenges despite the complex security and networking issues across limited and arduous environments. The proof lay in the project's primary purpose: providing military personnel with personal and entertainment content to increase morale. “The solution is effective, and the final users are very happy. They understand it as an improvement in their wellbeing. And since it's used in many different places,

[that] makes the users happy and the state happy,” López explains. In part, this is because of Signiant's monitoring capabilities and the power of the Checkpoint Restart feature. Military locations do not always have ideal connectivity, which means transfers can be interrupted as connectivity fluctuates based on the ever-widening range of factors affecting vessels. Signiant's ability not only to quickly report when connectivity is interrupted, but to automatically restart transfers from the point of interruption, saves time and frustration.
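Signiant's Checkpoint Restart implementation is proprietary, but the general idea, resuming a transfer from the last confirmed offset instead of from byte zero, can be sketched in a few lines. The file names and the plain-text state file below are illustrative assumptions, not Signiant's actual format:

```python
import os

CHUNK = 64 * 1024  # transfer in 64 KB chunks


def resumable_copy(src, dst, state_file):
    """Copy src to dst, persisting the byte offset after each chunk so an
    interrupted transfer restarts from the last checkpoint, not byte zero."""
    # Load the last checkpointed offset, if a previous attempt was cut short
    offset = 0
    if os.path.exists(state_file):
        with open(state_file) as f:
            offset = int(f.read().strip() or 0)

    mode = "r+b" if os.path.exists(dst) else "wb"
    with open(src, "rb") as fin, open(dst, mode) as fout:
        fin.seek(offset)
        fout.seek(offset)
        while True:
            chunk = fin.read(CHUNK)
            if not chunk:
                break
            fout.write(chunk)
            offset += len(chunk)
            # Persist the checkpoint after every chunk lands
            with open(state_file, "w") as f:
                f.write(str(offset))

    if os.path.exists(state_file):
        os.remove(state_file)  # transfer complete; clear the checkpoint
    return offset
```

A production tool would checkpoint verified, acknowledged bytes rather than local writes, but the payoff is the same: over a satellite link that drops mid-transfer, only the remainder of the file has to cross the narrow pipe again.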

Telefónica and Signiant prove that excellent file transfer is necessary everywhere

Telefónica has already deployed Signiant at no fewer than 27 locations: 11 land-based and 18 on individual ships. “We chose Signiant because it's a very good system,” López emphasizes. “It's very powerful to be able to get a complete workflow, and Signiant is really easy to understand and use, and to develop those workflows.”

The flexibility of Signiant's solution also proved to be a boon to the project, considering how much additional infrastructure the new system would involve. “Signiant's tools are perfectly integrated into the secure network that Telefónica and Hisdesat [the Spanish government satellite services operator] have deployed for this project,” Elena Andres adds. Telefónica's challenges are exactly what Signiant solves across the global media supply chain every day: media transfers in a challenging network environment. Moving large amounts of high-value content between global locations, across a wide variety of storage types and network conditions, with speed, reliability, and security is a constant in M&E, no matter the conditions. On land or at sea, Signiant software is relied upon to move petabytes of high-value content every day around the world.


Understanding Content Key to Success in Streaming Wars

By Marcus Bergström, CEO, Vionlabs

The rapid growth of streaming services like Netflix, Amazon Prime Video, and Disney+ means that consumers can access more on-demand content than ever before. Along with other challengers such as Apple TV+, Peacock, and HBO Max, the OTT streaming market is saturated, making the battle for viewer attention incredibly tough. The services that will thrive will be those that offer a differentiated user experience, retaining subscribers and increasing engagement through intuitive and ultra-personalized content discovery systems. One of the main reasons viewers decide to cancel streaming subscriptions is the time wasted navigating vast content libraries: on average, users spend more than 25 per cent of their screen time choosing what to watch. Content discovery recommendations are usually based upon information on the video content coming from an

operator’s catalogue, supported by external metadata sources. However, the resulting viewing suggestions are generally fairly simplistic and inaccurate. As a case


in point, a user may have decided to watch Uncut Gems, a recent crime thriller starring Adam Sandler. A regular content discovery platform that relies solely on analyzing metadata would churn out recommendations leading the viewer to some of Adam Sandler's other movies: for the most part, light-hearted comedies such as Grown Ups or Happy Gilmore. The platform might also direct the viewer to its 'Hidden Gems' genre after reacting to the word 'Gems' in the title.

The output of recommendations is only as good as the input, and if providers don't know enough about their content then recommendations will be, for the most part, highly unreliable and, at times, irrelevant. How likely is it that viewers who enjoyed Uncut Gems, a critically acclaimed drama, will be immediately on the hunt for one of Adam Sandler's older, goofier romcoms? Ultimately, viewers can be put off by an apparent lack of relevant content choice and decide to leave their service for a competitor. Streaming providers can increase user engagement and improve subscriber retention by stepping up their game when it comes to content discovery. Accurate and personalized recommendations are crucial to keeping viewers on-screen, and the sooner providers enhance their platforms with higher-quality content discovery based on more than just metadata sources, the better they will fare.


Typical metadata will still play an important role in content discovery, but it isn't anywhere near enough by itself. AI has the capability to solve content discovery challenges that almost all operators face, combining highly detailed video content analysis with the user's viewing history. At Vionlabs, we use a number of different neural networks to identify patterns in audio, color, camera movements, pace, stress levels, positive/negative emotions and many other content features. Our analysis of these variables means that we can extract information determining the emotional impact of movies, television shows and other video, producing the highest-quality content description available: our content fingerprint timeline. Our AI engine understands how changes in the fingerprint timelines are linked to what content users enjoy. A vital component of this is a technique we call content similarity analysis, which compares each content timeline with every other timeline to evaluate the degree of similarity between each content asset. Our engine uses this knowledge, together with the viewer's watchlist, to provide users with highly accurate and personalized recommendations. A viewer who has just finished watching a horror film, for example, may want to move on to something more light-hearted and less intense afterwards. Our platform is nuanced enough to understand this, serving up more intuitive recommendations.

AI-based content discovery will undoubtedly have a crucial role to play in the AVOD and SVOD markets, as well as in linear programming. OTT providers distributing both live and VOD services can better understand their content libraries by using content discovery platforms. They can then utilize their VOD libraries and provide new experiences, including ultra-personalized linear programming. Online music services such as Spotify have been using similarities in songs and music streaming behaviour to formulate channels and curated playlists for users. In the case of video, a similar deployment of content analysis can enable providers to create hyper-personalized viewing experiences through personally curated channels, such as a weekly discovery, Mediterranean cooking shows or historical documentaries, all based on the user's watch behaviour.

As well as being able to provide more nuanced and intuitive recommendations, a deeper understanding of content can benefit OTT providers in other ways, too. One example is in making more informed decisions when it comes to content acquisition and production, giving them an edge over competitors. Knowing what content viewers enjoy, and why they enjoy it, will strengthen a provider's position and allow them to strategize better. From the user's perspective, the streaming platforms that have a more detailed appreciation of their content libraries will be able to offer better user interface (UI) experiences. Often, users are flooded with inaccurate and basic recommendations, leading to a less enjoyable experience. However, with a better understanding of content, platforms can offer a clearer, more relevant display, even moving from countless tiles to a smaller, more focussed carousel, for example.
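Vionlabs' fingerprint timelines and recommendation engine are proprietary, but the pairwise similarity comparison described above can be illustrated with a minimal sketch. The titles, feature values and the choice of cosine similarity below are all illustrative assumptions, not details of the actual system:

```python
import math


def cosine_similarity(a, b):
    # Cosine similarity between two equal-length feature vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def most_similar(catalogue, title, top_n=3):
    # Rank every other asset by similarity to the given title's fingerprint
    target = catalogue[title]
    scores = [
        (other, cosine_similarity(target, fingerprint))
        for other, fingerprint in catalogue.items()
        if other != title
    ]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_n]


# Hypothetical three-feature fingerprints (pace, stress, light-heartedness);
# real timelines would be far longer and per-scene rather than per-title.
catalogue = {
    "Uncut Gems":    [0.90, 0.95, 0.10],
    "Good Time":     [0.85, 0.90, 0.15],
    "Happy Gilmore": [0.40, 0.10, 0.90],
    "Grown Ups":     [0.35, 0.15, 0.85],
}
```

A metadata-only engine would link Uncut Gems to other Adam Sandler titles; a fingerprint comparison of this kind instead surfaces tonally similar films such as Good Time, regardless of cast overlap.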

While content analysis reinforces OTT providers' understanding of their audiences and offers a more personalized user experience, it can also open up exciting new opportunities for advertisers. Increased personalization of content discovery can be used to inform more contextual, targeted advertising opportunities, creating an experience both relevant and engaging for the viewer, while also adding value, and another key differentiator, for providers. In the increasingly saturated VOD market, providers need to stand out from the crowd and engage viewers by offering the best user experience possible. If not, viewers will simply move to another service, and that will become their new go-to content app. The OTT providers that offer a differentiated user experience, with the most accurate, relevant and personalized content discovery platforms, will be in a much better position to survive and thrive in an increasingly competitive marketplace. AI and machine learning have unlocked capabilities to enhance our understanding of video content through detailed and intelligent analysis. There are some fantastic opportunities afforded to OTT providers through AI to keep viewers watching and save them from switching apps. Increasing viewer engagement must be the battleground as the streaming wars enter their next phase. If providers achieve that, then they will be well placed to remain standing when the wars reach their conclusion.


AEQ CrossNET, Kroma's digital intercom

AEQ is one of those manufacturers that, at least in the European market (and most especially in the Spanish market), have always been there, offering quality audio systems. It started off with robust ISDN codecs and, ever since it acquired Kroma, it has been a reliable manufacturer of communication systems. We are now fortunate enough to test the CrossNET system, a proposal for intercom over IP from Kroma by AEQ.

Lab test performed by Yeray Alfageme

More specifically, we tested:

- A CrossNET matrix.
- TP8000 series panels: one TP8416 desktop panel and one TP8116 one-rack-unit panel.
- An Xplorer beltpack and its accessories.
- The Crossmapper software.

All this was interconnected with standard IT equipment: one Netgear Ethernet switch featuring 24 ports (of which only 4 were used) and a Cisco WiFi access point. This functionality, i.e. not having to rely on specific equipment or infrastructure to interconnect systems, is really useful and makes it much easier to deploy the system within an already existing IT infrastructure.

CrossNET Matrix

CrossNET is a compact intercom matrix, one rack unit in height, with basic connectivity through IP based on Dante™ technology that supports the new AES67 standard for transport of high-quality audio suitable for broadcast. Furthermore, it comes with high-quality balanced analogue audio and includes support for Kroma's legacy IP panels and interfaces featuring phone-bandwidth compressed audio, as well as for Kroma panels with a digital audio link. Thanks to its scalability from a 40 x 40 system up to a 168 x 168 system, CrossNET offers a wide range of external connections: analogue ports, digital ports, Dante™ AoIP and low-bitrate IP. Integration of such a wide range of connections within the same unit allows users to reduce the external equipment needed. The matrix is available in the following versions:

- CrossNET 40: 8 Kroma digital intercom ports, 12 broadcast-quality balanced-audio analogue ports and 20 compressed-audio IP ports.
- CrossNET 72: a 32-port Dante™ IP interface plus the same 8 digital, 12 analogue and 20 compressed-audio IP ports.
- CrossNET 104: as above, with a 64-port Dante™ IP interface.
- CrossNET 136: as above, with a 96-port Dante™ IP interface.
- CrossNET 168: as above, with a 128-port Dante™ IP interface.

TP8000 series panels

All panels in this series work with digital audio at 48 kHz and 24 bits, an audio quality much higher than that of previous analogue matrices, making it possible to use the system to convey broadcast audio through them, all compatible with Dante™ and AES67.

Both panels, the TP8416 desktop and the one-rack-unit TP8116, have 16 customizable keys arranged in 4 different pages. They feature individual volume control for each communication point, echo cancelling and a built-in DSP. They also have a dual AoIP Dante™ port, a VoIP port, a digital port and an analogue port. The information is presented on a two-line backlit display, which has an additional line indicating the audio level at the crosspoint.

Xplorer beltpack

This beltpack is based on WiFi technology. You only need to link it to the wireless network where the matrix is connected and it will automatically load the configuration corresponding to its port and crosspoints. Using standard WiFi technology sometimes leads one to think that the delay in communications will be excessive, but in this case you cannot tell the difference between panels connected through Ethernet and the WiFi beltpack. It has four shortcut keys arranged in pages, another two customizable keys and a multipurpose screen. It can work either associated to an intercom matrix or as a party-line terminal. The device's user interface, though somewhat simple, provides everything needed to configure and access all the functions offered by the beltpack, which is closer to a full panel than to the simple beltpacks of the analogue era.

Its plastic-based build may seem less than robust, but the behaviour of the interface and keys is correct, and pressing a key by mistake or accessing an incorrect menu is practically impossible. The range of the beltpack is the typical one for 2.4 GHz WiFi networks, something to be expected as it uses this very technology to communicate with the matrix.

Crossmapper control software

It is a tool featuring an intuitive user interface and powerful configuration possibilities. Through this piece of software we get access to configuration, supervision and monitoring of our entire system.

Configuring crosspoints

The Crossmapper software for PC facilitates set-up of Kroma's intercom systems by means of an intuitive user interface. The user has easy access to the configuration of each terminal in the system, with different options for each key, as well as additional functions such as groups, phone dialling, IFBs, etc. Once created, the map can be loaded onto any of the 8 memory slots in the matrix via an Ethernet port or by using the USB port on the front, and activated without interrupting ongoing communications, seamlessly for users. Crossmapper also allows supervising the status of each communication and terminal at all times, as well as consulting a complete event log afterwards.

Configurable audio levels

Kroma intercom matrices provide independent control of input and output gains, as well as of the audio level for each audio source's crosspoint. Thus, offsetting differences in audio levels between various devices, users, etc. becomes a very simple task.

BTN/ISDN/GSM calls

The matrices support both calls and dialling within the basic telephone network, ISDN or GSM (SIM card). The only thing required is to include a box system and the adequate interface cards. Thus, Crossmapper enables assigning a telephone number to a certain key in order to dial


and receive calls from said user in a quick fashion.

Groups

Thanks to Crossmapper's Groups menu, several users can be grouped in such a way that it is possible to establish communication with all of them at the same time, modifying their audio levels, etc., while always keeping individual control of each one of them.

Configuring keys

By means of Crossmapper's intuitive graphic interface, configuring the destination of each key, the communication type (one-way or two-way, between third parties, etc.), the operation of the relevant key (latch, push-to-talk or mixed) and the audio levels of the ports concerned are all very simple tasks.

IFBs

Interrupted Foldbacks (IFBs) are assigned crosspoints that get interrupted by an event: an incoming call, an outgoing call, from a third party, etc. The system offers various possibilities for IFB, implemented on the matrix and configured through the Crossmapper software. The different modes range from a full interruption to varying attenuation levels for the audio signals involved. IFBs can be used with any device connected to the system and no additional hardware is required for implementing this function.

Conclusions

The CrossNET system is really easy to configure. Based on standard IT technology and capable of interconnecting using universal equipment, including a WiFi access point for wireless systems, it provides remarkable ease of use and a really fast installation procedure.

Within its range of prices and quality, even comparable to much higher-budget systems, it is a solution to be seriously considered for mid-size production environments built on pre-existing IT infrastructure.