TVB Europe 70 January / February 2020


Intelligence for the media & entertainment industry

JANUARY/FEBRUARY 2020

Contenders Assemble!

How technology helped the nominees this awards season


2020 Vision Looking ahead to the new decade in broadcast

Speed of Sound Hear from the audio mixer on Coldplay’s YouTube broadcast

TVBEUROPE SMASHES INTO THE NEW DECADE



www.tvbeurope.com FOLLOW US Twitter.com/TVBEUROPE / Facebook/TVBEUROPE

JENNY PRIESTLEY, EDITOR JENNY.PRIESTLEY@FUTURENET.COM @JENNYPRIESTLEY

THE WINNER TAKES IT ALL

It’s awards season, my favourite time of year, when we celebrate the outstanding achievements of those working in front of and behind the camera in both TV and film. We’ve already had the Golden Globes, and as you’re reading this we’ll be in the run-up to the BAFTAs, Oscars and various guild ceremonies. I find this part of awards season the most interesting, and inspiring, when those who work within specialist fields such as editing, sound design, cinematography and visual effects are honoured alongside the big Hollywood superstars. To that end, in this issue we are celebrating some of the talent that has been nominated during this awards cycle and hearing how technology has helped them to achieve such high-quality work. Because they wouldn’t be accepting those golden statuettes without the innovation and expertise of hundreds of people working on the technology side of the media and entertainment industry.

‘We’re also taking a deep dive into what the new decade could mean for the media tech industry.’

We’re also taking a deep dive into what the new decade could mean for the media tech industry. Who will be the winners and losers in terms of technology and acquisitions in a continuously changing landscape? We’re not just looking at this year, but at what could happen over the next 10. I’m delighted to include contributions from the likes of SMPTE, the DPP, the EBU, technology vendors, analysts and those working in production. There are some fascinating predictions as to the fundamental challenges our industry could face, as well as where we could be as we enter 2030.

Elsewhere in this issue, we hear from veteran broadcast audio mix engineer Toby Alington about his work on Coldplay’s live YouTube broadcast of their album launch from Amman in Jordan, and the challenges the team faced while working in the desert. George Jarrett catches up with the CEOs of both LTN Global and Make.TV about the former’s acquisition of the latter. Dan Meier talks to the team at NativeWaves about their technology that allows viewers to choose their own camera angle when watching sport - something I think will become increasingly popular over the next decade. Plus, Philip Stevens discovers how Sony goes about conceiving and developing new products.

Continuing our look forward, we have lots of plans for TVBEurope over the coming year, and as always I would urge you to sign up for our newsletters at TVBEurope.com to keep abreast of breaking news within the industry, and to see what we’re up to. Enjoy this issue!

CONTENT Editor: Jenny Priestley jenny.priestley@futurenet.com Senior Staff Writer: Dan Meier dan.meier@futurenet.com Graphic Designer: Marc Miller marc.miller@futurenet.com Managing Design Director: Nicole Cobban nicole.cobban@futurenet.com Contributing Editor: Philip Stevens Contributors: George Jarrett, Toby Alington Group Content Director, B2B: James McKeown

MANAGEMENT Senior Vice President: Content Chris Convey Brand Director: Simon Lodge UK CRO: Zack Sullivan Commercial Director: Clare Dove Head of Production US & UK: Mark Constance Head of Design: Rodney Dive

ADVERTISING SALES Account Director: Duncan Wilde duncan.wilde@futurenet.com (0)330 390 6119 Account Manager: Nathalie Adams nathalie.adams@futurenet.com (0)330 390 6305 Account Manager: Daniel Nichols daniel.nichols@futurenet.com (0)330 390 6166

SUBSCRIBER CUSTOMER SERVICE

To subscribe, change your address, or check on your current account status, go to www.tvbeurope.com/page/faqs or email subs@tvbeurope.com

ARCHIVES Digital editions of the magazine are available to view on ISSUU.com. Recent back issues of the printed edition may be available; please contact rachael.hampton@futurenet.com for more information.

LICENSING/REPRINTS/PERMISSIONS TVBE is available for licensing. Contact the Licensing team to discuss partnership opportunities. Head of Print Licensing Rachel Shaw licensing@futurenet.com

Future PLC is a member of the Periodical Publishers Association All contents © 2020 Future Publishing Limited or published under licence. All rights reserved. No part of this magazine may be used, stored, transmitted or reproduced in any way without the prior written permission of the publisher. Future Publishing Limited (company number 2008885) is registered in England and Wales. Registered office: Quay House, The Ambury, Bath BA1 1UA. All information contained in this publication is for information only and is, as far as we are aware, correct at the time of going to press. Future cannot accept any responsibility for errors or inaccuracies in such information. You are advised to contact manufacturers and retailers directly with regard to the price of products/services referred to in this publication. Apps and websites mentioned in this publication are not under our control. We are not responsible for their contents or any other changes or updates to them. This magazine is fully independent and not affiliated in any way with the companies mentioned herein. If you submit material to us, you warrant that you own the material and/or have the necessary rights/permissions to supply the material and you automatically grant Future and its licensees a licence to publish your submission in whole or in part in any/all issues and/or editions of publications, in any format published worldwide and on associated websites, social media channels and associated products. Any material you submit is sent at your own risk and, although every care is taken, neither Future nor its employees, agents, subcontractors or licensees shall be liable for loss or damage. We assume all unsolicited material is for publication unless otherwise stated, and reserve the right to edit, amend, adapt all submissions.

TVBEUROPE JANUARY/FEBRUARY 2020 | 03



IN THIS ISSUE

JANUARY/FEBRUARY 2020

06 Low latency, 5G and the video streaming pipeline

Telestream’s Stuart Newton discusses the impact of low-latency protocols on sports video streaming

10 Making a global partnership

George Jarrett speaks to LTN Global and Make.TV’s CEOs about the future

13 2020 Vision

TVBEurope asks vendors, analysts and stakeholders how the industry will look in ten years’ time

30 A day in the life

Toby Alington recalls mixing live audio for the YouTube broadcast of Coldplay’s album launch

36 Contenders assemble!

We hear from the award season nominees on the technology that helped them make an impact

48 London calling

Philip Stevens investigates the London operations of CNN and CNBC

61 Close to the edge: Television at CES

Tom Butts reports from January’s Consumer Electronics Show in Las Vegas

62 In the driver’s seat

NativeWaves’ Marcel Hasenrader describes the latest advancement in second-screen technology

67 Life after production


Sundog Media Toolkit CEO Richard Welsh tells Dan Meier how localisation extends the life of content



OPINION AND ANALYSIS

Low latency, 5G and the video streaming pipeline

By Stuart Newton, VP Strategy and Business Development at Telestream

From attending some of the recent video streaming seminars, it’s clear that sports content providers (as well as betting companies) are driving the industry to provide low-latency streaming as soon as possible. For the content providers, this is to provide an end-to-end delivery chain from camera to screen in a timescale comparable to traditional broadband, satellite and terrestrial channels. Hearing others cheer 20-40 seconds before you see or hear the goal is a common issue that desperately needs to be solved, but syncing live streaming with broadcast provides other opportunities: companion screens with different camera angles in sync with the main broadcast, or simply making sure what you see on the screen matches the timing of the sports alerts popping up from social media.

One of the big “themes” being pitched to address this is 5G, but it’s really important to understand that unless a single company controls the entire video delivery architecture from camera to mobile device (which is very rare today), 5G is just one piece (albeit an exciting piece) of the end-to-end puzzle for video delivery. There have been many successful trials of 5G for video streaming: from multiple cameras at a stadium back to spectators in the venue, from golf course greens back to outside broadcast trucks, and many other permutations where 5G will revolutionise the provision of high-speed, low-latency bridges between locations. And 5G will be really exciting for all low-latency applications where there needs to be rapid communication into and out of the 5G network, such as car telematics, IoT, or augmented reality. However, for end-to-end delivery of video from a live sports event to general

sports consumers, it’s important to understand that low-latency streaming across that entire workflow requires all elements in the chain to be tuned for low latency. If that’s not the case, a super-fast 5G pipe on the end of the delivery chain will add some benefit for “last mile” speed and latency, but will not resolve the end-to-end latency issue.

WHAT DO WE REGARD AS LIVE IN TERMS OF LATENCY?

In live sports streaming from events to mobile platforms (or smart TVs), the original video contribution from the camera feeds is typically routed to a video headend, which encodes the streams into the various resolution formats, conditions them for advertising, adds digital rights management, and chunks the video up into the relevant adaptive streaming protocols (e.g. HLS or DASH) for storage on an origin server. The content delivery networks (CDNs) then typically pull the HLS and DASH manifests (instruction files for how to access the chunks) and video chunks from the origin server, and distribute them across the content delivery network for caching (temporary storage) at the CDN edge locations, so they are available to many people via the broadband, cellular (e.g. 5G), or Wi-Fi clouds as required by the consumer devices.

To achieve low-latency streaming (say 3-10 seconds camera to screen), and especially ultra low-latency streaming (<3 seconds), every element in the chain needs to be in tune. This is very different to the 30-50 second delay typically experienced today, which is mainly due to the inefficiency of chunk delivery across the packager-origin-CDN process, coupled with the player typically buffering 3-4 chunks before the video is actually

‘Combining low-latency protocols with super-fast last mile delivery mechanisms such as 5G will have an even greater impact on the live sports streaming experience.’


delivered to the screen. When these chunks are typically in the 6-10 second range, it’s easy to see why the current latencies exist. The new low-latency and ultra low-latency protocols being used in early trials attempt to solve this issue by tuning the manifest and chunk delivery across the entire chain: manifests are made available as early as possible, and “partial” chunks of the video are made available from the packager and sent out as soon as they are ready. This provides a mechanism for sending the video in near real time, rather than waiting for the larger video chunks to be completed before sending them across the network. Maintaining backwards compatibility with existing protocols is an essential part of this, so devices or CDNs not upgraded to deliver the new low-latency mechanism have a fallback to the existing manifest and chunk retrieval methods.

QUALITY TRUMPS LATENCY IN LIVE STREAMING APPS

There are several promising competing low-latency protocols (each with pros and cons) and time will tell which will become the adopted standard. Early trials are happening now, and the protocols will gradually adapt over the next few years as the industry embraces them and identifies potential improvements. Low-latency protocols will have a huge impact on sports video streaming, and combining low-latency protocols with super-fast last mile delivery mechanisms such as 5G


will have an even greater impact on the live sports streaming experience, especially where 5G enables the viewing device to access higher resolutions and HDR formats than 4G allowed. However, the complexity of these new protocols brings in another factor that will absolutely need to be addressed: quality.

Monitoring end-to-end delivery of adaptive protocols today is a specialised topic, and has evolved over the last eight years to provide excellent visibility into issues around adaptive video delivery. During times of architecture change, monitoring becomes even more critical, and it has to evolve as fast as possible to keep up with the new protocols and provide robust, actionable data to help isolate faults and tune the network for improvements. With three major industry shifts over the next few years - to new low-latency protocols, Cloud-based video architectures, and the rollout of 5G-based streaming - video monitoring is going to be absolutely critical to making sure content brands are protected, the experience is good, and people perceive value for money. The potential of these three major shifts coming together will transform video delivery and the viewer experience far beyond anything people are used to today, but only if the viewing quality experience matches that potential. Having seamless operational visibility of low-latency video delivery from camera ingest to Cloud, CDN, ISP and ultimately the end user will ensure that, whatever wonders the next-generation services bring, the viewing experience will be in tune.
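The chunk-and-buffer arithmetic described in this article can be sketched in a few lines. This is an illustrative model only: the per-stage figures below are assumptions chosen to sit inside the ranges quoted in the text (6-10 second chunks, a 3-4 chunk player buffer), not measurements of any real deployment.

```python
# Illustrative glass-to-glass latency model for chunked adaptive
# streaming (HLS/DASH). All figures are assumptions, not measurements.

def glass_to_glass_latency(chunk_seconds, buffered_chunks, pipeline_overhead_seconds):
    """Player start-up delay is dominated by whole buffered chunks;
    encode, packaging and CDN propagation add a roughly fixed overhead."""
    return chunk_seconds * buffered_chunks + pipeline_overhead_seconds

# Conventional delivery: 8 s chunks, 4 chunks buffered, ~10 s of
# pipeline overhead -> 42 s, inside the 30-50 s range cited above.
conventional = glass_to_glass_latency(8, 4, 10)

# Low-latency delivery: 1 s "partial" chunks mean the player can start
# with far less buffered media -> 6 s, inside the 3-10 s target.
low_latency = glass_to_glass_latency(1, 3, 3)

print(conventional, low_latency)
```

The point of the model is simply that shrinking the unit of delivery across the whole chain, not just speeding up the last mile, is what collapses the delay.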




OPINION AND ANALYSIS

Where are we in the process of adoption of AoIP?

By Bob Boster, president, Clear-Com

The adoption of AoIP standards gives people choices as they move further into the IP world, but these choices can throw up uncertainty. Which standard is right for a particular situation? Do we have to choose just one standard, and then live and die by it? And how are manufacturers supporting customers in the changing AoIP landscape?

Firstly, let’s look at the differences between two such standards, AES67 and Ravenna. From the perspective of someone designing a system, there is not a lot of difference between them; they both deliver audio signals over IP and enable flexibility in where audio equipment can be located and how signals travel around a facility or live production. It’s easy to develop an intercom system with either of these protocols, assuming you are working with equipment from the same manufacturer and are therefore using whichever protocol within the closed environment of a unified system. The complication comes when you want to plug A into B and each element uses a slightly different implementation of the AES67 or Ravenna standard. For example, a common application would be to interface an audio routing system and a console into the intercom. The audio isn’t hard; it’s the handshake with the other elements that is the challenge. Ravenna was ahead in terms of integration, so that might be a little easier, but fewer systems currently speak Ravenna, so your choices would be limited.

Then there is Dante, a proprietary standard. Dante makes the integration part easier as it comes with its own toolkit to plug A into B, but it’s more expensive for end users, as manufacturers have to pay a licence fee to Audinate for the technology, which impacts their equipment costs. The live performance and fixed installation markets have embraced Dante, but Dante is less attractive to broadcast than AES67, which bundles in with video standards more effectively.
The ST 2110 standard set, which includes AES67, also has a much richer toolset than either Ravenna or Dante for integrating audio and video. It is important to note that all these standards are gradually aligning - Ravenna and AES67-within-ST 2110 are starting to look very much the same, and Dante is moving towards this alignment too. The benefits for


users are potentially huge, allowing broadcasters, production studios and other venues to combine studios and resources as a result of IP audio networking. The flexibility of these systems makes it possible to change a studio from one network affiliate to another in a matter of minutes. That includes reconfiguring everything, from remote venues to mix-minuses and talent mics. No longer do users have to keep dedicated studios for each show or affiliate; one very flexible studio can serve multiple purposes.

On a practical level, IP audio is much easier to move around and far more scalable for the fast-paced productions that go on in production environments these days. Changes can be made more dynamically, and at least theoretically integration can be more meaningful. However, there is still a significant cost implication - it’s more technically challenging to configure and manage a media-capable IP network infrastructure, plus users will be paying to power and cool expensive switches throughout the life of the system.

Manufacturers are also working hard to support customers whatever their choice, but one of the challenges facing established manufacturers is the ‘AoIP standards arrival’. Clear-Com had been offering AoIP-based products for more than a decade before these standards were introduced, so while the new standards were revolutionary for broadcast, for us they simply meant an engineering task to ensure that our cards would speak AES67 as well as continuing to support the older protocols. This means we can support a range of choices for our customers, as we expect they will continue to use different standards and require integration with a variety of other systems. The other big challenge is that some of the elements pertaining to integration - which is really the key factor - are still being nailed down by the standards committees.
As a member of AIMS we are fully committed to supporting the standards ‘families’ applicable to our products as they are formalised and deployed, but we know we need to be ready for further changes. Different AoIP technologies have their own benefits and compromises, but in the end, improved flexibility, interoperability and features like ‘discovery’ will mean broadcasters can adapt their systems as their own needs evolve and demands change.
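To make “speaking AES67” slightly more concrete at the wire level: the standard’s baseline interoperability mode is 48 kHz linear PCM carried over RTP with a 1 ms packet time. The sketch below works through the payload arithmetic for that mode; the 24-bit sample depth and stereo channel count are illustrative choices on our part, not something mandated by the text above.

```python
# Payload arithmetic for a baseline AES67 stream: 48 kHz PCM over RTP
# with a 1 ms packet time. Sample depth and channel count here are
# illustrative assumptions (24-bit L24 encoding, stereo).

SAMPLE_RATE_HZ = 48_000
PACKET_TIME_S = 0.001      # AES67 baseline packet time: 1 ms
BYTES_PER_SAMPLE = 3       # 24-bit linear PCM (L24)
CHANNELS = 2               # stereo, for illustration

samples_per_packet = round(SAMPLE_RATE_HZ * PACKET_TIME_S)
payload_bytes = samples_per_packet * BYTES_PER_SAMPLE * CHANNELS

print(samples_per_packet)  # 48 samples per channel per packet
print(payload_bytes)       # 288 bytes of audio per RTP packet
```

One packet every millisecond per stream is also a reminder of why the cost point above about media-capable switches matters: these networks must move a steady cadence of small packets with very low jitter.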


OPINION AND ANALYSIS

Volumetric Video: The next frontier in moving images

By Valérie Allié, technical lead, InterDigital Research & Innovation

Since the earliest experiments with motion pictures in the 1880s, moving images have entertained, delighted, informed, and enhanced our lives. As we continue to look for ways to make video and visual experiences more complete, the goal, and the challenge, is to evoke the look and feel of the third dimension from a frustratingly two-dimensional medium. But thanks to volumetric video technology, we are finally getting closer to that goal.

WHAT IS VOLUMETRIC VIDEO?

When an object is viewed from two points that aren’t along the same straight line, the resulting displacement of its apparent position is called parallax. Our binocular vision allows us to experience parallax, in which nearer objects appear to shift more, and it is how we interpret depth and distance in our visual field. Often imperceptibly, each of our eyes sees a slightly different version of what’s in front of us, and the fusion in our brains of these slightly different images is how we perceive depth. Unsurprisingly, replicating this process on a two-dimensional screen is extraordinarily complex. Rendering static images to appear three-dimensional is very challenging, and rendering moving images is even harder.

The term “volumetric video” describes modern approaches to using parallax to present moving images that are more realistic, immersive, and “three-dimensional”. To achieve parallax, we must combine the digital signals from multiple cameras focused on the same scene to generate volume. Adding other sensors, like depth sensors, can help to enhance the quality of the volumetrically reconstructed scene. The combination of data from these sources allows us to create a visual image that isn’t merely two-dimensional.

Sports is a use case that holds significant and exciting promise for volumetric video. Most sports fans watching a game on TV will not want to watch an entire game in a volumetric feed, but certain elements, like replays, could become much more immersive with volumetric video. Viewers could not only select the angle from which they view the game-winning goal, they could also view it volumetrically, with a sense of depth that approximates reality. Viewers could choose a bird’s eye view, or a player’s view on the field, or choose to watch the replay from the perspective of the home team, and then re-watch it from the vantage point of the opposing team. With this


sense of depth, it’s exciting to imagine how much more interesting and immersive sports experiences could be.

WHAT’S THE CATCH?

As with any exciting innovation, there are challenges to implementation. Sports, and other use cases that involve live broadcast, would require multiple cameras to capture a volumetric image with parallax and depth cues, as well as traditional 2D images. When a replay is needed - whether to show a goal, a penalty, or another significant play - the volumetric feed would need to be quickly rendered and delivered. In instances where it is not possible to equip every camera in the stadium with such capabilities, certain cameras in predefined zones covering vital positions of the game could be equipped. On replay, broadcasters would need the ability to switch to the volumetric feed and render the images quickly. The speed with which the video would need to be rendered presents challenges, especially in a live sports broadcast, where the renderer will need to adapt to the capabilities of a wide range of displays to allow each viewer to be immersed in the replay scenario. There are also distribution challenges: video streams will need to be segmented so that only users with volumetric viewing capabilities receive those streams. The MPEG-I (MPEG Immersive) standards groups are currently working on this, to allow real-time distribution and rendering of immersive content.

A SENSE OF FUTURE VIDEO

We are at the cusp of an exciting and transformative time. The volumetric revolution could easily be as radical as the move from silent films to talkies, or the shift from monochrome to full colour. However, there are key challenges the industry faces before we realise a volumetric future. While we are seeing examples of volumetric effects on images today, moving images present an added complexity to the process of capture and rendering.
Volumetric video has the potential to revolutionise how we imagine and interact with moving images, bringing us a step closer to the visual representation of reality we all desire. Ultimately, the industry must work together to develop technologies that can make a volumetric future a reality.
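The parallax relationship this article builds on can be written down directly. For an idealised pair of parallel cameras, depth is inversely proportional to disparity (the apparent displacement of an object between the two views): Z = f·B/d. The camera numbers below are illustrative assumptions, not a real rig.

```python
# Idealised depth-from-parallax for two parallel cameras:
#   Z = f * B / d
# where f is focal length (pixels), B is the camera baseline (metres)
# and d is the disparity (pixels). All figures are illustrative.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Nearer objects shift more between the two views, so computed
    depth falls as disparity grows."""
    return focal_px * baseline_m / disparity_px

# Same hypothetical camera pair, two objects in the scene:
near_player = depth_from_disparity(1200, 0.5, 60.0)    # 10.0 m away
far_goalpost = depth_from_disparity(1200, 0.5, 6.0)    # 100.0 m away

print(near_player, far_goalpost)
```

A volumetric capture pipeline effectively inverts this relationship across many camera pairs at once, which is why capture density and rendering speed dominate the challenges listed above.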



FEATURE

MAKING A GLOBAL PARTNERSHIP

Following LTN Global’s acquisition of Make.TV in September 2019, George Jarrett speaks to both companies’ CEOs about plans for the future

PICTURED ABOVE: Andreas Jacobi (top); Malik Khan

There were three remarkable aspects to the acquisition of Make.TV by LTN Global late last year. The first was that it was LTN’s third buy in over seven months. The second was that Make.TV gives LTN a considerably wider customer appeal, stretching deep into Cloud-native services from its core strength of IP video transport. And the third was that both LTN and Make.TV were highly attractive acquisition targets, wanted by desperate corporations looking to morph away from places they didn’t want to be.

The companies acquired by LTN in March and July 2019 were the Niles Media Group, specialising in content creation and remote TV production, and Crystal, with its software for user-automated monitoring, control and metadata management. Talking about the very friendly link-up between their two companies, Malik Khan, executive chairman and co-founder of LTN Global, and Andreas Jacobi, CEO and co-founder of Make.TV, begin by explaining their positions pre-deal.

“We were very fortunate because we had never taken institutional money,” said Khan. “Because of the rapid pace at which we grew, we ended up not needing a lot of capital.

“Timing is everything, and when we acquired Make.TV it was an all-cash transaction, so we did not inherit any of its (eight) investors. One of the things that attracted us to Make.TV is that they were born in the Cloud and their architecture is Cloud native,” he continues. “They came from a virtual world, and the rest of the world is just starting to turn into that realm.”

“The sheer amount of content forces networks and content owners to make more with less. With the decreases in reach and revenue, they have to create more content!” ANDREAS JACOBI

Jacobi adds: “We were lucky before the acquisition in that we had the investor back-up for three years, which enabled us to invest in developing the software. The only thing that has changed really is that we don’t have to go from round to round any more, and we were pre-profitable.

“From the very beginning it was mainly a customer fit, and 100 per cent matched our understanding of how the industry has developed. On top of that our products are totally complementary,” Jacobi says.

LEVEL OF INDEPENDENCE

This alignment of thinking has taken LTN from transporting IP streams through the public internet, something of a walled garden, to moving workflow management tools around the Cloud by adding Make.TV’s live video router on top of its solution. How does R&D shape up now that Make.TV is an LTN Global company?

“We believe that a team that owns its own destiny and is specifically interested in a very OAM (operations, administration, maintenance) technology really needs to have a level of independence from other things. They need to service that market,” says Khan. “The days of centralised command and control-based innovation have gone. We have teams in one very specific area, and this is the right way to innovate and create solutions that match with customers.”

“We are going to stay independent, but have some joint initiatives to integrate accordingly. The R&D for the transport business will still focus on optimising IP transport, which means making it easier for users to execute and work with encoders and decoders,” states Jacobi. “When it comes to transporting IP streams, 5G will play a big role.

“We are stopping our R&D work on network management and transport, which was our biggest loop point when we totally relied before on the network capabilities of Cloud vendors,” he adds. “They are pretty limited, especially compared to what LTN is offering, but we are still running in the Cloud environment.
With the IP of LTN we are able to manage the network and the reliability. Both technical teams are great, and we don’t want to deal with unnecessary merge efforts.”


MANY SMART PEOPLE

LTN Global exhibited at IBC, which is where the Make.TV deal was announced, so what did the duo learn about the market formerly known as broadcast in Amsterdam?

Jacobi takes this one, and raises the issue of C-Band problems for TV stations. He says: “What we learnt was two things. The transformation, from satellite to IP, finally happened, and a lot of things are playing into it here.

“We are a driver for moving satellite transport to IP, and the industry is recognising that. It enables customers to push all the industry, especially the tech industry, to move their services into Cloud-based environments,” he adds. “The second phase is moving the production, asset management and the programme management systems up to workflow tools like ours, and into Cloud environments.”

In creative speak, remote and de-centralised means that creative people are available to take part in productions, in whatever role, with an internet connection at the browser. “This has been the fun part, where the media industry will be able to create new formats, and change roles on the editorial side,” explains Jacobi. “What we also learnt from IBC is that the necessary infrastructure phase is happening.”

Before the acquisition the two companies were technology partners. So how will partnerships work going forward?

“The things that we invent are unique to us and there are many smart people developing other, complementary technologies, and being in the Cloud we can resource those partners in a very easy way,” says Khan. “You have so many apps that perform functions that were unimaginable 10 years ago.”

Jacobi adds: “We want to be our own platform for signal routing and we are comfortable with all the media asset management systems, production management systems, the VoD video systems and live editing systems, so on the editing side we are Media Composer and Premiere.”

There are links to other brands like EVS and Dalet, but everything that comes into the Make.TV platform is recorded and then transferred to a MAM system, which is where the editing of highlight cuts during live sports will happen.

“The first super product point for us has been being compatible, and supporting all the industry standards for I/O, for live and recorded. The second step is continually building our strong partnerships with market leaders like Avid, across three areas,” says Jacobi. “The first set is enabling new developers to start creating small apps that can sit on top of our operating system.”

When it comes to apps, many customer groups have their own software R&D teams. Has Jacobi found these to be strong?

“We have found both! The big media companies know that it is super important to be at least aware of what they are using, so they need a strong IT team and a software development team,” he says. “Al Jazeera has a very strong team, and they are working very well with all our development tools.”


A EUROPEAN CITIZEN

The small problem LTN Global had was being seen as a strong US presence, so how does Make.TV help to make the global thing happen?

“Most US companies, when they think about global expansion, build a sales force and a re-seller network, but by doing that you stay as an American company with just a sales arm in Europe or Asia,” says Khan. “A big consideration with Make.TV was that we wanted to become a European citizen, and it has a very strong presence in Cologne, with great technical resources and an organisation that is developing cutting-edge technology.

“We are interested in expanding that development presence, and are in the middle of hiring as fast as we can,” he adds.

There are plans for a tech centre, operating centres that run 24/7, and rolling out the sales, marketing and tech support organisation “in the right way”. What new technologies does Khan expect to deploy to best advantage, starting with 5G?

“That is a really important change in our connection environment. When it rolls out in its full glory we expect a substantial increase in the ubiquity of connections and no last mile issues. We can take our solutions anywhere in the world where there is a 5G architecture. The bandwidth assures a significantly improved quality of consumer experience,” he says.

“We have also invested heavily in ML systems and AI, and a lot of that has been delivering things like closed captions in different languages (automated translation services), and the ability to insert localised advertising into content. There is a whole slew of things you can do inside the network in an automated fashion with little or no human interaction.”

“If you are having the conversation about technology change, disruption and the future of the media industry, how can you talk about that without involving advertising?”
MALIK KHAN

Now that both companies are so muscular in terms of capability, surely their technology would appeal to industries beyond media? “We already work in the area of e-medicine, with people who are using our technology to have a local surgeon perform an operation while it is observed live by someone with a lot more experience, maybe sitting 1,000 miles away in another hospital,” says Khan. “That overseeing role is happening now. There are just a huge number of applications that are going to use this, and we know it is not just specific technology for a specific market that we have.”

TVBEUROPE JANUARY/FEBRUARY 2020 | 11


FEATURE
Before buying Make.TV to escape his walled garden, how did Khan react to the acquisition approaches he faced? “We are at a very early stage of growth and the adoption of all the technologies, and so we would really like to have the chance to build a set of solutions that are widely adopted by customers. We chose that path rather than the path of getting acquired, and once we chose that path it made it very obvious to build some complementary solutions, and accelerate the development of a bigger, more managed solution for the marketplace,” he says.

“Thankfully we found companies like Make.TV, Crystal and Niles, and every acquisition we have made has been a friendly process,” Khan adds. “At the end of the day what we are acquiring is people. Technology without people is nothing. It has been a good experience.”

CLOUD-BASED PRODUCTION IS A MUST
Stepping back from the racing pace at which the market is moving, Khan notes a quiet organic revolution in which many are playing catch-up. “I don’t know a single media company that isn’t either experimenting with Cloud-based tools or seriously looking at starting that process,” he says. “We see the same thing with satellite, where every media company is starting to say they have to look at terrestrial IP for distribution because they have to service so many different customers with different language, advertising and graphics requirements.

“They have to say to themselves, I cannot keep on investing in an on-premise location,” he adds. His thinking here concerns the move from sending one channel to thousands of locations, to sending thousands of different versions to far fewer people for each version of the channel. Cloud-based production is a must. “You have a flexible, high-capacity and scalable distribution solution that can be a feed to 100 locations or 100 feeds to 10 locations. The compelling economics accompany the demands that consumers are putting on our customers.
Everyone wants something that applies to him or her - and if your competitor offers that and you don’t it is a significant disadvantage,” continues Khan. “Teaming up with early adopters is essential because the learning in the industry is happening right there,” he adds. “In terms of the features, innovations and functionality that we put in the Cloud, what better place is there for us to learn and

work? There is an incredible and broad internet-based demand by customers, and the devices on which consumers are viewing video content have gone nuts, so having a software-based, Cloud-based solution, with LTN providing very high quality in and out of the Cloud, is what the conversations with the market have been about for the last four months.”

“We want to enable customers, especially in news broadcasting, to get content when it happens.”
ANDREAS JACOBI

CONTENT WHEN IT HAPPENS
The Make.TV curation features have been big factors in the Tour de France coverage (including facial recognition) and many esports apps. “Technically speaking we are a live TV router, but the next stage of our product development will take in more and more topics like managing content rights and automating routing decisions based on the starters of feeds, performance and destinations,” says Jacobi. “We want to enable customers, especially in news broadcasting, to get content when it happens.

“We will enable them with tools and services to quickly curate and make smart decisions about which content can be used in which programme on which channel, so we will focus on curation features and on enabling automation for routing and distribution.”

At CES, LTN showed a demo involving TV manufacturer Vizio plus Fox, CBS, Warner Bros and other big hitters. “The CES demo involved a real on-air live feed brought onto smart TV sets, where adverts were inserted specific to the household that owns the TV set,” says Khan. “It captured the imagination of a lot of people because now you can take programme information that LTN has access to – household demographic information with all the privacy in place – so that content information plus the demographics of the receiver now allow you to create a live real-time ad option for people who want to access that audience or that household during specific programmes they know that they are likely to watch. You can sell very specific products.”

“The amount of money that advertisers are willing to pay for a minute of time for 1,000 eyeballs or pairs of eyeballs rises pretty rapidly if you become more and more addressable. We see that happening.”
MALIK KHAN


© Sam Richwood

As we enter a new decade, the media technology industry continues to undergo fundamental change. With viewers continuing to move from linear to OTT, the arrival of new technologies such as 5G and AI, and changes to standards, how will our industry look in ten years’ time? TVBEurope invited a number of key stakeholders within the industry to offer their thoughts.




THE FUTURE OF THE TV AND VIDEO LANDSCAPE
By Alexander Mogg, partner, strategy and operations lead TMT Germany, Klaus Böhm, director, media and entertainment lead Germany, and Ralf Esser, senior manager, head of TMT Insights Germany at Deloitte Consulting

The rise of on-demand services, new market entrants, and fundamental changes in consumer demand for TV and video consumption have seen the established players confronted with dire predictions about their positioning and future in the TV and video landscape. But will these dramatic predictions really come true?

It is true that TV and video are facing many uncertainties, and the extent of change in the industry is hard to predict. Streaming services no longer serve only as a platform for films and TV shows; they are also investing in producing and licensing their own content. This makes them direct competitors to the traditional TV and video industry. At the same time, TV channels and media organisations are starting their own on-demand offerings, while large content producers are also setting up streaming services themselves. From another perspective, on-demand services have rapidly changed consumer appetite for TV and video services. With the success of video-on-demand (VoD), consumers increasingly expect relevant content they can access at any time, any place and in the format that best suits their needs.

SCENARIO APPROACH
This rapidly changing market landscape makes predicting the future difficult, if not impossible. Deloitte has therefore adopted a more holistic scenario approach which will transport us to the year 2030. The four scenarios we have developed do not aim to predict the most likely outcome, but to illustrate what could plausibly happen in the world of TV and video and how today’s market players might adapt to deal with the many changes and uncertainties along the way. Our analysis is based on a comprehensive list of relevant drivers. Following workshop sessions and expert interviews, those drivers were rated with regard to their degree of uncertainty and their impact on the TV and video industry. Our experts then identified a combination of two critical uncertainties that create the most challenging, divergent, and relevant scenarios.
The result is a scenario matrix built on two axes that address the critical uncertainties by asking the questions, “What will the player structure look like?” and “Who will have access to customers?” On the axis responding to “What will the player structure look like?”, the TV and video landscape could be driven either by national players, meaning that the industry does not continue to globalise, or by global players who push national players to the fringes. The second axis, or critical uncertainty, that determines the future of video is “Who will have access to customers?”. This raises the question of whether content owners or platform owners will leverage a direct-to-consumer relationship and therefore become the most powerful players in the market. The following four scenarios are plausible but highly distinct visions of the future of TV and video. Each one individually illustrates what opportunities or risks might occur in the future.

SCENARIO 1: UNIVERSAL SUPERMARKET
In this scenario, a few global digital platform companies have taken the leading role in aggregation and distribution from national broadcasters. They control the entire TV and video market and have entered all steps along the value chain. Similar to large supermarkets, each of the digital platform companies offers an extensive range of global and national content, differentiated only by some exclusive productions and sports rights. Broadcasting as we know it today has disappeared; broadcasters have evolved into pure creators of national content. Advertising has also shifted to the digital platforms, along with consumer relationships. Broadcasters are dependent on revenue shares from digital platform companies. Advertising agencies and traders have disappeared, rendered irrelevant by digital platforms’ direct models of ad trading.

SCENARIO 2: CONTENT ENDGAME
In this world, large global content owners dominate the markets. They have integrated vertically along the entire value chain and started distributing content on their own channels while establishing direct customer relationships. Content has become the main differentiating factor, with technology considered only a commodity. Whereas big content owners with costly blockbuster productions benefit from economies of scale, smaller producers have been pushed out of the market. Consequently, the variety of content has decreased, but the quality of global productions has reached new dimensions. Broadcasters have survived as suppliers to the global content owners by shifting their focus to creating strong local formats, protected by strict national regulations.
Digital platform companies have retreated to being pure distribution channels focused only on technical delivery. As content truly is king in this world, global content owners negotiate directly with advertisers, and agencies are losing relevance.

SCENARIO 3: REVENGE OF THE BROADCASTERS
In this scenario, national broadcasters have successfully accomplished their digital transformation and secured a strong position in the TV and video ecosystem. They have evolved into digital platforms and established direct customer relationships. National broadcasters and global digital platform companies co-exist and provide a high richness of content. Whilst national broadcasters focus on local quality content, digital platform companies supply global productions and blockbusters. Alliances with national telecommunication providers have


resulted in efficient content distribution via high-performance platforms. Advertising agencies remain in the market and help broadcasters implement innovative ad formats. Strong media regulation at a national level protects local content production.

SCENARIO 4: LOST IN DIVERSITY
TV and video have evolved into an ever more diverse ecosystem with no dominant players. Instead, consumers are served by numerous distribution platforms, a high degree of content richness and a steady turnover of players. Demand for national content remains strong, therefore partnerships between global and local players are widespread. Everyone does everything: global digital platform companies contribute global formats like high-profile series and forge alliances with local producers to provide relevant local content. Telecommunication providers, broadcasters, and content producers have also successfully created their own digital platforms. Consumers are only interested in content and therefore are less loyal to any one platform or brand. Advertising agencies continue to allocate advertising budgets and provide guidance within the complex ecosystem. Regulators strongly protect national broadcasters in this scenario to preserve local content and media companies.

KEY SUCCESS FACTORS: ALLIANCES AND TECH SKILLS
Looking at the set of four scenarios, we see digital platform companies (DPCs) being major disruptors within the future TV and video market. By contrast, broadcasters and content producers face the highest pressure to change. Overall, our scenarios share two common themes: partnerships and technological skills. Strong partnerships along the value chain will be required to survive in changing market environments. For instance, broadcasters and content producers can no longer rely on their current market positions. To safeguard their business models and future revenues, they need to be open to cooperation and alliances with their direct competitors.
Shared platforms, production, and distribution are appropriate measures to counter the threat presented by DPCs. In parallel, regulators need to become less restrictive when it comes to alliances between equally positioned players within the TV and video value chain. Broadcasters and content producers must work toward convincing regulators to permit such cooperative models. Beyond that, market players must continuously invest in technological skills. Tech has already become a core element of TV and video business processes, so mastering technology is a prerequisite for success in the digital video market. In order to achieve this, attracting qualified staff is essential. Consequently, both new and traditional players in the video market must become and remain attractive to digital talent.



WHAT DO THE NEXT TEN YEARS HOLD FOR BROADCASTERS?
By Mark Harrison, managing director, DPP

We can all predict one thing with complete certainty: whatever the media industry looks like in 2030 won’t have been successfully predicted by anyone in 2020. We can also predict a second thing. Getting to that unknown place will have taken a lot of blood, sweat and tears. The only question is: whose?

Let’s begin with the state of the market. It’s projected that by 2023 the volume of viewing of SVoD services globally will reach parity with over-the-air, ad-funded TV viewing. What makes this trend more interesting, however, is that 2023 is also projected to be the year in which SVoD revenues level out. These two trends of increasing SVoD viewing, but finite revenues, are evidence of the single most compelling characteristic of current media markets: demand for content keeps growing; supply keeps growing; but revenues will not. As The Economist reported in November 2019, the media industry shows all the characteristics of a boom in search of a bust. Rather like in the music industry, consumers are learning they can access enormous amounts of great content relatively cheaply. Most consumers have a poor sense of what constitutes a reasonable value exchange. Just how much of what kind of content should cost how much? Right now the consumer’s answer is simply ‘as much as I can get my hands on for what I can afford.’ How does a content provider know how to supply to that kind of consumer sensibility? Who knows – especially when some of the players in the media game – such as global technology giants or connectivity providers – do not have content as their only, or even primary, source of revenue.

All these themes (and many others) add up to a highly volatile and unpredictable market. But when considering their impact upon established broadcasters there appear to be some unavoidable truths that look set to materialise in the next ten years. The first is that efficiency will be essential for survival.
It’s awkward to admit, but the broadcast industry has never been strong on efficiency. Predictable revenues – whether from advertising or public subsidy – meant broadcasters could maintain an image of themselves as creative entities that competed on the quality of their programming, whatever it took. Indeed, many were proud to have their own, bespoke technologies that they saw as essential to the quality of their brand. But there is simply no longer any place for such hubris. Broadcasters now compete with content providers that were built for the internet age. Legacy technology, processes and culture built up over decades are now the dead weight that could sink those broadcasters unless they can modernise fast. We may see some well-known broadcasters go under – and if they do, the failure to re-engineer for efficiency will be a primary reason.

Related to this need to discover efficiency will be a growing interest in sharing infrastructure and resources. In Australia, Channel Seven and Channel Nine have partnered on a joint playout centre. We should expect to see many more such initiatives – and not just around playout and distribution. Why not share content receipt, processing and compliance? Or enterprise IT services and support? Or payroll and HR? In short, why not share pretty much anything that’s invisible to the consumer, and leave competition to the thing audiences care about most, content?

We can also expect to see more partnership around platforms. BritBox is an obvious early example, of course. And in 2020 we will see the launch in France of Salto, a joint SVoD service from TF1, M6 and France Televisions. But the real test of partnership will be if broadcasters group together to share their core video player, developed and maintained by a joint team. At the DPP Tech Leaders’ Briefing one CTO of a national broadcaster pointed out that he and every other national broadcaster CTO were struggling to find software developer resource to run individual video players that were all effectively the same. The best way to fight disruption could be collaboration.
The 2020s might turn out to be the decade when broadcasters divided into those who wanted to remain broadcasters and those who turned into media companies. In which case, by 2030 both groups will have shed blood and sweat; but probably only one will have shed tears.


TAKING STANDARDS INTO THE COMING DECADE
By Bruce Devlin, SMPTE standards vice president, and Andy Warman, AIMS marketing working group chairman

SMPTE and AIMS are connected by many shared goals, one of which is to promote multivendor interoperability. SMPTE has a 100-year history of developing industry standards that facilitate interoperability and drive the quality and evolution of motion pictures, television, and professional media. AIMS has engaged leaders from across the broadcast and media industry to create a technology roadmap that helps broadcasters rapidly deploy flexible IP-based production, playout and distribution capabilities. By promoting adoption of IP technology and standards, AIMS drives interoperability.

Both organisations see standards and specifications as fundamental building blocks in addressing the video, audio, and data requirements of not just broadcasters, but the larger media and entertainment industries. Both organisations also recognise the industry’s shift toward COTS computing and networking, as well as componentised media, and believe that standards and specifications play a vital role in ensuring that non-proprietary, off-the-shelf systems can effectively handle audio, video, and data streams in a consistent manner that supports various media workflows. Moreover, AIMS and SMPTE want to ensure that standards and specifications meet as many user-community requirements as possible while remaining relatively simple for implementers. In fact, SMPTE works well with various industry groups in trying to identify the right kind of standardisation solution for each part of the puzzle that is IP infrastructure for media.

Through complementary work, SMPTE and AIMS are helping the industry implement and optimise IP infrastructure, systems, and workflows. In the case of IP-based workflows built on the SMPTE ST 2110 standards suite, for example, the ultimate goal is to support very low-latency, pristine-quality delivery chains and production capabilities so that when downstream compression-based systems do their work, they’re still working with the best possible image quality available.
The idea is that the end product will be the best it can be on whatever platform the consumer uses. Through further development, including common control protocols such as those of the Advanced Media Workflow Association (AMWA), the industry is working to enable use of protocols that support identification and control over network-connected devices. This is just one example of how standards can be complemented effectively by specifications, protocols, open source projects, and best practices to improve interoperability and utility.


While IP adoption worldwide is coming along at what might be a surprising pace, the challenge of enabling interoperability will take AIMS well into the next decade. Now that the industry has SMPTE ST 2110 and has solved the problem of uncompressed, real-time video over IP, the Alliance has shifted toward helping broadcasters and media companies deploy and manage IP solutions with ease. The work that AIMS does today to help reduce the complexity of media workflows will continue to be important as broadcasting — in any current or future form — grows increasingly dependent on computer networking and network-based switching control. By the time 2030 approaches, the Alliance likely will have refined its focus one or two more times to continue to address new challenges in optimising IP-based media workflows.

SMPTE’s standards development work encompasses much more than media-over-IP workflows, but what’s true across virtually every area of its standards work is that the industry is best served by the right mix of standards, specifications, recommended practices, registered disclosure documents, open source projects and registers (lists). The industry’s rapid rate of technological change demands a more nimble and agile approach. So, moving into the next decade, the Society will be focusing on providing faster, more flexible services in which not every output is a standard. While SMPTE standards typically are concrete and unchanging, even boring, they can support technical specifications and other documents — data definitions, protocols, implementations of an API, etc. — that help drive growth and innovation. If SMPTE successfully provides these technology facilities, it might just make life easier for AIMS members and help the Alliance pursue its goals. With SMPTE serving as a hub for the creation of standards and other documents, AIMS can leverage this work to drive future initiatives in support of the media and entertainment and professional AV industries.



5G AND BEYOND
As 5G begins to enter both the public’s consciousness and that of content producers, three experts offer their thoughts on how the technology will develop over the coming decade

5G: THE NEXT EVOLUTION
By Richard McClurg, vice president, marketing, Dejero

5G is coming; who’s ready? As the broadcast and media industry anticipates the new production workflow possibilities offered by 5G, the status of availability is rapidly changing. It depends on which country or region you are in, and new announcements seem to come weekly from mobile network operators. There are pockets of early availability in cities and communities across Finland, Spain, Switzerland, and the UK. Trials are proceeding elsewhere, with commercial launches expected in France, Germany, Norway, Sweden, and the Netherlands in 2020. While there remain practical realities to the rollout, including the availability of spectrum and of devices, widespread availability of 5G - and specifically the more advanced capabilities that promise ultra-low latency, blistering speeds, and much greater capacity - is a long way off. What we can expect initially is a performance boost similar to what we saw with LTE-Advanced.

Dejero products are ‘5G ready’ through the use of external 5G modems, while future products will integrate 5G modems as they become commercially available. With limited 5G coverage in most locations, supporting existing wireless network technologies will remain key. Dejero’s Smart Blending Technology enables broadcasters to blend connections from multiple providers and different wireless technologies. It is this connection diversity that will


remain key, even in a 5G world, to delivering the reliability that broadcasters and media organisations need. In 2019, as part of Dejero’s preparations for 5G, we were involved in an exciting live project with Musion 3D and Vodafone Romania. For the first time ever, 5G, holographic technology, and Dejero transmitters and receivers were used to successfully transmit a life-sized 3D holographic live video of a performer who was situated in a remote location, away from the venue. This successful field demonstration would not have been possible without the capacity offered by a 5G wireless network, and it hints at the possibilities for broadcasters and media organisations. Exciting times are ahead.

For Dejero, 5G is the next evolution in terms of blending different wireless technologies, just as we’ve supported the migration from 3G to 4G and 4G LTE-A, as well as pioneering the blending of cellular with satellite transmission paths. For our customers, blending multiple networks, including those that are 5G, will continue to provide enhanced reliability, expanded coverage, and greater bandwidth when operating remotely. Whether it’s breaking news, sports or live events, Dejero is ready to help customers take advantage of this latest wireless technology.

THE ANALYST’S VIEW

By Paolo Pescatore, tech, media and telco analyst, PP Foresight

2019 will be remembered for the arrival of 5G networks across four continents. It is truly remarkable to see how quickly 5G is evolving; far faster than 4G ever did. 5G is now here and 2020 will be all about scale. All telcos are preparing their networks to cope with the impact of the new 5G iPhone, which is widely expected to launch this year. Most if not all new devices will support 5G. People’s insatiable appetite for data continues to grow. The arrival of 5G means users will be able to do much more on their devices, at lightning speeds and over more reliable connections than previous generations of networks could offer. All this means no waiting time for a stream to appear, and downloading something like a 4K movie in seconds rather than minutes. However, telcos will struggle to make money and recoup the significant investments needed in 5G. The move to unlimited data plans caps revenue. For this

reason, expect telcos to place greater focus on enterprise services, selling 5G as a network slice. The media industry is set to benefit massively from 5G in terms of video contribution and user consumption. 5G is set to revolutionise remote production and live contribution, bringing significant benefits to the industry. 5G promises to change the way content is created, produced, transmitted and distributed through the entire value chain. It feels like we are in a golden era of connectivity which promises to transform the way we interact and engage with devices in the future. This will accelerate in 2020.

5G LEADS TO INNOVATION

By Tim MacGregor, head of Watson Media Platform Offering and Product Management, IBM Watson Media

The emergence of the 5G network should minimise video buffering struggles. 5G is primed to transform access to streaming services by speeding up and enhancing wireless connectivity. Looking ahead, consumers nationwide will be able to enjoy lightning-fast video streams as 5G infrastructure is developed. With higher speeds and lower latency, doors will open for bandwidth-intensive applications of streaming technology — including immersive mobile technologies like AR and VR — enabling content providers to experiment with new, smoother, more engaging viewer experiences. As 5G is deployed on a larger scale, the industry will enter a new era of video innovation across platforms, devices, and media.



WHERE WE’VE COME FROM AND WHERE WE’RE GOING

A look back at some of the most significant events in the media tech industry from 2010-2020, and a look forward to some we expect over the coming decade.

2010: Broadcasters begin to roll out ‘TV Everywhere’ services; launch of the Apple iPad
2011: Snapchat debuts
2012: Netflix announces its expansion into Europe
2013: Eutelsat announces the first dedicated 4K Ultra HD channel; the BBC announces it will abandon 3D television
2015: Mary Meeker’s 2015 Internet Trends report shows vertical video growing from 5 per cent of viewing in 2010 to 29 per cent in 2015
2016: TikTok launches
2019: The BBC disbands its VR team; BT Sport tests 5G-enabled remote production
2020: Disney Plus enters the UK streaming market; 8K Olympics
2021: HBO Max predicted to launch in Europe; facilities and studios continue to adopt IP; in-car film and TV streaming accelerates
2022: The Consumer Technology Association predicts two-thirds of smartphone users will be using 5G handsets
2023: More use of AI for viewers to determine what to watch and when
2025: 5G subscriptions predicted to surpass 2.6 billion, according to Ericsson
2026: More newsroom automation, with gallery personnel reduced even further
2030: The internet of senses (technology interacting with our senses of sight, sound, taste, smell and touch) will compete with screen-based experiences


REALISING THE POTENTIAL OF IP
By Greg Burns, head of business development, Arqiva

When it comes to IP, the broadcast industry is making strides in the right direction. In fact, IP has been steadily gaining traction on a global scale - with the technology improving and already accounting for huge video consumption numbers and growing video production volumes. We now live in an age of on-demand and personalised media where almost all viewer demographics are watching content on an IP-enabled device, such as their phone or tablet. However, despite the associated benefits of flexibility, scalability and cost-effectiveness, there is still more work to be done when it comes to IP adoption. Whilst we have seen a steady increase in broadcasters seriously embracing the transition to an all-IP future, many still await concrete business value and seek rarefied expertise to help them navigate the transition. With this in mind, here are some challenges and potential scenarios that broadcasters need to consider in order to truly unlock the benefits of IP…

Firstly, video providers need to remember that the long-term benefits of IP are not at the infrastructure and transport level, but rather at the application level. It’s not simply about migrating workloads to IP and Cloud, but about embracing application innovation across non-linear and OTT platforms. Essentially, it’s as much about IT as it is IP. In fact, the opportunities around smart content networks, personalisation of linear channels, dynamic adaptation of pan-regional feeds and the integration of advanced client-side technologies have been enabled by embracing IT applications (i.e. web technologies) rather than the IP networks themselves.

Driven by the need to deliver on quality of service and flexibility of offering, the next decade will see distributors looking to make their content management processes more seamless and integrated. This can be achieved by IP (or should I say IT) adoption. Imagine a world where all content (live and file) is created and made available in the Cloud.
Managed through a common, distributed global rights ledger, creators and distributors could seamlessly transact without having to hold multiple copies of media or separate, inconsistent libraries. Although linear channels will still be originated, they will be created through a more dynamic, organic process, informed by up-to-the-minute rights information and real-time viewer behaviour. Dynamic macro- and micro-insights will drive the way channels are originated, much as VoD content is on Netflix and Amazon Prime. Even before one episode ends, consumers are presented with tailored suggestions for the next TV series or movie to watch, a bit like linear TV! However, this level of personalisation and tailoring does not have to be delivered solely on an OTT or VoD ecosystem. Looking to the future, traditional networks with ‘traditional’ last-mile distribution will be able to leverage the technology that powers this high-level personalisation. This will allow decisions about which content to serve (programmes, promos and ads) to be made dynamically, based on the most granular level of information available, whether that is a specific OTT consumer profile or a DTT region in a traditional broadcast market. When this is combined with the hybrid capabilities of most set-top boxes and television sets, we envision a long-term trajectory where the consumer finally gets the ultimate combination of a curated lean-back experience and significantly increased direct engagement. The viewer experience will be personalised in a more intelligent way without overwhelming the consumer with choice.

As an industry, we need to stop thinking about two separate types of broadcast process and workflow: linear and non-linear. The paradigms associated with these types of delivery, whether linear channels or VoD, are equally applicable regardless of whether you’re focusing on online distribution or over the air. The key to making the process more integrated is to change the mindset around scheduling and creating those channels. Why not address a traditional distribution endpoint (e.g. a cable head-end) in the same way you address an OTT endpoint (e.g. the consumer)? In order to achieve this and enhance traditional broadcast services, linear channels must embrace OTT technologies and a non-linear mindset. Instead of seeing non-linear and OTT as a threat, the linear broadcast ecosystem should adopt the behaviours associated with media delivery over the internet. In the next five to ten years, this will be particularly relevant to the future of playout and distribution. Shifting those processes to a much more holistic, flexible and dynamic model will be vital to their success. Ultimately, before the true potential of IP can be realised and it becomes fully mainstream, a seismic shift in attitude and strategy is required.

22 | TVBEUROPE JANUARY/FEBRUARY 2020
While the case for adopting an all-IP and IT infrastructure approach is clear, the pace of adoption will be dictated by the benefits, which themselves depend on the generation of true business value. However, businesses looking to win the race for relevance should also recognise that IP adoption is being driven by consumer habits and preferences. If the TV industry wants to keep its competitive edge it needs to be flexible and adapt to consumers’ needs. We know the way viewers consume TV content is changing rapidly, as more and more people watch on IP-enabled devices. The technology is already proven, and broadcast has already been delivered successfully over IP; it is now up to traditional broadcasters to understand its true potential and adapt to a new process that allows the benefits to be realised across the entire value chain, not just in the world of VoD and OTT. n


HOW WILL VIDEO DELIVERY ADAPT AND EVOLVE IN THE 2020S? By Elke Hungenaert, VP product management, Synamedia

The 1920s are remembered as the roaring ’20s thanks to the decade’s exuberant, freewheeling popular culture. As we kick off the 2020s, the roaring is the sound of sports fans watching their teams’ matches streamed live to living rooms, pubs and buses. With live streaming certain to be the cornerstone of any operator’s strategy to compete in the ’20s, one of the biggest issues they face is latency. Synchronised latency between live broadcast and OTT streaming at scale will be a ‘must-have’, particularly for premium live sports.

Reducing latency isn’t the only challenge that has engineers and operations executives scratching their heads, though. It goes hand in glove with a need to optimise complex workflows using Cloud, virtualisation, automation and AI, all while keeping a tight control on costs. In the worst latency cases, subscribers hear the cheering of a goal next door before seeing it on their own screen. A minute’s delay can negatively impact your brand and result in subscriber churn.

Don’t fret, help is at hand. One sure-fire approach that is already getting interest is to optimise the entire chain for low latency. There’s no benefit in using a low-delay encoder if your CDN platform or player is going to introduce latency. Simply put, if you are not in control of the CDN, then you do not have control of the end-to-end network and your latency will be dependent on third parties. This is not a tweak on the scale of changing profile bitrates or adding a new logo; it’s a much larger commitment. But the user demand will justify the effort. Synamedia is uniquely positioned to help operators because we have an end-to-end solution from the encoding to the CDN through to the player. This means we can minimise the delay and work towards a goal of matching broadcast and OTT latency. We will be supporting customers with the first end-to-end deployments in 2020 and believe this will go mainstream during the ’20s.
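The whole-chain argument can be made concrete with a quick latency budget. The stage delays below are illustrative assumptions only, not Synamedia figures: end-to-end delay is the sum of every stage, so a low-delay encoder on its own leaves most of the lag in place.

```python
# Toy glass-to-glass latency budget for an OTT stream.
# All stage delays are invented example values, in seconds.
stages = {
    "encoder": 1.5,        # low-delay encode
    "packaging": 1.0,      # e.g. chunked segmenting
    "cdn": 2.0,            # origin plus edge delivery
    "player_buffer": 4.0,  # client-side buffering
}

total = sum(stages.values())
worst = max(stages, key=stages.get)
print(f"end-to-end latency: {total:.1f}s, dominated by {worst}")
```

With these example numbers the player buffer, not the encoder, dominates, which is why low-latency work has to span encoder, CDN and player together.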
There are other ways to minimise latency too, including converging the broadcast and IP streams at the headend. This also brings the benefit of optimising workflows. Streamlining complex workflows is made easier with a software-based converged headend that brings together multiple broadcast and broadband workflows to deliver broadcast-quality video to any screen.

Over the next decade we expect to see operators fully virtualising their video networks. We’re all familiar with pay-TV providers using subscription revenues to pay off Capex infrastructure investments, but when it comes to streaming, it is faster, less risky and more cost-effective to launch a new service in the Cloud. You can quickly and easily scale up or down, and there’s always the option of flipping into Capex mode down the line if the numbers stack up. And, as the move to the Cloud accelerates in the ’20s, we expect to see greater adoption of solutions that find the optimal balance between Capex and Opex: for example, the ability to launch, scale up and scale down high-availability channels for smarter hybrid on-premise and public Cloud deployments while keeping a tight control on costs. A software-based video network gives you the flexibility to move content and workloads between on-premise, Cloud or hybrid environments quickly and easily. Plus, automation tools will make it easy for your operations teams to find the right balance between Opex and Capex and the most cost-effective Cloud provider. Automation also helps your teams scale up and down in the Cloud dynamically and cost-effectively, and launch permanent and pop-up channels in a matter of minutes rather than months, accelerating time to value.

No one doubts that AI and machine learning will play an increasing role in video delivery. The first applications are already coming on stream, including a cost-effective way to deliver broadcast-equivalent video experiences without consuming ever greater chunks of bandwidth. AI and machine learning techniques use constant quality encoding and create video segments 40-50 per cent smaller than those produced by constant bit rate encoding.
With support for 4K and 8K video on the horizon, this will become critical. n
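As a toy illustration of where that kind of saving comes from (the complexity scores and bitrates below are invented, not drawn from any real encoder): constant bit rate spends the same bits on every segment, while constant-quality encoding spends bits in proportion to scene complexity, so simple scenes cost far less.

```python
def cbr_total_kbits(segments, bitrate_kbps=8000, seg_seconds=2):
    # CBR spends the same bits on every segment, however simple the scene
    return len(segments) * bitrate_kbps * seg_seconds

def cq_total_kbits(segments, peak_kbps=8000, seg_seconds=2):
    # Constant quality scales bits with scene complexity (percent of peak)
    return sum(peak_kbps * pct // 100 * seg_seconds for pct in segments)

# Made-up per-segment complexity, as a percentage of the hardest scene
complexity = [100, 30, 20, 90, 40, 30, 80, 20]

cbr = cbr_total_kbits(complexity)   # 128000 kbits
cq = cq_total_kbits(complexity)     # 65600 kbits
saving = 1 - cq / cbr               # roughly 49 per cent here
print(f"constant quality saves {saving:.0%} over CBR")
```

On this contrived mix of easy and hard segments the saving lands near the 40-50 per cent range the article cites; real-world figures depend entirely on the content.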



GOING LIVE

Live production will likely see fundamental change in the coming decade, particularly with the arrival of remote production and increasing adoption of 8K. TVBEurope asked BBC Studioworks CEO Andrew Moultrie for his predictions

WHICH TECHNOLOGIES DO YOU THINK WILL HAVE THE BIGGEST IMPACT ON LIVE PRODUCTION OVER THE NEXT DECADE? AND WHY?
There’s no doubt that ambitions to improve picture quality will have a significant impact on live production. Although the majority of content production is still HD, especially live entertainment shows, the appetite for higher resolutions such as 4K and 8K, and the trend towards richer colour reproduction, has every potential to grow across the next decade. Also, the next evolution of tapeless technology will see accelerated workflows in server-based production.

HOW WILL TECHNOLOGICAL DEVELOPMENTS MAKE IT EASIER FOR PRODUCTION TEAMS IN TERMS OF TIME AND THE NUMBER OF PEOPLE INVOLVED?
Technological developments will offer the benefits of more effective workflows and the consequential environmental dividends these bring. However, the relationship between tech and the workforce will create friction, and tech will inevitably replace some craft roles through automation. You just need to look at Amazon and its warehouse solutions, and the proliferation of ‘bots’ and machine learning in the Google universe, to see how manual labour and editorial roles are being replaced by machine learning tech and how that tech drives commercial bottom-line requirements.

HOW DO YOU THINK AUDIENCES WILL BE WATCHING LIVE CONTENT BY THE END OF THE DECADE? WILL IT STILL BE VIA TRADITIONAL LINEAR TV?
Linear TV, when combined with the right technology, can hold its own. However, whilst linear will still serve a function for family and big-format viewing, the phone will increasingly become a dominant platform to broadcast directly to, especially with the development of the future generation of wireless connectivity (5G and 6G) and its potential data transmission speeds.

WHAT NEW TECHNOLOGY DO YOU THINK WE’LL BE DISCUSSING BY 2030?
Advancements in green technologies. We have seen many production companies and studios committing to minimising environmental impact in order to make more climate-conscious production a reality. Green technologies, such as LED lighting, zero-emission generators and all-electric newsgathering vehicles, are already in play. Further developments of these technologies will inevitably be adopted by the production industry as initiatives to reduce environmental impact grow. n

THE IMPACT OF AI

By Tim MacGregor, head of Watson Media Platform Offering and Product Management, IBM Watson Media
As visual technology continues to advance, we’ll see major TV broadcasters applying AI to monetisation strategies in an effort to maximise the value of advertising and partnership deals. Broadcasters will rely heavily on new tools that demonstrate advertising value for potential customers. Developers can train AI systems with key images to recognise brand logos within video footage and track how often a logo appears and for how long. With this granular level of insight into a brand’s on-screen impact, broadcasters can make more strategic ad spend decisions and optimise the value of their air time. n
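Once a detector has flagged a logo frame by frame, turning those detections into airtime metrics is simple arithmetic. The sketch below is illustrative only: `detections` stands in for the output of a trained model, and nothing here reflects IBM Watson Media’s actual API.

```python
def logo_screen_time(detections, fps=25.0):
    """Convert per-frame logo detections (True/False) into total seconds
    on screen and the longest continuous exposure in seconds."""
    total_frames = sum(detections)  # True counts as 1
    longest = run = 0
    for hit in detections:
        run = run + 1 if hit else 0  # length of the current visible streak
        longest = max(longest, run)
    return total_frames / fps, longest / fps

# Hypothetical detector output over ten frames of 5 fps footage
frames = [False, True, True, True, False, False, True, True, False, True]
total_s, longest_s = logo_screen_time(frames, fps=5.0)
print(f"on screen {total_s:.1f}s, longest exposure {longest_s:.1f}s")
```

A real pipeline would feed in per-frame model outputs at broadcast frame rates; the aggregation step stays the same.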


THE SUPPLIERS’ VIEW OF THE COMING DECADE By Lorenzo Zanni, head of insight and analysis, IABM

IABM research shows that media technology buyers are shifting their investment toward more flexible technology payment models, such as as-a-service offerings, and away from legacy purchases such as hardware equipment and licences. Recent IABM data shows this clearly, with most customers planning to increase their investment in software subscriptions and on-demand models. By contrast, investment in hardware and software licences is mostly flat or declining. In interviews conducted for our recent Adapt for Change report, customers in particular described this transition as inevitable. An overwhelming majority of them said they would be investing only in flexible payment models going forward and building a supplier ecosystem that provides consumption-based options. The unpredictable nature of the market is the main reason behind buyers’ reluctance to commit to multiyear technology solutions. The constant change in consumer demands and expectations translates into a demand for flexibility in vendor solutions.

CONTINUOUS ENGAGEMENT
This demand for flexibility is translating into a variety of shifts in supplier organisations. The move to as-a-service models entails a complete reorganisation of investment, and of departments such as technology, finance, sales and marketing, around the new cashflow paradigms. From a technology perspective, development in as-a-service models becomes more agile, and thus more collaborative, dynamic and responsive to customer feedback. Agility translates into continuous software updates driven by customer feedback, rather than the release of major products every few years, long a defining feature of on-premise releases. As things move more quickly, suppliers are becoming more inclined to roll out small new features and receive continuous customer feedback on them. As the buying cadence moves from multi-year commitments to monthly or metered payments, engagement with media technology buyers also becomes continuous.
This requires suppliers to implement a series of organisational changes, ranging from the creation of new roles to the development of new marketing activities. For example, in an as-a-service model, customer retention becomes key to financial success, as it influences customer churn and thus the financial value of customers, requiring vendors to create dedicated positions to manage retention. More generally, the engagement touchpoints with both customers and prospects within a given year multiply, going beyond the traditional model of C-level sales that has characterised industry exhibitions in the past.


COLLABORATION
The unpredictability of the broadcast and media market is also one of the main drivers behind collaborative technology solutions. Broadcast and media organisations highlight that the change in the industry is unprecedented, requiring them to collaborate with different media technology suppliers as well as with other broadcasters in an increasingly complex ecosystem. IABM research shows that broadcast and media organisations are investing a large share of their technology budgets in building in-house technology capabilities (BIY). The main drivers behind BIY are customisation, speed of deployment and control over technology development. Increasing demand for BIY capabilities, particularly from larger organisations, is leading to an increase in co-development projects that see broadcast and media organisations working on new technology solutions together with vendors.

Collaborative technology models are also increasingly being developed in Cloud-based environments. This has major implications for the supply side of the industry. Buyers are building deeper and more collaborative relationships with Cloud service providers, as well as with smaller media technology specialists providing services on top of their platforms. These smaller suppliers have relationships with their customers, their Cloud service providers and other media specialists in a dynamic ecosystem. Cloud service providers’ investment in the media industry is also rising: in the last few years, their media-specific capabilities have grown significantly and now cover content creation, production, management and distribution. In this changing supply-side landscape, Cloud service providers are increasingly the gatekeepers of media services, as they own the infrastructure on top of which those services are delivered. n



THE FUTURE FOR PUBLIC SERVICE BROADCASTERS

Noel Curran, director general, European Broadcasting Union, discusses the next decade for PSBs, the EBU and the Eurovision Song Contest

HOW DO YOU THINK THE PUBLIC SERVICE BROADCASTING LANDSCAPE WILL LOOK IN 2030?
In 2030, the broadcast landscape will no doubt look very different from today, in much the same way it has been revolutionised since 2010! We work in a rapidly changing media landscape and we must continue to adapt to the evolving demands of audiences, particularly the expectation that they can enjoy content on multiple distribution platforms, whenever and wherever they like. Undoubtedly public broadcasters are making strides in reinventing themselves as multiplatform media services in a world of converging media. And, in ten years’ time, we expect even more changes, and opportunities, offered by new tools (and rapid artificial intelligence developments) that will, for example, remove some of the more onerous tasks for production centres and newsrooms, enabling staff to better exploit these new opportunities.

Investment is key. Innovation is central to ensuring the sustainability and development of Europe’s media and creative industries in the increasingly global marketplace. We’ve already been working in partnership with technology experts, creators and other media organisations and start-ups through the MediaRoad project, and we’d like to see more investment in media innovation and research through European programmes such as Horizon Europe.

Certainly, the need for strong public service media (PSM), non-partisan, independent and run for the benefit of society as a whole, will be greater than ever, in light of the prevailing political and social climate, where misinformation and populism continue to rise and upheavals in traditional allegiances become the new normal. PSM has been shown to be the bedrock of democratic societies, and strong PSM correlates with higher degrees of press freedom, lower levels of right-wing extremism and better control of corruption. An effective PSM network will be an influential advocate for professional journalism and ethical standards, synonymous with the provision of trusted content, working to develop technical solutions and appropriate international policy frameworks.


However, it will become increasingly important that politicians, market players and, most importantly, the public understand what PSM stands for and the value it delivers. Without that understanding, the concept of strong PSM could be under threat.

WILL THERE STILL BE A PLACE FOR THE EBU?
No question. If we accept that well-funded, strong PSM is the cornerstone of a democracy then, in a world seeing the proliferation of fake news and hate speech, an independent, trusted source of news and content has never been more important. The EBU is the biggest professional association of PSM in the world. We campaign for a sustainable future for PSM, provide our Members with world-class content from news to sports and music, and build on our founding ethos of solidarity and co-operation to create a centre for learning and sharing. We know the media landscape is changing rapidly but, as our president Tony Hall said at our last General Assembly: “In a disinformation age, now is our moment. We have to come together as a family like never before. We have to collaborate even more closely…and we have to listen and respond to the needs of everyone.” We have launched a new strategy at the EBU, Together, that will help Members address the challenges of the next decade, providing them with an amplified voice and services that help them remain relevant to future generations. We look forward to seeing this bear fruit in the years to come.

WHAT WILL PUBLIC SERVICE BROADCASTERS NEED TO DO TO KEEP UP WITH THE STREAMERS IN THE COMING DECADE?
PSM has always been very strong in making great, creative content that reflects society, encourages debate, shapes communities and informs people. But that is not enough on its own. In order to stand out amongst the growing number of platforms, at a time when audiences are fragmenting and diversifying, PSM will have to play to its strengths.
So we must firmly root our activities in our values and mission, keep investing in creativity and local content, build stronger connections with our audiences and master the opportunities offered by new technologies and distribution platforms. The latter is especially relevant for gaining and retaining younger audiences. Kids expect to watch the content they like at any time, in any place. We have to continue to adapt and take this on board, ensuring our content is flexible to view and enjoy across many delivery mechanisms.

But this is not something we can do on our own. Global online platforms have profoundly changed how people access and consume content. We need European leaders to help manage this constantly changing landscape in a way that upholds our values and empowers citizens. Europe needs a fair and transparent online environment to ensure people can easily find and access high-quality, trustworthy information and programming. Platforms need to provide access to data so PSM can tailor their content to meet audience expectations. And we need clear brand attribution so citizens are aware of the source of content. PSM has a distinctive role to play in society and in the media landscape and, by working with others, we can continue to deliver distinctive and valuable services to our audiences.

WILL IT BECOME HARDER FOR PUBLIC SERVICE BROADCASTERS TO HOLD ONTO SPORTS RIGHTS AS ONLINE PLATFORMS AND THE LEAGUES’ OWN SERVICES GROW?
From our perspective, this is not an ‘us versus them’ situation. We need to take an integrated approach, promoting partnership opportunities and looking at how linear and digital can enhance each other rather than compete. The programming itself should always be the key driver for deciding where content should go, be it a premium linear channel or a digital ‘direct to fan’ experience. Linear TV will always be the mass-audience driver, but for some sports properties there are limited, or no, exploitation possibilities in certain territories, and we should not be afraid to offer a digital-first experience. One solution no longer fits all.

A great strength of PSM is its ability to connect with communities in all our territories. The huge digital footprint of our Members is invaluable and offers great cross-promotional opportunities. We have adapted our distribution strategy, based first of course on a business model, but then enabling content curation according to what is most relevant to all our associated publishers. For example, our work with worldwide properties often means meeting the audience needs of non-traditional broadcasters. We clearly understand our position in the media ecosystem, which is still substantial and very relevant, particularly when it comes to major events. This position has been underpinned by the huge audiences our Members delivered for the FIFA Women’s World Cup in France and the European Championships 2018 in Glasgow and Berlin. If anything, our position is stronger now than it was five years ago.
We are continuously working to meet the appetite of the consumer and the different needs of media partners, and we provide certainty as well as quality at a time of huge fragmentation and insecurity in the market.

HOW LIKELY IS IT THAT THE EUROVISION SONG CONTEST WILL STILL BE HAPPENING IN 2030?
We have no doubt that the Eurovision Song Contest will still be entertaining as many people worldwide in ten years’ time as it does today, as it has for the last 65 years! It is the world’s largest live music event, regularly attracting global TV audiences of over 180 million. While it is equally popular online, with 40 million unique viewers on YouTube last year, live television has the power to bring audiences together and create a shared experience like no other. The ESC is, ultimately, a live, communal experience, hinged on a live results format, where people come together as part of a shared event and influence its result. And that really is the point. Rather than leaving audiences as passive spectators, new developments in distribution and technology are enabling them to be a vital, vocal, proactive part of the experience. Five million people engaged with our social posts from Tel Aviv in 2019, showing the audience’s appetite for information and for dialogue, both around the show and between fans. And ultimately it’s this engagement that will keep the show fresh and exciting for generations to come. n



THE NEXT-GEN OF TV SCREENS With all the changes we expect to happen within the media technology industry over the coming decade, possibly the biggest will be in the way viewers consume content. Media analyst Paolo Pescatore looks into his crystal ball

WILL THE TV AS WE KNOW IT BE OBSOLETE BY 2030?
The TV as we know it is here to stay. People still want to watch TV on an ever bigger telly in their living rooms. How we access TV and the way it is delivered, though, will fundamentally change. Arguably the set-top box as it is now will become obsolete; there is no longer a need for costly hardware that is given away, and the two most popular video services (Netflix and YouTube, for now) do not rely on a set-top box. All the content we watch will be stored in the Cloud. Most if not all TVs are becoming smarter and aggregating a wide range of content. All devices will be connected, and every wall or surface will emerge as a pop-up screen authenticated by the user, engaged via voice and powered by the Cloud through one converged network.

HOW WILL WE BE WATCHING CONTENT?
We envisage a world where virtual screens pop up everywhere, all connected by super-fast networks. Viewers no longer search by channel, nor by broadcaster. They want to access the content they love by its name, through voice. With the proliferation of AI in everything, the platform or service will know what the viewer wants to watch. Ultimately, it is all about the content and the ways users can engage with it. Remotes will also be obsolete, likely to be replaced by haptic gloves and voice.

DO YOU THINK FOLDABLE/ROLLABLE TV SCREENS WILL EVER CATCH ON WITH VIEWERS?
While there’s a lot of excitement around this new category, it is still early days. These devices will evolve significantly, and we will certainly see a plethora of new screens emerge. Consumer electronics providers are struggling to differentiate; they are all looking to new categories to drive innovation and ultimately generate new sources of revenue. Smart glasses are an interesting category as they open up new use cases for all customer segments.

WHAT COULD THE RISE IN VR AND HOLOGRAMS MEAN FOR THE SCREEN VIEWER?
The promise of immersive content and VR has not been delivered, and this is unlikely to change. There are still numerous challenges. Users do not want to put on a smart headset or glasses, and they do not want to pay extra for one. However, there are some interesting use cases.


Immersive media is emerging more and more. Tech companies are all seeking to leverage their computing capabilities to provide users with a more immersive experience, in particular taking sports fans closer to the action. Expect tech firms, stadium and venue owners, and automotive companies to work more closely with the entertainment and media industries to bring technology to life. It is all about offering the content users want on any screen, anytime and anywhere. It is a great time to be a content creator, given all these new channels and screens on which to tell stories.

HOW FAR WILL SCREEN RESOLUTIONS HAVE DEVELOPED?
Bigger is better. We are already seeing the market flooded with dazzling 8K TVs, and the quest for more pixels, sharper images and better processors is not easing up. However, initial high prices will always be a barrier to consumer uptake, and there’s no point in having the latest and greatest TV if there’s no amazing content in that format. It is apparent that tech is moving far faster than the content ecosystem, because of the cost of supporting each new format natively. n

THE FUTURE OF VIEWER MEASUREMENT

By Justin Sampson, chief executive, BARB Since BARB launched in 1981, we have been continually developing our audience currency in response to fragmenting viewing patterns. In the past decade, we have delivered measurement of pre- and post-broadcast and non-linear viewing, viewing on non-TV devices and addressable advertising. We’re now planning how we’re going to keep track of how UK viewers will be watching television in the 2020s. We have an extensive to-do list. This includes delivering BVoD campaign performance (the final stage of Project Dovetail) and rolling out router meters across our panel; these offer a number of benefits, notably providing a measure of viewing to SVoD services. Our objective is that by the end of the next decade, all of these new methods of viewing, and any others that emerge, will be measured by BARB as integral parts of our service. n



A DAY IN THE LIFE

On November 22, Coldplay launched their latest album Everyday Life with a highly ambitious gig in Amman, Jordan. Here, veteran broadcast audio mix engineer Toby Alington, who was recruited to mix the show, provides an exclusive account of this extraordinary and challenging event...

The global broadcast on YouTube of Coldplay’s Everyday Life album launch, live from the Citadel in Amman on Friday 22nd November, was an extraordinary artistic and technical achievement. Performing live from this historic site, Coldplay’s dawn and dusk shows reflected the two halves of the album: Sunrise and Sunset.

As the sun hovered below the horizon that Friday morning, the muted dawn atmosphere of Amman merged into the orchestral strings opening Sunrise, the first track on the album. Directed by Paul Dugdale, cameras and drones captured Coldplay’s live performance while, in a white tent a few hundred metres from the stage, Rik Simpson (Coldplay’s producer and engineer), Dan Green (producer and FOH engineer) and I mixed the broadcast audio. As the city slowly came to life and the sun appeared over the horizon, birdsong, distant traffic and street noise could be heard between numbers. When Chris Martin closed the Sunrise set with When I Need a Friend, another Friday had come to life in Amman.




PRODUCTION AND POST
Later in the day, as the sun went down on Jordan, the band performed the Sunset half of the album, with the title track Everyday Life finishing moments before the evening call to prayer echoed around the city. A few days later the band performed the album (plus a few other favourites) to a live audience at the Natural History Museum in London, with the audio once again captured by the team and me, this time in Floating Earth’s Lawo-equipped mobile recording truck.

Rewind to September and the first phone call I received from Tony Smith, Coldplay’s audio designer, FOH tech and crew chief: “We’ve got this gig in the Middle East… 192 inputs… live broadcast… probably a tent… maybe we should meet up…!” Within days, planning was underway and a team of experts had started to come together. Dirk Sykora, my colleague on the MTV EMAs for many years, who now works for Lawo, was tasked with securing a Lawo console and peripheral audio gear for Amman. Meanwhile, I enlisted SR Films – who were supplying the video OB equipment for production company JA Digital – to provide shipping, technical support and infrastructure for our mix room. Well, mix tent.


A fortnight before heading to Amman, Dirk and I were working together on the MTV EMAs in Seville. This allowed us to work face-to-face on planning and safety nets for the forthcoming trip to Jordan. There aren’t many options on the top of the Citadel in Amman if you don’t have something you need – every bit of fine detail needed to be covered ahead of the 10-day shipment of equipment to Jordan. “It’s not like we throw things together, but the goalposts do move, sometimes more rapidly, other times at a leisurely pace,” Smith says. “The last call with Toby was from our third location option, 31 days before live to air. I was within the Amman Citadel walls with Andrew Craig (Live Nation) and Simon Fisher (Paul Dugdale’s producer) looking for two stage positions, along with possible locations for a studio, gallery, toilets and so on, with our respective teams. I had an archaeological site to draw into a .dwg format site map so we could make this work. On a job like this, Toby is at the top of my list. We have had a long relationship with him, and his experience, ears and the team of experts he can pull together were all essential for this project. This was not only a live-to-air album launch but also three days of rehearsals where the brief was to be prepared to record

TVBEUROPE JANUARY/FEBRUARY 2020 | 31


up to 128 channels for possible remixing and recording of overdubs and transitions between tracks for the live performance, along with 64 channels of broadcast stems that come via our FOH Neve Shelfords and 500 Series analogue preamps.”

On the subject of planning and equipment, Sykora elaborates: “I was looking at 192 channels to be received from the FOH console to the Lawo mc²56 via MADI. There were also two Pro Tools workstations and one Reaper system we needed to feed, because everything was to be recorded. Renting any kind of equipment locally at short notice was going to be virtually impossible, so I decided to pack everything I could possibly think of. Even though the mc²56 was scheduled to receive MADI and AES3 signals, I also included three Lawo DALLIS I/O stage boxes for all signal formats, including analogue I/Os for Rik Simpson’s effects gear – just in case. We used the DALLIS stage box for the connectivity of the outboard in the control room, but thanks to Tony Smith’s excellent preparation we never needed any additional stage boxes for ambience mics, since his team took care of those and included them in the MADI streams.”

Tony Smith liaised with Rik and Dan regarding their requirements and outboard equipment – some of which would come to Amman with Coldplay’s backline equipment. Every audio connector and power lead had to be defined, and emails went around the world refining the equipment list. I visited Coldplay’s rehearsals at Air Studios in London a few days before we all left for Amman to cross-check everything with the team.

Tony requested two 192-channel Pro Tools systems which I commissioned from Matt Phillips at FX Rentals in London. These were shipped to Hilversum to join the rest of SR Films’ equipment heading for Jordan. My long-term colleague Leaf Troup came on board to look after these rigs in Amman. Everything arrived in Jordan on 16th November, and many tonnes of audio-visual equipment were transferred to the Citadel site and also to our rehearsal venue, the Cultural Palace at Malika Aliya where our control room was built in a backstage area. When Tony had seen (and heard) this proposed space a month prior, he knew some substantial acoustic treatment would be needed to turn it into a viable audio environment. “There are very few options in Amman to find a rehearsal space that


can fit the band, guest musicians and all the techs, and also be a recording studio to capture a local choir and facilitate any overdubs. Rik Simpson, Dan Green and Bill Rahko (just voted into the top five Rock Producers of 2019) are lovely guys, but they do have very high expectations,” Smith explains. “You need to trust what you’re hearing to mix well. Our lovely room in the Cultural Palace did not fit into the so-called ‘Bolt area’ – it needed serious TLC.

“After some research into acoustics and acoustic properties I came across ASC TubeTraps, good for bass absorption between 55Hz and 250Hz and treble diffusion or absorption from 250Hz and above. Placed correctly, these totally transformed the room; along with Auralex ProMAX v2 and some thick quilted Sound Control Services studio blankets, we had a studio. My expectation was to improve the room – we did more than that.”

After a few days of rehearsals, the entire control room was packed up and taken overnight to our tent on the Citadel hillside. By the following morning, a high-tech audio installation was in place, complete with acoustic baffling, mood lighting, a 45-inch TV monitor, three 192-channel multitrack machines, UAD Apollo outboard equipment, two TC System 6000s, and various wonderful vocal effects for Rik Simpson to work his magic on. It was surreal to walk into a tent in the 3,000-year-old ruins of the Citadel and be confronted with the very latest audio technology – lights blinking in every corner of every room. And it sounded great. With Barefoot and Auratone monitors to work with, a brand new Lawo mc²56 and all the toys we’d asked for, we had no excuse but to make the shows sound perfect.

All the live inputs, including Tony’s ambience mics, came down six 96kHz MADI streams via an M12 Optocore system provided by Wigwam to our recording tent, where we converted them to 192 channels at 48kHz.
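The channel arithmetic here follows from the MADI (AES10) standard: a stream carries 64 audio channels at 48kHz, and capacity halves as the sample rate doubles. A quick sketch of the budget (the per-stream figures are standard MADI capacities, not stated in the article):

```python
# MADI (AES10) carries 64 channels per stream at 48 kHz; capacity halves
# each time the sample rate doubles (32 at 96 kHz, 16 at 192 kHz).
MADI_CHANNELS_AT_48K = 64

def madi_channels(sample_rate_khz: int) -> int:
    """Channels per MADI stream at a given sample rate (48/96/192 kHz)."""
    return MADI_CHANNELS_AT_48K * 48 // sample_rate_khz

per_stream_96k = madi_channels(96)   # 32 channels per stream at 96 kHz
total = 6 * per_stream_96k           # six streams carry the full 192 inputs
```

This is why six streams were needed at 96kHz: the same 192 channels would fit in three streams once converted to 48kHz.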
It was during the first rehearsal of Sunrise at the Citadel that I heard a very weird whining sound over everything, and it took me a few seconds to realise that Dugdale’s drones were going to prevent us from using many of Tony’s ambience mics. Dirk’s team rapidly rigged some more mics for me on the other side of the hill, allowing us to capture the ambience of the city waking up without the drones. The drone footage was an amazing addition to the filming but, as every engineer knows, drones aren’t very compatible with recording.

Guest acts in the performance included Femi Kuti on sax with his amazing band in Arabesque, Palestinian singer Norah Shakur, a gospel quartet and string section, a children’s choir, and Belgian singer Stromae, returning to the public eye after quite some time away. The additional microphones needed for these guests alone numbered around 24 channels; over 200 channels were used for the live shows in total.

Delivering sound for online is broadly similar to delivering for TV, but YouTube and other online channels have different loudness requirements: generally minus 16 LUFS, compared to the EBU R128 target of minus 23 LUFS for TV broadcast. The team tested the broadcast chain through YouTube’s distribution, and managed to clear up some audio level confusion, resulting in a perfect digital experience for the viewer.
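Hitting those two targets from one mix is a simple gain offset. A minimal sketch, assuming the mix has already been metered for integrated loudness per ITU-R BS.1770 (the measured value below is illustrative, not from the broadcast):

```python
def gain_to_target(measured_lufs: float, target_lufs: float) -> tuple[float, float]:
    """Return (gain in dB, linear multiplier) to move a mix to a loudness target."""
    gain_db = target_lufs - measured_lufs
    linear = 10 ** (gain_db / 20)   # dB to amplitude ratio
    return gain_db, linear

# A mix sitting at the -23 LUFS TV target needs +7 dB for a -16 LUFS online target.
gain_db, linear = gain_to_target(measured_lufs=-23.0, target_lufs=-16.0)
```

The same relationship works in reverse when repurposing an online mix for TV transmission.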


“The drone footage was an amazing addition to the filming, but as every engineer knows, they aren't compatible with recording.” TOBY ALINGTON

On YouTube distribution, Sykora says: “As cinema cameras had been specified for this project, two versions of the audio mix were prepared concurrently: a straight one with no delay, and a delayed one for perfect lipsync with the video footage (cinema cameras with live colour grading tend to induce a latency of up to eight frames). This audio version was embedded into the video footage and aired to YouTube.

“Getting the audio and video elements to the YouTube facilities proved surprisingly straightforward. The redundant satellite uplink worked flawlessly, and delivering the material to the worldwide audience was a simple matter of sending it to YouTube’s live webstream encoder.”

Come Friday morning, the crew were called for 3am transport to the site. With the temperature outside just above freezing, the warmth in the tent from the people and equipment was very welcome. Knowing that we had pretty much every Coldplay fan tuning in on YouTube, and that we were broadcasting live to the world, made for a focused mindset. We went into the tent in the pre-dawn darkness and came out to bright daylight after the Sunrise show. A welcome daytime break for the crew was followed by the Sunset show.

The hugely positive social media reaction contributed to the euphoria of completing two perfect broadcasts. We knew we had delivered something magical, and the immediacy of Twitter and other social media confirmed that we had achieved the emotional responses we were hoping for.

On Saturday, the day after the Sunrise and Sunset shows, Coldplay again performed at the Citadel, this time to a live audience. And Tony could finally turn on his PA. This show was also delivered for radio and online broadcast. With the Citadel shows complete, the equipment was returned to its flight cases, but not before Dirk had trimmed and copied the


production file from the Lawo console, ready to import into Floating Earth’s Lawo-equipped mobile in London for Monday’s Natural History Museum show. All the essential additions of outboard effects travelled with the band’s backline, arriving at the museum in the afternoon. Mike Hatch and Dirk imported the production file into Floating Earth’s Lawo, and without a soundcheck the Monday night recording was completed successfully using the same settings from the 10 days in Amman.

Sykora comments: “Technically speaking, the London gig was the third time the mc²56 settings had been used: in the rehearsal room in downtown Amman, in the tent at the Citadel and, finally, outside the Natural History Museum. As such, this is nothing new, of course, but considering how much was at stake and how well everything worked, I cannot help feeling grateful.”

Smith adds: “The joy of having a team of experts around is you don’t always need to state or ask for things, they just happen. The NHM show was slotted in just two days after the Amman Citadel gig. Toby had engaged the Floating Earth mobile, another long-term partnership with Coldplay, as well as Dirk. All was perfect: Lawo desk, Mike Hatch, Dirk, Toby: sorted. Right, next, the two-hour load-in.”

I have to say I totally agree with Dirk’s sentiment: “Coldplay’s Sunrise and Sunset gigs at Amman’s Citadel are among the absolute highlights of my professional career.” The Sunset and Sunrise shows were real highlights of my 30 years working in live broadcast, only made possible by the expertise of Dirk and his crew, the attention to detail of Rik, Dan and Tony, and the acceptance by Coldplay and their management of our proposals to get it right with specific equipment, people and approach. Fond memories – and if you haven’t watched it yet, find the shows on YouTube, knowing it was live and how we approached it. n



FEATURE

WHAT IS THE AVOIP “STACK”?
By Brad Price, senior product manager at Audinate

‘The “stack” is what makes the underlying technology workable in real-life studios and facilities.’

In the world of computer-based technology, the term “full stack” refers to a solution or service that contains everything needed for use, not just a component or layer. An example from another familiar sector illustrates the point: a set of wheels, a motor and a way to steer might satisfy some minimal definition of a motor vehicle, but nobody would consider that a usable automobile. The complete “stack” for an automobile includes a body, seating, suspension, brakes, a dashboard of important information, and hundreds more components that are considered essential if one is actually to drive and use the vehicle in the real world.

In AV-over-IP, the “stack” represents everything needed to deploy, control and manage the system so that it is fully usable, scalable and secure in a wide range of environments. The “stack” is what makes the underlying technology workable in real-life studios and facilities.

THE TRANSPORT LAYER

While there are several variations of transport layer used in different AV-over-IP systems, their performance is remarkably similar – in a good way. The abundant bandwidth of gigabit networking, alongside timing standards such as IEEE1588, makes the transport layer straightforward and logical. All AV-over-IP solutions in use today employ the IEEE1588 Precision Time Protocol (PTP), a standard that establishes a reliable means of calculating network traverse times with great accuracy. This in turn allows clocking to be easily distributed around the network so that audio packets can be faithfully sent and reconstructed as they go from device to device with sub-microsecond synchronisation. In the great majority of cases, outstanding performance is achieved with no need for an external clock, reducing costs and complexity.

By itself, the IEEE1588 standard can only be applied within a single network subnet, which limits the size and scope of any possible deployment.
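The “network traverse time” calculation at the heart of PTP can be sketched in a few lines. This is the standard delay request-response maths from IEEE1588, not any vendor’s implementation, and the nanosecond timestamp values below are invented for illustration:

```python
# The four standard PTP timestamps: t1 = Sync sent by the master clock,
# t2 = Sync received by the slave, t3 = Delay_Req sent by the slave,
# t4 = Delay_Req received by the master.

def ptp_offset_and_delay(t1: int, t2: int, t3: int, t4: int) -> tuple[float, float]:
    """Classic PTP calculation; assumes the network path is symmetric."""
    offset = ((t2 - t1) - (t4 - t3)) / 2   # how far the slave clock is ahead
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way network traverse time
    return offset, delay

# Example: slave clock 500 ns ahead of the master, true one-way delay 1,000 ns.
offset, delay = ptp_offset_and_delay(t1=0, t2=1_500, t3=2_000, t4=2_500)
```

In real deployments the timestamps come from hardware timestamping close to the network interface, which is what makes sub-microsecond accuracy achievable.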
While many AV-over-IP networks are used this way with excellent results, only a “full stack” solution can provide tools to extend PTP beyond this limitation. A domain manager product allows multiple subnets to be synchronised by employing


managed boundary clocks, which in turn allows audio to be sent across routers with no degradation in performance. This allows station and studio managers to deploy AV-over-IP freely as needed, spanning buildings, studios and facilities using existing network infrastructure.

Another extension of PTP with real impact for broadcast allows distant facilities to be linked and synchronised using a reliable shared time resource, such as GPS clocking. A studio with live talent in one city can now be connected in real time to a production studio many miles away with no degradation of performance or added difficulty. While critically important, these extended capabilities are not found in the original standards per se – they are made possible by the development of complete solutions that leverage the standards.

THE CONTROL LAYER

If “transport” is the basic wheels and motor of an automobile, then “control” is how you actually drive it; merely having a set of transport standards and capabilities alone isn’t sufficient. The control layer is what most users think of as the solution itself; it is the part with which they interact to accomplish nearly all daily tasks, and so is rightly viewed as the face of the system. All regular activities – adding and removing devices, naming and labelling devices, establishing clocking configuration, setting sample rates and, of course, defining signal routing between devices – are done via the control layer.

As anyone who has used software knows, simply having an interface doesn’t mean that something is easy or intuitive to use without mistakes. Proper design of the control layer user interface is a vital part of ensuring that the system can be used by many people with the fewest number of steps. It needs to provide unambiguous feedback that keeps users informed and able to act. Where automation saves time and increases clarity, it should be used.
When details aren’t necessary, they should be kept out of sight to provide users with only the most important information. When details are needed, they should be accessible. These attributes are key to minimising


confusion and rapidly resolving issues when they arise in order to stay on the air.

Several AV-over-IP systems are confined to use with products made by a single vendor, and control is focused upon that smaller set of products. While this approach is coherent, it restricts customer choice to an “all or nothing” option for control. When designing a system comprised of products from a variety of manufacturers, a vendor-neutral control solution is the logical choice, providing your operators and managers with a consistent, reliable control experience across all devices.

THE SECURITY LAYER

In old-school, point-to-point connected audio systems, security has often meant simply, “don’t touch that.” In a busy studio environment, that isn’t enough to ensure that systems aren’t tampered with. AV-over-IP is based upon standard computer networking and uses a similar model for essential security. Because networks are physically distributed, access must be controlled at all points, and this is done through user authentication and permissions. Authority (or the lack thereof) limits what user actions are permitted, protecting the system from damaging configurations or unwanted access to protected areas. This type of security is essential for mission-critical modern installations such as broadcast studios.

Once you know who is using the network, you can define what they can do. Products such as a domain manager provide user authentication, and can be linked to existing directory mechanisms like LDAP or Active Directory so that one doesn’t have to start from scratch. Authenticated users are assigned roles by the AV manager that determine which domains are accessible and what actions may be taken. Know your users, and you know a lot about how your system is really used.
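The role-and-domain model described here can be illustrated with a deliberately simplified sketch. The names (`Role`, `can`) are invented for illustration and do not correspond to any vendor’s API:

```python
from dataclasses import dataclass

@dataclass
class Role:
    domains: set[str]   # which functional domains this role may touch
    actions: set[str]   # which actions it may perform within them

def can(role: Role, domain: str, action: str) -> bool:
    """Authorise an action only inside a domain the role has been granted."""
    return domain in role.domains and action in role.actions

# An operator for Studio A may route and label devices there, but nowhere else.
operator = Role(domains={"Studio A"}, actions={"route", "label"})
```

Real systems layer this on top of directory-backed authentication, so the roles follow the identities the organisation already manages.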

THE ADMINISTRATION LAYER

Administration covers the tasks and information that managers need in order to keep systems running as desired. In AV-over-IP, this covers parameters related to devices, users and network topology. It’s how the network conforms to the requirements of the studio or station.

Organisation is always key. A large network quickly becomes difficult to manage, as all devices are visible all the time, displaying hundreds or thousands of discrete channels of audio. A good administrative system allows these devices to be organised into the functional domains they represent, such as “Studio A” or “Production Room 1”. This clarifies the system for users and reduces mishaps that can result from an overloaded interface.

Controlling devices is always key. An unwanted “rogue device” can wreak havoc on a system and must be prevented from joining the network. The administrative system should stop this from happening, while also providing a process by which devices are legitimately authorised to participate.

Controlling users is always key. As mentioned above, user authentication is an essential first step in securing a system. The system should permit user privileges to be defined by the administrator, so that only the correct people can change critical parts of the network.

Visibility is always key. The administrative layer should be able to keep managers informed of any unwanted changes, such as loss of clock or failure of a device. Real-time and email alerts are required to ensure that people have the information to act as quickly as possible.

Makers of single-vendor AV-over-IP systems provide most or all of these features for their own products, while makers of AV-over-IP platforms provide these services as vendor-neutral tools for use with any supported products, allowing a mix of brands to be securely managed.

CONCLUSION: WHERE WE’RE HEADED

All successful technologies start complex, then evolve into more graceful forms over time. The hand-cranked cars of the 1920s have become the technical marvels that are modern-day vehicles, and the command-line computers of the 1980s are now the colourful, ubiquitous phones used by everyone. So it is with AV-over-IP. The future doesn’t stay forever complex and esoteric; it is driven by users who want to get things done efficiently and easily, and by the companies that serve them. Like the driver of a modern car who only wishes to get to the supermarket and doesn’t need to understand how everything works in minute detail, broadcasters of the future want to focus upon their goals and tasks, not dwell upon the underlying technology. At the end of the day, that is exactly what the “stack” is all about. n


‘Broadcasters in the future want to focus upon their goals and tasks, and not dwell upon the underlying technology.’


RED CARPET READY
With thousands of broadcasters around the world descending on red carpets during awards season, it’s imperative they have a trusted partner to ensure their viewers get to see who’s there and what they’re wearing. Jenny Priestley puts her best dress on to find out more

It’s that time of year again when the great and the good of the entertainment industry take a slap on the back for their work over the past year. From the Golden Globes to the BAFTAs, the Grammys to the Oscars, viewers’ appetite for seeing which of their favourite stars will be going home with a little golden statue shows no sign of diminishing. But with so many broadcasters and digital companies crowded into such a small piece of real estate as they cover the stars arriving at the ceremony, how can they guarantee their coverage will reach their viewers?

One obvious option is to partner with a company that has been working on these kinds of events for a number of years. Step forward, The Switch. It works with everyone from major international broadcasters to digital companies to the awards organisers themselves to ensure delivery of the content back to the viewer.


“We live by the guise of every production is a snowflake, they’re all unique. Different people with different projects have different needs,” explains Ian Dittbrenner, executive in charge of production and digital at The Switch, when asked exactly what it is that the company offers.

“Some clients know and use The Switch as the network that they entrust to deliver their content. Others know it as a means to deploy their content to more viewers,” he adds. “The Switch is the backbone for most tentpole events because of the stability of the service that the company offers. Most tentpole award shows that you’ve seen have touched The Switch network in one capacity or another.”

Dittbrenner explains that typically The Switch will have multiple facilities on a red carpet in order to bridge any gaps in any production. “We do everything from producing our own programming to scenic, staging and lighting, through production, resourcing and crewing, and



“Coverage of the red carpet has become a lot more interactive.” IAN DITTBRENNER

transmission in any domestic and/or global digital media deployment,” he continues. “The Switch has always been the answer for delivering entrusted work for clients such as the Oscars, the Emmys and the Grammys, as well as all the people who report on those award ceremonies. People spend months and years creating programming around these ceremonies and they have to get that content from point A to point B. We do that for them as a trusted partner, as well as delivering to what we refer to as point C, which is our digital media services.”

“We will have a range of customers at any one given event,” adds Kevin O’Meara, vice president of marketing at The Switch. “Depending on who those customers are, we have a range of services that we supply to them. Some of them have their own OB services on site with their own production team, and we may just be doing transmission for them. Some of the teams working on the red carpet might need help with the production services side of it, so we could be providing our own OB truck and producing coverage on site with them. They’ve got their own talent in front of the camera, but we would look after all the connectivity around the site and do the transmission for them. The other option is that the client might want to look at doing a remote production.”


Coverage of the red carpet really took off back in the 1990s, thanks to the likes of E! Entertainment and the doyenne of fashion commentators Joan Rivers, with her ‘who are you wearing?’ question for any celebrity within a 10-metre radius. In recent years, the red carpet has become an event unto itself – many viewers will tune in for the carpet and not the ceremony – and technology has made it easier than ever to cover all the pre-award build-up.

“Livestreaming has made wall-to-wall coverage of the red carpet for awards of all types more available and better – it all starts earlier than it did five years ago, with high-quality production values and often separate commentaries aimed at specific markets,” says Dittbrenner. “Coverage of the red carpet has also become a lot more interactive, with streaming on social media platforms allowing viewers to contribute their own lively commentary. There are just so many ways for viewers to engage with what’s happening around their favourite awards now – sometimes using multiple screens. It has helped capture a whole new generation of followers for coverage of big awards such as the Oscars, BAFTAs, Grammys and Emmys – as well as rising global events like The Game Awards, which has its own personalities gracing the red carpet. Overall, we have just seen more red carpets for more events being produced at a level that was once reserved for the run-up to the big music, movie and television awards shows.”

So while coverage of the red carpet has undergone fundamental change in the recent past, how does Dittbrenner think it will develop over the next few years as more new technologies become available to producers? “The technology is only going to make coverage of the red carpet even more extensive and widespread,” he says.
“OTT platforms will continue to allow for even more red carpet coverage around niche events, as it continues to become more economical for smaller organisations to give livestreaming around their awards a broadcast-quality treatment. Remote production will drive this whole process, enabling smaller-scale events to have the range of camera positions and top-flight crew that raises their coverage of the red carpet to a whole new level.”

Dittbrenner believes producers will also embrace remote production for coverage of the red carpet at the big award shows. “Equally, big technological advancements like 4K/8K and 5G will have an important impact – but not for a couple more years yet,” he adds. “Realising the full potential of 4K, let alone 8K, is still quite far off. While the technology is in place to produce and transmit 4K signals, there are still huge technical hurdles to providing true 4K content on the consumer’s TV – that’s if they even have a big enough TV to notice the difference from HD.

“Next-generation wireless network technology that can facilitate significant bandwidth for video transmission services is always of huge benefit to live production, and 5G will doubtless help make live coverage of the red carpet and similar events easier and more accessible – but down the road.”

So not so much ‘who are you wearing?’ as ‘what are you using?’ n


THE CONTENDERS

Over the next few pages, TVBEurope hears from some of this year’s awards season contenders in both TV and film. They explain how technology helped create their nominated work

AVENGERS: ENDGAME
STUART PENN, VFX SUPERVISOR

PLEASE EXPLAIN YOUR ROLE ON AVENGERS: ENDGAME AND WHAT THAT INVOLVED.
I was VFX supervisor for Framestore’s work on Avengers: Endgame. We completed a wide range of VFX covering over 300 shots. The work included character performance animation on Smart Hulk and Rocket Raccoon, development of the Avengers Quantum suits, some complex time travel effects, holograms and detailed environments such as the Avengers Hangar environment, Asgard and Wakanda.

WHAT TECHNOLOGY DID YOU USE FOR YOUR WORK ON AVENGERS: ENDGAME, AND HOW?
For Smart Hulk, we developed machine learning technologies to allow us to translate Mark Ruffalo’s performance on set on to our Smart Hulk facial rig. We also constructed a full set of anatomy for muscle simulations. For the Quantum Suits we used high-precision body tracks and cloth simulations to create the fully digital costumes. This also required using digital neck and hair replacement to assist with integration.


WHY DID YOU USE THAT TECHNOLOGY IN PARTICULAR, AND HOW DID IT HELP YOU WITH YOUR WORK ON THE FILM?
The machine learning used on Smart Hulk allowed us to quickly generate first passes of facial animation. This was especially useful for allowing us to deliver fully rendered versions of Smart Hulk for the edit early in post. For the suits, the body tracking ensured that we retained the actors’ performances while fully replacing their costumes. The cloth simulations were essential to give the variety of fabrics in the costume naturalistic motion. Digital neck replacements were needed where the costumes the actors were wearing covered areas that were left exposed by the design of the Quantum Suits.

WHAT DOES IT MEAN TO YOUR TEAM THAT THE FILM IS NOMINATED?
We are very proud of our work on Avengers: Endgame and are honoured to be nominated. It’s recognition for the hard work, dedication and attention to detail the team put into every aspect of the VFX on the film. n



PEAKY BLINDERS
STU WRIGHT, PRODUCTION SOUND MIXER

PLEASE EXPLAIN YOUR ROLE ON PEAKY BLINDERS AND WHAT THAT INVOLVED
As the production sound mixer on Peaky Blinders, managing the sound team is my first priority. I was lucky enough to have Alessandro Pascale (my go-to first AS) on board, whose ability to run a smooth floor really allows me to concentrate on getting the most out of each scene. My second AS Ben Hossle and trainee Joshua Carr did excellent work in providing me with usable radio mic channels, which have become so important on multi-camera period dramas where location noise can really distract from the pictures we are trying to capture. After the location challenges, you then have to deal with the own goals of the filming process, where other departments cause collateral audio damage in the course of creating the Peakys’ world. Thankfully, these departments made co-operation count at the right moments, saving many shots from ADR.

WHAT TECHNOLOGY DID YOU USE FOR YOUR WORK ON PEAKY BLINDERS?
I recorded on my two trusty Sound Devices 788T recorders. Old as they are, they still battled through the hostile environments of the Peakys’ world. Mic-wise, I was mostly on the Sanken mics and Sennheiser MKH50s until the last block of shooting, where I got to try out the DPA 6060 sub-miniature lavalier, which has become an effective new tool.

WHY DID YOU USE THAT TECHNOLOGY IN PARTICULAR, AND HOW DID IT HELP YOU WITH YOUR WORK ON THE SHOW?
The DPA 6060 helped improve a couple of scenes, one of which was in episode six when Sam Claflin, playing Oswald Mosley, was delivering his larger-than-life speech in the auditorium. The number of camera angles and sizes required to capture the same performance meant that the boom mic was good for air but not for heart, and the 6060 personal mic was not great due to his


costume having strange acoustic properties, and the MKH50 plant on the podium was inconsistent and would get knocked. The only consistent mic was a 6060 which we managed to weave into the retro, non-practical prop mic! Although in shot, it is very hard to spot due to the cable and capsule being so tiny.

WHAT DOES IT MEAN TO BE NOMINATED FOR YOUR WORK ON THE SHOW?
It’s an honour to have worked on such a prestigious, well-written series, but to also be nominated along with the post sound mixers, editors and artists is a joy.

WHAT’S NEXT?
Reading episode one of Peaky Blinders series six has got me wound up and ready to “f*ck with the Peaky Blinders” in order to preserve their outstanding performances! n


HIS DARK MATERIALS
ROBERT HARRINGTON, VFX SUPERVISOR

PLEASE EXPLAIN YOUR ROLE ON THE SHOW AND WHAT THAT INVOLVED
My role was VFX supervisor for Framestore’s London team. This meant I was acting as a bridge between the client-side overall show supervisor, fellow Framestore employee Russell Dodgson, and the work we were creating, but in reality it goes deeper. Between the London and Montreal offices, over 700 different people worked on the show, doing all sorts of things - making, shaping, defining, grooming, painting, solving, animating and lighting everything from animals to boats to buildings to airships to explosions and more - and all that work needs to be quality-controlled.

With regards to the on-set work, I spent time with the on-set puppeteers, working on how their physical bear-riding rig behaved and making sure it lined up with what we were doing back in the office. It’s an important thing to do as, at some point, we would have Dafne Keen (Lyra) on the polar bear, bouncing up and down courtesy of puppeteers burning their calories, but we would also need to show a CG polar bear underneath her, moving in a way that matches her motions plus the biomechanics of our animation. Based on how they’re filming something, you need to ask yourself questions like: “Can we make a digital creature walk along there, go through that real shadow there, remove the unavoidable crew reflections in that window there and add digital buildings over that wall there? Can we do all that in six months’ time when we do the VFX work, and is there anything we need to change or work out now to make sure we can stick the landing?”

WHAT TECHNOLOGY DID YOU USE FOR YOUR WORK?
For the artists, everything is ACES compliant for colour and predominantly uses Linux workstations running Maya/Houdini/Nuke/Mari/etc., also

40 | TVBEUROPE JANUARY/FEBRUARY 2020

probably called “typical VFX software”, but with additional proprietary in-house tools added in. The production side uses Windows machines running Shotgun, Excel, etc., and has some custom things on top too. Beyond that, tech-wise, there was also a lot of scanning, so every set was captured with LIDAR to confirm what shape/size the locations were, and all the “people and props” were captured with photogrammetry. WHY DID YOU USE THAT TECHNOLOGY IN PARTICULAR, AND HOW DID IT HELP YOU WITH YOUR WORK ON THE SHOW? We use Linux because of it’s friendly licensing and ability to scale, ACES because it’s a useful standard, and the software choices are the de facto baseline standards for the industry. The in-house software is a more nuanced subject and is often the case (particularly) at large-scale VFX companies. It can serve various purposes, anything from “we’d need to be able to do X automatically as we’re doing so much of it”, to not having to be dependent upon a company’s roadmap where there are sometimes other factors at play which you’re better off not being exposed to. WHAT DOES IT MEAN TO YOUR TEAM THAT THE SHOW IS NOMINATED? This is going to sound quite predictable; it’s purely a great honour. Many people put a lot of hours into creating a lot of VFX. All those hours add up, and whilst you can try hard to make the work environment nice and have good team spirit, there’s still that ethereal work-life balance. Ultimately, with a “time = tangible reward” industry such as this one, a lot of people give a lot of their lives to create this work. To go up against all of that and to still get a nomination at the end is a special thing for everyone involved. n



THE IRISHMAN PABLO HELMAN, VISUAL EFFECTS SUPERVISOR HOW LONG DOES IT TAKE TO DO THE SOPHISTICATED DE-AGEING EFFECTS ON THE IRISHMAN AND WHAT DO THESE INVOLVE? Pre-production for The Irishman started in 2015. After doing a successful test shot with Robert De Niro, Industrial Light & Magic (ILM) went on to write the software needed to withstand the rigours that would come with such a large production effort. Martin Scorsese and Robert De Niro requested the maximum amount of freedom on set. The actors preferred not to wear markers on their faces or helmet cams on their heads. The filmmakers also made it clear that they wanted to be on the set with theatrical lighting, and no separate stage shoots would be scheduled. The R&D team spent two years writing new software, repeatedly testing it and constantly refining it. The 108-day shoot started in 2017 and, concurrently, ILM’s digital asset team started to create highly detailed models of Robert De Niro, Joe Pesci and Al Pacino at their current ages. Then the team proceeded to painstakingly model younger age variations for each actor. After the shoot, ILM artists worked in post for a year and a half, tracking every camera, reconstructing the lighting for every shot the CG characters or added elements would appear in, rendering the CG and compositing the 1,750 visual effects shots for the movie. WHAT TECHNOLOGY AND SOFTWARE DID YOU USE? ILM developed a new markerless on-set facial capture system. Proprietary software allowed the actors to be on set, under theatrical lighting, and to perform without wearing facial markers or helmet cams on their heads. There were no restrictions of space or camera movement for the director or cinematographer Rodrigo Prieto. This is currently the only system available in the world that allows markerless on-set facial performance capture under production lighting. ILM designed a three-camera rig in collaboration with Prieto and ARRI Los Angeles, which included two high-resolution ARRI infrared-modified ‘witness cameras’ attached to and synched with the primary RED director’s camera. This system captured the facial performances from multiple angles and also threw infrared light onto the actors’ faces to neutralise unintended shadows while remaining invisible to the production camera. This process provided a full set of data for every frame of each performance and all of the on-set lighting and camera positions that would be needed later. HOW SOON DO YOU THINK THIS KIND OF DE-AGEING TECHNOLOGY WILL BECOME COMMONPLACE IN FILMS AND TV? The technology is where it needs to be to attend to the different creative needs that are out there. The methodology does come in several “flavours”. There is a 2D, 2½D and 3D approach depending on the budget, amount of coverage, lensing and kind of performance (action, dramatic performance, etc). It depends on the appetite. The brand new markerless software used on The Irishman provides much greater freedom for the actors, who feel closer to the “truth” by not bearing the burden of technology as they perform. The technology can also change the way filmmakers and actors approach the work, with a focus on performance instead of the “logistics” and “acrobatics” of how to make the day.




THE CROWN ANDREW SCRASE, VFX SUPERVISOR

WHAT WAS YOUR ROLE ON THE CROWN AND WHAT DID THAT INVOLVE? I was the VFX supervisor at Framestore, where I oversaw our involvement with the project. Our work mainly centred around the digital creation of London Airport (now Heathrow) and the addition of several BOAC aircraft that feature in the scenes. WHAT TECHNOLOGY DID YOU USE FOR YOUR WORK ON THE SHOW, AND HOW? All our compositing work was done in Nuke, with some small additional help from Flame. On the 3D side of things for the BOAC aircraft, Maya was our ‘go to’ for any hard surface modelling and animation, Substance Painter and Mari for texturing, and Houdini for our look development and lighting, using Arnold as our renderer. The digital matte paintings for London Airport and a couple of high wide aerial shots were created using Photoshop. WHY DID YOU USE THAT TECHNOLOGY IN PARTICULAR, AND HOW DID IT HELP YOU WITH YOUR WORK ON THE SHOW? Nuke has been the staple in visual effects compositing for quite a few years now and is still one of the best pieces of software to use for this kind of shot-finishing work. The Substance Painter software, now developed by Adobe, has been a great addition for artists creating CG assets. Its procedural approach and non-destructive workflow make it a very powerful tool with which the broad strokes of textures can be laid down much more quickly. At the same time it gives you a good starting point for look development of an asset. For me, Houdini is becoming the more popular software for a lot of 3D work. Traditionally it was used for the creation of FX work in the form of CG smoke, fire, water and particle effects. Now it has a broader appeal, as its workflow and node-based system make it great for creating complex scenes. The capability for large environment generation and its integration with Arnold have meant that it’s now very popular as an all-round 3D package. WHAT’S NEXT? I’m currently working on several upcoming TV shows for the likes of the BBC and for Netflix. It’s a really interesting time to be working in television because the explosion of high-quality shows is giving us the opportunity to work on some exciting new stories and to create some great visual effects. One of the real strengths of Framestore has been its ability to really assist with the storytelling process. You only have to look at the work it’s done on shows such as His Dark Materials as an example of what the company can bring to a project.


JOJO RABBIT MIHAI MALAIMARE JR, CINEMATOGRAPHER


WHAT WAS YOUR ROLE ON JOJO RABBIT AND WHAT DID THAT INVOLVE? I think every project is like solving a puzzle. It’s also an amazing team effort that evolves continuously until the premiere. As a cinematographer, my role is to determine how to visually tell the story and, along with the other departments, to offer the best tools for the director to tell the story. WHAT TECHNOLOGY DID YOU USE FOR YOUR WORK ON JOJO RABBIT? We used the ARRI Alexa SXT Plus, shooting ARRIRAW Open Gate with Hawk V-Lite 1.3x anamorphic lenses. WHY DID YOU USE THAT TECHNOLOGY IN PARTICULAR, AND HOW DID IT HELP YOU WITH YOUR WORK ON THE FILM? After numerous camera tests we determined that 1.85:1 was the best aspect ratio to tell Jojo’s story. Shooting open gate with the Alexa SXT and the 1.3x anamorphic lenses allowed us to achieve a true 1.85:1 anamorphic image with velvety skin tones, really interesting falloff and amazing bokeh and flares.
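As a quick sanity check of that combination: a 1.3x anamorphic lens squeezes the image horizontally, so the delivered aspect ratio is the captured (sensor crop) aspect ratio multiplied by 1.3. The sketch below shows the arithmetic only; the pixel dimensions are illustrative assumptions, not the production’s actual recording resolution.

```python
# Illustrative arithmetic only: the crop dimensions below are assumptions,
# not the recording resolution actually used on the film.

def desqueezed_aspect(width_px: int, height_px: int, squeeze: float) -> float:
    """A horizontal anamorphic squeeze multiplies the captured aspect ratio."""
    return (width_px / height_px) * squeeze

# To deliver 1.85:1 through 1.3x anamorphics, the capture area must be
# roughly 1.85 / 1.3 = 1.42:1, taller than a spherical 1.85:1 crop,
# which is why open gate's extra sensor height is useful here.
crop_w, crop_h = 2880, 2025  # hypothetical crop with a ~1.42:1 aspect
print(round(desqueezed_aspect(crop_w, crop_h, 1.3), 2))  # 1.85
```

The same arithmetic explains the choice of a 1.3x squeeze rather than the classic 2x: a 2x squeeze from a 1.42:1 area would desqueeze to roughly 2.84:1, far wider than the intended frame.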

WHAT DOES IT MEAN TO HAVE THE FILM INCLUDED IN THE DISCUSSION AROUND AWARDS SEASON THIS YEAR? It means a lot. We don’t think about awards when making a movie but if they come it’s really amazing. Having it included in the discussion around awards season means a bigger audience and that is great.





CHERNOBYL SIMON SMITH, EDITOR WHAT WAS YOUR ROLE ON CHERNOBYL AND WHAT DID THAT INVOLVE? There were two editors on Chernobyl; we had alternate episodes, so I cut episodes two and four and then we co-edited episode five, which is the episode we’ve been nominated for [at the Eddie Awards]. The shoot was spread over five months (100 days), so each day we would receive the rushes over the internet from the location team in Lithuania. We’d assemble the dailies until the shoot was over, and then began fine-cutting. This generally takes the form of sending versions of each episode to the director, writer, producers, executive producers, and then the broadcasters for their notes and feedback, and then working together to make it better. It was around another seven months of fine-cutting, mixing, VFX and grading before they were all finished. WHAT TECHNOLOGY DID YOU USE FOR YOUR WORK ON CHERNOBYL? We used Avid Media Composer as our NLE, with two editors, three assistants and a VFX editor all running from the same project on an ISIS. In total, the proxy media files took up about 10TB of space, as often we would shoot with multiple cameras. The courtroom trial scenes were always shot on three cameras. During the shoot, PIX was used to distribute daily rushes securely to everyone that needed them. During the edit, the VFX editor was able to pass shots back and forth between us and two in-house VFX artists, which was a huge help. We were all in Soho in London, but the rest of the team were spread all over the world - New York, Los Angeles, Berlin… so we used a new collaboration tool called Evercast which allowed us to share screens, webcams, and simultaneously view the edits. WHY DID YOU USE THAT TECHNOLOGY IN PARTICULAR, AND HOW DID IT HELP YOU WITH YOUR WORK ON THE SHOW? In my opinion, Avid is the best NLE for working at scales like this, being able to collaborate with multiple other people on the same project, and dealing with such huge libraries of rushes, sound effects, music and sequences. Breaking it down into individual ‘bin’ files means the computer can allocate resources really efficiently. We were really impressed by the new Evercast system - being able to all stay local to our homes and families really helps our quality of life, and that then feeds back into the quality of work you’re able to do. WHAT DOES IT MEAN TO BE NOMINATED FOR YOUR WORK? To be nominated by the ACE is a huge honour, as it’s coming from other editors. I’m really looking forward to the event, to be in the same room as heroes of editing like Thelma Schoonmaker, Fred Raskin, Kirk Baxter - it’s very inspiring.



LIP SYNC BATTLE Jenny Priestley discovers how Hitomi’s MatchBox Glass technology allowed Viacom to deliver all of the feeds from the MTV EMAs in perfect sync

When broadcasting a huge awards show to an international audience it’s imperative to make sure both sound and vision are in sync. It’s even more important when the show features numerous musical acts, because fans will be quick to take to social media if there are any issues with their favourite performers. In order to ensure fans around the world got the best possible experience while watching November’s MTV EMA Awards in Seville, Spain, Viacom employed Hitomi Broadcast’s Glass and MatchBox products to ensure perfect synchronisation between the microphones and cameras at the venue, and in the international feeds. “The biggest crime in broadcast is to have your audio out of sync with your video,” explains Matt Okotie, lead engineer, Viacom International Media Networks. “We time that down to milliseconds, or point zero of a millisecond. Hitomi gave us a box that could read that signal, and basically send the signal with picture from London to anywhere in the world.” The MatchBox software gives the user a reading of how far the audio is out of sync with the video, and enables them to line it back up on all their equipment. “With global distribution on an event such as the MTV EMAs, we were sending eight different feeds, HD, UHD, mains and backups, back to London, and that’s globally distributed into the US, Latin America, Asia Pac, all over Russia, all over Europe,” explains Okotie. “You need to make sure everything’s in sync at every step of the chain, so we use Hitomi boxes globally to sync up everybody together.” Having used MatchBox previously, the EMAs was the first time Viacom had used Hitomi’s newest product, Glass. “We initially went to demonstrate our MatchBox product to Matt Okotie at the Viacom offices in Camden, North London. As a result, he hired a unit for the 2018 MTV EMA event,” explains Russell Johnson, Hitomi’s managing director. “In September 2019, we showed him Glass on our stand at IBC and he said he would love to field trial it for us. We had to say yes to such a great opportunity!” “The one bit that was missing previously was linking the camera to the OB compound sync,” adds Okotie. “With Glass you hold up an iPad and then get sync from multiple cameras back to the OB compound. We had 14 cameras at the main venue in Seville and then multiple cameras around the other venues and on the red carpet. We needed to make sure all the cameras were synced so that as you cut to each one, they’re all in sync as well.” Of course, when you’re sending a feed around the world, there will be latency issues depending on where it’s going. It’s likely to arrive in Manchester faster than it will in Moscow or Manhattan. How does the Viacom production team work around that? “We measure it at each point,” explains Okotie. “We measure the latency from Seville to London, London to New York. From London we transmit to everywhere, so we do every section individually.” To help with the synchronisation, Viacom used multiple Hitomi MatchBoxes. “We had one in Seville, one in London, India and New York,” says Okotie. “For UHD distribution we used the Hitomi technology and then we sort of down-converted it from UHD and distributed it as HD to countries that are using older kit. We also use the Hitomi boxes internally in our London studios quite a lot to sync up all our cameras there. The boxes work on any broadcast that needs sync.” According to Okotie, the Hitomi products are “the most accurate on the market” and Viacom is already looking at using the kit on more events. “We’re also looking to invest across our main data centres in New York and London over the next couple of quarters.”
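Hitomi’s MatchBox measures sync using its own dedicated test signals, so the sketch below is not its algorithm; it is only a generic illustration (all names and values are assumptions) of the underlying idea of reading off an audio offset by cross-correlating a reference signal against a delayed copy of it:

```python
import numpy as np

def estimate_offset_ms(reference: np.ndarray, delayed: np.ndarray, rate_hz: int) -> float:
    """Estimate how far 'delayed' lags 'reference', in milliseconds,
    by locating the peak of their cross-correlation."""
    corr = np.correlate(delayed, reference, mode="full")
    lag_samples = int(corr.argmax()) - (len(reference) - 1)
    return 1000.0 * lag_samples / rate_hz

# Build a decaying 440 Hz tone burst, then delay a copy of it by 40 ms.
rate = 8_000
t = np.arange(rate) / rate
ref = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)
shift = int(0.040 * rate)  # 40 ms expressed in samples
delayed = np.concatenate([np.zeros(shift), ref])[: len(ref)]

print(round(estimate_offset_ms(ref, delayed, rate), 1))  # ≈ 40.0
```

In a real chain a figure like this would drive a compensating delay on the lagging path; as described above, MatchBox reports the offset directly so operators can line everything back up on their own equipment, hop by hop.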



HOW BORUSSIA DORTMUND SCORES WITH VIRTUAL ADVERTISING

Supponor and Lagardère Sports drive new revenue and fan engagement for Borussia Dortmund in the Deutsche Fußball Liga

Virtual advertising technology has gathered increased momentum and technological sophistication over the past decade. Forward-thinking federations, clubs and marketing agencies collaborate to replace physical content on in-stadium LED perimeter boards with targeted messages for international broadcast TV signals. One such pioneer is Borussia Dortmund football club (known as BVB). Through a long-standing partnership with Lagardère Sports, BVB’s international advertising inventory has grown significantly, along with increased local market relevance and broadened fan engagement within its Signal Iduna Park stadium. As the premier competition of the German football league organisation, the DFL, the Bundesliga has a keen international fan base. Increased international broadcaster interest has translated into 70 per cent of its target audience now lying outside of Germany. With innovation at its core, the DFL has monitored developments in virtual advertising technology for many years. One early influence was LaLiga in Spain, which has been monetising its global audience since 2013 using Supponor’s Digital Board Replacement (DBR) virtual advertising technology. As virtual advertising exists only on TV, it must be integrated into extremely valuable live broadcast content. Therefore, before virtual ads can be sold, the priority has to be the proven quality of the virtual technology. The DFL worked closely with its production team at Sportcast, which manages the broadcast signals for every Bundesliga rights holder. Together, they assess any potential technology for approved use within the league. The DFL and Sportcast, with the guidance of BVB’s commercial partner, Lagardère Sports, initiated tests on both static and animated LED boards in 2016, with Supponor taken forward to provide its award-winning DBR solution at live Bundesliga events later that year. These ‘virtual showcase’ events, organised by Lagardère Sports and attended by DFL officials and Bundesliga clubs’ personnel, demonstrated reliable and authentic virtual ads on LED boards supplied by ADI, one of Supponor’s solution partners. During the test phase, BVB was able to see the potential of this new technology - and was keen to learn more. Together with Lagardère Sports, they modelled how the DBR solution could benefit BVB. “Borussia Dortmund was founded in 1909 and since then has become deeply rooted in the local area. When we began looking at new ways to grow, it was crucial to retain and strengthen the stadium experience and atmosphere. For example, the Signal Iduna Park is known all over the world for its ‘yellow wall’ and we never want to touch this,” says Benedikt Scholz, head of international and new business, Borussia Dortmund. “We could see that Supponor’s virtual advertising technology would provide us with an opportunity to target our communication towards certain markets for our clients and partners, without disturbing the stadium atmosphere and experience.” WINNING TRUST AND OPENING NEW OPPORTUNITIES The extensive test phase provided the confidence in the technology that the DFL, Sportcast and Bundesliga clubs were looking for. In March 2018, Supponor became the first technology company to pass the DFL’s rigorous quality check, allowing its DBR solution to be used for live international broadcasts of Bundesliga and Bundesliga 2 matches. With this permission, BVB became the first club in the Bundesliga to adopt the solution. Since August 2018 it has been used for all BVB home matches as well as selected away matches, thanks to the 240m of mobile LED boards that can be transported and installed at other club grounds. Following the implementation, Lagardère’s focus on finding and building relevant brand partnerships for BVB quickly enjoyed a significant boost. “Brand and marketing communication will become increasingly personalised as the costs for advanced technology drop,” suggests Mario Lucan, senior director product management, Lagardère Sports. “Consumer relevance should always be at the core of state-of-the-art brand and marketing communication. Virtual advertising allows brands to display products and services in highly localised and targeted ways.” IMPROVING FAN ENGAGEMENT BVB is also mindful of its fans overseas and uses the Supponor technology to communicate in a targeted way to fan clubs around the world, with specific messages in different languages. And while all this is happening for the global TV audience, those local fans who attend a match can be rewarded with more dynamic and varied content on the displays in the stadium, such as live social media feeds, statistics from the game or the league, or local information. BUILDING FOR THE FUTURE As an early adopter of this new technology in the Bundesliga, BVB has already seen how Supponor’s technology is enabling the club to provide better services to its partners and clients, and better communication to its global and local fans. “Previously, we were unable to provide our regional partners with promotion and communication within the stadium right to the pitch. We can now commercialise our offering in a far more relevant and targeted way, and build long-lasting partnerships in key international markets,” adds Scholz.
“While it’s a product that needs some explanation in certain markets, we have received very positive feedback and it has achieved specific outcomes.” Having completed a full season with the system, BVB is keen to further exploit the potential of the system for the 2019/20 season and beyond. “In the future, we can individualise and target more and more - right now we’re at four international feeds, and this is something we would like to expand,” Scholz says. By embracing this new DFL-approved technology, BVB is now communicating effectively with its fan base around the world, retaining the unique atmosphere of its stadium, and increasing revenue streams at the same time.



LONDON CALLING

Philip Stevens looks at the London operations of two international broadcasters

PICTURED ABOVE: Neil Burt, head of technical operations at CNBC International

“We follow the sun!” That’s how Neil Burt, head of technical operations at CNBC International, based in London, describes the global work of the broadcaster. With the CNBC operation based in three centres - Singapore, London and the US - it is easy to understand what Burt means. As the business world springs to life each weekday, the broadcast operation focuses on what is happening at that moment in each centre. He continues: “Our UK transmission starts at 0600 when we take over from Asia. We then stay on air until just before the markets open in America.” “The UK operation began in 1998 when it merged with European Business News,” says Burt. “It was a very niche market in those days, but with all the happenings in the world - especially the events surrounding Brexit - we are not as niche as we used to be!” To handle that growing need, the newsroom operation is under the control of Avid’s iNews system. Here the journalists and writers use a MOS-based workflow with plugins for both Chyron lower thirds and Stratus for video.


PRODUCTION AND POST

The main London studio, measuring 195 square metres, accommodates 10 Grass Valley LDX C80 box cameras robotically controlled by Telemetrics equipment. Six of the cameras are mounted on moveable pedestals, two are attached to wall-based poles allowing controllable up and down movements, while the remaining pair are located on the ceiling to provide wide shots of the studio environment. The pedestal cameras are equipped with Prompter People teleprompters which are operated by the presenters. Reversible confidence monitors are also provided that can be switched to a variety of sources from the production gallery. A Barco video wall measuring a little under 14 metres in length allows a variety of graphics, including a multicoloured FTSE100 display, to be shown in 4K resolution. Processing is carried out with an HP Z800 playout unit running a Brainstorm graphics package. All lighting is LED and provided by Vitec. “As well as our own programming, we also provide studio facilities for Sky’s Ian King Show,” says Burt. “NBC – and therefore CNBC – is owned by Comcast and when that company acquired Sky in 2018 it made sense to consolidate some of the London operations.” In addition to the main studio, there is a smaller one that can be used by both Sky and CNBC programming. For Sky productions, the Sony cameras with their Shotoku robotics are remotely operated from the main Sky facility at Osterley. A GLANCE AT THE GALLERY Central to the CNBC production gallery is a Ross Acuity vision mixer. “Our directors normally carry out their own vision mixing, although for some complex shows we may split the roles. In addition, the director will rack and operate the robotic cameras.” Associated with the Acuity is Ross Xpression as the technical graphics playout tool. “We use it with limited facilities,” explains Burt. “We utilise one channel for fill and key to provide a moving background element. There are four other channels that can be used for playout under the control of the switcher.” Four channels of Chyron are used as the main graphics playout for lower thirds and stills. Communications throughout the international operations are the responsibility of Telex equipment. When it comes to external shoots, CNBC has one ENG crew based in London, but also makes extended use of LiveU units. “In addition, we have an OB truck based in Amsterdam with both LiveU and multicamera capabilities. However, the latter technology is normally only used for events such as the Davos Economic Summit where multi feeds are required for different parts of the CNBC network.” “We operate five dedicated edit suites,” states Burt. “All are using Grass Valley Edius with a full suite of Adobe products. Although journalists will carry out a simple cut with Stratus using a storyboard editor, material is generally passed to craft editors using Adobe for the final edit. Adobe is also used for graphics, utilising Studio Max and 3D Studio as the main tools. Chyron is used for lower thirds.” Although all the content for the European operation is centred in London, playout for the channel is triggered from Singapore. “Actually, there are five channels originating in London, serving Europe, the UK, the Middle East, Latin America and one that goes to the United States. That split means, of course, that commercials can be inserted to suit each market.” Burt goes on: “We still operate in a traditional video-based routing regime within the building. We have not made the leap into IP because you then have to break out into some kind of interface. Changing means different cameras and infrastructure and it doesn’t seem to offer much gain for our operation. So, we use video routing with audio routing also built in. However, that situation could change if a move to another building became necessary. There are no plans for that at the moment.” The media asset management for the whole of the CNBC International operation is Grass Valley Stratus. “There are 16 channels of ingest and 12 channels for playout, seven render engines and five processors. Our back-up systems can handle around 100 terabytes of content. Our playout encoders are from MediaKind.” CNBC’s internal network comprises three links to the US and a further three to Asia. To complete the triangulation, Asia has three links to America. “So, around the globe we have an IP network that is serviced by NBC/Comcast with Ericsson equipment for encoding and decoding. In addition, there is a BT MPLS (Multi-Protocol Label Switching) network for sending the transmission signal to Sky at Osterley. From there it goes to Sky in Milan and then to Sky Germany. In 2019, this was upgraded to HD for all the platforms.” Burt concludes: “As I mentioned earlier, CNBC has become more than just a niche broadcaster. Times have changed and business news is more important than ever in the changing world. We are here as part of an international network that is meeting the needs of 2020 – and beyond.”

PICTURED ABOVE: A presenter’s-eye view of the CNBC studio (top) and Jonathan Killian, CNN’s executive director, creative and brand development

HIGH TECH MOVE Over the next couple of months, CNN International will be leaving its current facility in London’s West End for a new building in the City. This move will result in a purpose-built newsroom and studio that will be home to CNN International’s television and digital output. “We are very excited about the move, which we started planning about three years ago,” states Jonathan Killian, CNN’s executive director, creative and brand development. “The new location provides us with a much bigger space and a lot more natural light. And from a programming standpoint it is going to be the biggest studio that CNN International has ever had or created.” Killian says that the interior has some unique features from a design standpoint.
“In fact, one might call those features challenges. For example, there are big concrete columns right down the middle of the room and so we decided to make that a virtue and embrace it. So, those columns will feature monitors which can be used to display information. For an election broadcast, for instance, we can show bar charts revealing how well parties are doing. Or there will be relevant images for sports programmes. The columns show the impressive ceiling height and will certainly make us distinctive from our competitors.” He goes on to say that this new facility is coming on board at the perfect time as CNN is expanding and mixing the TV and digital output. It is also introducing a brand new technology set-up to meet the demand of today’s broadcast and online worlds. “Following the successful introduction in 2019 of our new Hudson Yards broadcast centre in New York, we have opted for the same IP technology for the London facility. That makes CNN London one of only a handful of IP-only broadcast facilities around the world.” He continues: “The technical challenges we face with having programming based all around the world is to ensure smooth transitions. We have perfected that in terms of existing technologies, but can do even better with IP. We are embracing the most up-to-date technology available to us to create our biggest production hub that will result in a great product for the viewers at home.” Guiding the broadcaster through the challenges of IP implementation, and supporting the team’s commitment to the SMPTE 2110 roadmap, will be systems integrator Megahertz. Evertz is the choice for the core broadcast IP routing. Megahertz will work under the day-to-day supervision of CNN’s head of engineering, JJ Eynon. Although the technology will allow gallery production to be handled anywhere in the world, this is not the intention for everyday workflow. The new facility will have its own gallery equipped with a Sony XVS 7000 vision mixer, Calrec Artemis audio console and RTS communications system. The directors will carry out their own vision mixing and operate the robotic cameras – although this can be handled globally, if necessary. Playout is under the control of Evertz DreamCatcher. “As far as equipment is concerned, the vast majority will be brand new with just a few things coming from the existing facility,” reveals Killian. The studio, which measures 214 square metres (2,300 square feet), will house a total of six Sony cameras. Four will be robotically controlled by Vinten Radamec, one will be jib-mounted and the other is a Steadicam. Prompting comes from Autoscript Win. In addition, there will be one further camera located on a platform on a mezzanine floor.
The studio will feature a huge video wall that has been designed by London company Jago Designs and created by Layard. The presenters’ desk will be able to rotate through 180 degrees to provide a different background for each show. Graphics will come from a Vizrt system and will allow the easy import of content from other CNN broadcast facilities around the world. Although journalists may carry out some editing (there are 64 Mac laptops to facilitate this work), there are 14 edit suites using Adobe Premiere for the team of craft editors. “On the editorial side, we are using Avid’s iNews in the newsroom,” says Killian. “There are separate teams for the online and TV output, but they will all work on the same floor allowing full integration of the information that is available.” During 2019 there were some reports about the


downsizing of the CNN London operation. Although, for operational reasons, a few activities have been moved to the United States, Killian is keen to emphasise that the new London facility is producing significantly increased programming. "We are all-in on London. We've introduced three new shows, and employ more anchors in the UK than before. There is a new evening bulletin, a new morning show for Europe and another for Asia, and much of our sports programming originates in London.

"As far as digital is concerned, this is going to be a key digital hub when it comes to online output, and London remains the EMEA hub for our newsgathering, feature programming and commercial operations. We're investing heavily in our London operation, and in moving to this brand new, larger facility we are making the biggest commitment that CNN has ever made to the city."

Killian concludes: "In terms of the space, we are very excited about what we're going to achieve. We're planning to utilise the space as much as possible in the production of the output, and that will certainly set us apart from other news channels."

PICTURED ABOVE: An artist’s impression of the exterior of the new building that will house the CNN London broadcast hub

TVBEUROPE JANUARY/FEBRUARY 2020 | 51


PICTURED ABOVE: The Sony VENICE camera system is the result of over 20 years of digital cinematography experience, and conversations with key directors of photography

BETTER – BY DESIGN Philip Stevens discovers how new products are conceived and developed

With so many trade shows and exhibitions now available to the broadcast industry, there is a constant need to come up with new and innovative designs. So how do manufacturers decide on what is needed, and when, to satisfy an ever-developing market?

"There are two key factors that determine whether or not there is a need for a new product or solution,"


states Jin Yamashita, general manager, Product Planning Department, Media Solution Business Division, Sony Imaging Products and Solutions Inc. “The first is based on technological advancements. We create products leveraging our unique and cutting-edge technology, for example, by incorporating the latest image sensors or design platforms. The second is the voice of our customers. At Sony, we work incredibly closely with a wide variety of broadcast and cinema customers and organisations.



That gives us unique knowledge of the market, alongside seeking input from industry-recognised bodies and associations. Our customers are integral to almost every phase of our research and development process and are consulted in a number of ways throughout. We listen intently to what they have to say, and their feedback plays a huge part in the decisions we make around launching new products, solutions and services."

During planning, the development team keep in mind the total system solution, not just the single product. For instance, when a new system camera is developed, it requires optional accessories and supporting equipment to maximise its performance. "For example, we planned the 4K/HD system camera HDC-5500 together with the Camera Control Unit HDCU-5500. The pair offers Ultra High Bitrate (UHB) transmission, which transmits two channels of 4K signals at a time without a baseband processor unit. This can save space and make operation of the camera simpler."

THE PROCESS

With all of that in mind, how does Sony start designing a new camera? Is it a case of building on an already successful design, or starting with a blank sheet of paper? "We usually use our latest cameras as a point of reference and then investigate what changes we could possibly make, from both a hardware and software perspective. The conversations we have with our customers obviously play a major part in deciding what features we want to include in any new camera. Occasionally, there are cameras that we create completely from scratch.
Our flagship next-generation motion picture camera system VENICE, for example, is the result of over 20 years of digital cinematography experience, conversations with key directors of photography from around the world, and our brilliant engineering team that made these ideas a reality.” So, does Yamashita see the same principles applying to other pieces of equipment – switchers, for example? “Switchers are a category where operational requirements are key. When designing a new vision switcher, we keep the basic operational philosophy of our tried and tested products in place, but add new features and improvements based on customer feedback. We also consider supporting new signal formats as they emerge.”


He says that since the launch of the popular MVS Switcher Series, over 50 software version releases have been developed, bringing more processing power and adding new operational features to help users realise their creative vision. As an example, some years ago it was recognised that there was a need to evolve the control panels to make them more adaptable. As a result, the ICP-X7000 series was launched to enable operators to tailor the control surface to their needs. More recently, Sony has brought all the control panel functions into 'virtual' versions to enable greater creativity and collaboration.

Of course, some products, such as archive solutions and MAMs, are continually upgraded to meet new demands. How does Sony handle this area of the industry? "We're in the midst of a data revolution, and the media industry is just one small part of that. Sony has been at the forefront of archive technologies for many years and has been continually investing in products and solutions that meet the increasing demand for long-term, secure and cost-effective data storage. With the launch of Optical Disc Archive in 2013, we were able to provide a solution that is not only robust but, importantly, sustainable. Now, we are continuing to evolve that proposition with our recently developed Generation 3, supporting 5.5TB per cartridge and a 375MB/sec transfer speed per drive. An enterprise-class library solution, the 'PetaSite EX', which utilises

PICTURED ABOVE: Jin Yamashita, general manager, Product Planning Department, Media Solution Business Division, Sony Imaging Products & Solutions Inc.

PICTURED BELOW: Camera Control Unit HDCU-5500 combined with the HDC-5500 offers Ultra High Bitrate (UHB) transmission which transmits two channels of 4K at a time, without a baseband processor unit




Generation 3 technology and offers up to 50PB capacity, is also under development. Given Optical Disc Archive is one of the best technologies for WORM (Write Once Read Many), it will continue to be an area of continual improvement and investment."

MAM depends more heavily on the specific application customers require, so the company adopts a blended approach to integration where, as for archive solutions, it offers APIs and integration with the relevant data systems. "Thanks to our heritage and expertise in the media industry and deep understanding of customer workflows, we can also offer turnkey MAM solutions for news or sports, for example our industry-awarded Media Backbone Hive."

TESTING TESTING

Developing new products in an R&D environment is all well and good, but what testing is carried out once the

prototype is completed? "We take user feedback very seriously, so we initiate customer testing during the prototype and development stage. Focusing on on-site verification, we ask customers to test our prototypes and use that feedback to help fine-tune the design when it comes to finalising the mass-market version of a product.

"Our Quality Assurance department is one of the strictest and most stringent in the world. From user acceptance testing scenarios, to ageing scenarios using heat, cold or humidity chambers, and following strict ISO certification procedures, our products are heavily tested so we can guarantee the most robust and future-proof solutions. Once products are out in the market, our local engineers and expansive support team



PICTURED ABOVE: The HDC-5500 is a 4K/HD system camera


liaise with engineering teams at our factories, where the feedback loop continues."

Yamashita goes on to say that the most innovative introduction to TV broadcast equipment in the last 10 years has been 4K and HDR technologies. "4K allows us to capture immersive images in incredibly high resolution, while HDR has eliminated crushed blacks and overexposure when moving from dark to bright areas, and vice-versa. Combined, they create content of a superior quality that was unimaginable just 10 years ago.

"Additionally, IP connectivity has become a major game-changer for our industry, particularly when it comes to live production. IP now enables remote production setups, resource sharing and faster, more collaborative turnaround times. Sony has been at the forefront of this revolution and has, to date, worked with over 60 customers around the world to create IP-enabled production setups."

Sony has recently announced its new Live Element Orchestrator, a system orchestration and management solution with integrated control and device monitoring capabilities to help meet the demand for full IP solutions. When it comes to designing these solutions from a physical perspective, IP poses another huge advantage:

rather than needing multiple SDI, audio, timecode and reference connectors, IP-live enabled solutions just need a handful of connectors. This means the design process is relatively straightforward in that regard.

THE FUTURE

As far as the future is concerned, AI technology is high on the agenda. "At Inter BEE 2019, held in Japan, we introduced our latest AI technology, including several POCs (Proofs of Concept) that we conducted with customers. We believe AI technology can greatly help efficient production and increase the value of content. We also announced a prototype of a 24-inch UHD HDR display. These new product innovations will be available to customers in Europe very soon."

He concludes: "At Sony, we have a rich heritage spanning over 50 years. For our professional business, we have consciously decided that our corporate slogan should be 'Live Your Vision', which perfectly encapsulates our mission to help our customers be as successful as they can possibly be. We create solutions that actively address the challenges customers face today and provide them with the technologies they need to make tomorrow a reality."

Introducing the ControlCenter-IP: the outstanding flexibility of KVM-over-IP™ with the renowned functionality and reliability of the G&D matrix. Visit us at ISE 2020, 11-14 February, stand 10-R130

It's a combination that has no competition. We've taken the features of our classic ControlCenter series and integrated them into the KVM-over-IP™ matrix to achieve new levels of flexibility. With the ControlCenter-IP, you can operate even the largest installations since it uses standard IP structures instead of dedicated cabling. It supports all common video signals up to 4K@60Hz, using our own lossless video compression for maximum compatibility. And of course the ControlCenter-IP offers the peerless levels of usability, safety and reliability you would expect from G&D. The most comprehensive and complete KVM product range in the industry just stretched even further.

www.gdsys.de





NEWS IN THE CLOUD



Philip Stevens explores a comprehensive move for a Danish broadcaster

TV 2/Fyn, serving the half a million residents of the island of Fyn, is one of the eight regional broadcasters in Denmark. It is a publicly-funded broadcaster which runs its own TV channel, webpage and social media (SoMe) content, and broadcasts news shows and bulletins on the main TV 2 Denmark channel.

"We are not owned by TV 2 Denmark, the national state-owned commercial broadcaster, so we make our own decisions according to workflow and equipment," explains Michael Jensen, head of technology, production and innovation. "The news bulletins on TV 2 Denmark are aired five times during the day, with a news show from 19.30 to 20.00."

In the summer of 2019, a contract was awarded to Mediability, the Nordics' largest professional equipment reseller and systems integrator, to build a new journalist-centric, fully Cloud-based newsroom.


NEW APPROACH

"Our old system had been a reliable workhorse for many years, but there had been no significant development for some time," says Jensen. "We were also aware that our different editorial departments were working in silos, meaning what they did on the digital desk did not come to the TV desk, and so on. We wanted to work story-centric, so whenever news breaks, or even when covering planned events, we would be able to work with the content in all our departments at the same time."

Jensen reports that the benefits of the new system include the ability to track all content, and to work and publish faster. In addition, it is possible to schedule publishing and use the newest technology, such as cognitive services. "The idea from the beginning was to build our broadcast centre like a data centre, meaning that we could use standard servers and Cloud services. This lowers our cost, and frees our technical support from a lot of maintenance, which will give them time to do innovation instead."

He says that this is the first real Cloud-based solution the station has used for this type of work, although it has used virtual servers for a while, and employed software like Trint for speech-to-text.

EASY ACCESS

Håvard Saunes Myklebust, CEO of Fonn Group (the owner of Mediability), picks up the story. "Our brief from the client was to create a newsroom in the Cloud that was story-centric. By 'Cloud' I mean a solution-as-a-service, with little or no on-premise installation. This frees them completely from expensive hardware purchases and painful maintenance processes, managing servers, client installs and so on. And with the Cloud any user can access tools via their web browser without having to install anything. So, it was about accessibility, ease of use, and working with smarter, modern, web-based tools."

He states that a story-centric approach does not mean overruling the ability to create great stories with the best possible toolbox for the end-users. "Many




PICTURED ABOVE: Michael Jensen, head of technology, production and innovation, TV 2/Fyn (top) Håvard Saunes Myklebust, CEO of Fonn Group


newsroom systems are still built for creating stories for linear TV only, with rundowns and stories created for a specific event or news show. Collaboration, sharing and publishing on multiple platforms is not easy."

In the case of TV 2/Fyn, the broadcaster wanted to move away from legacy 'linear-only' workflows. Stories needed to be created for multi-platform publishing as a default, and users would be able to easily collaborate on a story across web, social media, the news desks and so on.

MEETING THE BRIEF

The biggest challenge facing Mediability was the fact that its client was the first to go live with the brand new tool DiNA, from 7Mountains. This was launched during IBC 2019 and was in the process of being finalised on the functionality side for the first roll-out.

Mediability cites a number of reasons for opting for DiNA. These include that it is built from the ground up on modern, web-based technology, requires no install for the end users, can be accessed from anywhere, is offered with a subscription model, and reduces the number of applications a journalist needs to work with on a daily basis. In addition, it has an internal marketplace for pitching news stories to directors and producers, and it uses AI and machine learning to suggest related videos and images, to create text versions for social media, and to categorise stories.

"DiNA replaces their traditional newsroom systems and introduces a new way of working with stories, collaborating on stories, and also collaborating across departments," explains Myklebust. "It is, of course, a substantial task to replace a newsroom system and we were extremely excited about rolling out DiNA, Mimir and Vimond IO with such a client as TV 2/Fyn, which is extremely forward-leaning!"

The broadcaster chose Mimir because it is a totally Cloudified production asset management system, with support for on-premise storage. Again, the focus was to move away from having to invest in expensive hardware and software licenses, and to choose a solution that is subscription-based and integrates with the DiNA newsroom system and with their video editing system, their system for editing of live streams, online clipping system and more.

"Mimir was also selected due to its extensive choice of cognitive services for analysing and logging videos, with its integrations to all the major vendors for AI analysis and metadata logging," states Myklebust. "Vimond IO is considered the best video editing system in the Cloud and it integrates with Mimir for video storage and for analysing and logging metadata using AI."

EXCITING FUTURE

Jensen adds some thoughts about AI. "The most important thing in our workflow is metadata; if we can't find our material then we can't publish it. So, with AI we are able to transcribe on the fly and have object and face recognition. This will give us a lot of 'free' metadata, and we will be able to get on air faster. And I think we are just scratching the surface; I'm very excited about what the future will bring."

Since the systems do not require any heavy installation at the customer site, the roll-out started with the client giving access to users from their web browsers. Although there is a short period of overlap, the plan is to switch off the legacy systems with a hard 'cut-off' on an agreed-upon date.

"Broadcasters typically replace technology on a system-by-system basis," says Myklebust. "But that approach is fundamentally flawed, as it means underperforming legacy systems remain in place, which often restrict the huge potential for new and smarter web-based media workflows." Clearly, such a major change means a variation in workflow.

"The new system will affect everyone in our broadcasting department, from acquisition to archive," states Jensen. "We will be able to work anywhere, even from home. The only thing you need is a laptop and internet, and from there you can access all of our material, and publish to whatever platform we are using. And, of course, the financial benefits of having no on-premise systems and no long-term Capex investments will give us the ability to scale as and when the need arises."
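The "free" metadata Jensen describes becomes valuable the moment it is searchable: transcription and recognition services attach timecoded labels to a clip, and finding material is then a simple metadata scan. A minimal sketch of that idea, using a purely illustrative data model (not the actual Mimir or DiNA schema):

```python
# Sketch: AI services (transcription, face/object recognition) attach
# timecoded labels to each clip; search scans those labels.
clips = [
    {"id": "clip-001",
     "labels": [(12.0, "transcript", "mayor opens the new bridge"),
                (14.5, "face", "mayor")]},
    {"id": "clip-002",
     "labels": [(3.2, "object", "ferry"),
                (9.0, "transcript", "storm closes the harbour")]},
]

def search(clips, term):
    """Return (clip id, timecode) pairs whose labels mention the term."""
    hits = []
    for clip in clips:
        for timecode, kind, text in clip["labels"]:
            if term.lower() in text.lower():
                hits.append((clip["id"], timecode))
    return hits

print(search(clips, "harbour"))   # [('clip-002', 9.0)]
```

Because the labels carry timecodes, a journalist lands not just on the right clip but at the right moment within it, which is what makes "going faster on air" plausible.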



THE GROWING ROLE OF AI IN AUDIO WORKFLOWS Francois Quereuil, director of product management, audio workstations and control surfaces at Avid, on harnessing the power of AI

Artificial Intelligence (AI) is having a transformative effect on a huge range of industries, and the world of media and entertainment is no exception. Creators and machines are continuing to become more intertwined, with creative workflows taking on new shapes as AI-assistance gathers momentum.

At a broad level, people are recognising that technology and creativity go hand in hand. Creative professionals are expressing an interest in how AI and machine learning can aid the creative process. And although the discussion about machines replacing humans remains prevalent, the reality is much less dystopian. Rather than worrying about losing their jobs to technology, creative professionals are recognising the potential for AI-powered tools to make processes more intuitive and reduce the time spent on tedious, uncreative tasks.

When talking about audio specifically, it's no secret that AI is quickly becoming a vital cog in the machine. So, what role is AI currently playing within audio workflows, and how is this growing trend likely to develop in the future?

TRANSFORMING WORKFLOWS

When it comes to audio workflows, there are three main areas where AI is starting to have an impact: assisted mastering, assisted mixing and assisted composition. All three are at slightly different points on the adoption scale. For example, AI is already well established in the mastering process, despite this arguably being the most specialised area of music production.

The goal of mastering is to make the listening experience consistent across all formats. The process varies across formats (Spotify, CDs, movies etc.) as each has different loudness constraints, making mastering extremely technical and potentially costly. There are very few skilled mastering engineers around, but AI is proving to be a viable and democratising alternative for many musicians.
By analysing data and learning from previous tracks, AI-powered tools enable less experienced engineers to quickly and easily achieve professional results, albeit without the finesse of a human expert.

Next, we come to assisted mixing which, although currently slightly behind mastering in terms of adoption, is developing fast. With so much content being created for OTT services such as Netflix and Amazon Prime, the volume of audio work happening in post is increasing dramatically. Facilities are therefore looking for ways to work faster and more cost-efficiently. AI tools can help engineers and audio teams make basic decisions and complete the more routine tasks, thereby saving valuable pre-mixing time and enabling humans to focus on the more complex and creative elements. For example, some mixing plugins contain built-in intelligence that analyses source material (such as guitars or vocals) and puts it in the context of the rest of the mix to suggest mixing decisions. By taking on much of


the initial heavy lifting, tools such as this can be hugely beneficial for less experienced users.

Finally, there's audio composition, another area of music production that is quickly realising the value of AI. More and more tools are using deep learning algorithms to identify patterns in huge amounts of source material and then use the insights generated to compose basic tunes and melodies. They are by no means perfect, but intuitive, user-friendly AI systems are having a transformative effect on audio workflows.

The prevalence of AI in audio workflows is only going to gather momentum in the months and years to come. AI is well suited to up-and-coming artists who don't rely on music as their primary income and have limited time and resources to dedicate to songwriting. But the real opportunity is in post production, due to the time-to-market pressures involved. Sound engineers can use AI to speed up and simplify baseline tasks, enabling them to focus on the high-value aspects that require more creativity.

In the long term, AI could be used to manage complex installations and systems. With audio over IP, teams manage routing from central software so they can pool resources to support projects. AI could be used to manage these complex networks of computers and software.

Ultimately, we're at the tip of the iceberg. For beginner and intermediate-level creative professionals, AI tools can act as an assistant that learns their mixing habits over time and helps audio sound the best it possibly can. For more experienced professionals, it can help increase efficiency by removing many of the tedious, time-consuming tasks. AI will never replace humans entirely, but it's clear that the technology is set to play a key role in the years to come as it continues to get more advanced. Audio professionals have to be prepared to embrace the AI revolution.
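The per-format loudness constraints mentioned earlier are the most mechanical part of what assisted mastering automates: given a measured programme loudness, the tool computes a gain offset per delivery target. A sketch of that bookkeeping, using illustrative round-number targets (streaming services commonly normalise near -14 LUFS; EBU R 128 broadcast delivery specifies -23 LUFS):

```python
# Sketch: per-platform loudness targets and the gain needed to hit them.
# Target values are illustrative, not a definitive delivery spec.
TARGETS_LUFS = {"streaming": -14.0, "broadcast_r128": -23.0}

def gain_offsets(measured_lufs):
    """dB of gain to apply so the programme hits each platform's target."""
    return {platform: round(target - measured_lufs, 1)
            for platform, target in TARGETS_LUFS.items()}

offsets = gain_offsets(measured_lufs=-18.0)
print(offsets["streaming"])        # 4.0  (turn up by 4 dB)
print(offsets["broadcast_r128"])   # -5.0 (turn down by 5 dB)
```

The hard part of mastering is everything this sketch omits (limiting, tonal balance, true-peak control), which is where the "finesse of a human expert" still matters.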




LOTS MORE FOR THAT LOT Philip Stevens looks at an ambitious installation

Founded in 2014 by David Schneider, David Levin and David Beresford, London-based That Lot specialises in 'social-first' platform-specific content and production. Its client list includes Channel 4, Intercontinental Hotels Group, McCain, B&Q, Have I Got News for You and Jamie Oliver.

"Our aim is to produce highly shareable, innovative, strategy-informed content that cuts through on social and delivers industry-leading Return on Investment (ROI)," says Ben Forder, That Lot's head of video. "Our workforce includes creatives, designers, writers and videographers, as well as project managers and account teams led by managing director Laura Tannenbaum."

Early in 2019, the company approached system integrators ATG Danmon to design and provide a new, and flexible, technical facility for creating broadcast-quality content for a wide range of media applications. The project included a new multi-camera studio plus a fully equipped control room, lighting, vision and audio mixing, routing, recording, monitoring and archiving.

THE EQUIPMENT

The studio is equipped with Blackmagic Design URSA UHD cameras, which can be operated in a wide range of lighting levels and are ideal both for studio and outdoor operation. Production mixing and internal signal distribution are via an ATEM Studio HD switcher and Smart Videohub UHD-capable router, capturing to Softron recorders and SNS EVO networked storage.

"We recommended three Blackmagic Design URSA Mini Pro on the basis of very good experience with these cameras," explains Russell Peirson-Hagger, ATG Danmon managing director. "They deliver excellent video, are compact, easy to operate and sensibly priced. A shoulder kit was included with each camera to allow informal-style shooting as well as tripod-mounted, in studio or on location."

He continues: "You get a Super 35mm 4.6K sensor with 15 stops of dynamic range that's perfect for feature films and can also be instantly switched to traditional video mode for regular video work. Also included are built-in ND filters, a user-changeable lens mount, plus RAW and ProRes format recording to dual media CFast or SD cards."

The cameras are mounted on three Sachtler Ace tripods, each with an XL fluid head and DV 75 dolly.

Peirson-Hagger goes on: "The ATEM switcher incorporates all the features needed for high-end UHD/HD live production, including four SDI and four HDMI inputs, each with resync. We integrated it with a Blackmagic Smart Videohub 20 x 20 router which includes 6G-SDI connections, so you can simultaneously connect and route any combination of SD, HD and Ultra HD video all on the same router at the same time. You also get new visual routing with a unique spin knob control, push button panel and the built-in LCD screen. The setup also includes Ethernet for remote control and a fully customisable software development kit for Mac and Windows.

"We also installed a Softron Multicam Logger which permits users to log all of the different angles or inputs that are used in a live multi-camera production. It records which input the director has selected on the vision mixer and the time. In this way, Multicam Logger creates a multicamera clip ready for editing."

Networking is handled by TL-SG1024 24-port Gigabit switches. All 24 ports support auto MDI/MDIX, removing the need for concern about the cable type. Peirson-Hagger says it is simply a case of 'plug and play'.

The gallery is also equipped with an Allen & Heath QU-16 digital audio mixer and a 27-inch Dell P2715Q 4K IPS LED monitor. This rack-mountable digital mixer has 16 mic inputs, three stereo inputs, 10 mix outputs and USB multitrack recording. Studio monitoring comes from a pair of Rokit RP5 G3 units.

Two 49-inch Samsung SMDC49H monitors are provided for studio floor and green room use. Storage and recording are handled by an EVO Prodigy Desktop Base System that comprises 32TB RAW (4 x 8TB SATA 6Gb/s), and a ShareBrowser Workflow Appliance. "ShareBrowser provides That Lot with an added layer of easy-to-use asset management capability. Users can index, search, preview and verify assets on any online/offline storage device."

"We have also integrated Apple iMac Pro computers for post production," reports Peirson-Hagger. "Additional facilities include Cirro Lite lighting, a Custom Consoles control room desk and Sonifex talkbacks."

UNIQUE TOUCH

Forder says that the result of the installation means the company has been able to create the most flexible possible system for use across a wide range of creative services. "A full-colour back-lit LED cyclorama, which we believe is the first of its kind in Europe, provides sharp chroma-key superimposition on to any background. Lighting is from a ceiling-mounted rig which can be adjusted to get the exact style of illumination we need for live streaming or recording sketches, scripted videos, interviews, podcasts, vodcasts and social media photoshoots. That includes 360-degree surround video and audio capture if specified."

He concludes: "ATG Danmon has also integrated a control room suite which is being used for live production and editing. External material can be sourced from wherever required, or we can perform high-quality production on location. The system is intuitive to operate, which is especially important when working live, and highly versatile."

PICTURED ABOVE: Russell Peirson-Hagger; Ben Forder
PICTURED TOP: David Schneider
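The Multicam Logger workflow described here, recording which input the director selected and at what time, amounts to building a cut list that an editor can treat as source in/out events. A minimal sketch of that transformation, with a purely illustrative data layout (not Softron's actual file format):

```python
# Sketch: turn a vision-mixer cut log of (time, input) entries into
# source in/out events for an edit-ready multicam sequence.
def cuts_to_events(cut_log, end_time):
    """cut_log: [(seconds, input_name), ...] in chronological order."""
    events = []
    # pair each cut with the time of the next cut (or the end of programme)
    for (start, source), (next_cut, _) in zip(cut_log, cut_log[1:] + [(end_time, None)]):
        events.append({"source": source, "in": start, "out": next_cut})
    return events

log = [(0.0, "CAM 1"), (12.4, "CAM 3"), (30.0, "CAM 2")]
events = cuts_to_events(log, end_time=45.0)
for event in events:
    print(event)
# {'source': 'CAM 1', 'in': 0.0, 'out': 12.4} ...
```

Because every camera was recorded in full alongside this log, the editor can later slip any cut to a different angle without re-shooting anything.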



TECHNOLOGY

CLOSE TO THE EDGE: TELEVISION AT CES Tom Butts finds screens getting larger as 8K dominates the Consumer Electronics Show

As the world's largest gathering of consumer electronics technology, the annual Consumer Electronics Show has greatly expanded its focus over the years as more household products become digitised and then connected. The days when TV sets were the biggest stars are long gone, but even today, some of the new entrants at the 2020 show prompted some head-scratching. For example, why would Impossible Foods, one of the world's largest developers of meat alternatives, be showing at an electronics show? The company used the gala event to roll out its latest experiment in ultra-processed food: fake bacon. Nevertheless, the connection between its newest concoction and electronics is tangential at best.

Although one could certainly draw a straight line between TV watching and food, the main themes at this year's show were consistent with the trends of recent years, namely 5G, AI, IoT (now referred to as the "Intelligence of Things") and, for television, 8K. While sales numbers are still in the single digits for 8K sets worldwide, TV manufacturers were all aboard the high-resolution bandwagon at this year's CES. The fact that there is little to no 8K content available has not deterred manufacturers, who claim that their upscaling and display technologies will satisfy consumers' thirst for the best pictures available.

WHAT FILMMAKERS WANT

Concerns over how the variety of televised programming is displayed on new UHD sets were a priority for the UHD Alliance, a consortium of studios and set manufacturers promoting high-resolution television. Specifically, the Alliance used CES 2020 to formally roll out Filmmaker Mode, a setting that will be adopted by most major TV manufacturers to ensure that the correct viewing mode is being used. As it currently stands, many 4K and 8K sets come out of the box with default modes that don't necessarily represent how content is meant to be viewed.
Viewers have complained about films shot in 24fps being displayed in video mode (what has been referred to as "the BBC effect" or the "soap opera effect"). Needless to say, Hollywood has not been pleased.

"Most people today are watching classic films at home, where existing technology presents all media in exactly the same way, whether it's a football game or Lawrence of Arabia," said director Martin Scorsese. "With Filmmaker Mode, different works will be presented accurately, as they were created and designed by the filmmaker. Filmmaker Mode is a long overdue and welcome innovation."

Some sets currently on the market offer viewers the ability to activate a 'cinema mode' in the TV's settings; however, Filmmaker Mode will be

www.tvbeurope.com

activated automatically, through metadata embedded in the content. LG, Panasonic and Vizio were among the first set makers to commit to offering it in their 2020 televisions; Samsung, Philips/TP Vision and Kaleidescape will also offer Filmmaker Mode in their products this year.

Samsung dazzled attendees with its modular The Wall, offering four new sizes in addition to its largest option, a whopping 292 inches. The Wall uses Samsung’s MicroLED screen technology, which allows consumers to customise the size of their screens. Samsung has also upgraded the AI Quantum Processor in its QLED 8K TVs and redesigned the sets in an ultra-thin form factor, as well as introducing its Infinity Screen concept, which achieves a screen-to-body ratio of 99 per cent.

VERTICAL VIDEO

But perhaps the TV product that garnered the most buzz was the Sero, which, at the press of a button, rotates to vertical mode to display content created on smartphones. Gone are the black bars that characterise the display of vertical videos on horizontal sets.

For US broadcasters, CES 2020 represented the official launch of ATSC 3.0, now branded as NextGen TV. LG, Samsung and Sony all announced sets, available by the 2020 holiday shopping season, that support the new broadcast standard, which combines over-the-air broadcasts with IP.
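The metadata-triggered switch described above can be pictured with a small sketch. The field names here (“filmmaker_mode”, “native_frame_rate”) and the default settings are hypothetical illustrations, not the actual Filmmaker Mode metadata specification:

```python
# Hypothetical sketch of how a set might act on a Filmmaker Mode flag
# carried in content metadata. All field names are illustrative.

DEFAULT_PICTURE = {
    "motion_interpolation": True,   # the source of the "soap opera effect"
    "noise_reduction": True,
    "sharpening": True,
    "frame_rate": 60,
}

def apply_picture_mode(metadata: dict) -> dict:
    """Return picture settings for a piece of content."""
    settings = dict(DEFAULT_PICTURE)
    if metadata.get("filmmaker_mode"):
        # Present the content as mastered: no interpolation or extra
        # processing, and the native frame rate preserved.
        settings["motion_interpolation"] = False
        settings["noise_reduction"] = False
        settings["sharpening"] = False
        settings["frame_rate"] = metadata.get("native_frame_rate", 24)
    return settings

film = apply_picture_mode({"filmmaker_mode": True, "native_frame_rate": 24})
sport = apply_picture_mode({})  # no flag: the set keeps its defaults
```

The point of the automatic trigger is exactly this branch: the viewer never has to find the setting, because the content itself selects it.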

TVBEUROPE JANUARY/FEBRUARY 2020 | 61


TECHNOLOGY

IN THE DRIVER’S SEAT Dan Meier gets to grips with NativeWaves’ second-screen technology

If there’s a greater thrill than watching a live sporting event, it may lie in choosing your own camera angle; assuming the viewpoint of a Formula One driver, for instance, can take the viewing experience up a gear. “Let’s consider a sporting event, or any event where there are multiple camera angles,” says NativeWaves CEO Marcel Hasenrader. “At the moment those cameras are put through a production system, which then feeds a video and audio feed to broadcast. What our concept does is utilise the additional video and audio feeds from the same event, because they’re all captured at the same moment in time, and provide access to those via the Cloud, delivered to mobile devices.” NativeWaves’ Hybrid Multiview Broadcast solution works within the normal production framework but enables the streams that fall outside this distribution to be delivered simultaneously with the primary linear


PICTURED ABOVE: Marcel Hasenrader

broadcast, all using non-proprietary parts. “The prime ingredient for that is ultra-low-latency encoding and delivery,” says Hasenrader. “Where our special technology and intellectual property lies is the Cloud-based ability to relate all of those additional streams, be they data, video or audio, to a specific event and synchronise them for delivery. That’s what has not been possible in the past.” Ultra-low latency is achieved through a number of scenarios, as Hasenrader explains: “One is the encoding; this ultra-fast, efficient encoding of the outputs from the production, delivering them to the CDN - and it could be any CDN, so it’s CDN-agnostic. In the Cloud we have our special sauce, if you like, which treats them in the most efficient way possible, keeping latency to the lowest level while maximising the time we have to ensure quality. So if it’s a broadcast that’s going to the television in three seconds, we would use 2.7 seconds. If it’s less than that we would use less, or if it’s more - if the broadcast is going out in six seconds - we have more time to ensure even greater quality of streaming.”
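Hasenrader’s “three seconds, use 2.7” rule is, in effect, a latency budget: spend as much of the broadcast’s glass-to-glass delay as possible on encoding and delivery quality, while holding back a safety margin so the synchronised streams never arrive after the linear picture. A minimal sketch, where the 10 per cent margin is an assumption inferred from his 3.0s to 2.7s example:

```python
# Illustrative latency-budgeting rule for synchronised second-screen
# streams. The 10 per cent safety margin is an assumption, not a
# NativeWaves specification.

def processing_budget(broadcast_latency_s: float, margin_fraction: float = 0.1) -> float:
    """Seconds available for encoding/CDN delivery of the synced streams."""
    if broadcast_latency_s <= 0:
        raise ValueError("broadcast latency must be positive")
    return broadcast_latency_s * (1.0 - margin_fraction)

print(processing_budget(3.0))  # 2.7, matching the example in the interview
print(processing_budget(6.0))  # 5.4, so a slower broadcast buys more quality
```

The design choice follows directly from the quote: the budget scales with the broadcast delay rather than being fixed, which is why a six-second broadcast allows “even greater quality of streaming”.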


So why has a multi-camera-angle service failed to catch on until now? “There have been many attempts at it,” notes Hasenrader. “The classic second-screen model has worked partially, sometimes. The ‘hybrid’ in our title is really about bridging the gap between the linear broadcast and OTT. Because of the lack of a real relationship between those two in terms of their delivery times, there hasn’t been a compelling reason for someone to watch on the second screen. Some of the attempts simply tried to get the same content onto multiple devices, which added little value.

“But having additional content that is not otherwise seen, and is specifically synchronised, gives the consumer a much more compelling reason to actually enjoy that experience,” he continues. “If you think about Formula One, being in the car with Lewis Hamilton five seconds after you’ve seen it on telly doesn’t really do too much. But if you’re actually sitting in the car while the overtaking manoeuvre is happening, and not after it has already happened, it changes the whole idea. And I think that gap between broadcast and OTT exists mostly because of the lack of low latency.”

The applications for this hybrid model extend beyond sport; music concerts are another case where choosing the camera angle can enhance the viewing experience. “Let’s say it’s my favourite band and I’m a big fan of the drummer, and I want to see what the drummer’s doing rather than the person panning out and showing me the crowd, the stadium, the bass player or the boring keyboardist,” says Hasenrader. “It also provides a nice way in for people to immerse themselves in what they actually enjoy watching, rather than being dictated to by some person who honestly believes that that’s what the people want to see.” The solution provides not only additional video streams, but also audio streams.
“There’s a long history of people using alternate audio streams - listening to their radio alongside the TV, for instance,” Hasenrader observes. “Technology now allows many different ways of doing it, including providing different languages for a more global entertainment base - and why should you be restricted to one language for one country?

“Then there’s the notion of the shoutcaster that’s emerging in the United States, both with major sports and with esports,” he adds. “We have these amazingly talented people who can just rave on for seven hours straight. It’s something different to the standard BBC English journalist doing their commentary, or the ex-footballer. I think people are now expecting a deeper experience, or some flexibility in directing their own experience. And I think that’s the thing that will really re-engage younger audiences and retain them for longer periods of time.”

This idea of adapting to the changing ways in which people watch TV is a key driver for Hasenrader: “Young consumers are not the sort of people to sit at home and just watch the TV as their fathers did; they consume their entertainment in a completely different and more engaging way.” He uses esports as an example of “drilling down into the game,” whereby fans can follow one player or take a macro view of the whole playing field. “In conventional television, that’s been determined by some person sitting in the production suite trying to give a view that is an average; it’s something to everyone, but we can perhaps provide many things to many people.”





IMMERSIVE SOUND: LOOKING BACK AND FORWARD

Immersive sound is not new, but the concept of immersive sound is evolving. By Dennis Baxter

As we start another decade in broadcasting, looking back I think the last 10 years were as dynamic and influential as any for the world of audio. Not so much through new or better technology, but through a convergence of technologies that made advances in multichannel audio production, delivery and consumption a reality.

THE OLYMPIC EFFECT

The pace of adopting the next generation of sound is on schedule. In 1996 we heard the full implementation of stereo sound at the Summer Olympics, and in 2008 we saw 2K and heard 5.1 surround sound from Beijing. At the Summer Olympics in 2020 there will be the full rollout


of 4K and 9.1 immersive sound, showing a clear trend: every 12 years the broadcast industry uses the Olympics to introduce or prove technologies. There is no doubt that immersive sound is quickly finding its way to the consumer through streaming and via progressive networks around the world, such as NHK in Japan, Korea’s KBS and NBC in the United States, which understand the future of broadcasting. NBC is planning to mix immersive sound from a control room in Tokyo and will feature content from high-viewership events such as beach volleyball and the opening and closing ceremonies. Karl Malone, audio and audio systems engineering designer for NBC Sports and Olympics, said that NBC has benefitted from its experience


at the 2016 and 2018 Olympic Games, where immersive sound was produced. The future for dimensional and interactive audio is bright, but recently I read something that disturbed me: a sound manager for a large 2020 sporting event was complaining about inadequate resources for producing immersive sound. Resources or not, the requirement for immersive sound is on the table, just as the requirement for surround sound was on the table in 2008 - even with woefully inadequate resources and an inexperienced crew.

CREATING IMMERSIVE SOUND

I am convinced that it is not hard to create and produce immersive sound with minimal resources, and would like to offer a few observations. First, to create immersive sound a mixer needs to be able to hear the dimensional soundfield. Second, the mixer must be able to position the sound elements accurately in the soundfield. Finally, the mixer must be able to make and maintain an artistic balance of the elements in the dimensional soundfield.

An alternative workflow would change the location for mixing and monitoring immersive sound. For more than a decade, FIFA World Cup and NBC productions have been sending mono and stereo stems from the venues back to master control to be mixed, and I can clearly see the trend towards centralised control rooms for multiple remote productions continuing.

There also seem to be schools of thought that you need ambisonic microphones to create immersive sound. I spent 2019 recording 1st-, 3rd- and 4th-order ambisonic microphones side by side, along with 10 correlated and non-correlated spot microphones. I have come to the conclusion that ambisonic microphones sound great, but are not essential to creating immersive sound. Immersive sound for sports and live entertainment is a subjective balance between venue ambiance and atmosphere, as well as relevant event-specific sound.
With any sport or event, minimal immersive sound production can be as simple as injecting additional ambiance and atmosphere into the height speakers. To me, American and European football are the definitive examples of overhead atmospheric enhancement, and this method does not require any localisation. 3D panning is helpful for precise localisation, but the bottom line is that convincing immersive sound for most sports does not require it. Some sound above the viewer/listener will usually create a sense of aural space for the 2D picture, but sound designers are wrestling with what should be heard above the viewer when there is no obvious reason for it. Sure, we can put some sound up in the height channels, but how long will more crowd sound remain a compelling reason for immersive sound?

SUCCESSFUL IMMERSIVE SOUND

Immersive sound production will develop and evolve with creative sound design and imagination. The picture is still two-dimensional, and sound could become a production differentiator where “made for TV” sound, such as NHRA drag racing, becomes a viewer draw.
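The “inject ambiance into the height speakers” approach Baxter describes can be sketched very simply: copy an existing 5.1 bed and feed attenuated venue ambiance into four height channels to form a 5.1.4 mix. The channel names and the -6 dB height trim below are illustrative assumptions, not a production specification:

```python
# Minimal sketch of deriving a 5.1.4 mix from a 5.1 bed by routing
# trimmed ambiance to the height channels. Names and levels are
# illustrative assumptions.

SURROUND_5_1 = ["L", "R", "C", "LFE", "Ls", "Rs"]
HEIGHTS_5_1_4 = ["Ltf", "Rtf", "Ltr", "Rtr"]  # top front/rear pairs

def db_to_gain(db: float) -> float:
    return 10 ** (db / 20.0)

def upmix_to_5_1_4(bed: dict, ambiance: list, height_trim_db: float = -6.0) -> dict:
    """Copy the 5.1 bed and add trimmed ambiance to each height channel."""
    g = db_to_gain(height_trim_db)
    out = {name: list(bed[name]) for name in SURROUND_5_1}
    for name in HEIGHTS_5_1_4:
        out[name] = [s * g for s in ambiance]
    return out

bed = {name: [0.5, 0.5] for name in SURROUND_5_1}  # two dummy samples per channel
crowd = [0.8, 0.8]                                 # crowd ambiance feed
mix = upmix_to_5_1_4(bed, crowd)
print(len(mix))  # 10 channels: the 5.1 bed plus four heights
```

Note what the sketch deliberately omits: there is no 3D panning or per-object localisation, which is exactly Baxter’s point that convincing immersive sound for most sports does not require it.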


There is no doubt that field sports such as football have no sports-relevant sound in the vertical axis, only ambiance and atmosphere, and artificial embellishments are probably not appropriate. But consider that perhaps enhanced ambiance is enough of an embellishment for football fans; immersive sound does not need to be over the top to be effective. Hoping for interesting and successful immersive sound is not going to advance our audio agenda. NHK in Japan has spent almost two decades preparing for immersive sound. Dreaming, theorising, research, testing, evaluation, planning and preparing new sound designs and documents have all contributed to a successful implementation of immersive sound within the Japanese broadcaster. NHK will produce the entire 2020 Olympic Games in 22.2 immersive sound. Immersive sound will be proven one step at a time.

Immersive sound is not new, but the concept of immersive sound is evolving. Remember, I said NBC would produce 9.1 immersive sound and NHK would produce 22.2; both formats claim to be immersive sound. What is the difference on a soundbar? Advanced audio designs are imminent, but they will be built on proven successes. What is the channel configuration for immersive sound? Is it 5.1.2, 5.1.4 or 22.2? I do not know. I am currently testing sound schemes that concentrate the listener’s attention forward and use the height elements to draw the focus beyond the normal left and right periphery. This is an interesting concept for immersive sound and seems to work well with soundbars. The migration to immersive sound must be planned, methodical and, most importantly, successful. Immersive sound must be marketed to the consumer and easy to install with minimal wiring. Stereo and surround sound had growing pains, but I could not imagine going back to mono sound or black-and-white TV.

Dennis Baxter has spent more than 35 years in live broadcasting contributing to hundreds of live events including sound design for nine Olympic Games.




LET THE SUNSHINE IN

LaLiga recently unveiled the submissions for its inaugural technology showcase, aimed at helping clubs, players and TV viewers gain insights and experiences previously unavailable. Among the submissions was Sunlight Broadcasting Planning, which models every LaLiga stadium and the effects of sunlight at all hours of the day. Luis Gil, director of competitions and the players’ office at LaLiga, explains how it could help broadcasters.

HOW DOES THE SUNLIGHT TECHNOLOGY WORK?

Sunlight is an application in which all stadiums of LaLiga Santander have been modelled in 3D, with projections of sunlight added for each day of the year. This is used to simulate the light conditions inside each stadium, allowing LaLiga to optimise the audiovisual spectacle and predict the impact of the sun on the stands and on the grass. All of this helps us to create the best match schedules, and for this reason the tool is coupled with our Calendar Selector tool, which uses artificial intelligence to suggest schedules that optimise audiences and attendance for LaLiga matches.
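LaLiga’s tool is proprietary, but the calculation at the heart of any such simulator, where the sun sits over a stadium at a given date and time, can be sketched with the standard solar declination and hour-angle approximation. The Madrid figures below are illustrative only, and the formula ignores refraction, longitude and equation-of-time corrections:

```python
# First-order estimate of solar elevation for a stadium's latitude at a
# given day of year and local solar hour. An illustrative approximation,
# not LaLiga's actual model.
import math

def solar_elevation_deg(latitude_deg: float, day_of_year: int, solar_hour: float) -> float:
    """Approximate solar elevation in degrees (negative means below the horizon)."""
    # Solar declination, approximated from the day of year
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (solar_hour - 12.0)  # degrees from solar noon
    lat, dec, ha = map(math.radians, (latitude_deg, decl, hour_angle))
    sin_alt = (math.sin(lat) * math.sin(dec) +
               math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(sin_alt))

# Madrid (about 40.4 degrees N) around the June solstice (day 172):
noon = solar_elevation_deg(40.4, 172, 12.0)     # high sun, roughly 73 degrees
evening = solar_elevation_deg(40.4, 172, 19.0)  # low sun, glare risk for a 7pm kick-off
```

A scheduling tool would combine elevation (and azimuth) with the 3D stadium geometry to predict which stands and camera positions face glare at each candidate kick-off time.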

“Visual spectacle is everything.”

WHY IS THAT IMPORTANT FOR BROADCASTERS?

In the world of entertainment, visual spectacle is everything. The lighting in the final season of Game of Thrones received a tonne of feedback from fans. A football match with solar glare would be no different. Sunlight helps broadcasters to produce the best audiovisual experience possible, which is critical to growing our international audience.


WHY HAS LALIGA CHOSEN TO DEVELOP THIS TECHNOLOGY ITSELF, RATHER THAN LEAVE IT TO BROADCASTERS?

LaLiga is committed to ongoing technology innovation to improve its product and offer the best experience to all stakeholders. By producing this technology in-house we are able to make adjustments as they are needed and continue perfecting the tool for the benefit of fans, broadcasters and clubs.

WHAT IS THE NEXT STEP FOR THE TECHNOLOGY? WHEN WILL IT BEGIN TO BE EMPLOYED?

The technology is already in use across LaLiga Santander, where it works alongside Calendar Selector. Soon it will be implemented in LaLiga SmartBank.

WILL IT BE DEVELOPED FURTHER? COULD IT HAVE OTHER BROADCAST USE-CASES?

We are open to exploring new possibilities with clubs and broadcasters, and there are certainly options. For example, the projections could be used by clubs to place their screens around the stadium, or the sunlight data could be further evaluated to support greenkeeping in the stadiums.


LIFE AFTER PRODUCTION


Dan Meier asks Sundog Media Toolkit CEO Richard Welsh to break down localisation

On his recent appearance on The Graham Norton Show, Ricky Gervais revealed that his series After Life had to be delivered to
Netflix a full two months early in order for localisation services to be completed. Since Norton had to steer Gervais onto the pressing issue of an anecdote about meeting Elizabeth Banks, we’ve taken it upon ourselves to ask a localisation expert what these processes involve and how they’re changing.

“Once a show is complete there are a surprising number of steps that may take place in localisation,” explains Richard Welsh, CEO at post-production developer Sundog Media Toolkit. “Before anything else happens, it may be necessary to create a secure copy of the content, and transcribe the content (as the final result is rarely if ever a match to the script).” Dubbing, subtitling and re-mixing follow, which means local voice casting and recording, then mixing the original music and effects with the foreign-language dub. “That’s assuming music rights exist across territories,” adds Welsh. “If they don’t, there may well be a music recut as well.” The title graphics are then localised, with the name of the show translated into local languages, followed by censor cuts for ratings purposes or airline/hotel versions, and commercial edits or swap-outs where products or logos appear in shot. “My favourite one of these that required a script change was the volleyball ‘Wilson’ in the Tom Hanks movie Cast Away,” notes Welsh. “In some versions the volleyball was called ‘Spalding’ because Wilson wasn’t a well-known brand in those countries.”

2 MONTHS 2 LABORIOUS

Why does the process take so long? “Essentially because it is manually serviced, either centrally at a larger service provider, or distributed across multiple service vendors, usually geographically as well as between multiple business entities,” says Welsh. “This makes the whole process high-touch and fragmented. This in turn creates a significant disconnect from the business process, which is usually centralised and low-touch.
This is because the type of tools and automation available to the business side are much more advanced and mature than the media process, generally backed up by fairly sophisticated systems for CRM, finance, logistics, supply chain management and so on.” The solution, then, is to automate and integrate into business systems as much as possible, though Welsh says the reality is far from ideal. “In the specific case of versioning and localisation, this is very much one of those niche processes where advanced and integrated tooling is sparse,” he explains. “With that said, plenty of progressive companies are increasingly managing their localisation teams remotely, often with sophisticated management systems. This streamlines the necessary human steps. Combining this with fully automated


mastering can hugely speed up the process. “This is what we see with our work at Sundog,” he continues. “Our engine can take the versioning process from days and weeks to hours and minutes, but this only starts to come into its own when the upstream supply chain for the raw assets and translations/dubs/subs etc is also efficient, in order to feed the machine.”

DIGITAL DUBBLES

The use of AI is also burgeoning in versioning, further streamlining these processes. “It still feels like we’re at an early stage here,” Welsh observes. “Content analysis using AI is largely centred on analytics at this point: tagging what’s in the scene, who’s in the scene and so on. In a more advanced analysis, AIs are providing context such as the emotional balance of the scene - is it 10 per cent happy, 15 per cent sad and 75 per cent angry? That is (literally and figuratively) on balance an angry scene.” Sundog has also introduced an AI tool for security rotoscoping that reduces the process of creating a secure dubbing copy from about a week to a few hours. “This type of application is the low-hanging fruit that localisation pipelines can take advantage of,” says Welsh. “However, there are still gaps in capabilities that would massively benefit the localisation process.

“An example would be a really accurate transcription tool (and by this I mean really, really accurate) which also brings contextual data into the transcription,” he continues. “This would be a phenomenal benefit to localisation for both dubs and subs.”

UNIVERSALISING LOCALISATION

Finally, Welsh observes “big shifts” in the way broadcasters make and distribute content, either partnering with streaming platforms or creating their own, and shifting their processes to meet the streamers’ model. “I would characterise this as a ‘big studio’ mindset,” says Welsh.
“The big advantage the studios have is that efficiency can come with scale, so they are always streamlining their processes because they can; with a high volume of content they can employ automated processes rather than every production workflow being a special snowflake.” This creates more and more on-demand services via the Cloud that are available to anyone, almost regardless of budget. “There are still very many manual processes in the chain that effectively introduce costs, making it harder to compete when budgets are tight,” adds Welsh. “The streaming platforms have been able to pump huge amounts of money and resource into content production, but the advances that come with that can actually benefit all levels of production, from the smallest indie to the biggest blockbuster, in the long run. I believe this is a case of the rising tide raising all ships,” he concludes.
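The emotional-balance analysis Welsh mentions (10 per cent happy, 15 per cent sad, 75 per cent angry) boils down to picking a dominant label per scene, which a localisation pipeline might use, for example, to brief dubbing actors on tone. A toy sketch, with the scores invented for illustration:

```python
# Toy version of the scene-labelling step: given per-scene emotion scores
# from some upstream AI model, return the dominant emotion. Scores here
# are made up for illustration.

def dominant_emotion(scores: dict) -> str:
    """Return the label with the highest score; scores need not sum to 1."""
    if not scores:
        raise ValueError("no emotion scores supplied")
    return max(scores, key=scores.get)

scene = {"happy": 0.10, "sad": 0.15, "angry": 0.75}  # Welsh's example
print(dominant_emotion(scene))  # angry
```

The hard part, of course, is producing the scores in the first place; as Welsh notes, the labelling itself is the low-hanging fruit.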




“WE KNOW BEFORE YOU” Jacques-Edouard Guillemot, SVP executive affairs at NAGRA, tells Dan Meier about the company’s use of AI

WHAT IS NAGRA INSIGHT?

Insight is aimed at boosting business performance for our customers, which are pay-TV companies and telcos. The product’s client base quadrupled in 2019, so it was a strong year for us. Our aim with AI over the year was to work out how to use data and artificial intelligence to create business impact for our customers.

WHY IS AI SO IMPORTANT TO ANALYTICS?

So much data exists, and there is a very large amount of work to do. What we really focus on is how we take the data to create impact, and there are several ways to do that. Either you display the data - e.g. how has my network performed yesterday? - and you have a very nice graph of what happened, which is already a way to create impact. Then the content expert, the packages expert, the marketing expert and the acquisition expert at the customer’s end all have a look at the data and take decisions based on it: looking at a single subscriber and being able to say, ‘okay, to this household I recommend this type of content, or this type of package for next time.’ So this is another level at which data can help.

“There are so many dimensions where AI can really change the game.”

When you put AI into this formula, that’s really how you put one brain behind each and every subscriber, to make sure that the decision taken for one individual subscriber is as good as a decision that could have been taken by an expert looking at that case. So that’s the way AI can help us; we need to leverage this know-how to the maximum and make sure that we can have one brain behind each subscriber, so the right decision is made.

WHAT IS THE VALUE OF THIS FOR COMPANIES?

You could have a pay-TV subscription. You call the provider because you have an issue with your box, and they have not noticed that you were a very good customer for so many years, so you wait about 15 minutes at the call centre and then you get someone who hasn’t looked at your profile and politely tells you that they can’t do anything for you - and you have to send your box from the post office, etc. The way with AI is basically that we know before you do that you have an issue with your box. We call you and tell you that it’s not a hardware issue but an issue with your WiFi. So we help you over the phone to fix your issue and, as you are extremely happy, advise you that you don’t have the right package, because your kids prefer to watch Disney movies and so the kids package would be much more price-efficient and much better for your household. You can see that in those two experiences there is a very big difference, not only in terms of satisfaction but also in terms of value, because before you had a package that you were not using very well, and afterwards you have a package that exactly suits the needs of your household.

WHAT AI DEVELOPMENTS DOES NAGRA HAVE PLANNED?

Our solution is based around four pillars: optimising the customer lifetime; optimising the value of content; optimising the experience, to make sure that the network and so on is working perfectly; and advertising, to make sure that we reach the right audience. Now we are at a phase where we are extremely opportunistic; when a new customer comes with a new issue we are happy to develop, but for the time being I cannot really tell you what is going to happen in 2020!


WHERE DO YOU EXPECT AI TO GO IN M&E OVER THE NEXT FIVE YEARS OR SO?

You have to look at what’s happening in the internet world. You have to look at Netflix and at Facebook to understand exactly where it’s going. The feed you have on Facebook is completely tailored, post by post, to make sure that stickiness is at a maximum. This is one example where AI plays a very important role in arbitrating, in real time, which content to display and which not to display in order to ensure stickiness. Our customers, the operators, are far away from that, but this is the direction we are going; the way forward. When it comes to packages, for example, we can offer the right package, even downgrading packages to make sure that the customer continues to pay their subscription. There are so many dimensions where AI can really change the game.


THE FINAL WORD


Christine Schyvinck, chairman, president and CEO at Shure, has the final word

How did you get started in the media tech industry?

Beginning my career as a quality control engineer, I learned about the types of issues sound professionals encounter daily and made sure that our equipment didn’t contribute to those issues. Because Shure’s reputation is built on high-quality products, this was an extremely important job. It still is today, as the company remains committed to delivering the best-performing products. I eventually moved from vice president of quality to vice president of operations, where I managed procurement, supply chain and manufacturing as well as quality. This was another essential experience because I was able to work with intelligent, passionate people worldwide. I then moved into a role leading global marketing and sales, where I managed three business units across the Americas, Europe/Middle East/Africa and Asia/Pacific regions. This experience helped me learn more about overall company operations, the global business structure, and what we need to do to meet the evolving needs of customers in the media tech industry. Having diverse experience in different parts of the company has given me an appreciation that I don’t think many CEOs have: thoroughly understanding the type of work that happens at various levels of an organisation and appreciating the efforts of associates to make this such a fantastic place.

“Looking beyond our traditional target customers can bring fresh insights.”

How has it changed since you started your career?

As Shure celebrates its 95-year anniversary in 2020, there’s a tremendous level of pride throughout the company about where we are today. But business is changing. Customer needs are changing. And we need to continue our evolution. The biggest challenge most companies face is how quickly technology is changing. At Shure, we’re having discussions about our various markets and how to make sure we remain relevant. Media tech is a vital part of our focus. We have tremendous horsepower when it comes to audio, wireless technology, DSP and software. Now, more than ever, we must stay at the forefront of shifting market needs and technology changes.

www.tvbeurope.com

What makes you passionate about working in the industry?

I love our associates. I’m proud of the culture Shure has grown and fostered. Evolving from a local, family-run Chicago company to a global, go-to solutions provider in audio technology, we’ve not forgotten our purpose. Our associates are committed to making this the best company by providing superior products and services for customers. With so many changes on the technology side, it’s a fascinating time in our industry, and a flexible, evolving workforce is a competitive advantage.

If you could change one thing about the media tech industry, what would it be?

The demand for innovation is happening so quickly. I’m excited about our



THE FINAL WORD

“Companies that have more diversity on their teams perform better financially.”

continuous innovation. We are always looking to improve our products - or develop new ones - to help meet the demands of customers. In some cases, we’re working with customers to create new solutions for them. For example, we’ve already achieved great success with a newer audience of content creators with the MV88+. Our new TwinPlex microphones - launched in April - are built to take on the diverse needs of top-tier audio professionals in every setting, with reliable clarity for TV and film, tailored-for-speech audio for speaking appearances, and discreet durability for broadcast. Sometimes I wish we could speed up the innovation process for some of our products, but we will never rush to market before products are thoroughly tested and approved. Audio professionals stake their careers on big moments in broadcast, and they continue to trust Shure to deliver. We have done so for 94 years with the quality of our products, and we will never sacrifice the quality performance that millions of people around the world have enjoyed for the sake of speed.

How inclusive do you think the industry is, and how can we make it more inclusive?

McKinsey did a study a few years ago called Diversity Matters. The facts are there: companies that have more diversity on their teams perform better financially. And I’ve seen that change at Shure, having been here for so long. We used to make products for customers and conventions we knew in the US, and then we’d export those products to other parts of the world. Now, we have associates working in more than 35 countries around the world. This gives us a much better handle on local market needs and how to grow in those regions. Diversity in backgrounds - from the tech sector, musicians, audio engineers - is also important.
We embrace diversity because it helps us better understand our diverse set of customers, which range from major television networks and movie studios, to educational institutions, to musicians and concert venues, to global businesses, to individuals who use headphones and earphones. As an industry, we need to make more effort to ensure we encourage young people of all backgrounds to get involved in the world of media technology. At Shure, we are also engaged in efforts to bring more women into the industry, from engineering to marketing and sales.

How do we encourage young people that media technology is the career for them?

There’s not a single formula for success in pro audio - everyone can make their own path. The interesting part is that there are so many routes to success in this space. I would stress that passion and perseverance are two of the attributes that can help. I’ve seen people who did not have “traditional backgrounds” in pro

70 | TVBEUROPE JANUARY/FEBRUARY 2020

audio thrive in this industry because they were dedicated to working hard and learning more. Making connections with others is also important. At the end of the day, we all share the same passion about creating and enabling great performances. That doesn’t happen without great audio. Where do you think the industry will go next? The global television broadcasting market was valued at $317.2 billion in 2017, according to a report from The Business Research Company. With so many studios increasing budgets for TV shows to produce high-quality content across a growing number of platforms, the need for broadcastquality audio solutions is more important than ever. Platforms like Netflix, Disney+, Hulu, Amazon Prime, YouTube and others have become more popular as they try to compete with traditional broadcast outlets. The result is a higher demand for audio equipment and technologies that keep up with innovations in broadcast. What’s the biggest topic of discussion in your area of the industry? With more growth in the industry and expansion globally, we can’t forget who makes media tech so successful – the professionals behind the scenes. They are our customers. Shure is - and always has been - a customer-focused company. Customers in this business are very particular about their products. They must perform at the highest quality possible. Our goals are to continue to develop quality products and to innovate for the next generation. We are also working on better positioning ourselves with customers globally, as we now have operations in more than 35 locations around the world. So much of our business requires hands-on product demos and human interaction, so we ensure that we have support for our customers wherever they are. What should the industry be talking about that it isn’t at the moment? We should be listening more to those who are influential in the media tech industry. Today, more than ever, those influencers may be found in non-traditional places. 
Putting customers first, we're not afraid to work shoulder to shoulder with people in the industry to help find solutions to problems. We like being in the field and solving issues with venues, broadcasters and performers, but we must recognise that product and technology trends can take root outside of these familiar environments. Looking beyond our traditional target customers can bring fresh insights and help us continue to be problem solvers who share the end goal of having audiences wowed by a performance. The only way that happens is through attention to detail, attention to quality products, and attention to listening to what customers need.




