
Intelligence for the media & entertainment industry




TVB60.cover.indd 1

09/11/2018 14:32






Editor: Jenny Priestley jenny.priestley@futurenet.com
Staff Writer: Dan Meier dan.meier@futurenet.com
Designer: Sam Richwood sam.richwood@futurenet.com
Managing Design Director: Nicole Cobban nicole.cobban@futurenet.com
Contributors: Michael Garwood, George Jarrett, Philip Stevens
Group Content Director, B2B: James McKeown

A recent study by the Hollywood Reporter found that binge-watching is now the most popular way for US viewers to consume content. They want their series all at once, rather than being drip-fed to them on a weekly basis. However, four in 10 TV viewers said they watch shows live, compared to roughly 30 per cent who watch them on-demand. Those statistics got me thinking about how we in Europe consume our TV. Yes, of course we binge-watch the shows from the big streamers, but in 2018 we have seen a marked return to the weekly episodic drama.

Look at the popularity of the BBC’s Bodyguard. Its final episode saw 10.4 million viewers tuning into BBC One at 9pm on a Sunday night, the biggest overnight drama figure since 10.5 million saw Downton Abbey’s series two finale in November 2011. So, if we’re all binge-watching our content, how did that happen? I would argue that there is still content out there that makes us all stop what we’re doing and gather around our TV sets. And 2018 has been the year where we’ve all remembered what the communal viewing experience is like.

‘2018 has been the year where we’ve all remembered what the communal viewing experience is like.’

From Bodyguard to the FIFA World Cup to Harry and Meghan, I think it’s fair to argue that the big screen in the corner of the living room has seen something of a resurgence over the past 12 months. I admit, I’ve talked about the death of linear TV a lot over the past couple of years, but with millions of people tuning in to watch specific events or episodes, I believe there is still a place for it in our lives. I currently have an appointment with my TV every Saturday evening for Strictly Come Dancing. I’ve been a fan of Strictly since the first series and it’s been great to see more and more viewers catch the Strictly bug.

So, I am particularly excited that this month we’ve sent Michael Garwood to Elstree Studios for a look at all the hard work that goes into bringing a bit of sparkle to our TV screens every week. Elsewhere, George Jarrett talks to TiVo’s Charles Dawes about the intersection between content and delivery; we speak to new SMPTE UK governor Marina Kalkanis; Philip Stevens takes a look at the increasing popularity of esports; and we hear from the director of photography on BBC drama Luther about the production process behind the show.

MANAGEMENT
Managing Director/Senior Vice President: Christine Shaw
Chief Revenue Officer: Luke Edson
Chief Content Officer: Joe Territo
Chief Marketing Officer: Wendy Lissau
Head of Production US & UK: Mark Constance

ADVERTISING SALES
Group Sales Manager: Andrew Leggatt andrew.leggatt@futurenet.com (0)207 354 6029
Sales Executive: Can Turkeri can.turkeri@futurenet.com (0)207 534 6000
Commercial Sales Director, B2B: Ryan O’Donnell ryan.odonnell@futurenet.com
Japan and Korea Sales: Sho Harihara sho@yukarimedia.com +81 6 4790 2222

SUBSCRIBER CUSTOMER SERVICE To subscribe, change your address, or check on your current account status, go to www.tvbeurope.com/page/faqs or email subs@tvbeurope.com

ARCHIVES Digital editions of the magazine are available to view on ISSUU.com. Recent back issues of the printed edition may be available; please contact lucy.wilkie@futurenet.com for more information.

INTERNATIONAL TVBE and its content are available for licensing and syndication re-use. Contact the International department to discuss partnership opportunities and permissions. International Licensing Director: Matt Ellis, matt.ellis@futurenet.com. Future PLC is a member of the Periodical Publishers Association.

All contents © 2018 Future Publishing Limited or published under licence. All rights reserved. No part of this magazine may be used, stored, transmitted or reproduced in any way without the prior written permission of the publisher. Future Publishing Limited (company number 2008885) is registered in England and Wales. Registered office: Quay House, The Ambury, Bath BA1 1UA. All information contained in this publication is for information only and is, as far as we are aware, correct at the time of going to press. Future cannot accept any responsibility for errors or inaccuracies in such information. You are advised to contact manufacturers and retailers directly with regard to the price of products/services referred to in this publication. Apps and websites mentioned in this publication are not under our control. We are not responsible for their contents or any other changes or updates to them. This magazine is fully independent and not affiliated in any way with the companies mentioned herein. If you submit material to us, you warrant that you own the material and/or have the necessary rights/permissions to supply the material and you automatically grant Future and its licensees a licence to publish your submission in whole or in part in any/all issues and/or editions of publications, in any format published worldwide and on associated websites, social media channels and associated products. Any material you submit is sent at your own risk and, although every care is taken, neither Future nor its employees, agents, subcontractors or licensees shall be liable for loss or damage. We assume all unsolicited material is for publication unless otherwise stated, and reserve the right to edit, amend, adapt all submissions.





08 The cloud is now

Blue Lucy’s Julian Wright on why operators need to be thinking cloud first

11 Strictly off-limits

Michael Garwood waltzes on to BBC Studioworks Elstree Studios for a behind the scenes cha-cha-chat

29 At the intersection of content and delivery

George Jarrett meets TiVo’s Charles Dawes

38 Looking to SMPTE’s future


Jenny Priestley talks to new SMPTE UK governor Marina Kalkanis

54 Esports is expanding

Philip Stevens takes a look at the increasing popularity of esports

60 Investigating Luther

DoP John Pardue discusses the production process behind the show

70 Levelling the football field

Ampere Analysis’ Alexios Dimitropoulos takes a look at the value of Europe’s big five football leagues


Sony introduces its Intelligent Media Services

Supplement: Tedial’s latest supplement takes a radical view of the media future




Understanding your data
By Harry Grinling, CEO, Support Partners


Virtualisation technology has already arrived in the broad world of smart business, where automation and predictability hold the key to monetisation. Industries such as telecoms, manufacturing, energy, financial services and retail, amongst others, successfully adopted the cloud some time ago. However, when you look at the broadcast, media and entertainment sector and its vendors, it’s somewhat behind the curve. As a broadcaster, if you aren’t considering or testing cloud technology at this moment in time, when other sectors have already made lucrative gains, then your business model is at a high risk of failure.

Long-established vendors in the world of broadcast, while still having a role in specialist functionality, are now joined, and arguably superseded, by a new influx of service providers. On the one hand there are cloud companies like AWS, Azure and Google; on the other, enterprise business management services like Salesforce and SAP. Over the last few years, these companies have broadened their feature sets to the point where the line on which one product finishes and another starts has blurred beyond recognition. These ‘new’ players are introducing completely new ideas when it comes to managing and delivering content, based on processing, connectivity and storage, while at the same time enabling the move towards new business models. As one of the first companies in the UK M&E space to

adopt a ‘cloud-first’ strategy, and the first AWS-certified M&E partner in the UK, Support Partners are well versed in utilising these disruptive technologies. As a company, our goal is to help all of our clients migrate part or all of their infrastructure to the cloud by 2020.

Making the move from CapEx to OpEx is potentially transformative for the business models of content creators and deliverers. The OpEx model allows a direct link between costs and revenues, dynamically scaling on demand to capture those new opportunities. That, in turn, allows you to understand where profitability lies, and therefore concentrate on the more commercially compelling services. Cloud technology is inherently agile. You can add new services, whether that is 4K Ultra HD production or machine learning for automated logging, simply by spinning up the appropriate microservices in the cloud.

A software-defined network allows you to move to a business-driven operation. Instead of someone in marketing asking for a channel and someone in engineering saying it will take six months to get it online, executives working in enterprise management software like Salesforce will be able to run a ‘what if?’ exercise and immediately see how financially and operationally viable a new service would be. However, for most organisations it is daunting to make the jump to being cloud first, so where is the sweet spot

‘As a broadcaster, if you aren’t considering or testing cloud technology at this moment in time, then your business model is at a high risk of failure.’

06 | TVBEUROPE NOVEMBER / DECEMBER 2018


OPINION AND ANALYSIS

between on-premises hardware and the cloud? Do you hand everything over to AWS, Azure or Google? Or do you use the cloud for its elasticity, stepping in when your on-premise hardware is at full stretch? And, if the latter, how do you know just how much hardware you should have on site so that it will be running close to capacity most of the time?

UNDERSTANDING SCALE

In a traditional broadcast architecture, the experienced system designer would look at a requirement and have a basic understanding of the scale. An installation might have, say, around 400 device ports, so to be safe the architect would specify a 512 x 512 router. Having set the size, the procurement people could talk to a selection of vendors and get the right deal on the right device. This CapEx model means that you can’t easily scale up without more investment, or indeed scale down as and when required, plus you are tied to specific vendors.

In a virtualised environment the scale is equally vital when provisioning a system, but it is far less readily understood. How many processor cycles do you need to render a 4K animated title sequence? How much storage do you need to edit a 13-part drama? How much additional processing power can you afford to call on in the cloud before you have to say a process will have to wait – and who makes that decision?

The answer: you need to understand the data. Not the content and its metadata, but the usage data. How much storage are you using and for how long? How many processors do you need, how many are in continual use and how many do you call on when a lot is happening? Now the chances are you already have some cloud contracts, or at least are negotiating at this very moment. The IABM/Devoncroft research published earlier this year said that 85 per cent of broadcasters think they will be using the cloud in the next two to three years and 28 per cent are already using it. If you already have a cloud contract, did you get a good deal?
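Part of why that question is hard to answer is that a cloud bill spans several dimensions at once. A minimal sketch of the arithmetic, with entirely hypothetical rates and tier thresholds (not any real provider’s pricing):

```python
# Illustrative monthly cloud bill across the billing dimensions the
# article describes: transfer billed by the gigabyte, processing by the
# minute, storage by the terabyte. Every rate and threshold here is
# hypothetical, not any real provider's pricing.

def monthly_bill(egress_gb, processing_min, storage_tb,
                 egress_rate=0.08, proc_rate=0.05, storage_rate=23.0,
                 surcharge_threshold_gb=10_000, surcharge_rate=0.12):
    """Return (total, breakdown) for one month's usage."""
    # Egress is tiered: a hefty surcharge kicks in above the threshold.
    if egress_gb <= surcharge_threshold_gb:
        egress_cost = egress_gb * egress_rate
    else:
        egress_cost = (surcharge_threshold_gb * egress_rate
                       + (egress_gb - surcharge_threshold_gb) * surcharge_rate)
    breakdown = {
        "egress": round(egress_cost, 2),
        "processing": round(processing_min * proc_rate, 2),
        "storage": round(storage_tb * storage_rate, 2),
    }
    return round(sum(breakdown.values()), 2), breakdown

total, parts = monthly_bill(egress_gb=12_000, processing_min=30_000, storage_tb=40)
print(total, parts)  # the same total can hide a very different cost mix
```

Even in this toy version, two contracts with the same headline total can have completely different exposure to a spike in egress or processing, which is exactly why like-for-like comparisons need real usage data.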
It can be hard to tell when file transfers are billed by the gigabyte, processing by the minute, and asset management by the terabyte of storage. This is a multi-dimensional matrix. The cloud provider will have a set of fees, which may include some hefty surcharges if you go over a certain level. The software provider will also have its own licensing model, which will be based on different parameters such as usage, nodes activated, codecs used etc. The people providing media-specific software used to sell broadcast hardware. They have had to re-engineer their own businesses from a cash flow position where they received 100 per cent of the value of a sale at the point of acceptance, to one where they are maybe hoping

to get a recurring licence of 20 per cent of the value each year for five years. Their natural instinct is to ensure their return by nudging the licence fee up; you may find yourself paying 25 per cent a year, with the expectation that you will still be there five years from now. So, how can this be avoided?

DASHBOARD

The solution is to have a means of monitoring and evaluating all the data and presenting it on a single pane of glass. If you were making something physical – a car, say – then you would want to be able to see the complete supply chain from component to customer on a single dashboard. Why should it be any different for the manufacture of intellectual property, like a television programme or a commercial?

Such software exists. At Support Partners we have implemented systems using Salesforce, a platform that has evolved significantly over the last few years to take data from many disparate systems and house it in one place, helping to monitor trends and usage and provide actionable insights. Taking this approach builds an accurate, trusted dataset, derived from all available sources. Usage is reconciled between on-premise hardware and software services and cloud providers. That, in turn, gives you a real insight into how your transition to virtualisation and the cloud is performing. More importantly, it allows you to apply predictive data analytics to run the ‘what ifs?’ for your future plans.

If you know accurately what you need in terms of processing, connectivity and storage today, you are in a strong position to make decisions for the future. You know if you have a good deal today (or not), and you know what you want to achieve when you renegotiate. Armed with predictive data requirements, you are in a much stronger position to compare bids and costs. You are no longer locked into your current vendor for fear of getting a worse deal.
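The reconciliation step behind such a dashboard can be sketched in a few lines: pull usage records out of each system, tag their provenance, and total them per service. The source names and record fields below are invented for illustration; a real implementation would sit on a platform such as Salesforce.

```python
# Pull usage records out of disparate systems (cloud provider, on-prem
# hardware, software licensing), tag their provenance, and total them
# per service. Source names and record fields are invented.
from collections import defaultdict

def build_dataset(*sources):
    """Merge (name, records) pairs into one provenance-tagged dataset."""
    dataset = []
    for name, records in sources:
        for rec in records:
            dataset.append({**rec, "source": name})
    return dataset

def usage_by_service(dataset, metric):
    """Total a single usage metric per service across all sources."""
    totals = defaultdict(float)
    for rec in dataset:
        if metric in rec:
            totals[rec["service"]] += rec[metric]
    return dict(totals)

cloud = [{"service": "transcode", "minutes": 5200.0},
         {"service": "storage", "tb_months": 38.0}]
onprem = [{"service": "transcode", "minutes": 1800.0}]

data = build_dataset(("cloud", cloud), ("on-prem", onprem))
print(usage_by_service(data, "minutes"))  # → {'transcode': 7000.0}
```

The point of keeping the provenance tag is exactly the reconciliation described above: the same service consumed on-premise and in the cloud rolls up into one trusted figure, while each source remains auditable.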
You can use predictive analytics to understand how best to apply automation and machine learning, which microservices to spin up, and which spot instances are the best value at any particular time. There is no doubt that the future for media is all about predictive automated workflows, running on software-defined architectures in virtualised environments, on-premise and in the cloud. Provisioning these systems calls for new skill sets and a new way of thinking.

At Support Partners, we have implemented predictive data analytics for one of the world’s biggest production and broadcast companies, but it is applicable to any media company. If you are not capturing your operational data today, you will not know where you are tomorrow. That will be the difference between commercial stability and the risk of failure.
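As an illustration of how simple the first step of predictive data analytics can be, here is a least-squares trend fitted to monthly storage usage and projected forward. The figures are invented; a real forecast would draw on the reconciled usage dataset described earlier.

```python
# Fit a least-squares straight line to monthly storage usage and project
# it forward. The usage figures are invented for illustration.

def linear_forecast(history, months_ahead):
    """Ordinary least squares y = a + b*x over history, extrapolated."""
    n = len(history)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a + b * (n - 1 + months_ahead)

storage_tb = [52, 55, 59, 62, 66, 70]            # six months of usage, in TB
print(round(linear_forecast(storage_tb, 6), 1))  # projected need six months out
```

A straight-line fit is obviously a crude model, but even this level of forecasting turns a renewal negotiation from guesswork into a statement of requirements.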

‘If you know accurately what you need in terms of processing, connectivity and storage today, you are in a strong position to make decisions for the future.’



The cloud is now
By Julian Wright, founder of Blue Lucy


The media technology industry has, this year, been characterised by announcements from operators and suppliers about cloud-based implementations. This has been the case for us too; in customer engagements over the past 12 months there is a clear assumption that the technology offering will be a cloud-based SaaS model.

INFLECTION

Coinciding with IBC, the highly respected analyst Devoncroft published its Cloud Adoption Index, which reported that “more media technology buyers are budgeting for cloud services/cloud technology than any other project.” This research is significant and represents the inflection point between the early adopters and the early majority, to use Rogers’ technology adoption model. For many it’s been a long time coming.

If 2018 is the year when cloud for media operations management came to the fore, it also marks a contrast with the view, only a few years ago, that capabilities such as cloud storage were ‘ok for the back-up, safety copy’ only. This back-up would be used only in a disaster or emergency situation, should the real system (an on-prem LTO library) be offline. “Just think of the extortionate content egress charges should it ever be used,” was the prevailing view.

There has also been cynicism at the application level. Customer engineers would ask “What is your cloud story?” with the emphasis on story. The implied suspicion was that the proposed cloud solution was the ‘old’ on-prem application running on a VM somewhere, and therefore not a true cloud. To an extent this is valid, as such models do not afford the key benefits of cloud, such as scalability and incremental cost. Given that cloud, in the context of media operations, has been discussed since 2010, it has taken far longer than many expected for adoption to cross the chasm between the early adopter and early majority phases. A number of factors may have influenced this, such as the long

technology refresh cycles in our industry, but it is a shift in the overall business model for media operators which has driven the recent step change.

FLEXIBILITY

The need to manage content on more platforms than ever, often in differing technical formats, with different metadata structures, means that flexibility in solution approach is essential. Diluted income pressures drive the need to do more with less, so the importance of process automation and task management increases. Shorter contracts and events-based operations mean that platforms need to scale rapidly and, ideally, automatically. Cloud infrastructure is the only real way to achieve this.

Cloud infrastructure is less expensive than on-prem hardware. No it’s not. Yes it is. Many reports have been published by respected media technology analysts that make the financial case for, or against, cloud for media applications. Depending on the context and measure, it is possible to reach a conclusion in either direction. The case for cloud hasn’t been helped by the relative complexity (obfuscation) of the cloud providers’ pricing structures. In truth, like-for-like cost comparisons are impossible to make, and operators are now accepting that cloud is just a different way of paying for infrastructure, focusing instead on the many benefits that it provides.

The flexibility, scaling and global access that cloud-provisioned services afford cannot realistically be built on-prem. The ‘on when you need it’ nature of cloud enables prototyping and project risk reduction. Business hypotheses and operating models may be tested and refined at very low cost. This year we ran a number of pilot projects in the cloud which enabled our clients to prove that operating models would enable revenue or relieve operational pain ahead of a broader commercial commitment.
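One way to see why the cost argument can be concluded in either direction is that the answer hinges on utilisation. A toy comparison with hypothetical figures (not any vendor’s real pricing):

```python
# Why the cost argument can go either way: idle on-prem capacity still
# costs money, while cloud bills only for the hours used. All figures
# are hypothetical.

HOURS_PER_YEAR = 8760

def onprem_cost_per_used_hour(annual_cost, utilisation):
    """Amortised annual hardware cost spread over hours actually used."""
    return annual_cost / (HOURS_PER_YEAR * utilisation)

def compare(annual_onprem=35_000, cloud_rate_per_hour=6.0):
    for utilisation in (0.15, 0.45, 0.90):
        onprem = onprem_cost_per_used_hour(annual_onprem, utilisation)
        winner = "cloud" if cloud_rate_per_hour < onprem else "on-prem"
        print(f"utilisation {utilisation:.0%}: "
              f"on-prem costs {onprem:.2f}/used hour -> {winner} cheaper")

compare()  # cloud wins at low utilisation, on-prem at high utilisation
```

With these invented numbers the cloud is cheaper for a lightly used platform and on-prem wins for one running near capacity around the clock, which is exactly why analysts can honestly argue both sides.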
As with all transitions, business-focused planning is key to the successful migration of services to the cloud but, across the board, operators are now thinking cloud first.





Is the cloud a broadcaster end game?
By David Schleifer, chief operating officer at Primestream


For a broadcaster, the key to leveraging the cloud is to use it where it makes sense. Too often the cloud is presented as an all-or-nothing opportunity: “The cloud is where you will do everything!” In reality, like every other technology, the cloud works best when you use it for what it does best.

Use the cloud when you need almost unlimited storage on a platform that will never require you to worry about changing out the underlying technology from one format to another. This is a great benefit that is available today. It is reliable, functional, and will relieve a facility of space, power and maintenance responsibilities. Do you have multiple facilities that need to be connected? Then use the cloud.

But it gets trickier. At first, the simple solution might seem to be to try to put all of these operations in the cloud, but since broadcasters still have requirements demanding real-time, high-resolution, quick-turnaround workflows, today’s cloud solutions can start to struggle to deliver. Besides, you are probably starting out with existing on-premise solutions that would need replacement. A better solution is to use the cloud for what it does best: a centralised archive, a smart central hub. For Primestream this is Xchange Hub, which is capable of optimising all of the transfers between sites, all the transfers to and from the Hub, and to the archive, to reduce bandwidth use, reduce ingress and egress charges to the cloud storage, and speed up the collaborative process across the board. The system also manages security and can grow as your needs evolve to include services like transcoding, Review and Approval, AI analysis and more.

If you have a workflow which only captures streaming media and delivers to OTT platforms or other streaming solutions, then you will likely find that more of your workflow can be up in the cloud, but you still need to manage the costs of putting media in, storing it, taking it out, and

CPU cycles, to make sure that you control your expenses.

It is easy to jump to the end game, to a state we all know will come eventually, when the majority of what we need to accomplish is hosted in the cloud. Moore’s law says that the cost of processing will continue to drop. Kryder’s law tells us that disk-based storage density will go up. Both laws intersect with flash memory, and all of this, combined with increasing bandwidth, will solve the latency, cost and viability issues around cloud-based solutions. The last external factor comes from the overall value of the network (the internet) that is our access to the cloud, and between facilities and services. In the same way that one telephone is interesting but not useful, two might be a novelty, and trillions are of unmeasurable value, the viability of the cloud will increase as more facilities, services and options are available “up there”.

Take a Review and Approval workflow. Today, if you have media on-premise you need to upload it to the cloud-based provider to start the Review and Approval process. With a cloud-based solution, you would be able to move media between systems in the cloud without bringing it through your site. As services like AI analysis, transcoding, or even full-fledged post, colour correction, QC and others become available, not only by their physical address and location, but simply as targets you can point to in the cloud, you will see the value of the underlying network increase, becoming a more and more compelling option to consider.

I started out saying that it is easy to jump to this end game. In order to get there, we need to work closely with customers to make sure we deliver solutions that work today and can grow as the value of the cloud increases. We don’t want to rush up into the cloud, stake out a position and then wait for the costs, bandwidth and connectivity to arrive later, because that delivers workflow that is compromised today, and delivers it at a high cost.




STRICTLY OFF-LIMITS

Michael Garwood waltzes into BBC Studioworks’ Elstree Studios – Hertfordshire’s answer to Hollywood – for a behind-the-scenes cha-cha-chat and tour of BBC One’s Saturday night juggernaut, Strictly Come Dancing





The advent and ever-growing popularity of on-demand subscription and non-subscription streaming services (Netflix, Amazon Prime, iPlayer et al), coupled with the availability of high-speed internet, has significantly altered the way audiences engage with television content in recent years. According to a report from Ofcom, a staggering eight in every 10 adults in the UK (that’s 40 million people) used some form of catch-up TV last year, whilst subscription numbers for the aforementioned paid services topped 14 million. The rise – predictably – continues to fuel the fire around suggestions that linear TV is either dead or dying, with people increasingly preferring to choose as and when they consume content. You only need to look back to 2017, when an episode of Blue Peter (June 13, 2:30pm) failed to record a single viewer – down from eight million in its prime. But whilst the figures may give the death-knell ringers some solid backing, there still remains an insatiable appetite for live television, notably sport and – our focus here – Saturday primetime.

ALL SINGING, ALL DANCING

The time slot, typically measured between 19:00 and 22:00, remains highly competitive, with the BBC and ITV entering into what’s often described as a ‘ratings war’ through their various shows throughout the year. Figures for shows such as The Voice (5.2m), The X Factor (5.6m) and Britain’s Got Talent (8-9m) have all remained healthy and strong, if not entirely consistent. However, one show that’s currently dancing its way to the top of the ratings tables is the BBC’s Strictly Come Dancing. Now in its 16th series, viewing figures for its live Saturday night broadcast remain solid, pulling in audiences of between 8 and 10 million – the equivalent of the population of Sweden collectively glued to a TV between 6:30pm and 8:35pm.
This year, the show has been attracting an even bigger audience: on 14th October, Strictly had a peak audience of 11.9 million. But Strictly’s success hasn’t come about by luck, nor has the show sat back and admired its own success. Such is the demand from fans, and the broadcaster’s eagerness to remain engaged with its audience, that Strictly now airs seven days a week. This includes Sunday’s results show, followed by five days (Monday to Friday) of It Takes Two, which follows the progress of the contestants building up to Saturday’s live show. For BBC Studioworks, with studio operations at Television Centre (White City), BBC Elstree Centre and Elstree Studios, facilitating the production and post

production for the show is a full-time job. And by full time, we mean 24/7. “There’s never a quiet day here,” jokes David Conway, MD of BBC Studioworks, during our visit to its Elstree post production village, the entrance to which is the exterior setting of Holby City. “Whilst Strictly is a live show, there is a lot taking place during the rest of the week, with daily recordings of things like rehearsals as well as a lot of time-critical editing.”

RUN VT

He’s not kidding. The workload that goes into producing the show is not for the faint-hearted, be it the staff or indeed the dancers – who, by comparison, get off lightly. For the dancers, it’s a six-day week, only resting on Sunday – should they choose. Monday to Thursday, the couples practise and rehearse in a convenient dance studio location (often close to their home), whilst on Friday they’re required to be in London for a full rehearsal. During each of their practice sessions, the couples are assigned a camera crew to capture their progress and collect other types of footage that may occur (accidents, progress – or lack of it). Each day, crews record two hours of footage on each couple, which will be used to create a VT lasting around 90 seconds for the show(s). In total, more than 100 hours of footage is collected during the week.

As part of a “well-oiled machine,” content is first delivered to the BBC Studioworks data wrangler, who then catalogues, checks and backs up all of the material to a 210TB Object Matrix online storage solution. From there, a team of loggers ingest the material at XDCAM 50 onto a Cinegy logging system connected to 100TB of EMC and SATA Beast storage, where the media remains for the duration of the series. Specific content is sent to a 180TB Avid Nexis server. Editing happens across 12 Avid Symphony edit suites, with audio dubbing carried out on a Pro Tools system.
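A quick back-of-envelope check on those figures, assuming the ‘50’ in XDCAM 50 refers to an approximately 50 Mbit/s essence rate (audio and container overheads ignored), gives a feel for the weekly ingest volume:

```python
# Rough ingest arithmetic: ~100 hours of rushes a week at the ~50 Mbit/s
# suggested by the "XDCAM 50" codec name (video essence only; audio and
# container overheads ignored).

def hours_to_terabytes(hours, mbit_per_s):
    seconds = hours * 3600
    megabits = seconds * mbit_per_s
    return megabits / 8 / 1_000_000  # Mbit -> megabytes -> terabytes (decimal)

print(round(hours_to_terabytes(100, 50), 2))  # → 2.25 TB of rushes per week
```

Over a 13-week series that is on the order of 30 TB of rushes, which squares with the triple-digit-terabyte storage tiers the facility describes.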
“You get handheld cam footage all week of the celebrities in the different rehearsal rooms, so there’s quite a lot to piece together,” adds Conway. “It’s quite a big logging operation.” John Loughman is post production supervisor at BBC Studioworks and has worked on the show for the past 11 years. He is responsible for ensuring that all VTs used in its broadcasts are edited to the highest standard and are photosensitive-epilepsy compliant before going out on air. “The workflow hasn’t changed that much over the years and it’s evolved into a very smooth operation,” says Loughman, tapping his desk, from which he works across four different monitors. “My responsibility is to view all the VTs after they’ve been put together by




the editors and make sure they’re good and technically compliant. The editors get so close to the content they’re cutting that they can sometimes miss some obvious things, like someone coughing during a shot or something like that.”

Discussing the captured footage, he adds: “Every year they do the VTs differently. They used to shoot hours and hours of training, whereas now they’re a little more strategic. They might go and visit somewhere, such as a family member or a special place, so the training aspect for the main show is now a smaller part. It Takes Two is on five days a week, so they tend to concentrate on the training and have various segments like Choreography Corner, which looks back at the previous weekend. Editorially they have a lot of time to fill and they want to fill it with engaging content.”

DAY AND NIGHT

Editing (unlike the dancing), Loughman explains, does not take a day off – the frequency of the show ensures there is always new content to cut. Staff even have to work overnight following the live show’s conclusion on Saturday, with preparation for the results show starting before the

credits have even rolled. “Saturday night there’s editing, Sunday there’s editing – basically there’s editing every day, seven days a week, 24 hours a day up until the final,” he explains, still smiling. “On Saturday night, the production team cuts a ‘Behind the scenes’ VT for the Sunday show. They shoot a lot of footage, so much of the work is finding the best bits of the show and all the buzz behind the scenes. It [the VT] is usually in two parts. The first is cutting all the preparation, makeup and getting ready for the show. Then transitioning into the actual show, the reactions from the judges and the door burst as each couple leaves the dance floor. Sunday morning, they then start editing the results show at 6:30am and then deliver that down the line at around 3pm Sunday afternoon for broadcast on TV later that evening. Repeat that process 12 times in a row. It’s constant, but we enjoy it.”

PICTURED ABOVE: This year’s crop of celebrities

TAPE IS DEAD

One of the big changes to the way this year’s show is recorded, delivered and edited is through the use of tapeless capture technology.





The new tapeless system, which has been used at BBC Studioworks’ Television Centre base for the past year, is making its debut in Elstree for Strictly and will be used for all future programming recorded there. According to Loughman, the tapeless set-up is something clients have increasingly requested, because it provides a number of time, cost and environmental savings compared to tape. Standout benefits include the ability to make material from the studio floor instantly available for editing, even while the show is still being recorded, saving time and ensuring a speedier turnaround. The technology is powered by three 12-channel EVS XS3 servers, located close to the relevant studio, along with a 10GbE switch and an IP Director for each studio. The core system, based in a UPS and generator-backed apparatus room, consists of three Ultra High Performance XT Access servers, a mirrored database solution and a management IP Director. All media streams onto BBC Studioworks’ 180TB Avid Nexis storage, either to be edited or held as a backup. “I pushed for it and was involved in designing and implementing it,” says Loughman. “Basically tape is dead. When we re-opened Television Centre last year, we made a conscious decision to be fully tapeless there and looked at a few different options. It’s kind of becoming the industry standard. “After doing The Voice last year here [at Elstree], we

thought, we’re spending more and more money hiring kit, because a lot of the shows we do, such as The Chase, are all tapeless and demand is growing, so I suggested we should look at buying a system and we have.”

AND ON THE BENEFITS?
“If people edit with us, the benefits are massive,” continues Loughman. “They can literally record in the studio, stop recording and start editing straight away. It’s a really efficient studio and post-production tie-up. There are also cost savings because I don’t have to employ an edit assistant to digitise in real time from tape, and there are time and environmental benefits too, because tapes aren’t being used once and binned, or kept in a room and then binned.” He adds: “Within five minutes of a recording being finished, we can hand it over to production. A lot of other solutions take time: there might be a copying stage, or the material doesn’t stream in real time, so it might be 15-20 minutes or even an hour after the recording is finished. A lot of the stuff we do, like The Jonathan Ross Show, Mock The Week, The Graham Norton Show and Have I Got News For You, is edited on the night. So waiting an hour after the recording isn’t really an option.”

THE STAGE IS SET
That’s the editing side, but what about the live elements of the show? Ahead of leaving Holby City (BBC Elstree



Centre and home of Studioworks’ Elstree edit village) for the short walk to Elstree Studios, Conway gives us a brief tour of Studio D, which was used back in the day for the likes of Jim Henson’s The Muppet Show, Top of the Pops and ‘Allo ‘Allo! Studio D is preparing to film a new episode of Through the Keyhole and has been home to the likes of Children in Need, Celebrity Juice and A League of Their Own in recent years. It measures 11,800 square feet, with a shiny resin floor and a grid infrastructure on the ceiling used to suspend whatever equipment the show requires: lamps, scenery, screens, speakers, microphones. Audience seating sits off the studio floor, in a recess at the back, allowing more floor space for the set. Studios such as these, Conway explains, are designed to be turned around relatively quickly, with some studios across Elstree and Television Centre hosting multiple shows in a given week, or even on a daily basis. “Down in Television Centre we have six different shows taking place in Studio TC1 this week. So once one finishes, one set comes out and another goes in.” However... “I wanted to show you this because what you’re about to see over the road couldn’t be more different,” he adds cryptically. As we enter Elstree Studios’ grounds, passing the Big Brother house, a live recording of The Chase and set building for Celebrity Juice, and pausing to admire the set of Netflix drama The Crown along the way, we arrive at the George Lucas Stage 2 – home of the “world famous” Strictly Ballroom. The Elstree Studios site carries a lot of history in the film world, with over 800 features produced at the facility over the years: the Star Wars and Indiana Jones trilogies, Superman, Moby Dick, The Dam Busters, The Shining and (my personal favourite) Labyrinth, to name just a few.
BIGGER, BETTER, LOUDER
Measuring close to 16,000 square feet, it is the largest gallery-served studio or stage space in the country and has become the venue of choice for many of TV’s big Saturday night shows, including The Voice, Britain’s Got Talent and, of course, since 2013, Strictly Come Dancing. “We used to host Strictly at Television Centre in Studio 1, which was the largest studio at the time,” says Conway as we negotiate the tight security, sidestepping technicians and numerous people carrying colourful (and skimpy) outfits as rehearsal for the results show is about to begin. Unlike the smaller TV studios, George Lucas Stage

2 is, as described by Conway, a “giant empty soundproof box” – with the set, which houses 640 audience seats, taking over a fortnight to build and remaining in place for the duration of the series. “It doesn’t have an infrastructure like a traditional TV studio, so you’re building everything from scratch,” comments Mark Osborne, construction manager for BBC Studioworks, who has been working on Strictly for 14 years and with the BBC since 1983. Osborne is responsible for turning the “empty box” of a venue into the spectacular ballroom setting in front of us. “It’s the biggest and best thing I’ve ever done,” he explains as we stand behind the judges’ desk, the stage to our right and the giant disco ball in our sightline. “In July I go to set storage and get it all refurbished. On August the 8th, I start in here with an empty shell. From that day on, the roof goes in, the lighting goes in, and then we mark out where the set goes, which needs to be accurate to within 35mm, or else it won’t work. A week later, we start the set build. That takes two and a half weeks to get everything in, and after that the dance floor is laid, which takes a day to put down. “It’s like Ikea, you just put it all together. Everything about it is just huge, far bigger than it was at Television Centre.” Making the audience feel truly immersed in a theatre, rather than a television studio, is key. “When it was first designed, the objective was for it to be theatre-esque, like the ballroom at Blackpool Tower,” says Osborne. “Each year it grows, much of which many won’t notice, but it’s about maintaining the illusion. I love it.”


CHOREOGRAPHY
With the show broadcast live, there is no room for error – not just for those on camera, but crucially for those behind it. The audio and visual aspects of the show need to be as choreographed and scripted as the dancers. In total, there are 16 manned cameras during a live show – all of which have a script determining what they are required to do and where they are required to be at all times. For example, the director will monitor each couple’s rehearsal during the week to determine which shots will work best. The same applies to all teams, to ensure everything runs smoothly and all potential permutations are considered. Lighting and visuals play a crucial role in creating a visually impressive and immersive environment for those inside the studio and those watching at home. There are hundreds of lights and six projectors (all





Christie and supplied by Creative Technology) dotted around the venue, which, like the camera crews, are choreographed based on the music and the dancers. Ensuring lights or projected images do not point directly at a camera during specific shots (to avoid lens glare), or into the eyes of the dancers, is a key part of the planning. The choice of colours for the projections and lights must also be considered, so as not to clash with the costumes. Colour temperatures from each camera are also monitored closely, so that the exposure is consistent and what is seen on TV does not vary between shots. “Everything is choreographed and scripted from start to finish,” explains sound supervisor Andy Tapley. “Everything is broken down into bars and they [the camera crew] will have a shot card telling them exactly when their camera will be operational, where their camera should be, how wide, how tight and who they’re looking at. Everything has been thought about in advance. All the music has been listened to and worked out on the camera scripts.”

SOUND OF MUSIC
The glue (arguably) holding all of the above together, and ensuring the experience for the audience is as immersive as it is entertaining, is the audio. With a plethora of major live large-scale projects for BBC Studioworks under their belt, Tapley, co-supervisor Richard Sillitto and their team are responsible for bringing

the show to life. Using a Riedel comms system and a Studer Vista X mixing console (its broadcast desk) as the backbone of the audio set-up, Strictly now has the largest audio configuration of any production supported by Studioworks. There are of course many different areas in which audio is paramount. During a live performance, the audio team manage around 80 RF channels, including presenters, dancers (with mics often built into their outfits), judges, guests and performance artists. All radio microphone equipment is hired through Terry Tew Audio, an AV firm based in Chigwell, with Sennheiser and Shure the preferred brands for the show. In total, there are just under 200 sources in the studio feeding into six stage boxes, which feed directly (via fibre) into the sound desk, manned by two people. “It’s such a busy show, physically it would be very difficult for one person to mix it,” says Tapley. “With 15 original couples, judges and presenters all on radio mics, 80 band circuits, audience mics and spares on standby, we have a team of sound assistants ready for every eventuality. Sometimes there are unforeseen circumstances and you work together to overcome them. That’s part of the fun of the job.” He adds: “The frequency management is obviously very important and we have to work closely with other shows in the area to make sure we don’t have any crossovers.”

GOING LIVE
Another major aspect of the show, one which Tapley lauds as one of Strictly’s greatest assets, is the use of a live band. Eighteen musicians are used during each and every show, only learning what music they will be playing when they gather for the first time ahead of the show. In total, there are 80 circuits coming in from the band, which includes a drummer, a percussionist, a bass player, two guitarists, three keyboard players, three saxophones, three trumpets, two trombones and four singers.
“Everything has to be hidden away because we don’t want the viewers to see speakers or microphones at all,” says Tapley. “Generally the audience shouldn’t be aware of it either and it’s our job to make sure that’s all managed properly.” The couples rehearse their dances to track playback during Friday rehearsals, but in the evening the band rehearse the songs for the first time. This gives the audio team the chance to start setting up individual mixes for each song in preparation for the live show. “It’s one of those things people watching at home may not realise or truly appreciate, but it’s a huge part of the show and the experience,” he explains as we take a tour



of the dance floor, currently empty but with a brightly lit stage. “It’s completely live, so if something goes wrong, or they play a wrong note, or sing something wrong, that’s what goes out. The couples all dance to playback during the week, which allows the preparation for all the positioning, lighting etc to be decided and confirmed. “These musicians are some of the best in the world. They’ll turn up on Friday night, not knowing what they have to play, and they’ll do it perfectly. It’s so much better having a band play rather than using playback. It has much more energy and makes the show very special. It’s a huge part of what makes Strictly such a great show.”

SWANSONG
As our time at Elstree draws to a close, we are quickly hurried off the dance floor as presenters Tess Daly and the rather colourful Claudia Winkleman arrive. With the results show rehearsal taking place, we get to see, albeit briefly, Tapley and his team in action. Working to a 120-plus page script and under the constant direction of the script supervisor (whose voice can be heard at all times, often counting down), the rehearsal kicks off. Using ‘Spoton’ playback software (recorded sound bites), Howard Hopkins, standing in front of a monitor, clicks the first of many (possibly hundreds) of different coloured squares in front of him. The music starts. Watching the feed from the studio, viewing his script and following the command of the director, he hits another. “This is Strictly Come Dancing: The Results! Please welcome your hosts, Tess Daly and Claudia Winkleman.” Tapley and his team spring into life, managing the microphones as the presenters introduce the show, reading from teleprompters positioned on several cameras. “These are all Howard’s cues he needs to play in,” Tapley explains during a brief pause. “So in this next section, for example, we’ll cut to Claudia and then we’ll cut to a graphic and then we’ll move into VTs and we’ll play the VT sound. We can then relax for a minute.
A VT is often where, if there is a problem somewhere, such as a dodgy microphone, we can get in there and fix or replace it. “Everything is about managing that sound. We have to make sure the audience reaction is working properly with the mix. When we come to a couple’s song, I have that loaded in already, so the band is set up, but I’m still managing their levels and making sure the whole mix is working together. “If you make one small mistake, even just for a few seconds, everyone will know and the impact is huge.” The audio and lighting teams are the first to find out the results of the public vote, in advance of the audience and dancers – often just a few minutes before the cameras

start rolling again. During this short period, they immediately prepare the running order as Tess Daly details who stays and who goes, operating the spotlights and capturing the reactions (both visual and audible) of the dancers and the audience. “We’re on air at 6:30 and I’ll probably walk out at 11:00pm, so it’s probably a 15-hour day for us,” says Tapley as Winkleman and Daly conduct the stop-start rehearsed elimination in the background. “We have a wrap time of around 10:00pm. It’s a full-on day.” Wrapping up our time in Elstree, Tapley concludes: “The great thing about doing a live show is the adrenaline rush. You know that at 6:30pm we’re on air. So, your whole day is building up to that point and you don’t have the option of saying, we’re not quite ready, can we go on at 7pm instead. You have to be ready. That’s why I love this job.” n

GEORGE LUCAS STAGE 2 SPECIFICATION
n Sound proof: Yes
n Size: 41.275m x 35.5m (135ft 6ins x 116ft 6ins)
n Area: 1,465 sq m (15,770 sq ft)
n Height: 15m (49ft)
n Lighting grid: Chains & tackles
n Power: 1,200 amps AC TPN




MANAGING INTERACTIVITY IN VOD ASSETS By Ciarán Doran, exec VP global sales and marketing, Pixel Power


Entertainment and reality shows; sporting events; popular documentaries. One thing they increasingly have in common is the invitation to take part in a quiz or some form of interaction. Cricket highlights might include an invitation to compete for seats in a box at the final Test. A holiday documentary might include a competition from an airline to win a holiday. A vote to keep your favourite – or eliminate the current villain – on a reality show. These are seen by broadcasters as great ways of building audience engagement, and another source of secondary revenue – an all-round good thing, until the broadcast has to be turned into a VoD asset that will be seen after the competition has closed and the phone-in number has been released. Then it becomes a nuisance. At Pixel Power we recently carried out some research among broadcasters around the world. We wanted to know if automation was still a valid concept in today’s content landscape. Yes, the results told us, it very definitely is. However, automation has moved beyond playout and into other areas of content creation and delivery. More than 70 per cent of respondents said that creation of VoD assets is one of their top priorities for automation, along with promo production. Audiences expect the same quality of experience on OTT and mobile services as they do on broadcast channels. That applies to technical issues – no freezes, no black, no chopped content – but it also applies to the content itself. Sequences such as advertising competitions that have long closed come into this category. They are annoying for viewers, even if they do not reach for their phones to call a long-closed premium rate line. And putting up a board at the start of a catch-up asset that says “please do not call the competition line” is not very elegant – even if it did not require any effort to add. Pixel Power has an automated production tool called

Gallium FACTORY, based on our core Gallium and StreamMaster platforms. It is widely used, for example, for automated promo production, creating all the different variants of a promo campaign from a single brief, as well as within VoD applications. The engine is quite complex – if it were easy, everyone would do it! But in concept, it takes a set of rules and, along with content, automatically creates completely new material. That same principle can be applied to other requirements, such as handling time-limited contest material when creating a VoD asset from a broadcast programme. Essentially, you put markers around anything that should be replaced or removed. If the programme is pre-recorded, you would put those markers in at the final stage of post. If it is live, you would add the markers as you would a log. Then set up the business rules about what to do with it. If the competition closes at midnight and your house policy is not to put catch-up content online until the following day, then you simply, automatically, cut that section from the programme. If the competition closes, say, a week or a month later, then the automation will read the rule and create one VoD asset now, and a different version when the competition closes. Once the idea of smart automation is accepted, based on the content itself and business rules which you determine, the applications rapidly appear. You could use it to create reactive commercials: advertising that responds to the content around it. One example: in the commercial break at the half-time interval of a football match, include an advertisement from a betting website, created only moments before the break, showing the latest odds for the next goal scorer in the second half. Creating dynamic advertising based on the live



programme that has just aired has a very powerful effect on viewers. It can only happen by using smart, automated graphical insertions that have been set up in advance of the live show going on air. Pixel Power has already deployed this technology with the UK’s leading commercial broadcasters. A single, smart, automated workflow should manage conventional playout as we have known it for 30 years, but it should also automate VoD and catch-up asset creation, making the necessary modifications (such as removing or masking phone-ins) on the fly, as well as stripping out commercial breaks and stitching content segments together. It should then automatically create the content branding for each delivery platform and service. Sponsors

can have different requirements for online and linear broadcast content – where there are no ad breaks, for instance, they can have no break bumpers – and these can be added at the appropriate point, according to business rules. Finally, as the media world changes rapidly, it is imperative to employ a software solution with a granular feature set that opens up many new opportunities. Why pay for 100 per cent of the product when you only use 60 per cent of the features 40 per cent of the time? Only pay for the features you need, and only pay when you use them, perhaps even in a virtualised environment. Your new-generation automation system should be sufficiently flexible to meet future requirements, even if you cannot even begin to guess them today. n
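The marker-and-rules principle described here can be sketched in a few lines of code. To be clear, the field names, rule shape and helper below are illustrative assumptions for this article, not Pixel Power’s actual Gallium FACTORY API:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Marker:
    """A span of the programme flagged for conditional removal (illustrative)."""
    start_s: float          # offset into the programme, in seconds
    end_s: float
    closes_at: datetime     # when the competition tied to this span closes

def plan_vod_cuts(duration_s, markers, publish_at):
    """Return the (start, end) spans to keep when the VoD asset is published.

    Assumed business rule: if the competition has closed by publish time,
    cut the marked section; if it is still open, keep the section and flag
    that a re-versioned asset will be needed after the close."""
    keep, cursor, reversion_needed = [], 0.0, False
    for m in sorted(markers, key=lambda m: m.start_s):
        if m.closes_at <= publish_at:      # competition closed -> cut it out
            if m.start_s > cursor:
                keep.append((cursor, m.start_s))
            cursor = m.end_s
        else:                              # still open -> keep, re-version later
            reversion_needed = True
    if cursor < duration_s:
        keep.append((cursor, duration_s))
    return keep, reversion_needed
```

For a one-hour programme with a phone-in marked at 10 minutes in and a competition that closes before next-day catch-up, the planner drops that span and stitches the rest; with a month-long competition it keeps the span and flags a second version for later.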

‘Audiences expect the same quality of experience on OTT and mobile services as they do on broadcast channels. That applies to technical issues – no freezes, no black, no chopped content – but it also applies to the content itself.’




WHY MAM CAN MAKE OR BREAK CONTENT PRODUCERS By Jim O’Neill, principal analyst, Ooyala


The media and entertainment industry has seen a huge shift towards content being consumed online. As a result, content producers are now being forced to change the way video is produced, managed and distributed, to fulfil the demand of a new digital audience. These processes must become less expensive, more time efficient and more adaptable in order to meet the evolving demands of the global marketplace and the end viewer. Streaming services like Netflix have flourished from the effective management of their media assets, as well as their use of metadata, which underpins much of their strategy. Other content producers will need to follow suit if they hope to compete. Whether through targeting different media types more accurately or making content recommendations to their customers, effective media asset management often makes or breaks content producers. Regardless of the quality of content, if a sports broadcaster, for instance, can’t distribute clips to a worldwide audience within minutes, its audience will look elsewhere.

METADATA, MEET AI
Metadata is transforming content. It’s being harnessed throughout the content supply chain to create benefits at every stage: from pre-production and delivery to audience engagement and monetisation. Thanks to metadata and the advanced technologies that use and enhance it, improved processes at the beginning of content creation are now directly elevating consumer experiences and profits at the end.

Manually input metadata can be rife with errors and duplication on content producers’ media platforms, making it difficult to catalogue and find assets. But video elements can now be transcribed automatically, using artificial intelligence working from an asset’s audio or metadata, rather than manually, which is costly and time-consuming. Artificial intelligence (AI) can also help with asset logging. By integrating their platforms with AI technology, companies can further enhance metadata. Not only can the logging process be automated, but services such as facial recognition, audio transcription, multi-language translation and object tagging can be applied at this stage, for benefits all the way through the supply chain. Smarter logging means production staff can focus on higher-value tasks that improve creativity, business strategy and audience experiences. More content can be processed in less time, increasing revenue opportunities. In fact, according to several Ooyala customers, automating the content supply chain using metadata, machine learning and AI can reduce project execution time by between 58 per cent and 70 per cent, deliver a direct cost reduction of more than 70 per cent, and allow on-boarding of projects in half the normal time – enabling content producers to increase the number of projects delivered by three to four times.

MANAGEMENT
Once assets are logged as above, metadata can be used to manage them. By automating and optimising both the human and machine tasks, it reduces the chances of making

a mistake in a high-pressure production environment. Through a standard metadata design tool, along with advanced data modelling, businesses can easily create, define, model and track metadata. Automated business logic and user-defined rules can also be set up to determine how teams should treat assets and metadata. Advanced asset management also enables teams to collaborate with editors wherever they work, using leading non-linear editing systems integrated with the platform. Through the right technology, broadcasters and publishers alike can enhance their search capabilities to enable in-depth, real-time and historical searches, and deliver more accurate search results. Companies can build their own thesaurus entries and perform advanced searches that return results associated with what was typed, such as the spelling of a town in a different language. Artificial intelligence comes into play again here as managed assets are connected to other workflow steps. For instance, metadata rules can be set up to identify assets containing a specific topic, personality or location identified when they were logged into the content system. VoD highlights could then be automatically created from those assets, packaged with a superimposed logo, and delivered to multiple syndication channels, increasing audience reach and monetisation opportunities. Metadata captured automatically through AI, such as tags, specific actions (e.g. a goal in a sports match) or brands, can be used to enhance personalised viewing experiences and boost revenues (as Netflix does). n
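The rule-driven selection described above can be sketched roughly as follows. The clip fields and rule format here are illustrative assumptions for the article, not Ooyala’s actual data model or API:

```python
# Minimal sketch of metadata-rule matching for automated highlight selection.
# A rule maps a metadata field to the set of values it requires,
# e.g. {"tags": {"goal"}, "people": {"A. Player"}} (hypothetical fields).

def matches(asset, rule):
    """True if the asset's AI-captured metadata satisfies every rule clause."""
    return all(required <= set(asset.get(field, [])) for field, required in rule.items())

def select_highlights(assets, rule):
    """Pick rule-matching clips, ordered by timecode, ready for packaging."""
    return sorted((a for a in assets if matches(a, rule)), key=lambda a: a["timecode"])

# Hypothetical logged clips with AI-captured tags
clips = [
    {"id": "c1", "timecode": 412.0, "tags": ["goal", "celebration"], "people": ["A. Player"]},
    {"id": "c2", "timecode": 120.0, "tags": ["interview"], "people": ["A. Player"]},
    {"id": "c3", "timecode": 255.0, "tags": ["goal"], "people": ["B. Striker"]},
]
goals = select_highlights(clips, {"tags": {"goal"}})
# goals holds the two goal clips in timeline order, ready for logo
# overlay and delivery to syndication channels
```

The point of the sketch is that once logging is automated, the highlight reel becomes a query over metadata rather than a manual edit.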







Recapture lost audiences
Break the news first across all platforms with Media Backbone Hive. It’s time to reconnect with your lost audience in a more cost-effective way. As viewers look to the internet, mobile and social media for their news fix, broadcasters can now reclaim this ‘Lost Audience’ with Media Backbone Hive, a Unified Content Platform linking everyone within the organisation to enable unified news production. One production system to deliver stories faster across multiple platforms: online, TV and radio. No more silos. No more duplication of effort. And lots more flexibility.

Media Backbone Hive

Find out more pro.sony/eu-news



INTELLIGENT MEDIA SERVICES: THE INTELLIGENT CHOICE Using Sony’s Intelligent Media Services, producers and distributors can make, manage and deliver content anytime, anywhere and to an unprecedented range of distribution platforms, adding value at every stage, writes Stuart Almond, head of marketing and communications at Sony Professional, Europe


The days of buying a single, standalone product to achieve a single end-result (and then getting it to somehow talk to a load of other single, standalone products) are long gone. The media and entertainment industry has evolved. In today’s production and distribution market, neither the content creator nor the content distributor wants to work that way – because it no longer achieves their goals: to make, manage and deliver content anytime, anywhere and to an unprecedented range of distribution platforms. Yet some vendors are still trying to flog those single, standalone products. Which is crazy, given the current market drivers. These drivers include a huge demand for more integration with best-of-breed solutions, the desire to avoid over-investment in underutilised resources, and the ability to move and change quickly, swapping services in and out as required. Rather than using standalone solutions, media organisations now require tools that give them the scale, flexibility and efficiency to be more agile; aligning cost to usage and revenue – improving value and reducing cost. At the same time, there remains the fundamental need to capture great content, at greater speed, and at the right cost. And more than ever, no customer wants to wait 10 years before they see tangible benefits or a return on their investment. Making this possible necessitates an ecosystem of technology that works together seamlessly. An ecosystem that allows media organisations to quickly react to, and often pre-empt, audience demands. One that operates on a platform as a service (PaaS) or software as a service (SaaS) business model and makes use of microservices. That is where Sony’s Intelligent Media Services comes in. Intelligent Media Services is a portfolio of market-leading services that can transform traditional media supply chains and help to extract more value from video and audio content. Encompassing cloud-native technologies, the use of artificial intelligence (AI) and the opportunity of subscription payment models, Intelligent Media Services consists of Media Backbone Hive, Media Backbone Navigator, XDCAM air, Ci, Virtual Production, VEN.UE, Optical Disc Archive, Hawk-Eye, Memnon, Crispin and Pulselive, plus a range of design, development, delivery and support capabilities on top. Intelligent Media Services can help the media and entertainment industry in a number of different ways, saving time and money, soothing a number of pain points and offering a host of benefits and advantages. It can streamline and automate workflows, improve multi-platform production and delivery, provide secure access to content digitally, migrate workflows to the cloud and much more besides. Over the pages that follow, we’ll illustrate how Intelligent Media Services does this, providing real world examples along the way. We’ll see how Red Bull Switzerland (see page IV) is relying on Sony’s cloud-based Virtual Production service to deliver remote, live events in challenging locations. And we’ll get a unique insight into how BBC Studios (see page VII)

is using Sony’s new Media Solutions process automation to reduce its workflow admin by 50 per cent and increase speed to market. But let me leave you with an example of how Intelligent Media Services has helped Turner Broadcasting. By enabling a file-based workflow for the compliance teams within its new state-of-the-art Turner Media Centre (TMC), Sony and Intelligent Media Services have reduced edit time at Turner by 70 per cent, saving money and allowing content to be aired earlier than ever before. That’s a real return on investment. Not just marketing hype. Whether your ambition is to automate management and production workflows, move production systems and content management to the cloud, digitise archives or incorporate artificial intelligence, Intelligent Media Services will add value to each and every stage.

To find out how Intelligent Media Services can help your media business, visit pro.sony/eu-IMS




IMS: SOOTHING THE PAIN POINTS Working hand-in-hand, the different components of Sony’s Intelligent Media Services provide media organisations with the chance to amplify their content, extend their reach and make their business more profitable.


Benefit: Streamlined and automated workflows that reduce costs
By automating processes using IMS, media companies can reduce manual operations, cut costs and work faster across targeted workflow areas, particularly when dealing with large volumes of content.

Benefit: Improved multi-platform production and delivery IMS allows media organisations to increase audience engagement and be more efficient, while also ensuring their teams can work more collaboratively.

Applications:
• Cloud-based acquisition
• Integrated news and sports production
• Live production
• Fast turnaround
• Optimised content supply chains
• Asset safety
• OTT services
• Digitisation and monetisation of assets
• Store and retrieve

ony’s Intelligent Media Services (IMS) is a portfolio of marketleading services that can transform traditional media supply chains and help to extract more value from video and audio content. Cloud-native and making use of both artificial intelligence (AI) and subscription payment models, IMS consists of Media Backbone Hive, Media Backbone Navigator, XDCam Air, Ci, Virtual Production, VEN.UE, Optical Disc Archive, Hawk-Eye, Memnon, Crispian and Pulselive. By combining these services together in different configurations, IMS can help the media and entertainment industry in a number of different ways, saving time and money, soothing a number of pain points and offering a host of benefits and advantages.

Applications:
• Cloud-based acquisition
• Integrated news and sports production
• Live production
• OTT services
• Automated cloud processing
• Advertising and eCommerce

Benefit: Instant, secure access to all content digitally
Users of IMS can improve their workflows, build more robust metadata, and get reliable, secure, instant access to storage, allowing them to extract more value from their content, increase its longevity and open up new revenue opportunities.
Applications:
• Cloud-based archives
• Streamlined and automated archive workflows
• Asset safety
• Secure media

Benefit: Readiness for 4K UHD workflows
Adopting improved workflows and future-proofed archiving with IMS allows producers and distributors to work with higher-quality content that is more engaging and more saleable around the world.
Applications:
• Cloud-based acquisition
• Optimised content supply chains
• OTT services

Benefit: Integrated asset management systems
Where media organisations have more than one asset management system, IMS can improve collaboration and reduce duplication. It not only



benefits broadcast workflows, it can also help systems and operations to cope with cross-media (video, image, text) services more efficiently.
Applications:
• Optimised content supply chains
• Streamlined and automated workflows that save time and cost
• Automated broadcast acquisition for playout

Benefit: Workflows migrated to the cloud
IMS affords a proficient route to the cloud, allowing content producers and distributors to migrate their workflows. The benefits include increased collaboration, reduced capital expenditure on facilities and systems in favour of operating expenditure, increased efficiency and faster business transformation.
Applications:
• Cloud-based acquisition
• Integrated news and sports production
• Live production
• Optimised content supply chains
• OTT services
• Automated cloud processing

Benefit: Increased revenue with OTT eCommerce solutions
By taking advantage of the cloud, and both SaaS and PaaS technologies, IMS allows content distributors to customise the viewing experience for their customers and make it more engaging. Importantly, it also allows them to monetise this engagement through advertising, subscriptions and eCommerce.
Applications:
• Cloud-based acquisition
• Integrated news and sports production
• Live production
• Fast turnaround
• Optimised content supply chains
• OTT services
• Automated cloud processing
• Supply chain management and distribution services
• Automated broadcast acquisition for playout
• Workflows to streamline repetitive and time-consuming tasks
• Cloud storage

WORKING IN THE REAL WORLD
Because IMS is a combination of existing services and tools, this is not some blue-sky idea. IMS is already working for media and entertainment companies around the world.

STREAMING LIVE FROM A MOUNTAIN TOP
Virtual Production, part of IMS, is an on-demand cloud production service that provides a complete toolset for multi-platform content creation and delivery.

Red Bull Switzerland used Virtual Production at the Alpenbrevet motorcycle race in July 2018, where it helped to overcome the challenges of creating content in a remote location. As deploying physical infrastructure wasn't feasible, Virtual Production and Sony wireless cameras using 4G connectivity enabled Red Bull to stream the event live. A single laptop worked as the production hub, providing access to a cloud-based production mixer. Operators could switch the camera feeds, add graphics, logos and captions, and stream the output to different platforms, including YouTube and Facebook Live, quickly and easily. Thanks to the freedom, flexibility and agility that Virtual Production offered to capture and distribute content quickly, Red Bull Switzerland is now a subscribing customer.

RE-INVENTING ENG
When it comes to TV, radio and online news reporting, several things are important, not least accuracy and speed. While IMS cannot assist with the former, it can help journalists and the wider newsgathering operation to report on a story in a more timely and collaborative fashion. XDCAM air from Sony is the answer here. A cloud-based subscription service, it provides superior ENG picture quality over Wi-Fi and LTE, allowing video and audio streams to be sent directly from Sony XDCAM cameras to the newsroom. When integrated with another Sony IMS application, the news production platform Media Backbone Hive, XDCAM air allows editors in the newsroom to see, access, edit and partially retrieve high-resolution content. It can even make it possible for field cameras to be controlled remotely. Media Backbone Hive users include RTV Oost, TV2 Lorry and, in the near future, SRG.

ALL TIED TOGETHER WITH SONY GLUE
Like everything that Sony does, IMS is more than just cutting-edge technology. We back up everything with Media Toolkit Services, providing media organisations with access to all the resources, capabilities and expertise that Sony has to offer.
Through Media Toolkit Services, Sony can orchestrate media supply chains, integrate through open APIs, and design and implement new services around the IMS portfolio.

MEDIA TOOLKIT SERVICES IS ABOUT PEOPLE
From consultation and design to integration and R&D, Sony's global team of expert engineers, consultants, developers, QA specialists, architects, project managers, product managers and support services helps media organisations to make, manage and deliver content anytime, anywhere and to an unprecedented range of distribution platforms. Sony's IMS has the end-to-end solutions and professional expertise to support content producers and distributors at every stage of their journey: to amplify their content, extend their reach and make their business more profitable.

To find out how Intelligent Media Services can help your media business, visit pro.sony/eu-IMS






BBC Studios is a commercial subsidiary of the BBC. A global content company, it was formed in April 2018 by the merger of BBC Worldwide and BBC Studios. It spans content financing, development, production, sales, branded services and ancillaries and employs around 3,000 people. Sony New Media Solutions' Ven.ue platform provides supply chain and OTT/eCommerce solutions to power

a new era of media consumption. Global brands rely on Ven.ue as a single solution to distribute content to thousands of destinations around the world, enabling clients to keep pace with evolving consumer demand and emerging multi-channel distribution models. Sony was engaged by BBC Studios in 2011 to provide a comprehensive global asset management and distribution solution.



SUPPLEMENT

THE CHALLENGE
One of BBC Studios' key jobs is to ensure the secure and effective distribution and monetisation of content. This spans multiple areas of the business, such as channels, on-demand services and licensed consumer products. In addition, it has significant media management and post-production functions. The challenge was to modernise a legacy global content supply chain, streamlining its three core segments: Transactions (sales, billing, contracts, licensing and financials), the Catalogue, and Content (video, audio, metadata, images, promos). With the technology landscape always changing, bringing new forms of content to deliver, new requirements to meet and new ways of connecting with audiences, the only thing certain was change. An increased amount of content led to increased mastering and distribution and, as a compound effect, an increase in the methods of monetising and consuming content. In order to thrive and continue to scale the business, BBC Studios selected Sony as its partner to adapt to the change and avoid #TheDigitalCliff.

THE OUTCOME
The success of the project relied not only on choosing and deploying the appropriate technology but also on strong investment from both sides of the relationship. Working closely together, BBC Studios and Sony were able to break down the traditional client-vendor lines, leveraging Sony's technical and industry expertise to lower costs while simultaneously improving productivity in the broadcast and digital distribution supply chain. This was done through centralised data aggregation and faster, automated fulfilment mechanisms, both of which sat on a foundation of flexible and scalable systems. Having a consolidated, accurate and normalised inventory has reduced duplicative costs and strain on BBC Studios' internal content operations team. Sales and licensing teams are now empowered, not impeded, by technical requirements and inventory availability.
THE SOLUTION
To make the project work, BBC Studios reset existing business processes to take advantage of the efficiencies that Ven.ue created. At the same time, Sony altered application design and system capabilities to reflect BBC Studios' future-state requirements and processes. Acknowledging that technology doesn't work alone, and that process evolution is critical, Sony brought a strong change-management team and focus to these implementations. Having key performance indicators is standard practice, but learning from them and leveraging them to drive business benefits is where Sony sets itself apart.

‘Intelligent Media Services is already working for media and entertainment companies around the world.’

Testament to Sony's ethos of powering today while simplifying tomorrow, Sony was one of the first to move to a 100 per cent cloud-native solution in 2016, which has provided further scalability and allowed BBC Studios to take advantage of a service model that lets it focus investment where it belongs: on the content. Today, Sony is responsible for the delivery of BBC Studios' content to over 800 distribution points globally. In partnership, the companies have realised their goals of significantly improving time to market and streamlining the content delivery supply chain, and continue to work towards setting the bar on operational and service-level excellence. The Ven.ue platform presently enables content delivery across all devices and connects to more than 1,200 distribution points worldwide. It currently stores 25 petabytes of TV and film content, equating to more than two million hours, with this figure growing at approximately one petabyte every quarter.



Want to help disrupt the market? Share events. Reach further. Broadcast live. Broadcast as a Service. Virtual Production by Sony.

All you need is:
• Access to Virtual Production

And to get started:
• Configure stream devices
• Select outputs

And you're ready to professionally live stream with Sony

Learn more at pro.sony/virtualproduction




TiVo sees itself as sitting at the centre of entertainment discovery, and it got there through dialogue search, data exploitation and predictive algorithms, all aided hugely by moving to the cloud. Charles Dawes, senior director for international marketing in Europe, explains the tricky bit of content discovery: “Essentially we are helping people to find the content they love, but it has become more difficult to do that because of the combination of two factors. There is infinitely more content being produced than ever there was before, and at the same time we have so many ways to access it. “We sit at the intersection of everything, helping consumers find content and making sure it is discovered in a way that makes sense,” he adds. “On the other side of the equation we are helping content producers to find the right people, because that is just as important as consumers finding content. Ultimately, if they do not get an audience the system falls down.” The top item in the TiVo toolbox is voice, with a radical take. “This is where we really major with what we call conversational search. There is a lot of command and control out there but you cannot do multi-faceted queries. We see it going into something that is much more towards human dialogue,” says Dawes. TiVo sees itself moving into the smart home space, and becoming a hub for other things that the consumer wants to do. It already has experience of adding to Alexa and Android TV. “On that Alexa and Google type of assistant thing, what we are doing is interfacing in with



them. A consumer has the choice of the near-field stuff, or the far-field piece where you have a device in the corner of the room and you speak to it,” explains Dawes. “You teach it to recognise you, because as a consumer you are going to have devices in your house you want to interact with, plus all your entertainment services as well.”

IDENTIFIABLE INSIGHTS
TiVo and data are another technology dance. “Where we sit is in trying to help people who collate data, us being one but especially the operators. We take all of that information that they have, and don't necessarily know what to do with, and then apply intelligent algorithms to create what we call identifiable insights,” says Dawes. “People can take that data and use it within their business. One area we have focused on is TV advertising, and helping people understand where an audience is actually going to be,” he continues. “A lot of the time in TV advertising you are working off post data rather than predictive data. You can say such and such a group watched a show, but it does not take into account anything else that is happening as consumer habits change. So we are using algorithms to predict where audiences are going to be in a much more granular fashion.” This has commonalities with search and recommendation. Using all the inputs enables TiVo to create a much tighter set of parameters around someone that can then be used for predictive returns. “You can help to find the right audience much faster, and then we can target the right kind of commercial at them. The other thing you can do is to show a graphic to someone within the EPG interface that is specifically relevant. You may get a different visual to what I get, because we are understanding the touch points and the factors within that programme that are going to help us get you to watch it,” says Dawes.
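The idea of combining many data points into a tighter set of parameters for targeting can be pictured with a deliberately simple sketch. This is a toy overlap score, nothing like TiVo's actual predictive algorithms; all the tags, viewer ids and the threshold below are invented for illustration.

```python
# Toy audience-targeting sketch: score each viewer profile against a
# programme's attribute tags and keep the strongest matches. This is a
# simple set-overlap score, not a real predictive model.

def match_score(viewer_tags, programme_tags):
    """Fraction of the programme's tags this viewer has engaged with."""
    if not programme_tags:
        return 0.0
    return len(viewer_tags & programme_tags) / len(programme_tags)

def target_audience(viewers, programme_tags, threshold=0.5):
    """Viewer ids whose tag overlap with the programme clears the threshold."""
    return sorted(
        vid for vid, tags in viewers.items()
        if match_score(tags, programme_tags) >= threshold
    )

viewers = {
    "a": {"drama", "crime", "bbc"},
    "b": {"sport", "live"},
    "c": {"drama", "thriller", "crime"},
}
programme = {"drama", "crime", "thriller"}
print(target_audience(viewers, programme))  # ['a', 'c']
```

The more data points that feed the tags, the tighter the parameter set around each viewer becomes, which is the intuition behind the granular prediction Dawes describes.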

ONE-TO-MANY DOES NOT DIE
The multiple personalisation aspects suggest multi-versioning, but TiVo is not in that IMF space. Sitting at the heart of digital, is the company learning to work with the concept that one-to-many must live on? “That is an interesting conundrum,” says Dawes. “There are certain use cases where one-to-many will never disappear. Things happen that you have to view live. “At the same time we have seen more and more people moving towards having that one-to-one relationship with content, but at the same time as you have got one-to-one you could have one-to-many, because other people are doing things at the same time. They are not together, as it were,” he adds. “Being able to create an audience group from a set of disparate people who are going through the same experience is one area we will see more of. “People want to have that ability to have content and talk about it in their lives. It is not just a single experience,” Dawes continues. “It comes back to being a plural experience. One-to-many does not die but it morphs, and we see more and different use cases.” TiVo does its targeted research because, whilst using data collection and the different data points does show what consumers are doing on a click-by-click basis, something is still missing. “Having targeted research and using focus groups is crucially important. So too is going in and watching people interacting, because it is very easy to look at some data and get completely the wrong end of the stick,” says Dawes. “We have had examples of that in the past when something has been really trending, but it was rubbish content. It does not mean you are going to use that to base a recommendation on. You have to understand the reason why something is being watched.”

FROM THE SAME BACK END
TiVo (and group company Rovi) has been transitioning its services into the cloud for the last eight years.



FEATURE

“It gives you a lot more processing power and it enables you to share across different services. You have to have a very similar experience across different devices because you power it from the same back end,” says Dawes. “Having a cloud-based platform for us is very important and we are getting to the stage where connectivity is starting to be ubiquitous. Especially with 5G coming along, there are all the questions in the industry about whether you even need fixed broadband any more, because most people are going to be within range of a service that is far more powerful.” Back at IBC, TiVo boosted its Next-Gen Platform with TiVo Experience 4, a new UI that pushes even more of its activities into the cloud. “It brings together all of the entertainment resources. The pay-TV operators were running scared of Netflix at one point, but it is now integrated into pretty much every one,” says Dawes. “Consumers want to sit in one place and be able to access all entertainment content. “They do not want this thing of thinking, is that programme from the BBC so I have to go to iPlayer, or is that show from ITV so I have to go to the Hub? Having what we call a sea of apps is really complicated for people, and they do not understand it,” he adds. “We provide an over-arching discovery layer that allows you to go into the content irrespective of where it is, and we do some neat things. We present the content in the way the consumer wants, rather than in the way the industry's business models are forcing it on customers.” Perhaps the industry needs to adjust those business models? “They don't need new business models to do it, because providing a layer on top of all that makes it easier for the consumer to make sense of where the content actually is,” says Dawes.

TAKING METADATA TO THE NEXT LEVEL
TiVo has four key areas of business – user experience, entertainment metadata, advanced media and advertising, and patent licensing.
Outside of its retail business in the US it is completely B2B software and services. With far-field conversational control it started with English, then US English, and recently added German, Portuguese and Italian. “The number of languages will continue to grow because we are really focused around that route for conversational entertainment discovery,” says Dawes. Entertainment metadata can be seen as an iceberg: a certain amount of information sits above the waterline, and lots more below. The above-the-waterline material is what we have had for a long time, in the form of titles, short synopses, time and channel, and maybe a cast list. “But as you get into new discovery models, especially

voice, the amount of information you need to understand a piece of content in terms of metadata has become huge. And also there is how you link between content, and understand what content relates to other pieces of content,” explains Dawes. “We have taken metadata to the next level and moved it into more of a graph-based model, where you have dynamic connections between pieces of content. Metadata is now very deep.” For the service of bringing content together, normalising it and then enhancing it, TiVo has eliminated all those manual processes of the recent past with a technology platform that automates everything.

UNIVERSAL INCOME AND 100 PER CENT LEISURE TIME
Of all the near-horizon technologies, which will TiVo look to exploit? “I see a lot of development in the AI space, as everybody does, but particularly for us in helping our partners understand consumers – the nuances of people. The other interesting thing there is the impact on a lot of what we have seen in the push towards personalisation on a one-to-one basis,” says Dawes. “But in the entertainment space a lot of experiences are still shared, so we should be able to have a broader set of different types of personalisation. “At IBC we saw quite a lot going on around collaboration. It was interesting to see people coming together to provide solutions. The show itself has weathered all the various different changes in the industry; it is always there as a stalwart,” he adds. “Certainly within our part of the industry there has been more and more consolidation – fewer vendors.” Dawes saw a lot of progress in the roll-out of voice at IBC. He says: “Seeing far-field voice becoming a reality, being mainstream, was big. It suddenly takes TV back to being simple.” Right now consumer leisure time is increasingly constrained, but what happens when automation comes along? “What are they going to do when they are on universal income and everybody has 100 per cent leisure time?
Entertainment is still a very important part of human lives, and that does not go away,” concludes Dawes. “What we really focus around is understanding what people say, and giving them the right result.”

CHARLES DAWES came from the Rovi side of the 2015 merger between TiVo and Rovi, and has been a group man for eight years. Previously he worked at Liberty Global, running digital TV deployments across Europe. In his 10-year stint he pushed digital platform services into Holland, Austria, Switzerland, Norway, Sweden and Poland.
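The idea of metadata as dynamic connections between pieces of content can be pictured as a tiny graph of typed relationships. A minimal sketch follows; the relationship types and the way titles are linked here are purely illustrative, not how TiVo's metadata platform actually works.

```python
# Minimal content-metadata graph: nodes are titles, edges are typed
# relationships ("same_writer", "same_genre", ...). Illustrative only;
# real entertainment knowledge graphs are far richer and machine-built.
from collections import defaultdict

class ContentGraph:
    def __init__(self):
        # title -> set of (relation, other_title) pairs
        self.edges = defaultdict(set)

    def link(self, a, relation, b):
        """Record a bidirectional typed relationship between two titles."""
        self.edges[a].add((relation, b))
        self.edges[b].add((relation, a))

    def related(self, title, relation=None):
        """Titles connected to `title`, optionally filtered by relation type."""
        return sorted({
            other for rel, other in self.edges[title]
            if relation is None or rel == relation
        })

g = ContentGraph()
g.link("Bodyguard", "same_writer", "Line of Duty")
g.link("Bodyguard", "same_cast", "Line of Duty")
g.link("Line of Duty", "same_genre", "Broadchurch")
print(g.related("Bodyguard"))       # ['Line of Duty']
print(g.related("Line of Duty"))    # ['Bodyguard', 'Broadchurch']
```

Walking such links is what lets a discovery system answer queries like "more shows like this one", which is far harder with flat title-and-synopsis metadata.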



LOUDNESS EVOLUTION By Paul Tapper, technical director, NUGEN Audio


Nowadays, loudness measurement of audio is absolutely routine for audio post-production engineers and broadcast QC departments. Since the grand-daddy of all loudness standards (ITU-R BS.1770) was published back in 2006, broadcasters around the world have adopted loudness standards as part of their delivery specs. The original motivation behind the loudness standards was a need to solve the problem of highly compressed commercials becoming significantly louder than the programme material they were played out next to, giving viewers and listeners an uncomfortable experience. The enforcement of loudness levels appears to have largely solved this issue (complaints about commercials being too loud have dropped dramatically). A secondary benefit of the new standards was the avoidance of a broadcast “Loudness War,” which would have resulted in hyper-compressed (and therefore

damaged) audio. Because the loudness is fixed at play-out, there is no loudness benefit from hyper-compression, freeing up engineers to make use of much greater headroom, for greater audio detail and clarity, than they would be able to otherwise. So, I think it would be fair to say, in the words of EBU PLOUD chairman Florian Camerer, that “loudness has made the world a little bit better.” But that does not necessarily mean that the existing broadcast loudness standards are perfect, or that they are a good solution for every situation. For example, we at NUGEN Audio have worked with broadcasters for quite a few years now and have identified problems with broadcasting soundtracks from cinema releases. It's easy enough to normalise the average loudness of a film to make it compliant with the broadcast standard, but oftentimes the “dynamic range” is too wide for home viewing environments. When I say “dynamic range”



here, what I really mean is the “macro dynamic range,” or the Loudness Range (the LRA measurement), which is a measure of (roughly) the loudness difference between the loudest sections of audio and the quietest sections. One approach to reducing this Loudness Range could be to pass the audio through a compressor, but particularly for films with very loud sections (e.g., action movies), this has a tendency to push the dialogue level down in the overall mix, leaving the Loudness Range acceptable but the dialogue quiet and unintelligible. It turns out that now that viewers have stopped complaining about loud commercials, they mainly (in terms of audio-related complaints) complain about dialogue intelligibility (e.g., see “Mumble-Gate,” the furore around the BBC's drama series Jamaica Inn in 2014). So this still leaves us with a problem that the common standard measurements don't address. Thankfully, though, there is a way forward. Dolby has very kindly made its algorithm for detecting dialogue in audio freely available, which means that it is possible for loudness algorithms to measure the loudness of just the dialogue sections. This allows us to get a number for the Dialogue Loudness in a clip of audio, as well as the overall average loudness. By using both these numbers, it is possible to detect situations where you might have a problem with dialogue intelligibility due to the dialogue being too quiet. It is even possible to have the computer automatically process your audio to give you a first-pass version repurposed for broadcast, with an appropriate Loudness Range, while preserving the level of dialogue in the mix (NUGEN Audio's DynApt process, available in LM-Correct and AMB, is an example of this sort of processing). Another potential problem with dialogue intelligibility is caused when there is a large variation in the loudness levels of the dialogue present in the mix.
You might have some sections with speech at a good, clear level, but then other sections with dialogue that is too quiet to hear clearly. The average level of the dialogue might seem okay when you measure the Dialogue Loudness, so you need yet another measurement to detect this situation. A sensible choice of measure here would be the Dialogue LRA: the loudness range of the dialogue present within the mix. If the range of dialogue loudness is too great, it's not going to be possible to set just one playback level that gives a comfortable dialogue level throughout. Dialogue LRA is another measure that could be used to detect the likelihood of potential dialogue intelligibility problems. For very dynamic content, like cinema soundtracks, or premium content that is intended to be consumed in home theatre environments, it makes sense to measure not

just the standard average loudness, but also the Dialogue Loudness, the LRA and the Dialogue LRA, to give you a fuller picture of whether the audio is likely to give viewers the results you are hoping for. These measures could easily be used by an automated QC process to flag potentially problematic content that would benefit from a person checking that the dialogue in the programme is intelligible throughout. Looking at the problem in the other direction, broadcasters also commonly want to reuse content for lower-fidelity playback contexts. An example of this would be the distribution of broadcast content to mobile devices, where a likely usage would be watching or listening while travelling or commuting. Because of the high noise floor of the car, train or plane, there's an even greater need for the loudness of the dialogue to be consistent (i.e., an even lower Dialogue LRA) to achieve consistent intelligibility. Of course, there is the problem of ascertaining the physical environment of the playback. Hopefully, the discussion above will have convinced you that it is possible to use extensions of the standard loudness measures, combined with dialogue detection, to give additional and useful information about your audio content for broadcast and to improve the overall listener experience.
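The combination of measures described above can be sketched in a few lines of code. This is a deliberately simplified illustration, not an implementation of BS.1770 or DynApt: real programme loudness uses K-weighting and gating, and real LRA (EBU Tech 3342) uses gated percentile statistics. Here each audio segment is reduced to a (loudness, is_dialogue) pair, and every threshold is an invented, illustrative value rather than a broadcast-spec number.

```python
# Toy loudness QC sketch: approximate LRA, Dialogue Loudness and
# Dialogue LRA from pre-measured segments, then flag likely
# dialogue-intelligibility problems for a human to check.
from statistics import quantiles

def loudness_range(levels):
    """Approximate a loudness range as the 10th-95th percentile spread (LU)."""
    if len(levels) < 2:
        return 0.0
    qs = quantiles(sorted(levels), n=20)  # cut points at 5 per cent steps
    return qs[18] - qs[1]                 # 95th minus 10th percentile

def qc_report(segments, max_lra=20.0, max_dialogue_lra=6.0,
              min_dialogue_offset=-4.0):
    """Toy QC pass over (loudness_lufs, is_dialogue) segments."""
    levels = [lufs for lufs, _ in segments]
    dialogue = [lufs for lufs, is_dlg in segments if is_dlg]
    overall = sum(levels) / len(levels)            # crude average, no gating
    dlg_loudness = sum(dialogue) / len(dialogue)   # Dialogue Loudness stand-in
    report = {
        "overall_lufs": overall,
        "dialogue_lufs": dlg_loudness,
        "lra": loudness_range(levels),
        "dialogue_lra": loudness_range(dialogue),
        "flags": [],
    }
    if report["lra"] > max_lra:
        report["flags"].append("LRA too wide for home viewing")
    if report["dialogue_lra"] > max_dialogue_lra:
        report["flags"].append("dialogue level varies too much")
    if dlg_loudness - overall < min_dialogue_offset:
        report["flags"].append("dialogue quiet relative to overall mix")
    return report
```

A mix whose dialogue sits well below the overall average would be flagged for a human intelligibility check, which is exactly the kind of triage an automated QC process can perform before a person listens through.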






Driven by the phenomenal growth of technology over the past decade, today's viewers not only expect a wide range of high-quality programming, but also the freedom to view their favourite shows whenever, wherever and on whatever device they want. For broadcasters, the resulting need to deliver content across multiple networks – Digital Terrestrial Television (DTT), mobile and broadband – to multiple devices, running multiple technologies and resolutions, is creating more complex platforms than ever before. In search of the agility and commercial flexibility needed to meet this challenge, broadcasters have introduced new platforms alongside existing ones, resulting in isolated systems, infrastructure duplication and different operational characteristics. In turn, this has led to increased operational cost and complexity. We believe this approach is not sustainable and will not deliver the desired operational savings or support for new commercial models, such as pop-up channels for events like the FIFA World Cup. But with consumer demands in constant flux, how do platforms continue to evolve to meet current, and as yet unknown, business demands?

Secondly, content must be made available over common interfaces: the use of single-function interfaces restricts a resource to a specific application, reducing its utilisation and reuse, and ultimately limiting platform flexibility. Finally, the end-to-end broadcast workflow should be software-defined, so that applications can be deployed quickly and efficiently across shared resources, and torn down when no longer needed. In this scenario, broadcast workflows will be created within a service-focused layer, which sits above an infrastructure layer comprising the underlying networking, compute and storage resources needed to deliver services. In recent years the broadcast industry has been transitioning to a software approach, initially with fixed software deployments on COTS hardware using broadcast interfaces like SDI and ASI that fit into existing environments. With developments in broadcast IP standards, however, applications can now be deployed in data centre environments (public cloud or on-premise) and via a variety of models and tools, such as virtual machines or containers, managed by orchestration systems from VMware to Kubernetes.

FLEXIBILITY
With the adoption of standard IT infrastructure and software-defined broadcast functions, a new platform paradigm is emerging which should enable the industry to simplify platforms while providing commercial flexibility. To realise this opportunity, broadcasters must first abstract and separate applications such as encoding and multiplexing from their dedicated hardware, and instead enable them to run on standard IT hardware.

HIGH AVAILABILITY
Which model you choose will depend on the use case, but deploying software is only the start. While flexibility and agility are emerging requirements, high availability – the cornerstone of broadcast services – is an absolute necessity if these new platforms are to deliver the nation's favourite shows. Switching from dedicated broadcast interfaces is no simple task. While inflexible, these interfaces provide deterministic low-latency performance, essential for


TVB60.arqiva.indd 1

09/11/2018 10:02

FEATURE maintaining high-availability levels and picture quality. Ethernet interfaces from 10G to 100G provide sufficient bandwidth to transport uncompressed content from SD to UHD but require additional broadcast wrappers to do so. This includes the SMPTE standards, which have recently enabled uncompressed media transport over IP and thereby removed the last blocker to an all-IP platform. While still in their infancy these new protocols, along with standards like AMWA NMOS, promise to reduce complexity in managing content distribution. All the components are now there, but how do you configure and deploy these software applications and ensure the network reliably delivers content in real-time? These are exactly the challenges we are investigating as we develop our next generation platforms. How can we guarantee high availability for broadcast workflows, which are different from the transactional-based workflows typical in today’s data centre, cloud environments? We believe broadcast grade infrastructure and the level of interaction and awareness between this and the services running on top as key elements in delivering the desired user experience. SPEED The adoption of an IT environment brings continuous deployment and we are excited by the possibilities this offers broadcasters to introduce improvements and deliver new services quickly. Today, evaluating and deploying improvements is a costly and slow process with new dedicated systems required to prove functionality before a physical lift and shift upgrade to the production systems. In an IT environment, the hardware resource and software applications follow independent update cycles. This decoupling allows us to evaluate new software and rapidly deploy it using automation onto existing hardware, reducing time to market and enabling customers to benefit quickly from technology advancements. But so too, does it bring its own challenges. 
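As a sanity check on those interface figures, the active-picture bit rate of uncompressed video can be estimated with simple arithmetic. The sketch below ignores blanking, ancillary data and IP encapsulation overhead, so real ST 2110-20 flows run slightly higher; it is an illustration, not a standards-accurate calculator.

```python
# Back-of-envelope uncompressed video bit rate (active picture only).
def active_video_gbps(width, height, fps, bit_depth=10, sampling="4:2:2"):
    # Samples per pixel: 4:2:2 carries one luma plus, on average,
    # one chroma sample per pixel; 4:4:4 carries three samples per pixel.
    samples_per_pixel = {"4:4:4": 3.0, "4:2:2": 2.0, "4:2:0": 1.5}[sampling]
    bits_per_second = width * height * fps * bit_depth * samples_per_pixel
    return bits_per_second / 1e9

hd = active_video_gbps(1920, 1080, 60)    # ~2.5 Gb/s: fits a 10G link
uhd = active_video_gbps(3840, 2160, 60)   # ~10 Gb/s: needs 25G or more
```

The result shows why a 10G link comfortably carries an HD flow, while a single uncompressed UHD p60 flow already approaches 10 Gb/s, pushing real deployments towards the 25G-100G end of the range.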
Can these platforms deliver 99.999 per cent service availability while also carrying out continuous deployment to maintain security and introduce new functionality? We are working with broadcast suppliers to develop 'cloud native' applications designed for this new environment, which will enable new operating models and allow services to move between application instances with zero service impact.

HYBRID VIEWING

Today's viewers are not concerned with how their shows are delivered, only that they can watch what they want, when and how they want. In response, hybrid viewing platforms such as Freeview Play and YouView have been developed to provide seamless navigation across DTT broadcast and IP-delivered services. The ability to enhance traditional DTT services with IP services is exciting, with opportunities such as restart TV if you miss the beginning of a programme, catch-up services for viewing yesterday's top shows, and personalised viewing with targeted content. We believe that hybrid services have become the norm and it is essential for broadcasters to embrace the opportunities they offer.

CONCLUSION

We believe technology now supports a transition to a world in which broadcast services are software-defined, with dynamic workflows running on standard IT infrastructure. This change brings broadcasters the agility to react to changing business needs, explore new service models, and deploy new services potentially within minutes. But there are new questions to answer:

• Can IT infrastructure cost-effectively deliver constant, deterministic transport of uncompressed media in real time while enabling flexibility?
• How do you provide 99.999 per cent availability in an IT environment with continuous deployment of software updates?
• Where should you deploy your services: on-premise or public cloud?

These are exactly the questions we are working on. At Arqiva, infrastructure is our business and we believe that broadcasters should have the choice and flexibility to change quickly to meet the demands of viewers – accordingly, the infrastructure they use needs to be flexible, quick to deploy, efficient and reliable. We believe that, in time, future broadcast platforms will be virtualised, offering this flexibility, speed and, most importantly, reliability. They will operate within a broadcast-focused environment, enabling high-availability delivery and service portability.
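To put the 99.999 per cent figure in context, the annual downtime budget it implies is easily computed (a simple arithmetic sketch):

```python
# Annual downtime budget implied by an availability target.
def annual_downtime_minutes(availability):
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year
    return minutes_per_year * (1 - availability)

five_nines = annual_downtime_minutes(0.99999)  # ~5.26 minutes per year
```

Five nines leaves barely five minutes a year for every fault and every software update combined, which is why zero-impact continuous deployment matters so much here.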

‘We believe that hybrid services have become the norm and it is essential for broadcasters to embrace the opportunities they offer.’




VISUAL DATA EXPLORATION Roger Noble, CTO, Zegami, on a paradigm shift in building meaningful real-time content


PICTURED BELOW: Zegami draws all of the information from a broadcaster's structured databases, MAM systems, APIs, statistics and social media into a single, highly visual user interface

MAM systems have enabled broadcasters to manage their media assets from ingest right up to the point of distribution for many years now. They carry vast amounts of data, including information about file formats, nature, location and any metadata describing the asset. With cloud processing facilitating even more sources of information and even more delivery platforms, and with the rise of social media, IoT (Internet of Things) devices and smartphones to contend with, broadcasters are awakening to the age of data overload. While more data presents more opportunity to provide more content, it brings with it all manner of issues when it comes to big data structure, management and analysis. Now that we have it, how does all this data translate into meaningful information, so we can provide more engaging, timely and relevant content to our audiences and subsequently earn more revenue in return?

STRUCTURED AND UNSTRUCTURED DATA

Asset management systems traditionally work around structured data. They deal in metadata that has a defined structure or taxonomy that rarely changes over time. Standard keyword and search interfaces are then used to access that content so it can be used within a production or broadcast system. With the current process of labelling and logging metadata, however, it is impossible to anticipate future uses for content. New technologies like OTT and social media have dramatically changed the way we consume media. The challenge is that these large, disparate sources of data are inherently unstructured. Unlike structured data (numbers and known categories), unstructured data must be processed and analysed before it can be quantified.

COMBINING AI AND HUMAN SENTIMENT

Humans are inherently very good at understanding the messy, unstructured nature of the world. But human processing cannot scale at the same rate that unstructured data is growing. Any production company working with a content library will know that their library is worthless without some means of knowing what is in the archive. Increasingly, broadcasters need to access much more than titles and an outline of content: as automation becomes ever more prevalent, they need metadata which can provide actionable insight. This is where artificial intelligence (AI) and machine learning can step in, with the capability to consume large amounts of data streaming in from varied sources and, more importantly, to make sense of it all. By leaving tasks like face and logo detection, sentiment analysis, pose detection and metadata extraction to AI, we humans can step away from the repetitiveness of manually annotating data and make better use of our higher cognitive functions. This is especially important in the media industry, where the impact of content is subjective by nature; something that machines do not handle well.
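As a concrete illustration of that division of labour, the sketch below attaches hypothetical AI-derived annotations to a structured MAM record without disturbing the existing taxonomy. All field names, values and scores are invented for illustration; this is not any vendor's actual schema.

```python
# Hypothetical asset record: structured MAM fields plus AI-derived,
# originally unstructured annotations (all values invented).
structured = {"asset_id": "EP-001", "title": "Episode 1", "duration_s": 3600}
ai_derived = {
    "faces": ["presenter_a"],          # from face detection
    "logos": ["sponsor_x"],            # from logo detection
    "sentiment": {"positive": 0.72},   # from sentiment analysis
}

def enrich(record, annotations):
    """Attach AI annotations under a separate key so the original
    structured taxonomy stays intact and searchable."""
    merged = dict(record)  # copy; the source record is left untouched
    merged["ai"] = annotations
    return merged

asset = enrich(structured, ai_derived)
```

Keeping the machine-generated tags in their own namespace means the curated metadata remains authoritative while the AI layer can be regenerated as models improve.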
A NEW WAY OF SEARCHING

Making big data immediately comprehensible to the human eye, through dynamic interactive visualisations, is a new approach that offers exciting possibilities to the broadcast industry. Traditionally, exploration and discovery of data have been facilitated by search. Until now, the concept of search has offered one user experience: a single text box where a word or phrase can be typed, with the results all looking the same, i.e. 10 to 50 links in a series of pages. There has been little innovation in search for the last 20 years, and even companies like Google have little incentive to change the search experience. Zegami is consciously very different. By presenting an entire collection of information within a single field of view, users can take in the full scope of the data all at once. It does away with paged interfaces, so users can instantly see the shape of the data, leveraging the innate abilities of our subconscious mind to identify patterns.

DELIVERING AND ENRICHING 'THE MOMENT'

Zegami offers an instant boost to real-time live production values. Its unique algorithm draws all of the information from a broadcaster's structured databases, including MAM systems, metadata, APIs and statistics, as well as unstructured data, such as social media and live data platforms, into a single, highly visual user interface using proxies and webhooks. This presents a highly interactive way of looking at content. A customised faceted filter system helps producers, researchers and editors analyse all of the data and dynamically position and search each individual item on a single screen. Users can then quickly figure out what their audiences want to see in that moment, without having to work their way through lists or search engines, or have a researcher do any 'ground work' before the programme airs. It uses visuals such as graphs, scatter plots and media tiles, which can be lassoed to find comparable data – correlations, outliers, patterns and relationships – all incredibly quickly. It is this kind of interactive exploration that truly sets Zegami apart, making it possible to discover the right content at the speed of thought. Zegami's ability to harness the power of AI and then make big data immediately comprehensible to the human eye, through dynamic interactive visualisations, brings an entirely new approach to the world of broadcast.
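The faceted-filter idea described above can be sketched in a few lines of Python. This toy version (asset fields invented, and no relation to Zegami's actual implementation) keeps only the items matching every selected facet:

```python
# A minimal faceted filter over a list of asset records.
assets = [
    {"id": 1, "type": "goal", "player": "smith", "minute": 12},
    {"id": 2, "type": "save", "player": "jones", "minute": 34},
    {"id": 3, "type": "goal", "player": "smith", "minute": 78},
]

def facet_filter(items, **facets):
    """Keep only the items matching every supplied facet value."""
    return [i for i in items if all(i.get(k) == v for k, v in facets.items())]

goals_by_smith = facet_filter(assets, type="goal", player="smith")  # ids 1 and 3
```

Each selected facet narrows the view; clearing all facets restores the full collection, which is what lets a producer drill in and back out without ever leaving the single field of view.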

PICTURED ABOVE: Zegami's visual data exploration platform combines digital media, data and artificial intelligence into a single cloud-based platform

‘Increasingly, broadcasters need to access much more than titles and an outline of content: as automation becomes ever more prevalent, they need metadata which can provide actionable insight.’

TVBEUROPE NOVEMBER / DECEMBER 2018 | 37



LOOKING TO SMPTE’S FUTURE Jenny Priestley talks to incoming SMPTE UK governor Marina Kalkanis about how the organisation can reach out to new members and plans for her term in office


On 1st January 2019, M2A Media CEO Marina Kalkanis will begin a two-year term as SMPTE’s UK governor. She has been a member of the organisation for just over a year, after being persuaded to join by the Digital Production Partnership’s managing director Mark Harrison. Having been active in other UK and European organisations, Kalkanis admits she always regarded SMPTE as more of a motion picture organisation. “I come very much from the digital side, I don’t really come from a traditional broadcast background at all,” she explains. “So it initially didn’t appear to me that SMPTE was as relevant to the sorts of things that I was engaged with. That has changed since I’ve become a member, and now I’m thinking about how it can become even more relevant, because I do think it’s probably still the case that a lot of people that are doing interesting things with video and media online are not active in SMPTE, or even seeing how it’s relevant to them.”

RELEVANT AND ENGAGING

Kalkanis decided to stand as UK governor after being encouraged by colleagues, with her appointment announced in early October. She says she wants to use her 24-month term to help create more diversity within the organisation, and make it feel more relevant and engaging. “I felt it would be a challenge to see whether it will be possible to do that. It’s also important to encourage people who are trying out new things - who may not have all the professional training and haven’t had 10 years of apprenticeships, but they just want to try new things and that’s what I find interesting and exciting.” So what does Kalkanis think she will bring to the role? “Well, I think I’m pretty good at networking!” she laughs. “I think it’s really important to build up the network, really getting the visibility of what’s going on, organising things that will attract people, get people engaged. I think in my first four to six months as governor, I need to understand what’s worked in the past and what hasn’t.

“But I would like to think we could look at new things, like a hack-a-thon or stuff that engages people, gets them interested, gets them trying new things. Maybe we could organise events around other things that are happening - so maybe something that’s happening in the UK and building around that. Maybe looking at trying to work with some of the bigger broadcasters in the UK to sponsor events.” 2018 has seen a lot of turmoil in the UK media and entertainment industry - from the long-drawn-out wrangle over Sky, to SAM’s acquisition by Grass Valley. What does Kalkanis see as the biggest issues facing the industry in the UK? “I think there’s more consolidation of technology to come,” she explains. While the UK obviously has some major tech companies, Kalkanis stresses that it’s important to keep room for the smaller innovators. “It may be that the economy is such that there’s possibly a little less investment in strictly R&D,” she says. “Some of the bigger R&D teams, people like the BBC, Sky, possibly don’t have the same sort of budgets that they may have had in the past.

“We’ve had some great innovation in media, look at the iPlayer, but it’s maybe harder to find those opportunities. It’s about making sure that we can find them and compete with Silicon Valley really. That’s the challenge these days I think, competing with what’s going on in the west coast of the United States.” Within the first three months of Kalkanis’ term, the UK will be dealing with Brexit. And, at time of writing, we still don’t know how that will look or exactly what effect it will have on the UK and European broadcast industries. “I am absolutely already thinking about Brexit,” she says. “I think

the UK has been quite strong in media, when you look at the role that the UK plays in things like the EBU and IBC, the UK is a really major player. “We don’t know what’s going to happen and I think Brexit is going to have an effect - not least on the free movement of people. People come to London from all over Europe to work on projects and it’s just not going to be as easy. So I can’t imagine that there’s not going to be some kind of effect. But we just have to wait and see I guess.”

NOT JUST LONDON

Having dealt with Brexit, the ongoing mergers and acquisitions within the UK industry, and whatever the next 24 months throws at her, when her term comes to an end, what does Kalkanis hope to have achieved? “There are lots of creative hubs in the UK, in Manchester, Leeds, London etc, and I would like to think that SMPTE is playing a role within those creative hubs,” she explains. “I’d also like to work on building more partnerships with some of those organisations that are promoting creativity and innovation. “I’d like to think SMPTE will be more in the front of people’s minds in terms of what it is and why it’s relevant, and how it can be relevant to them.” Kalkanis says she is excited to get started with her new role, and that it will involve more than just building membership. “I think there’s more to do than that. It’s about not just being a member, but being an active member. I have a good network in terms of women in technology, women in engineering, and so I’ve been thinking about how SMPTE can become more active and engaged with some of these organisations and what role it can play.”

“It’s also important to encourage people who are trying out new things - who may not have all the professional training and haven’t had 10 years of apprenticeships but they just want to try new things. That’s what I find interesting and exciting” MARINA KALKANIS




PROJECT ORCHID TAKES ROOT AT TELESTREAM By Stuart Newton and Ken Haren from Telestream


So, how important is live video streaming to the global broadcast and electronic media market? Research shows that 2018 represents the inflection point after which, on average, consumers watch more minutes of video through OTT streaming than through scheduled linear TV broadcasts (source: Zenith via Recode). By 2021, video will constitute 82 per cent of all consumer internet traffic, and mobile video will comprise 78 per cent of all mobile data traffic (source: Verizon Digital Media Services 2017; Cisco VNI 2017). ‘Live’ is the biggest differentiator that a broadcaster or service provider has in their armoury. More than any other form of entertainment, it greatly influences their ability to attract viewing audiences and thereby commercial advertisers. The more live content a broadcaster produces, the more ad revenue it earns. Relatively speaking, the investment that programme makers are putting into live production is greater, since the viewing audience is the most valuable they have. However, the question today is whether that live broadcast is transmitted over the airwaves or streamed direct to the consumer’s device of choice. Several years ago, a major cable company CEO announced at CES that live was the firewall that would protect pay-TV operators. He believed that large audiences still wanted to watch their live events on linear TV and that would be the industry’s redemption. Now, a few years down the track, multiple holes are appearing in that firewall – the most significant being the impact of live video streaming. Research predicts that live internet video will grow 15-fold between 2016 and 2021 (source: Internetworldstats.com).

ASSURING QOE AND QOS WITH LIVE STREAMING

For live streaming to become an attractive business model, service providers need to provide a number of things, the most important being a consistently high-quality viewing experience for large audiences. Achieving this requires a good understanding of the product being offered to consumers – you can’t make a better viewing experience if you don’t first understand what today’s service looks like. Monitoring is the secret to this. Firstly, monitoring at the video headend proves that the origin data meets the quality levels that consumers demand. After this point, the distribution pathway will encompass multiple CDNs, access networks, in-home networks and WiFi infrastructures where media quality can be diminished. If quality at the headend is good but significantly worse by the time the consumer views it, there is a challenge in identifying the weak link in the chain responsible for degrading the media experience. Today, with much of the distribution infrastructure being cloud-based, there is a need to position virtual monitoring probes at every stage of the distribution chain. Once this probe network is up and running, service providers can move from being reactive to being truly proactive. It provides an efficient early warning system for distribution faults, so service providers don’t have to wait until they start to receive complaints from viewers. As the market moves to event-based business models where consumers pay to view a specific event – be that a boxing match or a music concert – efficient and effective real-time monitoring of the distribution chain is absolutely essential. Without it, service providers are open to real reputational damage, even class action lawsuits.

WHAT IS PROJECT ORCHID?

Within enterprise-scale live event streaming ecosystems there are a number of essential requirements. One key area is the ability to dynamically adapt channel capacity to match the needs of the viewing audience. Within linear TV applications, the long-term solution has been a ‘channel in a box’. This hardware solution has been available for many years, and from multiple vendors. Whilst it is relatively rapid to install and commission, it will still take a broadcaster several weeks or months to get a new channel operational and on-air. The cloud has transformed this scenario, empowering business agility that was previously just a pipe-dream. The Holy Grail is the ability to spin up a new channel within a tiny fraction of the time of a traditional channel in a box – in effect reducing months to minutes. This is an area where Telestream has been focusing much of its research and development resources for more than a year. Still a proof of concept, Telestream introduced a new and radically different concept at IBC this year. Combining the joint resources of Telestream and IneoQuest (which the company acquired 18 months ago), the result is Project Orchid. This concept provides a one-click channel spin-up capability with fully integrated monitoring that is portable across all major cloud architectures and on-premise data centres. At the same time, the company has worked hard to ensure that anything deployed in traditional workflows today will work equally well within this new environment in the future. Project Orchid has multiple real-life applications. One is the ability to very quickly spin up a channel and then take it down again once a spike in demand has passed. Having that ultra-fast channel response, with integrated monitoring from the outset, is incredibly valuable to users and differentiates Project Orchid from any other solution available today. Another key application is where a service provider is running at high capacity for a length of time – maybe there are two major sporting events, such as a major golf and tennis tournament, happening at the same time.
Normally in this situation, service providers will need to leave valuable live assets on the shelf, simply because they don’t have the channel capacity to stream them. This could take the form of different camera angles or highlight reels. But now, Project Orchid offers extra channel capacity, which can be spun up at very short notice and then taken down as soon as the peak has passed. This business-model-focused agility enables content providers and service providers to monetise their live content in the most compelling way.

An equally powerful component within the system is the ability to live-mix the channel on the fly. With this feature, the operator has pre-configured video playlists, ad drop-in lists and live switching between inputs, which is especially useful if there is a loss of the main signal during an event. This functionality builds on Telestream’s broadcast experience: it targets live sports producers as well as a host of other broadcast applications. Based on visitor response at IBC, and on the conversations Telestream is having with its customer base, this rich live channel compilation capability, combined with industry-leading video monitoring and analytics throughout the distribution chain, is a very compelling proposition.
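The spin-up and tear-down model can be sketched as a toy capacity pool. The class below is an invented illustration of the operating model, not Project Orchid's real interface:

```python
# Demand-driven channel capacity: spin channels up for a traffic spike
# and tear them down afterwards (toy model, invented API).
class ChannelPool:
    def __init__(self):
        self.active = set()

    def scale_to(self, events):
        """One channel per concurrent live event; release the rest."""
        wanted = set(events)
        for ch in wanted - self.active:
            self.active.add(ch)       # "spin up" in minutes, not months
        for ch in self.active - wanted:
            self.active.discard(ch)   # free resources once the peak passes
        return sorted(self.active)

pool = ChannelPool()
pool.scale_to(["golf", "tennis"])     # both tournaments live at once
after_peak = pool.scale_to(["golf"])  # tennis over: channel torn down
```

The point of the model is that capacity tracks demand in both directions, so extra camera angles or a second tournament no longer require permanently provisioned hardware.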

‘A key differentiator of Project Orchid is the rigorous video monitoring and analytics capabilities that come as standard.’




PICTURED: Stuart Newton (left), Ken Haren (right)

Talking to users of traditional channel-in-a-box systems, a key differentiator of Project Orchid is the rigorous video monitoring and analytics capabilities that come as standard. Traditional channel-in-a-box systems focus tightly on content production, but they can turn a blind eye to the viewers’ QoE. Project Orchid is the only concept which ties together content production and content monitoring as core components of channel origination. Telestream owns virtually all the intellectual property (IP) involved in Project Orchid. This allows the company to focus on key issues and develop agile solutions to those challenges. For example, channel latency is a key issue in live streaming operations. Previously, there was no way of dynamically monitoring changes in latency across different networks and different geographical regions; now, with Project Orchid, there is a solution in the making.

CAN A NETWORK BECOME SELF-AWARE?

Latency is just one aspect of the QoE experience. By integrating Telestream’s market-leading media processing capabilities with its newly acquired video monitoring capabilities, Project Orchid offers a clearly differentiated customer proposition. In a progressive development that breaks new ground, Telestream has developed capabilities for the system data generated by downstream monitoring to be fed back automatically into the content creation architecture. With this capability, the system becomes self-aware. When other vendors talk about this capability they are talking only about the encoder or the data packager. In comparison, Telestream takes a holistic view of the entire content creation and distribution chain. Project Orchid provides actionable analytics which inform the production system; and the production system embeds things in the video which help inform the analytics programme.

EVOLUTION, NOT REVOLUTION

Project Orchid is significant: the ability to spin up new channels within any cloud environment in minutes is new to the industry. And for that channel to include monitoring so sophisticated that it becomes aware of defects and can provide real-time feedback for corrective action is equally important. What is really important is the evolutionary approach that Telestream has taken in developing Project Orchid. The company expects the first Project Orchid products to be introduced in late Q1 next year. However, it has developed a migration pathway that ensures any products bought today will be equally applicable within an Orchid environment. Users that buy Telestream products today – from Vantage and Lightspeed Live to iQ Inspector – will benefit from using the same core technologies if and when they migrate to Orchid. Their operators will be trained on the same software, and all their configuration work will port across seamlessly. Customers can buy Telestream today and use it on-premise, safe in the knowledge that Telestream will provide an intuitive migration pathway for their operations as and when they decide to migrate to the cloud or another virtualised environment.
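The monitoring principle described earlier – probes at every stage of the distribution chain so the weak link can be localised before viewers complain – can be illustrated with a toy example. Probe names and quality scores here are invented:

```python
# Locate the weak link: given quality scores from probes positioned at
# successive points in the distribution chain, report the first hop
# where quality drops below a threshold (toy model, invented data).
def first_degraded_hop(probes, threshold=0.95):
    """probes: ordered (name, quality_score) pairs from headend to edge."""
    for name, score in probes:
        if score < threshold:
            return name
    return None  # whole chain is healthy

chain = [("headend", 0.99), ("cdn", 0.98), ("access_net", 0.90), ("wifi", 0.85)]
weak_link = first_degraded_hop(chain)  # "access_net"
```

Because the probes are ordered from origin to edge, the first sub-threshold reading points directly at the hop responsible for the degradation, which is what turns a reactive operation into a proactive one.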





TAKING A RADICAL VIEW OF THE MEDIA FUTURE By Emilio L. Zapata, president, Tedial



A recent study by the IABM, the body which represents vendors in media and broadcasting, listed the top priorities for strategic technology investment. The first five were multi-platform content delivery, 4K and Ultra HD production and delivery, IP infrastructure, media asset management and file-based workflows. The sixth, incidentally, was social media broadcasting, a priority for 30 per cent of broadcasters. The good news for Tedial is that, IP infrastructure apart, we are a leading force in all of those. It could be argued that, because IP infrastructure depends upon file-based content, it too cannot exist without a well-developed asset management platform. This view of the industry today raises a huge number of issues. However, I have space to develop just two of them here.

MULTIPLE RESOLUTIONS

First, the move to Ultra HD and beyond. Although the IABM survey spoke about 4K, in just a couple of years we will be watching the Olympic Games from Tokyo, which will see a lot of mainstream 8K production. Our software already supports 14 different camera resolutions, plus DPX and camera raw formats. The effect of moving to larger screen resolutions is that we end up with a lot of content. A single episode of a drama today could take 15TB of storage. The nature of storage management systems is that this ends up as a large number of individual files, all of which have to be tracked, managed and – most importantly – secured. The Red camera is still popular for acquisition. Shooting at 6K resolution, and with individual files limited to 4GB by common file-format standards, you could easily end up with more than 1,000 files for a single episode. Keeping track of large numbers of files is not a trivial matter. In the past this would be part of hierarchical storage management software, but broadcast storage vendors are losing the skills necessary to deliver this sort of performance.
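The file-count arithmetic is easy to reproduce. Taking the 15TB episode figure and a 4GB per-file ceiling (decimal units, and optimistically assuming every file is packed to the limit) gives:

```python
import math

# Number of ~4 GB files needed to hold a 15 TB episode (decimal units).
episode_bytes = 15e12    # 15 TB of storage for one drama episode
max_file_bytes = 4e9     # 4 GB per-file ceiling
file_count = math.ceil(episode_bytes / max_file_bytes)  # 3,750 files
```

Even with this best-case packing the episode sprawls across thousands of files, consistent with the article's "more than 1,000 files" figure, and every one of them must be tracked, managed and secured.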

VERSIONING

The second issue I want to talk about is closely related to the first. Not only are we looking at multiple individual files to make up a piece of video content, we are looking at multiple files to make up what we would regard as a single asset. Given the need for different language soundtracks and subtitle files, as well as different edits for different markets, it is now not uncommon for a single asset to contain more than 150 files. To the user, though, it still has to appear as one asset. That, in turn, calls for additional functionality in the asset management system. Traditional broadcasters are also being challenged because their audience for linear playout is collapsing. Even when viewers are watching on a large television at home, they want it at a time of their choosing, not the broadcaster’s. Allied to this is the move to consume content on other devices. In the past broadcasters have simply repurposed content, but that is not enough today. No one is going to watch a one-hour drama on a smartphone, and it is foolish to pretend that they might.

TRANSMEDIA STORYTELLING

The solution is to take a transmedia approach. This goes beyond reformatting and transcoding, to create versions of the content which are appropriate for each device. This requires producers to change the way they work, in order to allow audiences to interact with the content in the way that they choose. That might be sports highlights sent to a mobile device within moments of the action, or it might be retelling a drama in five-minute chunks rather than one-hour episodes. A single asset, then, might have many different ways to tell the same story. When searching the archive, you need first to find the asset, but then you need to navigate that asset through what we might call the media set. That media set will include all the versions, plus all the associated material like trailers, promos, posters, stills and more.



To achieve this, it calls for a strong multi-relationship tool. The asset management platform will have to bring together metadata from multiple sources. That includes the increasing use of artificial intelligence (AI) to generate metadata, as well as the information generated through the application of business rules.

IMF
Tedial was founded 18 years ago. In that time, I have seen two standards emerge which I believe are vital for the future of our industry. MXF is now established and well understood. IMF is the second of these standards. To me, it is absolutely critical in developing these complex relationships to bind together transmedia assets. At its simplest, IMF allows you to store a single master video file, along with a set of instructions to derive all the deliverable versions from it. You can add supplemental files with version-specific content, and it supports as many audio and subtitle files as you need.

At Tedial we developed a software product called Version Factory four years ago. This drew on the work already under way in the digital cinema industry, which has also been used as the foundation for IMF. So, we already have huge experience in the key concepts. IMF is now a SMPTE standard, and as broadcasters and production companies appreciate its richness, they are beginning to see how it offers huge savings in storage cost, processing and time to delivery. If you need to make a change, you only do it once and it ripples through every version which it affects. By linking IMF delivery to a set of business rules, you can automate delivery to a very large extent.

CLOUD
In a well-ordered asset management system, there is a clear separation between the content and its management. We see a future where the metadata, proxies and business process management could exist in the cloud, while the high-resolution content is stored on premises – and possibly in the cloud as well – in one or multiple locations. To take proper advantage of cloud processing, we are transitioning to a microservices architecture. It will give us the flexibility to add new functionality, like AI video and audio analysis to allow existing archives to be swept for new descriptive metadata – an ideal cloud application.

There is much change in our industry at the moment, but all of it will revolve around two things. First, the separation of production and delivery resolutions: Ultra HD is mandatory today for production, and be prepared for higher resolutions to come. Second, a transmedia approach is a new way of telling stories, delivering content in the most satisfying form for each platform. That depends on very sophisticated asset management, where new standards like IMF will be important. The asset management platform of the future will be fully scalable: today 100 users is a big system; with media consolidation, 1,000 users will become commonplace. Those users will need systems which are powerful and intuitive. The road ahead is very exciting!
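The idea of linking delivery to a set of business rules can be pictured as a simple rule table: each rule maps a version’s properties to a delivery target. The rules and destination names below are hypothetical, purely for illustration.

```python
# Hypothetical business rules: each rule pairs a predicate on a version's
# properties with a delivery target. A version can match several rules.
RULES = [
    (lambda v: v["territory"] == "FR", "vod-france"),
    (lambda v: v["resolution"] == "UHD", "uhd-platform"),
    (lambda v: True, "archive"),            # default: everything is archived
]

def destinations(version):
    """Return every delivery target whose rule matches this version."""
    return [target for rule, target in RULES if rule(version)]

v = {"territory": "FR", "resolution": "UHD"}
print(destinations(v))  # ['vod-france', 'uhd-platform', 'archive']
```

In a real MAM the predicates would come from configuration rather than code, but the principle – evaluate rules once per version, then deliver automatically – is the same.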






The rights to major sports command ever higher prices. Broadcasters and media companies who win these rights have to innovate to create new revenue opportunities to pay for them. There are two opportunities. First, the traditional linear broadcast service has to be ever more engaging. That means more highlights, more replays – more insight in general – keeping the sports fan watching. The second route is providing clips and packages tailored for other platforms. These can serve those who cannot sit down and watch the whole game but still want to keep up to date with the action. But there is also a sizeable majority of viewers – and it is moving towards virtually every viewer – who watch with a smartphone in their hand. They want more insight from the second screen.

The pressure is on to create ever more highlights packages, from 30 seconds around a goal being scored to a 30-minute summary of the game for the archive. Some viewers or subscribers might want to watch all the action from their favourite player, a particular game, or maybe a summary of the best goals of the weekend. Having craft editors create all these packages places huge demands on personnel and on equipment, which often needs to be rigged on location. Automating the production of sports highlights would have a huge impact on production costs.

Such a system needs to provide intelligent functionality to enrich and act upon metadata. It should assemble and develop metadata from multiple sources, then use it to guide automated clip creation. Finally, it should be able to interpret business rules which determine what to do with the

completed package, whether that is to deliver it to the director or to automatically deliver it across multiple platforms. Enriching the metadata will integrate tags from sports logging specialists, adding them to information already in the database. It should also use artificial intelligence to create additional logging points, for example by using speech-to-text analysis, using crowd sounds as well as the commentators’ words to judge where the key action starts and finishes.

Armed with this rich metadata, the system should then be capable of creating large numbers of clips. The vast majority of these clips should be created through automated decision-making. For example, where a clip is based on the live output, it should look at the video feed to minimise visual jarring. The AI edit functionality must ensure that these automatically generated clips are highly polished and ready to go. Should a clip need to go to a craft editor – and this should be a very small minority – then the timeline should be loaded with the automatically generated edit, and each clip given a descriptive name so the editor does not need to waste time working out which clip is which. Once complete, the clips can be offered to the director, or pushed to social networks or digital platforms, with the appropriate branding and metadata mapping.

Tedial has developed a solution for automatic sports clip creation, SMARTLIVE. Built on open standards, it is agnostic to the equipment around it and can be added to any production infrastructure. It can be implemented on premises, in the cloud, or as a hybrid. As sports rights become ever more expensive and fans expect more insight and more analysis, using an AI-enabled asset management and production system is the most comprehensive, most productive way to create additional content, satisfying consumers and increasing revenue opportunities without increasing human and resource costs.
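One way to picture the automated clip decision described above – using crowd sounds to judge where the key action starts and finishes – is a toy boundary search over per-second crowd-loudness scores. The numbers, threshold and padding below are invented for illustration; a production system would combine many more signals.

```python
def clip_bounds(event_sec, loudness, threshold=0.6, pad=2):
    """Grow a clip outwards from a logged event while crowd loudness stays high,
    then pad the result so the action is not cut too tightly."""
    start = end = event_sec
    while start > 0 and loudness[start - 1] >= threshold:
        start -= 1
    while end < len(loudness) - 1 and loudness[end + 1] >= threshold:
        end += 1
    return max(0, start - pad), min(len(loudness) - 1, end + pad)

# One loudness value per second of a (toy) audio analysis; the goal is at t=6.
crowd = [0.2, 0.2, 0.3, 0.7, 0.8, 0.9, 1.0, 0.9, 0.5, 0.3]
print(clip_bounds(6, crowd))  # (1, 9)
```

The same shape of decision – expand from a logged point while an interest signal stays above a threshold – generalises to other signals such as commentator speech or on-screen graphics.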



HOW WILL IMF IMPACT THE BROADCAST INDUSTRY?
TVBEurope talks to Julián Fernández-Campón, business solution director, Tedial, about the Interoperable Master Format (IMF)

PICTURED ABOVE: Julián Fernández-Campón

Tedial has been championing IMF. Can you explain how IMF works?
IMF is based on two key concepts: the composition playlist (CPL) and the output profile list (OPL). The first defines the continuous sequence of video, audio and subtitles on a timeline, without effects. The second defines the transformations required for the final version. IMF has several advantages: it’s based on deployed and tested standards that are constrained to ensure interoperability; it uses MXF for all media tracks (including subtitles); known codecs for video and audio; and, importantly, XML to define the package elements and the composition (more of which to follow). This makes it really simple for broadcast manufacturers to implement software that reads and manages IMF, instead of including that information in binary within the media itself.
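To make the CPL idea concrete, here is a drastically simplified, hypothetical CPL-like document parsed with Python’s standard library. It is not a conformant SMPTE CPL – real compositions carry UUIDs, edit rates and much more – it only illustrates the point that the timeline is plain XML that software can read.

```python
import xml.etree.ElementTree as ET

# A toy, CPL-like XML document (invented element names, not SMPTE ST 2067-3):
cpl_xml = """
<CompositionPlaylist title="Feature, FR version">
  <Segment>
    <Resource kind="video"    file="master_video.mxf"/>
    <Resource kind="audio"    file="audio_fr.mxf"/>
    <Resource kind="subtitle" file="subs_fr.mxf"/>
  </Segment>
</CompositionPlaylist>
"""

root = ET.fromstring(cpl_xml)
# Walk the timeline: each Resource points at an MXF track file.
timeline = [(r.get("kind"), r.get("file")) for r in root.iter("Resource")]
print(timeline)
```

Because the composition lives in XML rather than inside the media essence, a manufacturer’s software can discover what a package contains without decoding a single video frame.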

Can you explain how IMF improves broadcast workflows?
IMF was launched to solve two problems: the lack of interoperability and the lack of efficiency when delivering content. There are several benefits to using IMF in the broadcast media industry, which we will come to shortly. To put it simply: IMF allows broadcasters to know what type of media they are receiving and gives them the ability to adapt their workflows to support that media. In addition, IMF introduces the concept of supplemental packages so that they don’t have to replicate media for every new version, which significantly reduces processing times and optimises the use of storage. A film, for example, might need to be distributed to 15 (or more) different countries or territories, which means 15 different versions have to be created. With IMF, the



broadcaster knows what codec they’re going to be using, so it’s easy for them to adapt their tools to manipulate that movie. Also, using IMF, they don’t have to replicate all 15 versions. For a 90-minute film, for example, the only differences between versions are likely to be localisation elements, such as the title for translation and the audio tracks. This means the broadcaster saves huge amounts of storage, because they are only adding the new segments of media to the localised version.

IMF has traditionally been seen as something tailored to the film studios. Is it now addressing the real requirements of broadcasters?
Yes, IMF was born mainly to solve content distribution issues that the studios had. The first supported codecs were JPEG2000 and Simple Studio Profile (SStP), which were not really used in the broadcast world. This has now changed, and new codecs such as ProRes in its different variants, among other features, have been introduced to meet the needs of broadcasters. This is important to mention, as it means that the SMPTE IMF group checks market needs and evolves IMF accordingly to make sure those requirements are met.

What key challenges do broadcasters face when delivering to multiple platforms, and how does IMF overcome those challenges?
As we can see, the main challenge is generating multiple versions for different platforms and different deliveries. Without IMF, broadcasters would need a huge archive to keep all these versions, bearing in mind that each one would be the same size as the master. The costs in terms of processing time and power when creating all those versions would also be considerable. IMF simplifies the workflow; the master has all the media and the rest of the metadata, and the additional versions are supplemental packages that reduce the space and time needed to process and deliver the content.

How have MAM companies answered the requirements of new IMF technology?
When the Hollywood studios first adopted IMF, they solved their initial problem: to know what content they were going to receive and deliver, and to provide interoperability. The next issue to solve was storing all these IMF packages and generating new versions. To begin with, the studios put all the content into a shared storage system (NAS); they didn’t need a MAM because they could archive their content directly from the folder structure and regenerate new packages. But this was a very manual process. The next logical step for the studios was to implement an automated, end-to-end IMF MAM workflow.

Our company’s philosophy has always been to be one step ahead of the technological curve. We do this by adopting new standards that improve products and offer future-proof solutions to our customers. IMF is a perfect example of this. We introduced IMF into our MAM workflow in 2013, in parallel with its release, to easily define the delivery profiles for every destination without reinventing the wheel: the composition playlist (CPL) defines the components to be delivered and the output profile list (OPL) defines the transformations. We then extended this to offer a full IMF end-to-end solution, from ingest to IMF enrichment and delivery. It’s worth highlighting that if a broadcaster needs to deliver to a platform that doesn’t yet support IMF, they can convert the IMF package on the output. The CPL shows the EDL and the audio tracks that will be delivered to the territory, while the OPL shows the transformations required to generate the deliverable. When a broadcaster needs to deliver to platforms that are not IMF compliant, they will soon be able to transcode to a different codec or down-convert to HD using the OPL. This final point is currently being defined but will be available soon.

What do IMF MAM systems allow broadcasters and content owners to do that they couldn’t do previously?
An IMF MAM leverages IMF by allowing broadcasters and content owners to import, manipulate and deliver IMF seamlessly, like any other format, implementing business rules that are not possible when managing IMF packages individually, without the concept of a ‘Title Centric View’. A MAM that supports IMF enables the assembly of new packages with compatible components on the fly, without the use of a mastering tool; takes IMF ingest decisions regarding duplications and incremental ingests; and relinks supplemental packages with the watermarked master.
To put it simply, the IMF MAM orchestrates all logistics.
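The storage argument made earlier – one master plus supplemental localisation packages instead of 15 full copies of a 90-minute film – can be put in rough numbers. All figures below are assumptions for illustration, not from the interview.

```python
# Back-of-envelope figures (assumed, not from the article):
video_gb = 200     # video essence of a 90-minute master
audio_gb = 2       # one localised audio track
versions = 15      # territories to serve

# Without IMF: every territory gets a full, duplicated master.
full_copies = versions * (video_gb + audio_gb)

# With IMF: one master, plus a small supplemental package per territory.
imf_packages = video_gb + versions * audio_gb

print(full_copies, imf_packages)  # 3030 vs 230 GB
```

Even with these made-up numbers the ratio is stark – roughly an order of magnitude – which is why the interview stresses storage and processing savings as IMF’s headline benefit.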

Answering the multi-delivery and multi-version requirements of global broadcasters, Tedial’s HYPER IMF is an end-to-end MAM solution that supports IMF formats for ingest, archive and delivery. The company’s IMF Markup Tool provides a simple-to-use editor that addresses the versioning requirements typically needed for distribution. Designed for fast validation and repair of IMF packages, this intuitive tool functions independently of expensive third-party systems, making it an extremely cost-effective solution that is easy for users to operate. Tedial is part of the IMF User Group.






The growing popularity of live sports has led to the rapid adoption of subscription video streaming services. While Netflix recently announced it has no plans to invest in live sports rights, there are lots of players looking to take the crown as the “Netflix of sports”. Specialist streaming services DAZN and Eleven Sports are notable examples that have made a play for this lucrative market. And while the demand for live sports is huge, only half of the opportunity is being seized. Accelerating output to keep up with content-hungry sports fans is the next big challenge.

Traditional set-ups used to broadcast live sports are expensive and resource-intensive. Coupled with shrinking budgets, broadcasters are under pressure to explore innovative production methods to keep pace with the demand for live sports content. This is where artificial intelligence (AI) is showing its worth, and its potential to transform live sports streaming is vast. Broadcasters are keen to adopt new technologies that can revolutionise the industry by providing massive operational cost savings. And in the same way we have seen costly satellite trucks replaced by backpack-sized live video transmission units, AI will deliver similar cost savings to live production and streaming. That might come from using AI to analyse audio signals, video images, people or objects to identify which cameras to switch to and control, removing the need for an expensive camera crew, or from AI and machine learning algorithms that can generate replays and graphics and live stream content from glass to glass immediately.

USING AI TO LOWER THE BARRIER TO ENTRY
Live sports coverage has largely focused on the big tournaments, such as the Premier League in football, and other money-making sports such as world championship boxing.
Lucrative sports rights deals from the likes of Sky Sports and BT Sport, and now new players like Amazon, are all centred on these higher level leagues or big events such as the US Open Tennis Championships. Yet demand to watch live streams of lower league football clubs, or ‘less mainstream’ sports such as hockey and handball, is there. These sports have typically had a difficult time getting coverage on television, but still have large fan bases that would want to watch this content live. AI technology can help smaller clubs, or more niche sports, to follow in the footsteps of the EFL and livestream games. With AI, motion-tracking automated cameras can be used to stream live sports events from any location. An AI-enabled camera can be taught to select different camera lenses during a game to offer the same camera control associated with high quality productions. This technology removes the requirement to install expensive camera and production facilities at sports grounds, and doesn’t need a dedicated camera crew or director to operate. And as AI algorithms develop, this technology can detect

certain types of behaviour to ensure the most interesting content is being captured – acting as a director – and allowing content to be live streamed from the field. But there is more potential for AI beyond capturing and streaming live sports content. AFC Ajax Football Club is, of course, very far from being classed as a smaller or lower-league club. In fact, it’s the biggest club in the Netherlands and part of the UEFA Champions League, so its games are aired courtesy of BT Sport. But interestingly, Ajax is using AI technology to film and stream its matches. The footage isn’t captured to live stream its games to fans but rather for player performance analysis during training. Using a motion-tracking automated camera and AI-based software, the Ajax training academy can examine why a player missed a goal or failed to make an assist, and help improve their performance. Image detection means the AI technology can recognise different players and follow them, or detect a ball on a pitch and follow its movements. The potential for AI in this respect is huge, especially in the production of live sports content. As algorithms develop, AI can detect fouls (yellow or red cards) or injuries as it learns how to make productions more interesting and story-like. There are plenty of smaller sports that could use this technology to become content owners in their own right at a low cost, and then monetise that content.

CAN AI DO IT ALONE?
Completely automating live sports streaming without the need for a production crew and director is perhaps a bridge too far – for now. However, AI used for assisted production alongside humans is achievable. One of the obstacles to fully automated streaming is that it simply takes time for an algorithm to learn the nuances of what is interesting or important for each sport.
For example, an algorithm may think that capturing a fight breaking out during a football match is the same as capturing a punch being thrown during a boxing match – but for the viewer, these are two very different experiences: one is normal, and one isn’t. Getting the algorithms up to speed requires time, so there is still very much a role for humans. What will eventually happen is that as AI technology is used for capturing, the skills that people have will be applied to enhance the viewer experience. Investment in AI now can massively reduce costs for smaller clubs, allowing them to affordably produce and stream live sports content and monetise it to their huge fan bases. We know fans are thirsty for content, and AI enables fans to watch more of their favourite team or sport on television, or in some cases to watch it for the first time. Advances in AI are already making this cost-efficient and engaging. So while there is still some way to go, there’s no doubt that the future of live sports streaming depends on AI.
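The camera-selection idea discussed above – AI analysing audio, images, people and objects to decide which camera to switch to – can be caricatured as a weighted scoring rule. The signals, weights and camera names below are entirely made up for illustration; real systems learn these trade-offs rather than hard-coding them.

```python
# Toy "AI director": pick the camera whose feed currently scores highest on
# simple interest signals. All numbers and camera names are invented.
WEIGHTS = {"motion": 0.5, "audio": 0.3, "faces": 0.2}

def pick_camera(scores):
    """scores: {camera: {signal: value}}; return the most interesting feed."""
    def interest(signals):
        return sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)
    return max(scores, key=lambda cam: interest(scores[cam]))

frame = {
    "wide":      {"motion": 0.2, "audio": 0.9, "faces": 0.1},
    "goalmouth": {"motion": 0.9, "audio": 0.9, "faces": 0.6},
    "bench":     {"motion": 0.1, "audio": 0.2, "faces": 0.8},
}
print(pick_camera(frame))  # goalmouth
```

The article’s point about sport-specific nuance maps directly onto the weights: a fixed table like this is exactly what takes time to learn, and differs between, say, football and boxing.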



THE DOS AND DON’TS OF LIVE VIDEO STREAMING
By James O’Farrell, head of live video, Trickbox TV

PICTURED ABOVE: The Big NHS Singalong Live


Video streaming is everywhere. Earlier this year, a Research and Markets report projected the global video streaming market to be worth $70 billion by 2021. Streaming content has become essential for broadcasters and corporates who want to connect with their audience by increasing their video output in the most cost-effective way. Of course, there are also those who are streaming-only. Now, with live streaming, companies have the opportunity to get really creative with their video marketing approach, and to do it instantaneously. The key is to do this effectively. From a professional viewpoint, the way to set up a high-quality live streaming project, and thereby achieve

maximum reach, is to treat the stream as a broadcast. The mistake people often make is thinking that live streaming is a quick and easy recording that you can do using a smartphone, but this approach loses far too much production quality. From the very start, the project should be treated with the same respect as any live TV show, and to achieve that, planning is key. At Trickbox, we plan a live stream project exactly the same way as we plan a traditional OB broadcast project. We spec the kit in the same way – the lighting, how we’ll work at the venue site – and that enables us to deliver a quality live stream. To maximise the planning process, production teams should be involved and budgets agreed. Some low



budget jobs have one engineer vision mixing, streaming and delivering the content, which, if you want a quality project, is too much for one person to do in a very short space of time. The minute you start adding people to the production process you see an instant improvement in the live stream. As our streaming capability has grown through the broadcast side of the business, we can use the same technologies to deliver a stream to Facebook or an uplink to a network television master control room. This means we can deploy solutions faster and with greater reliability. From our perspective, the concepts of streaming and broadcast delivery have become synonymous and go hand-in-hand. With broadcast clients, the jobs that Trickbox handles can vary greatly. One day we can be at a concert, the next day an awards show, and the next a Royal Wedding! For big broadcasters like ITV – as well as offering traditional broadcast connectivity on some projects – we’re seeing an increase in IP delivery, with solutions from manufacturers like LiveU, TVU Networks and AVIWEST. The recent This Morning Live Wedding project, which we broadcast live from the Royal Albert Hall, is a classic example. For this we supplied ITV with a multi-camera fly-away solution that included six HD fibre camera channels, and a fly-away control area with positions for the ITV production team and Trickbox engineers. We also managed all the connectivity for the live broadcast, which included three independent circuits to ITV (a main and two backups). In July, we supported the live stream from the Coronation Street cobbles in Manchester for the Big NHS Singalong Live. This was a one-off ITV event produced by Endemol Shine North, which saw the NHS Choir joined by pop stars and celebrities in a unique, live singalong celebrating the NHS’ 70th anniversary. The event went out live on ITV with streams from Abbey Road Studios in London and other parts of the country, including Manchester.
The project was run by One Ten, who positioned an OB truck at Abbey Road Studios. The concept was that all the sites would sing at the same time, and all the feeds were synced in the truck and broadcast live on ITV as one song. With this type of job it’s imperative to have kit that

provides low latency. It was a huge success and is an indicator of how far low-latency streaming technology has come. What’s becoming clear is that the lines between broadcast and live streaming are blurring. We live streamed the recent Rated Awards 2018, hosted by Mo Gilligan and Julie Adenuga at the Eventim Apollo in London, to YouTube Live. The edit was then recut and aired on Channel 4 a few days later. The audience for this event is Generation Z, who don’t watch much TV, and event organisers like these are ensuring that they provide the budget to meet the needs of this demographic. As well as providing live streaming OB projects, Trickbox also provides studio-based live streaming and broadcast connectivity from its central London Tower Bridge TV Studios. Typically, clients use the studio to transmit to the relevant broadcaster – either via traditional broadcast or IP delivery – or to social platforms and client websites, using IP live streaming. For the Royal Wedding between Prince Harry and Meghan Markle, Trickbox delivered content using LiveU and a circuit to BT Tower to two major US television magazine shows, Inside Edition and Entertainment Tonight, both on the CBS network, and Australia Network Ten’s The Project. For Network Ten, we also provided a return off-air IP feed for the production team. Earlier in the summer, we live streamed a “fake news” PR stunt to celebrate the release of Jurassic World: Fallen Kingdom. The production team used the studios to provide the vantage point of a T-Rex, which was positioned aboard a boat sailing along the Thames. The studio also provided the team with a hub for the production. We supplied a LiveU bonded uplink unit, positioned on a second boat following the T-Rex, to send the live footage back to the studio via a LiveU server. This meant the footage could immediately be edited for fast turnaround and delivered to big screens across London.
It will be interesting to see how live streaming continues to develop over the next couple of years. As social media departments increase their video budgets, productions will continue to improve and the volume of content will grow, which is exciting for us.

‘From our perspective, the concepts of streaming and broadcast delivery have become synonymous and go hand-in-hand.’



ESPORTS IS EXPANDING
Philip Stevens takes a look at the increasing popularity of esports.


Esports refers to competitive gaming, in either a one-versus-one or team-versus-team format. Most commonly it takes the form of organised competitions, online or live from a venue. According to Newzoo, the global esports audience should reach 380 million this year. This could be made up of 215 million occasional viewers and 165 million esports enthusiasts. The EU accounts for 18 per cent of esports enthusiasts in 2018. So, why do people like watching this type of competition? “Esports allows viewers to watch the absolute world’s best players at the game they enjoy. You can learn while enjoying some real entertainment. Tension, rivalry,

competition, other emotions – esports has some great stories to tell,” explains Solenne Lagrange, marketing director of GINX Esports TV, the first and largest esports TV network, available in more than 55 million homes across more than 50 territories in 10 languages. “We provide unique round-the-clock esports coverage worldwide, offering both international and regional perspectives to our viewers. We are entirely independent, and have the largest content offer on the market, which we curate from numerous organisations such as ESL, Starladder and Valve. From news shows, tutorials and docu-series to live tournaments, we provide an authentic esports voice in TV, covering its every facet.”



PRODUCTION AND POST

GINX TV was founded in 2007, but the channel became GINX Esports TV in June 2016 when it moved from gaming to esports. “We operate from studios in London. From there, our teams travel to the main esports events worldwide. GINX Esports TV not only broadcasts the tournament itself, but also provides exclusive perspectives to its viewers, such as behind-the-stage content, interviews and analysis. Event organisers usually manage the tournament side of things – we receive a signal that we then deploy to each one of our feeds – but GINX TV creates its own ‘shoulder content’ material that revolves around tournaments and which helps to tell the story,” says Lagrange.

Content on GINX TV can be split into several categories. There is ‘hardcore gaming’, with content dedicated to helping viewers improve their gaming level or providing very specialised input. Next is ‘competitive gaming’, which includes the events themselves, but also highlights packages or player analysis, for example. And finally, GINX provides a more ‘casual gaming’ offer that includes everything from speed-runs to tops or unboxing sessions.

RACING AHEAD
Here are some more statistics. It has been reported that the esports economy will grow to $696 million during 2018 – a year-on-year growth of over 41 per cent. Beyond that, brand investment is expected to double by 2020, creating a total market valued at $1.5 billion. So, good business potential. And one that has readily been adopted by some already well-established names.

Take, for example, Formula One. The sport took its first steps in the world of esports in August 2017. The primary objective was to provide a tool to reach out to a new audience – one that is younger, more digitally savvy, global and growing, and one that F1 might not have traditionally spoken to that effectively prior to this decision. “That audience is part of our vision to grow the sport,” explains Julian Tan, Formula One’s head of growth and F1 esports.
“But also, we realise that, as the esports industry grows, there might be a commercial opportunity. It’s not there yet, but the trends are there and offer good potential.” He continues, “There are a lot of parallels and synergies between F1 esports and real-life F1 races. The game is really realistic, and so for the race enthusiast there is a great deal that is familiar. Racing is easy to understand – and the e-races are just as readily understood.” Tan explains that the races in F1 esports are shorter – and therefore more interesting – than the real thing. What’s more, the cars are equalised, which means

their performance is the same and that, in turn, creates more drama and more overtaking on the track.

TEAMS, TOO
Alongside the Formula One involvement, there is also considerable benefit for the F1 teams. Again, it is all about reaching out to a new audience – and in the process realising there might be commercial possibilities. As a result of forming esports teams, the F1 teams have seen sponsorship opportunities arise. For example, Force India launched a new esports team with backing from Hype Energy to coincide with the start of the official F1 Esports 2018 championship.

“I think F1 teams are embracing this industry which is growing tremendously. In fact, more broadly, anybody not embracing esports right now would be missing out on opportunities.” JULIAN TAN, FORMULA ONE



Tan reveals that a number of different formats have been tried – online, live events at a venue and even holding e-races at an actual Grand Prix. “All have pros and cons – but we can create something unique by combining the magic of esports with the power of Formula One.” All the live events are owned and operated by Formula One, working with games publisher Codemasters and esports experts Gfinity. The television broadcasts are distributed worldwide through global networks including ESPN in the US and Sky Sports in the UK. The live broadcasts are available online exclusively through Facebook. “These esports events are fully produced with a presenter and a team of commentators and analysts in our studio,” reports Tan.

FOOTBALL FANS
In January 2017, F.C. Copenhagen and Danish entertainment company Nordisk Film moved into the esports business through the acquisition of the international Team Dignitas. Unlike some other football clubs, which have given their names to their esports teams, F.C. Copenhagen decided to be different and create a new company called ‘North’. It was felt there was a danger that ‘Copenhagen’ would limit the esports team’s activity to that city. “Both F.C. Copenhagen and Nordisk Film wanted to get involved in the biggest-growing sport based on economics and followers globally,” explains Christian Slot, head of press for North.

So, how does this partnership work? “Nordisk Film develops, produces and markets films and TV series across the Nordic region, as well as operating the leading cinema chain in Denmark and Norway. It is also highly committed to gaming and is responsible for the distribution of PlayStation products in the Nordic and Baltic countries, as well as being a keen investor in Nordic gaming studios.” Slot continues, “F.C. Copenhagen is Scandinavia’s largest football club, with a healthy business, professional setup, strong sporting success and a stadium with capacity for around 40,000 seated guests.
The club has a strong professional sporting model to inspire and influence North – for example, in performance, scouting and recruiting – and a large and highly ambitious backroom staff, including physiotherapists, medical staff and a youth academy from which North can draw and learn.” Slot explains that North currently consists of one Counter-Strike: Global Offensive team. The team of five players has training facilities in Telia Parken, the home ground of F.C. Copenhagen. Although the team mostly plays tournaments abroad, for the past couple of years it has also participated in a few Danish tournaments, such as the BLAST Pro Series and Copenhagen Games.

Most recently, the team has won DreamHack Open Valencia 2018 and DreamHack Open Tours 2018. Slot goes on, “Counter-Strike: Global Offensive is mostly viewed on Twitch - last year Facebook made a huge deal with ESL (formerly known as Electronic Sports League) about broadcasting the season. In Denmark, TV2 and the Danish Broadcasting Corporation also cover the games, and one event even broke viewership records on a linear ‘flow TV’ channel.” The football club academy model can also work for esports teams. “We are aware of almost all players aged 14 and older in Scandinavia who play competitively. We know who they are and their skill sets, and then establish a plan for the best time to contact them and invite them to our esports academy.” “The project has been a success,” reveals Slot. “We hope to develop a sustainable business in the coming years. We’re constantly looking to monetise further and expand into more games.” So much for production, but what kit is needed for these new challenges? A case study helps explain.

INVATE INNOVATES
Studio INVATE is a production company based in Bangkok that produces live-streamed broadcasts for 15 major esports tournaments every year, each with up to 40 gamers competing and up to 50,000 spectators viewing the transmission. When INVATE wanted to expand, it was faced with the limitations of SDI-only production in a computer-based world. As a result, Pachara Ruangrasameejantorn (Benz), the company’s co-founder, and his team knew they had to go with IP video production. The team already used the NewTek TriCaster 8000 integrated production system along with the NewTek 3Play 4800 replay solution for their live services, but the requirements in esports were changing quickly.
JM Lim, technical training specialist for NewTek Elite reseller-distributor Blonde Robot, says: “When you talk about esports, and covering these competitions in terms of game play, you could have 10 – or even 20 – individual players.” Since anything can happen at any time, each player has to be covered by dedicated cameras. Blonde Robot director Eamon Drew adds: “Switching live between real-world players and the actual game on a computer screen, without having to use scan converters – and without coming out HDMI, converting into SDI, and then going into the TriCaster or any other switcher – is a challenge. With esports, the content is on a computer and playing out at 1080p60. Using NewTek’s NDI technology, the video coming over the network is already



in the right format.” Looking at the size of the typical tournament events INVATE produces, it was calculated that there was a need to switch between at least 24 hybrid IP/SDI feeds. “Twelve come from SDI sources, such as the players’ POV cameras, the announcers’ fixed cameras, and the roaming action cameras. The rest are from computer sources, like the gaming screens of the players, plus graphics and effects, which we make high-end, similar to live sports,” explains Benz. In order to hit that 20+ input requirement, with most sources coming in over IP, Benz selected a NewTek IP Series including a Video Mix Engine VMC1 - which can switch up to 44 external inputs, from both SDI and IP sources - and a 4-stripe Control Panel. In the first instance, INVATE expanded the VMC1 unit’s built-in SDI capacity with a NewTek NC1 SDI input unit to achieve a total of 12 SDI inputs. They then added a NewTek TimeWarp replay controller, so they could have an operator performing live replays during the tournaments and post-game. “We use CasparCG. It’s open source so we’ve been able to make use of the full graphics engine and customise the CG application,” reports Benz. “NDI allows us to send both Key and Fill into IP Series easily without worrying about connecting more BNC cables, so we pull graphics into our IP workflow during game production.”
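CasparCG is driven by plain CRLF-terminated text commands (its AMCP protocol, sent over TCP, by default on port 5250). As a rough illustration of how a customised CG application like INVATE's might trigger a graphics template with data, a command string can be built like this. The channel, layer and template names are invented for the example, not taken from INVATE's setup:

```python
import json

def amcp_cg_add(channel: int, layer: int, template: str,
                data: dict, cg_layer: int = 0, play_on_load: bool = True) -> str:
    """Build an AMCP 'CG ADD' command that loads (and optionally plays)
    a CasparCG graphics template with JSON data attached."""
    payload = json.dumps(data).replace('"', '\\"')  # escape quotes for AMCP
    return (f'CG {channel}-{layer} ADD {cg_layer} "{template}" '
            f'{1 if play_on_load else 0} "{payload}"\r\n')

# Hypothetical lower-third on channel 1, layer 20, carrying a player name
cmd = amcp_cg_add(1, 20, "lowerthirds/player-name", {"f0": "Benz"})
print(cmd)
```

In practice the resulting string would be written to a socket connected to the CasparCG server; building the command as a pure function keeps it easy to test without a server running.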

“Event organisers usually manage the tournament side of things - we receive a signal that we then deploy to each one of our feeds - but GINX TV creates its own ‘shoulder content’ - material that revolves around tournaments and which helps to tell the story.” SOLENNE LEGRANGE, GINX ESPORTS TV







The dual rear cameras on the iPhone X make it almost as capable as a professional camera. You can shoot 4K video at 60 frames per second, film in slo-mo, and use continuous focus and optical image stabilisation to add high-end polish to your videos. And these cameras, the ones that we carry around in our pockets every day, are only going to get better. The rapid rise in the specification of iPhone cameras has opened up opportunities for professional teams to use them as legitimate filmmaking tools. Oscar-winner Damien Chazelle used an iPhone to shoot test footage for La La Land’s opening traffic jam dance sequence. Sean Baker used three iPhone 5s to shoot his award-winning film Tangerine. And, since shooting his latest feature film, Unsane, entirely on iPhone, esteemed director Steven Soderbergh has pledged never to look back.

And it’s not just for the big screen where we’re seeing iPhones used for filming; smartphones are used increasingly alongside professional cameras for television. The iPhone is a perfect companion for the reality contestant tasked with shooting behind-the-scenes content or baring their soul in video blog fashion. And it fits in equally well when filming documentaries and newsgathering. For example, BBC News launched a mobile journalism (or ‘mojo’) pilot scheme this year where they gave 15 professional crews iPhones to shoot with, and the reports back so far seem overwhelmingly positive.

Let’s not get carried away though; there are still plenty of shooting scenarios when you need the full power of a RED camera, or the advanced options offered by Sony’s CineAlta line of cameras, to achieve the look you’re aiming for. But there are instances where an iPhone can work just as well: if you’re shooting in tight spaces, to capture a spontaneous moment, when you want filming to go unnoticed, or simply when a smaller lens will make a nervous subject feel more at ease in front of the camera. In the right situation, there are few credible technical reasons not to use an iPhone (or at least nothing that can’t be solved by investing in a gimbal or an app). Except, that is, if you ask someone in post production.

Searching through iPhone footage, aligning multiple angles, and editing it together with separate sound or professional footage, is a time-consuming, manual process for editors. Adding an iPhone to a shoot might offer convenience for the film crew, but it often creates a huge amount of hassle for the edit team. This is largely down to timecode, or more specifically, the lack of timecode in iPhone camera media. Using external timecode devices ensures every camera and audio source is jammed to one incredibly accurate master clock. Each frame recorded is stamped with a timecode reference and if each camera and audio device on a shoot is running timecode, all sources can be synced



at the point of shooting. This metadata is embedded into the recording device’s media and allows footage to be dropped into the edit timeline and automatically aligned. This makes it quick and easy for editors to find a particular moment across multiple camera and audio sources by referencing the frame-accurate timecode. However, since iPhone cameras are built primarily for the consumer, they don’t support external timecode out of the box; there is no timecode or LTC port, and no alternative option for plugging in a timecode generator. And relying on the iPhone’s internal clock unfortunately won’t solve the problem, as it simply isn’t designed to be accurate or consistent enough for the frame-accurate synchronisation editors need.

It’s been an exciting time for sync technology recently. In the last couple of years we’ve seen some massive advancements, not only in terms of reducing the size and cost of timecode solutions, but also with solutions becoming more widely compatible with consumer-level devices such as GoPros and DSLRs. But, until we launched the UltraSync BLUE in October, there was still no way to embed frame-accurate timecode into sound and video recordings captured on iPhones. This was the biggest thing missing from the timecode market, and one of the most important reasons for us developing UltraSync BLUE.

Removing the need for cables was key to creating a sync solution for the iPhone. UltraSync BLUE uses Bluetooth pairing to connect to the iPhone. Problem solved. The next challenge was finding a way for the iPhone to accept external timecode. The latest releases of the MAVIS professional camera app and Apogee’s MetaRecorder app include compatibility with our patented Bluetooth timing protocol.
This means once these apps are connected to an UltraSync BLUE, timecode can be transmitted wirelessly over Bluetooth to the iPhone, where it is embedded directly into media files generated from video shot on the MAVIS app or sound captured on MetaRecorder. Because the UltraSync BLUE is wireless, crews can use one unit to sync up to four iPhone cameras shooting in close range over Bluetooth. Alternatively, crews can use UltraSync BLUE to sync the iPhone over robust, long-range RF to other camera and audio recorders using Timecode Systems units, or even equipment containing a Timecode Systems OEM sync module (for example, the upcoming AtomX SYNC extension module for the new Atomos Ninja V). How could this work out in the field? Imagine you’re on a reality shoot – there’s a professional camera filming the main content, a sound recordist capturing audio, and GoPros rigged around the set. Sync is under control with

Timecode Systems units running on each device. You have a :pulse providing the master timecode from the sound mixer bag, an UltraSync ONE on the full frame camera, and SyncBac PROs on the GoPros. Then the director asks for some last-minute additional angles using iPhones. With UltraSync BLUE, you now have the technology to make sure this iPhone footage is in sync too. The pressure on television crews to capture more and more content within the confines of each day’s shoot means iPhones are becoming commonplace in the film and broadcast world. Advancements in camera technology mean that, visually, iPhone footage works seamlessly with video captured on high-end professional equipment. The challenge is now for production and post production workflows to adapt to integrate iPhone footage and process it more efficiently. Getting iPhones in sync is definitely a great start.
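The frame-accurate alignment that jammed timecode makes possible comes down to simple arithmetic: every source stamped from the same master clock can be placed on a shared timeline by converting its start timecode to an absolute frame count. A minimal sketch of that maths, for non-drop-frame timecode only (the function names and example values are illustrative):

```python
def timecode_to_frames(tc: str, fps: int) -> int:
    """Convert a non-drop SMPTE timecode 'HH:MM:SS:FF' to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def align_offset(clip_a_start: str, clip_b_start: str, fps: int) -> int:
    """Frames clip B must be shifted by to line up with clip A on a shared timeline."""
    return timecode_to_frames(clip_b_start, fps) - timecode_to_frames(clip_a_start, fps)

# Two sources jammed to the same master clock, one started 2s 12f after the other:
offset = align_offset("01:00:00:00", "01:00:02:12", fps=25)
print(offset)  # 62 (2 seconds * 25 fps + 12 frames)
```

Drop-frame timecode (used with 29.97 fps material) skips frame numbers to stay in step with real time, so a production tool would need extra handling there; the principle of converting to a common frame count is the same.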



INVESTIGATING LUTHER With Golden Globe-winning TV series Luther set to return after a two-year hiatus, director of photography John Pardue, and Philip Large, CEO of entertainment recruitment community The Mandy Network, discuss the production process behind the show and the essential traits DoPs must demonstrate to succeed within today’s competitive TV landscape.



How did you get into the industry?
John Pardue: I got into the film industry by chance. I lived in Manchester and used to play in a lot of bands, and so was in the local music scene. Some of my friends were working at The Haçienda, making music videos and, because I was good at photography, they thought I should be the cameraman. That’s how it started and then I obtained a more professional job at Granada TV, working as a camera assistant. I assisted some great cameramen on documentaries and TV drama and learnt the correct way to do things.
Philip Large: After graduating from university, I found a number of my friends were struggling to break into the film and TV industry. They had the talent but just lacked the opportunity. The more I learned of their plight, the more I wanted to get involved and do something to help them out. Mandy was created about 15 years ago, and leading the business for so many years has given me a unique insight into the recruitment process for production professionals, as well as the issues and opportunities that the industry at large faces right now.

What is the process of working on shows such as Luther?
JP: It’s quite a hard shoot, because it’s all at night, the BBC don’t give it the biggest of budgets and the ambitions are very big. There’s a lot of lighting; a lot of streets that need to be lit up; Highgate Woods needed to be lit up. The structure of it is still like a TV show and the shooting schedule too. Perhaps it’s a movie in the sense that we’re doing more cinematic shots and it’s written in a way that the plot doesn’t follow a lot of dialogue. There’s a lot of visual stuff that takes you through it. I think we achieved a production value worthy of a well-produced movie.

What are the technical challenges that DoPs face when working on TV series today?
JP: Luther is quite ambitious for a TV show.
It has a little bit of a ‘police show’ aspect but really it’s about a central character who’s mixing with the underworld and is also a policeman trying to investigate some pretty dark murders and events. As DoP on Luther, I had a very short prep of two weeks. That was before Christmas and then we were straight into it. The prep was tough, because there were some big things to try to organise.

How has the industry changed for crew?
PL: The web and the rapid growth of the major online streaming services has made a significant impact on the industry in recent years. The explosion in budgets and production value has meant that the ‘technical art’ of

creating is given more significance and with it a greater opportunity for creative expression. As a result, more credit is being given to technical roles. This is having a knock-on effect from a recruitment perspective and we are seeing more people entering the industry with aspirations to be on the technical side. This brings with it more creativity and new perspectives into the industry, which is fantastic. However, it also means competition is greater and standing out is more of a challenge. Technology means it’s easier to distribute your work than ever before, but at the same time, the importance of being able to market yourself and your work has never been greater.
JP: I think it’s very different now to when I first started working in the film industry. When I started, it was an unknown world – if you didn’t know about it, you could go through your whole life and not realise it was a career opportunity. When I became a focus puller, before I was a director of photography, I used to tell people I was a focus puller and no one knew what that was, but now I think a lot of people do know what it is. There is a different awareness of the technical side of film-making now.

What advice do you have for those entering the industry?
JP: There’s a lot more work and a lot more opportunities, but there are also a lot more people doing it. The key thing is to choose your jobs; sometimes, you have to turn a job down because you have to wait for a better one to come. Ultimately, you want to try and work with talented people, who you get on with and who inspire you. You don’t just want to take every job that comes along.

PICTURED ABOVE: John Pardue (top); Philip Large

PL: For a budding DoP, or anyone involved in the technical side of filmmaking, there are certainly more varied opportunities now to get your work seen. Knowing how to promote yourself is key, given the competition for opportunities, whether it’s creating a portfolio of work online or cultivating your own professional social networks. The range of opportunities is increasing but the best roles are still more likely to go to DoPs who can demonstrate strong commercial acumen that sits alongside their artistic vision and technical skillset.
JP: If I could give any advice, I would say try to do jobs that enrich you in what you’re doing, as opposed to stuff you’ve done before. Try to do something different every time. Try to work with new people and try to work with good people, who have decent ideas. You will only ever be as good as the people you work with.



COMBINING AR WITH LIVE PERFORMANCE Philip Stevens explores digital technology that enhances opera – and can pave the way for broadcast applications


In this centenary year of the enfranchisement of women over 30, the Welsh National Opera (WNO) has produced a cabaret opera called Rhondda Rips It Up! that highlights the work of suffragette, activist and entrepreneur Margaret Haig Thomas, the Viscountess Rhondda. However, WNO didn’t want to produce just a regular stage production. Known as an innovator in the world of opera, WNO wanted to enhance the production value of the musical, further engage audiences, and augment the narrative in a very visual way. “We wanted to take the experience outside of the confines of a traditional theatre/stage environment and

tell the story of Lady Rhondda in a site-specific space that holds more of a truth to her story,” explains David Massey, WNO digital producer. “Our vision was to create something evocative, drawing on technology, music, sound and performance, allowing audiences to take part in a theatrical, magical experience as soon as they stepped into the courtroom lobby.” He continues, “Continuing our R&D work exploring the realm of digital and performance, we asked St. Albans-based immersive content studio REWIND to work with us to create a bespoke experience that blends live theatre with augmented digital content.” Taking a key scene from the musical – the courtroom



trial – REWIND used augmented reality (AR) to create digital content for a new and stimulating audience experience. “This is the second piece of work we have carried out for WNO,” explains Greg Furber, REWIND’s creative director for the AR project. “Last year, we collaborated on a virtual reality production called Magic Butterfly, which was a combination of two operas - The Magic Flute and Madame Butterfly. The production has won several awards and received very positive responses from viewers of all ages from 14 to 94.”

APPROPRIATE LOCATION
For this year’s project featuring the activities of Viscountess Rhondda, various ideas were put forward, and eventually it was decided to use the Sessions House in Usk, where the trial of Margaret Haig Thomas took place. “The idea was to bring her back to that courtroom to tell her story using elements from the operetta and some scripted sections,” says Furber. This was to be accomplished by pre-recording the actress playing Viscountess Rhondda and then inserting the video into the live performance of the actors playing the jury and judge. “This was a traditional green screen shoot using Sony A7 Mark III cameras. There were four positions that needed to be recorded to cover the various parts of the courtroom. Because there could be no editing within those four positions, each was accomplished with 15 to 20-minute single takes.” Furber says that one advantage of using theatre-trained actors is that they are used to working in a live environment where no retakes are possible. “Television talent know that, usually, a fluff can be covered by a retake. On the stage that is not possible.” Of course, the timings of speech from the ‘Viscountess’

in the pre-recorded sections were crucial, as the live actors would be reacting to the AR playout. “During the pre-record, we would read the lines that would be performed live – not only to get the timings right, but to ensure the appropriate reactions from the actress. And that also meant that the live actors would need to know the speed of their own deliveries. In short, there was a great deal of planning!” After the shoot, around six weeks was needed to get the material ready for a rehearsal of the whole production.

PICTURED ABOVE: Recording the audio: timing was crucial at all stages.



COMPREHENSIVE CUES
To cover the cuts in the edited pre-record, Furber says that the video contained cues for lighting changes that made the scene changes unnoticeable to the watching audience. “In fact, the lighting also helps us control the space. As we see Rhondda talking about her emotions during a train journey when she went on to pick up explosives, the lighting helps set the mood.” After that, she moves into the witness stand and the courtroom comes to life with the live actors playing their part alongside the augmented reality Lady Rhondda. “The video also contained audio cues which enabled the sound to come alive in an immersive sense for the audience as the action moved to different areas of the courtroom. The audio came from multipoint speakers with directional sound - we used software called CueLab to cue different volumes for different locations. It also triggered the lights, and cued the iPads to start the experience. We can do an awful lot with a 5.1 system.” In order to place Rhondda correctly throughout the performance, AR tracking markers were hidden within the court environment, with a networking setup also in play to ensure everything was synced to the millisecond.

THE PERFORMANCE
So, what happened when the whole production was ready to be performed? When the audience arrived at Sessions House they were greeted by two ‘policemen’ who talked them through evidence from the crime scene and then escorted the public into the courtroom. “An iPad was shared between the audience members, and by looking through the screen they could see the lady come to life in the space of the courtroom. They saw past her, enabling them to view the judge interacting with Rhondda.” The audience reaction, says Furber, was extremely positive. Following the premiere in Usk, Rhondda Rebel AR has toured as an innovative installation in locations across Wales and England.
The touring installation was a portable version of the live experience, triggered by an A3 marker built into a desk, placed as if left there by Lady Rhondda herself. Through a portal that appears in front of them, users are given a direct face-to-face with Lady Rhondda, making it look like she has just stepped away from her desk for a moment.

FUTURE USES
So, what lessons were learned and how could this innovation be transferred to a broadcast environment? “There are two considerations regarding the shoot,” states Furber. “We shot this one in 4K, but I would be

looking to 8K for future AR productions like this to improve the resolution of the final presentation. “But the real move forward would be to do a volumetric shoot. This involves having 116 cameras filming from all directions. When you shoot volumetrically it means that, as you move, your view adjusts just as it would with a real person. But, of course, there is a cost implication.” As for the broadcast possibilities, Furber sees no reason why the same type of technology cannot be implemented. “If, for example, the opera was shot using a multi-camera setup, there would have to be careful liaison between the production company creating the AR element and the director of the broadcast so that camera angles could be exact.” Furber continues, “There are lots of conversations around how this technology can sit with traditional broadcasts. For example, the Magic Leap Mixed Reality headset has been used for the NBA in the States where someone watching a game on a TV screen also has a



virtual court in front of them. This means when a real player slam-dunks, a virtual character runs out and does the same thing.” He believes that innovators are now fighting the limits of the technology. “The ideas we have are further ahead. We can use a frequency signal that the user can’t hear to trigger an iPad which then fires an AR event – and that could be a character stepping out of the screen into your living room.” He continues, “There is so much that we can do by combining AR with traditional broadcast. And we are not hampered by ‘now press the button’. By using these discrete signals you can tell the app to timecode the programme, so if you pause it, you can go back and press play and pick up where you left off.”

MARRYING CONTENT
That’s REWIND’s positive take on the future. But what about WNO?

“We are constantly looking at new, creative ways to engage audiences with opera and the stories we tell,” says Massey. “Working within this immersive realm gives us an opportunity to play with technology in new and interesting ways that not only excite us, but offer audiences a unique and memorable experience as an alternative to our main-scale work. From our learnings we hope to develop our digital work to eventually present a new form of pioneering tech that will marry real time with creative content for the main stage.” He concludes, “I do like the idea of bringing a performance of Lady Rhondda into homes - AR offers an opportunity to do this. It’s an exciting time, and I think some of the partnerships that we have made since working in this arena have given us an opportunity to be brave with our ideas and pioneers within this field. “We hope to build on this and be at the forefront of bringing opera and technology together to excite audiences of the future.”

PICTURED ABOVE: The production went on tour. Here seen at Insole House in Llandaff.




DRIVING GLOBAL EXPANSION By David C. Smith, CEO of Driving Plates UK


Driving Plates. ‘Who are they?’ I hear you ask. Well, if you’ve seen any sort of driving footage on North American television recently, there’s a strong chance it hails from our company. From TV shows such as The Big Bang Theory and Tom Clancy’s Jack Ryan to feature films including last year’s Oscar-nominated The Post and this month’s Rocky sequel Creed II, our state-of-the-art patented camera rig has captured thousands of hours of driving footage – commonly known as “plates”. From our hub in Los Angeles, we have been providing bespoke and stock driving footage from our vast library to TV and film productions worldwide since 2011. Our mantra is that it should be possible for global stories to be within reach of any production. Indeed, one of the main questions from our clients is: “how do we cost-effectively bring this country or location into our production and make it a realistic experience?” The answer is that driving sequences are probably the best way to immerse any viewer into the reality of a story, wherever the characters are in the world. Although we have amassed a vast worldwide library covering over 24 countries, we understand it may be difficult to service the immediate needs of different markets. That’s why we recently decided to launch our first European hub, Driving Plates UK. Engaging renowned British film and TV producer Ian Sharples as our production consultant, we are putting together a talented pool of production staff from across the UK who can steer us through the complexities of filming on the British road system. Our UK hub aims to provide clients with a fully functioning production service that can also branch out into the rest of Europe. What we’ve found in the past is that even a small terminology difference can make a big difference to our work. For instance, we Americans could be talking about a ‘highway’ but it’s actually a ‘motorway’ or ‘A-road’ we need to be shooting.
So it’s crucial that we’re in Europe building our local knowledge and boosting our UK-centric footage output. Right now we are going through a training process with our newly recruited UK crews who are getting to grips with our equipment. Our patented rig comprises an array of nine cameras that can film in 4K and 10-bit 4:2:2. The rig is truly portable and can physically attach to any rental car or vehicle required for production. It only takes around 90 minutes to set up and mount. Once that’s done, one driver and one camera operator drive and shoot all the footage in real time. So, if we are told to shoot 20 minutes of driving footage from a script, that’s how long it actually takes us to do it.

The good thing about our camera rig is its flexibility. Other production companies in this space will have a purpose-built vehicle that has an abundance of cables and mounts already attached. But this means you are required to transport that vehicle to wherever you shoot the plates – and that can be an arduous process. Another advantage is that we can attach our rig to stunt vehicles, and if the script says we need to film some action footage then we have no problem with that. Gone are the days of having to drive eight or nine times down a road with one camera, with police cordoning off the area and costing us tens of thousands of pounds. Now we can drive down that same road in a fraction of the time and at a fraction of the cost. We capture all nine angles at once. In the UK, where our library supplied a tonne of footage for the Queen biopic Bohemian Rhapsody, we are now training three teams to assist us with any bespoke filming needs. This country’s expertise is world class and we are finding producers here are very detail-orientated, which is exactly what we are looking for. Dealing with nine cameras simultaneously is a much more technical and creative way of working, so we are recruiting a very specific breed of crew that is capable of managing that level of detail on a consistent basis.




ANSWERING THE RISING NEED FOR WORKFLOW OPTIMISED STORAGE RESOURCES By Jason Coari, director of scale-out storage solutions at Quantum


It’s a fact: as a direct result of working with an ever greater percentage of high resolution content, the pace of digital transformation in the M&E industry continues to accelerate. Driving this trend is the mainstream adoption of 4K and high dynamic range (HDR) video, and the strong uptick in applications requiring 8K formats. Virtual reality (VR) and augmented reality (AR) in particular are booming across the M&E landscape. The 2020 Olympics in Tokyo will be the first major international sporting event captured natively in 8K, and Japanese broadcaster NHK will begin broadcasting permanently in 8K by the end of this year. In TV drama, David Lynch and Showtime have teamed to create a virtual reality experience for the cult classic Twin Peaks. Even corporate video used in prominent retail spaces now requires 8K footage. These high-resolution formats add data to streams that must be ingested at a much higher rate, consume more capacity once stored, and require significantly more bandwidth when doing real-time editing. All of which translates into a significantly more demanding environment that must be supported by the storage solution. So, what is the underlying storage technology required to edit that kind of media, or play it out from a broadcast standpoint? Depending on the particular use case, the answer is not as straightforward as it might appear. Large disk arrays are excellent at providing the aggregate capacity and performance needed for playing out high-resolution, uncompressed files. Alternatively, if the workflow consists of multiple streams of high-resolution, compressed content, an all-flash array provides the best mix of low latency and performance. The reality is that many organisations are dealing with

both scenarios simultaneously. Therefore, it is important to understand how to create an environment that can align the optimal storage resource to the task at hand.

NEXT-GENERATION FLASH: NVMe
Flash-based, solid-state drives (SSDs) are an excellent solution for specific workflows commonly found in the industry. With extremely low latency compared to disk, any task requiring high I/O will benefit from their use. Taking this one step further, NVMe (non-volatile memory express), a relatively new NUMA (non-uniform memory access) optimised storage protocol that connects the host to the memory subsystem, brings critical performance improvements. NVMe is ideal for high-performance workflows, especially for tasks like real-time, high-resolution content editing. However, if flash storage is localised to the client, it can quickly become expensive. NVMe over Fabric (NVMe-oF), a very new technology, addresses this challenge by allowing multiple clients to access a block storage array shared over the network. Storage resources still appear local to the clients, but they are shared, so the expense of flash can be amortised over multiple clients – drastically improving the overall economics.

FINDING BALANCE IN MANAGING DYNAMIC WORKFLOWS
Even when sharing a flash resource, it’s not prudent for an entire storage solution to be flash-based. Most broadcasters and post production facilities have multiple workflows operating at any given time. Real-time editing and colour correction require very fast storage, but offline work with high-resolution content does not.

This is where a modern, highly scalable file system can prove invaluable. An advanced file system allows all users to access content in a global namespace, no matter what tier of storage the data resides on. Therefore, in a mixed workflow scenario, some clients can be attached to the flash storage over very fast Ethernet at the same time as low-priority clients conducting offline editing are connected via NAS over lower-bandwidth Ethernet. This creates a more cost-effective storage infrastructure and an optimised workflow.

For those media organisations whose data life-cycles extend beyond play-out and editing, there is the added complexity of needing to preserve and protect more material than ever before, especially as the value of historic content continues to increase. Having a file system with integrated and automated tiering capability can be of tremendous value in these scenarios. Policies that automatically protect and archive content onto cost-effective storage, either on-premises or in the cloud, give users the flexibility to add capacity in the most logical location based on their specific use case.

FLEXIBILITY IN WORKFLOW-OPTIMISED STORAGE

Flexibility is a paramount feature of any modern storage architecture. Specifically, being able to choose the type of storage media (SSD vs HDD) and networking configuration, yet still offering unified access to this rich media in a global namespace, is really what defines a workflow-optimised storage solution. And by architecting such a solution across multiple tiers of storage, organisations can have the best mix of performance and capacity at a given price point across the entire life-cycle of their data.
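To make the idea of an automated tiering policy concrete, here is a minimal sketch of an age-based rule: content untouched for longer than a threshold is migrated from a fast (flash) tier to a cheaper archive tier. This is a hypothetical illustration only; the directory names, the threshold, and the `tier_old_content` function are assumptions for the example, not any vendor's actual policy engine, and a real file system would perform the migration transparently behind the global namespace.

```python
import os
import shutil
import time


def tier_old_content(fast_tier: str, archive_tier: str,
                     max_age_days: float, now: float = None) -> list:
    """Move files not modified within max_age_days from fast_tier
    to archive_tier. Returns the names of the migrated files."""
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400  # 86400 seconds in a day
    os.makedirs(archive_tier, exist_ok=True)
    moved = []
    for name in os.listdir(fast_tier):
        path = os.path.join(fast_tier, name)
        # Only plain files are tiered; directories are left in place.
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            shutil.move(path, os.path.join(archive_tier, name))
            moved.append(name)
    return moved
```

In a production system the equivalent policy would also leave a stub or catalogue entry in place, so that archived content remains visible (and recallable) in the same global namespace as the hot tier.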




DELIVERING GAELIC SPORT FROM JUST ONE BOX

How Open Broadcast Systems is helping Gaelic sport reach a bigger audience


Nemeton TV is a production company based in the heart of the Irish countryside and in the culturally diverse city of Glasgow. Each year, it produces over 600 hours of premium content for broadcasters, sports organisations, and commercial brands. This includes creating a wide range of content for the Gaelic Athletic Association (GAA), a community-based volunteer organisation promoting Gaelic games, culture and lifelong participation. With sporting matches taking place across the country, and often simultaneously, Nemeton TV needed a cost-effective yet reliable way to deliver this coverage.

GAELIC SPORTING PROWESS

The GAA is Ireland's largest sporting organisation. GAA matches are an important part of Irish identity and culture, keeping many traditional national sports alive. It is therefore crucial for the GAA that these are accessible to everyone across the country. Nemeton's coverage of these games includes the GAA Allianz National Hurling and Football Leagues, the GAA AIB National Hurling and Football Club Championships, and the GAA All-Ireland Hurling and Football Championships. The footage from these is used for a GAA 2018 highlights programme aired on TG4, the Irish-language television channel, every Monday evening. It is also used to extract live clips, which are published across the GAA's Twitter, Facebook and website under its GAA Now brand.

AIRING SIMULTANEOUS MATCHES

The matches in all of these leagues and championships take place at county and local grounds all over Ireland, with the finals played at Europe's third-largest stadium, Croke Park in Dublin. The structure of the National Leagues changed this year, meaning that up to three games might now be happening simultaneously. This naturally increased complexity, but it also increased the number of feeds that needed to come into Nemeton TV's base in County Waterford. Three matches means six receive/decode channels, so essentially Nemeton needed a way to deliver four extra channels compared to previously. The problem is that this would normally have required four extra pieces of hardware, which would take up valuable space in the studio as well as being extremely costly.

SATELLITE TO IP

The feeds are all received via satellite, but need to be converted for delivery via IP. Even before it was obvious that extra feeds would be needed, Nemeton TV approached Open Broadcast Systems about the feasibility of adding DVB-S2 demodulation and Basic Interoperable




Scrambling System (BISS) descrambling on the C200 decoder platform. Having demodulation happen in the same box as encoding for IP delivery would naturally make the process of receiving and sending transmissions much more efficient. At the same time, ensuring the feeds are encrypted using the industry standard would keep them protected in transit, whilst ensuring they could be easily decrypted regardless of the equipment being used at the other end.

FOUR CHANNELS, ONE BOX

When it became obvious that extra channels were also needed, Open Broadcast Systems was tasked with developing a solution that could handle all of those processes across four channels simultaneously. This was the first time DVB-S2 demodulation and BISS descrambling had been added to any of the company's decoders or encoders. The team spent time developing code that would make those processes a seamless part of the decoder. This is something that can now be carried across to any of the company's encoders and decoders, enabling customers to process satellite-to-IP feeds easily and cost-efficiently. The C200 decoder was also adapted to decrypt and decode four high-quality 10-bit feeds at the same time on a single server, meaning those extra feeds for simultaneous matches could be handled from just one box. The resulting solution also requires very little power and naturally uses much less rack space than the alternative of one box per feed. Having shipped a test unit within the space of a few weeks, Nemeton was able to spend the best part of a month testing the software before ordering the software licences for the start of the season.

"The Open Broadcast Systems team very quickly responded to our unique requirements and developed a cost-effective yet extremely reliable solution for us. Despite handling a great deal of feeds and processes in just one box, the GUI is really simple to use," says Fiachna Mac Murchú, technical systems manager, Nemeton TV.




LEVELLING THE FOOTBALL FIELD

Ampere Analysis' Alexios Dimitropoulos takes a look at the value of Europe's big five football leagues


During 2018, four out of the EU's top five football leagues signed new domestic TV rights deals, with their values now beginning to converge. The English Premier League's (EPL) deflation (£4.64 billion for the 2019-2022 cycle, down from £5.14 billion for the previous deal), set against the rights value inflation of France's Ligue 1 (60 per cent increase), Spain's La Liga (10 per cent increase) and Italy's Serie A (3 per cent increase), creates a more consistent cross-market trend in terms of rights value relative to domestic TV market turnover.
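As a quick sanity check on the deflation figure, the per-season arithmetic can be worked through from the totals quoted above (assuming, for illustration, that each quoted total covers a three-season cycle):

```python
# EPL domestic rights totals quoted in the article, in £ billions.
previous_cycle_total = 5.14  # previous deal
new_cycle_total = 4.64       # 2019-2022 cycle

# Assumed three seasons per cycle, purely for this illustration.
previous_per_season = previous_cycle_total / 3
new_per_season = new_cycle_total / 3
deflation_pct = (previous_cycle_total - new_cycle_total) / previous_cycle_total * 100

print(f"Previous deal per season: £{previous_per_season:.2f}bn")
print(f"New deal per season:      £{new_per_season:.2f}bn")
print(f"Cycle-on-cycle deflation: {deflation_pct:.1f}%")
```

On those assumptions the new deal works out at roughly £1.55 billion per season against £1.71 billion previously, a decline of just under 10 per cent.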

THE RELATIONSHIP BETWEEN LEAGUES' VALUES AND THEIR MARKET

Historically, the EPL over-performed relative to the income of the UK pay-TV and commercial broadcasting sector, while Ligue 1 and the Bundesliga under-performed; however, the latest auctions moved all the leagues to a more balanced point. Rights for four of the leagues now cost between 12 and 15 per cent of the industry's annual turnover. Only Spain's La Liga is out of step with this trend, but we believe this to be partly the result of the extended economic slowdown in Spain following the

financial crisis, and the associated negative impact on the TV market. As the Spanish TV industry returns to growth, we expect that picture to change. It appears to be a different ball game for second-tier leagues, which account for just 1-4.5 per cent of local operator and broadcaster income. According to Ampere's analysis, the Netherlands' Eredivisie seems the most undervalued league, due to a 12-year deal signed in 2013 in which the opportunity for possible rights inflation was sacrificed for funding stability.

BRINGING DOMESTIC POPULARITY INTO THE EQUATION

However, this analysis does not factor in differences in fan-base sizes across markets. Once we take into consideration the level of domestic demand (assessed through a survey of 20,000 European internet users), the second-tier leagues no longer appear as under-monetised as they initially seem. This data reveals that smaller leagues' lower apparent performance is more an indication of weaker domestic demand than of under-monetisation; when local demand is taken into account, their rights revenues are on-trend. From this demand versus industry value analysis, some leagues still appear to over- or under-perform, but the cycle of each league's TV rights should also be taken into consideration. Ligue 1's new deal in France doesn't begin until September 2020, so by the time the new deal is underway, the value-to-market-size ratio will become smaller and the league will move further towards the expected ratio. Finally, the Bundesliga is well-positioned to improve its rights revenue in the next domestic rights auction. Looking at the number of Bundesliga fans in Germany, the scale of the local TV industry, and the impact of potential new bidders like DAZN or even Amazon, there is potential for significant gains in 2019.






TVB Europe November / December 2018