TVB Europe October 2019


Intelligence for the media & entertainment industry



COME JOIN THE SERKIS The actor/director explains how technology has had a profound impact on his career




Editor: Jenny Priestley Staff Writer: Dan Meier Graphic Designer: Marc Miller Managing Design Director: Nicole Cobban

At my first IBC in 2017 I remember everyone talking about the Internet of Things (IoT) and how our lives were all going to change, and how media technology would be a huge part of that paradigm shift. At this year’s IBC, I don’t think I spoke to a single person about IoT. How times change. Instead, this year I felt the big focus was on 5G and 8K. There’s no doubt that 5G is going to have a profound effect on the media technology industry, whether from a production, distribution or consumption point of view. It was interesting to see vendors starting to embrace that change and develop new products ready for market.

This year’s IBC was particularly special for me as I had the opportunity to sit down with actor/director Andy Serkis. If there’s anyone who has really embraced how technology can capture and enhance a performance, it’s Serkis. It was a joy to talk to him about how motion capture has led him down a career path he never envisioned. I also managed not to use up all of my time raving about The Lord of the Rings, which was not easy! We also take a look back at this year’s IBC with a round-up of the Conference and audio news, and celebrate this year’s Best of Show Award winners.

Another big talking point for me in 2017 was 8K. There I was, at my first ever trade show, asking when 8K would become the industry norm. And the answer was “not anytime soon” from a number of vendors. Again, how times change. 8K was pretty much everywhere at IBC. Every time you turned a corner there were screens purportedly showing 8K content. BT Sport became one of the first global broadcasters to demonstrate live 8K sport on their stand, and of course the camera vendors were busy showing off their kit. In the space of just 24 months it seems to me that the industry has realised that 8K is coming, and it’s no longer the TV set manufacturers who are driving the change.

While I got to speak to one of my favourite actors for this issue, Dan Meier has been talking to the production team behind his favourite TV series. RuPaul’s Drag Race UK is set to debut on BBC iPlayer on 3rd October. Also this month, we take a look at how the small screen is fast catching up with the big in terms of content available. We ask if short-form mobile-only streaming service Quibi can find an audience, and hear how one company is enabling viewers to move around virtual environments in real time, with zero delay. Philip Stevens continues his series on the soaps with a trip to Hollyoaks, and The Foundry’s CEO Jody Madden has this issue’s Final Word. n

Contributors: Philip Stevens, Ann-Marie Corvin, David Davies, Mark Layton Group Content Director, B2B: James McKeown

MANAGEMENT Senior Vice President Content: Chris Convey Brand Director: Simon Lodge UK CRO: Zack Sullivan Commercial Director: Clare Dove Head of Production US & UK: Mark Constance Head of Design: Rodney Dive

ADVERTISING SALES Group Sales Manager: Richard Gibson (0)207 354 6029 Japan and Korea Sales: Sho Harihara +81 6 4790 2222

SUBSCRIBER CUSTOMER SERVICE To subscribe, change your address, or check on your current account status, go to or email

ARCHIVES Digital editions of the magazine are available to view on ISSUU.com. Recent back issues of the printed edition may be available; please contact us for more information.

LICENSING/REPRINTS/PERMISSIONS TVBE is available for licensing. Contact the Licensing team to discuss partnership opportunities. Head of Print Licensing Rachel Shaw

Future PLC is a member of the Periodical Publishers Association

All contents © 2019 Future Publishing Limited or published under licence. All rights reserved. No part of this magazine may be used, stored, transmitted or reproduced in any way without the prior written permission of the publisher. Future Publishing Limited (company number 2008885) is registered in England and Wales. Registered office: Quay House, The Ambury, Bath BA1 1UA. All information contained in this publication is for information only and is, as far as we are aware, correct at the time of going to press. Future cannot accept any responsibility for errors or inaccuracies in such information. You are advised to contact manufacturers and retailers directly with regard to the price of products/services referred to in this publication. Apps and websites mentioned in this publication are not under our control. We are not responsible for their contents or any other changes or updates to them. This magazine is fully independent and not affiliated in any way with the companies mentioned herein. If you submit material to us, you warrant that you own the material and/or have the necessary rights/permissions to supply the material and you automatically grant Future and its licensees a licence to publish your submission in whole or in part in any/all issues and/or editions of publications, in any format published worldwide and on associated websites, social media channels and associated products. Any material you submit is sent at your own risk and, although every care is taken, neither Future nor its employees, agents, subcontractors or licensees shall be liable for loss or damage. We assume all unsolicited material is for publication unless otherwise stated, and reserve the right to edit, amend, adapt all submissions.


PUT A PAUSE IN YOUR DAY With so many demands from work, home and family, there never seem to be enough hours in the day for you. Why not press pause once in a while, curl up with your favourite magazine and put a little oasis of ‘you’ in your day.

To find out more about Press Pause, visit:


OCTOBER 2019

09 Shantay UK

Dan Meier gets the T on RuPaul’s Drag Race UK

12 “It’s been a strange, banana curve of a career”

Jenny Priestley sits down with actor/director Andy Serkis

16 Can Quibi take a bite out of the streaming market?

As the streaming wars heat up, Jenny Priestley looks at new short-form platform Quibi

22 Smile!


Celebrating the winners of this year’s Best of Show Awards at IBC

26 Are we ready for video entertainment in the car? ACCESS Europe’s Robert Guest has some answers

30 Mobile networks: The next generation

Mark Layton reports on TVBEurope’s recent 5G webinar

33 On set with Hollyoaks Philip Stevens continues his look at soap operas

45 Cutting right on cue Philip Stevens takes a close look at a system that makes live directing less stressful

50 The IP transition: Dos and Don’ts



Artel Video Systems’ Rafael Fonseca lays out the ground rules


Can OTT and streaming services replicate the big screen on the small screen?


By Chris Wood, CTO, Spicy Mango

Mobile phones have advanced considerably over the past decade; what was once merely a communications device has become everything from a diary and TV remote control to a gateway to multimedia content. With bigger screens, better processing and the introduction of mobile apps for OTT platforms, it’s no surprise that consumption of video on smartphones and mobile devices has risen. According to one eMarketer report, over 75 per cent of total worldwide video consumption occurs on mobile devices. The figure is smaller when it comes to OTT subscription platforms, but 20 per cent of Netflix viewing occurs on smartphones, with 50 per cent of customers logging in from mobile devices once a month. But is it actually possible to bring the big screen to the small screen and replicate a similar experience? They are two very different things: cinemas and TVs bring a stable, quality viewing experience, whereas viewing on a smartphone or other mobile device, like a tablet, can be temperamental and inconsistent. Let’s compare each factor of the big screen with the reality of viewing on mobile.

EASE OF ACCESS There are still many people who have a catalogue of DVDs at home or go to the cinema regularly. When we think about Blu-ray and 4K in their most accessible forms, it is still physical media that is most readily available and brings the widest choice of content. In the world of consumer electronics, the 4K ecosystem is vast, with TVs already fully equipped to display great pictures. However, when it comes to the digital world, it is much harder to come by 4K-supported content on our OTT services – surprising given the availability of this content on the shelves of shops. When users choose to watch online over the internet, there is nearly always inconsistency in connectivity – particularly across mobile networks.
As 4K content becomes more mainstream, this is an issue that will only exacerbate the frustrations that come with waiting for the spinning wheel on screen to stop.

A dark room, a bowl of snacks and a large-screen TV bring us so very close to replicating the cinematic experience at home. For many people, settling down to watch a film or binge a TV series is an occasion – and in most households, it’s still treated like one. Given the choice, it’s less likely that people would choose to watch a blockbuster movie on their smartphone at home when they have a 32-inch TV in their living room; smartphone viewing is usually reserved for passing time on the daily commute with ‘snackable’ content like TV series.


EQUIPMENT Another factor that truly makes the big screen experience great is the equipment itself. New TVs are not only 4K-ready, but support HDR technologies that deliver those ‘wow-factor’ experiences. Even with the little content available, we’re not amazed by the levels of pixels-per-inch that 4K delivers, but by the depths of colour and contrast that we see with HDR. By comparison, in the mobile space our devices still have some significant ground to make up, with very few offering support for 4K and the compression technologies needed to play this content, and even fewer offering support for both of the mainstream HDR formats. And so, another format war begins. Audio too plays a key part in the experience, but the bottom-dollar out-of-the-box smartphone headphones are designed to be functional, not high quality, and good audio is essential for shutting out the rest of the world. In a home or cinema setting, the audio is always going to be more immersive, so in order to attempt to replicate the experience, smartphone manufacturers and content providers have that issue to contend with too.

THE SOLUTION? As an industry, it’s hard to ignore that perhaps we’ve missed a step in the race for better picture quality, with the world jumping for instant 4K readiness on mobile devices. As we’ve said, it’s not more pixels we need, it’s better pixels. HDR technologies such as HDR10 or Dolby Vision provide incredible depths of colour and contrast – generating the beautiful pictures that we watch with open mouths. Our 1080p H.264 encoders are still more than enough for today’s consumer screens – and when coupled with HDR technologies, mean providers don’t need to increase the bandwidth to accommodate four times the number of pixels in 4K. And as for audio, manufacturers need to be shouting about the importance of good equipment for mobile viewing – and we can start by throwing away those bottom-dollar out-of-the-box headphones.
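The “four times the pixels” claim is easy to check with back-of-envelope arithmetic. This is a quick illustrative sketch of raw pixel and bit-depth counts only; real-world bitrates depend on codec, content and encoder settings:

```python
# Pixels per frame for common consumer resolutions
full_hd = 1920 * 1080   # 1080p "Full HD"
uhd_4k  = 3840 * 2160   # consumer "4K" UHD

print(full_hd)           # 2073600 pixels
print(uhd_4k)            # 8294400 pixels
print(uhd_4k / full_hd)  # 4.0 -- exactly four times the pixels

# HDR, by contrast, widens each pixel rather than adding more of them:
# moving from 8-bit to 10-bit per channel is a 25 per cent increase in
# raw bits per pixel, far cheaper than quadrupling the pixel count.
print(10 / 8)            # 1.25
```

Which is the arithmetic behind the author’s point: better pixels cost far less bandwidth than more pixels.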
So is there really a solution? Immersive experiences are immersive because they isolate our viewing from the outside world around us, which is why a cinema or a home movie night is so enjoyable. And although issues surrounding picture quality are already improving as new handsets begin to bridge the gap, the answer to the question of whether we will ever experience content on a mobile device in the same way as on the big screen remains: no, not yet. n


Sink or Swim? Why low-latency is the lifeboat in a sea of streamers By Lars Larsson, CEO, Varnish Software


The video streaming market has burst its banks, with seemingly endless OTT services launching over the last couple of years looking to make a splash. From established broadcasters to content owners, sports organisations, service providers and telcos, everyone is dipping their toes into the water. As a result, there are now so many streams that it’s easy to feel like we are drowning in an ocean of content. As consumers, we have so much choice that we’re unlikely to stay loyal to a service that cannot satisfy our lust for instant gratification. Buffering has consequently become one of the biggest turn-offs for consumers - quite literally! Some 85 per cent of viewers will give up on a stream if their content takes too long to load, and three-quarters will abandon a service altogether if they experience buffering issues several times. Value for viewers is now less about how many hit shows they get for their monthly subscription, and more about an instant, buffering-free experience. That means that delivering content with low latency and high reliability is now a crucial consideration for streaming providers. Consumption habits are also changing. Not only do we expect low-latency content when streaming an on-demand series to our smart TVs, we now expect it when streaming a live event from the other side of the world; from stadium to smartphone. A report from IHS recently forecast video streaming as the killer app for 5G. The deployment of next-generation networks will mean higher bandwidth and lower latency, which will support the growth in streaming over mobile. As a result, video streaming will account for 70 per cent of mobile network traffic by 2022. The speed and performance of 5G will increase streaming’s popularity, particularly in developing markets, which are often mobile-first. There are countless brands eyeing 5G video streaming as an opportunity to expand their global footprint into these markets by targeting mobile consumers.
But this massive jump in streaming content on mobile devices further complicates the reliability challenge for rights holders and OTT providers, particularly when streaming live and lucrative sports content. Live content is increasingly valuable to broadcasters due to its higher audience retention and the inability of that audience to avoid the advertisers, but live streaming now comes with the same high expectations for low latency as VoD. Large numbers of global viewers live streaming on a range of devices can cause serious latency issues if the content delivery network (CDN) has not been built to handle this challenge. The number of requests can easily overload the origin server and cause it to fail. No diehard fan wants to miss a moment of a gripping sports contest or live musical performance because of an unreliable stream – and broadcasters don’t want viewers disengaging after spending millions, or even billions, to acquire the rights to an event. So how do you achieve low latency when streaming a Newcastle United game to New Delhi or a Beyoncé festival set to Beijing? You build a CDN with an integrated caching policy. Video distribution to a large number of devices, in different geographical locations and using different networks, typically requires a lot of bandwidth. However, by caching the content, it’s possible to deliver reliable streams to more viewers without increasing the load on the back-end. A well-known component of web domains, caching has evolved beyond simple website acceleration into the video streaming space, as a means of delivering content fast, on demand and at scale. Live streaming can be tackled simply and efficiently using caching. Without getting too lost in the technical details, the effect of live streaming is that the majority of devices will be fetching the most recent video segments, or ‘packets’, at the same time. Using a technique called ‘request coalescing’, similar streaming requests can be handled together, meaning the origin server only sees one request – allowing it to run more smoothly when hit with a surge in demand. By successfully building out a CDN with an integrated caching policy, content providers can reduce latency and minimise or eliminate instances of the feared spinning wheel.
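Request coalescing is simple enough to sketch in a few lines. The following is an illustrative toy, not Varnish’s actual implementation: concurrent requests for the same video segment share a single origin fetch, so the origin sees one request however many viewers ask at once.

```python
import threading

class CoalescingCache:
    """Toy request coalescer: concurrent requests for the same key
    share one origin fetch instead of each hitting the origin."""

    def __init__(self, fetch_from_origin):
        self.fetch = fetch_from_origin  # callable: key -> content
        self.cache = {}                 # completed responses
        self.in_flight = {}             # key -> Event signalling fetch done
        self.lock = threading.Lock()
        self.origin_hits = 0            # how many requests the origin saw

    def get(self, key):
        with self.lock:
            if key in self.cache:            # fast path: already cached
                return self.cache[key]
            waiter = self.in_flight.get(key)
            if waiter is None:               # we become the one real fetcher
                self.in_flight[key] = threading.Event()
        if waiter is not None:
            waiter.wait()                    # coalesced: wait for the fetcher
            return self.cache[key]
        content = self.fetch(key)            # the single request the origin sees
        with self.lock:
            self.origin_hits += 1
            self.cache[key] = content
            self.in_flight.pop(key).set()    # wake all coalesced waiters
        return content
```

In this toy, fifty threads requesting the same segment at once result in exactly one origin fetch; everyone else waits on the event and reads the cached copy, which is the behaviour that keeps an origin server upright during a surge.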
Scaling to new regions simply entails moving content as close to the end user as possible via strategically placed points-of-presence, putting streaming servers exactly where they need to be to provide the best user experience and the lowest possible latency. OTT providers new and old will have to continue to invest in reliable, low-latency platforms to stay afloat. Having a big-budget blockbuster or star-studded sporting spectacle is no longer enough, because in an ocean of content, we consumers have become the blood-smelling sharks. We want content wherever we are, delivered to our pockets, and we’re not prepared to wait. Any platform that can’t keep up is shark bait. Now, does anyone know where I can stream Jaws? n
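A rough sense of why those points-of-presence matter comes from propagation delay alone. This back-of-envelope sketch assumes roughly 200 km per millisecond for a signal in fibre (about two-thirds the speed of light) and straight-line distances; real routes are longer and add processing delay on top:

```python
# Best-case round-trip propagation delay, ignoring routing and processing.
SPEED_KM_PER_MS = 200  # approx. signal speed in optical fibre

def round_trip_ms(distance_km):
    """Milliseconds for one request/response round trip over fibre."""
    return 2 * distance_km / SPEED_KM_PER_MS

print(round_trip_ms(6700))  # origin in London, viewer in New Delhi: 67.0 ms
print(round_trip_ms(50))    # nearby point-of-presence: 0.5 ms
```

Tens of milliseconds per round trip add up quickly when a player makes many segment requests, which is why caching at the edge beats serving every viewer from a single distant origin.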



The future of broadcast connectivity: updating to IP KVM By Jamie Adkin, VP EMEA at Adder Technology


KVM equipment has been utilised by the broadcast industry for many years. Over that time, many in the industry have recognised the importance of using IP-based KVM to break down technological barriers and enable real-time access to the resources that are so crucial for live production environments and post production facilities. Broadcasters can enhance versatility across production workflows by using IP as a transport network. This will ultimately decrease the total cost of ownership when compared to traditional approaches. Due to the advantages that IP is bringing across the production space, more and more broadcasters and post production houses are willing to implement IP KVM to ensure they do not lag behind their competitors. The benefits of IP are huge: it brings significant cost savings whilst also improving scalability and flexibility, to name but a few. The broadcast landscape has changed significantly over the last few years, with consumer demands causing major changes to content creation workflows. However, faced with rising costs, shrinking budgets and the drive to do more with less, IP is providing the answers to many broadcast challenges.

CONNECTING REMOTELY One example which illustrates this perfectly is flexibility. High-performance IP KVM gives broadcast operations the ability to connect remotely to many computers from any user station, allowing the computers to be relocated into a purpose-built server room. Not only does this approach improve the user environment by saving space, it also provides operational freedom to maximise the use of production spaces, radio studios or edit suites. Having multi-purpose environments or rooms that can be reconfigured in minutes enables broadcasters to squeeze the very most out of their physical and technological real estate. IP KVM offers enhanced flexibility to broadcasters in the way they manage infrastructure and workflows, enabling direct improvement of ROI.
It also provides the quality required for today’s demanding environment, which is likely to become more complex in the future.


This trend of relocating resources away from the users has been very popular with broadcasters upgrading infrastructure, optimising security and particularly those building new facilities. For customers in the broadcast industry, one of the key drivers for adopting IP is scalability. Proprietary KVM solutions often pose limitations to broadcasters who are looking to expand their facilities – whether that is in the form of pre-configured ports or the need to purchase additional costly hardware to allow for such expansion.

LIMITLESS NATURE OF IP In contrast, the limitless nature of IP adds huge possibilities to an ever-developing broadcast workflow and its growing infrastructure setup. IP KVM is designed to expand as a business’ needs flex – meaning there is no need to try to identify the number of required ports at the beginning of a project. Supporting this, the adoption of IP means there is no ceiling to the number of connected users, computers or endpoint devices such as tablets and cameras. An IP KVM matrix allows operators to switch, extend and share machines and content without loss of quality or performance, and to add access points or re-route parts of a network. This can be further extended with remote access solutions based upon WAN networks. This was perfectly illustrated in 2017, when the centre of a major German city was evacuated to defuse an unexploded World War II bomb. Within the evacuation area was a major German television network in the middle of a live transmission. An IP KVM matrix meant the customer could reroute the broadcast networks and access the resources from outside the building – leading to a safe evacuation without any disruption for viewers. High-performance IP KVM brings significant benefits to virtually every stage of the broadcast workflow. From enhanced flexibility to cost-effective scalability, these benefits make it clear to see why many of the world’s leading broadcasters, studios and post production houses are rapidly adopting IP.
n



Dan Meier gets the T on RuPaul’s Drag Race UK from World of Wonder’s Fenton Bailey and Randy Barbato

After 11 seasons, years of speculation and millions of sequins (or is it sequence?), RuPaul’s Drag Race finally sashays onto UK screens, as a fresh batch of homegrown queens get ready to snatch the crown. The British series debuts on BBC Three via iPlayer on 3rd October, with guest judges like Graham Norton, Alan Carr and Andrew Garfield joining RuPaul and Michelle Visage on the self-proclaimed Hunger Games of drag. Rumours of the format crossing the pond have circled for almost as long as the show has been on air. Since Jonathan Ross reportedly attempted to bring it over in 2014, two Drag Race stars have been in the Celebrity Big Brother house on Channel 5: judge Michelle Visage (also on Strictly Come Dancing at the time of writing) and contestant Courtney Act, who won the series and the sympathies of anyone for whom a month stuck in a house with Ann Widdecombe sounds like hell on earth. Netflix also ranked the show its fifth most popular reality title this summer. So what took them so long? “I think television executives are nervous nellies by and large, and so they’re not always the greatest visionaries shall we say,” says Fenton Bailey, co-founder of the show’s production company, World of Wonder. “With the exception of the television executives at BBC Three,” adds co-founder Randy Barbato. “Once that ball was moving it was pretty quick, it was a few months,” continues Bailey. “But it was years and years of knocking on every door and we have a collection of rejection emails. And Michelle Visage actually has always been a great cheerleader for us, and as you know she’s beloved in Britain.” “The thing about RuPaul’s Drag Race is it’s had an unusual trajectory in general in terms of television shows,” says Barbato, “because it really has been the little show that couldn’t. And here we are going into our 12th season, and we’re bigger than we’ve ever been. And that’s not the way most TV works. And so even here in the US, it’s taken a long time for it to grow into the phenomenon it’s become. And in the UK, I think it took a little longer. Literally when we were doing the first season we were talking about doing a version in the UK, but you know, it took a couple minutes.” “A funny footnote here is that Randy and I founded World of Wonder a long time ago,” says Bailey, “and one of our early shows was back in the day when Channel 4 was very much an innovative, cutting-edge, risk-taking network, and they commissioned RuPaul’s Christmas Ball. So Ru has actually been on British TV and been a part of the culture there for many years. And that was -” “100 years ago,” Barbato cuts in. Given their decade’s worth of experience as executive producers on the shadiest show on TV, you wouldn’t expect there’d be much left to shock Bailey and Barbato, but the British queens are apparently some fierce mothertuckers. “I certainly do remember sitting in the control room as the first contestants entered and thinking, ‘Oh my god it’s so rude!’,” recalls Bailey. “The Brits may seem very polite and reserved, but my god the mouth on those girls!” That said, Bailey and Barbato have no concerns about American audiences being turned off by the coarse humour or the regional dialect, so hopefully we won’t see another Cheryl Cole-style accent debacle. “Drag is always fascinated by new and shiny and different things; drag is not about always wanting the same thing,” notes Bailey. “So it applies similarly to language. Drag loves new words and strange accents and strange pronunciation. Drag loves to play with words as much as wigs and hair and clothes.” Barbato adds: “I think part of the reason that the show continues to grow is that it’s attracting a generation of people - it’s kind of the opposite of what’s happening in politics right now - it’s not about people who are into building walls, it’s about people who are into experiencing new things and welcoming people in. Right now we are at DragCon in New York City, and we just were with all the girls from the UK. And the show hasn’t even come


PICTURED ABOVE: The Drag Race UK contestants

“This show attracted phenomenal talent from lighting to design to director.” RANDY BARBATO

out over here, and they already are like pop stars. Drag queens are the new pop stars; they deliver what people need now and what pop stars don’t deliver.” They certainly werk harder than your average pop star; over five weeks the contestants act, sew, sing, dance and lip-sync in outfits most people would find challenging even without tucking. “It’s a gruelling schedule for the girls,” confirms Barbato. “It’s really hard work. Because when you think about it, RuPaul’s Drag Race is basically a combination of every competitive reality show all wrapped in one; it’s like American Idol meets Project Runway meets Big Brother. And they do this stuff themselves in record time. So they’re the heroes of the show. It’s amazing that they get through it. “While we’re shooting we have this thing called ‘on ice’,” he continues. “We put them on ice; we really try and minimise their communications. Even during shoot, if the cameras aren’t rolling and the girls are all hanging out, we do ask them not to talk. We’re making a TV show and part of it is, whatever they are experiencing, to capture it. When they go home after a day’s shoot, they are there in their own space. They do not socialise or fraternise. They are quite isolated.” After all, this is not RuPaul’s Best Friend Race. In terms of the maxiest challenges the producers themselves faced in making Drag Race UK, Bailey is quick to respond: “Air conditioning. The studios can get very hot, there’s a lot of lights, and with global warming nowhere’s getting any cooler, and air conditioning isn’t as universal in the UK as it is in the States. So that’s proved a bit of a challenge because when we were shooting we were like, ‘Well it’s going to be the middle of winter, it’s not really a problem here,’ but it was a very mild spring.”

On the plus side, British TV has a wider scope for what you can get away with showing, as Bailey explains: “The BBC does have a watershed and does have certain slightly different requirements, but funnily enough, I think the latitude is greater with British broadcasting than in the US. Which is fortunate for these queens because they’ve got the mouth of a sailor.” Barbato adds: “And we had the great fortune of working with the best people in the UK. This show attracted phenomenal talent from lighting to design to director. We were fortunate enough to work with the cream of the crop and so they delivered in spades.”

Since America and even Thailand have already had their own iterations, what can UK viewers look forward to from the BBC’s version? “I’m excited for the UK to meet this cast and I’m excited for the world to meet this cast,” says Bailey, “because they are stars.” “Really it’s amazing how RuPaul’s Drag Race is one of those shows where we introduce stars,” continues Barbato. “It does matter who wins but it kind of doesn’t because if you’re in the show, you’re a winner. It’s launched so many careers and so that’s what I think we’re most excited about.” So to all the girls we say good luck, and… well, you know the rest. n



“IT’S BEEN A STRANGE, BANANA CURVE OF A CAREER” Jenny Priestley sits down with actor/director Andy Serkis to discuss how technology helped change the course of his career, and tries very hard not to ask him to say “Precious”

When Andy Serkis first boarded his flight to Wellington, New Zealand back in late 1999, little did he know that the role he was to portray in Peter Jackson’s epic Lord of the Rings trilogy would change his life. Playing the oft-maligned Gollum not only brought Serkis as an actor to a whole new audience, but Jackson’s decision to film his performance using motion capture opened up a new creative world, not just as an actor but also as a director. Fast forward 20 years, and Serkis is now recognised as an industry leader when it comes to using technology to enhance an actor’s performance. So much so that he was presented with IBC’s highest award, the International Honour for Excellence, in Amsterdam last month. The actor/director admits to being “blown away” when he first learned of the accolade. “I really feel very honoured to be given this award because it’s been a strange, weird banana curve of a career,” he laughs. “I’ve encountered so many extraordinary and talented people across the board, from animators to concept designers to CG artists to software designers. I never anticipated that my career would take me on this journey. And so looking back, I feel I am standing to a certain degree on the shoulders of lots of other people.” Going back to Serkis’ introduction to how technology could help enhance an actor’s performance, he is quick to pay tribute to director Peter Jackson: “Without doubt it was Peter who set me on this course,” he says. “The visual effects we used on Lord of the Rings helped me become that character, and then what kind of ensued after that. I literally thought I’d be going back to my life as a conventional actor, whatever that means. 
I had no idea that it would mean that this whole other track would open up.” Since first pulling on that motion capture suit, Serkis has gone on to wear it a number of times for Peter Jackson (King Kong, The Adventures of Tintin: The Secret of the Unicorn, The Hobbit trilogy), and in all three films of the rebooted Planet of the Apes trilogy. He’s also branched out into directing, first with indie drama Breathe and most recently with the visual effects-heavy Mowgli: Legend of the Jungle. He has also established The Imaginarium Studios, the UK’s leading performance capture studio, based at Ealing Studios. “Setting up Imaginarium Studios, which is a laboratory that was always going to be creatively led, the idea was to bring in writers, directors, actors to be in a space where you can actually see characters come to life and iterate those characters,” Serkis explains.

“I’m so inured to any difficulties of working with performance capture technology, that I don’t actually see it.” ANDY SERKIS


“It is a real true marriage of art and science and with deep respect from both sides, but leading from a creative point of view. I didn’t want to just create a dry for-hire studio that was just going to be a service provider to other films. It’s ended up being a sort of boutique environment where we can explore so many different forms of creative expression and different platforms and different ways of telling stories and next-generation storytelling. That’s really what’s been exciting about it from a director’s point of view, trying to push the areas that are going to be meaningfully important to the productions that I’m doing.” Serkis goes on to discuss one of his upcoming projects, Animal Farm. “We’re spending a lot of time looking at how to have real-time facial capture retargeting an actor’s performance to an animal’s physiognomy. How can we have actors drive a quadruped animal? How can we adapt machine learning to be the back legs, for instance? One of the most important things is having real-time facial playback, so that in a cut you can immediately put in a meaningful performance that’s going to read and give you a true calibration of what the performance you’ve shot on the day is going to look like.”


“I feel I am standing to a certain degree on the shoulders of lots of other people.” ANDY SERKIS Serkis is working with Netflix on Animal Farm. The streamer bought the rights to Mowgli, and he says they’re very hands-on with the development of the new project. “We are in the process of going into a long pre-development, leading up to the shoot,” he reveals. “We’re doing it with Netflix and they’ve been working on the script and evolving that with us. That script has taken a long time to get the tone right. Obviously it’s a very pertinent story and it feels very zeitgeist. But we have to do it in such a way as to reimagine Orwell’s classic for the world that we live in, so that it resonates with a modern audience. We want to make a film that is for everyone to watch.” Serkis remains unique in the sense that he has worked with cutting-edge technology both as an actor and

PICTURED BELOW: Serkis on the set of War for the Planet of the Apes


director. Does he find it easier to work with tech when he’s in front of or behind the camera? “I’m so inured to any difficulties of working with performance capture technology, because I’ve been doing it for so long, that I don’t actually see it,” he admits. “If I’m looking into an actor’s face and it’s got dots all over it and a head-mounted camera, it doesn’t really bother me. I just don’t see it, I’m in the moment acting with them and that’s it. “I’m aware of the technology from a director’s standpoint more because of the distance it still needs to go. Actually, it’s really interesting working out what you want, and the tools which will really benefit the next stage. I love virtual camera work. When you create the scene and you work with the actors, and you’re happy with it, to then go in and be able to pick angles, and have those either as pre-vis or as final product. You’re able to then really be creative and use that to imagine all sorts of ways of cutting the scene.” Since its inception in 2011, Imaginarium Studios has provided performance capture for major blockbusters including Rise of the Planet of the Apes, Avengers: Age of Ultron, and Star Wars: Episode VII - The Force Awakens. In fact, Imaginarium was involved in the latest iteration of one of the screen’s biggest (in every sense of the word) superheroes. “Mark Ruffalo came to us in our early days, and wanted to start working with us to find the character of the Hulk, not just be the Bruce Banner part,” says Serkis. “He said to us that he felt at the time that there was a disconnect between himself as Banner and then going into the Hulk. “We wanted to illustrate a working atmosphere to production, where you can actually make that environment for the actor. So for instance, a technique that we used when I was doing King Kong was to use big loudspeakers on set. We could digitally kind of pitch-shift his voice so it would be much deeper and resonate.
It gave him the size as an actor and made him feel bigger. That really did help us put weight on him to help with the physics of the character and the movement. I think that

even if that was just the beginning stages, it had an effect and the use of capture in the subsequent films has made that character much more believable.” Of course, like any technology, motion capture has changed drastically since Gollum first found his “precious.” For Serkis, one of the biggest transitions took place between Rings and King Kong, which he feels is when facial capture really came into its own. “After that, it was really the Apes movies where we started to be able to shoot just once on set, untethered with head-mounted cameras,” he adds. “We could take the suits and cameras on to a real physical set and only film it once. We were shooting scenes just as you would a conventional film. Then from the first Apes film, which was predominantly on sets in the studio, we went outside for the second film which was set much more in the rainforest, with a lot more technical challenges, obviously. Then in the last film we were shooting snow and all that kind of stuff. So it got bolder in its ambition.” As mentioned earlier, Serkis is currently working with Netflix on his next feature. Where does he sit in the ongoing streaming vs cinema debate? “I think that there’s absolutely room for both worlds,” he says. “That is the reality and neither of them are going to go away. The tent-pole movies are always going to exist. I think independent film is probably in the most difficult position because it’s still intended to be a cinematic piece. But, they are the hardest things to get off the ground. And I know that because in amongst all of the big films that I’m involved with, I’m trying to make other smaller, lower budget, more personal stories. That is the most difficult and challenging space to be in at the moment. “Netflix have been incredible, and I think their ambition to be able to do both is interesting too in terms of being able to have cinematic releases for periods of time or for their subscribers.
They’re now getting into the traditional cinema space and owning cinemas. I think that’s very interesting. You have the subscribers but then people who really want to see those films on the big screen can, and I think that is a really wonderful compromise.” Finally, with a couple of directing projects in the pipeline, when can we expect to see Serkis acting again? “At the moment, I’ve got a director focus because I’ve got two big projects coming back to back and so that’s where my head is. But I love acting and I actually really want to do some theatre. I did so much theatre before I went into film and TV. The last time I was on stage was in 2002. Learning huge pages of dialogue, big scenes, sustaining a character - I’m sure I could do it. But the fear factor does certainly grow as you get older!”



CAN QUIBI TAKE A BITE OUT OF THE STREAMING MARKET? As the streaming wars heat up, Jenny Priestley takes a look at new short-form content platform Quibi and its potential to disrupt the market


Back in the summer of 2017, Jeffrey Katzenberg first announced plans to launch a streaming platform offering short-form content for mobile devices. At that time the SVoD world was in a very different place. Since then Apple, Disney, WarnerMedia and NBCUniversal have all revealed plans to launch their own streamers - although none of them will be focusing on short-form content. Now christened Quibi (short for Quick Bites), Katzenberg’s project is expected to launch in April 2020 and appears to be preparing by signing deals with some of the biggest names in Hollywood, from Steven Spielberg to Reese Witherspoon, Idris Elba and Kiefer Sutherland. The question remains though, can the market sustain yet another streaming service? According to Fred Black, analyst with Ampere Analysis, the answer is yes: “In terms of content there is nothing stopping Quibi finding an audience,” he says. “Short-form content has a demonstrable audience currently being catered for by the likes of YouTube and Snapchat. Monetising such an audience using paid short-form video is an unproven model, however, with no significant successful precedent, and Quibi’s success hinges on making this work.” Of course any new platform launch has to offer consumers a unique selling point in comparison to its competitors, both those already in the marketplace and potential launches. With the imminent arrival of Disney Plus and Apple TV+, how does Quibi set itself apart? “The platform is clearly set apart from these services, which are not yet making serious moves in the short-form space,” says Black. “Quibi series will have episodes of 10 minutes or under, and only be available on mobile. Quibi is also likely to be wholly reliant on original content, as there is little to no premium short-form content available for acquisition.” THE PRODUCER’S VIEW That is a pertinent point - Quibi can’t “do a Netflix” and


buy in its back catalogue, it has to attract content from both Hollywood and smaller producers. Teresa Potocka, CEO and founder of award-winning company Sensethefuture Pictures, explains why content producers are being attracted by the service: “Quibi as a new entrant aims to differentiate itself through a wide variety of genres in a format that allows for a more intimate experience. By working with a company that offers a completely original slate of content and structures its episodes in a new way, not only does Sensethefuture Pictures have the opportunity to innovate and reach new audiences, but also to earn fees consistent with what we make in traditional TV. “We also retain the rights to our intellectual property. Short-form here no longer equals low-budget. Quibi’s brand expenditure isn’t limited to funding the content. On top of that spend, there will also be a significant marketing budget to ensure the audience is there so the incentives are there from the outset for us to create compelling storytelling.” Potocka goes on to point out that unlike Disney Plus, Quibi won’t be able to offer viewers any tent-pole brands. “That means Quibi will need to build its audience. Short films have recently enjoyed a renaissance online so if the Quibi audience is positively primed, the service will have considerable influence on how the marketing message is received across territories.” “Quibi is coming to market with a key disadvantage in not being a recognisable brand name like Apple or Disney,” agrees Black. “Creating originals with celebrity names attached is a way of attracting subscribers and growing brand recognition early on.” Potocka also points out that Quibi has an advantage in that it is being deliberately targeted at those who watch content via their mobile phone. “Since existing streaming services have increased the amount of content featured which is optimised for mobile, this has resulted in growth in overseas markets. Quibi could well replace short-form


PICTURED ABOVE: Teresa Potocka

Illustration: Sam Richwood

as an addition to the content mix on existing platforms and become the alternative to traditional pay-TV,” she adds. Black agrees that Quibi will also face stiff competition from the likes of YouTube and Snapchat but believes it sits apart from other short-form players in that the content is “more premium” in nature. “That will be crucial in convincing customers to pay for the service,” he adds. “In that sense, Quibi not only needs to have better content than YouTube and Snapchat, it needs to be better by enough of a distance to convince customers it is worth paying for. YouTube’s Premium SVoD service has struggled to make headway in a crowded marketplace, with the company recently starting to move its original content in front of the paywall, reverting to the AVoD model that has served YouTube’s user-generated content so well. “Quibi has identified a gap between the long-form SVoD services like Netflix and Disney Plus, and the free-to-use socially-oriented services like YouTube, Snapchat and Facebook Watch.” MONEY TALKS A big factor in all of this is whether viewers will embrace the short-form format. While it appeals to a younger viewer, will that be enough to make the service financially viable? “Quibi has obviously raised a lot of start-up capital and has invested most of that in content,” says Black. “Making only short-form allows Quibi to produce more titles - it is currently vying with Disney Plus for the most upcoming originals of the new streamers. Quibi is thought to be pursuing a similar hybrid model to the successful one deployed by Hulu, with a cheaper tier that carries some advertising, and a more expensive

tier entirely free from advertising. “Short-form content is arguably as popular as more traditional formats in its own way – YouTube alone gets 1.9 billion monthly users,” adds Black. “The more pertinent question is whether users are willing to pay for it in the same way. Quibi will be looking particularly at the success of short-form content in China, which has been ahead of the game in this audience trend over the last few years, with short-form focused user-generated platforms including Kuaishou, Xigua Video and the internationally successful TikTok. However, Quibi remains unique in asking the audience to pay for that content directly.” From a producer’s point of view, the creation of a direct relationship with viewers has led to a change in production values, monetisation and consumption patterns for non-linear content, be it short- or long-form. “Both differ markedly from each other and the available budgets ultimately influence the risks Sensethefuture Pictures are prepared to take and the ease with which content is created,” explains Potocka. “In the short-form online world the producer is expected to bear the upfront cost of production and then recoup their investment, usually from a share of the advertising revenue from the online platform. In the new serialised storytelling world created by Quibi, the platform carries the risk, covers the production fee and after two years we would be able to repackage the Quibi content back into long-form series and pitch them for distribution to other platforms.” So as attractive as Quibi sounds to both analysts and potential producers, it remains to be seen whether the service can take a quick bite out of the competition.

“Quibi not only needs to have better content than YouTube and Snapchat, it needs to be better by enough of a distance to convince customers it is worth paying for.” FRED BLACK


WHY HOLLYWOOD JUST CAN’T WAIT TO BE CLOUD Ann-Marie Corvin wraps up some of the highlights from this year’s IBC Conference


While hot topics around OTT, AI and the growing dominance of Silicon Valley were in abundance, the biggest innovations at IBC this year were arguably to be found in the Auditorium. This year’s Big Screen strand ran throughout the entire Conference and the sessions offered delegates a glimpse into future production techniques and workflows that will set the bar for years to come. Employing a live-action film crew that can work virtually to set up and execute shots in an animated world is one example; a technique employed on Disney’s all-digital, astoundingly photorealistic The Lion King remake. In one keynote session the film’s cinematographer Caleb Deschanel and its VFX supervisor Rob Legato demonstrated how they used VR headsets to plan out their shots. “VR allows you to walk around the CG world like you would on a real set and put the camera where you want. You know where the light goes, you know where the


camera goes, you know where the actors are,” Deschanel explained. “We were able to get some obscure points of view and tell a different story about the characters - and VR gave us that opportunity.” According to Legato, the film’s budget could have been much larger had the production gone to Africa to shoot and then composited CG characters into the shots. As a result, he predicted that VR previsualisation would change how movies were made and scheduled, while Deschanel argued that it would democratise the filmmaking process for a new generation of filmmakers. “A 25-year-old aspiring director will be able to practice a shoot at home virtually before sharing their vision with the world. This is where the next generation of talent will come from,” he predicted. HOLLYWOOD’S VISION FOR PRODUCTION IN 2030 In another Big Screen session, Hollywood studios’ tech bosses gathered to discuss a new whitepaper that is set to transform production workflows in an industry where massive increases in content production across multiple platforms are proving a real challenge. Michael Wise, CTO of Universal Pictures, argued that “faster production cycles and more rapid iterations”

were needed. MovieLabs - an independent body founded by five big studios to explore new technologies and workflows - believes the answer lies in the Cloud. The body’s ten-year vision is for all studio assets to be created and ingested straight into the Cloud. Furthermore, any tools used on content assets in this new workflow must come to the Cloud, rather than the other way around. For this to happen, Disney’s SVP of technology Arjun Ramamurthy acknowledged that the studios would need support from “everyone in the production ecosystem.” He added: “We want the best of breed tools, but they need to be interchangeable and interoperable – I would say that this is the key message here for manufacturers at IBC.” BBC’S NHU PUSHES ENVELOPE Another production trailblazer - and a big influence on The Lion King according to its cinematographer - is the BBC’s Natural History Unit. Big Screen delegates were wowed by UHD HDR versions of the NHU’s recent hits including Planet Earth II, Dynasties and its latest production, Seven Worlds, One Planet. Colin Jackson, senior innovation producer at BBC Studios, revealed that most of its productions are now filmed in 7.5K and 8K and the unit was “committed to driving innovation.” According to Jackson, this includes the use of drones in ways that would have been unthinkable only a couple of years ago: at night, over water, in caves and circling around the Russian Arctic. And Rachel Butler, a producer at the unit, added that new technology was literally taking its crews to new depths: “Using air rebreathers, for example, means our underwater teams can now stay down for 3-6 hours at a time.” FAANG PARTNERSHIPS Those other California players 350 miles north of Hollywood - the so-called FAANGs - were also out in force this IBC, hungry for partnerships and insisting that they were friends, not foes.
In her conference opener, YouTube’s head of content, Cecile Frot-Coutaz, urged pay-TV companies to harness the platform to reach new audiences. The former Fremantle chief executive said that YouTube’s partners in the pay-TV space were increasingly viewing the platform as an essential part of their content offering. Traditional free-to-airs are also embracing VoD on social, with examples provided of how ProSiebenSat.1 and France Télévisions are harnessing Facebook’s video on demand service Watch to drive audiences to their TV programming, with previews and exclusive content. The social media giant’s social video product manager Erin Connolly added that, on average, audiences are currently spending 26 minutes in Watch.

“We have a long way to go but we are excited about the progress we have seen,” he said, adding that the platform was looking for “long-term sustainable and mutually beneficial partnerships.” Facebook was also an IBC exhibitor this year and during the Conference announced updates that it has made to two Facebook Live features, Watch Party and Creator Studio. Also delivering keynotes at the conference were Shalini Govil-Pai, head of Android TV (Google), and Alexa’s Max Amordeluso (Amazon), who both espoused the benefits of voice control technology. “We find that the user has to be trained to use Google Assistant, but when they get it, it’s a magical experience,” said Govil-Pai. However, not everybody believes the way that the FAANGs are operating is magical. During Friday’s keynote, Noel Curran, director general of the EBU, warned that any partnerships between broadcasters and social media companies would remain “unbalanced” while platforms coveted broadcasters’ audience data. “We need to have access to the data social media platforms hold on our output and we need to get to a position where we get due prominence on those platforms,” Curran insisted. Regulation, Curran continued, was the answer, as he called for governments to get tougher on Silicon Valley: “Why is there no regulation in terms of data? Right now we have an unregulated social media sector, dominated by four or five big players that have an unprecedented amount of control.” AI BUSTS FAKE NEWS Another strand running through IBC this year was Tech Talks, which sees the show’s Conference committee anonymously select whitepapers on the basis of their merits, and then invite the authors to give their presentations to IBC delegates. These included a session on AI, which saw Al Jazeera’s head of media and emerging platforms Grant Totten present a proof-of-concept AI aimed at busting fake news.
Totten explained that the work involved combining enriched metadata with contextual video analysis to perform a number of compliance-related tasks on its news-based content. These ranged from ensuring political candidates get equal speaking time to validating maps of disputed territories. The most challenging level of the research, he added, was the detection of deep fakes; AI-generated synthetic videos of events and actions that haven’t occurred. “Detecting deep fakes is the most advanced level of our research because it takes a visually orientated approach to fake news that is textual as well as visual. We’re pushing contextual video analysis to its limits,” Totten said.


FAST, EFFICIENT AND FLEXIBLE With IP-based workflows and Next Generation Audio still firmly in the ascendant, the story of audio at IBC 2019 was one of continuing trends, writes David Davies


‘Fast, efficient and flexible’ is a description that could be applied to most broadcast audio product launches over the last few years as vendors have responded to the escalating need for total versatility with regard to broadcast applications. The move from SDI to IP connectivity has been a great enabler, and it will surprise no one to discover that there were many networking-related developments at IBC 2019. There was also evidence that manufacturers are investing more into allowing broadcasters to cope with increasingly complex live productions – including those with a Next Generation Audio component – as well as the growing interest in implementing at least some degree of remote and/or virtualised production when appropriate…


ON THE SHOWFLOOR Several of these strands came together in Calrec’s appearance at IBC, with the company expanding its suite of applications designed to give broadcasters “more efficient tools without the requirement to purchase more hardware.” Hence, Calrec’s Assist – which was shown controlling the VP2 and Type R virtual consoles at the show – is a browser-based interface allowing users to set up shows, memories, fader layouts, input and output levels, routing and other key tasks. Also browser-based, the Connect app can be accessed anywhere and includes built-in broadcast features like input controls, network diagnostics and GPIO-style logic connections. Lawo was another company demonstrating its ability to cater to various production needs. The UHD Core is a

network-based, software-defined DSP engine offering 1024 fully featured channels and supporting capacity expansion for more demanding productions. As a result, it can be used with a single Lawo mc²56 or mc²96 console, or shared by up to four different desks. Further underlining the current interest in virtualised production, Lawo also introduced the latest version of its software-defined IP routing, processing and multiviewer platform in the form of V_matrix software suite 1.10. Among other features, V_matrix app vm_udx natively supports both ST2022-6 and ST2110-20 video, as well as ST2110-30/AES67 and Ravenna IP audio streams. Speaking of Ravenna, the ALC NetworX-originated IP transport solution enjoyed a dynamic IBC, with visitors able to see a demo in operation with Ravenna/AES67-enabled devices from a dozen manufacturers, supplemented by several table-top devices including the TM9 interface from new Ravenna partner RTW and the QARION headphone monitor from Qbit. Satisfying the wish of different broadcasters to work with IP in contrasting ways, the demo encompassed ST2110-30 audio stream interoperability (including Dante AES67 streams) as well as ST2022-7 stream redundancy and NMOS IS-04/IS-05 support. Another Ravenna partner, Merging Technologies, presented a new record and playback solution consisting of the ANUBIS monitoring device and collaborator Cymatic’s uTrack24 player/recorder – both connected through a Ravenna/AES67 network controlled by Merging’s ANEMAN software. Moreover, Merging was also at the heart of one of the show’s most high-profile customer announcements with news that the Master Control Centre facility operated by broadcast services provider NEP in Hilversum, the Netherlands, has selected ANUBIS for use with its Virtualised Editing Platform. Remote production – which allows off-site resources to be optimised for productions that are more cost- and personnel-efficient – was an important theme of this year’s IBC.
One of the most interesting launches in this vein was AETA Audio’s Remote Access feature, which enables any device to be controlled from any location. Ease of use was one of the guiding R&D principles, with AETA Audio suggesting that it will be readily accessible to journalists with minimal training, as well as allowing technicians to make remote adjustments back at base. High-quality broadcast microphones have always been the bedrock of successful productions, and in this regard too IBC visitors were not under-served. For instance, new from Audio-Technica were the BP892x, BP893x and BP894x MicroSet headworn microphones. Specific features include a subminiature omnidirectional condenser capsule in the BP892x that delivers intelligible, natural sound with a flat, extended frequency response. Primary improvements across the range when compared to the previous-generation products include detachable cables and more secure ear loops for comfort and fit.

AT THE CONFERENCE As immersive and object-based audio technologies continue to become more prevalent in broadcast, film and gaming, several sessions at the IBC conference explored some of the associated challenges. One of the most intriguing, The New Sound of Hollywood, was presented in association with SMPTE and featured Sound Particles CEO Nuno Fonseca and sound designer Tormod Ringnes discussing the impact of CGI-like sound design software and immersive 3D audio production on blockbusters from Hollywood studios such as Warner Bros and Skywalker. Despite its undoubted technical and creative challenges, there is no doubt that immersive audio offers film and TV producers a way to differentiate their content as the volume of options confronting consumers continues to grow. More complex mixing sessions and the need for acoustically-optimised workspaces are among the requirements of immersive productions, the specifics of which were discussed by Avid audio presales manager Greg Chin in his session entitled Native Immersive Audio Production: Captivating the Modern Viewer. If immersive is still at a relatively early stage in terms of audio’s overall history, then sound for esports is surely in an even more formative phase. This is an area of considerable complexity: in terms of participants, players need to be able to focus on audio from their computers as well as coordination with their teammates; and regarding the audience, they require access to in-game sound as well as continuous commentary. The challenges for sound teams in delivering a balanced, detailed yet exhilarating experience for esports viewers and players were discussed in a far-reaching paper presented by Riedel Communications director APAC Cameron O’Neill.

PICTURED ABOVE: Audio-Technica’s new MicroSet headworn microphones


SMILE! Say hello to some of this year’s Best of Show at IBC 2019 winners




By Marcus Bergström, CEO, Vionlabs

PICTURED ABOVE: Marcus Bergström

Consumers now have access to more content than ever before thanks to the proliferation of VoD and SVoD services like Amazon, BBC iPlayer and Netflix. Yet, this market is set to become even more competitive as new services are slated to launch from Apple, Britbox, Disney and others. The focus of these streaming wars has primarily been on programming, but the reality is that the services that will reign supreme are the ones that offer highly personalised and intuitive user experiences, particularly when it comes to content discovery. Today’s consumers are spending 25 per cent or more of screen-time looking for something to watch: this is far too long and is one of the key drivers behind users leaving video services. Currently, the most common content discovery recommendations rely on external metadata sources for information on the video content in an operator’s catalogue, which usually results in fairly poor recommendations. For example, a viewer may have just finished watching The Hunger Games, a film in which Jennifer Lawrence has the starring role. A content discovery platform just analysing metadata will produce recommendations which include other films Jennifer Lawrence has starred in, such as Red Sparrow, or the film Hunger, because it has the same word in the title. Such content discovery platforms may produce relevant recommendations that a viewer wants to watch from time to time, but due to the low quality of input data the recommendations will be highly unreliable and often irrelevant. This in turn can lead to a viewer leaving a service, because they are frustrated by the seemingly limited content choice. If operators want to increase user engagement and subscriber retention, then they need to realise that content discovery based solely on metadata sources is simply not nuanced enough to provide the


accurate recommendations that are so important in keeping viewers returning to a service. AI can solve this content discovery challenge that almost all operators face because it has the capability to analyse each video in great detail and combine this with the viewer’s watch-history. Metadata will still play a role in content discovery, but we don’t believe that it’s anywhere near enough by itself. AI has enabled us to completely rethink the metadata paradigm: we use a number of different neural networks to find patterns in colours, camera movements, objects, stress levels, positive/negative emotions, audio and many more features of the content. Our AI engines have been trained to analyse these variables and produce a fingerprint timeline throughout the content. The AI engine then learns what matters and how changes in these fingerprint timelines are connected to what content individual viewers enjoy. A key element of this is a technique we call content similarity analysis, which compares each content timeline with every other timeline to evaluate how similar each content asset is to every other asset. The AI engine uses this content similarity database and the viewer watchlist to provide viewers with the most accurate and relevant recommendations. For example, if a viewer has just


finished watching a horror movie our platform will understand that some viewers may want to explore more films in that genre while others may want to watch a comedy to lighten the mood. Through deploying AI for content discovery, our platform doesn’t need any details from the viewer apart from their watch history. This makes individual user profiles redundant, because AI can combine its understanding of the consumer’s favourite content with data points such as device type, time of day, and the chronological consumption pattern, so that operators can provide an accurate and relevant experience for all members of a household without the hassle of switching between profiles. This feeds into our previous point about leading viewers to different destinations because it will understand that if someone has watched a horror film on a Friday night on their TV screen then they may want to continue watching similar content well into the early hours. However, if someone has just finished watching a horror film on a weeknight, they may want to watch something more relaxing before having an early night. Our AI-based content discovery is generating a lot of interest in the VoD market, but it definitely also has a role in linear programming. An operator with both live and VoD services can use AI-powered content discovery platforms to understand its VoD content, which means that the VoD library can be the foundation

for creating hyper-personalised linear programming. This is analogous to how online music services such as Spotify use similarities between songs to create music channels from users’ music-streaming behaviour. A similar analysis of video can enable operators to create completely personalised linear programming experiences by offering personalised channels – such as ‘weekly discovery’, ‘TV pilots for you’ or ‘space channels’ – based upon VoD consumption behaviour. Here at Vionlabs, we firmly believe that AI is the future of content discovery, which is why we have developed a platform that uses AI and machine learning to analyse movies, TV series, documentaries and more in order to significantly improve the accuracy, reliability and relevance of personalisation and discovery for video services. Content service providers have already drawn the battle lines around programming, but a viewer will abandon a service if they cannot find the great content that they’ve been promised. During the last 12 months we have productised our technology and deployed our content discovery platform for operators that are now enjoying a significant uplift in VoD buy-rates and engagement. By serving up the right content at the right time, our content discovery platform maximises the time viewers spend watching the content they love and minimises the time they spend searching for it, leading to happier viewers who spend more time and money on a video platform. n
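The content similarity analysis described above can be sketched in miniature. This is an illustrative simplification, not Vionlabs’ implementation: real fingerprints are per-scene timelines, whereas here each title is reduced to a single vector of averaged feature values, and all titles and numbers are invented for the example.

```python
import math

# Hypothetical per-title "fingerprints": averaged values for features such
# as colour warmth, movement intensity and audio stress (all illustrative).
fingerprints = {
    "Quantum of Solace": [0.7, 0.9, 0.8],
    "Dr. No":            [0.6, 0.5, 0.4],
    "Some Comedy":       [0.2, 0.3, 0.1],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def recommend(title, k=2):
    """Rank every other title by similarity to the one just watched."""
    target = fingerprints[title]
    scores = {other: cosine_similarity(target, vec)
              for other, vec in fingerprints.items() if other != title}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Combining such a similarity database with a watch history is what lets the engine suggest either more of the same genre or a deliberate change of mood.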

PICTURED ABOVE: A visual fingerprint of two James Bond films, Quantum of Solace and Dr No. Each tile represents a scene from start to finish. The colour represents the average colour scheme of the scene, the speed at which the tile moves up and down represents the intensity of movement, and the width of the tile represents the length of the scene.




Our appetite for media and entertainment has continued to grow over the last 50 years. The TV set has become the centrepiece of most living rooms, with additional sets throughout the house. Video on demand (VoD), which started on tablets and PCs and then spread to smart TVs, has now reached the mobile world, and media consumption habits continue to evolve. Today’s millennial audiences are watching more content on smartphones on the move than on the big screen at home. Frequent flyers on long-haul routes now consider the in-seat screen - and personal entertainment - a necessity rather than a luxury. In many ways, the last untapped sector in media and entertainment is automotive, but that is about to change. With improved mobile connection reliability, advances in In-Vehicle Infotainment (IVI) hardware platforms and support for Wi-Fi to enable BYOD (Bring Your Own Device), we are at the start of the in-car entertainment revolution. Although IVI is not a new concept, the stage is set for massive market growth over the next decade. The catalyst is a confluence of technologies that are fundamentally changing the automotive transport industry, with an even bigger shift to come when fully autonomous passenger vehicles hit our streets.


MORE VIEWERS Improved connectivity and reliability, coupled with technologies such as adaptive bit-rate delivery of content, mean that streaming audio and video services can be delivered to vehicles now. Deployment of in-car Wi-Fi hotspots will make it easier for passengers to stream their favourite content to their personal devices, opening up entry-level options for car manufacturers to still deliver digital services to their customers. Electric vehicles will also offer an opportunity for media consumption while charging - another opportunity for manufacturers to develop their digital brand and customer relationship. The availability of full autonomy will overcome the major limitation of IVI: a lack of viewers due to strict driver distraction regulations. According to the UK’s Department for Transport, in 2017 UK citizens travelled 808 billion km by car, van or taxi. That equates to tens of millions of hours on the roads; however, currently 60 per cent of all journeys are made alone, so video services are simply not possible, and journeys with more than two passengers account for only 15 per cent of all trips. In a future world where the driver can relax, either when recharging or when the car is autonomous, in-car entertainment consumption is likely to soar. A modest estimate suggests that the global

automotive infotainment market is expected to reach $40.17 billion by 2024.
EVOLVING IVI
At present, the IVI experience is centred around the driver, with key applications such as navigation, Bluetooth integration and broadcast audio services. In higher-end vehicles, connected services are becoming more common, with in-car Wi-Fi hotspots and 4G connectivity powering additional connected services such as Spotify for music or LoJack for vehicle security. 5G will become a driver for new connected services and will become standard on vehicles from most tier-one car manufacturers in the coming years. This is not just for entertainment but part of an effort for manufacturers to build a digital relationship with their customers by providing valuable services through better communication, support and after-sales service. At present, audio services are king in the car, but this starts to change when there are passengers, providing a clear indication that consumers want more. Most video entertainment consumed in-car is via the BYOD option, with many passengers already watching content on the road through pre-downloaded video on their personal devices. However, this will subtly shift as vehicles with built-in Wi-Fi and 4G/5G become more widely available. Audio services such as Spotify and voice recognition systems like Google Assistant, Amazon Alexa and Apple Siri are becoming ‘built in’, with original equipment manufacturers (OEMs) such as Ford teasing the prospect at recent CES and NAB Show events. It will be interesting to see if the BYOD option represents a stop gap, as the experience of watching high-quality content on a small smartphone screen is not ideal, and having to pre-download content is far less convenient than streaming from a large online library.
VEHICLE CENTRIC
The automotive sector is moving towards a new always-connected environment, providing opportunities to enable a highly personalised experience and an ongoing customer relationship. 
We can already see this in the way cars are evolving to include larger screens, connected/online services and voice assistants. Video content could be seen as a natural next step. Car manufacturers now have an interesting choice in how to provide media services: they could deliver a solution that simply “mirrors” a mobile device screen via Android Auto, Apple CarPlay or MirrorLink. This enables services within the car, but what value does it deliver to the car OEM? Using this approach they are simply a conduit for third parties to hold the business relationship with - and revenues from - content providers. They also lose the direct in-car relationship with their customers as the in-car screen becomes a dumb display. Perhaps a more sustainable approach would be to

provide a suite of targeted services within an OEM-branded environment, extending these with regional “must have” applications. This would echo the proven smart TV experience, where the brand owns the initial user experience and additional services are presented within it. The automotive and content industries have an opportunity to collaborate, enabling both to foster important new connections with consumers and tap into new avenues for monetisation as other avenues become saturated. By not giving up the in-car experience to third parties, broadcasters, automotive manufacturers, advertisers and third-party service providers will start to be able to understand more about the viewer than just that they are sitting in front of a screen. Through consensual sharing of data, this quartet of interested parties will start to gain a thorough picture of journey data, habits and visited locations that only the likes of Google currently have access to. This insight will unlock new possibilities such as advertising - via video, audio and images - based on nearby locations or destination-based retailers or attractions. Additional content tailored specifically for the journey - such as localised travel news or content about what is happening in the destination town or city - could be delivered en route. This could be offered as part of a personalised content package, offering further monetisation opportunities.
INNOVATION THROUGH STANDARDS
Crucially, much of the technology already exists to achieve many of these ideas. Leveraging proven technologies such as 4G/5G, Wi-Fi and UPnP, and potentially de-facto standards such as Linux and Android, allows innovative solutions such as the ACCESS Twine Service Platform to interface to the platform and provide applications and services via established APIs. Both the media and entertainment and automotive sides of the equation are currently in the exploratory phase when it comes to online entertainment in the car. 
At present, the IVI landscape is still experimenting with the interfaces and feature sets needed to provide the next level of services to encourage consumers to invest in new vehicles, with online services moving from “SHOULD” to “MUST” in requests from manufacturers. Each new motor show highlights further innovation that shows the inevitability of the car becoming an entertainment device over the next few years. Although electrification and autonomous vehicles are currently gathering the most headlines, entertainment services offer a huge opportunity to foster the ongoing digital relationship that will enable new revenue opportunities, ranging from short-term agreements such as holiday packages through to subscription services, with possible intermediates such as advertising-funded “freemium” services and revenue sharing with location-based services such as drive-through fast food or coffee. The future of in-vehicle entertainment will certainly be exciting to be part of as it evolves! n
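The adaptive bit-rate delivery mentioned earlier is what keeps in-car streaming watchable as connectivity fluctuates between cells. A minimal sketch of the idea, with an illustrative bitrate ladder rather than any real service’s figures:

```python
# Sketch of adaptive bit-rate (ABR) rendition selection: the player picks
# the highest-bitrate rendition that fits within a safety margin of the
# currently measured network throughput. Ladder values are illustrative.

LADDER_KBPS = [400, 1200, 2500, 5000, 8000]  # available renditions, ascending

def select_rendition(throughput_kbps, safety=0.8):
    """Return the highest bitrate that fits in `safety` * throughput,
    falling back to the lowest rendition when the link is very poor."""
    budget = throughput_kbps * safety
    usable = [r for r in LADDER_KBPS if r <= budget]
    return usable[-1] if usable else LADDER_KBPS[0]
```

Re-running this per segment lets the player step down in quality instead of stalling when the car passes through a coverage gap, which is the behaviour that makes vehicle streaming practical today.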


VRJAM CEO Sam Speaight shows Dan Meier the joys of Ministry of Sound without the crowds


If you’d told me 10 years ago that I’d experience Ministry of Sound virtually before I’d done so physically, I probably wouldn’t have believed you, but I definitely would have been confused. That proved to be the case when a VR headset took me to the world-famous venue from the other side of the Thames, thanks to an innovative new platform called VRJAM. The platform allows mobile phone users to move around virtual environments in real time, with zero delay. “Usually one would be hampered by the laws of internet connectivity, like buffering on YouTube; you do not have that with our tech,” says the company’s founder and CEO Sam Speaight, explaining that the server will


kick users out and ask them to log back in if the connection drops. The other key distinction of VRJAM is that it’s built for mobile rather than desktop-based VR systems. “The only people that own Rift and VIVE are wealthy nerds,” says Speaight. “They’re very expensive, you need a really high-quality computer to run them, and it’s just not how you reach a global audience. So we’ve really tried to rise to the challenge of building tech for mobile.” Running a demo of the app on a Galaxy S8, Speaight shows me round a simulation of the main room in Ministry of Sound. “We went in there, we measured the room, we photographed the surfaces, we created a fully realistic space.

What I am able to do is activate anything that’s inside that environment in real time,” he says, triggering a lightning strike inside the virtual environment. He notes that the server is located in Paris but the change in environment happens almost instantaneously. “The DJ is representative of a real, actual human being and it’s being driven by a motion capture hardware system that’s running in our studio in the Netherlands,” he adds, before turning the DJ into a robot and then a manga character, the changes again occurring at the touch of a button. Over the course of the demo Speaight triggers lighting effects such as strobing, as well as changes in music and location, placing the user in colourful alien dimensions or floating space environments, complete with asteroid storms and trippy particle effects. Speaight notes that live music is just one possible deployment for VRJAM, which could also be used to immerse the user in sporting arenas or corporate events: “We’ve got a sports application we’re working on where we’re looking to put two football teams in real time into an immersive environment, whereby as a fan you can stand on the sideline and watch the game, or stand in the goal and watch the ball whizz past your head, or hover above the centre forward as he’s kicking the ball. Anything’s possible!” As for the content creators themselves, the company is keen to see them rewarded equitably. “We feel that platforms like Spotify and YouTube get the content valuation model totally wrong,” Speaight explains. “And we feel that the terrible lack of fair revenue sharing that’s given to content creators is killing so many parts of the artistic industry, whether it’s video or music or what have you; a scenario where YouTube makes $1,000 from streaming content and gives $10 of that to a content creator has to end. 
It’s unsustainable and it can’t last.” VRJAM’s new model is based on a 25 per cent revenue share, meaning a quarter of all the revenue generated via assets published on the platform goes back to the content creator. “Which is, we believe, something really, really important in terms of beginning to redefine the broken value system that exists online in terms of content,” adds Speaight. Another broken model the company is hoping to transform is advertising, as Speaight explains: “Display advertising is also dead. We use it as a main primary revenue source because we know we can get loads of money from it. We prefer not to.” VRJAM is looking to introduce “premium content”, whereby interactive ads are built into the immersive environment. “Let’s say for instance, I’ve got a company that makes basketballs, Sam’s Basketball,” says Speaight. “Well inside Ministry of Sound we can have a basketball hoop over in

the corner, and I can wander over to the basketball hoop and I can shoot virtual baskets. I hit a three-pointer, I get a token that pops up on my account and allows me to go to Sam’s Basketball website and download a basketball for a 50 per cent discount. At that point, I’ve got the brand message, I’m having loads of fun because I love basketball, I’m hitting every three-point shot I throw because I’m in VR and I’m suddenly the world’s greatest basketball player,” he continues. “But hang on, this is an ad?! An enjoyable ad?! That’s fun?! Wow.” The potential for content creators and advertisers alike is therefore highly valuable, as well as offering graphics and CG artists the chance to work with their favourite performers. “They’re collaborating with this artist to create new immersive content in real time, they’re generating a revenue share of the advertising that’s

coming from that content, so they’re getting paid, and they’re also getting a huge personal payoff in terms of being directly involved in creating art with this very high-profile artist,” says Speaight. “What we’re trying to do is create a platform for collaboration between artists, which again is something very new.” Is the arrival of 5G an exciting prospect for this real-time functionality? “It’s going to help us immensely,” says Speaight, adding: “I am mildly sceptical about it to be honest. I’m hoping that it delivers on its promises. If it does it’s going to allow us to do some very special things. We’re talking to Google at the moment about some 5G-related stuff but it’s not much more than a conversation at this point. “As in all things, anything to do with frontier tech, and 5G is a frontier technology, I’m always like, OK, we’ll see,” he laughs. “Certainly the telcos have invested a lot of money in getting it there. I believe that they wouldn’t have invested a lot of money if it wasn’t capable of delivering on its promises. We’re excited but we’re tempering our expectations.” Until then, I can vouch for the immediacy of VRJAM’s functionality - or at least my avatar can. n
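The payout gap Speaight describes can be put in plain numbers. This is a sketch using only the article’s illustrative figures ($1,000 gross, a $10 payout versus a 25 per cent share); the function is hypothetical, not part of any platform’s API.

```python
# Revenue-share comparison in rough numbers, per the article's examples:
# roughly a 1 per cent payout versus VRJAM's stated 25 per cent share.

def creator_payout(gross_usd, share):
    """Amount of gross revenue passed back to the content creator."""
    return gross_usd * share

legacy = creator_payout(1000, 0.01)  # the $10-on-$1,000 scenario criticised
vrjam  = creator_payout(1000, 0.25)  # a quarter of the same gross
```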

PICTURED ABOVE: A robot DJs at Ministry of Sound



MOBILE NETWORKS: THE NEXT GENERATION Mark Layton reports as BT Sport’s Matt Stagg reveals how 5G could revolutionise remote production for broadcasters and why you shouldn’t expect 4K videos on your mobile just yet


The launch of 5G is set to be the next leap forward in mobile infrastructure, offering faster and more reliable data transmission on ultra-high bandwidth with very low latency. Its potential applications go far beyond simply being able to check the news headlines on your mobile phone a little faster, and its arrival could herald big changes for broadcasters. Matt Stagg, director of mobile strategy at BT Sport, led the team that delivered the world’s first remotely produced sporting event and is responsible for developing the 5G technology strategy for broadcast, media and entertainment for the wider BT Group. In a webinar recently hosted by TVBEurope, Stagg explained some of the potential benefits of remote production that 5G could provide for news and sport broadcasters: “When there’s breaking news, all of the news companies go to the location and they’re all using the uplink, so you’re going to end up with a congested network. We knew we had a problem to solve and that’s where we could look at 5G. We are very interested in using remote production; it has a number of really fantastic qualities that will revolutionise our industry.” He continues, “Sending trucks up and down the country is not only expensive, but it leaves a huge carbon footprint.


If we can just connect the cameras back to the studio and not have an OB (outside broadcast) then there’s a work-life balance – we don’t have to send our staff all over the place at the weekend, they can come into work at the studio.” Stagg says that 5G can also improve mass media file transfer, which could eliminate the need for a courier to take digital media rushes back to editors to create a rough cut at the end of the day. “When you’re looking at ultra-high data rates, when we move towards putting up the uplink, potentially that can change the way that’s done, because you don’t have to wait until the end of the day to send all the media, you could be doing it after each cut.” While 5G may work wonders for remote production, Stagg says that live video latency won’t really be affected, despite 5G latency being as low as one millisecond: “Video doesn’t need an ultra-low latency,” he explains. “In fact, 4G latency is fine for live sport because you have so much in the workflow that the difference between one millisecond latency and 80 millisecond latency is absolutely dwarfed by anything else in that workflow, including the ABR (adaptive bitrate) buffering on the phone. When you think about the production workflow, from camera to screen, it’s unnecessary.”
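Stagg’s point about latency being dwarfed by the rest of the workflow is easy to check with back-of-envelope arithmetic. The pipeline figures below are illustrative assumptions, not measurements from BT Sport:

```python
# Rough glass-to-glass budget for live video: encoding, CDN delivery and
# the ABR player buffer dominate, so cutting network latency from 80 ms
# (4G) to 1 ms (5G) barely moves the total. All figures are illustrative.

PIPELINE_MS = {
    "encode + package":  4000,
    "CDN delivery":      1000,
    "ABR player buffer": 12000,  # a typical multi-segment buffer
}

def glass_to_glass(network_latency_ms):
    """Total camera-to-screen delay for a given network latency."""
    return network_latency_ms + sum(PIPELINE_MS.values())

saving = glass_to_glass(80) - glass_to_glass(1)  # 79 ms saved
total = glass_to_glass(80)                       # ~17 seconds overall
```

Under these assumptions the 79 ms saved is well under one per cent of the total, which is exactly why ultra-low latency matters far more for remote production links than for the viewer’s stream.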

The move to 4G may not have had as huge an impact on the broadcast industry as many might have expected, but Stagg says that the switch to 5G will be different, due to the amount of input that has been received from key stakeholders from the get-go. He explains that when 4G was launched, traditional broadcasters weren’t used to the constraints that came with working with mobile technology. “We saw that uplink was being used in ways that were not envisaged when 4G came about, but you could now - and lots of people still do - use it for breaking news outside of broadcast. It’s still on a best-effort basis, but at that time, uplink wasn’t used at all, really. “So this is where the collaboration between the mobile network operators, the standards bodies and also the broadcasters and the media companies really comes into its own, because there is a great opportunity to change a number of things in the industry.” Stagg adds that he has had many discussions with mobile network operators and broadcasters to make sure that anything they need from 5G will go into the standards now, with one of the biggest requirements being guaranteed quality of experience for applications. He continues: “The reason that most people didn’t

take up broadcast on 4G was because of handset density and penetration. Now it’s in earlier, and it’s also got, which it didn’t have so much from the beginning, this engagement from the industry. We have the broadcasters putting their requirements into the standards, saying this is what they want. So, it’s in earlier, and it’s been written into the standards with a view that it will be used. “With 4G broadcast it was ‘you could do this if you want’ and it was only later on that people said ‘oh actually, we need to do that’, and then it’s very difficult to retrospectively bring that industry and that technology to life. There are also other opportunities such as DTT (digital terrestrial TV) augmentation or replacement. So everyone is in early and the industry in general, end-to-end, is saying this is a requirement, not a nice-to-have.” One of the big advantages that 5G is set to have over 4G is network slicing: the ability for mobile networks to partition, or ‘slice’, the spectrum they own into smaller virtual networks for different uses. Stagg says the cost of a network slice dedicated to broadcast has yet to be discussed, but he expects that the method of payment wouldn’t be anything other than “buying a service for an amount of time.” He explains: “It would be somewhere along the same



lines as buying a service from a media broadcast company or a satellite operator. I think when we talk about 5G we need to look at services. The commercial aspect will need to be developed alongside the technology. “I think we are a long way from working out the cost, but that will come once we understand how we can provision that service, and it needs to have a return on investment. If it is too expensive to actually do, then it won’t be done, but I’m confident that we will be running this type of service.” While broadcasters and mobile network operators will be very interested in what 5G can do for video, Stagg highlights how, when 4G first became available, video was not such a major consideration: “It was never envisaged that the killer application would be video. When 4G was launched it was an enhanced mobile broadband experience. We started to see that it was being used a lot for video. It was a perfect time, the planets aligned to make this phenomenon. You could now watch video over mobile because you had a very good connection,” he says. “4G was rolled out very quickly, so more people could have access very quickly, the iPhone had come around, it was the rise of the apps, screens were bigger and screen resolutions were a lot better, and more content was becoming enabled. We saw that YouTube was moving away from purely short-form homemade cat videos to more curated longer-form content, and we started to see the rise of the YouTube influencers. “So at the moment, video dominates mobile networks. We anticipate that, certainly on our network, video will hit around 75 per cent of consumption by 2021. 
Depending on what happens with 5G and its take-up, those numbers may change, because we won’t just have a 4G network.” However, just because video content may currently be the application du jour of 4G mobile data use, Stagg explains that it is simply unnecessary to bring 4K-quality video to phone screens - and would be too expensive anyway - and that consumers should not expect a sudden jump in image resolution after a switch to 5G


mobile devices and the higher bandwidth they provide. “It is well stated in the industry that 4K has no value to anybody watching it on a small screen. Nobody can see all the pixels for 4K on a small screen,” he says. “There are a number of techno-economic reasons around it, one of them being the cost in terms of bandwidth required. There is a cost for all amounts of bandwidth, not just for charging mechanisms but also the investment you have to make as a mobile network operator. “So watching an hour of 4K is prohibitively expensive in terms of cost to serve or cost to buy. At the moment, if I’m just a consumer with a data package, I don’t want to have 4K, not be able to tell the difference, and have it cost me four times as much in terms of data. Now, when we move towards unlimited, if I provide everybody with unlimited 4K I bear that cost in terms of the capacity of the network. So, small screen, HD/HDR is the optimum viewing experience.” In the UK, 5G is currently only available on certain operators and in certain parts of the country, but with more coverage on the way later this year and next, the opportunities that this next generation of mobile network technology can offer to both industries and consumers should become even more apparent. n
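The “four times as much data” claim converts directly into gigabytes with simple arithmetic. The bitrates below are common industry ballpark figures for mobile HD and 4K, not BT’s numbers:

```python
# Back-of-envelope data cost of streaming at a given bitrate:
# megabits per second -> gigabytes consumed per hour of viewing.

def gb_per_hour(bitrate_mbps):
    """Data consumed by one hour of video at the given bitrate."""
    return bitrate_mbps * 3600 / 8 / 1000  # Mb -> MB -> GB

hd_gb = gb_per_hour(4)    # ~1.8 GB/hour for mobile HD (assumed bitrate)
uhd_gb = gb_per_hour(16)  # ~7.2 GB/hour for 4K, i.e. four times the data
```

Under these assumed bitrates an hour of 4K consumes four times the data of HD for a difference the viewer cannot see on a phone screen, which is the techno-economic argument Stagg makes.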

“There is a great opportunity to change a number of things in the industry.” MATT STAGG


ON SET WITH Philip Stevens continues his look at soaps with a visit to the fictional northern village


When Hollyoaks first appeared on the UK’s Channel 4 in October 1995, just one episode a week was shown. Since then the air time has increased and the ‘youthful soap’ is now seen five times a week. And that reflects the impact the programme has had – not just on the young target audience, but on the growing number of older viewers. Set in the fictional suburb of the north west city of Chester, but shot just outside Liverpool’s centre, the series is produced by Lime Pictures. Over the years it has won close to 60 awards, including Best Soap in 2014 and 2019. As well as the UK, Hollyoaks is also broadcast in the US, Canada, Sweden, Norway, Finland, Iceland, Serbia, Bosnia and Herzegovina, and South Africa. Hollyoaks’ executive producer Bryan Kirkwood describes Channel 4’s flagship soap as being made up of three types of stories: huge gothic soapy plots, thrilling romances and serious issue-based stories. “We tell those issue-based stories because they matter hugely to our

young audience and start conversations in living rooms that otherwise may not have happened.” Kirkwood adds that the drama is often debated later in classrooms and online, where a large proportion of the digitally connected audience can be found discussing the issues during and after the episode. Those issues include male rape, abuse and various health problems. “The way in which we tell issue-stories is tried and tested. It is always with a charity partner. Sometimes stories will come from publications like the NSPCC’s annual report, which reported self-harm as the biggest emerging threat to teenagers’ health. Or it will come from the writing team but then be meticulously investigated by our research team to ensure that, while not losing drama, the important detail is factually correct.” The next step is to host an interaction with a charity or advising body. As the story progresses from scripts to screen, relevant parts of dialogue are sent to be signed off and amended under the advice of the charity, and checked for compliance before broadcast. “Help and support will then be strategically planned to run around the episodes, ranging from a



Twitter Q&A with the charity to a Facebook Live, where we were able to offer direct advice and support to viewers and crucially interact with them, which is key to the reach of social posts.” He continues, “In short, Hollyoaks provides its family audience with 23 minutes of escapism, mixed up with side issues which affect that audience and their friends and family.” The ‘tag line’ for the series is ‘Life, but brighter.’ And that reflects the production values – telling stories, but in an environment that looks bright. It means that the set is lit 150 per cent brighter than many other programmes. And there is a great deal of movement, helped by the fact that the episodes are shot single camera. Moving away from multi-camera production means there is freedom for a great deal of walk and talk. That method of production also means some unusual camera angles can be utilised – for example, moving in to a keyhole and then emerging from the other side.

PICTURED ABOVE: Bryan Kirkwood

YOUNG APPEAL Kirkwood says that his writers are well tuned to the thinking of the younger generation – and that enables scripts to be produced which meet the expectation of that target audience. One production technique that appeals to the younger audience is the use of music. At the start of each programme, background music is used to set the scene for the episode. And then throughout the programme music tracks are used to add to the emotion. “I know our audience is used to listening to music and it certainly helps with our storytelling and it has become part of the British way of thinking,” states Kirkwood. Series producer, Hannah Sowden, adds, “Music certainly


adds to the appeal for the younger audience. We believe that audience would not enjoy the opening sequence if the music was missing. And we need to add that the choice of costumes is also important for that age group.” Another production innovation is the overlaying, as a graphic, of text messages being sent from one character to another – thus avoiding the traditional over-the-shoulder shot of a difficult-to-read phone screen. The same technique is used when a character is reading a web page or other material from the internet. So, what does Sowden see as the challenges in producing Hollyoaks? “Our approach has to be fresh and big, and at

the same time appeal to a broad audience. We need to find ways of telling stories in a unique way. We are very hard on ourselves when it comes to the strategies of story planning – finding new ways of covering issues. Our production values are incredibly high.”
SUPERB SCHEDULING
Head of production, Colette Chard, says her challenge is shooting five episodes a week based on a 19-day production module. “When you consider the ongoing availability of some 50 cast members, and the fact we don’t have any studios in the conventional sense –

everything is on location at our facility – and the same set may be needed for separate episodes on the same day, scheduling is a major consideration.” Indeed, the scheduling office includes a massive board that occupies the whole of one lengthy corridor. The board is covered with details about who is available when, which sets are being used and a host of other details to ensure the smooth running of the complex operation. As part of that forward thinking, the planners install a Christmas tree in their office in July to remind them about the time frame for which they are producing schedules – for, as Chard explains, that story cycle starts

PICTURED ABOVE: Sienna (Anna Passey) enjoys the sunshine in this exterior shoot


PICTURED ABOVE: On track for drama

about six months from transmission. “Once the stories have been approved, we can commission the writing of scripts.” Directors, of whom Hollyoaks uses a freelance pool of between 30 and 50, are given approved scripts three weeks before shooting. “We use the same director for a whole week’s episodes so there is good continuity. They have three weeks of prep, three weeks to shoot the five programmes and a further two weeks for post production,” says Chard.
EDITING THE SHOWS
Alistair McMath, technology manager for post production, picks up the point about editing. “On site, we operate 12 edit suites, 10 of which support Hollyoaks. We have relatively recently moved to Media Composer, which provides us with future-proofing for 4K and HDR. We have 600 terabytes of storage on Nexis E4 and 60 terabytes on E5. We have also installed a remote backup system of 1.2 terabytes in our London office.” The edit team comprises three staff editors, two of whom were trained in-house as part of a scheme run by Lime Pictures, and several freelancers. “Many of those freelance editors are well experienced with drama through working


on other soaps and programmes such as Doctors and Casualty,” says McMath. The directors oversee the edit on their shoots, although there is also an edit producer who checks for continuity between programmes and compliance. McMath goes on to say that Lime Pictures has also recently replaced its five dubbing suites utilising Pro Tools. Although there is a certain amount of colour correction carried out, there is no grading as such. “We do not have lighting directors as would be normal on a soap, but because we use single-camera filmic techniques, we employ a team of six directors of photography,” explains Chard. “We are able to avoid the flat lighting that is often associated with the production of soaps. The other shooting crew comprises six camera operators, plus 12 assistants and sound engineers. They are all staff-based.” According to McMath, Sony 2500 cameras are the preferred option, although evaluations are currently being carried out with a view to updating the equipment in the medium term. “The 2500 fits into our workflow very well – and we need a robust camera because of our demanding shooting schedule.”


CHANGING SETS Head of design, Julian Perkins, says the schedule calls for a sizeable team to ensure everything is in the right place at the right time. “Essentially, we are producing drama five days a week. Fortunately, we have everything we need on site – including our own CGI facility – and that makes it easier to accommodate changing demands very quickly.” He goes on, “We may have to change a set from one shooting block to another several times a day. Often people will come to work to find the corridor outside their office is a different colour from the previous day as that location is needed for a particular scene. But that’s the benefit of having everything on site and available. It means we can be extremely reactive.” Hollyoaks is shot in what used to be an art college, with many of the original buildings utilised – both as interiors and exteriors. “Audience research has revealed that Hollyoaks is the only soap that is growing a youth audience - up 12 per cent,” reveals Lucy Connolly of the Lime Pictures press office. “In addition, there is also a good audience for the 35 to 55 age group. One fascinating statistic shows that a growing group of viewers are mothers on maternity leave

with their second child.” Connolly explains that the series is shown on two of Channel 4’s outlets – the main C4 channel and E4. The latter is the main platform for the younger audience, who are also interacting via social media at the same time. However, the two platforms do not mean two different cuts – the episodes remain the same. She continues, “We know from research that 40 per cent of our viewers watch the Channel 4 broadcast with friends and family – and they want to discuss among themselves the vital issues that are covered. And, as already mentioned, we provide contact details for those who want to seek help. It’s all part of acting responsibly towards our growing audience.” So, how does Kirkwood sum up his role as executive producer? “We know that audiences here in the UK love their soaps – the viewing figures bear testimony to that fact. And that love affair continues with Hollyoaks – but it is different. We have provided a mixture of dramatic storylines, sensitive issues and, of course, romance. And that combination – and our emotive way of shooting each episode – has meant that the series continues to appeal to our audience.” n

“We do not have lighting directors as would be normal on a soap.” COLETTE CHARD





Philip Stevens presents an overview of recent teleprompter developments

Autocue has almost become a generic term for prompters, in much the same way that Hoover can refer to any make of vacuum cleaner. And perhaps that is understandable when we realise that Autocue, formed in 1955, was the first to offer prompting and presentation systems. But since then the company has developed a broad teleprompter range, with hardware and software products at every price point. So, with almost 65 years of history, what were the landmark moments in its development? “There have been so many over the years, but I think most probably the development of the network capabilities of the QMaster hardware to enable the prompt engine to be accessed anywhere when that kind of technology was in its infancy,” reveals Robin Brown, product manager, prompting, Vitec Production Solutions. He goes on to say that the most popular products available today are iPad prompters. “These have appeal for everyone from a single videographer to a broadcaster looking for a lightweight and portable remote solution. Aside from iPads, 17-inch monitors have been the standard for many years. We’ve seen that across all of our ranges, from our Starter Series that’s used by customers looking for a cost-effective solution, our standard bright Professional Series, and our high-bright Master Series with HD-SDI input that’s used by broadcasters worldwide. In the broadcast space there is a growing trend towards larger monitors so maybe that will change.” NEW APPROACH The Vitec Group’s other prompting brand, Autoscript, was founded in 1984. Autoscript has always focused on providing new systems that approach prompting in a different way and with more features and options to cover a wider range of productions. “We began by asking producers, camera operators, presenters and prompter operators what they really needed and developed the products based on their responses,” explains Brown. “Nearly 10 years ago we launched the unique E.P.I.C. system, which combines a high-bright prompting screen with an integrated talent monitor. This reduces the weight of the system, removes obstructive brackets from under the monitors and is also really power efficient.”



He says that an even bigger development came in 2017 when Autoscript launched the Intelligent Prompting range of products that enabled a complete end-to-end IP workflow for the first time. “The huge majority of our customers were still focused on video and we kept our compatibility with composite and HD-SDI video workflows, but we could see IP as the future trend with huge potential.” MAKING NEWS Brown continues, “Autoscript is renowned for newsroom prompting software and WinPlus-IP, our refresh of the industry-standard prompting application, maintains that tradition. In terms of hardware, E.P.I.C. has become the EPIC-IP with composite, HD-SDI and IP inputs, and remains the best-selling teleprompter design for us.” WinPlus-IP News offers compatibility with all leading NRCS including Avid iNEWS, AP ENPS, Dalet, Octopus, Annova Open Media, Ross Inception, Rundown Creator, Tinkerlist and others. “WinPlus-IP News enables instant script updates from newsroom systems. The connectivity status of each newsroom is viewable at a glance and can be given a user-defined name for easy reference. WinPlus-IP’s simple connection with playout systems streamlines

the newsroom operation and enables us to be part of an automated workflow.” Autoscript was awarded a technical Emmy Award for its work with the MOS protocol. BOUTIQUE Portaprompt, founded in 1976, is located in High Wycombe, UK. “We are a family-owned boutique prompting company - this is all we do – which has been at the forefront of global teleprompting since our launch,” states Jon Hilton, sales and marketing at the company. When setting up the business, the owners decided they would develop their own prompting software and hardware. In fact, the company was awarded an Emmy in 2008 for its “pioneering developments in electronic prompting.” Hilton explains how what Portaprompt offers differs from the existing providers. “The first thing we wanted to understand is ‘why do you need prompting?’ Sure, the basic is remembering lines, but there is much more to this and one could argue it is an essential production tool today delivering at least two fundamental production values. The first of those values revolves around saving money and this is achieved by reducing the time required to get that perfect take. The second benefit is that your presenter will be word perfect - and legally compliant -

“In the broadcast space there is a growing trend towards larger monitors.” ROBIN BROWN


PRODUCTION AND POST

with a professionalism that we all expect to see today by maintaining eye contact with the audience.” According to Hilton, the biggest change over the last 20 years was the arrival of flat panel screens to replace CRTs (Cathode Ray Tubes). “This meant we could increase the size of the prompt monitors on camera without compromising the payload on the tripod or pedestal. We now regularly supply 24-inch screens into TV and broadcast studios and now have a massive 32-inch on-camera prompt monitor which gives long reading distances of up to 20 metres.” NEWS ANGLE “Beyond that, prompting software continues to develop with integrations into newsroom systems as a standard feature. We are a partner of the Associated Press ENPS News Production System and actively support the ENPS MOS user group meetings. Our WinDigi prompting software has been specifically written and designed to interface with ENPS using the Media Object Server (MOS) protocol as well as the Avid iNEWS ftp, Dalet and Octopus systems.” He continues, “The next stage of development, we believe, will be the benefits of using IT/IP networks for both hardware and software integration and the operational flexibility that provides, including remote production across multiple locations.” Portaprompt has been looking at the role that IP networks can play in delivering advanced prompting in broadcast TV since IBC 2015. The company uses ‘off the shelf’ CAT 6 routing technology which makes for a very cost-effective and flexible system. “Our IP prompting solution is based on the replacement of our standard VGA/composite video or SDI card in monitor with a new IP card and the upgrade of our WinDigi prompting software. This means existing Premium and Quasar monitors and our WinDigi software have clear upgrade paths which will extend their life cycles.” Hilton goes on to say that the arrival of cost-effective tablet and phone prompting devices has opened up business opportunities where traditional prompting may not have been financially justifiable before. “These devices are not replacing the existing products, but with prices starting at around £75 a day to rent and under £500 to buy, these tools are now able to provide affordable prompting for lower budget productions.”

PICTURED ABOVE: The Interrotron uses a traditional prompter where the presenter sees the interviewer’s or director’s face


MAKING IT NATURAL Hilton is also keen to talk about the Interrotron solution. “An Interrotron is the use of a traditional prompter, but instead of the words in the script the presenter sees the interviewer’s or director’s face. This means the director can coach a natural conversation from the presenter - who now has a human face to react with. You also have the ‘through the lens’ credibility and, as with all prompting styles, you are minimising the number of takes to get the perfect one, saving production cost!” There are two types of Interrotron. First, there is a passive system which uses reflections by way of the customised prompting hood and reflector. Secondly, an active system can be employed which takes a video signal from a second camera to feed the prompter display. One of Portaprompt’s first Interrotron jobs was to use the monitors purely as a bright light to attract a baby’s attention, and they have also been used with pets, with the owner’s face on screen to get the animal’s attention down the lens. Hilton concludes, “Among our plans is the development of remote production and the practical use of voice recognition software control systems in TV and broadcast prompting.” ACCORDING TO SCRIPT When CueScript was set up in 2014, its stated aim was to provide customers with systems for ‘today’s productions that would integrate with today’s technology’. In addition, it devised a plan for how the whole dynamic of IT infrastructures would shape future integration. “Connectivity via IP was set in stone in our development path from the outset,” says company director, Brian Larter. “CueScript quickly established itself as the technology prompting provider that met the needs of broadcasters who were on this path of replacing


“The most important consideration when it comes to prompting is the presenter.” BRIAN LARTER

internal infrastructures, but required a system that would provide that key term that is used so commonly – ROI (Return on Investment). A system that would work in every production or technical scenario, not just today, but in five years.” He states that the major shift in all productions around the world - whether news, sport, light entertainment, corporate or education - has been automation. “This was the biggest challenge for us when we set out. Our hardware systems or prompter displays had to work straight out of the box and with all the variations of studio cameras, manual supports, robotic systems and other studio equipment. We didn’t have the old legacy systems to support, we could look forward from day one.” SIZE IS IMPORTANT Larter reveals that, with the introduction of ever more compact cameras, the most commonly asked question is, ‘What’s the smallest prompter you have?’ “In truth, the most important consideration when it comes to prompting is the presenter. You then consider the tripod head and support or robotic, the camera being used and lens configuration. All of these combinations may be getting smaller, but presenters’ eyesight doesn’t get any better, and distances from the camera to the presenter are only increasing as we see more virtual sets in use or bigger studio sets.” In 2016 CueScript launched a system that would allow the use of the many variations of PTZ cameras on the market. No special tripod or mounting needs to be used – simply a lighting stand or a lightweight tripod. One of the key elements of the CueScript solution is the ability for customers to start with a basic system and upgrade with anything from wireless scroll controls to time displays, talent monitors, and so on. The systems can be expanded to employ greater functionality in the software with API integration, such as on-screen timers and messaging systems. Adding more controllers within a customer’s site is simple plug and play, using Ethernet or USB connectivity.
When it comes to IP, Larter says that flexibility and integration are the major advances. “We have developed the world’s first complete IP2110 end-to-end prompting solution – including the CSMV2, the industry’s first patent-pending SMPTE 2110-enabled prompting

monitor. Installations no longer require SDI-SMPTE gateway convertors, since our system offers seamless interconnection with 10 Gigabit fibre IP switches. For customers that want IP flexibility but do not have a 10Gb IP-based video switch, the CSMV2 also includes CueScript’s innovative IP connectivity, now referred to as CueTALK, which provides seamless operation through standard CAT5/6 IT infrastructure.” Larter goes on to talk about the integration with newsroom systems. “Our prompting application CueiT News is compliant with MOS versions 2, 3 and 4. We collaborate and integrate with all the major newsroom system vendors, and we have test environments in-house for development and proficiency in set-up and troubleshooting.” MOVING FORWARD So, what does Larter see as the next major step forward? “The prompter displays can only improve and get lighter and brighter. With the development of OLED screens and foils, weight and size reduction will allow the use of lighter camera supports and robotics. We are developing systems through our API to allow content to be viewed on location or in studio, allowing presenters to view stories or items on handheld devices.” Larter concludes, “A great many features and functions have become integral parts of our products as a result of a customer request – one such was developing an iOS version of our CueiT software to further widen the accessibility of the system to Mac-based organisations. We are very proactive at working with our customers to help them make their jobs easier.” n




Ronan Poullaouec, CTO at AVIWEST, explains how bonded cellular and the expected impact of 5G can help improve live production delivery


One of the most critical challenges for broadcasters is getting a good and reliable network connection at an affordable cost. Bandwidth costs can be extremely prohibitive, especially for live HD and UHD video. KEY ADVANTAGES OF BONDED CELLULAR Typically, broadcasters have used satellite, ENG or mobile production trucks to deliver live sports and events. But those traditional production costs continue


to be expensive, so many are turning to bonded cellular technology as an efficient and cost-effective alternative for live video production. One of the biggest benefits of bonded cellular technology is receiving bandwidth almost everywhere with good efficiency. By bonding together multiple unmanaged cellular networks, broadcasters can not only achieve the desired bandwidth, they can save a lot of money and produce content in a very quick and easy to set-up way. For example, consider how much bandwidth one cellular


network might provide. Aggregation of up to eight modems can add up to several Mbps, which is perfect for delivering live video with sub-second latency. Encoding plays a huge role in reducing the bitrate needed for high-quality live video delivery. With the latest H.265/HEVC codec, broadcasters can improve bandwidth savings by up to 50 per cent compared with H.264/AVC, delivering the same video quality. For live applications, broadcasters should use a bonded cellular system with advanced capabilities, including adaptive bitrate and low latency encoding. A second encoder can be used to record high-quality, constant bit-rate video for record, store, and forward purposes. Keeping in mind that the quality of live video has to be perfect, reliability can be managed using forward error correction (FEC) or Automatic Repeat reQuest (ARQ). Another benefit of bonded cellular systems is their lightweight, portable and mobile design. Being in the field covering a live event, broadcasters need equipment that can be carried and set up quickly. Bonded cellular systems can be instantly deployed in the field compared with the hours it takes to set up traditional satellite links. All it takes is a camera, tripod, and cellular backpack. After connecting the modem, broadcasters can manage live video transmissions from anywhere globally. If there are any issues in the field, master control can operate the link remotely, cutting down on the number of people needed in the field.
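The aggregation and codec arithmetic described above can be sketched in a few lines. This is a back-of-the-envelope illustration only: the per-modem uplink, bonding efficiency and AVC reference bitrate are assumed figures for the example, not AVIWEST specifications; the 50 per cent HEVC saving is the article's own figure.

```python
# Illustrative maths for bonded cellular delivery. All numeric inputs
# below are assumptions for the sake of example, not vendor specs.

def aggregate_bandwidth_mbps(per_modem_mbps, modem_count, bonding_efficiency=0.9):
    """Usable uplink after bonding several cellular modems.

    bonding_efficiency is an assumed allowance for the overhead of
    splitting and reassembling the stream across links."""
    return per_modem_mbps * modem_count * bonding_efficiency

def required_bitrate_mbps(avc_bitrate_mbps, use_hevc=True, hevc_saving=0.5):
    """Bitrate needed for a given quality target.

    HEVC is quoted as saving up to ~50% over H.264/AVC at the same
    visual quality (the article's figure)."""
    return avc_bitrate_mbps * (1 - hevc_saving) if use_hevc else avc_bitrate_mbps

# Example: eight modems at a modest 3 Mbps uplink each, against a
# hypothetical HD contribution feed that would need ~20 Mbps in AVC.
available = aggregate_bandwidth_mbps(per_modem_mbps=3, modem_count=8)
needed = required_bitrate_mbps(avc_bitrate_mbps=20)

print(f"available {available:.1f} Mbps, needed {needed:.1f} Mbps, "
      f"feasible: {available >= needed}")
```

Even with conservative per-modem figures, the aggregate comfortably exceeds the HEVC-encoded bitrate, which is the core of the cost argument for bonding.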

THE IMPACT OF 5G ON BONDED CELLULAR Cellular coverage around the world is increasing, further building the case as to why bonded cellular is a smart choice for delivering high-quality live event coverage. With the global rollout of 5G networks currently underway, broadcasters will have a lot of benefits at their disposal for video delivery. For instance, the 5G frequency spectrum, which is much wider than 4G, will enable superior coverage and higher bandwidth. For live video production, having reliable and secure connectivity is key. 5G networks, with slicing and QoS management capabilities, offer an extremely effective live production solution, even where there is poor pre-existing infrastructure or where the cellular network is congested. Network slicing allows the use of 5G in overcrowded environments with a high quality of service. 5G trials have successfully been conducted with telcos and broadcasters to demonstrate the power of 5G and the benefits to the video quality and the latency that this brings. Given the impact that 5G is expected to make on the


broadcast world, it’s important to look for bonded cellular transmitters that support 5G technology. Together with the latest codec improvements, 5G will certainly be a game changer for the content creation and production market.

‘One of the biggest benefits of bonded cellular technology is receiving bandwidth almost everywhere with good efficiency.’ RONAN POULLAOUEC

WHERE TO USE IT There are several live sports applications where using bonded cellular equipment is ideal. For example, broadcasters can use bonded cellular rigs for live coverage of local sports, supplementing their satellite connection with cellular connectivity via external antennas on the roof of the truck. Taking advantage of the compact and lightweight design of bonded cellular, broadcasters can strap the equipment on the back of a vehicle, motorbike, helicopter or golf cart to capture different angles that might otherwise be hard to get. Bonded cellular systems are also perfect in scenarios where there’s a large crowd that limits the use of satellite trucks, instances where broadcasters want to report live from multiple geographically diverse areas, or where bandwidth is limited. Elections and natural disaster coverage are also events that would benefit from bonded cellular technology and its ability to provide the required bandwidth for the real-time results that viewers expect.

CONCLUSION The broadcast industry will benefit from the rollout of 5G networks. Wider frequency spectrum combined with advanced tools, such as network slicing, offers broadcasters and other video professionals a powerful and efficient live video solution. Using bonded cellular systems with the latest encoding technology (i.e., HEVC), widespread network support (i.e., the option to use wireless, Wi-Fi, Ethernet, and satellite), and 5G technology, broadcasters and other video professionals can address the growing consumer demand for high-quality live video coverage of sports, breaking news and other events in the most affordable and bandwidth-efficient way possible, from anywhere in the world. n




Philip Stevens takes a close look at a system that makes live directing less stressful.

PICTURED RIGHT: Per Zachariassen


Formulating and then executing the production of a music or talent show takes a great deal of preparation on the part of the programme director. Often there are videos to watch in order to capture the feel of the performance or act – and then camera angles are worked out and committed to paper scripts. And then, during transmission, the director is calling shots while the vision mixer/technical director keeps up with the script. It can be very stressful. “I have directed a great many shows such as X Factor and Got Talent in Denmark,” states Per Zachariassen. “And I came to realise that there was a better way of preparing and then directing the live show. That better way involved committing all the camera shots and cutting sequences to a computer-based timeline that could then activate

the vision mixing panel – and other equipment such as a lighting console - during the programme.” The result is now called CuePilot, sold by a company of the same name of which Zachariassen is the CEO. “I basically created the computer program for myself. I love technology and I understand how to direct a music show – and because I couldn’t find anyone else to help me, I did all the work myself.” Zachariassen explains that CuePilot is a tool to take out the stress of complicated shows. “With big music or talent shows you have a large team and every minute of



PICTURED ABOVE: Adding shots to the timeline created in CuePilot

rehearsal time is expensive. By being able to do a great deal of the pre-preparation away from the studio and on his or her own, the director is able to consider all the options without the pressure of knowing that every minute is costing a great deal of money.” The result is a well-prepared programme long before the crew assembles in the studio. So, how does CuePilot work? First, the director assembles each act and shoots the performance on a single camera – even an iPhone will suffice. The file, including both sound and vision, is then loaded into the CuePilot program on a PC or laptop (both Windows and Mac versions are available). Elements can subsequently be added to the file to show the information contained on a traditional running order – camera angles, moves, lighting changes and so on. “The director will need to be very familiar with how to ‘block’ the show. But once the audio and the single shot video is in the timeline, and the director has determined where the cameras are to be placed on that set, he or she can start to plan moves and transitions. The CuePilot program will detect the beats and bars to make it easy to plan cuts in the appropriate places. While some directors use these beats and bars, others may prefer using seconds, when it comes to the duration of shots.” Each camera is listed in the computer program menu and the appropriate number is dragged on to the timeline


to match the director’s plans. Additional comments, such as shot size, moves and so on, can be added which will appear on the camera operator’s cue program known as CueApp (more of that later). “Of course, the director may use dissolves and wipes as well as cuts, and these can be added to the timeline simply by selecting the transition type and duration,” explains Zachariassen. Once a particular sequence is completed, the director can view the result with reference to the original rehearsal shot and make any modifications necessary. The data from the CuePilot files are stored in the Cloud, making it easy for other authorised users – for example, the lighting director and sound supervisor – to add their own instructions for the sequence in question. If necessary, paper scripts can be generated from CuePilot. Once all the preparation has been completed, the files are placed on the Nexus studio server in the production gallery. This is an all-in-one unit with a 20-inch touch screen monitor that allows the director to control the output. “The user interface is similar to the desktop version, but with additional features to manage timecode, control the vision switcher, and operate the built-in GPI triggers for pyros, lights, E-MEMs and special cameras,” says Zachariassen. “CuePilot will interface with all the high-end


production switchers using a Serial RS-422 connection. It will also read timecode from any device with LTC output - typically from EVS, Pro Tools and Cubase. Plus, there is a LAN network to synchronise data with CueApp in the studio, and with the CuePilot Cloud Server.” The interface also allows the director to skip certain shots if, for example, a camera is not ready. Of course, not every part of a music show involves a performance by an artist – there may be interviews and other ‘ad-lib’ sections. “Normally, the CuePilot program will be connected to an M/E row of the vision mixer. That means manual vision switching can still take place on the programme bus as usual. Once the music needs to resume, CuePilot is reactivated and the vision mixer switches to the M/E row,” explains Zachariassen. CueApp is the equivalent of the cue sheets which are traditionally clipped to the camera. With CueApp, each operator has an iPad located next to the viewfinder. The camera operator will attach an earpiece from the iPad and wear it under headphones. Taking data from CuePilot, the operator will hear a countdown to his or her next shot. The list will not only show that camera’s shot list, but also the overall running order. “This app is also a useful tool for other crew members,” emphasises Zachariassen. “During each act, all devices running CueApp are synced in real time to the CuePilot

Studio Server. So, everyone on the team can know what is happening next and in the upcoming moments. More than that, each member of the crew can view and rehearse every act, as often as they like - no matter where they are, even from home. And they can add their own personal notes for each shot, and view videos from rehearsals, synced to the playback in CueApp, and this way get a complete sense of the details and timing of each shot or cue.” With all this technology available for directors and their teams, how easy is it to learn the techniques? “It is very simple,” says Zachariassen. “For most people, one to two hours of Skype instruction is all that is necessary.” He continues, “CuePilot was used for the first time on the Eurovision Song Contest in 2014. I directed that show, but I had used it for many years before that occasion. There is a lot of work for a three-hour live programme, but with all the preparation carried out in advance, I could sit back and enjoy the show. The bulk of my work had been completed.” CuePilot is available in three versions depending on the complexity of the show and the extent to which control is required. Zachariassen says he is working on different applications for CuePilot, and one involves classical music concerts. n
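The beats-and-bars cut planning described above boils down to simple tempo arithmetic: at a steady tempo, each bar has a known duration, so candidate cut points fall at bar boundaries. The sketch below is a hypothetical illustration of that idea, not CuePilot's actual algorithm; the camera names and the two-bars-per-shot pattern are invented for the example.

```python
# Hypothetical sketch: mapping musical bars to cut points on a timeline,
# in the spirit of CuePilot's beats-and-bars planning (not its real code).

def bar_start_seconds(bar_index, bpm, beats_per_bar=4):
    """Start time of a zero-indexed bar, assuming a steady tempo."""
    seconds_per_beat = 60.0 / bpm
    return bar_index * beats_per_bar * seconds_per_beat

def plan_cuts(camera_sequence, bpm, bars_per_shot=2, beats_per_bar=4):
    """Assign each camera a cut point every `bars_per_shot` bars."""
    cuts = []
    for i, camera in enumerate(camera_sequence):
        t = bar_start_seconds(i * bars_per_shot, bpm, beats_per_bar)
        cuts.append((round(t, 2), camera))
    return cuts

# A 120 bpm song in 4/4, cutting between three cameras every two bars:
for t, cam in plan_cuts(["CAM 1", "CAM 3", "CAM 2"], bpm=120):
    print(f"{t:6.2f}s -> {cam}")
```

In practice a director would nudge these machine-suggested boundaries by eye (or work in seconds instead, as Zachariassen notes some prefer), but the bar grid is what makes musically sensible cuts easy to place.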

PICTURED ABOVE: Camera direction is easily added to the timeline



PROVIDING A FULLY MANAGED SERVICE Jellyfish Pictures expands its relationship with ERA to include a fully managed service


File-based working has brought significant benefits to post production, notably increased efficiency and the opportunity for wider networking. It has also seen a proliferation in the amount of material that has to be stored and a greater risk of footage being accessed - or even stolen - by unauthorised people. Increasingly, facilities are handing over management of their entire infrastructure to third party providers as a way of dealing with these problems. Among the visual effects (VFX) houses to take this route is Jellyfish Pictures, which has moved to a fully managed service provided by broadcast IT specialist ERA. Jellyfish opened in 2001 and now has five studios split between London and Sheffield. It is a leading


name in animation and VFX for both feature films and TV series, with credits including Rogue One: A Star Wars Story and Dennis and Gnasher: Unleashed. WHAT’S INCLUDED: • High performance storage system based on PixStor • DR storage services • ERA designed animation workstations operating on PCoIP for remote access • Managed security services using Cisco and Fortinet • Network infrastructure management and control, including Mellanox Core and Dell Edge gateways


MOVING TO A 24/7 CENTRALISED OPERATION ERA already provides disaster recovery (DR) services for Jellyfish, but changing needs, including the addition of a new facility in the north of England, have led to a broadening of the business relationship. Jeremy Smith, chief technology officer at Jellyfish, explains there were a number of reasons behind the decision to expand the arrangement with ERA into a fully managed service: “Space and power is always at a premium in London and we need peace of mind to be able to expand capacity easily when the demand is there without worrying about space, power and cooling issues.” A key driver in moving to a flexible, centralised operation that runs 24 hours a day, seven days a week was being able to bring together and better control a number of remote facilities. Jellyfish has four sites in London - Margaret Street in Fitzrovia, two at the Oval and one in Brixton to the south of the capital - and the recently opened virtual studio in the Yorkshire city of Sheffield. The company also has animators working in various locations across Europe who feed their contributions back to the UK offices. ERA runs the managed service from its data centre in the west of London, linking to Jellyfish’s London studios over individual 10GB fibre connections - with 1GB internet back-ups - and to Sheffield using a 1GB internet link. “We can also bring in additional workstations and servers on the ERA infrastructure when we need them without worrying about connectivity or bandwidth,” says Smith. “There’s also the option of public Cloud services because ERA can offer on-demand

capacity from the major Cloud providers." The managed service has full TPN (Trusted Partner Network) approval. The TPN scheme was jointly devised by the Motion Picture Association of America (MPAA) and the Content Delivery and Security Association (CDSA), with the aim of raising awareness of the need for security and preparedness in the media and broadcast sectors. As part of this, the TPN is designed to promote the capabilities of modern assurance technologies and set a benchmark for all manufacturers and service providers. The ERA/Jellyfish installation includes: a high-performance storage system based on PixStor, which runs a containerised workflow; DR storage services; high-end animation workstations designed by ERA, which work on PCoIP (PC over IP) for remote access; managed security services for firewall design and management, using Cisco and Fortinet products; and network infrastructure management and control, including Mellanox Core switching with Dell Edge gateways. The new service will be managed by ERA from its data centre, while Jellyfish, which pays a monthly subscription for the service, runs all the applications. Smith says that as well as the flexibility of a centralised workflow, a major consideration in opting for a managed service was the increased security: "As part of this ERA has included TPN approval for the data centre services and also provides completely managed firewalls and networking to meet the TPN requirements expected by the likes of Netflix and Disney." n



THE IP TRANSITION: DOS AND DON’TS By Rafael Fonseca, VP product management, Artel Video Systems


The shift to IP comes with challenges both technical and operational. Once content and processing truly are situated in a multilayer IP framework, the opportunities afforded to the media company clearly outweigh those challenges. Still, every organisation making this move can save itself valuable time and money by navigating the IP transition carefully and correctly. There are several dos and don'ts that can simplify the transition, ease the growing pains associated with adopting new technologies and workflows, and help media companies leverage their IP investment more effectively.

DO DEVELOP A MIGRATION PLAN

Development of a migration plan is critical if a broadcaster or other media company is to transition to IP without compromising its ability to continue offering the services essential to the business and its customers. In creating such a plan, it is essential to identify which parts of the network can be transitioned first and which parts will be capable of supporting the business throughout the transition. One key to maintaining service continuity is minimising technology "islands" that might hinder service availability and assurance. A well-thought-out architecture that considers the requirements of the broadcast services is fundamental to success. The migration plan should account not only for technical resources, but also for human resources. As with any major update across a broadcast operation, the shift to IP will have a significant impact on personnel.

DON'T OVERLOOK TRAINING PERSONNEL

The shift to IP can be a cultural shift for an organisation, and one that requires change, including new skill sets and knowledge. A fundamental challenge that comes with the IP transition, like the industry's earlier analogue-to-digital and SD-to-HD transitions, is the need to train existing personnel on the relevant aspects of new IP technology.
In similar past transitions, organisations have acquired expertise by bringing in personnel with IP experience from other industries. In many cases, these employees had sound knowledge of IP but lacked relevant broadcast industry experience. Consequently, it can be necessary to train some staff in working with IP-based systems and to train others in the systems and workflows that drive organisations in the media and entertainment market.


DO TAKE ADVANTAGE OF DEVICE INTELLIGENCE

In the IP realm, every element - cameras, receivers, displays, software, and more - is intelligent and addressable. Each has a name that can be used to identify and refer to it, classify its capabilities, and allow it to be authenticated so that it can use services in the network. The benefits and new capabilities include: workflow automation, as end devices can automatically register, advertise and "agree" on a common set of functionalities; operational cost reduction, as manual intervention is minimised and reserved for exceptions; and the ability to tailor the audience experience, as media flows are addressable and interchangeable.
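To make the register/advertise/"agree" pattern concrete, here is a minimal sketch. The `DeviceRegistry` class and its method names are illustrative inventions, not part of any real control protocol (production systems would typically use something like AMWA's NMOS specifications for discovery and registration).

```python
class DeviceRegistry:
    """Central registry that addressable devices announce themselves to.
    Purely illustrative - not a real broadcast control API."""

    def __init__(self):
        self._devices = {}

    def register(self, name, capabilities):
        # Each device has a unique name and advertises what it can do.
        self._devices[name] = set(capabilities)

    def common_capabilities(self, *names):
        # "Agree" on the shared set of functionalities between devices,
        # so an orchestrator can pair them without manual intervention.
        return set.intersection(*(self._devices[n] for n in names))


registry = DeviceRegistry()
registry.register("camera-1", {"ST2110-20", "1080p50", "PTP"})
registry.register("monitor-7", {"ST2110-20", "1080p50", "2160p50"})

# Pair the two devices automatically on the formats both support.
shared = registry.common_capabilities("camera-1", "monitor-7")
```

The intersection is what enables the workflow automation described above: once devices advertise capabilities in a machine-readable way, pairing them needs no operator.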

While device intelligence offers a host of new opportunities for optimising workflow efficiency, it also necessitates robust management layers for effective control, as well as policies that govern the movement of content and data.

DON'T NEGLECT MANAGEMENT AND MONITORING TOOLS

Because the transition to IP introduces addressable entities across the network, the organisation's monitoring, control, and reporting solutions must allow for remote, centralised operation. With these systems in place, along with robust management layers, it is possible to monitor systems and data across the network, and to ensure that exchanges between systems take place as they should. This capability is critical in the more complex world of IP traffic. Monitoring in the SDI realm is relatively straightforward, largely because signals are confined to cables (and they stay there). When signals enter the IP network, however, they hit the switch along with numerous other flows. While each signal is routed independently, issues across the network can cause different flows to get mixed. The source of the problem can be hard to pinpoint without monitoring tools designed specifically for troubleshooting across IP networks. Consider, for example, the timing flows that are carried over the network along with media flows. Because they use PTP for synchronisation, these timing flows require tools capable of tapping into the flow and extracting the information that confirms proper synchronisation across the network. Effective monitoring of IP traffic is essential to maintaining the latency, quality of service, and high availability that contribute to positive customer experiences.

DO LEVERAGE CONTENT AND METADATA CREATIVELY

IP makes it easier for broadcasters to offer more data, more extensive analysis and richer viewing options.
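As a rough illustration of what a PTP synchronisation check computes: the standard two-step exchange yields four timestamps (t1, master sends Sync; t2, slave receives it; t3, slave sends Delay_Req; t4, master receives it), from which offset and path delay follow. This is a simplified sketch of the arithmetic only, ignoring path asymmetry and real packet parsing.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Estimate slave clock offset and one-way path delay (in seconds)
    from the four timestamps of a PTP Sync/Delay_Req exchange.
    Assumes a symmetric network path, as basic PTP does."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay


# A slave clock running 10 ms fast, over a symmetric 5 ms path:
offset, delay = ptp_offset_and_delay(0.000, 0.015, 0.020, 0.015)
```

A monitoring tool flags the flow when the computed offset exceeds the tolerance needed to keep media flows aligned across the plant.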
For a high-profile sports broadcast shown around the world, IP makes it easy to replace a French- or English-language audio flow with a Greek, Spanish, or Japanese flow. Metadata from the broadcast can be leveraged to support in-game analysis, as well as archiving and searches, within minutes of the event. Name the sport, and data is being used to help convey the strategy behind player choices and the skills required of players, whether it's a golfer's club-head speed or launch angle, or the revolutions on a football as it's kicked. Consumers' thirst for that kind of information will never be quenched. IP also supports greater viewer autonomy in selecting the feeds or angles they watch. Right now the technical director calls the shots, but in the long term, broadcasters will take advantage of the IP framework to give viewers the ability to select shots of interest. Looking even further ahead, the adoption of artificial intelligence will open up amazing capabilities in terms of archiving and recalling specific content.

DON'T FORGET ABOUT SECURITY

While there are many benefits to working with addressable entities across an IP network, this shift also opens up potential vulnerabilities much like those associated with the public internet. To prevent security breaches or threats such as distributed denial of service (DDoS) attacks, the organisation must create "walled gardens" for the network and implement other safeguards to protect it. Some of the safeguards start with asking vendors whether they have done any vulnerability scanning on their devices and whether they have acted to mitigate any significant findings (some of these scans are very extensive, running tens of thousands of tests). Examples of such vulnerabilities include: SSH logins, and logins in general, with default credentials; obsolete or end-of-life operating systems; and open ports that are not being used, among many others.

DO LOOK AHEAD TO FURTHER OPPORTUNITY

When a broadcaster or other media organisation makes the transition to IP, it is part of a larger wave that will transform the industry. The capabilities and opportunities enabled by IP will change the way content is delivered and consumed. The IP framework will support richer programming, greater interactivity and personalisation, and other advances that enhance viewer engagement and satisfaction. Dealing with immediate challenges may be a sometimes-painful necessity, but ultimately the endless possibilities that come with IP will make the transition well worth the effort. n





Captions and subtitles are required by all major broadcasters and legally regulated around the world for two major reasons. They help make content accessible to millions of viewers with hearing impairments, and they enable viewers across the world to watch foreign-language content in their own local language, as well as learn new languages.

CHALLENGES WITH SUBTITLE AND CAPTION QUALITY

Adding words to video can be complex. Captions and subtitles need to be synchronised, or time-stamped, so that each phrase is displayed at the right moment. While this process used to be extremely labour-intensive and time-consuming, automated tools are now available that simplify the task. Even with automation, human intervention is still required to correct misrepresented spoken content caused by mispronunciations, accents, dialects, or background noise. Captioning live telecasts, such as news, sports events, and other live shows, is especially challenging, as captioning needs to be done on the fly. Let's look at some of the complexities involved in adding captions and subtitles to video content. For starters, a variety of methods are used during the encoding process. In the US, DTV 608/708 is the popular format, compared with Teletext in Europe. These can be encoded as a separate track within the media, or as a side-car file during editing and exchange. Alternatively, they can be encoded within video frames for TV transmission. Another common issue is lost or distorted captions during editing. At the transcoding phase, captions can be lost if the step of extracting and re-encoding captions in the new format is skipped.
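As a small illustration of the time-stamping involved: a side-car subtitle file such as SRT stores each cue's display window as "HH:MM:SS,mmm" timestamps, which sync tools convert to seconds before comparing against the audio. A minimal sketch (basic SRT syntax only; the function name is my own):

```python
def srt_to_seconds(stamp: str) -> float:
    """Convert an SRT-style timestamp like '01:02:03,450' to seconds."""
    hms, millis = stamp.split(",")
    hours, minutes, seconds = (int(part) for part in hms.split(":"))
    return hours * 3600 + minutes * 60 + seconds + int(millis) / 1000


# The cue '01:02:03,450' starts 3723.45 seconds into the programme.
start = srt_to_seconds("01:02:03,450")
```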


Moreover, the metadata used to display captions properly onscreen can sometimes be lost, rendering the captions in the edited portion unusable. A change in frame rate is another complexity that can make it difficult to maintain high-quality captions. To synchronise the video and captions, broadcasters need to encode the captions at the same frame rate as the video. If the video frame rate changes, the captions can fall out of sync, an issue that will need to be corrected.
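The correction amounts to re-expressing each caption's frame position at the new rate so that its wall-clock instant is preserved. A minimal sketch (the function name is my own, and rounding policy varies between tools):

```python
def retime_frame(frame: int, src_fps: float, dst_fps: float) -> int:
    """Map a caption's frame number from one frame rate to another,
    keeping its wall-clock time (frame / fps) constant."""
    return round(frame / src_fps * dst_fps)


# A caption cued at frame 250 of 25 fps video (the 10-second mark)
# belongs at frame 240 once the video is converted to 24 fps.
new_frame = retime_frame(250, 25.0, 24.0)
```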


Different encoding points may introduce shift or drift in captions, which may become noticeable only after a few minutes of video. Shift usually happens when captions are inserted before or after the intended audio. Sometimes this misalignment is found throughout the file (i.e., there is an overall shift, early or late, between audio and captions). With a synchronisation drift issue, captions start out with the correct timing but slowly become earlier and earlier (or later and later) over the duration of the video. This can happen when the video and caption frame rates are different.

KEY QUALITY CHECKS FOR CAPTIONS AND SUBTITLES

It's increasingly important for broadcasters to produce high-quality captions and subtitles in all of the required languages in a short amount of time. To do that, broadcasters need efficient, cost-effective and automated solutions. QC solutions must provide comprehensive quality checks throughout the entire workflow, from transcription to transmission. It's especially imperative for broadcasters to look for an automated QC system that can provide the following checks:

Format detection: Captions and subtitles come in various forms. Since specific hardware and software decoders are required to display specific formats, broadcasters need to be able to ensure that captions and subtitles accompany media in the intended format, or they may not be displayed.

Sync with audio and video: After caption or subtitle text is extracted, it should be checked for sync with audio and video. The positioning of captions over video should also be checked to ensure that they are not covering or hiding a critical part of the picture. Caption alignment can only be verified by comparison with the audio. A misalignment of more than a second or two may make captions unreadable, cause viewer confusion, and result in the final captions of a show segment or commercial running into the next programme.
Accuracy and completeness: Captions need to match the spoken dialogue verbatim. Often, simplifying the text or using alternate wording does not convey the same meaning and can be frustrating to lip readers. Broadcasters also need to ensure that captions do not drop out intermittently within the video.

Language check: An asset might have individual caption files for different languages. It's important to check that content is being distributed in the right languages, with the correct captions at the right times.

Presence of undesirable text: Checking captions for profanity is an important step from a regulatory standpoint.

Spell check: The caption text needs to be spell-checked to ensure there are no misspellings.

Metadata conformance: Adhering to metadata and conformance requirements related to frame rate, positioning, character set and time codes is necessary to ensure proper decoding.
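An automated sync check can separate the shift and drift cases described earlier by fitting a line to the caption-versus-audio timing offsets: a non-zero intercept indicates a constant shift, while a non-zero slope indicates drift accumulating over the programme. A sketch with illustrative thresholds (the function and its tolerances are my own, not from any particular QC product):

```python
def diagnose_sync(audio_times, caption_times, shift_tol=0.5, drift_tol=1e-4):
    """Least-squares fit of offset = slope * t + intercept, where offset is
    caption time minus audio time at matched cue points (all in seconds).
    shift_tol (seconds) and drift_tol (seconds per second) are illustrative."""
    n = len(audio_times)
    offsets = [c - a for a, c in zip(audio_times, caption_times)]
    mean_t = sum(audio_times) / n
    mean_o = sum(offsets) / n
    var = sum((t - mean_t) ** 2 for t in audio_times)
    cov = sum((t - mean_t) * (o - mean_o)
              for t, o in zip(audio_times, offsets))
    slope = cov / var if var else 0.0
    intercept = mean_o - slope * mean_t
    issues = []
    if abs(slope) > drift_tol:
        issues.append("drift")    # offset grows or shrinks over time
    if abs(intercept) > shift_tol:
        issues.append("shift")    # constant early/late offset
    return issues


audio = [0.0, 600.0, 1200.0, 1800.0]
# Captions consistently two seconds late -> an overall shift:
shifted = diagnose_sync(audio, [t + 2.0 for t in audio])
# Captions timed at the wrong frame rate (25 vs 24 fps) -> drift:
drifting = diagnose_sync(audio, [t * 25 / 24 for t in audio])
```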

CONCLUSION

By deploying an automated QC system with extensive subtitle and closed-caption checks, broadcasters can ensure that content is being created, edited, distributed, and received in the right languages, with the correct text accompanying it. Checking media at every point in the content lifecycle helps operators verify that captions and subtitles are correct and in sync, in the proper format, and that the correct languages appear on the correct track. Automating this process greatly improves efficiency and reduces manual labour, allowing operators to focus on other important areas of their business. Through speech-to-text and character-recognition technologies, along with checks for undesirable and misspelled words, broadcasters can easily comply with industry regulations and deliver a superior viewing experience. n



THE FINAL WORD Foundry CEO Jody Madden is the latest industry leader to have the final word on what’s happening in the media technology space

How did you get started in the media tech industry?
I began my career in the industry in technology roles at Industrial Light & Magic. While I didn't study engineering at university, the combination of technology and creative talent at ILM was the draw that made me apply for my first job there. The common thread through all of my jobs has been connecting teams across businesses at the intersection of art and technology.

How has it changed since you started your career?
There has been a significant amount of consolidation throughout the industry, both in software organisations like Foundry and across our customer base in VFX and animation. While there are still small start-ups, it doesn't seem that there are as many medium-sized businesses that stay independent for long. Foundry's history - creating software for media and entertainment companies since 1996 - is unique.

What makes you passionate about working in the industry?
The opportunity to work with passionate people, from whom I can learn every day, who generously use their talent to enable the creativity of others. I can't write software myself, but it's incredibly fulfilling to be a part of this team and dedicate my time to ensuring their success.

If you could change one thing about the media tech industry, what would it be?
While the industry is quite collaborative, I think the focus on open standards and the increased support of open source projects is more important than ever to help everyone move faster. Foundry continues to be committed to the open source community through our strong support of the Academy Software Foundation and ongoing contributions to open source projects from our own development teams. Greater contribution from all of us to the open source community is critical to enable greater flexibility across applications and greater collaboration across both small and large studios.

How inclusive do you think the industry is, and how can we make it more inclusive?
I think we have made some progress, but we still have work to do to create a more inclusive industry. In an industry that is built on creativity and innovation, contributions from more diverse backgrounds will benefit everyone. At Foundry we recently announced our sponsorship of Access VFX, a global non-profit organisation focused on driving inclusion and diversity in the VFX, animation, and games industries. Mentorship programmes like this are so important, as the next generation of talent truly is the future of our industry.

How do we encourage young people that media technology is the career for them?
The range of opportunities is limited only by your imagination, and the rate of change creates new opportunities quite quickly. We often expect young people to know what they want to do at such a young age, and I believe this industry provides a great amount of flexibility and growth opportunity for someone with drive, resilience and a commitment to their team. I do believe the most successful tech companies thrive because of the team and not just one or two individuals, so this is a great industry to explore and contribute to in countless ways.

Where do you think the industry will go next?
As long as the increased demand for content continues, the drive for efficiency will become even more important over time. Flexibility across VFX and animation pipelines, locations, and companies to distribute work seamlessly and use resources in the most efficient manner possible will be critical for working at any scale.

What's the biggest topic of discussion in your area of the industry?
Access to talent, regardless of location, for both artistic and technology jobs continues to be top of mind. Freeing up time for artists to focus on the work that impacts the quality of the image on screen is critical, given the volume of work they are trying to produce on timelines that seem shorter than ever. Regardless of team or company size, our customers want as much time as possible for the creative process, so the more time we can free up for them through performance improvements, workflow innovation, or automation, so they can focus on creative iterations rather than repetitive work, the better.

What should the industry be talking about that it isn't at the moment?
The conversation is already in progress, but I think we are all going to be spending even more time across the industry working to make all processes more efficient at scale. This is a long-term and complex challenge at nearly every level that we are all facing, so collaboration and partnerships between content producers, hardware, and software companies are more important than ever. n

