TVB Europe 66 July/August 2019


Intelligence for the media & entertainment industry









How’s your summer going? Will you have a break before the media tech industry decamps to Amsterdam? Or are you working through your busiest time of the year? As always, we’re not slowing down at TVBEurope towers, with a whole host of content debuting on our digital platform throughout the summer. In mid-July we kicked off our new State of Play webinar series with a look at 8K/UHD/HDR. As part of the webinar we asked viewers to vote on whether 4K/8K or UHD are more important to the industry right now. The result was pretty emphatic, with two-thirds of respondents opting for UHD. How much will that impact what we’re all talking about in Amsterdam in September?

We’ll have more webinars in the run-up to IBC and also after the show, so keep an eye out for more announcements. I’m also really excited to reveal we’re launching a new podcast series looking at the Summer of Sport, with guests from UEFA, Eurovision Sport and more. Expect that to debut at the end of July. Turning to this month’s magazine, we’re focusing on production and all of its various processes, from editing to colour grading, sound recording to VFX. There are some fantastic insights into how technology continues to push forward the production process, as well as a few controversial opinions!

I’m sure those of a certain age (myself included) will recognise the stars of this month’s cover, or at least the characters they’re portraying. Dan Meier talks to Ben Kellett, director and executive producer of UKTV’s Dad’s Army: The Lost Episodes, who offers some fascinating insights into how production processes of the past have helped shape the brand new episodes. Also this month, George Jarrett meets NATPE president and CEO JP Bommel for a discussion about content creation and delivery. We’re delighted to feature one of the most talked-about series of the past 18 months in Killing Eve, as Daniel Gumble talks to the show’s award-winning sound crew. Plus, Philip Stevens visits Salford’s dock10 to celebrate a decade of successful production; and BARB chief executive Justin Sampson discusses how the company is adapting to a fragmenting TV industry. Hopefully, all of that should keep you busy throughout the summer, whatever your plans are!

‘We asked viewers to vote on whether 4K/8K or UHD are more important to the industry right now. The result was pretty emphatic with two-thirds of respondents opting for UHD.’

Editor: Jenny Priestley
Staff Writer: Dan Meier
Graphic Designer: Marc Miller
Managing Design Director: Nicole Cobban
Contributors: George Jarrett, Philip Stevens, Daniel Gumble
Group Content Director, B2B: James McKeown

MANAGEMENT
Managing Director/Senior Vice President: Christine Shaw
Chief Revenue Officer: Luke Edson
Head of Production US & UK: Mark Constance

ADVERTISING SALES
Group Sales Manager: Richard Gibson (0)207 354 6029
Sales Executive: Can Turkeri (0)207 534 6000
Commercial Sales Director, B2B: Ryan O’Donnell
Japan and Korea Sales: Sho Harihara +81 6 4790 2222

SUBSCRIBER CUSTOMER SERVICE
To subscribe, change your address, or check on your current account status, go to or email

ARCHIVES
Digital editions of the magazine are available to view on ISSUU.com. Recent back issues of the printed edition may be available; please contact for more information.

LICENSING/REPRINTS/PERMISSIONS
TVBE is available for licensing. Contact the Licensing team to discuss partnership opportunities. Head of Print Licensing: Rachel Shaw

Future PLC is a member of the Periodical Publishers Association

All contents © 2019 Future Publishing Limited or published under licence. All rights reserved. No part of this magazine may be used, stored, transmitted or reproduced in any way without the prior written permission of the publisher. Future Publishing Limited (company number 2008885) is registered in England and Wales. Registered office: Quay House, The Ambury, Bath BA1 1UA. All information contained in this publication is for information only and is, as far as we are aware, correct at the time of going to press. Future cannot accept any responsibility for errors or inaccuracies in such information. You are advised to contact manufacturers and retailers directly with regard to the price of products/services referred to in this publication. Apps and websites mentioned in this publication are not under our control. We are not responsible for their contents or any other changes or updates to them. This magazine is fully independent and not affiliated in any way with the companies mentioned herein. If you submit material to us, you warrant that you own the material and/or have the necessary rights/permissions to supply the material and you automatically grant Future and its licensees a licence to publish your submission in whole or in part in any/all issues and/or editions of publications, in any format published worldwide and on associated websites, social media channels and associated products. Any material you submit is sent at your own risk and, although every care is taken, neither Future nor its employees, agents, subcontractors or licensees shall be liable for loss or damage. We assume all unsolicited material is for publication unless otherwise stated, and reserve the right to edit, amend, adapt all submissions.




JULY/AUGUST 2019

06 Let’s get personal

Scott Davies on how personalisation enhances a brand’s audience engagement

10 Recreating TV once thought doomed

Ben Kellett tells Dan Meier how he recreated three lost episodes of Dad’s Army

16 Measuring the race for viewers

Jenny Priestley talks to BARB chief executive Justin Sampson about tracking SVoD ratings

19 Vlad the entertainer



Dan Meier meets the Framestore team responsible for bringing Vladimir Putin to life on the BBC

34 IBC2019 will explore ‘a new era in media’

Conference producer David Davies discusses what’s new at IBC this year

36 Making a killing

Daniel Gumble asks Killing Eve’s award- winning sound crew about the show’s technical challenges

46 Here is the news

For its 65th anniversary, Philip Stevens reports on the history of BBC News

50 The key to dock10’s achievements


Philip Stevens heads to Salford to celebrate 10 years of dock10

60 The impact of blockchain technology on media and entertainment

IBM Aspera’s James Wilson explores the potential of blockchain for the industry


Let’s get personal


By Scott Davies, CEO of

With so many different brands fighting for viewers’ attention, it has become harder than ever to effectively engage audiences. A witty narrative or a stunning visual is no longer enough. Brands now have to build an affinity with the audience, enabling them to be part of the campaign or to influence it in some way. This is vital when it comes to extending the lifespan of a campaign, and it all revolves around the ability to look beyond the traditional delivery limitations of linear TV production.

Many assume that TV campaigns have to stay static, but this is no longer the case. Indeed, modern campaigns have to be flexible if brands really want to boost campaign reach and brand affinity, as well as drive purchase intention. This is where personalisation comes into play. When it comes to building a relationship with an audience and generating engagement, personalisation is key. And the good news for brands is that it can come in many forms. At a basic level, personalisation could just mean showing someone’s name, an image or a video. Or it could mean highlighting someone’s influence, i.e. their vote or collective opinions. Whatever approach they take, it’s becoming more and more apparent that brands can’t afford to neglect that personal touch.

THE POWER OF PERSONALISATION
Ad personalisation has become particularly prevalent online in recent years, simply because it’s now fairly simple to provide a one-to-one experience online with a webpage or animated interaction. However, the real power is when audiences come across personalisation in unlikely places. Television ads still provide the best example of this, as it’s much less likely that a person’s image or submission will appear in a TV ad, due to the more complex delivery mechanics involved. This is why, when well-known brands offer the chance to be part of an expensive and well-recognised TV campaign, consumers get behind it in their droves.
French food company Danone used social media to maximise its creative relationship with its audience by letting them be a part of a campaign for its yoghurt brand Oykos. Danone offered members of the public the chance to star in an ad with TV celebrity Mark Wright. All they had to do was submit the reasons why they should be in


the ad, along with an image, via Facebook. Submissions were reviewed in real time, and the ad was automatically re-rendered once the winning entry had been chosen, creating a unique advert every time. Of course, social engagement quickly climbed as the ads aired, driving online engagement with the brand, extending its social reach, and enabling Danone to prolong the life of the campaign and have a much wider impact.

NEXT-LEVEL CAMPAIGNS
So personalisation can clearly have a big impact on audience engagement, but what do brands need to think about in order to enhance their own ad campaigns? The first factor to consider is the importance of keeping content fresh. Collecting user-generated content through social media, for example, is an obvious way of achieving this, enabling brands to change the creative of an ad on a regular basis. This doesn’t just have to be user-generated content in the traditional sense; it can include any dynamic data that affects the creative of the ad. Brands should consider using automation tools, which can keep additional production costs low, or even non-existent in many cases.

Next, remember to extend the reach beyond the screen. Extending the conversation and fostering a two-way dialogue to strengthen the relationship is where brands can really maximise the value of personalised campaigns. For example, social media can be a great way to start an ‘off air’ conversation, while tools like Messenger will drive new interactions and purchase intention by increasing product knowledge. This can also bring interactions back to the screen: informing users that their interaction or submission may influence today’s or tomorrow’s ad will encourage people to tune in, creating a perfect circle of engagement.

Finally, don’t forget to think about the long-term impact. Audience engagement shouldn’t stop once an ad campaign has finished. All the work that goes into creating personalised ads that get consumers excited about the brand will be for nothing if it isn’t built into something more long-term. Ultimately, brands have to remember that TV and digital content provision is a crowded space. But by driving audience engagement through personalised and user-generated content, brands can create TV campaigns that make an impact and drive real business benefits.


A crowded OTT market means opportunity ahead for telcos


By Kamal Bhadada, president, CMI, Tata Consultancy Services

The media industry is undergoing an unprecedented period of change. Consumers are cutting the cord on expensive pay-TV and switching to streaming services like Amazon Prime and Netflix, which boast flexible packages with no termination fees. According to Ofcom, the number of subscribers to streaming platforms overtook the number of people with traditional satellite and cable services in the UK for the first time in 2018.

Established content creators, studios and broadcasters who previously sold content through distribution channels are now going direct to consumer, offering their own streaming services to stem the haemorrhage of cord-cutting. Newer entrants to the market such as Apple and Disney are battling more established players like Sky, the BBC and HBO, making for an increasingly crowded online streaming market. In the world of telecoms, all these businesses are called OTT, or ‘over the top’, platforms, because their service literally sits on top of the high-speed internet infrastructure provided by telcos.

Telco businesses are facing their own challenges, with many anticipating high capital expenditure to modernise their networks. However, there are huge opportunities here. Telcos have the chance to monetise their infrastructure assets by providing their own next-generation OTT content and smart home services as add-ons. There are plenty of new business opportunities for telcos that forge new paths, taking advantage of 5G innovations to provide their customers with the kind of creative, value-add services previously only offered by OTT players.

Telcos often struggle to manage disparate bandwidths, configurations and legacy networks while maintaining high-quality services. One of the most important investments for telcos today is in building up Cloud-based digital platforms that can unify these different technologies. This is a long-term commitment and a serious investment, but it is the only way to provide the scalable bandwidth required to support consumers’ ever-growing appetite for data. Video will account for more than 82 per cent of global internet traffic by 2021, according to Cisco. The more customers stream content, the more strain is placed on the existing infrastructure, which will become increasingly commoditised. If telcos remain confined to network and infrastructure services, they risk sliding into a downward spiral of falling customer numbers, falling revenue and a lack of investment in the infrastructure needed to maintain the right levels of service.

A robust, modern network infrastructure will provide telcos with an abundance of new business opportunities to monetise across new channels. Smart home services, gaming, AR, VR, 4K and eventually 8K streaming are all key areas of potential growth and monetisation. The key is building a robust platform capable of delivering these services directly, rather than relying on OTT players. This will require building tighter integrations with the wider industry ecosystem.

Put simply, 5G gives telcos the upper hand: telcos that run 5G infrastructure will be able to negotiate partnerships with content providers. OTT platforms will need 5G speeds to distribute their content to increasingly mobile-first consumers, especially for low-latency content like sports and esports. Telcos already have experience building these ecosystems, and partnerships like BT’s rights to the Premier League and the Champions League chart a way forward for the wider industry. 5G gives telcos more leverage to develop and extend such partnerships in the future.

Currently, there is an overwhelming number of OTT subscription services available to consumers. This fragmentation has resulted in an inconsistent customer experience across disparate platforms. Industry surveys have found that most customers would choose no more than two OTT services, and would prefer a single, consistent interface. Telcos are in a unique position to aggregate this content and create a single, unified digital marketplace. According to Salesforce, two thirds (67 per cent) of consumers would pay more for a great experience, so there is a real incentive for telcos to invest in delivering it. Telcos need strong analytics capabilities to deliver the right content to the right consumer on the right device at the right time. The ability to mass-personalise and create commercially attractive bundles will drive customer loyalty and up-sell.

We encourage telcos to harness the abundance of opportunities on offer with OTT. It will require a mindset change, and the ability to innovate technically and create new business models that will position them at the core of the media industry.



Why VFX studios struggle to adapt to distributed post production By Mathieu Mazerolle, senior product manager - Athera, Foundry


The world of VFX post production is moving fast, as the industry re-shapes to keep pace with a growing global appetite for top-quality, engaging visual experiences – not just on the big screen, but even more notably on the small screen, as mega-budget TV shows and endless original streaming series continue to captivate viewers. This shift in demand for more and better effects everywhere means lots more work for everyone – but smaller VFX studios in particular have yet to carve out an ideal business model for adapting to the new data-driven reality, where contract renewal so often depends on the results of audience analytics. Few are set up to pivot easily to the next thing as they sit out the long wait to see whether they might be invited back for the next episode or season.

Those who’ve succeeded have transformed themselves into resilient nomads – capable of jumping from one short-term project to the next, having built up a solid Rolodex of contacts, and able to assemble the necessary team and equipment within a few short phone calls. This new generation of studio stays lean and agile by maintaining minimal overheads and infrastructure – taking advantage of distributed teams and Cloud capability to scale up and down as needed, and to deliver faster turnaround. As the concept of the dependable, long-throw project slowly fades into oblivion, these players have learned how to keep business moving by juggling smaller portions of many projects.

But while companies, providers and platform people excitedly throw around terms like “studio in the Cloud”, “virtual studio” and “all-in-one studio” as they imagine the business possibilities, the distributed model continues to face headwinds around remote collaboration, and around costs shifting from capex to opex as studios move from in-house infrastructure to virtualisation and “post as a service”. Nobody has really put the virtual studio together in a way that puts creatives in a position where they can just be creative.

Those considering the leap quickly run into the stress of unknowns around security, cost and performance – concepts that are more or less nailed down in the traditional on-prem studio. And they often have trouble moving away from that familiar model, and the advantage it offers in keeping people physically next to each other. They’ve got a pattern of working that they’re used to – so even if it’s not as applicable in the future, they have a production pipeline that works. Now they have to reinvent everything in this new distributed world, where maybe you have the right people, but they might be remote, and you might not have all the equipment in one place.

Security concerns quickly arise, with best practices on breach prevention yet to be defined in the distributed world. Developing trusted partnerships and finding solutions with built-in security becomes key. Creating just as much anxiety are concerns over cost

‘Nobody’s really put the virtual studio together in a way where it puts the creatives in a position where they can just be creative.’


and performance. Unlike the on-prem way of working – where a studio buys its equipment up front, pays out labour costs from there, and can refer to a history of jobs to estimate budgets – at present there is no simple way for a virtual studio to accurately predict project costs. At the same time, many start-ups lack the IT expertise required to reason about performance requirements, and are left scratching their heads over fundamental questions like: what kind of internet connection should we ask for from our provider? And what sort of storage should we ask for from Google? They come to the Cloud platform having no idea what they need. They just move really fast, wanting to get this stuff working, because there’s a juicy project and they know the only way they’re going to win it is by building a tribe of people really quickly and finding some way of gluing them all together to get the work done. Foundry hopes to make those logistics easier for virtual

studios with its Cloud-based platform, Athera. Launched over a year ago, Athera is the result of more than two years Foundry spent in beta and R&D, packaging together optimal configurations to meet the performance, security and scaling needs – and everything else – that VFX freelancers and studios require to maintain smooth pipeline production. Partnered with Google Cloud Platform, Foundry’s Cloud-service offering allows studios and freelancers to access industry-leading tools from any location, on flexible terms better suited to the ebb and flow of their business needs – and we believe it’s the closest thing to the all-in-one solution studios seek for handling the tooling and performance woes of building a virtual operation. Anyone who has squared off all of those issues around security, cost and performance and put them into a nice package is going to relieve a lot of the tension created by the supply side of a larger distributed VFX industry trying to meet the demand side.

‘Foundry’s Cloud-service offering allows studios and freelancers to access industry-leading tools from any location.’

Fully Flexible Hybrid IP/SDI Test & Measurement
ST 2110, ST 2022-7, ST 2022-6 Generation, Analysis & Monitoring

Qx IP Rasterizer
• SMPTE 2110 (-20, -30, -31, -40), ST 2059 Precision Time Protocol (PTP) and two-port ST 2022-7 seamless protection

• Packet Interval Timing (PIT) analysis for rapid diagnosis of issues like packet congestion

• Simultaneous monitoring of 1 video, 2 audio and 1 ANC data flows in up to 16 active and scalable windows

• User-defined instrument display layout for up to 16 instruments, now with presets

• 8 and 64 channel audio support at 1ms and 125µs with either PCM or AES3 transport

Sx TAG IP Handheld Instrument
• Truly portable handheld IP Analyzer and Generator
• Ideal for line checking IP SMPTE 2022-6 & ST 2110 networks
• Analyze ST 2110 (-10, -20, -30, -40) streams up to 3G
• Generate ST 2110 (-20, -30, -40) streams up to 3G

• IP-to-SDI and SDI-to-IP gateway functionality for testing hybrid systems

• Analyze and Generate ST 2022-6 streams up to 3G
• SDI-to-IP and IP-to-SDI gateways for both ST 2110 and ST 2022-6

• Generate an analog reference output slaved to the ST 2059 PTP or ST 2022-6 input

Visit us on booth #10.B12



Dan Meier gets permission to speak to Dad’s Army: The Lost Episodes director and executive producer Ben Kellett

PICTURED ABOVE: The cast of Dad’s Army: The Lost Episodes


These days it feels like we are being served a bottomless supply of absolutely fabulous British sitcom revivals at all hours, but UKTV’s Dad’s Army: The Lost Episodes has the distinction of being lovingly recreated from original episodes of the series that haven’t been seen since they were first broadcast 50 years ago. “I’m very keen to make it as authentic as possible because it’s so iconic and so much-loved,” says Ben Kellett, director and executive producer of the three revived episodes from the second series of Dad’s Army: ‘The Loneliness of the Long Distance Walker’, ‘A Stripe for Frazer’ and ‘Under Fire’ – originally shown in 1969 in black and white, then sadly wiped by a tape-limited BBC. “When I first got into discussions with UKTV about it I was absolutely hell-bent on digging out the old cameras from museums, which had been done before, and making it black and white and 4x3,” Kellett explains. “But actually they made the clear business decision that no one’s going


to want to watch it in 4x3 and black and white, and let’s make it modern-day quality.” As a result, Kellett used four modern cameras (there were five on the 1960s shoot) and shot in widescreen, HD and colour, but graded the picture to more closely resemble the look of series three, which was in colour, “because you can’t really grade colour to look like black and white, because you make it black and white,” says Kellett. The other consideration was to emulate the original shot rate, which was around half that of a modern sitcom: 200-250 shots per episode, compared with 500-600 in today’s half-hour shows. Kellett was able to source the original broadcast scripts from the BBC, so he could use the actual camera scripts along with amendments made by either David Croft or Harold Snoad, who initially directed the episodes. “I was able to replicate a lot of the blocking sort of in reverse; by knowing where the cameras were and what camera shot

what, I could try and work out where people might be in order to achieve that,” says Kellett. “It’s a slightly backwards way of doing it. I might have obsessed too much about that!”

Set designer David Ferris paid equal attention to detail in rebuilding the set by studying the series and photographs. “We have some wonderful photographs from the Radio Times archive, as well as published photographs,” says Kellett. “So he was able to do an amazing job recreating what they’d done originally with our principal set that everyone knows – the church hall, Mainwaring’s office, the bank – and a lot of guest sets, some of which we had drawings and photographs of, and others he just had to design as he thought they would have done then.”

The same goes for costume designer Howard Burden, who found a tweed jacket in a fancy dress shop for Captain Mainwaring, only to discover Arthur Lowe’s name scrawled on the label. “A huge amount of detail has gone into all departments,” says Kellett. “Nicola Bellamy was our makeup and hair designer, and she was obsessively detailed with the wigs for David Hayman and Kevin Eldon, who played Frazer and Jones.

“With Kevin McNally for Mainwaring, because we all agreed that him being follically challenged was key to his character, we tried various pieces and then Kevin very manfully just said, ‘Look, just shave it,’” Kellett recalls. “So we shaved Kevin’s real hair and dyed what remained to give him Mainwaring’s lack of hair, because it was a very important part of his character.”

The casting stuck closely to the essential traits of each character, explains Kellett: “Mainwaring, for example – short, fat, balding man. We all felt, as Jimmy [Perry] and David had felt when they first cast Arthur Lowe in the part, that these aspects were important to the character – one of the reasons Mainwaring is chippy is because he’s short, and he’s slightly balding, and a little bit tubby.
“And then you’ve got Wilson as the foil – he’s very tall and elegant-looking and slightly effete. These are very much a part of that character. So Robert Bathurst was quite right in replicating Wilson’s ear touching and face touching, much as John Le Mesurier did,” Kellett continues. “They weren’t doing impressions – we said that at the beginning. Robert Bathurst put it rather well when he said we were recreating the original scripts with the understudies.”

The episodes were filmed in front of a studio audience, with Kellett shooting each scene twice to provide choice in the edit. “I want the scenes always just to run; I don’t want to stop things in between,” he says. “So it’s important for the cast to have the flow, and for the audience to have the flow and the energy that the audience brings.

“Where Jones for the first time says ‘They don’t like it up

‘em,’ it got a massive cheer,” he continues. “And it got a slightly smaller cheer the next time. So I think we ended up with a sort of halfway house in the end of going for the slightly smaller cheer because the massive cheer felt almost too much. “That’s also the case with any studio sitcom,” he notes. “If you do something and it goes wrong, and it goes wrong again, and it goes wrong again, by the time you get it right you quite often get a massive cheer. And of course viewers at home don’t know what the hell that’s for. So with sound

particularly, you’re actually more often than not bringing the laughter and applause that were there down in level, so that it doesn’t overwhelm the person watching at home.” Kellett adds that the sound was recorded using modern booms, “which aren’t that different from the ones they would’ve used – and we tried to use as much of the original music from the original scripts, which I got, as interstitial stuff.” So don’t panic, Dad’s Army fans; the whole team approached this revival with the utmost respect for the source material. “Everyone in the cast came to it with a huge sense of responsibility and love for the project, which made it very, very challenging because we didn’t want to let people down, but also a lot of fun to make,” Kellett concludes.

PICTURED ABOVE: From left to right: David Hayman, Timothy West and Ben Kellett

“We were recreating the original scripts with the understudies.” BEN KELLETT




GATHERING THE ENTIRE ECOSYSTEM UNDER ONE ROOF George Jarrett meets NATPE president and CEO JP Bommel to discuss content creation and delivery


On assuming his role as president and CEO of the National Association of Television Programme Executives (NATPE), JP Bommel identified three priorities, of which making this non-profit trade body a 365-day reality was his key desire. He also wanted to expand the value of the recent NATPE Budapest International conference and content market, and to introduce new membership tiers.

Bommel’s origins were steeped in the Cannes-based shows MIP, MIPCOM and MIDEM, and in working for Reed MIDEM for four years. He subsequently ran his own global events and strategic partnerships business, Barton Creek, which saw him consult on the creation of several big entertainment shows and on constructing digital strategies for clients. “When I came on board with NATPE I saw the great opportunity to make it a bigger and ever-present 365-day support for our 5,000 members,” says Bommel. “It keeps me on my toes because the content creation and delivery business changes so rapidly.

“You have to stay very fresh and aware, and we have managed to keep NATPE ahead of the curve,” he adds.

The 365-day ambition has solid roots. “It relates to our mission statement, which states that NATPE is the indispensable resource in the evolution of creating and selling content, and in creating connectivity, business, and a deal-making environment for members,” says Bommel. “We used to be two shows – Miami and Budapest – but now we are recognised as the world’s best representative of content and distribution people. Having a very strong board, with the heads of studios and executives from companies like Netflix, has helped us create so many more shows and meetings.”

THE ‘FAB 4’ OF STREAMING
The specific initiative that proves NATPE to be on the money with its event strategy is its NATPE Streaming Plus conference (30th July). Keynotes are due from Pluto TV CBO Jeff Shultz; Amy Reinhard, VP of content acquisition at Netflix; Hulu senior VP and head of acquisition Heather Moosnick; Julie McNamara, EVP of original content at CBS All Access; and Ben Relles, head of innovation at YouTube. “I know that people have discussed streaming endlessly,


FEATURE but this is the first time that a credible and large organisation has set up a conference with the ‘FAB 4’ and many other big players,” says Bommel. “One is the French TV5Monde network. “What we provide is a forum for producers, content creators and distributors to experience the economy of that paradigm, and see how they can connect to the big networking corporations,” he adds. The Bommel master plan is to duplicate this kind of initiative around the US and throughout the year, wherever a strong business presence exists. He hopes to bring NATPE to the UK and France soon, but the next big event is the flagship event in Miami in January. In May NATPE staged its LA Screenings Independents, a three-day event, and this preceded the huge Budapest event. “The LA screenings produced a significant growth of delegate numbers from new regions, in the form of strong turnouts from Eastern Europe, Africa and the APAC region. It featured an international coproduction presentation, and a global content executive forum, and we brought executives from China, Brazil, America, the UK and Latin America onto the stage. We also brought in about 100 producers from the US to talk to those people,” says Bommel. “You could see the quality levels of exchange and the creativity that was firing in the room. Being 365 is providing opportunity and connectivity, and our next step is to go after and empower what I call the YouTubers. Bubbling under are maybe 100,000 producers who need a little hand to get across the bridge to television, and becoming successful production companies.” MISSION STATEMENT To expand its 365 intentions, would NATPE co-produce a big event with another media industry group? “We do have strategic partnerships with various entities, and we are working on a couple of those now. If the model is right, and supports our mission statement and our purpose, which is to empower content creators, sure we would take on new partners,” says Bommel. 
How will his ambition to open up new membership tiers work, and will it connect in any way with the NATPE Educational Foundation (founded in 1978 by NATPE co-founder Lew Klein, who remains its president today)? "We won't attract a huge amount of new members with new tiers; the purpose is to win the right members as the market evolves. As we provide access to the top tier of the content community, we have to make sure that our members are screened properly," explains Bommel. "Once you become a member, you have access to our huge database. The NATPE membership is the backbone of our industry."

NATPE is working on recruiting from the online creative community, but it puts as much attention on finding new talent for the industry. "The Foundation mandate is to help on two fronts – to help students find opportunities in the content business, and to help the faculty stay relevant. We have programs where we send professors to real businesses so they can learn what is happening to creativity and the technology around it. All our students are basically interns who are hand-picked by their professors," Bommel says. "Students get a free membership for two years, and for one year after they have finished school, so they can find jobs. We believe this is the next generation," he adds. "We are 5,000 strong, but reach about 40,000."

QUICKLY FILLS UP YOUR YEAR

With its plans to launch yet another major event in 2020, how does NATPE sit with the commercial content market events, and companies like Reed? "We have a great relationship with them because we all belong to a community and all need each other. NATPE has a very complete and valuable position that makes us very relevant. We are about the entire ecosystem getting together under one roof," says Bommel. "If you come to NATPE, go to MIPCOM and other shows, that quickly fills up your year."

At the Miami event in January the increased levels of production coming out of Brazil, Argentina, Colombia and Chile will be a big feature. In Budapest in June regional content was also a big theme. "We want to make sure that local content has a window to the rest of the world," says Bommel. "We have not come to the UK yet, or France, but we really focused on local content in Budapest and we do it in Miami with the focus on Hispanic product."

NATPE has all sorts of guns blazing, one being the NATPE Lens, invented by its well-staffed NXTGN advisory board.
"With the Lens we go to industry companies: we basically bring the members of that NATPE committee into their world. It is a lens on direct-to-consumer, like the lens on Amazon Shine we did. A few months back we were in LA with Stanford University, doing events and creating discussion," says Bommel. "We are witnessing an amazing time in the business of content creation. And that is what we are about. We are thrilled because we see a lot of communities disrupted, and we see a lot of opportunity for writers, producers and distributors," he adds.




MEASURING THE RACE FOR VIEWERS

Jenny Priestley talks to Justin Sampson, chief executive of BARB, as the company continues to future-proof in the face of a fragmenting TV industry



Illustration: Sam Richwood

It's well documented that the big streaming services don't like revealing their viewing figures. They're more than happy to announce when a show or film has broken what they deem to be a record (Stranger Things season three being watched by 40 million households worldwide in its first four days is a recent example). But for shows that aren't rewriting the record books, it's almost impossible to find out how many viewers they're attracting. BARB, the UK television audience measurement currency, is aiming to shed some light on those elusive figures and has recently commissioned Kantar to install new technology into BARB's panel of homes. The new technology is a router meter which can track video streaming activity from a designated list of BVoD services such as BBC iPlayer, ITV Hub and All 4, SVoD services such as Netflix and Amazon, and online video services, YouTube for example.

Historically, BARB has relied on meters in its panel member homes, attached to the TV set. According to Justin Sampson, chief executive of BARB, the new router meter is significantly different: "It will identify certain types of internet activity that we're interested in; it won't track all internet activity from the home, but it will track essentially what we call a white list of sites that we're interested in.

"It means we'll be able to track activity to internet distributed services regardless of whether these services choose to be measured by BARB or not. Obviously one of the benefits of this technology is that we expect to be able to report aggregate level viewing for services such as Netflix, Amazon, YouTube, etc, which to date we haven't been able to do.

"I should point out that at this stage the kind of technology we're deploying is only giving us aggregate level viewing time; it won't actually identify which individual programmes are being watched on Netflix or Amazon. We absolutely recognise there's an interest in getting hold of that information independently from the services themselves."

Another key point about the router meters is that they will hopefully be able to measure more than one device at a time, rather than just the big TV set. So, for example, it will understand if one viewer is watching YouTube on their phone while another has got Netflix on the main TV. "We are going through some final confirmatory tests of what we will and won't be able to report, but we are very optimistic," says Sampson. "I think our initial priority for reporting services such as Netflix, Amazon and YouTube is how much they're being used on the TV set.

"The reason that's such a big priority for us is that we have this phrase, which you might have heard before, which we call unidentified viewing. This is when we know the TV set's on, but it's being used for doing something other than watching a BARB reported channel. In the last few years, the proportion of unidentified viewing on the TV set has grown very significantly. In 2018, one in five viewing hours on the TV set was what we call unidentified, and for young adults, it was two in five. What our customers really want is to understand what is in that bucket, which we currently call unidentified. So our first priority will be to report viewing levels of those services on the TV set."

The router meter will give another benefit in terms of catch-up viewing. Currently BARB is able to report catch-up or time-shifted viewing, but in the vast majority of cases, it's unable to be precise about whether that's somebody playing back a programme through a recording device, or through a BVoD service (unless it's on Sky). "This is where the router meter will help us because it will help delineate post-broadcast viewing between playback and on-demand access across the whole population," explains Sampson.

BARB's current panel size is 5,300, and the router meters will be installed in those homes that have a broadband connection, which is thought to be between 85 and 90 per cent of the current sample. Sampson says he expects to be able to release the first set of figures "sometime during 2020. I hate to use the word backstop, but our backstop position is we're expecting the meters to be fully deployed across all panel homes by the end of 2020. It might be that we may be able to start reporting before it's fully deployed. But we need to ensure that if we do take that decision, then the data that we're reporting is from a part of the panel which can be representative of the whole panel."
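The white-list approach Sampson describes can be sketched in outline: traffic is matched against a short list of service domains, and only aggregate viewing time per service is kept. The domains, service names and matching logic below are illustrative assumptions for the sake of the sketch, not BARB's or Kantar's actual implementation.

```python
# Illustrative sketch of white-list traffic aggregation: only traffic to
# listed services is counted, and only as aggregate time per service
# (no per-programme detail), mirroring the reporting described above.

WHITELIST = {
    "netflix.com": "Netflix",
    "amazonvideo.com": "Amazon",
    "youtube.com": "YouTube",
    "bbc.co.uk": "BBC iPlayer",
}

def classify(domain):
    """Return the service name for a white-listed domain, else None."""
    for pattern, service in WHITELIST.items():
        if domain == pattern or domain.endswith("." + pattern):
            return service
    return None

def aggregate_viewing(sessions):
    """sessions: iterable of (domain, seconds). Returns seconds per service."""
    totals = {}
    for domain, seconds in sessions:
        service = classify(domain)
        if service is not None:  # non-listed traffic is never recorded
            totals[service] = totals.get(service, 0) + seconds
    return totals
```

A real meter would classify traffic far more robustly (TLS metadata, device identification and so on); the point of the sketch is only that activity outside the white list never enters the totals.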

BARB's other major ongoing initiative is Project Dovetail, which aims to meet the television and advertising industry's need to understand how viewers are watching television on devices other than the TV set. Due to the complexity of the project it has been released in three stages, as Sampson explains: "We took the first big step last autumn when we started publishing multiple screen programme ratings. We're getting very positive feedback from broadcasters and agencies about the insight this is giving them across TV sets, tablets, PCs and smartphones.

"For example, with the first season of Killing Eve, if we take episode six we can see that when you add in viewing on tablets, PCs and smartphones then the average audience for the episode went from 8.8 million to 9.9 million; that's a growth of over 10 per cent."

Sampson explains that drama is the genre where BARB is seeing the biggest increase in terms of viewing across the four devices it measures. What does that mean for a show such as Game of Thrones, where viewers were downloading episodes as soon as possible ahead of the linear broadcast so they wouldn't be spoiled? "When we report in circumstances like that we report the average audience for the programme to incorporate live viewing time as well as time-shift," explains Sampson. "The figure that is widely relied on is what we call our consolidated seven-day TV set viewing, and our four-screen viewing is also consolidated seven days. So everything is included."
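The multi-screen uplift quoted for Killing Eve is a simple ratio (8.8 million on the TV set alone, 9.9 million with tablets, PCs and smartphones added):

```python
def uplift_percent(base_millions, combined_millions):
    """Percentage increase from TV-set-only to multi-screen audience."""
    return (combined_millions - base_millions) / base_millions * 100

# Killing Eve series one, episode six: 8.8m -> 9.9m
print(round(uplift_percent(8.8, 9.9), 1))  # 12.5, i.e. "over 10 per cent"
```

The same calculation applied to Love Island's quoted 2018 uplift of around 25 per cent shows why it is the outlier among the shows BARB measures.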


PICTURED: Justin Sampson

Obviously with the change in viewing habits, it's also time for broadcasters, agencies and pundits alike to change their mindset around the traditional overnight figures and be more aware of how a show has performed in the consolidated seven-, 14- or 21-day figures. "For the final episode of Game of Thrones, the consolidated seven-day audience was just over 5 million. And of that, 4.7 million watched on the TV set. It's one of those shows," says Sampson. "The show that kind of busts all the records is Love Island, which gets a massive uplift from other devices. I don't have any figures from the current series in front of me, but we know that they were getting uplifts in the region of 25 per cent during 2018's series."

The next stage for Project Dovetail is that rather than just publishing programme ratings, BARB intends to publish net reach and time spent viewing for channels and programme series. "We're working towards having that released in the second half of 2019," says Sampson. "Then the third step is one that's very important to advertising agencies, which is to deliver multiple screen campaign performance. We're a little bit further away from delivering that; we don't have a timeframe for that just yet."

Finally, as content becomes more fragmented and there are more SVoD and AVoD platforms coming on the market, can Sampson gaze into his crystal ball and predict how different BARB will be in five years' time? "To look into the future you need to look into the past," he laughs. "The reality of BARB's existence for nearly 40 years now is that we've continuously developed our techniques to keep up with fragmentation. That is nothing new to BARB.

"The initiatives that we've been talking about are putting us in a really good place for the next decade. We were particularly encouraged by the fact we've now got to the stage of being able to employ a router meter, because that gives us the capability to do things that we haven't been able to do before, such as measuring services that didn't want to be measured, even if it's just at an aggregate level. So looking into my crystal ball and predicting where we'll be in five years' time: it's continuing to be versatile in applying the techniques we have to the new ways that people are watching their favourite programme."

VLAD THE ENTERTAINER

Dan Meier meets the team at Framestore who are Putin a twist on the chat show format

PICTURED ABOVE: Alastair Campbell meets 'Vladimir Putin'


Believe it or not, animated characters have been interviewing real celebrities since Cartoon Network's Space Ghost Coast to Coast in the 1990s. But those interviews have never been filmed live, and the interviewer certainly has never been Vladimir Putin. This is the premise of the BBC's Tonight with Vladimir Putin, a co-production between Phil McIntyre Television and Framestore that piloted on BBC Two in June. Nathaniel Tapley plays a mo-cap version of the Russian premier, rendered in real time by Framestore in front of a live studio audience as he chats to Alastair Campbell, June Sarpong, Joe Swash and Deborah Frances-White.

The show's co-creator Simon Whalley, executive producer for creative development at Framestore, recalls the idea's inception back in 2015, when Framestore Ventures greenlit the concept on a shoestring budget. "3D wasn't an option so we went with 2D and made a virtue of that. It was a cartoon, two-dimensional Vladimir Putin interviewing politicians in real time in the same way." This early version of the programme was streamed live on YouTube and hosted by comedy website The Poke, whose co-founder Jasper Gibson created the show along with Whalley and animator Joel Veitch.

"We demonstrated that, one, from a character point of view, Vladimir Putin worked for a number of reasons. And that, two, a human being interviewed by a cartoon worked seamlessly," notes Whalley. "So all the kind of technology glitches aside, they felt like they were having a conversation with this character." From there the team moved onto a scaled-up, 3D version and secured funding from the BBC for a pair of 12-minute pilots.

"We grew up on Spitting Image, and that was how we had a lens or a filter into what was happening in politics, and that's how we could process the news," adds Whalley. "Spitting Image isn't here anymore and the Americans are very good at doing that thing with The Daily Show and the like; their comedy is a lens for interrogating the news or politics, it's a brilliant thing. We didn't feel we had much of it in our own country and so we wanted to bring something to the world that would do that specifically."

The idea of mo-cap as 21st century puppeteering provided the technology to realise this vision, allowing Tapley to perform as Putin live and interact with guests in real time. "He's not doing an impression," remarks Whalley. "He's creating a character based on Vladimir Putin as a device to have these sort of comedic interviews. But he's been with us from the beginning, so he started from being in his own clothes waving around in front of a Kinect camera to being in a skin-tight black suit with dots all over and head-mounted cameras."

The suit is covered in 63 reflective markers that Framestore's VICON mo-cap systems reconstruct in 3D space, creating "skeletal animation" at 120 frames a second. "Alongside that, he's wearing a head-mounted camera, which has a single HD feed directly in front of him," explains Richard Graham, CaptureLab supervisor at Framestore. "That video is going into another piece of software that's basically tracking the movements of his face, tracking the contours.
And that facial animation is being added to the skeletal animation and the whole thing is being animated." Both the audience and guest can see the animated Putin on screens, allowing the guest to effectively make eye contact with him, "as though it were really just like a Skype call," adds Graham. "They're all in the same space. The guests could hear him and Nat [Tapley] has a feed from there from the lapel mic. But he's physically in the same room, it's just a black curtain in between so he can feed off the audience. It's not like he's in another room or anything. We're all in the same space. We're just hidden."

For Whalley, this sense of immediacy was imperative, citing the quick turnaround times of topical US formats such as The Daily Show: "It's pretty rapid because it has to be current. We wanted that immediacy. Now whether going forward we have that level of immediacy or not will come down to how it's commissioned, what the budget is to make it and how it's broadcast. But the technology allows for it to be that immediate and to have super topical information."

Since the character is being rendered in real time, the interviews can be captured live, allowing the show to be produced like a regular chat show. "Always what we try to do with these kind of technologies is to have as little impact on the current methodologies as possible," notes Graham. "So everybody who was coming into the TV studio for the day's work is basically doing their job as they normally would. It's just there are these people behind a curtain with loads of computers going 'What's all that over there?' But other than that, we produce it in the same way that you'd normally produce a TV show."

"You just have a few constraints," he continues. "You can't re-light the set minutes before going live. You can't move the furniture – not yet, there's probably a solution to that, but it's more time and money. And Nat has to stay on his stage. He can't walk into the audience – he can't be David Copperfield just yet. But other than that it's just a regular shooting day in a TV studio."

Tapley's space is 16 feet across and 8 feet deep, with a desk in front of him that matches the one in front of the guest, enabling him to interact with the physical space as it's copied behind the curtain. This makes it easier for the guests to engage with Vlad, which apparently came quite naturally to Alastair Campbell, no stranger to tough interviews (or conversing with dictators). "He came in and immediately engaged eye-to-eye with this disembodied head and body on a monitor," recalls Graham. "He didn't go 'Oh, how is this happening?' He just went straight into it. Some of that's testament to Nat and his brilliance at giving the performance, and also the fact that the technology works to the extent that you can convince a guest to have a human conversation with a screen."

"Should a series get commissioned, we now have a very clear roadmap for exactly how to go about making this one of the most wonderful things it could be," adds Whalley. "We've effectively done a really compressed R&D period, but learnt an awful lot in the process." One man who won't be watching, however, is the real Vladimir Putin, according to a statement by the Kremlin itself. And with free publicity like that, who needs enemies?
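The capture pipeline Graham describes, a marker-based skeletal stream with a separately tracked facial stream layered on top, can be sketched as a merge of two timestamped feeds. The frame rates, field names and nearest-frame matching below are illustrative assumptions for the sketch, not Framestore's actual system.

```python
# Illustrative sketch: combine a body (skeletal) animation stream with a
# facial animation stream. Each stream is a list of (timestamp_seconds, data)
# tuples sorted by time; the body stream runs at 120fps, the face stream
# typically at a lower camera rate, so for every body frame we attach the
# facial frame closest in time. Assumes at least one face frame exists.

import bisect

def merge_streams(body_frames, face_frames):
    """Attach the nearest facial frame to each skeletal frame."""
    face_times = [t for t, _ in face_frames]
    merged = []
    for t, pose in body_frames:
        i = bisect.bisect_left(face_times, t)
        # consider the neighbouring face frames on either side of t
        candidates = [j for j in (i - 1, i) if 0 <= j < len(face_times)]
        nearest = min(candidates, key=lambda j: abs(face_times[j] - t))
        merged.append({"time": t, "pose": pose, "face": face_frames[nearest][1]})
    return merged
```

A live system would do this continuously with bounded latency rather than over complete lists, but the nearest-neighbour pairing of two differently clocked streams is the same idea.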




Dan Meier visits visualisation specialist disguise to see extended reality in action

PICTURED: disguise's xR solution in action


After making its name in the live events space creating immersive, 3D shows for the likes of U2 (that explains the glasses), disguise is poised to disrupt the broadcast tech industry with a new extended reality workflow known as xR. Instead of a traditional green screen environment, xR uses camera tracking technology to capture data and renders it in real time using LED screen technology, thereby building a virtual environment in which a presenter can be totally immersed.

"That whole thing of weather forecasters not knowing where cities are goes away," says Peter Kirkup, technical solutions manager at disguise. "The workflow here is a 3D simulation engine, so we've got a 3D model of the LED screen set up. And then we understand the tracking data that's coming in, so we update our model to give it more realistic information to understand where the camera is, lens intrinsics and everything about that camera. And then in the process of doing that, we get enough information to render the scene from the perspective of the camera."

The workflow is agnostic about the tracking system and content system ("Here we're using stYpe but it could be Ncam or Motus," notes Kirkup), meaning it can be deployed by any studio with a screen and tracking system. "What we're doing is packaging together the workflow as part of our software solution," Kirkup explains. "And then we're working with partners to deliver this, so we've got solution providers who are xR solution providers; they can deliver the end-to-end, including the LED screen, the processing, and all of the other bits and pieces that are needed to pull it all together. So although we're the ones kind of pushing the software side of things, ultimately there would likely be a partner who's decided to deliver this as a complete workflow."

Kirkup demonstrates how an image can be rendered onto the front plate at the front of the camera, as well as the back plate (the LED screen). "We can make it virtual and we can move around it; the presenter can stand within the environment," he says. "Or we can actually render it to both the back and front plates. That would allow the presenter standing in the space to know where the object is and the AR to present it in front of them as well, so they can stand and relate to that object, which is quite a unique capability, to be able to align all the pixels individually."

This requires the system to map every pixel exactly and to know where the camera sensor is "at quite an extreme level of detail." That's where the company's experience in live events comes into play, having built its own projector alignment toolset used for displaying projections on buildings. "For our xR workflows, what we've done is extended that capability to a studio environment," says Kirkup.
"We learn the actual geometry of the space and all of the intricacies of where the pixels are placed and so on from that calibration technique. And that allows us to very quickly deploy this type of xR setup into a studio environment. So we're not going out with tape measures and laser measures and things to get accurate data; we're using the tools that are already in the space to deliver that."

As a mixed reality workflow, xR combines the physical studio (LED screen, presenters and everything in the actual space) with the virtual environment, meaning the studio continues to display even when the camera is moved off the LED screen. "We create this kind of seamless world between virtual and real studio. That allows us to create kind of infinite studios and also to hide away all of the production elements, so lighting, DoPs and things like that don't end up being visible to the shot when the camera comes off; it just continues round."

In order to seamlessly blend these environments, disguise has built its own colour calibration mechanism to match the colour between the two spaces. "That allows us to deliver this accurately matched world very fast and fully flexible for whatever the environment is," Kirkup explains. The environments themselves are created in software tools such as Notch, Unreal or Unity, then added to disguise's timeline, allowing the user to build up different sequences. "We can also have virtual windows in there," says Kirkup, editing a layer in the timeline to move the presenter into a floating window within the environment.

Kirkup adds that potential applications range from interviews to news and sports broadcasting, "but it could equally be a virtual set within a much bigger drama piece. I think with episodic stuff, like all of these streaming services are moving towards, there's great opportunity because they don't want the physical cost of building all of these sets all the time. They can turn programming around much faster if they're working in these virtual environments because there's one studio. All the cameras, all the lighting, everything else is already in place. They just change the environment at the touch of a button and they're able to deliver completely different programme environments.

"The biggest thing is about it not being an alienating environment," he concludes. "That means that you can do things like interview pieces with completely untrained presenters in a virtual environment, which you couldn't do with green screen. If you bring a teenager into a green screen environment it's just going to be terrifying! You can bring them into this type of environment and they just naturally fit because it's a virtual environment; they can see exactly what is being rendered there."
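Rendering "the scene from the perspective of the camera", as Kirkup puts it, amounts to projecting 3D points through the tracked camera's pose and lens parameters. The pinhole model below is a textbook simplification of what a tracking system feeds a renderer; the pose, focal length and principal-point values are illustrative, not any particular vendor's maths.

```python
import math

def project_point(point, cam_pos, cam_yaw_deg, focal_px, cx, cy):
    """Project a 3D world point to 2D pixel coordinates for a pinhole camera
    at cam_pos, rotated cam_yaw_deg about the vertical axis."""
    # translate the point into camera-relative coordinates
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    # rotate about the vertical (y) axis by -yaw to enter camera space
    a = math.radians(-cam_yaw_deg)
    xr = x * math.cos(a) + z * math.sin(a)
    zr = -x * math.sin(a) + z * math.cos(a)
    if zr <= 0:
        return None  # the point is behind the camera
    # perspective divide, then offset by the principal point (cx, cy)
    u = focal_px * xr / zr + cx
    v = focal_px * y / zr + cy
    return (u, v)
```

A production tracking system additionally supplies roll, pitch, lens distortion and changing zoom ("lens intrinsics and everything about that camera"), but every added parameter feeds the same projection step.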




Jenny Priestley visits boutique visual effects facility Youngster to find out how their colourist and creative artists are using FilmLight’s Baselight to colour grade content

MY WORLD

PICTURED ABOVE: Youngster graded the video of Sanctuary for Japanese singer-songwriter Joji


Nestled on the top floor of the Heal's Building on London's Tottenham Court Road, visual effects boutique facility Youngster was formed when editing companies The Quarry and Whitehouse moved in together. The facility works on everything from commercials to music videos to feature films, offering full colour grading facilities with a dedicated suite running Baselight. Youngster began working with Baselight in September 2018 and uses it for all colour productions passing through the building. The company has one dedicated Baselight suite and 20 Baselight for Avid plugins, meaning every Avid in the building is able to work with the grading platform.

Jim Bracher is Youngster's senior colourist and spends most of his day working with Baselight. He first used the platform while working for another post house and then worked with another system while freelancing, but he describes Baselight as being "infinitely better." Why? "The tools are more in depth," explains Bracher. "The user interaction with the system is more organic. Of course it's what you know, but I find that I can do more with it, more quickly, more intuitively. The whole way it's layer based, to my mind, is logical; it works. I've never got on with nodes, although Flame uses nodes."

PICTURED RIGHT: Jim Bracher, senior colourist at Youngster

"Flame has a timeline layer system for the edits, but I also work with nodes for compositing, which I can do within the timeline or as a separate render for more complex shots," adds Robin McGloin, Youngster's VFX supervisor and Flame artist. "I started with Smoke, so when Flame adopted the timeline from its little brother, it was a natural progression for me."

Bracher describes grading as very personal, particularly in terms of the artistic side. He says it can't really be taught; the colourist just needs to get their hands on the kit and work with it: "I'm a guitar player and it's similar to that. A guitar is the type of instrument you teach yourself, it's not like a classical instrument, it's you and the guitar. In a way, that's what colour grading is like. You kind of find your own ways of working. On our Baselight Slate grading panel I've changed the button layout many times just because of the way my fingers move. With the standard set-up I kept pressing buttons I didn't want just because I wasn't looking and I was used to the larger version of it. Where my finger was naturally going, the button I want is there now."

One of the stand-out features for Bruce Townend, partner at Youngster, is Baselight's BLG workflow, based on the small and portable OpenEXR BLG file that can be used to create, review and transfer looks between Baselight, Flame and Avid: "The Baselight connects the BLG file, which is a small, low-data-size file, and Avid and Flame can read those files, and it means we can just import the grade and it's working on pictures instantly; there's no rendering involved and it's instantaneous."

"Coronation Street use it," continues Townend, "and it just seems perfect for commercials, when you're working against time pressure all the time; it offers you incredible flexibility. It's the same for Flame as well. You can start work on the Flame project before you've graded anything. You don't have to wait until three months' time when everything's graded and done before you start the Flame job; it can be started the minute anything's been approved."

"As soon as you've got some VFX shots, I can have their latest cut conformed," adds McGloin. "I can start work on the raws, ungraded. At that point Jim has already come up with like a one-light kind of grade, a starting point for a producer or director to show their clients, their agencies. I can keep working on that and they can change their grade as much as they want, because I'm working on that raw and it's essentially a filter that pushes all the colours up together, whatever I'm doing. As long as I'm looking at it through that BLG file, to see what the colour is going to be in the end result, it's going to push everything the same way, so even if they change it drastically it should just work and come back in, and I don't have to drop new plates in; it all just sits on the timeline. I can just import the new BLGs and there you go. It's a massive time-saver for Flame."

PICTURED ABOVE: Treatwell's TV commercial, edited and graded by The Quarry and Youngster, using the BLG workflow to enhance the process

PICTURED RIGHT: Robin McGloin, VFX supervisor and Flame artist at Youngster

Talking of time, how long does Bracher estimate he spends on each project, be it a commercial, video or feature film? "It's difficult to say, but my personal rule of thumb for grading has always been 10 seconds is an hour," he explains. "But of course it varies. Some directors like things to look natural and others want you to put shapes on everything. So if you have a 60-second commercial, you want at least six hours on it." And is it the same for McGloin? "No, it's even more," he laughs. "I have a rule of thumb that for 30 seconds I'd need about five days in general. Sometimes there's no work needed on a project; sometimes it's just colour grade and add some titles, so that can be just a day.

"But in this day and age you're working on a lot more shots than you used to. When I first started out, you'd work on one or two shots in the commercial, and now you're working on the majority. There might be little tweaks and things like that that can be a quick fix, or it can be a big VFX shot, which can take two or three days. It really depends on the subject matter."

Obviously working on different kinds of content requires different kinds of colour grading – a music video is often going to be really vibrant because of the need to catch the viewer's eye, whereas a feature film uses a more subtle colour palette or completely different colour tones. What are the differences when Bracher and McGloin are working on the three kinds of content? "There's no steadfast rule," says Bracher. "The main difference in approach is the time constraints. You usually have a lot more time for commercials. That 10-second rule I mentioned earlier is mainly for commercials; you couldn't apply that to a music video, because that's four minutes long, so if you applied that rule to that, you'd be talking about days and days and days.

"In terms of bringing out detail, if you're working on a commercial you've got all day on it and you want to change someone's t-shirt colour, then that's doable. But on a music video we're talking about maybe 200 shots rather than 40. Then you have to find other ways of doing those kinds of things. So there are limitations with time when you're working on longer-form projects. And when it comes to movies you generally go through, do a kind of technical pass, get everything looking natural, and then you would work on looks."

When the team are working on a grade, are they working autonomously or with a director or producer sat watching over their shoulder? "Ideally you want the director and DoP in the room at all times because you're making decisions that they need to be in on," says Bracher. "When you have long-form work, they are busy people so they can't be there all the time. You want to have a session with them, get an idea of what they want, then you spend hours filling in the gaps on your own.

"There are times that people can't make it, which in some ways is liberating because you just get on with it, and there's less intervention," he admits. "But the problem is then you have back and forth while you're sending clips for approval and then you're getting emails back. It's much better to have them in the room and have a discussion, because it's very time-consuming sometimes. If they want to make changes, sometimes in an email it's hard for them to describe what they want and a lot of miscommunication can occur. So ideally you want them in the room."

Lots of vendors within the media tech industry like to promote the fact that their technology can be used off-prem; the engineer or editor doesn't need to be in the building to work on a project. Is that the same for Youngster? "The BLG workflow means anyone can work remotely, but the grade has to have been done upfront," says Townend. "We can email BLGs around to your heart's content because they're such tiny little files. I could be anywhere, and as long as the original footage is here to grade, I can be sent a BLG anywhere in the world, and I'll be able to carry on working with a graded project."

One of the services offered by FilmLight that the team at Youngster find particularly compelling is the support they offer their clients: "I'm on the phone to them fairly often, usually with feature requests rather than problems," admits Bracher. Of course anything the Youngster team requests will also find its way to FilmLight's other clients. "If it's going to help us and save us time then that's brilliant," says McGloin. "Yes, other people get to use it, but we're ahead of the curve because we know what we've asked for and we know it's there."

Finally, how are the team at Youngster dealing with the higher-resolution formats that are becoming more prevalent in the media industry, such as 4K, HDR and 8K? "I usually get rushes at 4K and I always work with the raw files and transcode them on output," says Bracher. "I will usually conform up to whatever my highest output is," adds McGloin. "Generally at the moment it's HD, but if it's going to cinema then I will finish it at 2K, or whatever the highest resolution that is needed, because if I have massive plates then that might slow me down. But saying that, more things recently have come through that are all required to be 4K."



Sound mixer Ivor Talbot tells Dan Meier how DPA Cores became his great white hope


Did you know there were more shark attack movies released in 2017 than actual shark-related deaths? One of these was 47 Metres Down, a cage-dive-gone-wrong feature directed by Johannes Roberts, whose follow-up floods cinemas this August. What 47 Metres Down: Uncaged has that its predecessor did not (other than Sylvester Stallone’s daughter Sistine) is the ability to record dialogue underwater between four or five people at a time, made possible by DPA microphones and the work of sound mixer Ivor Talbot.

“They needed to be able to hear each other, and also talk to the director and the dive coordinator on the surface. It turned into quite a complicated little communication


setup,” Talbot says. “Johannes, the director, initially asked me to test wireless underwater communications systems, but we quickly discovered they were not up to the requirements of the project, so it required a rethink.”

The previous system involved a helicopter-style contact mic that needed to touch the mouth or face inside a full diving mask. “A diving mask is a very unfriendly environment for a quality microphone due to the changing pressure inside the mask [as the diver breathes in and out]. That pressure affects most microphones,” explains Talbot. “When you put four or five people underwater on those contact mics, due to their dynamic response you get a very nasty-sounding regulator noise… people on the surface


would actually be wincing with pain.” Naming no names, Talbot describes testing various microphones that (literally) struggled under pressure. “I’m surprised any of them worked because of the negative/positive pressure situation, which is very harsh.”

Talbot discovered that the microphone would have to be ported (i.e. with a hole at the back of the capsule to allow the pressure through) to avoid being “overexcited” by the effects of breathing inside a diving mask. “The mics that worked best were the DPAs,” says Talbot. “The Cores had this really wonderful effect, which they do on most other instruments or voices you put them on: they make things sound more natural and they don’t exaggerate.

They’re very flat and they just make everything sound the way you expect it to. So they really worked in that sense, and it was pretty obvious pretty quickly that four people were going to be able to talk to each other and be heard with those underwater.”

The DPA Cores had the added benefit that their capsules could easily be swapped out when they got dunked in water. “Generally you could shake out the water or blow out the water and it survived,” notes Talbot. “I think we had eight capsules, and only one of them completely died. They got drenched every day, over a period of 30 days in the tanks. They were incredibly robust.”

For Talbot, the greater challenge became fitting the

PICTURED ABOVE: 47 Meters Down: Uncaged was mainly shot in a water tank at Pinewood Studios


microphone into the mask. “That was really hard actually, because it has to be waterproof, sealed; it can’t leak any water. Health and safety issues were huge, because we don’t want actors at the bottom of the tank when suddenly the mic pops out and floods their mask. That was the tricky thing,” he recalls, to say nothing of the (admittedly motorised) sharks.

Along with cable specialist Stuart Torrance, Talbot fashioned a nylon bung connected by cable to a two-pin connector that could be unplugged and replugged underwater. “This is something I didn’t know about until I got this job,” adds Talbot. “We had a lot of meetings where we were scratching our heads going, ‘How does this fit with this?’ British manufacturing at its best! Frank Barlow, first assistant sound, had a really hard time constantly maintaining the equipment once Stuart had made it, but was up to the task of keeping it all working.”

Another advantage of the DPAs was their size, since they wouldn’t cover the actors’ mouths. “If you watched the first film, there’s a little red disc in front of their face all the time. And the director and the editor hated it,” says Talbot. “We wanted to see the lips and the mouth move.” To achieve this, Talbot and the team manufactured a grille in which the microphone could sit without being seen. “Those mics are so small. We had a 1.5 inch by 1 inch tube to put the connector and the microphone in - a very tiny space. Maybe half an inch above that is where the capsule sits, with a little cable, the two connectors and the bung. We were very lucky that not only did the mic work, but it was small enough to fit in the space that we had available.”



The results were deemed good enough to use by Roberts and editor Martin Brinkler, as Talbot explains: “They were incredibly happy with the difference, because it meant the editor could actually sit and listen to the recordings. The DPAs made it sound natural. A lot of the time in sound recording it’s not just about how good it sounds; it has to fit in the space you’re recording in. So if you’re recording in a cathedral, you expect it to sound like a cathedral. You record it underwater in a mask, you expect it to sound like you’re underwater. So they made it sound like it looked half the time.”

As well as the DPA mics underwater, a PA system was used for communication between the director, dive coordinator and the sharks (OK, maybe not the sharks), which had to be fed back into the tank so the actors could hear everything. “It became quite a hard task mixing everyone, because there’s still a lot of ambient noise down there; the tanks blowing out gas, the regulators - you get four or five of them going at the same time - and the panicked breathing,” recalls Talbot. “But even then you could still hear what they were saying, and they could still hear themselves.

“The benefit of the DPAs was they made it sound natural; they weren’t just a piece of equipment that would be tucked in there because it just worked,” he continues. “It was something that actually recorded the sound incredibly well and made it sound like it looked, which is sort of the challenge of recording sound really: to make it good but also capture it in the place you’re recording it so it’s believable. I think the sound we recorded was very believable for where the actors were.”

As a result, Talbot swapped out a lot of his old mics for DPA Cores, so clean and reliable was their sound even in the most challenging environments. “As soon as you chuck anything in a tank of water, everything becomes more difficult,” says Talbot. “It wasn’t a huge-budget film, but the director and producers gave me the time to prep the best-quality equipment for the job, as they realised that without a good level of communication they just wouldn’t have been able to make the film. How were they going to make the movie if they couldn’t hear each other or speak to each other? Well, the final proof’s in the pudding, isn’t it?” On that, the sharks will surely agree.



Director Daniel Scheinert discusses getting weird with Adobe Premiere Pro


Set in the aftermath of the titular event, The Death of Dick Long is a deeply strange, darkly funny and gradually chilling movie about masculinity and alienation. TVBEurope asks director Daniel Scheinert how he brought this freaky film to fruition.

THE FILM’S OPENING HAS SUCH A GROGGY ATMOSPHERE. CAN YOU EXPLAIN SOME OF THE TECHNIQUES YOU USED TO ACHIEVE THAT EFFECT?
We spent a whole shoot day shooting the opening scene very much like a music video. We had done tests before production with fireworks and fire effects (not to mention countless tests when I was a pyromaniac teenager in Alabama) and got lucky on the night of the shoot that it was really humid and hazy out. But once we set off the fireworks and got the bonfire going, the smoke in the air just naturally made for a dreamy look. We knew it would be set to a melancholy country song and unfold in slow motion, but we weren’t sure which one.

WERE THE SCENES LIT BY A LAMP AND A FRIDGE LIGHT TRICKY TO PLAN?
We planned practical appliances around the house a lot with Ali Rubinfeld (production designer). We knew we wanted to mix colour temperatures and use the appliances around the house to create different spaces. This applies to sound design too; we added noisy appliances on purpose and created moody tonal shifts using laundry machines, water heaters, ice machines and ceiling fans. So lighting-wise, the fridge light was a vivid green-blue in a red kitchen (augmented with hidden LEDs), and the lamp is a huge character in that scene you’re referencing, so we bought countless options and tested different bulbs in there. Everything came together beautifully on the shoot day and went very smoothly.

THE SEQUENCE IN WHICH ZEKE CONFESSES TO LYDIA HAS QUITE A STRIKING VISUAL QUALITY - WAS THERE SOMETHING DIFFERENT YOU DID FOR THESE CLOSE-UPS?
We knew the whole movie could succeed or fail if that scene landed. Ashley Connor (my DoP) found a montage of Jonathan Demme scenes where actors look just barely past the lens, and we got excited about that as the approach to this sequence. We had red lights in the kitchen take over the space more and more throughout the film (inspired by Robby Müller’s work), but this scene is one of the first times you’re in a warm, reddish, hellish environment with no escape (lighting and colour-wise). The equipment was pretty straightforward: a shoulder-mounted camera with a wide-angle prime lens.

IN THE SCENE IN THE CAR AT THE START, THE CAMERA IS ALMOST FROM THE POV OF DICK AND TURNS SIDEWAYS AT ONE POINT - WAS THIS HARD TO MANOEUVRE?
Ashley hand-held the sequence while crouched down on a stinky, stained car seat. She never complains, even when maybe I’m asking too much of her - although this time I think it was her idea to shoot the car screeching to a halt from this angle. In the edit it made the scene so much funnier to have the camera swing around with the actors here. POV shots were a common tool in the film to get the audience really into a character’s world. In this case you’re wondering what on earth is going on with this wounded man whose eyes you’re viewing the world through.

WAS PREMIERE PRO A HELPFUL EDITING TOOL?
Premiere was great when working with the editor Paul Rogers. He’d version out countless iterations of a scene, oftentimes pushing things way too far just to see when the scene would break, and it was easy to organise the growing project and go back three months at a moment’s notice to see where the scene had started. I’m also an impatient director and hate sitting on the couch giving instructions. I had a hard drive of my own so I could experiment and easily send Paul a sequence if I discovered something interesting.

WERE THERE ANY OTHER EDITING TOOLS YOU USED THAT YOU FOUND USEFUL?
I also sat on the couch doing all the VFX for the film (mostly cleanup where we’d made a mistake - I’m terrible at playing dead, so I kept breathing and blinking when I was playing dead Dick). I’d throw a clip in Adobe After Effects and send it right back a few minutes later at full resolution.

Daniel Scheinert’s new film, The Death of Dick Long, was created using Adobe Premiere Pro and recently premiered in the UK at Sundance Film Festival: London, where Adobe was Presenting Partner.


IBC2019 WILL EXPLORE ‘A NEW ERA IN MEDIA’
By David Davies, consultant and conference producer


With conference and exhibition dates now aligned, this year’s IBC will take place from Friday 13th – Tuesday 17th September 2019 at the RAI in Amsterdam, and will provide an unprecedented opportunity for attendees to enhance their understanding of a media production environment whose pace of change continues to accelerate. Themed conference days, increased representation of women on stage and a brand new esports event are among the developments that will greet visitors at this year’s IBC, as organisers continue to strengthen its credentials as the world’s most influential media, entertainment and technology show.

In line with the multitude of creative, commercial and technical issues now influencing the development of media production, this year’s IBC conference has a different theme for each day: Friday’s is ‘create and produce: creating disruption’; Saturday’s is ‘manage: automating media supply chains’; Sunday’s is ‘publish: embracing the platform revolution’; Monday’s is ‘consume: engaging consumer experiences’; and Tuesday’s is ‘monetise: scaling audiences and revenues’. Confirmed keynote presenters include Cécile Frot-Coutaz, head of YouTube EMEA; Arnaud de Puyfontaine, chairman of Vivendi; and Max Amordeluso, Amazon Alexa evangelist in the EU.

Building on the success of last year, IBC is also bringing back the Global Gamechangers Stage, which has a special focus on the technologies and business developments expected to change the game for the media industry. Speakers on this


stage are set to include Gary Shapiro, president and CEO of the Consumer Technology Association; Jane Turton, CEO of All3Media; and Lisa Opie, director of factual for BBC Studios.

The final day of the show, Tuesday, will play host to an innovative new event, the Esports Showcase, powered by ESL, EVS and Lagardère. It combines a series of conference sessions – which will include the participation of key players such as Ginx TV, Twitch, Riot and Blizzard, as well as developers like EA Sports – with a live demonstration of Counter-Strike featuring professional teams from ESL’s National Championships.

Also new for IBC2019 is the Media-Telecom Convergence Catalyst. This collaboration between IBC and the TM Forum will see three catalyst projects on the showfloor that highlight open innovation between the telecoms and media industries. Participation from Al Jazeera,

Associated Press, BBC R&D, RTÉ and more will show how 5G, AI and big data management can solve business and technology challenges and improve the customer experience.

IBC2019 provides a vital annual opportunity to become acquainted with the latest media solutions. It is also a venue for serious networking and deal-making. IBC is a show where business genuinely gets done and deals are signed on the show floor. It’s great for exhibitors, but good too for buyers, who have the opportunity to compare solutions from all the leading vendors around the world in one convenient showcase. Attendees will go home equipped with unique shared insight and experience that will drive their own businesses forward. What visitors will experience at IBC2019 will energise and motivate them, and reveal new opportunities – for their businesses and as individuals – in the year ahead.

For more information, and to register, please visit:


A FOCUS ON SONY’S VENICE UPDATE
Jenny Priestley talks to Claus Pfeifer, head of technical sales, broadcast and cinematography, Sony, about the company’s recent firmware update for its Venice cameras


Originally announced at NAB in April, Sony rolled out a new firmware update for its Venice cameras at the end of June. The update has been released following feedback from camera operators and DoPs, as Claus Pfeifer, head of technical sales, broadcast and cinematography at Sony, explains: “A lot of our customers are rental companies or DoPs who have either rented or bought Venice. They were very happy about the versatility of the camera, including the picture quality, the colour depth, the size of the sensor, which has been very popular, and the capability of recording anamorphic and different aspect ratios, as well as different lenses. That has been really appreciated. The only thing that was missing was the higher frame rates. And that’s what we have now delivered with version four.”

That’s great news for users, but what does the firmware update actually entail? “This version of firmware has five different features included,” continues Pfeifer. “The most important is the high frame rate feature. That means that Venice will now go up to a maximum of 120 frames per second in 4K mode. So, in the 2.39:1 aspect ratio, you will be able to record at 120 frames per second, which is ideal for any commercial shoot, sports or any special effects.

“For customers using anamorphic lenses, the 4:3 sensor readout is important. With this mode, you can go up to a maximum of 75 frames per second, which means three times slow motion at the 25 frames per second rate that is used a lot in Europe for commercial

shooting. Last but not least is the popular full-frame 6K 3:2 mode, where the maximum frame rate is now 60 frames per second.”

One of the key features of the Version 4.0 update that Sony was keen to stress during its NAB press presentation was live shading. According to Pfeifer, this will be particularly key to live entertainment shows and sports broadcasts, where the camera operator would traditionally be dealing with focus and zoom: “All the operation itself in terms of frame rates, iris

control, the settings of the camera is done remotely inside the OB van, so the camera operator inside the stadium is only finding the best picture,” he explains. “For example, if the sun is changing, you must change the iris settings using an RCP remote control unit, which can be controlled by a joystick inside the OB van. That capability has now been implemented into Venice.”

Again, Pfeifer says this development is down to feedback Sony has received from its customers, because they want to be able to use Venice cameras either at large concerts or inside theatres. “I know that the National Theatre in London has been using F55 cameras in the past, usually a mixture of Super 35 cameras and broadcast system cameras with the two-thirds-inch sensor,” he continues. “Depending on where you are, if the camera is far away from the stage, then you don’t need the large-format sensor. But if you’re close to the stage, then the shallow depth of field makes a lot of sense. That is the advantage that Sony has, because we have the whole lineup of products all working together using the same RCP, the same protocol, the same control mechanism.”

“The Version 4.0 update includes elements that a lot of our customers have been waiting for. But we did not stop there, because we launched Version 5.0 at CineGear in LA last month!” reveals Pfeifer. “It will have a lot of smaller updates that customers and DoPs have been waiting for and have asked us to do. We continue to listen to the customer, continue to listen to how our cameras are used, and optimise the software for those uses.”
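The slow-motion figures Pfeifer quotes follow from a simple ratio of capture frame rate to playback frame rate. A rough sketch for readers who want to check the arithmetic; the function name is our own, and the 24fps and 30fps playback rates are illustrative additions, not drawn from Sony's announcement:

```python
# Slow-motion factor = capture frame rate / playback frame rate.
def slowmo_factor(capture_fps: float, playback_fps: float) -> float:
    """How many times slower than real time the action plays back."""
    return capture_fps / playback_fps

# Venice v4.0 maximum rates quoted above, against common delivery rates:
assert slowmo_factor(75, 25) == 3.0   # 4:3 anamorphic mode at European 25fps: 3x slow motion
assert slowmo_factor(120, 24) == 5.0  # 4K 2.39:1 mode delivered at 24fps cinema: 5x
assert slowmo_factor(60, 30) == 2.0   # 6K 3:2 full-frame mode at 30fps: 2x
```

The same 120fps capture yields only 4.8x at 25fps delivery, which is why quoted slow-motion multiples always depend on the target playback standard.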





Daniel Gumble delves into Killing Eve’s technical challenges with its award-winning sound crew



Few shows have captured the imagination in recent years quite like Killing Eve. And that’s saying something, given the golden age the TV world has been enjoying for the past decade or so. In an era where blockbusting budgets, A-list casts and CGI extravagance have become commonplace on the small screen, the first half of 2019 has already treated TV audiences to new series of Game Of Thrones, Stranger Things, Big Little Lies, Jessica Jones, Black Mirror, Good Omens, Catch-22 and Chernobyl, to name but a few.

All of which makes the success of Killing Eve all the more remarkable. In essence, it’s a show that has taken the fundamental principles of great storytelling - impeccable writing, endearing characters and peerless performances - and subverted them to create something that is not only unique in today’s TV landscape, but can also be seen, and indeed heard, over the squall of fire-breathing dragons, explosive special effects and Hollywood sheen emanating from the competition.

Like the first series in 2018, the second series of Killing Eve, which concludes in the UK at the end of July, has drawn plaudits from across the globe, scooping numerous awards and proving a major hit once again with the public. For the uninitiated, the show is based around the relationship between British Intelligence investigator Eve Polastri (Sandra Oh) and Russian assassin Villanelle (Jodie Comer), and takes place across an array of glamorous European city locations.

And while plaudits have been bestowed upon Killing Eve’s writing team and its star turns from Oh and Comer - the show won BAFTAs for Best Actress (Comer), Best Supporting Actress (Fiona Shaw) and Best Drama Series - equally crucial to its success is the work of its sound production crew. In April of this year, Killing Eve was announced as the winner of the sixth annual AMPS (Association of Motion Picture Sound) awards for excellence in sound for a television

PICTURED ABOVE: Sandra Oh and Jodie Comer

drama, which recognise the importance of ‘clear dialogue and impressively crafted sound in any feature film or television drama.’

According to location sound mixer Steven Phillips, Killing Eve’s second outing was all about refining the process, as opposed to any major upscaling of the work carried out in series one. “We had to accommodate the fact that the first series was so successful, which made shooting on the streets difficult; ADs had to mask out paparazzi or people walking down the street so they couldn’t interfere with the shoot or see spoilers,” he explains. “Everyone knows the cast now and we don’t want to hear people walking past and saying, ‘oh wow, it’s her off Killing Eve.’”

For Phillips, the key challenges with location mixing on the show remain the same: battling with the sound of the busy city locations and clearly capturing dialogue. “Sometimes it feels like trying to make a silk purse out of a sow’s ear, because we film in some especially noisy locations, like London, Rome, Paris and Bucharest,” he continues.



“When you see cities on screen you have to hear them too, but not too loudly, so we have to get rid of most of it and provide the post production team with as clean a dialogue track as possible; then they can cut the sound to the picture wherever they want. One of our tricks is to put up an atmos mic so we can provide a clean track of traffic background or an aircraft going across. We capture all that out-of-the-ordinary sound so that we have a clean effects track that can be put underneath. This way we can keep that atmospheric sound without making a big thing of it. This really helps the guys in post production.

“We also shoot a lot of two-camera stuff, but we have limited shooting time, so sometimes we are forced to shoot ‘tight and wide’ (the camera teams might film with 135 or 180 and 50 or 25mm lenses at the same time). This


means we can’t get anywhere near the actors for the tight shots, so we radio-mic everyone. Radio mics have their own challenges, and on Killing Eve some of the costumes are really noisy or tight-fitting, so hiding the mic pack can prove difficult. Villanelle is famous for her costumes, and in the first series I had second assistants working exclusively with the costume department and cast to figure out where to put mics, with some being sewn into the outfits or placed in the cast member’s hair.”

The show’s international feel poses different yet equally difficult challenges for supervising sound editor and dialogue editor Tom Williams, whose role entails editing all location sound and recording and editing all ADR and crowd sound. “My biggest challenge is the multi-lingual main cast and

loop group,” he says. “I was working with a voice coach in ADR to get the pronunciation right, while still trying to preserve the magic of the performance. The female prison scene was fun, with six hyped-up Russian ladies in the studio! Preserving the dramatic scenes (the episode one hospital murders are brutal) is very important, while also adding humour where possible. For example, Villanelle loved her food and we reflected this with exaggerated eating Foley. This was straight off the bat with the first ice cream scene. Historically you don’t see assassins eating much!”

With such a variety of contrasting locations in which to shoot, one of the most vital tasks for the sound crew was achieving consistency. And, as re-recording mixer Nigel Heath explains, it was crucial that the show’s audio component was even not only from scene to scene but from series to series. “The second season was very much a continuation of the first,” Heath elaborates. “With a much-publicised new writer on board, I was mindful of the potential for critics to compare the second season to the first, so I worked hard to ensure sonic consistency so the ‘box set’ audience would feel it just as a continuation.”

Heath also highlights the soundtrack as one of the key elements of the show’s success: “I work in conjunction with my amazing assistant (now co-mixer) Brad Rees, and my job is to blend together all the incoming ingredients from the various sound editors involved in the project: dialogue, sound effects and sound design, Foley and, of course, music. The producers and directors had a clear idea of what they wanted to achieve with the soundtrack from the start, and the styling was soon ‘solidified’ during the post production of series one, episode one.
The score was by David Holmes and Keefus Ciancia and, together with music supervisor Katherine Grieves, they loaded the show with awesome sounds.”

Though delighted to see the bar being raised in the realm of TV sound production, Heath does express a slight concern that the great work being done behind the scenes may be lost on those consuming shows through less-than-ideal audio platforms, such as flatscreen TVs and mobile devices. “It’s great to see more ‘near feature film’ values appearing in TV work when it comes to sound,” he continues. “But I worry that sometimes it’s easy to forget that most punters hear it on very basic kit in a non-ideal domestic listening environment. To me it’s slightly ironic that as methods of sound acquisition and distribution have improved, an increasing number of devices used for the actual reproduction of said sound have become less sonically satisfactory.”

However, in spite of the dichotomy between premium audio and far-from-premium listening devices, sound effects editor/designer Darren Banks insists there is no

room for anything other than the very highest production values in television today. “The standard of TV drama sound is rising all the time,” Banks says. “The line between a high-end drama soundtrack and a movie soundtrack has become increasingly blurred in terms of quality and content in the last few years. Alongside that, viewers’ expectations have also risen, and so has the scrutiny.”

For Phillips, the increased scrutiny and closing of the gap between film and TV sound is a testament to the great work being done not just on Killing Eve but across the medium as a whole. “Everything on TV is driven by what’s done in film,” he states. “It’s just that we have less time. The number of radio mics used, along with the multi-track machines and multiple headsets handed out to production now, is an indicator of how important sound has become. In the ’90s, a TV drama might have had a sound crew of two people; now it’s three or four trained crew. The gap between TV and film is severely narrowed now. The budgets and timeframes aren’t as big, but we use pretty much identical kits.”

With a third series in the pipeline, the team is already relishing the prospect of reconvening and discovering where Killing Eve can be taken next. “On the second series nearly everyone came back, and across the board it was a really nice atmosphere,” Phillips concludes. “We had a good dialogue with the editors and production; they were supportive, and it’s not always the case that you get that level of support and understanding from a production. We were working as part of a bigger team where everyone was trying to help each other. That’s why it was wonderful to win the AMPS award, and I’m thankful and honoured that the work of the entire sound team has been recognised.”




LEGISLATURES
Philip Stevens concludes his mini-series with a look at three more parliamentary broadcasting set-ups


Broadcasting proceedings from the European Parliament (EP) began in 1982, when an OB van bought by the parliament’s audio-visual unit produced coverage of the most newsworthy parts of the plenary sessions, both for archiving and for use by television stations. In 1995 the European Commission started a project called EbS (Europe by Satellite), which included transmissions from the European Parliament. Today, all the plenary sessions are broadcast from start to finish. In addition, parliamentary committees, press conferences, press points and special events in the twin EP venues of Strasbourg and Brussels are included.

“We have two TV channels in broadcast quality, both by satellite and accessible and downloadable via the web in our Multimedia Centre,” explains Jesús Carmona, media director, Directorate-General for Communication, European Parliament. “We broadcast live and recorded, and we have several news slots per day for news summaries. The speciality of the European Parliament is the multilingualism, and therefore the live contributions on these EbS channels are transmitted with both the floor and 23 interpretation channels.” The Multimedia Centre also contributes to a BBC web portal called Digital Democracy, where all the British parliaments are to be found together with the European Parliament.

Carmona continues: “The Bureau of the European Parliament has produced the rules for live TV coverage of plenary sessions. In order to maintain the dignity of the debates, the coverage focuses on the intervening MEPs and avoids, as far as possible, attracting attention to the deputies who disturb the good development of the session. In the event that the president or other MEPs react to the deputies who disturb the debate, the sequence will be shown.”

Carmona says that it is also important to mention the internal use of the signals. “Plenary sessions are available on our in-house IPTV distribution platform in


all language versions. Further, our signal is used to make an official document contributing to the minutes of the plenary meetings – again in all language versions.”

THE OPERATION
Although the European Parliament provides the technical infrastructure, maintenance is contracted to a dedicated company. “For the operation of the channels, we have a contractor who provides staff in Strasbourg and Brussels. We also employ a worldwide contractor to cover the EP president's and EP delegations' visits abroad.”

In Strasbourg, the EP uses eight remote cameras and one fixed fisheye-lens camera for an overview of the chamber. These are Grass Valley LDK-8000 cameras mounted on Shotoku pan and tilt heads for remote operation. For special occasions, such as the Macron/Merkel visits and the Kohl funeral, the unit rented a crane, a wireless Steadicam and a 40x zoom camera. In Brussels there are seven remote cameras, also all Grass Valley LDK-8000s. For special events an additional wireless camera is normally added.

The press conference rooms also utilise Grass Valley cameras. In Brussels there are four LDK-8000s, in

Strasbourg four LDK-80s. Carmona explains: “In Brussels we have 20 rooms equipped with Sony BRC-H700 robotic cameras. Although it is an HD camera, it delivers an SD signal for web streaming. Alongside that, we have five rooms equipped with Grass Valley LDK-8000 professional broadcast-quality cameras.

“In Strasbourg, we have six rooms equipped with Sony BRC-H700 robotic cameras for web streaming. All these meeting rooms are cabled for broadcast multi-camera coverage, if required. In such cases, we rent all the technical equipment.”

All the Sony robotic cameras are controlled by special remote-control software that makes it possible to mimic broadcast coverage with an automated system. This software reacts to the “Microphone Open” signal from the conference system. In Strasbourg the press conference cameras are controlled by Vinten pan and tilt heads.

The vision mixer in the Hemicycle gallery is a Snell & Wilcox Kahuna, while the choice for press conference coverage is a Ross Carbonite. “We do not have audio mixers in our galleries because we depend on the conference service to deliver all audio in the proper format. However, special care is taken to adjust the lip sync of the signals.”

All recording is handled by the unit's Grass Valley K2 servers. This system has an extension to an archive system based on a tape library. Most editing is carried out using Grass Valley Aurora Edit, but Adobe Premiere is employed

PICTURED ABOVE: The control room for the Riksdag overlooks the chamber Photo: Melker Dahlstrand/ The Swedish Parliament


PICTURED ABOVE: The European Parliament in Strasbourg in session

for special programmes. “Alongside our own output, we have large studio facilities with multi-camera operation, enabling television and radio stations to make their own programmes in Strasbourg and Brussels,” Carmona adds. “Further stand-up camera positions are available for live contributions by television stations.”

SO, WHAT OF FUTURE PLANS?
“The Strasbourg installation is going to be renewed this summer. Also, we will put in place a completely new server system based upon EVS XS servers and Harmonic storage in both Strasbourg and Brussels. Editing will be changed to Adobe Premiere, and web-based editing will be added as a simple editing tool. Further, we are investing in transport of video over IP with the SMPTE 2022-6 and -7 standards in mind.”

SWEDISH SCENE
Sweden's national legislature and supreme decision-making body is known as the Riksdag. Since 1971, the Riksdag has been a unicameral (one-chamber) legislature with 349 members.


All debates are broadcast via the Riksdag webcast service, as well as certain other events that take place within the parliament building. Selected debates also include an English translation.

“The Riksdagen started in-house video production in 2000 to enable us to reach a bigger audience both within Sweden and abroad,” explains Mats Tidstrand, Riksdagsförvaltningen section manager, IT Media Production. “All debates are broadcast via the Riksdag webcast service, as well as certain other events that take place within the parliament building, such as committee meetings, seminars and ceremonies that are of public interest. Technology for TV production is available in the Chamber, the former First and Second Chambers, the Skandia Room – which is used for the Committee on EU Affairs – and in the press centre.”

The output means that broadcasting companies can link up to this service and pick up audio and video streams via the Kaknäs Tower. Located in Gärdet, Stockholm, the tower is a major hub for Swedish television, radio and satellite broadcasts. The parliamentary material is free for use by any broadcast company. In addition, the material can be downloaded in low resolution.

PRODUCTION AND POST
“Our rules of coverage call for the material to be objectively produced from beginning to end and to be shown without any editing. We do not show any images of the public at open meetings,” states Tidstrand.

The coverage is operated using in-house staff, although a number of consultants are also used when there are several concurrent productions – or when meetings extend over long periods. Eight cameras are used in the plenary hall, while three committee rooms and the press conference facility are each equipped with four cameras. All the cameras are remotely controlled using robotic systems.

THE ACTION FROM ATHENS
Voulí Tileórasi is the television station of the Hellenic Parliament in Greece. The name originates from the Greek Βουλή (Voulí), meaning ‘assembly’, ‘council’ or ‘parliament’, and Tileórasi, meaning ‘television’.

“The Hellenic Parliament TV station started broadcasting in 1998,” states Dimitris Galanis, technical director of the channel. “We cover everything that happens in the Parliament. That includes the Senate and plenary halls and three committee rooms.”

The primary aim of the channel is to provide direct access to the inner workings of the Hellenic Parliament. As well as live and recorded coverage from within the parliament building, the channel also broadcasts a daily parliamentary newscast that gives details of parliamentary business. In cases where there is a common interest, the station exchanges material with other parliaments. “Although we don't have any specific rules about the coverage, we concentrate only on the speaking MPs during the broadcasts.”

Galanis continues: “Most of the time we have the same programme on the TV channel and our webcast. We operate three web channels where we can show the output of three different committees.”

Six cameras are used in the main chamber, while four are used in the committee rooms. All cameras are operated remotely. “We use Vinten Radamec remote controls with the Thomson cameras, plus we have both Panasonic and Sony remote controls and cameras. We operate both Panasonic and Sony vision mixers, while our audio mixers are mostly Soundcraft consoles.”

Grass Valley K2 video servers are used for recording and playback, and there is an Arbor Media unit for low-resolution recordings. “We use Avid editing systems, although an Adobe suite based on an iMac computer is also utilised for some applications,” says Galanis. “Apart from taking in content from the debates and committees, the channel produces a lot of other programmes, including music and historical documentaries, talk shows and news programmes.”

He concludes: “We are planning to transmit all of our programmes in high definition – so we are looking forward to replacing the old technical equipment.”




HOW TO IMPROVE THE EFFICIENCY AND QUALITY OF AUDIO DUBBING By Niraj Sinha, principal engineer at Interra Systems


As content goes global, overcoming language barriers has become more and more important for content creators. The growing adoption of digital television around the globe, and the rapid boom in mass media delivery over the internet, have made this requirement even more pronounced. There is an increasing need to maximise the reach of content across geographies, and to do so in multiple languages.

To meet this global demand for non-native-language programming, content providers around the world are adopting audio dubbing more widely. While audio dubbing used to be expensive, in recent years the cost has come down, and dubbing has become indispensable for content creators worldwide. This has led to an increase in the volume of content being delivered, and a growing need to automate quality control (QC) for dubbing workflows. Using the latest artificial intelligence (AI) and machine learning (ML) technologies and advanced audio quality algorithms, it is now possible to QC dubbed packages with higher accuracy than ever before. AI-driven technology can spot issues with reduced manual supervision, so by deploying AI-based QC solutions, content creators can bring both efficiency and quality to their dubbing workflows.

When media companies prepare content in different languages, the audio is multiplexed with the corresponding video and stored as separate tracks in digital files, with each track typically carrying audio in a different language. The process of manually ensuring audio in the correct language


is on the correct track is complex and time-consuming. It's an especially challenging task for one person, as there may be multiple language audio tracks in a single file, and it's unlikely that one person is fluent in all of those languages.

A major challenge during audio dubbing is synchronisation between the audio and video streams. A lead or delay of 500 milliseconds can greatly impact the quality of the viewing experience. Communicating the same message in different languages can result in remarkably different durations, which is why synchronisation issues are likely to occur, starting at the transcription and recording stages. Issues might also arise during later stages of the workflow,


with the loss of one or more frames in any dubbed track. To resolve this, content creators need an efficient way of checking for loss of sync between the dubbed track and the master track. They also need a way to determine whether the original audio track has been modified after the dubbing stage, resulting in a mismatch of duration between the original and dubbed tracks. It is difficult to check for metadata-related mismatches between the original and dubbed tracks, particularly when there are a large number of dubbed tracks within the file.

Using advanced QC solutions, content creators can streamline the audio dubbing process. Major advancements in AI technology have automated integral tasks such as language verification and synchronisation checks on dubbed audio tracks, allowing content creators to achieve greater efficiency and accuracy.

Checking for synchronisation between video and dubbed tracks, as well as between original and dubbed tracks, is essential during audio dubbing. Content creators can use the QC system to perform synchronisation checks between audio tracks, comparing the background bed of the master audio track with that of each dubbed track. The limitation of this feature is that the audio tracks must share the same background music and effects; in most cases, however, the master and dubbed tracks do have common background music or effects.

Another benefit of a QC system with AI features is that the synchronisation of video tracks with audio tracks can also be checked. This ensures that black frames in the video coincide with silence in the audio track. Similarly, it is possible to check that colour bars arrive in parallel with the test tone. Content creators can configure this capability so that an error is reported if colour bars appear with or without test tone for a given amount of time.

With a modernised QC system, content creators can also drastically improve language detection. Leveraging advanced ML algorithms, automated QC systems can detect the language of any audio track with more than 90 per cent accuracy. Language detection can be customised to match the dialect spoken in the track. Furthermore, the detected language can be verified against the expected language defined by users, or against metadata from the audio track.

It's important for content creators to choose a QC system that supports all the variations of media file packaging used in the industry. Even if the dubbed audio track is an external .wav file or is wrapped in another container format, the QC system has to be able to support it. Setting up the QC system to merge or distribute file-based packaged audio tracks is a great way to maximise efficiency.

With the QC system, content creators can also compare loudness values between audio tracks. The loudness value is calculated using the standard CALM or ITU algorithms: programme loudness is the integrated loudness of the whole track, while loudness range gives a statistical measure of loudness variation within the track.

Audio dubbing is a crucial part of the business for content creators, as it enables them to deliver video services to audiences in different regions all over the world, extending their brand and profitability. Given the massive volume of content being managed, a fast, precise way to check for audio dubbing issues is a must. Innovations in AI-powered QC systems have automated this process, reducing issues and improving the speed and accuracy with which dubbed content packages can be created.
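The background-bed comparison described above can be illustrated with a simple cross-correlation between the master and dubbed waveforms. This is only a sketch of the general technique, not Interra's actual algorithm; it assumes both tracks are mono NumPy arrays sampled at the same rate.

```python
import numpy as np

def estimate_sync_offset(master, dubbed, sample_rate, max_offset_s=2.0):
    """Estimate how far the dubbed track lags the master (in seconds)
    by cross-correlating the two waveforms at 10 ms steps.
    A positive result means the dubbed track starts late."""
    # Normalise both tracks so mix-level differences don't dominate
    m = (master - master.mean()) / (master.std() + 1e-9)
    d = (dubbed - dubbed.mean()) / (dubbed.std() + 1e-9)
    max_lag = int(max_offset_s * sample_rate)
    step = max(1, int(0.010 * sample_rate))  # 10 ms search resolution
    best_lag, best_score = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1, step):
        if lag >= 0:                      # dubbed delayed: m[i] vs d[i + lag]
            a, b = m[:len(m) - lag], d[lag:]
        else:                             # dubbed early: m[i - lag] vs d[i]
            a, b = m[-lag:], d[:len(d) + lag]
        n = min(len(a), len(b))
        if n == 0:
            continue
        score = np.dot(a[:n], b[:n]) / n  # per-sample correlation
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag / sample_rate
```

In a QC context, the returned offset would be compared against a tolerance (for instance the 500 ms figure mentioned above) and an error raised if it is exceeded.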

‘Using advanced QC solutions, content creators can streamline the audio dubbing process.’
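The programme-loudness comparison mentioned above can likewise be sketched in a few lines. This is a deliberately simplified illustration, not the CALM/ITU measurement itself: a real BS.1770 meter also applies K-weighting and a relative gate, both of which are omitted here, and the tolerance value is an arbitrary example.

```python
import numpy as np

def integrated_loudness_db(samples, sample_rate):
    """Crude integrated loudness in dB: mean power over 400 ms blocks,
    with an absolute -70 dB gate discarding near-silent blocks.
    (BS.1770 additionally applies K-weighting and a relative gate.)"""
    block = int(0.4 * sample_rate)
    powers = [np.mean(samples[i:i + block] ** 2)
              for i in range(0, len(samples) - block + 1, block)]
    gated = [p for p in powers if 10 * np.log10(p + 1e-12) > -70.0]
    if not gated:
        return float("-inf")  # effectively silent track
    return 10 * np.log10(np.mean(gated))

def loudness_mismatch(master, dubbed, sample_rate, tolerance_db=2.0):
    """Flag a dubbed track whose integrated loudness differs from the
    master's by more than the allowed tolerance."""
    delta = abs(integrated_loudness_db(master, sample_rate)
                - integrated_loudness_db(dubbed, sample_rate))
    return delta > tolerance_db
```

Halving a track's amplitude lowers its loudness by about 6 dB, so such a pair would be flagged under a 2 dB tolerance.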


PRODUCTION AND POST As BBC News celebrates an anniversary, Philip Stevens looks at the past and present operation


“This is a start on something we regard as extremely significant for the future.” Those were the words of the then BBC director general, Sir Ian Jacob, on the occasion of the first television news bulletin on 5th July 1954. And there is little doubt he was right – as the BBC celebrates the 65th anniversary of that 20-minute programme, it provides an opportunity to review just how television news has evolved.

Previously, news had been shown on the BBC by means of its daily Television Newsreel programme. But this relied solely on film of events which had happened days earlier – more like ‘olds’ than ‘news’ – in the format of established cinema newsreels. There was no newsreader, commentary being provided by means of a voiceover.

According to Jacob, the introduction of television news had created “significant difficulties”. He said: “News is not at all an easy thing to do on television. A good many of the main news items are not easily made visual – therefore we have the problem of giving news with the same standards that the corporation has built up in sound.” One has to wonder what he would make of today's news programmes, with their huge diversity of technology to help bring news alive.

THOSE EARLY DAYS
Paul Royall, the current editor of the BBC's News at Six and Ten, picks up the story: “Richard Baker was the first newsreader and he explained to viewers that this was an ‘Illustrated summary of the news... followed by the latest film of events and happenings at home and abroad’. And that is exactly what happened. That first bulletin was really a programme of two halves. In the first section, a series of stills, maps and photographs appeared with the newsreader out of vision. The film sequences appeared in the second half.”

It was a year later that viewers first saw a newsreader in vision. “That was Kenneth Kendall, another of those early famous faces on BBC TV. And we had to wait another five years after that,

PICTURED ABOVE: Huw Edwards in the main news studio within the rebuilt Broadcasting House



“That first bulletin was really a programme of two halves. In the first section, a series of stills, maps and photographs appeared with the newsreader out of vision. The film sequences appeared in the second half.” PAUL ROYALL

in 1960, for the first female news presenter to appear on BBC Television. Nan Winton was an experienced journalist who had worked on many different programmes, including Panorama and Town And Around. She sadly died recently at the age of 93, and following the news, Fran Unsworth, our current director of news and current affairs, described her as a ‘trailblazer’.”

Back in those early days, all external shooting was on film – mostly 16mm – because video tape was still many years away. Of course, one of the ‘problems’ with film is that it needs processing. In order to reduce the interval between the film arriving at the studio and its being needed for transmission, editors would cut the negatives rather than wait for a print. The film was then electronically reversed in the telecine chain so that a positive image appeared on screen. And since all the film sequences were assembled in A and B rolls, any change in running order would mean some quick re-cueing on the part of the telecine operator. Beyond that, telecines needed time to run up to the correct speed, so the director would have to “roll TK” (the BBC's designation for the machines) some three to four seconds before cutting to the film.

Since then, the transition has seen video tape (also with a requirement for early cueing) largely come and go, to be replaced by file-based news packages. How much easier it is to have instant starts! Another early ‘problem’ was the lack of teleprompters, meaning newsreaders needed to look down repeatedly at their scripts. And in-ear talkback was unheard of, the production gallery relying on a telephone on the presenter's desk for communication. Perhaps these were a few of the “significant difficulties” alluded to by Sir Ian!
MOVING FORWARD “From 1936 until the early 1950s, except during the Second World War, Alexandra Palace in north London was the major production centre for BBC television, where landmark programmes such as the 1953 Coronation were broadcast,” states Royall. “By 1954 the main BBC output had been moved to Shepherds Bush and the two original studios at Alexandra Palace were re-equipped for use by the TV News Department. After 1956 it was used exclusively for news broadcasts.” He continues: “BBC TV News moved from Alexandra Palace to Television Centre (TVC) in 1969 with the last broadcast



from Ally Pally (as it was fondly known) on Friday 19th September. Operations resumed the next day from the new TVC location with a lunchtime bulletin on BBC1 – in black and white.”

News remained at TVC until March 2013, when it moved to the rebuilt Broadcasting House on Portland Place in central London. “This was officially opened by the Queen in June 2013, following an extensive project to overhaul, modernise and expand the building to accommodate staff being moved from Television Centre. On Monday 18th March, the BBC's television news services – the News at One, Six and Ten and the BBC News Channel – began broadcasting from brand new high-definition studios at New Broadcasting House. The News at One was the first programme to come live from the new newsroom studio.”

Between domestic and English-language broadcasts, there are five studios at New Broadcasting House. They are named A through to E, with E being the large studio that sits in the heart of the newsroom and from which the News at Six and Ten are broadcast each day. “The studios are equipped with Sony cameras operating on Furio tracks and Shotoku remote heads,” says Royall. “The Mosart automation system drives most of the camera movements, with tweaks from the directors. Vision mixing is also via Mosart, but some complex programmes such as Victoria Derbyshire, Newsnight and The Andrew Marr Show involve manual mixing. Other studio equipment includes Calrec Artemis consoles and Congo lighting desks.

“We use OpenMedia, which is the BBC's new newsroom computer system. We began rolling it out in June 2017 and it's now used by all of BBC News. Viz graphics are created from OpenMedia, while some effects graphics are designed on iMacs and sent to Jupiter, which is our in-house playout system, and played out from Quantel servers.”


PICTURED ABOVE: A look at how it used to be done

“The News at One was the first programme to come live from the new newsroom studio.” PAUL ROYALL

Other landmarks include the appearance in 1972 of John Craven presenting Newsround, the first TV news bulletin aimed solely at children. The BBC’s first rolling TV news service was launched in November 1997 as BBC News 24, now known as the BBC News Channel. Today it’s Britain’s most-watched news channel, delivering breaking news and analysis all day. BBC World News, the BBC’s international news and current affairs television channel, has the largest audience of any BBC channel with a global reach of 95 million people every week. The first foray into international television broadcasting came in 1987 when the BBC launched BBC 1/2 Mix, later rebranded BBC TV Europe, which served mainland Europe, focusing initially on the Nordic countries. The channel carried some news programming including the UK domestic Six o’clock News. That channel ended in March 1991 and became BBC World Service Television, a forerunner to the current BBC World News channel.

MORE DIGITAL
Beyond its conventional television broadcasts, the BBC has a sizeable online operation. “We have fantastic journalists who work for BBC News Online, which is one of the world's most popular English-language news websites. With both home and international audiences to serve, our UK and World teams work closely together with other parts of News and the wider BBC to produce our digital news output for the website and other associated BBC platforms.”

As of April, BBC News had bureaux in 59 countries and 75 cities around the world, with a presence in 117 countries and 181 cities. These bureaux vary in their purpose and operations, covering newsgathering, the World Service, BBC Monitoring – a division which monitors, and reports on, mass media worldwide – and Global News, BBC News' commercial arm.

Royall explains: “BBC News is made up of thousands of people based in locations in the UK and around the world. It is divided between Network News and the World Service Group – which includes 42 language services and commercial content – and we work closely with local news staff in the nations and regions. We have a huge global newsgathering operation, and our news output is where some of the more recognisable audience-facing programmes, services and channels are found, such as, in the UK, the BBC News Channel, News Online, BBC Breakfast, and the News at Six and Ten, both of which I am responsible for as editor.”



ACHIEVEMENTS Philip Stevens travels to Salford to hear about a decade of successful production


When the idea of establishing a major production centre in Manchester was first mooted, there were some doubts as to its viability. A decade on, it has become apparent that those reservations were groundless. Indeed, as dock10, located at the heart of Salford's MediaCityUK, approaches its 10th anniversary, the future looks extremely rosy. “People can now see the incredible success that the project


has become,” enthuses Andy Waters, dock10’s head of studios. “There was a time recently when every room across the whole of the production facility was booked for some kind of activity or another. That shows the concept of having all the facilities under one roof really suits the industry at this time.” Today, around 160 staff are employed at the facility, but a significant number of freelancers are used both by dock10 and the producers who use the studios and other services. In all, 10 studios are available – eight for television, one for radio drama and the unique BBC Philharmonic Orchestra studio. Alongside that, the BBC’s Watchdog consumer programme is broadcast from a mezzanine floor above the facility’s audience reception area. This



“Where else can you go in the country to find all these types of facilities in one compact area?” ANDY WATERS

programme is currently in its third run from Salford, with a fourth being planned for later in the year.

ALL ENCOMPASSING
“The original concept of dock10, indeed the whole of MediaCityUK, included total connectivity, and that means any studio can be connected to any gallery,” continues Waters. “In the case of Watchdog, production is handled from the gallery of studio HQ3. Each summer, Blue Peter produces a programme from the outside Piazza area – again, production is routed through an inside control room. This has proved invaluable.

“Beyond that, you can see the multiplicity of production that we are attracting – Saturday night shows, daytime quizzes, sports programmes, drama, children's programming and so on. Every morning you can see that diversity with BBC Breakfast and the wide

range of content that it produces.” Waters reports that the variety creates an exciting working atmosphere. “On one memorable occasion we had Blue Peter's 60th anniversary programme for the BBC and The Voice for ITV in production at the same time. Tom Jones came out of The Voice and was so excited that he wanted to be involved with Blue Peter. It is an unbelievable culture and creates a unique feeling in the facility.”

MASSIVE INVESTMENT
Another sign of the success of dock10 was last year's announcement of a £5 million investment in new technology. This will see dock10 make significant upgrades to The Studios, post production, media storage platforms and the network that connects all the buildings in MediaCityUK. New equipment in The Studios will include cameras, vision mixers, multi-viewer monitors and core routers.

As part of that investment, dock10 announced in May that it is to launch a new industry-leading 4K UHD-ready virtual studio capability using Epic Games' Unreal Engine 4 (UE4) rendering technology. By using UE4, programme makers can create photorealistic output in real time. Waters explains: “The system enables studio sets to combine physical and virtual elements in such a way that they are indistinguishable from each other. It also allows cameras to point in absolutely any direction across the whole of the studio to deliver a seamless on-screen set.”

Existing sets created in any 3D modelling package can be imported into the system, and a wide range of pre-made assets can be easily sourced, adapted and added into the design. “This next-generation technology is a really powerful creative tool for




“We have people come here to look over the shoulder of the professionals.” IAN DODD

delivering even greater on-screen value, enabling much more innovative and content-rich sets to be created. This makes the new dock10 virtual studio solution ideal for use beyond traditional news and sports programmes, meeting the demands of other genres such as children's and entertainment.”

Another upgrade will see the building of a new broadcast network to future-proof the facility. Connected to the top-spec 200Gbps network will be the studios, galleries, post production suites, ingest, control rooms and the data centre. “We are currently testing a range of equipment, but have already re-equipped the gallery for HQ1 – our largest studio,” says Waters. “We have seen a resurgence of studio-based productions, and it can be a very efficient way of producing high-quality material when you have top-end technology. Where else can you go in the country to find all these types of facilities in one compact area? Not only studios and post production, but hotel accommodation, restaurants and the like. It's a really good technical fit.

“Of course, we had the distinct advantage of starting with a blank canvas, rather than converting an old factory or using a facility that needed an OB truck to be parked outside. That said, over 10


years we have adapted the space from what we had here on day one. We are driven by our clients and their specific needs. That’s a core principle – adapting ourselves to suit our customers’ requirements.” Those clients include the BBC, ITV, Channel 4, Channel 5 and companies from the independent sector. With so many broadcasters on-site, security is a major consideration. “That’s one advantage of operating the connectivity across the whole campus, not just the broadcast facilities, but the accommodation and other services, too,” says Waters. “Clients can be certain that their communications are completely secure even with competing companies operating in the same facility.” With 10 years of experience behind dock10, what milestone would Waters pick out as being significant? “There have been numerous milestones, but perhaps one of the most significant stems from the fact that when the facility was planned, it was going to be classed as completely tapeless from the outset. The idea was that tapes were going to be things of the past. But they were still the strong medium of the time. We were ready for a tapeless environment, but our clients weren’t so convinced. They wanted to leave a session with a cardboard box full of cassettes!”

PRODUCTION AND POST

WHAT'S IN A NAME?
When the BBC was looking to move much of its production outside London, the idea for an independent creative centre in the Manchester area was raised. Salford Quays on the Manchester Ship Canal (MSC) was selected, and tenders for construction of the media centre were advertised. The Peel Group (which owns the MSC) won the contract, not only for TV studios but also for a number of adjoining facilities including hotel accommodation, restaurants, bars and other consumer services. When the original 19th-century plans for the canal were consulted, it was found that the building work of the time went up to Dock 9, with provision for one further dock to be added later. That area became the construction site for what is now the studio complex known, appropriately enough, as dock10.

Waters explains that an education process was necessary, and that it resulted in a growing confidence in file-based content. “It took two years, but then tapeless became the norm.”

POST PRODUCTION
Talking of education, it is not just the current generation of programme makers that is involved with dock10's activities. Located on the same campus are three organisations providing training for the production personnel of the future. But dock10 also provides work experience opportunities for those considering a career in broadcasting. “We have people come here to look over the shoulder of the professionals,” explains Ian Dodd, dock10's head of post production. “I spent a great deal of my early career in film cutting rooms, so I understand how important it is to have time seeing how the actual craft of post production works, along with the interaction with producers and directors, as much as the technical side. You can't learn that without personal, on-the-job experience. We provide that environment for those thinking of coming into post production. We believe in promoting internally; we have just promoted three runners to assistant editors, two

assistants to editors, and a runner to audio tracklayer. These will be the future stars in the making.”

The post production department, which houses more than 50 offline suites, is expanding its Flame and Baselight grading capability and 4K monitoring to meet the growing international demand for UHD content. In addition, another Dolby Atmos dubbing suite is being added to deal with demand and provide customers with even more flexibility.

“We are also acting as a digital dailies lab for productions being shot locally on location,” says Dodd. “This includes the new Julian Fellowes production of The English Game for Netflix. Because of our 24/7 operation and dock10's fast connectivity, we can receive 4K location rushes from the DIT and have them processed, uploaded and ready for use anywhere in the UK or worldwide the next morning.”

So, there is a very positive outlook in this central part of MediaCityUK. “This is an amazing place and this model is becoming well known,” says Waters. “A month doesn't pass without us showing around a delegation from overseas who want to adopt our ideas back home. All in all, everything is very good.”





Having to tell your management or your board that your media workflows or processes are not fully optimised and might be haemorrhaging money, and you need new tools to address this, may not be the conversation you want to have. And this is the challenge Three Media has faced over the years working with clients who know they need to introduce change to ensure success and competitiveness. We achieve this by introducing longstanding principles of ‘describe once, exploit many’ and ‘management by exception,’ supported by configuration-driven automated workflows and processes. These core principles have developed over time, as have the products based on them. Three Media started out 20 years ago as Broadcast Projects International, whose first large project was to help build the Digital Media Centre (DMC) in Amsterdam, which, following its recent acquisition by TVT Media, became Europe’s first fully virtualised ‘private broadcast Cloud’ originating channels for major network-owned brands, globally. The DMC project required several years of focused effort, and by that I mean BPI was instrumental in the system and workflow selection and design, technology integration and implementation, testing and training. Basically, they were involved in most of what was required to get the DMC up and running. It was a highly collaborative effort, based on critical analysis of business needs and development of new tools and workflows to efficiently manage, move and store files, to transform metadata and to act as glueware for system integration. Specialist knowledge and experience were key to the success of the build. At the time, relevant experience in this space was scarce, and those who were available were relatively inexperienced. Most vendors and manufacturers offered solutions and products that relied heavily on resources and did not effectively support flexible integration, metadata-driven workflows or

serious automation. If a system didn’t do what was required, other bits were often bolted on to achieve the needed functionality. But the DMC needed something more reliable, and hopefully more scalable and longer lasting. When the DMC was completed, those ‘tools’ remained, with some continuing to operate today. Based on the DMC, BPI was approached by BT to design and implement a novel approach to asset management, as opposed to simply repurposing outdated methods, much of which the industry still struggled with. BT wanted a system which delivered well-defined system architecture layers, more flexibility, automated workflows, integration to external functions and components, and a range of more useful tools to manage file-based delivery of assets and associated components as well as any type of supporting file or document. BPI, which had evolved into ‘BPI Improve’, was the one company that had the depth of experience required. The result was Content Store, tailored specifically for ITV. Quietly but steadily, the team was becoming known in the industry. That team, now Three Media after acquiring BPI Improve in 2009, knew the secret of finding, and resolving, ‘hidden’ pain points and bottlenecks in media workflows and processes of ever-increasing complexity. We have known for a long time that the industry was changing dramatically in terms of metadata, both in the volumes involved and in the increasing challenge of identifying and efficiently managing it. Costs to exploit content to non-linear platforms had to fall dramatically in

‘XEN:Pipeline needed to be able to manage complex and big data sets and to run the simulation and optimisation workflows much faster.’ 54 | TVBEUROPE JULY/AUGUST 2019


PICTURED: Three Media’s XEN:Pipeline platform

order to monetise content efficiently. Three Media developed XEN:Optima specifically to achieve just that. We also knew that existing workflow modelling tools, including our own XEN:Sim at the time, did not efficiently scale to support future needs. We had to rethink and retool, building on the foundations of our current product set, leading to the development of ‘XEN:Pipeline’. Briefly, XEN:Pipeline is a highly automated metadata and content management platform that enables advanced metadata discovery and management of assets and files located across multiple unrelated storage locations, and supports packaging and distribution of large volumes of content to linear and non-linear distribution points. It is supported by a flexible hierarchical data schema presenting assets and files within a single hierarchical view, via an intuitive UI. XEN:Pipeline incorporates AI and ML to identify and match new or unknown content with related objects within the business hierarchy and to simulate and optimise workflows, ensuring content is efficiently managed and delivered, compliant with SLA requirements. Over the years we have received very positive feedback on XEN:Sim for its workflow simulation and optimisation capabilities, but it was a tough concept to sell. We could show how to optimise workflows dynamically, but it was difficult for those who knew little about performance modelling to grasp. Today, much of the industry is catching up fast, gaining a better understanding of just how critical AI and ML have become to optimisation and efficiency, but it remains a tough sell to management. But we also knew that in order to add even greater value, XEN:Pipeline needed to be able to manage complex and big data sets

and to run the simulation and optimisation workflows much faster and closer to near real time. It was also apparent that the optimisation had to be self-managing, requiring few manual interventions and no need for users to understand the logic that drives it. To achieve that, we undertook a year-long collaboration with Imperial College London, Arcitecta, and Big Oak Dynamics. The combined goal was to ensure that XEN:Pipeline and XEN:Sim were fast enough and flexible enough to manage big data and complex media supply chain requirements both now and in the future. Through a very clever development programme, XEN:Pipeline now lives on Arcitecta’s Mediaflux XODB and is optimised to maximise speed and accuracy and support massive scalability. The product is designed as a Platform as a Service (PaaS), delivered via the Cloud or locally, so that it can easily support all existing and foreseeable media processing services and the optimisation of highly complex media workflows in near real time. And this in turn makes it much easier to promote XEN:Pipeline and demonstrate process improvement, cost reduction and optimisation to management – and makes for a far more productive conversation! n
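The kind of workflow simulation described here — modelling a pipeline’s stages to expose queues and bottlenecks before committing resources — can be illustrated with a toy discrete-event model. This is a minimal sketch of the general technique, not Three Media’s XEN:Sim implementation; the stage names and timings are invented.

```python
def simulate(jobs: int, stages: dict[str, float]) -> tuple[dict[str, float], float]:
    """Push `jobs` identical jobs through pipeline stages in order.
    Each stage has one worker and a fixed per-job service time (hours).
    Returns (total queueing time per stage, overall makespan): the stage
    with the most queueing is the bottleneck."""
    ready = [0.0] * jobs                     # time each job reaches the next stage
    wait = {name: 0.0 for name in stages}
    free_at = {name: 0.0 for name in stages}
    for name, service in stages.items():
        for i in range(jobs):
            start = max(ready[i], free_at[name])   # wait for job AND worker
            wait[name] += start - ready[i]         # time spent queueing
            ready[i] = free_at[name] = start + service
    return wait, max(ready)
```

For example, `simulate(4, {"ingest": 1.0, "transcode": 3.0, "qc": 1.0})` shows transcode accumulating the most queueing time — the step a planner would parallelise first.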



PREPARING FOR A HIGH-RES FUTURE
Jenny Priestley speaks to Films at 59’s George Panayiotou about the post production house’s investment in key storage technology


Offering everything from camera hire to full service post production, Bristol-based Films at 59 has been busy futureproofing itself for both 4K and 8K. The company offers clients a range of data management, ingest, workflow consultancy and full service picture and sound post, employing a dedicated craft team of 30-35 members of staff. As it prepares for demand for 4K and 8K content to rise, Films at 59 has been investing in new technology to enable it to meet clients’ needs, from new grading suites to storage. “We’ve recently invested in additional grading systems to facilitate the work that’s coming over the next 18 months to two years, so we’re now running four Baselight grading rooms, up from the original two,” explains business development director George Panayiotou. “It’s great to be able to see beyond 12 months in our schedules to know that we can make that investment with confidence. “We also have a couple of Lustre grading rooms that are part of our Flame offering, we have two Flame rooms for online finishing and we also run Avid Symphony suites for grade and online depending on the level of budget and type of programme.” “We have six dubbing theatres,” Panayiotou continues, “and we’ve recently upgraded our main studio to Dolby Atmos. That’s been driven by the likes of the SVoDs, BBC Studios Distribution and Sky who, as part of their deliverables, require a Dolby Atmos HE delivery alongside a UHD/HDR picture delivery. Our main room is a mastering room, and we have a couple of other theatres that are Atmos pre-mix rooms.” Recent credits include Netflix’s first natural history commission, Our Planet, as well as the BBC’s Blue Planet II and Dynasties, and Plimsoll Productions’ Hostile Planet. “As you know, Bristol is the leading global city for natural history production and that’s what Films at 59’s business was built on historically, and there’s a real resurgence and hunger for natural history content currently,” says Panayiotou.
“We recently delivered a 10-part series for our client True to Nature, who are based in Bristol - it’s an expedition-type series presented by Steve Backshall. We also do factual entertainment - we’re currently working on series 10 of The Great British Bake Off, which again is produced out of Bristol; we also provide


post production for RDF Television West’s Dickinson’s Real Deal. So there’s a real cross-genre range of content.” Clearly, Films at 59 is a company that deals with a lot of data! As well as its investment in its post production suites, Films at 59 has also installed Pixit Media’s PixStor scalable storage platform to support its 4K high-end grading and finishing work. “We were at a point with our existing network storage where we needed to revisit that and look at an investment, and the PixStor solution absolutely delivered for us,” explains Panayiotou. “We’ve installed a tiered storage infrastructure where we invested in Object Matrix as our near line storage, and we have Avid Nexis storage for our proxy edit media for all our editorial rooms - we’re running 45 cutting rooms here, so the Nexis delivers a solution for that. “Then we’ve got the PixStor for high-end data storage for our finishing suites and VFX workstations. So we have a tiered storage infrastructure and that’s why we went with the PixStor solution. We had a SAN solution before, but we’d got to a point where it had to be

retired purely because of the amount of data that we’re dealing with.” Panayiotou says the team at Films at 59 has adapted “really well” to using PixStor. “It’s delivering for us from a performance point of view,” he stresses. “I think the one thing that always comes to the fore is the fact that you always think you’ve got enough data storage, but of course whatever you’ve got, you fill it! It’s then about data management and that’s a key thing. “We work with our clients to ensure that where we’re holding a version of the data on one of our storage systems it’s not also being stored on another. Clients are aware of the cost of storage, but don’t realise the impact of holding onto data until it’s presented to them as a true cost to the production. As a facility we absolutely have to recoup our investment, so storage does come into play when we’re quoting for projects - when you’ve got a six-part series that is probably gobbling up 20-30 terabytes of storage whilst in post, that’s a hell of a lot of data that has to be managed and budgeted for.” With all this investment, Films at 59 is obviously more than able to deal with content in 4K. But does this also mean they’re ready to work in 8K? “That wasn’t our driver for PixStor,” admits Panayiotou. “What we wanted was an efficient and reliable solution for our 4K pipeline, but with the knowledge that, if there is a requirement for 8K further down the line, then the system is expandable to be able to deal with that. “There are whisperings of 8K - we’re working with a couple of clients that are shooting 8K on projects, and although the main delivery may be 4K, a co-producer has come on board and wants to produce something at 8K. “It’s all really being driven by NHK in Japan if I’m being honest,” Panayiotou concludes. “They’re the only ones that are driving this. They’ll shoot 8K, we’ll be posting for 4K delivery, but some key deliverables have to be managed through as 8K as well.” n
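The 20-30 terabyte figure quoted for a six-part series is easy to sanity-check with back-of-envelope arithmetic. The values below are illustrative assumptions (60-minute episodes, a 12:1 shoot ratio, a roughly 700 Mbps 4K camera codec), not Films at 59’s actual figures.

```python
def series_storage_tb(episodes: int, runtime_min: float,
                      shoot_ratio: float, mbps: float) -> float:
    """Back-of-envelope rushes storage for a series, in terabytes.
    mbps is the camera's record data rate in megabits per second;
    shoot_ratio is hours shot per hour kept in the final cut."""
    shot_minutes = episodes * runtime_min * shoot_ratio
    # minutes -> seconds, megabits -> megabytes, megabytes -> terabytes (decimal)
    terabytes = shot_minutes * 60 * mbps / 8 / 1e6
    return round(terabytes, 1)
```

With these assumed inputs, `series_storage_tb(6, 60, 12, 700)` lands at about 22.7 TB — squarely in the 20-30 TB range Panayiotou describes.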

“We were at a point with our existing network storage where we needed to revisit that and look at an investment, and the PixStor solution absolutely delivered for us.” GEORGE PANAYIOTOU



CROSSING OVER WITH THE FINISH LINE
Zeb Chadfield, owner of post production facility The Finish Line, explains why he decided to move to Blackmagic’s DaVinci Resolve for finishing


The decision to drop Avid in favour of DaVinci Resolve was really a transitional thing rather than an overnight decision. I’ve always been an advocate for using the best tool for a job, and that’s led me to working on all different platforms. I won’t bore you with the list, but it’s safe to say I’ve used everything from tape-based linear editing to all the current editing and grading tools. I always used to have a preference based on the type of work, but we have hit a point now where there isn’t anything that can be done better in a different tool. The main change we have made is that we no longer go back to the tool in which the project was cut for the final online, and all of our delivery processes are now in DaVinci Resolve. We started with Resolve in 2011 and, as the developers have added more and more, it just kept getting better. At this point I just can’t see any reason to use anything else. That won’t stay the case forever, I’m sure - this is a technology-driven industry so it will come down to development and features - but as long as Blackmagic Design keep going the way they are I can’t see a reason to move. I suppose the decision is a bit controversial because in the longform television world Avid is still strong. From our point of view as a picture finishing company it’s normal for us to switch between different applications constantly. It’s also normal to use different tools for different jobs, but in offline or ‘craft’ editing Avid Media Composer is the staple, and all the post facilities tend to still finish in Media Composer because their talent has been working that way for a long time and retraining can be complicated when you are dealing with such a high turnover of work. It’s also always been easy to stay in the app that a programme was cut in to avoid rebuilding all the captions and effects. But that’s just not an issue anymore with Resolve.
All the new captioning and subtitling tools, along with the far better quality of the results, make it crazy to continue working in the old way. We have loved getting to know Fusion and working with all the possibilities it’s opened up to us too. There are a number of elements for some series where we used to manually keyframe every caption, but with Fusion we have been able to automate all of that. This has removed the need for hours of messing about with titles, and they can now be completed in minutes. The speed and functionality of


Resolve really just can’t be matched with our kind of work, which is predominantly long-form factual and entertainment programmes. We are equipped with a full HDR kit to be able to finish and master Dolby Vision HDR to the highest level. For us this has been about making sure that we are set up and have everyone trained for Netflix finishing work, but we are very excited about the new tools and how well integrated they are with Resolve. It’s great to be able to empower our talent to use the latest and greatest kit and see what amazing things we can do with it. We all love being challenged with learning new things; it’s one of the best things about working in a tech-driven industry. We have also been using the EditShare Flow tool now that it has become available as a SaaS solution for automation. Clients are also using this for remote editing with us, and we can take a file direct from Flow Story into Resolve for conform and finishing, so there are some exciting possibilities there. We also use a lot of custom-built tools, mostly on AWS, for various parts of our processes. We are very excited about the AI in DaVinci Resolve 16 because on every level it will help us work faster and more efficiently, even with simple things like finding the faces of people that may need blurring in a programme. I’d love to see that feature expanded to find other objects, and even offer automated blurring for faces or branding. Anything is possible! n
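Resolve’s face detection itself is proprietary, but the downstream step Chadfield mentions — blurring a detected region of a frame — is simple to sketch. Here the frame is a toy greyscale grid and the bounding box is assumed to come from a detector upstream; a real pipeline would operate on full-resolution video frames.

```python
def blur_region(frame: list[list[int]], box: tuple[int, int, int, int],
                radius: int = 1) -> list[list[int]]:
    """Box-blur the pixels inside box=(x, y, w, h) of a greyscale frame.
    A stand-in for the compositing step that follows face detection."""
    x, y, w, h = box
    out = [row[:] for row in frame]          # leave the input frame untouched
    for j in range(y, y + h):
        for i in range(x, x + w):
            # average the neighbourhood, clamped to the frame edges
            neigh = [frame[jj][ii]
                     for jj in range(max(0, j - radius), min(len(frame), j + radius + 1))
                     for ii in range(max(0, i - radius), min(len(frame[0]), i + radius + 1))]
            out[j][i] = sum(neigh) // len(neigh)
    return out
```

Applied per frame with detector-supplied boxes, this is the essence of automated face blurring; production tools add tracking and feathered edges on top.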





As blockchain begins to usher in a new era for the internet, it’s no secret that it is set to dramatically transform a host of industries. From enterprise IT and finance to manufacturing and supply chain management, new applications and use cases of blockchain technology are emerging at a rapid rate as it continues to extend beyond cryptocurrency. In simple terms, blockchain refers to a distributed database that can be shared by multiple systems contributing to the same body of data, creating a shared system of record among those who have the required access privileges. It’s also immutable, because every transaction is recorded chronologically and validated by other members of the network – meaning the data can’t be tampered with. The key thing to remember about blockchain is that trust is established through a combination of transparency, traceability and state-of-the-art cryptography. Access control mechanisms provide the relevant parties with visibility into the history of every single transaction, so users can be sure that the data is both secure and accurate. It’s this trust, along with the enterprise-grade security and authentication capabilities, that will be essential to blockchain’s future success in the media industry. What has become clear is that blockchain has the potential to provide a fresh and innovative approach to content distribution and monetisation, as well as enable new business models through its inherent transparency and security benefits and the ability to autonomously execute smart contracts. Although there are still hurdles to overcome, the potential impact of blockchain in media is enormous.

BLOCKCHAIN’S POTENTIAL
Why is the media industry getting so excited about blockchain?

Several innovative content distribution and monetisation solutions are starting to gain the interest of M&E organisations, while projects related to more foundational uses are starting to show more immediate potential. For example, the technology has a major role to play in enabling collaboration across the industry by verifying the provenance and pedigree of assets. Throughout their lifecycle, digital assets are exchanged between multiple parties and traverse many hosted services and multiple IT solutions that are more frequently being deployed into hybrid, multi-Cloud infrastructures. Media organisations therefore need ways to secure this exchange of content to avoid leaks and unauthorised access – especially with piracy set to cost streaming services $50 billion by 2022. Security often comes at a price, though, as data transfer performance and speed of asset exchange can suffer as a result of increased security measures. To truly add value to these collaboration and asset exchange processes, new technology must be both secure and performant. This is where blockchain comes into play. It offers the ability to establish trusted networks through an immutable chain of custody, build an additional layer of security around assets, enable forensic auditing, and reliably execute commercial contracts over the internet. For example, a global registry of assets that provides an interconnected network of metadata and integrations with different databases and systems would allow M&E organisations to take any asset that’s registered with the global identity registry and learn who has come into contact with it, what corrections or VFX techniques have been applied to it and who has the

‘The media industry has the potential to be a technology leader in its use of blockchain.’


rights to it. Users would get full visibility into the asset’s lifecycle and be able to trust that it hasn’t been tampered with in any way. With this infrastructure in place, members of the media community would be able to set up ad-hoc networks between relevant participants for specific projects. This would offer a secure and auditable way to manage collaborative projects, providing an immutable record of everything that has been done to a piece of media and everyone who has had access to it. A multitude of disparate frameworks based on blockchain technologies are in development today that promote such a future. However, it’s become apparent that many members of the media community are yet to be convinced, primarily because of concerns that blockchain networks are too open and may compromise the confidentiality of their high-value assets. This is an issue that has appeared in several other industries and highlights the need for a standardised framework that will give organisations the confidence to embrace a blockchain-powered future. What’s more, additional confidence may be gained by using open source tools that can be examined and adopted widely by any developer, along with integration into standardised IT tools (such as high speed transfer software).

THE NEED FOR STANDARDISATION
For blockchain to go mainstream in media, a standardised framework needs to be put in place that clearly lays out blockchain’s role in enabling the movement of assets through many different entities and participants in an ecosystem.

This will require vendors, customers, content producers and standardisation bodies to come together and reach an agreement on how these new networks should work, thereby enabling the industry to get rid of disparate systems and create a global network that people can use to identify and track assets through their lifecycle. As these standardised networks grow in size for larger content production and distribution workflows, users will also benefit from the enterprise-grade security and privacy capabilities that M&E requires. An open network that uses smart contracts to negotiate the exchange of encryption keys through integration with modern key store services would remove processes that currently cause a lot of extra overhead in workflows. Furthermore, smart contracts enable use cases like watermarking, which creates unique assets to mitigate piracy and leakage. Ultimately, organisations have to remember that a key aspect of blockchain is that it builds trust, and that trust can reduce both overhead and friction in collaboration. We call it a trust network and that’s what people are looking to build – particularly in the media industry where businesses are dealing with extremely valuable data, assets and intellectual property. The reason many M&E workflows have yet to move to the Cloud is the lack of confidence in these environments, but blockchain technologies can be used to accelerate this migration. There is a huge opportunity for the media industry to leverage the power of blockchain to securely identify and distribute content and broker transactions. Indeed, the media industry has the potential to be a technology leader in its use of blockchain. It all comes down to the standardisation of tools and putting a solid foundation in place to encourage organisations to take the leap. n
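The immutable chain of custody described in this piece can be sketched as a minimal hash chain: each record embeds the hash of the previous one, so editing any earlier record invalidates everything after it. This is an illustrative sketch of the core idea only — a real blockchain network adds distributed validation, consensus and access control on top.

```python
import hashlib
import json

def add_record(chain: list[dict], event: dict) -> list[dict]:
    """Append a custody event, linking it to the previous record's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return chain + [{**body, "hash": digest}]

def verify(chain: list[dict]) -> bool:
    """Recompute every link; any edited record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {"event": rec["event"], "prev": rec["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True
```

Each participant appending events to such a chain gives everyone downstream a tamper-evident history of who touched an asset and what was done to it.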



THE FINAL WORD
Marina Kalkanis, CEO, M2A Media, looks into her crystal ball

How did you get started in the media tech industry?
I trained as a software engineer in the 1980s and worked in many different industries before joining the BBC in 2003. The BBC was an early pioneer in online media and in 2006 I started working on the first generation of BBC iPlayer. From 2009 I ran the Media Services department with responsibility for delivering all of the online media. It was an exciting time building services that broke new ground in online media technology.

How has it changed since you started your career?
When I got started in 2006, online media was not yet a mass industry and was more of an experimental place. Many experts thought the internet would never be able to handle media streaming bandwidths. In those early years most of the media processing was with proprietary solutions like Real Media, Adobe Flash, and Microsoft Windows Media. There was almost no media on mobiles and each handset manufacturer had their own operating system. The browsers needed plugins too. There was very little interoperability, and the streaming quality was nowhere near broadcast quality. What has changed is the emergence of widely adopted standards like H.264/AVC, MPEG-DASH, HLS, CMAF and MXF, along with the requirement to build open APIs around services and to have interoperability. HTML5, iOS and Android made a huge difference in creating standards.

What makes you passionate about working in the industry?
Offering a great quality streaming service is something that really benefits people. You can see a direct connection between creating a service that works well and the pleasure people take in using it. It is also great to see new ideas and improvements constantly being proposed. The media tech industry is the perfect combination of creativity and technical innovation.

If you could change one thing about the media tech industry, what would it be?
I would like to see greater diversity and inclusion in the media tech industry, so that the industry truly reflects the audience it serves.

How inclusive do you think the industry is, and how can we make it more inclusive?
Industry bodies like DPP, SMPTE, RTS, HPA, SVG, EBU etc. are making real efforts to increase inclusivity, but our cultural stereotypes and unconscious biases are hard to overcome. I think the media tech industry is working to be more inclusive but there is a long way to go. That means offering additional support to under-represented groups. We all have


unconscious biases and we have to work to retrain ourselves away from these. I think it’s particularly important as we move into machine learning, virtual reality and AI that we don’t bring our unconscious biases across into the virtual world.

How do we encourage young people that media technology is the career for them?
I think young people are naturally attracted to media but find the tech roles more intimidating. I think keeping pathways open is good, as is encouraging staff to try different roles. Creating mixed teams that bring together different skills can help young people feel more comfortable with more technical roles.

Where do you think the industry will go next?
We’ll see much more crossover between traditional longform media and other online formats like esports, chat, gaming, VR etc. We’ll also see much more data associated with video as the AI and ML services mature and are used as standard across the industry. This means searching for video, and for specific moments, people and places in video, will become increasingly easy.

What’s the biggest topic of discussion in your area of the industry?
In live streaming, simplifying workflows is a big topic of discussion. Content owners want to maximise the reach and monetary value of their content through subscription and advertising. They need the service to run flawlessly, particularly during the big moments. Receiving bad reviews on Reddit or Twitter for a live streaming service can be devastating to a content publisher. Low latency is also a big topic of discussion, particularly for our live sport customers. The industry is responding to the recent announcement from Apple on Low-Latency HLS. Getting latency down to, or better than, linear TV levels will be a big push now as more sport viewing moves from linear to OTT consumption.

What should the industry be talking about that it isn’t at the moment?
The industry is not talking about sustainability in any big way yet.
We are certainly expanding and replacing our media devices more quickly in this generation than happened 20 or 30 years ago. I can see more attention focused on sustainability in the manufacture and disposal of devices as well as the infrastructure that supports online video. We’ve seen a big shift from processing in local data centres to centralised public Cloud facilities. We should be looking for sustainability efficiencies now from the public Cloud providers. n

