
Intelligence for the media & entertainment industry

MARCH 2019





CONTENT
Editor: Jenny Priestley
Staff Writer: Dan Meier
Graphic Designer: Marc Miller
Managing Design Director: Nicole Cobban
Contributors: George Jarrett, Philip Stevens, Alex Bassett
Group Content Director, B2B: James McKeown

MANAGEMENT
Managing Director/Senior Vice President: Christine Shaw
Chief Revenue Officer: Luke Edson
Chief Marketing Officer: Wendy Lissau
Head of Production US & UK: Mark Constance

ADVERTISING SALES
Sales Executive: Can Turkeri (0)207 534 6000
Commercial Sales Director, B2B: Ryan O’Donnell
Japan and Korea Sales: Sho Harihara +81 6 4790 2222

To subscribe, change your address, or check on your current account status, go to or email

ARCHIVES
Digital editions of the magazine are available to view on ISSUU.com. Recent back issues of the printed edition may be available; please contact for more information.

I’m always delighted when TVBEurope is able to feature articles with young, up-and-coming talent within our industry. It’s really easy to think of media technologists in a certain way, so I’m always pleased when we can showcase some of the new faces establishing themselves, be that as an editor, sound mixer or design engineer. When I started my career in the media industry I was lucky enough to find a mentor in my first boss. He taught me so much more than I learned at college, and I still use practices and ideas he showed me some 20 years later. We’re also still regularly in touch, and I know I can use him as a sounding board whenever I need to. I believe it’s incredibly important for anyone entering this industry to have someone like that, whether their mentor is a work colleague or someone they meet through a mentorship scheme. Having someone there as a guide, advisor and sounding board is so important when you’re trying to establish yourself in a world that’s new and not always easy to understand. It’s also equally important that those of us who are well established in our roles should be willing to lend a helping hand.

Having someone there as a guide, advisor and sounding board is so important when you’re trying to establish yourself in a world that’s new and not always easy to understand.

This month we’re featuring new talent from across the media and entertainment industry spectrum. As women in broadcast mentorship scheme Rise enters its second year, we hear from two of the programme’s first mentees about the benefits such a scheme offers and why they’d encourage others to take part. Plus, we celebrate the work of three young sound engineers from Molinare who are already garnering praise, and award nominations, for their work on documentary Three Identical Strangers.

As well as welcoming new faces, we’re focusing on virtualisation in the media and entertainment industry. Last October I wrote about how it seemed to be the word on everyone’s lips at IBC, as the industry moved from talking about it to doing it. George Jarrett meets Yves Padrines, CEO of Synamedia, Cisco’s former service provider video solutions business, while Philip Stevens heads to Cambridge for a visit to Pixel Power to see the actual benefits of working in a virtual world.

If you’re wondering what this month’s cover is all about, we take a trip in the company of near space flight providers Sent Into Space. They recently livestreamed one of their payloads on its journey back to Earth, which prompted the inevitable question: just how do you livestream from the edge of space?

Finally, the team and I will be at BVE at the end of February. Do say hello if you see us dashing around ExCeL! n

TVBE and its content are available for licensing and syndication re-use. Contact the International department to discuss partnership opportunities and permissions. International Licensing Director: Matt Ellis,

Future PLC is a member of the Periodical Publishers Association All contents © 2018 Future Publishing Limited or published under licence. All rights reserved. No part of this magazine may be used, stored, transmitted or reproduced in any way without the prior written permission of the publisher. Future Publishing Limited (company number 2008885) is registered in England and Wales. Registered office: Quay House, The Ambury, Bath BA1 1UA. All information contained in this publication is for information only and is, as far as we are aware, correct at the time of going to press. Future cannot accept any responsibility for errors or inaccuracies in such information. You are advised to contact manufacturers and retailers directly with regard to the price of products/services referred to in this publication. Apps and websites mentioned in this publication are not under our control. We are not responsible for their contents or any other changes or updates to them. This magazine is fully independent and not affiliated in any way with the companies mentioned herein. If you submit material to us, you warrant that you own the material and/or have the necessary rights/permissions to supply the material and you automatically grant Future and its licensees a licence to publish your submission in whole or in part in any/all issues and/or editions of publications, in any format published worldwide and on associated websites, social media channels and associated products. Any material you submit is sent at your own risk and, although every care is taken, neither Future nor its employees, agents, subcontractors or licensees shall be liable for loss or damage. We assume all unsolicited material is for publication unless otherwise stated, and reserve the right to edit, amend, adapt all submissions.




06 Virtualisation transforms channel creation from months to minutes

Telestream’s Stuart Newton on Cloud-based video delivery architecture

14 One small step for livestreaming

Jenny Priestley chats to the team at Sent Into Space about livestreaming from near space

20 How blockchain can solve the industry’s rights management problems


Tracie Mitchell from Blue Lucy looks at the capabilities of blockchain technology

24 On the Rise

TVBEurope hears from the women who took part in Rise, a mentorship scheme for the broadcast industry

28 Three commendable sound editors

Molinare’s award-nominated sound team discuss their work on documentary Three Identical Strangers

32 Virtual power

Philip Stevens meets Pixel Power executive vice president of sales and marketing, Ciaran Doran



40 LED it shine

Dan Meier asks DoP Jonathan Harrison about what to expect from his lighting seminars at BVE

44 Why Cloud containers are the future for TV service providers

Johan Bolin from Edgeware reveals the benefits of containerised software

46 Categorising consumers

PwC breaks down video consumption habits into five distinct segments


Virtualisation transforms channel creation from months to minutes By Stuart Newton, VP strategy within the corporate development group at Telestream


It is not too dramatic to say that we are on the cusp of a new technology-driven era where consumers can enjoy rich media on the broadest range of viewing devices in ways that would have been unimaginable just a few years ago. Today, OTT offers revolutionary opportunities to consume media where, when and how consumers dictate. The younger generation consume media in very different ways from their parents. The advent of OTT services is fuelling a viewer revolution, and it is the OTT delivery channel which offers so much commercial opportunity to content owners and video service providers worldwide.

When considering the evolution of OTT services, the role of the Cloud will be dramatic, as it frequently makes practical sense for content that will ultimately be delivered via the internet to be processed there. However, whilst content providers are increasingly using Cloud services, this is only one part of the story when OTT is regarded in a holistic fashion. Even if a big content provider puts all of its content into the Cloud, the media still needs to get to the consumer’s smartphone or other consumption device. The means of doing this might be via Wi-Fi, cellular network or broadband to the home. Whatever the means, the video service provider/network operator plays a key role, and, for them, the Cloud is just one of a number of technology issues.

For service providers looking to deliver their own video services, infrastructure virtualisation is becoming critical to the efficiency and flexibility goals they are setting. To achieve these goals they will require video infrastructure virtualisation, which can

leverage the lessons and techniques learned from Network Functions Virtualisation (NFV) to provide similar flexibility for media functions. Video infrastructure virtualisation relies upon, but differs from, traditional server-virtualisation techniques, such as those used in enterprise IT.

In NFV applications, a virtualised network function (VNF) may consist of one or more virtual machines running different software and processes on top of standard high-volume servers, switches and storage devices, or even Cloud computing infrastructure, instead of having custom hardware appliances for each network function. This creates a repository of “functions” that can be called upon and “orchestrated” into the delivery chain as needed, providing excellent resource, scaling and redundancy flexibility. The same kind of architecture can now be leveraged to provide video services, where “media functions” (such as an encoder) can be orchestrated in a similar way.

For the network operator and video service provider, it is a new way of viewing the supply chain challenge: they can virtualise an entire video delivery network. They can construct a virtualised video headend on premises and build the core delivery network within their data centres. Many of these organisations would ideally like to create hybrid architectures that combine on-premise video infrastructure virtualisation and Cloud to create an agile software-based video delivery architecture. In comparison, content owners don’t own the complete delivery network, so they rely more on Cloud operators
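For the technically curious, the orchestration pattern described above – a catalogue of media functions called up and chained on demand – can be sketched in a few lines of Python. Everything here (names, functions, the string-passing pipeline) is invented for illustration; it is not any real orchestration API:

```python
# Illustrative sketch of the NFV-style "repository of functions" idea:
# media functions (encoder, packager, ...) are registered in a catalogue
# and chained into a delivery pipeline on demand.
from typing import Callable, Dict, List

# Catalogue of available media functions, keyed by name.
CATALOGUE: Dict[str, Callable[[str], str]] = {}

def register(name: str):
    """Decorator that adds a media function to the catalogue."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        CATALOGUE[name] = fn
        return fn
    return wrap

@register("encode")
def encode(signal: str) -> str:
    return f"encoded({signal})"

@register("package")
def package(signal: str) -> str:
    return f"packaged({signal})"

def orchestrate(chain: List[str], source: str) -> str:
    """Call up the requested functions in order, like a virtualised delivery chain."""
    for name in chain:
        source = CATALOGUE[name](source)
    return source

result = orchestrate(["encode", "package"], "camera-feed")
assert result == "packaged(encoded(camera-feed))"
```

A real orchestrator would place these functions on virtual machines or containers and manage scaling and redundancy; the point is simply that the delivery chain becomes data rather than wiring.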

‘When considering the evolution of OTT services, the role of the Cloud will be dramatic.’


(such as Amazon, Akamai and Microsoft) to provide their necessary Cloud services and capabilities for video delivery. The challenge here is deploying video headend architectures in the Cloud and achieving the same level of control, visibility and reliability that content providers are used to.

‘With virtualisation, new channel creation is transforming from a process that took weeks to one that can be achieved in minutes.’

At NAB, Telestream (booth SL3308) will showcase a new generation of container-based, Cloud-enabled, self-aware video headend capabilities, supporting the next generation of high-demand streaming operations by capitalising on orchestrated, integrated media processing, monitoring and analytics. The video delivery architecture allows for automated decision-making and builds a solid foundation for the future of self-aware video delivery architectures. By automating functions such as self-diagnosis, self-healing and self-optimisation, businesses can offer high-quality streaming services while reducing operational cost and complexity.

The new architecture is built on modular, container-based flexible design principles that can target Cloud and on-premise virtualised networks. These virtualised channel playout systems integrate live

adaptive streaming production with live monitoring and actionable analytics in a completely virtualised deployment. The result is one-click live channel origination that supports multi-Cloud, built-in video monitoring and diagnostics, live switching of live or file-based assets, real-time self-healing, and multiple redundancy options.

With virtualisation, new channel creation is transforming from a process that took weeks to one that can be achieved in minutes. Whether creating short-term channels for major events or additional capacity, or creating new channels to accelerate time-to-market or to pilot the latest video encoding techniques, the ability to generate new revenue streams or provide cost efficiencies will be transformed.

Today’s audiences demand ‘always on’ performance from their streaming services, and providers need assurance that they’re meeting customer demand. The architectures we are now developing marry visibility and accountability to the efficiency and economy of virtualised architectures. This is what today’s content holders and service providers must have to gain market share. n
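The “self-healing” behaviour described above boils down to a control loop: probe each media function, and re-orchestrate any that fail a health check. A toy Python sketch, with every name, service and probe invented for illustration:

```python
# Minimal picture of a self-healing channel: probe each media function,
# restart any that fail their health check, and report what was restarted.
from typing import Callable, Dict, List

def heal(health: Dict[str, Callable[[], bool]],
         restart: Callable[[str], None]) -> List[str]:
    """Probe every function; restart the unhealthy ones and report them."""
    restarted = []
    for name, probe in health.items():
        if not probe():
            restart(name)
            restarted.append(name)
    return restarted

# Toy example: the packager has stopped producing segments.
state = {"encoder": True, "packager": False}
restarted = heal(
    {name: (lambda n=name: state[n]) for name in state},
    restart=lambda name: state.update({name: True}),
)
assert restarted == ["packager"]
assert all(state.values())
```

A production system would run this continuously, probe real metrics (segment cadence, bitrate, error rates) and spin up replacement container instances rather than flipping a flag, but the loop itself is this simple.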



What’s next for broadcast live production? By Olivier Suard, vice president of marketing, Nevion


Within the broadcast industry, we have been seeing a progressive shift from baseband to IP, particularly now within live production. One of the objectives is to improve the ability to generate content more cost-effectively, through better sharing of production resources and staff. To that end, virtualisation of infrastructure combined with a move towards integrating more software are key goals.

Historically, media functions (eg. processing, such as audio embedding/de-embedding and encoding) have been delivered by dedicated hardware that could only perform one task (eg. a JPEG 2000 encoder/decoder can only do that). In recent years, though, more versatile platforms have emerged that can run different media functions just as efficiently as dedicated hardware. More recently, we have begun to see some real-time media processing – such as audio processing – taking place on commercial off-the-shelf (COTS) IT platforms, though more demanding video processing remains a challenge.

But what about a move to the Cloud? Will it happen, and if so, when? For that to happen there remain some serious challenges to overcome, but the industry is working on them. Public Cloud currently falls short in terms of both processing and transport latency, as it is unable to offer the same performance as the specialised hardware and dedicated media networks deliver today. Another cause of latency is the processing in the Cloud itself, because most of it is done on standard central processing units (CPUs), which offer complex and comprehensive instruction sets but at the expense of processing speed. However, it is possible to overcome this challenge thanks to the field-programmable gate array (FPGA) acceleration capabilities now being offered by some providers of public Cloud services. Provided the software has been written for those FPGAs, the processing performance should reach satisfactory levels.
Currently, the transport of packets over the internet is less reliable than over a private IP network, and there is a substantially higher risk that not every IP packet will reach its destination. Traditional ways of ensuring packets reach their destination (such as TCP) involve a complex set of handshakes between senders and receivers but add delay.


However, in 2017 the broadcast industry began working on a new standard called RIST (Reliable Internet Stream Transport), which is seen as the emerging preferred negative acknowledgement (NACK) approach for professional video transport over the internet. This means the receiver only tells the sender if a packet has not arrived, as opposed to confirming the receipt of every packet (ACK), which is time-consuming and costly. The first draft of this standard, which has now been completed and successfully demonstrated, may allow for reliable packet delivery over the internet.

If public Cloud delivery is to become the norm, it is clear that signal and media function orchestration will be fundamental to the proper functioning of the entire workflow. In a baseband world, workflow is largely determined by the physical location and connectivity of equipment and the core router. In IP, that connectivity is logical – in other words, the physical connectivity of equipment is typically in place, but it is the control layer that determines how the media flows between the equipment. However, as media functions become virtualised, workflows involve connecting instances of software across or even within software-defined platforms. If that software is running in the Cloud, it may even involve spinning up and tearing down instances of the media functions, based on the processing capacity required. This hints at a totally new role for orchestration systems: they need to become software- and virtualisation-aware, otherwise the virtualisation benefits cannot be realised.

What does this mean for the broadcast industry? Despite advances in public Cloud technology, for most real-time broadcast media transport, processing and monitoring applications, the best approach currently is to use software-defined platforms, built on hardware optimised for performance.
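The NACK idea at the heart of RIST can be illustrated with a toy model: the receiver tracks sequence numbers and asks only for what is missing, rather than acknowledging every packet as TCP-style schemes do. This sketch is purely illustrative and bears no relation to the actual RIST wire format:

```python
# Toy model of negative-acknowledgement (NACK) recovery: the only upstream
# traffic is a request for the sequence numbers that never arrived.
from typing import Dict, List

class Receiver:
    def __init__(self) -> None:
        self.buffer: Dict[int, str] = {}   # seq -> payload
        self.highest_seen = -1

    def on_packet(self, seq: int, payload: str) -> None:
        self.buffer[seq] = payload
        self.highest_seen = max(self.highest_seen, seq)

    def nacks(self) -> List[int]:
        """Sequence numbers we never received: all we ever send upstream."""
        return [s for s in range(self.highest_seen + 1) if s not in self.buffer]

# Sender keeps recent packets so it can answer retransmission requests.
sent = {0: "I", 1: "P", 2: "B", 3: "P"}
rx = Receiver()
for seq in (0, 1, 3):          # packet 2 is lost in transit
    rx.on_packet(seq, sent[seq])

assert rx.nacks() == [2]        # receiver asks only for what is missing
for seq in rx.nacks():
    rx.on_packet(seq, sent[seq])
assert rx.nacks() == []
```

The contrast with per-packet ACKs is the point: for a healthy stream the reverse channel stays almost silent, which is what makes the approach attractive for high-bitrate video over the internet.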
Private Cloud is now also a viable option, both on-site and off-site, using specialised software-defined platforms or (for some applications) COTS hardware. A private Cloud is not much more than a data centre on the broadcaster’s premises, where signal processing and transport equipment is pooled.

Technology is evolving fast, though, and it’s only a matter of time before widespread COTS and public Cloud usage becomes a reality for real-time broadcast production. n


How to leverage virtualisation in the broadcasting industry


By Richard Cranefield, head of product management, and Iain Shields, technical product manager, Red Bee Media

Virtualisation is having a profound impact on the broadcast industry, allowing us to realise technology deployment models designed to increase flexibility, speed and efficiency. Virtualisation in itself is not, however, the sole means to realise our ambitions. There are wider considerations for any broadcaster who wants to become more efficient by recycling and reusing technology investments quickly and easily.

Virtualisation has had its greatest impact in the parts of our business where we have traditionally deployed hardware appliances to carry out individual functions, physically separated from one another. So, for virtualisation to deliver on its promises, you have to trust your tools and configuration for segregation within the virtual environment. It could be tempting to work in physical silos to accommodate different customers or user groups with different workflows, but virtualisation is only efficient if you make the most of your hardware. This may include being flexible and moving things around within the virtual environment as demand changes, rather than emulating an appliance-based system design with physical separation. Standardisation and homogenisation are key to success.

Being able to separate the hardware and its associated installation from the services that are actually delivered is one of the main advantages of virtualisation, but only if it is done well. You have to build an environment that can do anything you need it to do, while still maintaining enough headroom to grow or change. At the same time there needs to be a continuous process of refresh and maintenance taking place without impacting the services running in the environment. If you don’t build a whole living ecosystem, you haven’t reached the full potential of what virtualisation can achieve. This highlights a key consideration for any broadcaster looking to virtualise its technology.
The end-game cannot and should not be virtualisation in itself; we need to be open-minded towards all possible solutions in the quest for optimal ways of working. Without understanding why

we are virtualising, we cannot know if we have actually succeeded in meeting our aims.

At Red Bee Media our efforts in virtualisation have created new opportunities in our ways of working. The software-only approach that comes from virtualisation has fundamentally changed how we look at configuration, deployment and maintenance. Configuration is now done through lines of code and script rather than based on physical cables and servers. To leverage the efficiency offered by a virtual estate we have had to focus on dynamic provisioning of not just the machine and software, but also the configuration of networks, firewalls and everything else that makes up the virtualised ecosystem. Using software-only configurations enables build efficiency and can improve the entire workflow.

In order to do just that, and following two years of R&D, we have built a private Cloud infrastructure in partnership with Cisco that delivers pure software managed services, serving all product lines with quality, performance and cost optimisation as its core values. With the right processes and tooling in place we have also found it possible to be flexible and agile through scripted bare-metal deployments. This gives us the same benefits we assumed could only come from virtualisation, but in a manner that removes the cost and compute inefficiencies of using a hypervisor. Our aim is not simply to virtualise, but to get more efficient and flexible use out of the hardware we deploy – sometimes virtualisation is the ideal solution; sometimes it is not.

During a time when technology was static and served only a single purpose during its lifetime, there was little reason to make the rest of the organisation flexible. Now that virtualisation and software tooling allow for agile ways of working and flexible use of hardware, you have to rethink your whole workflow.
Virtualisation is not the only change to make, but it is the catalyst and driver for a wider transformation both within our business and across the industry. n
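“Configuration through lines of code” of the kind described above can be pictured as a small rendering step: a declarative description of a service expanded into the per-host settings that would once have been cabled and configured by hand. A Python sketch, with all names and fields invented for illustration:

```python
# Illustrative config-as-code: expand a declarative service description
# into per-host configuration (image, network, firewall rules).
def render(service: dict) -> dict:
    """Render one declarative service into per-replica host configs."""
    return {
        f"{service['name']}-{i}": {
            "image": service["image"],
            "network": service["network"],
            "firewall": [f"allow {p}" for p in service["ports"]],
        }
        for i in range(service["replicas"])
    }

playout = {"name": "playout", "image": "playout:2.1",
           "network": "media-vlan", "replicas": 2, "ports": [5004]}
cfg = render(playout)
assert sorted(cfg) == ["playout-0", "playout-1"]
assert cfg["playout-0"]["firewall"] == ["allow 5004"]
```

Scaling a service up, moving it to another network, or rebuilding it after maintenance then becomes a change to the description and a re-render, rather than manual work on each machine.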




THE SYN OF PIRACY

Synamedia CEO Yves Padrines talks to George Jarrett


A new and instant big player in streaming and credential piracy solutions, Cloud-enabled virtualisation and video processing, Synamedia is not a simple regurgitation of NDS after six and a half years of digestion inside Cisco. CEO Yves Padrines went into Cisco with NDS in 2012, and became VP of global service provision, so what exactly did Cisco want to sell, and for what strategic reasoning? Officially it parted with its service provider video solutions business, the buyer being Permira Funds, which had previously owned 51 per cent of NDS.

“Cisco decided a year ago to sell its overall video business, which is NDS, its old video software unit, the remaining parts of Scientific Atlanta, and all the different acquisitions it had done around video processing,” says Padrines. “It also included all the online video and Cloud video distribution investment Cisco had committed to with Infinite solutions, and Cloud DVR, which was co-developed with Comcast.

“One reason they decided to sell the business is because it is a very different go-to-market model than selling products,” he adds. “Cisco is a really transactional product go-to-market model, versus here where we are talking about long sell cycles, long-term contracts with recurring

revenue, and solutions that go to market which are clearly not one-product hits, due to a lot of requisite customisation.”

So the services/solutions go-to-market culture did not sit happily with Cisco. Synamedia has the pay-TV market to service, and the traditional elements of this in broadcast, satellite and cable, especially in the US, have seen a wave of massive cost-cutting.

“How we measure ourselves is the fact that, as of today, we are, in terms of size, the largest video or pay-TV technology vendor globally. We have 200 key customers, including the largest pay-TV operators, and our main goal is to bring together the worlds of broadcast and IP broadband,” says Padrines.

Corporate priorities start with helping people to embrace IP. “We will help them to complement their offerings by embracing all the different OTT streaming applications, and to aggregate them on their platform with both a unified view and unified search,” says Padrines. “Consumers get more choice and more convenience.

“This is something we see most of the cable and satellite pay-TV operators looking at, and the telcos and pure mobile and pure broadband players are also looking at aggregating more and more of those OTT streaming applications by not investing so much in content




acquisition,” he adds. “As a pure broadband player you could aggregate different OTT services such as Netflix and Hulu, and offer the aggregation of the OTT apps to your end user with a unified view and unified search, and then add some targeted advertising elements to generate revenue.”

Padrines sees the consumer with a connected TV and 15 different OTT streaming apps as the problem. He says: “If you don’t know in advance where the content you might want to watch is, you have a bad consumer experience, because most people will not do self-provision.

“A platform that is going to aggregate all these services with a unified view and unified search would give a great user experience, and not only on the main smart screen. It also reaches to mobiles and companion devices, and this is something that service providers in general are looking at,” he continues.

FIGHTING HACKING ON TWO FRONTS
The embracing of IP is one must. The second is the securing of revenue streams, armed with all the traditional pay-TV security solutions. “We probably still have the best lock for the door here with our conditional access system, but it is not where the hacking threat is coming from,” says Padrines. “Set-top-box hacking is still a threat, but the heavy hacking is coming through streaming piracy. This is a big issue in all the key markets, and this is where we are going to invest a lot of money. It has been in our genes for a long time, and our aim will be to bring out a service to fight streaming piracy, not only to do monitoring and identification as to where the attack will come from, but clearly to do some takedowns.”

The aim is to legally kill an attack, and bring security to end-to-end services, rights owners, OTT players and all media companies with newly developed piracy disruption tools. But there is a second security front that eats at Padrines. “We want to disrupt the hackers’ ecosystem as much as possible,” he says. “One of the other big issues we see is ‘credential sharing’.
A lot of people are sharing credentials, either with friends and family, or they are sharing credentials and making a business out of it. And then some people are hacking credentials and selling them on to be used around the world – they sell your credentials on a website. Our main goal is to keep honest people honest.”


In mid-December Synamedia launched its Credentials Sharing Insight product, and at CES in the New Year this received massive publicity and attention, because it could be the tool to stop multiple billion-dollar hits on pay-TV and OTT revenues. Explaining its structure around AI, machine learning and analytics, Padrines says: “This could stop illegal credential sharing on services such as Hulu and Netflix, and many others.”

CUSTOMISATION AND CO-DEVELOPMENT
The relationship between analytics and targeted advertising is another key area for Synamedia. “We co-developed Sky AdSmart, which has been quite successful for Sky in the UK, and has been launched in Italy. We also launched AdSmart in India and we want to revive targeted advertising in general, mainly for linear, where we strongly believe the biggest part of the ad revenue is still coming from,” says Padrines.

Has Synamedia found that a lot of corporations turned to in-house development out of frustration, fired by the need for competitive software tools that vendors could not supply? “Do we see a trend of users who were trying to develop things in-house? Yes,” says Padrines. “We are going to try and understand their needs at a very deep level, and in some cases that will mean selling products, but in most cases it will mean selling products that are heavily customised. This could also be a co-development when talking about some of the larger companies like Comcast. It is really a co-development partnership that we have with the market, far more so than just selling products.

“Some of the large telcos started in-house development, and it is easy to do the first release. But it starts to be difficult to do the subsequent releases, and becomes very expensive. And you are in a world where pay-TV providers in general are suffering because their revenues are flatlining or decreasing but the costs are increasing,” he adds.
“Bandwidth and content production/acquisition costs are increasing, so you can imagine that companies need to look at their overall opex and reduce spend in some areas. Some telcos are looking for the next-generation platform to help them move to the new world.

“The market is moving, but the pay-TV industry at large is in difficulty,” says Padrines. “We strongly believe we

would be the last man standing because of our scale, our innovation and the good thing of having two very strong shareholders. Permira acquired the Cisco SPVS business and it is here to grow the business, and we are very pleased to have Comcast/Sky as a shareholder because they are probably the best in class if you talk about the pay-TV industry globally.

“You look at most of the service providers and pay-TV companies, and if they have broadcast and OTT, most of the time they have two different back ends. Here we are bringing these two worlds together,” he continues.

TOTAL VIRTUALISATION
The Cloud-based platform and DVR that Cisco blessed Synamedia with is the front line of a rush to virtualisation. “Everything is moving to the Cloud, and the DVR was one of the things we virtualised quite quickly. Synamedia DCM, which used to be the Cisco/Scientific Atlanta DCM solution, has been a virtual DCM for the last two years, and is widely used by large pay-TV operators and media companies globally,” says Padrines. “Virtualisation is really everywhere. Even when we talk about security, you have Cloud DRM as a service platform,” he adds.

“The whole of our portfolio of products is already virtualised or is moving to the Cloud. AI and machine learning are a big part of a lot of our solutions, especially our Infinite Cloud-based video distribution solution, and the new Credentials Sharing Insight.”

Padrines does not see a mass 5G roll-out happening soon, but it is a key part of innovation R&D. He says: “We are in talks with a few mobile operators that are in the Cloud about some proof of concepts regarding different ways of distributing video via 5G, and we have done some demos around that.

“We also want to help players around the IP delivery of video to make it possible to be as good as broadcast.”

What will happen to one-to-many? “It will be there for quite some time, in satellite for the next 10 years, and in emerging markets especially, but clearly the penetration of OTT and the IP delivery of video is increasing fast,” says Padrines. “We are working with the Chinese government and the largest DTH platform in the world. It has 140 million subscribers and a set-top-box deployment based on our conditional access system. That is not going to disappear soon, and they are launching a lot of these STBs in rural areas where they do not have cable, so one-to-many is not going to vanish tomorrow.” n

IP Made Easy

Talk to the leader in open-standards IP installations


In the race to live IP productions, with OB trucks, on-site production studios or remote (REMI) productions, you need a pace-setting partner that delivers:
• The widest range of interoperable IP-ready solutions including cameras, production switchers, video servers, replay, infrastructure, conversion and multiviewers
• Unique DirectIP connectivity providing IP communication directly between cameras and XCU base stations
• Easy to configure and control systems supporting self-discovery of IP edge devices, hitless redundancy, and policy-based security
• Signal monitoring from beginning to end with unique media assurance technology
• Proven installations throughout the world

Copyright © 2019 Grass Valley Canada. All rights reserved. Specifications subject to change without notice.

GV Ad IP_185x115mm.indd 1

Join the conversation

05/02/2019 19:00



LIVESTREAMING

How easy is it to livestream from near space? Jenny Priestley talks to Sheffield-based Sent Into Space, who have been testing it out

PICTURED ABOVE: The livestream in action


At the end of January I was sat in TVBEurope’s office, having a quick peruse of Twitter (nothing new there). One tweet in particular caught my eye. It was about a camera that had been sent to the edge of space and the descent was being livestreamed on YouTube. As something of a space geek, I immediately clicked on the link, all in the name of research obviously, and watched in fascination as the camera slowly descended back to Earth. When the livestream finished I decided I wanted to find out more about how you receive 1080p video from the


edge of space and livestream it to the watching world. The company behind the livestream is Sent Into Space, who describe themselves as the “global experts in Near Space”. Over the years they’ve sent everything from a teddy bear to a meat and potato pie to the edge of space, anywhere between 19 and 45 kilometres up, and sent images back to Earth. This time the plan was to send back live content and stream it straight onto YouTube. Livestreaming is something Sent Into Space has been working on for a while, explains projects and innovation

FEATURE

PICTURED RIGHT: The Sent Into Space team conducting tests with their balloon and payload

lead, Alex Keen: “Essentially the livestream at the end of January was a test of our capabilities. We’ve been working with Broadcast RF, who are one of the world leaders in remote transmission and video, in order to pull it off.

“It’s been as much of a regulatory challenge as it’s been a technological one. Broadcast RF has been working with Ofcom to secure licences, we’ve been working with the Civil Aviation Authority to get our payload design and our weight description signed off,” he explains.

Keen says as far as Sent Into Space is concerned the test was very successful. One of the key areas of the test was the use of an array of different ground antennas, with the intention of finding out what kind of antenna would be best for receiving data reliably throughout the flight. “We had everything from patch antennas to highly directional,” explains Keen. “We managed to maintain a fairly clear video stream throughout the majority of the flight and we know we can improve that with multiple ground stations listening in.

“Really the next stage is seeing what applications this has for potential clients. It’s something that has been asked for, especially by marketing clients but also some of our research clients, for a long time so that they can observe in real-time what’s going on over the course of a flight.”

Sent Into Space uses a balloon-based system to carry the payload up into near space. For the livestreamed test, the balloon reached almost 25,000 metres before bursting due to the drop in pressure. The payload took about an hour and a half to reach that altitude, and about 45 minutes to an hour to come back down to Earth. The livestream followed the whole descent until the payload dipped over the horizon. “It travelled quite a significant distance horizontally from our launch site so once it hit that point we couldn’t maintain a complete line of sight for the transmission to get through,” says Keen.
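That line-of-sight limit can be sanity-checked with the standard horizon-distance approximation (a back-of-envelope figure of ours, not a number from Sent Into Space): the horizon seen from an altitude of h metres lies roughly 3.57·√h kilometres away, and slightly further for radio signals, which refract a little around the Earth’s curve.

```python
import math

def horizon_km(altitude_m: float, k: float = 3.57) -> float:
    """Approximate distance to the horizon in km from a given altitude
    in metres. k ≈ 3.57 gives the geometric horizon; k ≈ 4.12 is the
    commonly used radio horizon, which allows for refraction."""
    return k * math.sqrt(altitude_m)

# At the balloon's peak altitude of roughly 25,000 m:
print(round(horizon_km(25_000)))        # geometric horizon, km
print(round(horizon_km(25_000, 4.12)))  # radio horizon, km
```

That works out to roughly 560 to 650 km, which is why a payload drifting far enough downrange eventually drops below the receivable horizon however good the antennas are.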
For the flight in January, Sent Into Space used a Panasonic Lumix GH5 as its camera. It streamed through a mini HDMI cable plugged into the side, which fed into a transmitter

“We managed to maintain a fairly clear video stream throughout the majority of the flight and we know we can improve that with multiple ground stations listening in.” ALEX KEEN

FEATURE

PICTURED ABOVE: Alex Keen hard at work during the livestream test

that encoded the stream and sent it to an amplifier broadcasting through an antenna on the bottom of the payload. The transmitter was broadcasting on approximately two gigahertz, says Keen. “It was all intensively modified for cold and low atmosphere conditions. That broadcast back to our antenna array, which decoded the signal back into a video and back into a HDMI lead plugged into one of my laptops, and then streamed via Open Broadcaster Software onto YouTube and Facebook.”

The stream was broadcast in 1080p, although the camera itself is capable of going up to 4K. Keen says they decided against 4K as it becomes a transmission issue after a certain point. “We weren’t confident in its reliability to broadcast at

that kind of data rate,” he adds. “We didn’t have sound broadcasting because the sound that you get from a payload spinning and travelling up through the atmosphere, into the stratosphere and into near space, is mostly a high-pitched whistling noise, which is quite unpleasant to hear. In other private tests we have established it would be possible to broadcast sound back.” I’m no expert, but I suspect livestreaming from near space includes a certain amount of latency, which Keen confirms. “The latency from the payload to the antennas was under 30 seconds. From OBS on to YouTube it was about 10 seconds. Bizarrely enough, the limiting factor was our internet speed at the launch site. Obviously we’re not in the middle

‘The stream was broadcast in 1080p, although the camera itself is capable of going up to 4K.’



of a big city with fibre-optic connection,” he laughs. I wasn’t the only viewer on the day. At the moment of launch and the moment of burst, the livestream was “floating at around 60 or 70 viewers,” says Keen (no pun intended). At the time of writing, the audience has increased to 1,400 views. Keen stresses that the final go-ahead for the livestream was very last minute. “We had all the clearances come through and then as if by magic the weather fell into place for us to be able to conduct the launch,” he says. “In order for any kind of high altitude balloon flight to take place we need to be able to monitor the weather conditions both on the ground and at various different layers in the atmosphere, and also know if it’s going to be passing over any kind of flight paths of commercial and military significance. Plus, it needs to land somewhere remote so that we can pick it up. “When it comes to the livestreaming aspect we also have to take into consideration the overall shape of the flight path. In this case, we were manually adjusting the antenna array on the ground

to point in the direction of the payload. So, we didn’t have a huge amount of opportunity to publicise it.” Having completed the test successfully, what does Keen think will be the next step for Sent Into Space? “We’ve been working for a long time on reliable two-way communication with the payload,” he reveals. “Obviously the livestreaming test goes some way towards that. “Being able to remotely operate some kind of machinery at altitude would be very exciting, both from a technical perspective as it unlocks the potential for us to conduct a lot more experiments, it allows us to monitor our payload’s performance visually in real time; and from a sales and marketing perspective it makes it possible for us to have a livestream going on where people can communicate with the payload and potentially control aspects of what is going on up there. “Being able to push a button on the ground and watch in real time as we deploy a firework, or have two little ‘rock ‘em sock ‘em’ robots having a fight or something, would be interesting.” n




Over the last couple of years, ‘virtualisation’ has become a common theme. But what is virtualisation and what does it mean for broadcasters and media? The shift from SDI to IP means that traditional library functions, workflow engines, automation, linear channel playout and storage archival workflows have been redefined so that broadcasters can be a lot more flexible about how they shoot, edit and distribute their content. Modern software development can leverage the innovations in virtual server environments, allowing the migration of systems that 10 years ago required physical server hardware to be deployed in a simulated environment, often referred to as ‘virtualised’ systems. This, coupled with the rapid growth of public Cloud infrastructure, means that metadata, monitoring tools and workflows have moved human interaction away from physical machines, and the media itself away from local hardware into a virtualised environment.


The shift towards virtualisation began years ago. Prime examples of its first applications in video systems are the weather presentation green screens with digital graphical overlays, and efforts to automate workflows and standardise delivery/transmission methods. Innovations in IP technology have reached the point that they can support the speed and quality of service requirements of broadcast media. Moving to an all-IP infrastructure supports Studio-Video-Over-IP (SVIP) and can virtualise every aspect of the broadcast chain, from simulated studios for on-air personnel, to computer-generated channel creation for special events. As we know, the public Cloud infrastructure plays a huge role in the virtualised TV world. When everything is digitised in virtual systems, broadcasters generate a massive amount of data, especially as more consumers demand UHD media (4K and 8K). Cloud infrastructure is a natural way to extend storage capacity and build deep archives. These


FEATURE

PICTURED LEFT: Tedial’s Version Factory solution

workflows allow the high-speed launch of new channels, which is a major advantage for fast monetisation through advertiser support. Monetisation of many applications requires broadcasters to take advantage of an IP infrastructure to enable them to participate and benefit.

CUSTOMER CHALLENGE
Tedial provides solutions that enable broadcasters and media companies to take full advantage of virtualised workflows. We recently completed an installation for one of the world’s biggest television content brands, which includes over 27 years of weekly short-form television productions, full feature-length cinema releases, YouTube channels, video game cinematics and esports. The customer’s former operation was quite manual and de-centralised due to the global nature of the company. Its content, which is produced in its main studios, has to be localised and approved in other sites and distributed to more than 60 foreign language partners around the world. The company elected to place its entire system in the AWS Cloud and modernise its processes as part of the migration. After an analysis of the existing operations and establishment of a road map for successful migration, Tedial supplied its Evolution Version Factory solution, a single automated workflow which leverages the SMPTE IMF methodology to manage creation of the complicated media versions and distribution chores for international language versioning as well as OTT and VoD version support. Components and supplemental files are selected for collection and are identified as Composition Play Lists (CPLs), which define a particular set of media constituents and meet specific end-user requirements. For example, the version for Montreal, Canada VoD may require the video plus French audio and French subtitles.
The receiving locations all require separate media version preparations including edits, specific video formats for playback, audio levelling requirements, automated quality control reviews, forensic watermarking, distribution via special content distribution networks and archive requirements. These location-specific requirements are described by a collection of Output Profile Lists (OPLs) and named in simple text so that they can easily be recalled and employed by the user. Because the OPL definitions for a location can sometimes include additional items outside the transformations required, Tedial calls these enhanced OPLs ‘Destination Instruction Set’ profiles, or DIS profiles. A command can be as simple as “send the Montreal VoD CPL to the Quebec Cable TV OPL,” or it can be more complex based on the end-site requirements. This approach drastically simplifies the content distribution process, as the customer simply needs to define the template to be generated for each partner and from that ‘copies and pastes’, adjusting the CPL (for languages

selection) and delivery options in a simple GUI.

THE RESULTS
The primary task of the virtualisation was to move media libraries scattered across multiple continents into a single, managed archive with two key access points: an office in Europe and an office on the US West Coast. Employing the AWS infrastructure allowed the media to be collected in a secure archive, with a reliable disaster recovery plan for the archives. Tedial worked closely with the client to ensure the new business processes orchestrated the staff’s daily tasks in new and efficient ways, and the modernisation of the operations introduced some exciting innovations. For example, assembling new ‘Edit Decision Lists’ (EDLs) for OTT and VoD distribution versions allows the company to add pre-roll media such as colour bars or black segments, mid-rolls for commercial or promotion insertions, or post-rolls for end credits, etc. The Tedial Evolution MAM allows the client to relate non-video/audio assets to the CPL collections, so artwork that applies to a season of programming, like poster art or photography, can employ a ‘relationship’ in the Version Factory DIS package assemblies. In other words, a single ‘art’ asset can be applied to a season of episodes without re-copying the asset and attaching it to every episode. Also, the Version Factory leverages conditionality to allow distribution of partial media versions to meet contractual requirements. For example, if a contract states that a French version with French subtitles must be delivered by a specific date but it’s acceptable to supply the version with English subtitles to meet the due date requirement, the Tedial solution can support this conditional delivery mode.
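The CPL/OPL/DIS model described above can be sketched as a toy data model in Python. This is purely illustrative (the class names, step names and locations are invented for the example, not Tedial’s actual API): a CPL names a selection of media constituents, a DIS profile names the location-specific steps, and ‘sending a CPL to an OPL’ applies one to the other.

```python
from dataclasses import dataclass

@dataclass
class CPL:
    """A Composition Play List: a named selection of media constituents."""
    name: str
    constituents: list

@dataclass
class DISProfile:
    """A Destination Instruction Set: an OPL's transformations plus any
    extra delivery steps required by the receiving location."""
    name: str
    steps: list

def dispatch(cpl: CPL, dis: DISProfile) -> list:
    """'Send CPL X to OPL Y': apply each location-specific step to the
    selected constituents and return the resulting job list."""
    return [f"{step}: {', '.join(cpl.constituents)}" for step in dis.steps]

montreal_vod = CPL("Montreal VoD", ["video", "French audio", "French subtitles"])
quebec_cable = DISProfile("Quebec Cable TV",
                          ["transcode to delivery format", "automated QC",
                           "forensic watermark", "deliver via CDN"])
jobs = dispatch(montreal_vod, quebec_cable)
for job in jobs:
    print(job)
```

The point of the separation is visible even in the toy: adding a new partner means writing one new profile, not a new workflow, which is the ‘copy and paste the template’ behaviour the customer wanted.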
The result of this new virtualised operation is that the Tedial client can now schedule tasks and activities across their European and US-based operation centres, manage their deliverables to meet their contractual commitments and leverage the virtualised environment and security supplied by a world-leading Cloud infrastructure supplier. Any broadcaster or media and entertainment company should consider a move to a virtualised environment and take the first steps of analysing the options, costs and the expected return for a successful deployment. There are multiple steps to ready an existing facility for a virtualised future and to reap the rewards that it brings. n





WE’RE FACING ONE OF THE INDUSTRY’S BIGGEST CHALLENGES
As well as being a non-executive director at Blue Lucy, I’m a media technology consultant with a background in broadcast automation and media asset management (MAM) systems. Much of my time is spent helping clients integrate different, disparate systems, designed and purchased to do very specific tasks, so that they communicate with each other to form complete operational workflows and fulfil a client’s specific business objective. I bolt rights and scheduling systems to MAM systems, bolt them to the storage, and then transfer content to automation systems for delivery to linear television or VoD platforms. One of the key problems I continually encounter when integrating these disparate systems is data fragmentation. Because so much data has been lost in the transition from tape to file-based systems (remember those scribbled cue sheets…) and, with each media business building systems that exist as silos (with their own database(s) and the metadata stored internally), we’re now faced with one of our industry’s biggest challenges – how to find, track and monetise content when we can’t access the data that surrounds it.

WHAT ARE THE IMPLICATIONS OF DATA FRAGMENTATION?
Imagine if you wanted to use a piece of content from a news agency in your package. So, you cut it into your story, and then you repackage and repackage that story throughout the day for different purposes and platforms,


and before you know it, the genealogy is lost and it’s almost impossible to track or report the usage. There is no reliable way for the originating news agency to check that the usage report you do submit is accurate.

THIS DEMONSTRATES THE TWO MAJOR PROBLEMS FACING CONTENT RIGHTS OWNERS:
■ Cost. Trust is maintained by third parties delivering reporting to each other to just keep each other in check. There is a lack of transparency for the buyers, sellers, owners and distributors.
■ Time. The reconciliation process is very time consuming


and manual - a recent piece of consultancy work for a marketing agency in London demonstrated that 50 per cent of the financial team’s time was spent on moving data from one system to another to reconcile their campaigns for the routes to market: print, broadcast and digital. So that’s 50 per cent of their time just to raise invoices. These problems are caused by the inability to track content and data across different platforms, all the routes to market and the organisations. After considerable research and investigation, the conclusion I’ve come to is that blockchain has the capability to deliver the change needed to resolve these issues.

A SIMPLE EXPLANATION OF BLOCKCHAIN
Blockchain is clever but it’s actually quite boring. Also, blockchain for media has little, if anything, to do with Bitcoin and cryptocurrencies. Firstly, blockchain is a supercomputer - you could think of it as a new internet infrastructure. It’s like an online Google document that everyone can view, edit or delete

and where versions are automatically tracked and saved, creating an audit trail. But, unlike a Google doc where the timestamps, versions and tracking data are owned by Google, a blockchain is a distributed database called a ledger, spread across multiple computers (or nodes) – so no one person or organisation owns or has control of the total data. Secondly, a registration on blockchain is immutable. Every time you transact on a blockchain database, that transaction is timestamped into history and it cannot be deleted. It is unchangeable. Because it is distributed, the computer nodes in the chain regularly ‘agree’ on the status using a feature called a consensus mechanism, to create the immutable audit trail. Thirdly, blockchain takes security to the next level by including cryptography throughout the process – it’s like your user name and password on steroids – and the only people who have access to the original data are the owners who put it there. Finally, blockchain can utilise a feature called smart contracts to minimise lawyer costs, save time and negate conflict. Smart contracts are a set of instructions,

‘The best thing about blockchain is the link it provides between the content owner and collecting their money.’ TRACIE MITCHELL



programmed into a blockchain database and stored in the distributed ledger, and therefore act as a digital contract and negotiation tool for acquisition, raising contracts and documenting their terms. The digital contract can be agreed and executed, and therefore used for content rights management or to agree any set of terms for use, eg. the distribution rights.

HOW CAN BLOCKCHAIN SOLVE THE MEDIA INDUSTRY’S PROBLEMS?
So how do we think the use of blockchain will resolve the problem of conflicts we see today in managing the disparate datasets that are in the hands of different clients and businesses?
■ IT WILL PROVIDE AN IMMUTABLE RECORD. The beauty of blockchain is that it provides a transparent, immutable ledger. This makes dispute and conflict resolution more based in fact. The transaction really did (or did not) happen. I did transfer that content to you, and you did air it 10 out of the 12 times you have the rights for, on that platform.
■ IT PROVIDES A UNIVERSAL METHOD OF IDENTIFYING AN ASSET – AND ITS OWNER. Maybe, just maybe, the unique ID required in every silo system becomes moot. Certainly, blockchain will be able to determine the content owner of the asset so that there will be no dispute of ownership, and the identification of pieces of content in the genealogy can also be exposed. This will significantly ease the complications of piracy and copyright enforcement.
■ IT CAN HELP US TRACK MEDIA. Blockchain tracks, and therefore can supply, the history of the steps travelled by a piece of media as recorded in the immutable ledger. Additionally, by utilising the originating terms and agreement of the smart contract, the media usage is reported directly to the content owner.
■ IT PROVIDES REPORTING IN REAL TIME. The best thing about blockchain is the link it provides between the content owner and collecting their money.
Collecting and distributing royalties will be automated based on smart contracts which will remove conflicts and capitalise on exact knowledge (eg. the monetisation of re-runs will be trustworthy and transparent). This will be achieved in real time, so the rights owner won’t have to wait for their payment. Using smart contracts will also result in significant operational overhead savings and a reduction in legal fees.
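The immutable, append-only ledger underpinning all of this can be illustrated with a toy hash chain in Python. This is a sketch of the principle only: each entry is timestamped and chained to the previous entry’s hash, so tampering with any earlier record is detectable. A real blockchain adds distributed consensus across nodes, which a single script cannot show.

```python
import hashlib
import json
import time

def make_block(prev_hash: str, payload: dict) -> dict:
    """Append-only ledger entry: the payload is timestamped and chained
    to the previous block's hash, so altering any earlier record would
    change every hash after it and be detected immediately."""
    block = {"prev": prev_hash, "ts": time.time(), "payload": payload}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def chain_is_valid(chain: list) -> bool:
    """Recompute each hash and check the links - the audit trail."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# Record usage events against a hypothetical rights agreement
genesis = make_block("0" * 64, {"event": "rights granted", "airings": 12})
play_1 = make_block(genesis["hash"], {"event": "aired", "platform": "linear"})
print(chain_is_valid([genesis, play_1]))  # True

# Quietly changing the agreed airing count breaks the chain:
genesis["payload"]["airings"] = 24
print(chain_is_valid([genesis, play_1]))  # False
```

This is why “you did air it 10 out of the 12 times” can be asserted as fact: neither side can rewrite the record after the event.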


WHAT DOES THIS MEAN FOR CONSUMERS?
While viewers have more content choice than ever before, what we can watch is limited by the service providers and entertainment packages we subscribe to – which is a by-product of inefficient content tracking and rights management systems. I see a future world where we can access any content on any system and a smart contract will figure out who to pay. Because ultimately, consumers do not care about who created it, about how it is distributed, or about the underlying infrastructure to support the supply chain; they just want to find and view the content.

HERE’S HOW IT WOULD WORK:
■ A content owner or rights owner accesses a rights management system with their user name and password protected by cryptography.
■ They negotiate a contract by assigning the rights to distributors and negotiating the terms, essentially creating a smart contract.
■ They upload a piece of content to the MAM system and associate the rights from the smart contract.
■ The asset is registered on blockchain, creating an immutable record of the content ownership, rights and the contract. This data can now be tracked and associated with the asset as it is transferred to the distributor.
■ The asset is played (or transferred) and tracking mechanisms invoke the smart contract, which in turn informs the content owner of real-time usage.
■ Payment is executed based on the terms set out in the digital smart contract.

WHERE TO FROM HERE?
Billions of dollars have been, and continue to be, poured into the development of blockchain technology. After 10-15 years of academic and scientific research, the infrastructure and compute layers exist and are very stable. The third layer is now required - new business technology applications to be placed on top of these supercomputers to utilise the benefits discussed above in a real-life implementation.
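The steps above can be modelled very roughly as follows. The names and fee values are invented for illustration, and a real smart contract would execute on-chain rather than as a local Python object; the point is that the terms themselves enforce the rights window and trigger settlement.

```python
from dataclasses import dataclass, field

@dataclass
class SmartContract:
    """Toy 'smart contract': rights terms plus automated settlement.
    All names and values here are hypothetical."""
    owner: str
    distributor: str
    airings_allowed: int
    fee_per_airing: float
    airings_used: int = 0
    ledger: list = field(default_factory=list)

    def record_play(self) -> float:
        """Invoked by the tracking mechanism each time the asset is
        played; enforces the rights window and settles in real time."""
        if self.airings_used >= self.airings_allowed:
            raise PermissionError("rights exhausted - play not licensed")
        self.airings_used += 1
        self.ledger.append(("aired", self.airings_used))
        return self.fee_per_airing  # amount paid to the owner immediately

contract = SmartContract("NewsAgency", "Broadcaster",
                         airings_allowed=2, fee_per_airing=100.0)
print(contract.record_play())  # 100.0 - owner paid on first airing
print(contract.record_play())  # 100.0 - and again on the second
# A third play would raise PermissionError: the terms are enforced in code,
# with no reconciliation or invoicing step afterwards.
```

Compare this with the 50 per cent of a finance team’s time spent on reconciliation quoted earlier: here the usage report, the rights check and the payment are the same event.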
It is clear that the media industry’s current processes for content tracking, rights assertion and reconciliation are inadequate and are due a complete overhaul. I believe blockchain can provide this underlying functionality, if the systems that support our workflows – ie. our MAM, storage, delivery and distribution systems – begin to utilise the characteristics of blockchain and distributed ledger technology. Blue Lucy has already started this process within the BLAM product, now we just need other technology and software providers to follow suit. n

‘Blockchain takes security to the next level by including cryptography throughout the process.’


BLOCKCHAIN’S ROLE IN TV
By Gavin Douglas, CEO, iPowow and the HIT Protocol


The future of advertising on television and video needs a reality check. Privacy breaches, fraudulent metrics, fake reviews, questionable data analytics, loss of trust between the platforms and the brands: the complexity of advertising on TV and video content continued to gain momentum in 2018, with very few points of light in the chaos. Since data has become equal to currency in the media environment, there has been a shift in society’s outlook on privacy. With eyes on the proverbial prize of user data, a marked decline in regard for user privacy has come to the forefront. We’ve all seen this movie before. Say you’re looking at an online forum for weight loss. Consequently, and paradoxically, you’ll be the first one targeted by junk food companies, who are going to assume you have been, or will continue to be, a major consumer of their products. While users often give consent by checking a box, it is not really ‘informed consent.’ They’ll say “yes” to get to the next screen or to download the new app, but they probably don’t fully understand the degree to which they’re sacrificing their privacy. This creates an ecosystem where users have little control over their own data and how that data is used and monetised. Fortunately, there is a solution to the erosion of user control. Blockchain technology can be leveraged to safeguard user privacy, thereby restoring trust between viewers, advertisers and content creators. Just as blockchain is being employed to transform the healthcare industry by eliminating the middlemen, the correct use of the best technologies available can do the same for the media landscape by utilising tokenisation. By creating direct relationships between the advertisers and the viewers, tokenisation can support the formation of direct avenues for data and value exchange between advertisers, consumers and content

‘By creating direct relationships between the advertisers and the viewers, tokenisation can support the formation of direct avenues for data and value exchange.’

creators. The elimination of intermediary platforms means that all parties can retain more value, because no percentage of transactions is being captured by third parties. Additionally, such peer-to-peer interactions are drastically more streamlined than the complex webs of media players seen today. In a win-win, advertisers earn greater return on their investments while users gain more control over their data. The process of tokenisation, or the monetisation of trackable actions, takes the evolution of this ecosystem one step further to include viewers among those benefiting economically. In other words, users can make money simply by watching and engaging with their favourite TV and video content. During a show, viewers are prompted with a call to action to participate in some way with the content they are watching. Viewers can respond directly on the device on which they are watching the content or, if they are viewing on their TV, through the app on their phone. Once viewers sign up to begin earning rewards, they can interact with specific elements of the show or advert and accumulate tokens, provided by the advertisers, for their time and attention. These tokens have real-world value and can then be used to buy other goods and services within the ecosystem, most importantly connecting viewing habits directly to purchase data. The fact that on-screen graphics containing tokens can be applied to any TV or video content on any screen means that, not just the viewers, but all participants in the media ecosystem can benefit from tokenisation. If employed effectively in the TV and video industry, tokenisation of content built on blockchain technology could form the foundation for a new digital order wherein all parties benefit from improved efficiency, transparency and security. n
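A minimal model of the token accrual described above, with invented viewers, actions and token values (a sketch of the mechanics, not the iPowow or HIT Protocol implementation):

```python
from collections import defaultdict

class TokenLedger:
    """Toy model of viewer tokenisation: advertisers fund rewards,
    viewers accrue tokens for tracked engagements and can spend them
    within the ecosystem."""
    def __init__(self):
        self.balances = defaultdict(float)
        self.engagements = []  # ties viewing habits to identity

    def reward(self, viewer: str, action: str, tokens: float) -> None:
        """Advertiser-funded reward for responding to a call to action."""
        self.engagements.append((viewer, action))
        self.balances[viewer] += tokens

    def spend(self, viewer: str, tokens: float) -> bool:
        """Redeem tokens for goods or services in the ecosystem,
        connecting viewing behaviour directly to purchase data."""
        if self.balances[viewer] < tokens:
            return False
        self.balances[viewer] -= tokens
        return True

ledger = TokenLedger()
ledger.reward("viewer_1", "answered on-screen poll", 5.0)
ledger.reward("viewer_1", "watched advert to the end", 10.0)
print(ledger.balances["viewer_1"])     # 15.0
print(ledger.spend("viewer_1", 12.0))  # True
print(ledger.balances["viewer_1"])     # 3.0
```

Even in this toy form the two sides of the exchange are explicit: the engagement log is the data the advertiser is paying for, and the balance is the value the viewer retains.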



On the Rise


“Not to sound too cliché but I did find it very inspiring.” CALINA HO

In 2018 Rise, a group for women in broadcast, launched its inaugural mentorship scheme. It enables young women within the broadcast industry to receive help and advice from an assigned mentor, meet up with their mentee group on a regular basis, and attend industry networking events and workshops. The initial group had access to mentors from the likes of ITV, Clear-Com, Deluxe and Molinare, who shared their expertise and supported the women as they progressed with their careers. Rise is now inviting applications for its second round of mentoring, with the programme running from May to November 2019. Applications must be submitted by 29th March via Rise’s website. TVBEurope talks to two of the scheme’s original mentees as well as Rise director Carrie Wootten and advisory panel member Sadie Groom.

CALINA HO, LEAD PROJECT MANAGER AT SOHONET LTD
What made you apply for the Rise scheme?
When I heard about the scheme, I jumped at the chance to apply as I wanted to have access to a personal mentor and meet like-minded women in the same industry.

What did you hope to get out of it?
I hoped it would provide me additional support to help build my confidence, as well as meet new people and expand my network.


TVBEurope hears from some of the young women who took part in the first year of mentorship scheme Rise

Did it live up to your expectations?
I think it actually surpassed them. I was looking forward to having a personal mentor, but one of the things I did not expect was how close the other mentees have become and the support that we received from the group.

How much time did you spend with your mentor? Were they always available to you when needed?
We spent the suggested amount, approximately two hours per month. I felt that I could reach out to her at any time, and there were times when I did exactly that and she would always give constructive and supportive advice.

What did you learn?
During my time with the group, it was highlighted to me that a lot of people actually go through the same experiences, but do not usually talk about it. It has shown me that people generally are there to help and want you to succeed. It has helped me build confidence within myself which has shown through in my work life. I have also learned that knowledge sharing is a wonderful thing, on all levels. One thing I would point out is that everyone’s experience will be different, and that you will get out of it as much as the effort you put in.

If you could change anything, what would it be?
Perhaps the length; I would have liked for it to have continued for a full year.


Would you do it again if you could?
Absolutely! I thoroughly enjoyed it.

How likely are you to recommend Rise to a colleague?
Most definitely.

Would you consider becoming a mentor yourself?
100 per cent - if I could be helpful to anyone, I would love to.

Why do you think Rise is important to the media technology industry?
Although it has improved slightly over the years, there is still a lot of work to be done on women’s leadership in this industry. This is a safe and supportive environment for females in this industry to connect and get access to like-minded individuals/mentors, as well as events that they may not otherwise attend. Not to sound too cliché, but I did find it very inspiring, especially meeting all the encouraging mentors.

What did your colleagues think about you taking part in Rise? Were they supportive?
Yes, they were very proud and supportive; it was sweet.

CARYS HUGHES, DESIGN ENGINEER AT SKY
What made you apply for the Rise scheme?
Before I heard about Rise, it had already been suggested to me that finding a mentor and working in a different role with a different set of professional skills might be really helpful. I saw the Rise scheme promoted and it was the perfect opportunity. An acquaintance at SMPTE also encouraged me to apply, as did a colleague – but by that point I was already in touch. It was encouraging to me that respected members of the wider industry could already see that the scheme had real potential.

What did you hope to get out of it?
I’m an engineer at heart and although I can manage stakeholders and multiple projects in parallel, those skills are the fruit of some very steep learning curves. I wanted to take the time to invest in my interpersonal and time management skills, and it was great to talk through some of those challenges with somebody who had already learnt those lessons.
I have great colleagues at Sky to continue to learn from, but I was really grateful for an opportunity to learn from skills developed in another context. It’s always useful to get a view from outside the “bubble”.

Did it live up to your expectations?
Yes… and then some! It was so great to build an additional network of support with my fellow mentees. I think that’s been a real bonus for the whole group and a key factor in building confidence. Mentee meetups have really proven to be a safe space where we can learn from each other’s experiences. We also had plenty of amazing opportunities to engage with the rest of the industry, and it was great to meet the other mentors and to hear how other capable women have made a place for themselves in our industry. Although I love working in engineering, for now at least the demographic of my usual networking pool is pretty limited, so all of these interactions have been really valuable.

How much time did you spend with your mentor? Were they always available to you when needed?
I met with my mentor for about an hour, twice a month. If either of us needed to rearrange, we generally managed to work around that. We mostly met at the Sky campus, as that worked for both of us, but also over coffee, away from the office.

What did you learn?
Probably the most important set of lessons for me has been to approach both time management and professional interactions with colleagues with more intentionality. We have such a variety of professional interactions with colleagues and third parties, the dynamics and goals of which are really diverse. I was able to take away some practical pointers and start putting them straight into practice. I’ve still got some work to do, but I’m definitely more agile and confident as a result.


“ Mentee meetups have really proven to be a safe space where we can learn from each other’s experiences.”

If you could change anything, what would it be?
I was able to fairly naturally try out some of the strategies that we discussed, but in hindsight I’d have been more proactive about that from the outset. There were a couple of networking opportunities that I missed, due to being out of the country or on annual leave. I’d have loved to have met more of the other mentors, but I think we now have a good enough network established that this can be made up for.

Would you do it again if you could?
Yes! There was such a variety of experiences amongst both the mentees and mentors that we’d have plenty more to learn! There’s also a huge benefit to having that kind of impartial support during key times of change or “landmark”


years. I’m four and a half years out of university now, and the scheme fell at a fairly pivotal time for me – 2018 was full of opportunities to take on new challenges and engage with the wider industry, but amidst growing responsibility in my day-to-day work.

How likely are you to recommend Rise to a colleague?
I’d 100 per cent recommend it. Without question.

Would you consider becoming a mentor yourself?
I think I have a little more learning to do, so ask me again in a couple of years’ time! Having said that, I’d definitely be up for supporting a student or recent graduate. There’s a really obvious need for that, particularly on the technical side of the industry.

Why do you think Rise is important to the media technology industry?
Many of the benefits of a mentoring scheme are not specific to gender. But as long as we are so outnumbered, I think it’s really important that women entering or returning to our industry know that there is an established source of support for them. Not everybody is a ready-made extrovert, or naturally corporate. It’s also so important that we have somewhere to ask hard questions of individuals that we can relate to, as we do work that we love, while also trying to build families or walk through personal challenges. We’ll continue to lose talented people if we don’t intentionally meet that need – Rise are standing in that gap.

What did your colleagues think about you taking part in Rise? Were they supportive?
My colleagues were very supportive, as long as I was able to continue to manage my own time commitments. My team at Sky are generally very supportive of my personal development, and the benefits of learning from somebody with established skills or working in a slightly different context are clear, so it didn’t take much persuading in my case.

SADIE GROOM, MANAGING DIRECTOR AT BUBBLE AGENCY, AND CARRIE WOOTTEN, RISE DIRECTOR

How did year one go from your perspective?
Rise’s inaugural mentoring programme delivered much more than we anticipated. We had more applications than expected (and turning away women was really difficult; we would have liked to have accepted everybody who applied). We saw the mentees grow in confidence throughout the six months, with many of them progressing in their chosen careers or deciding which steps to take next. In addition to this, the mentors, and our wider industry partners, also gained enormously from their engagement with the mentees. The impact and outcomes that the mentees experienced through gaining a wide female peer network will also remain with them for the rest of their careers.

What did you learn from running Rise?
We think the main lesson has been how much support there is from the industry. As soon as the mentees and mentors were announced in 2018, industry partners came to us asking how they could support the women involved in the programme and what they could do, individually and as a company, to help. We saw the mentees being asked to talk on panels and present keynotes, which was fantastic as it started to showcase the talent, expertise and ability of these women. It has also been great to interact with so many men and women and discuss gender diversity – our aim now is to talk less about the issue itself and start showing the industry that there are women in senior and technical roles. Rise is growing as an organisation and we’re looking to widen our work during 2019 to support women across the sector, not just those directly involved in the mentoring programme.

You’re about to launch the second year, any changes planned?
The structure, length and design of the programme will be the same this year, but we will be accepting 20 women onto the programme this time round. In addition, we’ll be looking to build in more bespoke industry sessions to support the mentees’ specific needs.


“We’re always looking for more mentors or speakers to be involved in the programme.”

Who are the mentors for 2019? How easy was it to get them on board?
The mentors for this year haven’t been chosen yet, as they will be paired based on the needs and aspirations of each mentee. But we’re always looking for more mentors or speakers to be involved in the programme and Rise as a whole, so please do get in touch.

What do you hope mentees get out of Rise?
When we start the mentoring programme, we ask the mentees to send themselves a postcard outlining what they hope to get out of the six months. We then ‘post’ this back to them at the final meet-up. Each mentee’s journey, if we can use that word, will be totally different. Some will want to build their confidence, some might want to move from position A to position B, and others may want to work through any issues that they feel need to be addressed. We hope that when they look at that postcard on the final night, all of their goals will have been achieved.

PICTURED ABOVE: Carrie Wootten


Dan Meier talks to three young sound engineers from Molinare about their award-nominated work on acclaimed documentary Three Identical Strangers

PICTURED ABOVE: Chad Orororo, Nas Parkash and Kim Tae Hak © Emily Thomas / Emily Jane Photography


One of the most revealing documentaries of recent years, Three Identical Strangers explores the bizarre case of triplets reunited in the most remarkable circumstances. Post production company Molinare handled the editing, grading and sound, the latter receiving a nomination for the Motion Picture Sound Editors Award. Appropriately enough, there are three people on the team: dialogue editor and re-recording mixer Nas Parkash, Foley and archive FX editor Chad Orororo, and sound FX editor Kim Tae Hak. On top of the film’s scientific, sociological and psychological insights, Three Identical Strangers is also something of an emotional rollercoaster. “It evokes a lot


of mixed feelings,” says Tae Hak. “I wanted to ensure that every single sound effect element supported and reflected those emotions and actions.” The story starts in a relatively upbeat manner, engagingly told by brothers David and Bobby, but deeper and darker revelations unfold over the course of the film. This gear-shift had to be considered by the sound team, as Parkash explains: “The way in which the contributors told this incredible story was so enthusiastic, certainly in the first act, so we didn’t want to intrude too much on that. The main areas of deliberation were the drama reconstruction sequences, where we considered whether to have sound effects, sound design, or just dialogue and music. And if


so, would the sound continue underneath the dialogue in the interview, or cut in and out? In the end, we judged this on a case-by-case basis. There certainly wasn’t a convention, and that’s what I like.”

“Directors and editors will look to your instinctive emotions after seeing the film for the first time and coming at it with fresh eyes and ears helps give a new perspective.” NAS PARKASH

One case that required action by the team was additional dialogue recording (ADR), as Orororo recalls: “I did a large majority of the crowd ADR. Again, because of the charisma of the storytelling, we had a lot of fun doing this, not to mention that we had to improvise and use runners’ voices from around the building. Any of the director’s questions, however, were recorded by Nas later on in the mix.” Parkash continues: “Tim’s questions [Tim Wardle, the director] recorded on location were quite ambient and low level/noisy, so we decided it was best for the film to re-record them. This is something we always try to avoid, but it was worth doing for clarity’s sake. We used a Sennheiser 416 going into a Focusrite ISA Pre, and I managed to direct him to takes that he was happy with. I remember we had a laugh about it because often when directors’ questions are re-recorded, they sound way too inquisitive, and we likened this to Columbo!” What other technology was used in the sound editing process? “I used various software plug-ins, but my favourites for the purpose of sound design were Pitch ’n Time Pro, Altiverb and Doppler,” says Tae Hak. “My first task was to look for realistic heartbeats for six-month-old infants. After successfully collecting some natural heartbeat sounds, I blended them with other synthetic elements, varying the pitch slightly for each of the triplets. I applied various effects such as chorus and reverb, so each of the heartbeats had a slightly different texture. 
Once I was happy with the foundation sounds, I added other elements, such as underwater and ambiguous liquid sounds. I felt it was important for the sequences to build in a dramatic way, starting as mono and gradually filling the 5.1 space, before a hard cut into the interview room.” “Regarding the sound design, a variety of plugins were used: Pitch ’n Time Pro, Speakerphone and Whoosh REAKTOR, just to name a few,” Orororo adds. “I’ve always worked with the rule, ‘you make the kit, the kit doesn’t make you.’ You could have all the plugins in the world and still produce something mediocre, or something that the

director doesn’t like. I think the art of sound design comes from understanding a story and using your imagination and craft to bring out more elements of the story that can’t otherwise be communicated visually.” The relationship between the film’s visual elements and sound design required close collaboration with the picture editor, Michael Harte. “Michael was around for the whole process, which was a great help,” says Parkash. “He was in the spotting session, gave us dub notes, and his guide audio was quite explanatory, but it’s also your responsibility to offer up suggestions and try things out. Directors and editors will look to your instinctive emotions after seeing the film for the first time, and coming at it with fresh eyes and ears helps give a new perspective.” Orororo offers a particularly fresh perspective, having only been in the industry around five years: “I got into the industry via the old-school method of handing out CVs to post houses on foot. I didn’t even realise there was such a thing as ‘sound design’ until I did it as a module on my Creative Music Technology course at Kingston University. That experience blew me away, and I knew instantly that I wanted to pursue a career as a sound designer. “I initially started running at a company called Evolutions. It was great fun there, and I took every opportunity to network with freelance editors and producers, as well as sit in with the audio guys. The general route into audio there was via working as an

PICTURED ABOVE: Eddy Galland, David Kellman and Bobby Shafran in Three Identical Strangers



operator in their machine room facility; it was there that I learnt about the workflow of every department that exists in post. This gave me a huge advantage once I moved over to Molinare (still as machine room operator) because it was rare for someone who was interested in audio to know the ins and outs of every department.” Orororo worked his way up to an audio assistant position with George Foulgham (Molinare’s head of documentary feature sound), where he honed his craft. “I think it’s vital, if you can, to be an assistant before stepping up to a junior dubbing mixer/sound designer. It is important to understand the language of sound design, as well as how to communicate and work with clients; both George and Nas are amazing at this, and it really shows in the work that they do and the level to which they do it. “Generally, the industry has been quite welcoming,” Orororo continues. “Being Black British in an industry that has a severe lack of diversity can definitely be somewhat intimidating and isolating when you first enter. To be completely honest, I still feel like this now on the odd occasion, but I’m glad I can be here to inspire people from all ethnic minorities! One of the main things I will point out is that people tend to pay more attention to your attitude, so the best advice I can give for anyone starting out is to be keen, polite and positive! You’ll be amazed at how far those three things can take you. I work with a great team of people and our main focus is excellence. Excellence has no colour or creed, and it’s been a blast working with such amazingly talented people on amazing content.” Parkash has also found Molinare a positive place to work. “I was working as a studio assistant in a recording studio for a year or so, making tea, setting up microphones and looking after the musicians,” he recalls. “I then joined Molinare as a runner, and my first audio role was in the Foley department. 
I was buying, sorting and cleaning all the props, for shows like Silent Witness and Miss Marple! I then progressed to assistant re-recording mixer and then to re-recording mixer. I have found Molinare highly welcoming; it’s the best place for nurturing talent and promoting from within. My career hasn’t been planned in any way. People generally ask me if I’d like to do something and I’ll say, ‘Sure, why not!’ It’s been quite fluid in that respect.”


“People tend to pay more attention to your attitude, so the best advice I can give for anyone starting out is to be keen, polite and positive!” CHAD ORORORO

Tae Hak’s experience has been similarly positive: “I started as a runner after completing my MA in Audio Production at the University of Westminster. Molinare is the only post production facility where I have progressed in my career. I think it’s a great place to nurture people into their careers; it’s a place where you can learn so much from one another and, perhaps most importantly, it’s a place that recognises talent when it sees it.” In addition to sound design, Tae Hak works as a dubbing mixer: “I work in a small team with Gregg Gettens, the head of our broadcast factual sound department, who trained me up. Due to the size of our team, it is essential that we are able to multi-task, covering various aspects of the sound production process, from the initial audio prep and sound design to VO recording and mixing. These are valuable skills that are required throughout the entire audio production process. As a sound designer, I focus on the editing process, as well as creating new sounds over shots where they need to be embellished. As a dubbing mixer, I mix all the elements of sound, including music and narration, to finalise a product.” Speaking of the finished product, what did the team think of the final film? “I love it!” says Parkash. “I think it’s brilliantly executed. It’s become a seminal documentary that people will cite for years to come, and that makes me feel proud. Many have said that we are entering the ‘Golden Age’ of feature documentaries, because of how well they are faring at the box office. In one case, there was a set of twins who, after having seen this film, realised they had been adopted through the same agency, and were subsequently reunited!” “To be fair, I was completely blown away before we even got started on the project!” Orororo adds. 
“I remember giving Nas a nudge towards the end of our spotting session and saying, ‘Mate, this doc is going to be huge!’ I think the entire team behind Three Identical Strangers has done an amazing job engulfing the audience in a story that is almost too hard to believe.” Tae Hak agrees: “It is a brilliant, compelling film and I am honoured to have contributed to it. I’m so happy for our team, especially whenever someone says to me, ‘You have to see this new doc that’s just come out.’ Recently, more often than not, I know what’s coming next!” The director gets the final word, and Tim Wardle has this to say about Molinare’s performance: “Working with Molinare on Three Identical Strangers has been a great experience. Nas, Chad and Kim did an incredible job creating an emotive and vibrant soundscape out of a wide range of source material, from present-day interview, actuality and reconstruction to decaying video and audio archive. Working to a very tight timeframe and budget, they crafted something special that has played a significant role in the film’s extraordinary theatrical success around the world.”



By Alex Bassett, advanced production technology solutions and innovation specialist


Virtualisation is giving broadcasters a platform to do more than ever: be in more places, increase resource flexibility and lower distribution costs. However, when you have apparatus rooms full of legacy SDI equipment, you naturally look to a SMPTE solution to make the jump to IP. But isn’t this solving only half the problem? Yes, there is an argument that switching everything to IP creates a centralised system, with one router on the local network, that makes the management of resources much better. But that is exactly the issue: it is local.

A real problem facing a large number of major broadcasters is that cost per square metre is at a premium, and it only continues to go up in cities like London and New York. Keeping all your equipment in the same building and losing multiple rooms and floors to storage: what a waste of space! Cutting the SDI cable should be liberating broadcasters to repurpose that space into revenue-generating areas, such as more studios or offices, rather than having multiple offices across the country. This can bring more people under one roof, lowering multiple costs across the business. The smart solution is to lease a cheap warehouse unit in an area where rental prices are significantly lower but the appropriate network infrastructure exists.

The problem is, despite all the advances, we still haven’t solved all the performance issues. At present, we cannot control multiple vision sources when the back end of the switcher is hosted in the Cloud, and then switch between them, without packet loss or significant latency. Simply put, in a traditional workflow, when the operator pushes cut they see it happen instantly, without any lag or stuttering, every time. In the Cloud, you do not get this success rate, and this is where we hit a roadblock, for now. There are areas we could tackle straight away, like graphics, where a few frames of delay has almost zero impact on the on-air product. Don’t get me wrong, we should be doing this. 
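The scale of that latency problem can be made concrete with a rough frame-budget calculation. The sketch below uses illustrative, assumed round-trip figures, not measurements from any real deployment:

```python
# Rough frame-budget sketch for a Cloud-hosted vision switcher.
# The round-trip figures below are illustrative assumptions only.
import math

def frames_late(round_trip_ms: float, fps: int) -> int:
    """Whole frames by which a 'cut' lands late, given control round-trip time."""
    frame_duration_ms = 1000 / fps
    return math.ceil(round_trip_ms / frame_duration_ms)

# Local SDI control room: near-zero round trip, the cut is frame-accurate.
local_delay = frames_late(1, 25)     # within the next frame boundary
# Hypothetical Cloud back end: 120 ms round trip at 25 fps (40 ms per frame).
cloud_delay = frames_late(120, 25)   # the cut lands 3 frames late
```

This is why graphics, which can tolerate a few frames of delay, are an easier first candidate than a live vision mix, where the operator expects the cut on the very next frame.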
If broadcasters have multiple control rooms in multiple locations, a centralised storage hub would mean that the primary location could ‘borrow’ extra resources when they have a special project, such as an election. This benefits both cost and disaster recovery. If something breaks, gone are the days of an emergency physical re-patch to get you through a programme! While the adoption rate for this type of solution on things like esports coverage and smaller productions is very high, major broadcasters are not in a position to make this full shift. The existing infrastructure and building legacy mean this is not an overnight switch. However, many are already testing these types of workflow to understand the solutions and see if they are viable at scale. In the short term, it is likely that we will have a

hybrid solution and then, as studios and control rooms are upgraded, more and more equipment will gradually make its way off site. A major factor at the moment is cost, even if we could solve all the technical issues. Any piece of equipment will need IP capability, so this is either an upgrade or a replacement investment, followed by the ongoing licence cost for IP outputs and Cloud computing. Now, as the major broadcasters adopt this model, they will hopefully be able to negotiate a package with major suppliers to reduce costs, at least on things considered broadcast-critical, for periods of the day. However, this may impact smaller productions, as they won’t be able to negotiate anywhere near the level of the major entities. I see a straightforward solution to most of these issues: a middleman who essentially owns all the equipment and gets preferential rates on Cloud costs. The users then only pay to play; they simply book the equipment for the time their production requires.

In an ideal world, a control room doesn’t need to look anything like it does now. As the use of automation continues to increase, and operator interfaces develop into a range of touchscreens and, for now, keyboards, this new world could look, in its simplest form, like a computer with some high-quality monitors and speakers. The end goal, surely, is to enable any desktop on the correct network to take control of any studio. I appreciate that is an exceptionally broad overview, and each individual studio set-up and programme’s requirements will need to be considered. Of course there will be high-profile shows that require more resources, but equally there are plenty that don’t, and that are already on air for multiple hours a day. We also have to look at the human factor: we are creatures of habit, and removing the four walls that make up the control room isn’t something that can happen overnight. 
In many legacy broadcasters this would be a massive culture shift, but equally there was a time when the idea of one or two people controlling a whole studio seemed ridiculous. I have no doubt that all broadcast equipment in the not-so-distant future will be ‘Cloud ready’ and that every major broadcaster will adopt this route, especially those in major cities. In the short term, the challenge is the technical teething issues, but the solutions for these feel within arm’s reach. The long-term battle, which will slow any large-scale adoption, is shifting the human mindset of what a studio and control set-up has always been. Once there are examples at the top end of the industry, where a traditional control room is a thing of the past and people are regularly executing clean, well-presented live hours of television, then everyone will follow suit. But first, we need a pioneer.



Philip Stevens explores the virtualised world and sees the actual benefits


Content management is a key part of the broadcasting workflow, and nowhere is this more apparent than in the operation of playout. Ensuring the right content is played out at the correct time, and with accurate details, is essential. Reliable, virtualised broadcast playout solutions are critical – and they’re here. One company that has been developing the software for such virtualisation is Cambridge-based Pixel Power. Originally known for its powerful graphics and branding solutions, the company had the foresight some years ago to appreciate how its technology could be used beyond the production of on-screen content. “The transition from the graphics side is about how we perceived an industry trend in terms of a need,” explains Ciaran Doran, executive vice president, sales and marketing. “Take promo production as an example. There is often a need to produce a large number of variants – maybe up to 50 versions of the same promo. The old way would have seen post production produce all the finished versions and then place them in a queue for playout. Using our technology, that operation is automated, which allows the production of multiple versions of trailers and promos with accuracy and high productivity.” The technology allows a broadcaster’s team of


graphics designers to concentrate on creative work, designing new campaigns and brand material rather than repetitive, manual end-board re-works. This switch to the new way of working involves building a graphics template, agreed by all departments, that contains all the relevant elements. Those elements – visual content, voiceover, time, day – can be pulled automatically from the relevant files and drawn into the template, and the resultant file inserted into the playlist. “By using an agreed template, the material will appear in the right format, the right font, the right colour, the right position on the screen and so on,” says Doran. “What’s more, by retrieving the information from data in the EPG or schedule, you have actually pulled what is coming up next with its date, time and any relevant information. So, if what is ‘coming up next’ changes, then you are displaying the correct information at the right time. In effect, you have evolved from a post production function to a just-in-time function.”

POWERFUL HARDWARE
So, how has this technology evolved? “When we first started doing graphics and branding software there wasn’t the computer power available off the shelf, so we built our own powerful hardware


“Virtualisable software applications enable new opportunities, such as hardware on your doorstep or in the private or public Cloud.” CIARAN DORAN



to drive the software applications. Now, with the huge expansion in the IT world, we’ve got COTS (commercial off-the-shelf) hardware that is powerful enough for us to simply buy in and drive our software.” “But,” Doran adds, “there is more to consider. If there’s COTS hardware available, then software has to be developed that can run on anyone’s server, and the server can be in the broadcaster’s facility, in a remote office, or it can be run in the public Cloud.” Doran explains that by building the software from the ground up, functionality could be included that allows broadcasters the freedom to use specific features, or blocks of features, as required, with a flexibility that was never possible before. “For example, it’s the same software regardless of whether you have the machine sitting in the rack next to you or in the public Cloud. It’s the same licence. And we do not discriminate in any way between where you use that software and that licence.” If a broadcaster wants to start off with its own hardware, Pixel Power will charge the same for the licence even if there is a later move to COTS hardware or deployment in the public Cloud; they will even help with the transition. Doran continues: “Replacing bespoke hardware with COTS and using a pure software platform offers very real benefits compared to before. New features can be more easily deployed over the lifetime of an installation, as they are needed. As one example, we deployed the latest NDI codec within a short period at the end of 2017 for a customer who said they’d really like this facility. That’s one benefit of our common platform – we don’t have to worry about what hardware is being used – and, more importantly, neither does the customer.”

FLEXIBLE FACILITY
Another significant benefit of the Pixel Power technology is the provision to acquire or use specific features of the software as and when required. “Many users of traditional hardware only use about 60 per cent of the features for 40 per cent of the time,”


states Doran. “So why install a solution as a permanent feature when it’s either never used or only used occasionally? With a flexible software solution like ours, a broadcaster can get access to a feature that is not needed all of the time; it can add on that facility for a determined length of time – whether that is by the quarter or by the hour. Our software is very granular and very flexible – indeed, our pricing models include pay-as-you-go, pay-per-feature, Opex or Capex.” One broadcaster that has seen the benefit of the Pixel Power developments is Virgin Media Television (formerly TV3) in Ireland. “In an increasingly competitive market, the broadcaster wanted to increase the vigour and prominence of its branding and promotions across its entire portfolio, including SD, HD and multi-platform outputs,” says Doran. “Recognising that this could best be accomplished through automated conforming and fulfilment of graphics, the company investigated our solution.” The result was the introduction of Pixel Power StreamMaster BRAND. The system, which comprises the latest software solution running on COTS hardware, manages content across multiple channels and formats and delivers the power of the graphics engines. Using a single workflow and schedule, the system enables the automatic selection and insertion of the right graphics version at the right time on the right channel, as well as managing the squeeze-backs and allowing different branding content in the SD and HD streams. “Virgin Media purchased StreamMaster BRAND to enable it to deliver graphics and branding right at the point of playout. So instead of preparing a fully finished graphic, or sequence, the creative department creates a template on which all the elements are then built moments before going to air. This ensures that any changes in the schedule result in new graphics prepared accordingly.” Another customer is Sky Creative Agency, within Sky Television. 
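The just-in-time templating idea described above (an agreed template filled from schedule data moments before air) can be sketched in a few lines. The field names and data below are hypothetical illustrations; this is not Pixel Power’s actual API:

```python
# Sketch of just-in-time graphics templating: placeholders in an agreed
# template are filled from EPG/schedule data at the point of playout, so a
# schedule change automatically produces corrected end-boards.
# All names and data here are hypothetical illustrations.
from string import Template

END_BOARD = Template("$slot: $title, tonight at $time on $channel")

def render_end_board(template: Template, epg_entry: dict) -> str:
    """Fill the agreed template from the current EPG entry."""
    return template.substitute(epg_entry)

epg = [
    {"slot": "Next", "title": "Three Identical Strangers",
     "time": "21:00", "channel": "Channel 3"},
    {"slot": "Later", "title": "News at Ten",
     "time": "22:00", "channel": "Channel 3"},
]

# One template, many versions: re-rendering after any schedule change
# always displays what is actually coming up next.
boards = [render_end_board(END_BOARD, entry) for entry in epg]
```

The design point is that only the data changes between versions; the agreed template guarantees the right format, font and position every time.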
It implemented Pixel Power’s Factory automated production technology, using the well-known Clarity graphics platform and Gallium Workflow Orchestration. Sky’s commissioning tool provides all the data required for Factory to track and produce promo versions from the assets provided by the broadcaster’s creative and design teams. Using its own compositing, DVE and graphics capabilities, Factory generates all the versions completely automatically. Because it is driven by the information entered into Sky’s commissioning tool, which in turn is linked to the scheduling system, errors in transmission times, channels, sponsors and so on are eliminated. “Sky started out with its simpler promotions, but was soon able to include the highly complex Sky Sports promos


“Broadcasters are already discovering the enabling potential of virtualisable solutions and preparing to move to less expensive locations.” CIARAN DORAN

and other services,” reveals Doran. Once Factory receives a set of instructions it generates all the versions, faster than real time, and delivers them as technically compliant files to the broadcast MAM system. “Using Gallium FACTORY Sky can create hundreds of different versions of the same promo,” explains Doran. “Once the ‘creative’ completes the compelling promo in 10, 20, 30, 45 second versions, Gallium FACTORY uses a database to fetch all the correct and relevant information to create the multiple versions of Now, Next, Later promos.” SECURITY Although all Pixel Power software solutions can be used on hardware at the broadcaster’s facility, it can also be deployed at a remote data centre or be used in the pubic Cloud. And that raises the question of security. “Broadcasters have taken this seriously and are continually bringing in that expertise. Look at the number of jobs that are being advertised for both IT engineers and cyber security individuals. Cyber security is a very real issue in many fields and broadcasting is no different. In many parts of the world broadcasting is political power and we have seen in recent times how hacking into those systems can do an awful lot of damage,” points out Doran. Although Pixel Power doesn’t currently offer cyber security, the company which recently acquired the business – Rohde & Schwarz – has built up a huge cyber security division to support customers in all its markets. The two companies will remain separate entities, with the Cambridge operation now known as ‘Pixel Power Ltd. – A Rohde & Schwarz Company’. “Rohde & Schwarz wants to protect the Pixel Power brand and benefit from it,” notes Doran. “What we are achieving is way ahead of others and we have strong sales growth these last few years. 
But despite being technically advanced and having a long and trusted reputation, it is sometimes difficult for a major broadcaster in one part of the world to say 'yes, you are ahead of the game, but you are a small company based on the other side of the globe.' Rohde & Schwarz helps give us the stature to move into new markets."

WAY FORWARD
Doran concludes: "The transition from SDI to IP in the last few years is simply changing from one form of transport stream to another – but the key is that it enables us to virtualise the software applications. All our solutions, whether a simple graphics tool or a full master control/automation/playout suite with sophisticated branding, can be virtualised. Virtualisable software applications enable new opportunities, such as hardware on your doorstep or in the private or public Cloud. And using the Cloud creates a paradigm shift because it changes the way in which broadcasting can take place.

"If you can operate applications from the Cloud, then you don't need a bespoke facility. You can use less expensive office facilities and, as long as you have a fast-enough pipe to wherever your data is, wherever your content is, wherever your playout system is, then you can manage all of that in a Cloud-based system. This will be the major transition in broadcast playout over the next 10 years. Broadcasters are already discovering the enabling potential of virtualisable solutions and preparing to move to less expensive locations. With technology like ours, you can manage content from just about anywhere."


SERVICING THE SERVICES Philip Stevens catches up with some of the latest developments in religious broadcasting


PICTURED ABOVE: Volunteers man the controls of the broadcast operation

What do you do if you can't find a piece of broadcasting kit to meet your unique needs? The obvious answer is that you produce it yourself. And that's exactly what a church in the United States did when it needed to create a broadcast architecture to transmit its services. But rather than simply keep the system to itself, the church used an existing commercial part of its organisation to market the solution to the wider world – with some outstanding success.

"Based in Denver, Colorado, Living As One is a privately held technology company founded in 2014 to fill a void in the professional video market for a highly resilient live streaming solution," explains Collin Jones, vice president of sales and marketing. "We were very fortunate to already have two members of our congregation who were employed in corporate broadcasting. And they were aware that none of the solutions available to us would meet our specific needs. So, they set about devising a system that fulfilled our demands. And then we found other organisations were suitably impressed to want the solution for themselves."

COST SAVINGS
That solution, known as Living As One Multisite Platform,


centres on a system capable of sending and receiving audio and video over the public internet. “It offers a level of resiliency that has not been possible previously,” says Jones. “And because it uses the internet, the costs are significantly cut. What we have found is that this solution is especially useful for those churches which meet in rented halls where a permanent set up for broadcasting is not possible.” Living As One makes and markets the package, which includes multisite encoders for real-time video capture, LAN and Cloud distribution for scalable delivery, and multisite decoders for live/DVR playback. All this is backed up with full support by Living As One personnel. According to Jones, the system’s encoders and decoders mean that there is zero content loss across the entire transmission path. “During playback, all content comes from the local solid-state hard drive which buffers live video in advance. What this means is that the remote sites will see exactly what has been encoded - without blackouts, buffering, or jitter.” PRACTICAL SOLUTION One of the organisations which has benefitted from using the Living As One technology is ICF (International

PRODUCTION AND POST PICTURED LEFT: iMac users can easily access church services

Christian Fellowship). "The church began in Zurich, while the church here in Munich was started in 2004 by Tobias and Frauke Teichen," explains Felix Hiesinger, technical director. Hiesinger says that the video coverage began as a service for people who were not able to attend at the church building. But the broadcast grew, and it was found that an increasing number of people were watching, not only in Germany but also in other parts of Europe and worldwide.

"Although we believe that everyone should attend a local church, we know that is not always possible, so we developed our initial podcasts from being just a side product to a production that was becoming more important in our schedule. Since we have different campuses, we started using video messages in our locations and therefore needed to develop our broadcast to a higher quality."

He continues: "We needed a reliable livestream solution for our campuses. We wanted to have the option of setting different marks during our service, so each campus can individually decide where to jump in and leave the livestream. The campuses should be able to control everything on their own, without having an expert onsite. They should be able to decide the start of the message themselves, since we have different service times."

Since the ICF campuses also have different internet connections, it was important to have a reliable system with local buffering. An additional requirement was the option of sending more than two audio signals at a time. The search for a suitable system started in May 2018, and in August the Living As One solution was chosen. By October 2018 it was being used regularly in services. "The project was completed within 10 weeks, including developing the system, installing, training the

crew and using it." Although ICF employs one full-time person for technical projects and leading the technical teams, volunteers have been recruited to fill the positions needed for service coverage. "The broadcasts are shown on YouTube, iTunes, BibelTV, München TV and locally in our other campuses. Right now, we use four cameras for the broadcast in a normal service, but there are plans to increase this to six cameras." He concludes: "Finally we are able to stream our celebrations live to our campuses on a reliable basis with the service of Living As One."

FURTHER DEVELOPMENTS
In late 2018, Living As One announced the release of the Web Platform, a ground-breaking system that completely automates high-quality and ultra-reliable online video streaming. Using technology originally designed for the Living As One Multisite Platform, it introduces a level of simplicity and resiliency to online video delivery that is unlike any other streaming solution. "The innovative system was designed to take the hassle out of live streaming – after a simple setup process, in fact, the Web Platform runs itself," says Jones. "Recurring events, such as weekly church services, may be scheduled to stream automatically or launch along with existing Multisite Platform events. They can be easily embedded on to a website or other platforms, launching a robust online campus that requires no weekly maintenance."

Utilising the next-generation streaming technologies MPEG-DASH and HLS, Web Platform streams may be viewed on any device with full DVR playback. Persistent URLs are also available for iOS/Android apps and OTT devices such as Roku and Apple TV. Additionally, the Web Platform supports scheduled Sim-Live (Simulated-Live)

PICTURED ABOVE: Preparing for a broadcast


PICTURED LEFT: Worshippers from around the world can share in the Munich service

WORLDWIDE COMPATIBILITY Philip Stevens examines the Gearhouse kit used in church broadcast services

playback of content from the Cloud, providing the ability to host an enriching online guest experience through synchronised replay of events at any time.

WHAT MAKES IT DIFFERENT?
The Living As One Web Platform is the first streaming solution offering 100 per cent resilient Cloud transcoding for live video. With existing live streaming technology, if video from the encoder doesn't make it to the Cloud within a few seconds, that video content is lost and the Cloud transcoder will not have a perfect copy to transcode into multiple bitrates. As a result, all viewers will see video that has been transcoded with missing data, resulting in a poor viewing experience including buffering wheels, jitters and quality loss.

With the Web Platform, which is powered by the patented Resilient Streaming Protocol, there is no time restriction on when the video is sent to the Cloud, and the transcoder can wait for a perfect and complete copy without worry of temporary network interruptions delaying the transmission. "This means, for the first time, viewers will see perfect quality video across any device. The delay is configurable, and even a two-minute delay makes a great difference in viewer quality. This results in a high-quality, uninterrupted viewing experience atypical of most live environments," explains Jones.

Because of the Resilient Streaming Protocol, content is guaranteed to be transmitted without loss of data over any public internet connection, including 3G cellular networks. This means that no video content is lost, even in the case of a complete temporary internet outage at the broadcast site, as long as the outage is not longer than the configured delay.

Web Platform includes Cloud transcoding, which converts a single high-quality live video stream into multiple bitrates in the Cloud. Converting the video into multiple bitrates in the Cloud conserves upload bandwidth at the broadcast site, removing the need to upload every bitrate from the venue and allowing more to be done with less.
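The Resilient Streaming Protocol itself is proprietary, but the behaviour Jones describes (segments are retained and retransmitted until safely delivered, so an outage shorter than the configured delay loses nothing) can be sketched in a few lines. Everything below, from the class name to the segment-count delay window, is illustrative and not Living As One's actual implementation:

```python
import collections

class ResilientSender:
    """Toy sketch of a lossless send buffer: segments are retained until
    acknowledged, so a temporary outage loses nothing as long as it is
    shorter than the configured delay window."""

    def __init__(self, delay_segments):
        self.delay_segments = delay_segments   # how many segments we may buffer
        self.pending = collections.deque()     # unacknowledged segments

    def enqueue(self, segment):
        if len(self.pending) >= self.delay_segments:
            raise RuntimeError("outage exceeded the configured delay window")
        self.pending.append(segment)

    def flush(self, network_up):
        """Attempt delivery; return the segments that arrived, in order."""
        delivered = []
        while network_up and self.pending:
            delivered.append(self.pending.popleft())  # acknowledged, safe to drop
        return delivered

# Simulate a two-segment outage followed by recovery: nothing is lost.
sender = ResilientSender(delay_segments=10)
received = []
for i, up in enumerate([True, False, False, True]):
    sender.enqueue(f"seg{i}")
    received += sender.flush(network_up=up)
print(received)  # → ['seg0', 'seg1', 'seg2', 'seg3']
```

The point of the sketch is the trade-off the article states: a larger buffer (a longer configurable delay) tolerates a longer outage, at the cost of added latency.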
Jones continues: “On the audience side, smooth streaming is reinforced by adaptive bitrate streaming (ABS), which dynamically selects a bitrate depending on available bandwidth and device processing power. ABS is very important to ensure viewers have the best possible viewing experience, and it is included with all Web Platform plans.” He concludes: “Through the Living As One Web Platform, we have a solution that truly gives peace of mind to churches and organisations allowing them to broadcast a high-quality, professional live stream without worry or hassle, but with excellence at an affordable cost.” n
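The adaptive bitrate selection Jones describes can be illustrated with a toy rendition picker: choose the highest bitrate that fits within a safety margin of the measured throughput. The ladder values and the margin below are invented for the example, not the Web Platform's actual logic:

```python
# Hypothetical bitrate ladder (kbps) of the kind a Cloud transcoder produces.
LADDER = [400, 800, 1600, 3000, 6000]

def select_bitrate(measured_kbps, ladder=LADDER, safety=0.8):
    """Pick the highest rendition that fits within a safety margin of the
    measured throughput, falling back to the lowest rendition otherwise."""
    budget = measured_kbps * safety
    candidates = [r for r in ladder if r <= budget]
    return max(candidates) if candidates else min(ladder)

print(select_bitrate(5000))   # → 3000 (6000 exceeds the 4000 kbps budget)
print(select_bitrate(300))    # → 400 (below everything: take the floor)
```

Real players re-run this decision continuously per segment, and also weigh buffer occupancy and device processing power, but the core idea is this lookup.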


With its heavy emphasis on musical arena-based events, a contemporary Christian church with places of worship across Australia, the US, Europe, the UK and further afield recently chose Gearhouse Broadcast to help deliver its broadcast services from its Sydney and Melbourne church spaces. The church needed a solution to help it capture high-quality footage to share with audiences over the internet and its own television channel.

"We needed to implement a solution that aligned with its house style and matched workflows already in use at the church's other locations across the world, including the UK and Europe," explains Simon Smith, systems engineer at Gearhouse Broadcast. "We supplied the church's Sydney arena with a SAM Vega 700 router, Sony HSC-300R camera channel, Video Devices' PIX 270i rack mount video recorder and a Ross Carbonite Black mid-sized production switcher with two control surfaces. The Vega 700 was selected as it offers impressive flexibility for its size, as well as being resilient and reliable. The Sony HSC-300R camera channel was chosen as it seamlessly integrated with the workflow and generated maximum returns on the church's production budget."

The choice of kit was designed to provide an easy transition for the volunteers using the equipment to film content at the church. Kit deployed at the church's Melbourne space included a Ross 72x72 3G/HD/SDI router, a Ross Carbonite Black mid-sized production switcher with two control surfaces and Ross Digital Glue.

"The installation in the church's Sydney space provided an easy transition to the new higher-quality equipment. Aligning with the house style, the volunteers in the congregation who film the content were able to come in, pick up and roll with the equipment supplied," says Smith. "The new kit fitted perfectly with the church's house style as well as delivering a specification and level of service

Dan Meier asks director of photography Jonathan Harrison what to expect from his LED lighting masterclasses at this year’s BVE


PICTURED ABOVE: Jonathan Harrison

For the past 20 years, DoP Jonathan Harrison has been running lighting seminars aimed at teaching the principles that he saw missing from so many image makers' skillsets. "There was a big gap to (excuse the pun) enlighten people into how to do a job professionally," he explains. "I'd been doing some work with advertising agencies because they were swapping over from film to electronic pictures. Having worked at the BBC on film (I'm a film cameraman by trade) but moved to electronics, I'd worked out a way of setting up lighting that made electronic pictures look filmic – and not make them look like cheap, nasty electric pictures.

"I started presenting seminars at BVE (the original was at The Production Show) at Wembley, and it's just rolled on from there. Now the whole idea of these seminars is to enlighten and educate, to show people new equipment, but also how to use it efficiently and quickly – because for production companies, time is of the essence. A lot of people have lighting kits, and they come up to me and say, 'I've got a box of lights and I just point them, I actually don't know what I'm doing.' They understand the word 'modelling' – you put a light up, you get a shadow. But what do you do with this shadow, what are you looking for?"

LOOKING AND SEEING
Harrison is running two seminars at this year's BVE: Creative Lighting with LED sessions on Tuesday and Wednesday, and on Thursday his classic core lighting skills masterclass 'Lighting on the Run.' "I have emails from students and ex-students saying, 'I've learnt more from this hour than I have from three years at college,' which is


shocking!" he says. "I feel a little proud, you know, people send me emails like that, it makes my day worth it. I'm not just there spouting as an old fart has-been, this is stuff that many of them will have to do on a day-to-day basis. And it's stuff that if people pick up on and really understand, they can go on to make really serious glossy pictures.

"Anybody can illuminate, a school kid can illuminate: you put three lights up and you can see a picture. But when a lot of people look and see, they're not looking, or don't understand the difference between a high-quality image and an average image – and there's an awful lot of average imagery out there. And what I hope I can do is pass on core industry lighting skills to make people look and not just see."

CONTROL FREAKS
With almost 40 years' experience in the industry, Harrison has honed these lighting skills across all kinds of projects: dramas, documentaries, kids' shows and music videos to name but a few. And while those lighting principles remain at the core of his work, the introduction of LED has smoothed the process considerably.

"Using LED nowadays is an absolute dream," says Harrison. "One of the problems in the past has been we had two light sources and only two light sources. We had tungsten lights, classic incandescent studio lights, or we had HMI, which were the classic daylight lamps which you've probably seen being used on the streets for feature films; big silver and blue lamps that create daylight.

"Now, one of our problems is that all cameramen, all DoPs, all lighting directors, we're control freaks – we have to be able to control our light, in terms of direction, texture

and most importantly colour. What happens when you dim that tungsten light? It changes colour. And so if you change its colour you've got to change your white balance. Change your white balance for that lamp, everything else changes colour! All you want that light to do is get darker, not change colour."

Conversely, LED lights can be dimmed without changing colour, as Harrison explains: "If I just want to fine-tune a key light or a back light or a fill light, I just dim it and the light just drops off, without having to go, 'oh I've dimmed it by half, I've got to put a half blue filter on it to bring it back to the same colour temperature as the key light.' And so consequently the flexibility of these new lights is just beautiful."

Harrison uses Kino Flo and Dedolight, lighting tools that he considers among the finest in the world: "The Kino Flo Diva-Lite range have RGB in them as well, so you can create colour effects. All the effect lamps are used on features all over the world because of what they do. There are other lamps that do this, but Kino Flo do it in a specific, joined-up way and it allows the cinematographer to use the same tool, programme that tool if you want to for a specific look, and you can set that look across the whole array of lamps accurately, continuously and over a long period of time."

LIGHTING THATCHER
In 1993, Harrison received a BAFTA nomination for the BBC documentary Thatcher: The Downing Street Years. "We were with Thatcher for 40 hours," he recalls. "A lady of her years had a challenging face and features for a photographer. Because of that I was using a much larger flattering soft source.
So depending on the skin of the person and your location, you choose your light sources based on subject and location, and portable, practical-sized lamps depending on what you're doing."

Harrison explains that smaller LED lights can be battery powered, making them a much more efficient proposition: "Something that would traditionally throw out a lot of light you can run off a 24V battery for nearly an hour and a half, and all the smaller little Dedolights, they run off batteries, some of them for up to two hours. So the small portable efficient lighting systems that I use for certain areas of the business are practical because you can battery power them, they're small, they're efficient, and they give out a lot more light than their traditional incandescent counterparts."

He continues: "I would use compact colour-correct lamps for documentaries and small productions – food shoots for instance, you'd probably use what we call an Octodome; a five-foot diffuser source with smaller LED lights, accents for surgically inserting light into that shot, whether it's a piece of food or if you just want to lift someone's eyes out, you use a specific tool on a Dedolight

called a DPEYESET, which was actually invented for Home Alone, if you remember when he looked through the letter box. “But if I’m doing a studio light I use bigger lamps that will go up in a big studio that are pole-operated, and we control those with what we call honeycombs. So you can spread the light across the studio - broad overall illumination - and then you pick out individuals and key light them with specific light, directional light to give texture and quality modelling, and then you choose other lights for lighting the set and different lights that separate the foreground from the background.” COOKING WITH LIGHT All these considerations go into DoP work on a daily basis, whether it’s a matter of assessing the set, the subject or the camera. “It’s pointless if you’re buying lights that are not big enough or are too small and don’t give you the depth of field you need to work accurately with the cameras you’ve got,” explains Harrison. “You’ve got to be able to keep a picture in focus and if there’s no depth of field because there’s not enough light, you’re screwed. And so all these elements in terms of creation are going on all the time. “What I attempt to do in my seminars is pass on little bits of this because I can’t pass on 40 years of experience in an hour, and it’s my dilemma - every time I really scream in a dark room working out what do I leave out. There’s a million things you think about when you turn a light on, and work back through that to, ‘what do you want it to look like?’ Some of it comes down to camera but the most important thing: without light there’s nothing, and without quality light you’ve got an average picture, and I don’t want people to have average pictures, I want them to have something glossy that they will be proud of, that their customers will be proud of and come back and ask for them again because what they do is of quality. 
And there’s an awful lot of rubbish out there.” Also something of a foodie, Harrison has the perfect lighting analogy: “If you make any food you can just throw a bunch of ingredients together and you get food, and you can eat it. But if I spend a couple of hours, several hours, half a day, taking those ingredients with love and care, I can make something akin to a Michelin star meal with the same ingredients, and that’s exactly the same as lighting. You take a light source, understand why you’re putting it where you’re putting it and you add all the other lights and every light is like an extra ingredient into your recipe - it’s a spice, it’s a flavour, it’s a texture - and this builds up the emotion of that shot. Good image making is about using quality light to craft pictures that people remember.” n

“I hope I can pass on core industry lighting skills to make people look and not just see.” JONATHAN HARRISON

Jonathan Harrison’s lighting seminars run at BVE from 26-28th February at ExCeL London



AV1 vs HEVC: DOES THE BATTLE STILL RAGE ON? By Stefan Lederer, CEO and co-founder, Bitmovin


The battle between codecs is set to be a hot topic at NAB Show this year, with two obvious options as the lead protagonists: HEVC, perceived as the heir to H.264; and the new kid on the block, AV1. Pinning these two candidates down as the potential winners of the codec race may sound obvious to the entire industry, but the reality is much more complex, as always with codec-related questions.

For many years, the hegemony of AVC/H.264 was a given. Every developer had to get familiar with the latest developments pertaining to this specific codec and be able to deploy it within existing infrastructure. Those days are long gone. Today, we have access to a plethora of codec options that respond best to specific use cases. This is why we advocate a multi-codec approach to video workflows – now more than ever.

The fundamentals of encoding haven't changed: improved compression rates will continue to be front of mind for developers worldwide as consumers watch even more video online, despite the lack of increased network capacity in the very near future. In addition to volume, consumers now want even more variety in content, available in high quality and delivered within milliseconds, on any platform over any network. This puts even more emphasis on choosing the right codec for the right use case.

Digitally minded consumers today expect their online services to rival linear broadcast both in terms of quality and availability. In Western Europe – and most mature markets – video tends to be transmitted over high-capacity networks such as 4G, fibre optics or high-speed broadband. This allows developers to put the emphasis on quality, safe in the knowledge that their encoding will be transmitted over the network. In emerging markets such as the Middle East, South East Asia and Latin America, the challenges are completely different: consumers use smartphones, primarily connected over 2G and 3G, with recurring network drops.
This puts pressure on developers to encode their content in a format that will easily be transmitted over less reliable networks. Developers can no longer rely on one codec to serve all their needs. Instead, they need multiple versions depending on the type of content, target platform and network availability. Yet encoding an asset in multiple formats is costly: developers have to factor in CPU cycles, power, cooling, rack space and storage, which grows with each new iteration that needs to be saved in a data centre. This impacts the developer's ability to deploy multiple codecs at will.

Our own tests have shown that AV1 provides up to a 40 per cent improvement in compression efficiency compared to HEVC. This ability to massively shrink the file size is a very attractive prospect for operators looking to deploy solutions for mobile video or in emerging markets with unreliable networks. This can also make data-heavy processes such as live sports easier to deliver to a wider range of devices. However, these results need to be balanced by the fact that HEVC is already a mass-market proposition and available as standard in multiple devices – including mobiles. Therefore, developers need to evaluate whether they want to focus on raw performance or immediate availability.

Before making a choice, developers need to pinpoint the content type, target device and network capacity. Once they have decided on the parameters they have to deal with, determining the right codec option will be a lot clearer. Today, developers can even test their content to easily identify areas of improvement and select the perfect codec for their exact needs.

We strongly believe that AV1 will start to roll out to millions of viewers worldwide this year. The very biggest players in online video will take the lead, with premium VoD services first, as they can spread the cost of compute resources while the wider market begins mass adoption of AV1. Yet there will be no real winner in the codec race, as the trend towards a multi-codec world accelerates to respond to the need to allocate the right solution to a growing range of scenarios that developers will continue to face.
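The decision process described above (content type, target device, network capacity) can be sketched as a simple lookup. The support table and the efficiency ranking below are illustrative assumptions for the sketch, not a real vendor API:

```python
# Hypothetical decision table: which codecs each target platform can decode.
DEVICE_SUPPORT = {
    "smart_tv":       {"h264", "hevc"},
    "modern_browser": {"h264", "av1"},
    "legacy_mobile":  {"h264"},
}

def choose_codec(device, network_kbps):
    """Prefer the most efficient codec the device can decode. On constrained
    networks compression efficiency matters most; on fast networks the
    mass-market option (HEVC) is a reasonable default."""
    supported = DEVICE_SUPPORT.get(device, {"h264"})
    # Rough efficiency ranking per the article: av1 > hevc > h264.
    preference = ["av1", "hevc", "h264"] if network_kbps < 2000 else ["hevc", "av1", "h264"]
    for codec in preference:
        if codec in supported:
            return codec
    return "h264"  # ubiquitous fallback

print(choose_codec("modern_browser", 1500))  # → av1
print(choose_codec("legacy_mobile", 1500))   # → h264
```

A real workflow would of course also weigh per-title encoding cost and licensing, but the multi-codec argument reduces to exactly this kind of per-request branching.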




It's daunting… you've grown your media workflows around a digital asset management (DAM) system, but either you've outgrown that system or it has simply not kept up with the times; you are looking at how you can work with dispersed teams, become Cloud-enabled, and release the value locked up in your barely accessible archives. So your 'next-gen' project is launched to move away from your old DAM.

Project stage one: analysis of your current state of play and requirements for the new system. You've taken a sharp breath as you realise there are silos of data on disparate storage systems (from offline drives, to SANs, to LTO) and that the DAM holds the metadata, goodness knows how, and in a proprietary format. Things can get tough, and here are 10 good reasons why:

IN YOUR OLD ASSET MANAGER:
1. Custom metadata and project configurations that existed in your DAM/MAM. This can include user access rights (and their quirks!).
2. Duplicated assets, and assets that shouldn't exist anymore (eg generated assets).
3. Processes that aren't documented.

WITH YOUR PLANNED MIGRATION TOOL:
4. Data migrations that can take hours even for small data sets, but months for large data sets.
5. Having migration software that understands the databases that you are migrating from.
6. Robustness against failures during the migration process, and scalability (multiple servers performing the migration in parallel).
7. Continuing to use the legacy system during the data migration, whilst catching and migrating any changes made.
8. Maintaining an audit of assets prior to the migration and after the migration – can you prove it's not the case (or find the asset) if someone claims an asset has "gone missing"?

10. The challenge of agreeing, communicating and educating the team on the migration/new workflows.

Communication is only mentioned in point 10, but the reality is that any migration project needs a lot of internal communication: to capture requirements, to capture current workflows, to communicate what is going to change, and to train users on the new workflows. Where I've sat down to do migration projects I start and end in this area; communication is key to project success.

I'd like to talk about the migration tools themselves. A large migration can take several months and may need to migrate data from multiple hardware types. The ability for the team to continue working is of course paramount, therefore being able to have a robust toolset that can move the data in small batches can be essential. The tool needs to have a deep understanding of the products it is moving from and to, and furthermore, I'd strongly recommend an audit of the actions it takes. The audit protects your back when an "asset goes missing". It's often an asset that was missing anyway in the old system, but you need to show that.

Lastly, just a note on future-proofing. Is it possible to have an asset manager that is future-proof? Well, to a large extent, yes. The new location and the way that you store the assets and the metadata need to be in an open, non-proprietary format. Can you really afford to lock your metadata into another DAM? There are plenty of solutions nowadays where the asset manager can be thrown out and replaced, if need be, but without having to data migrate again.
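The batched, audited approach described above can be sketched in a few lines, assuming a hypothetical asset record with an id and raw content; none of the function names come from a real migration product:

```python
import hashlib

def migrate_in_batches(source_assets, write_to_destination, batch_size=100):
    """Sketch of a batched, audited migration: assets move in small batches
    so day-to-day work can continue, and an audit record (id + content hash)
    is kept for every asset, so a "missing asset" claim can be settled later
    from the log."""
    audit_log = []
    batch = []
    for asset in source_assets:
        batch.append(asset)
        if len(batch) == batch_size:
            _migrate_batch(batch, write_to_destination, audit_log)
            batch = []
    if batch:  # flush the final partial batch
        _migrate_batch(batch, write_to_destination, audit_log)
    return audit_log

def _migrate_batch(batch, write_to_destination, audit_log):
    for asset in batch:
        checksum = hashlib.sha256(asset["content"]).hexdigest()
        write_to_destination(asset)
        audit_log.append({"id": asset["id"], "sha256": checksum})

# Toy run: three assets, batch size two, destination is a plain dict.
destination = {}
assets = [{"id": f"A{i}", "content": bytes([i])} for i in range(3)]
log = migrate_in_batches(assets, lambda a: destination.update({a["id"]: a}), batch_size=2)
print(len(destination), len(log))  # → 3 3
```

A production tool would add retries, parallel workers and a change-capture pass for edits made in the legacy system during the migration, but the batch-plus-audit shape is the core of it.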

‘HIGHER LEVEL’ PROJECT CHALLENGES: 9. The need to make sure that this doesn’t happen again; that the destination system is non-proprietary and can evolve as technology changes going forward, rather than being a solution that simply becomes tomorrow’s migration challenge.





Incredibly high-quality televisions, the fastest internet connections, and mobile devices with crystal clear screens have changed the way viewers watch television. But these enablers of today's TV don't mean anything without one key element – the best quality of experience. And they need to deliver that in the form of the lowest-latency live programming, uninterrupted content availability and increasingly personalised services. Meeting these challenges is no mean feat, but it is made much easier with a delivery system that lets TV service providers easily add capacity and new functionality in an efficient manner.

IT SHOWS THE WAY FORWARD
To design a solution that gives more TV providers the ability to easily customise the delivery workflows required in today's television landscape, those designing a delivery system should take the lead of the IT industry. Specifically, how it has learned to incorporate software as a replacement for outdated hardware and move entire workflows into the Cloud. What has become clear is that software – Cloud-based or otherwise – is now key to executing certain functions and creating a delivery workflow that can be changed as needed depending on user and market trends. The choice facing TV providers is whether to roll out said software using virtual machines (VMs) or software containers, with the latter quickly becoming the preferred option for a variety of technical and operational reasons.

THE CALL FOR CONTAINERS
One of the biggest benefits of using containerised software for TV delivery is that single functions can be deployed with everything they need to run in one package. This is much more efficient than VMs, each of which needs its own guest operating system running on top of a hypervisor layer. Container software sits on top of a container manager and shares an underlying operating system, making it much better suited to TV service providers that need to be able to continuously add capacity and enhance functionality.
It's this architecture that allows operators to easily grow the capacity and reach of their delivery platforms without having to deploy separate VMs for each new application. Not only does this enhance delivery workflows, it also minimises disruption to services, which is crucial for meeting the demands of subscribers who expect continuous access to services. Cloud-based container software also enables TV service providers to pick and choose their preferred Cloud services from different vendors, requires fewer resources for deployment and is cheaper than the VM alternative. This all points towards containers being the future of TV delivery in the digital age.

SATISFYING THE NEEDS OF THE NEXT GENERATION

Ultimately, containerised software gives content distributors the power to adapt their workflows, tailor services for their audiences and meet constantly changing audience demands. It allows them to 'mix and match' different deployment options for TV delivery. This means they can create a customised solution where central functionality is deployed in the Cloud and decentralised CDN functionality, such as streaming and caching, runs on purpose-built TV servers or on COTS machines. The key thing for TV service providers to remember is that containerised software creates a delivery workflow they can use to deliver real-time customisations, going beyond what has traditionally been possible for broadcasters. This caters for future generations of viewers and lets broadcasters provide enriched services even before audiences demand them – something which is sure to keep viewers coming back for more.
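One way to picture that 'mix and match' deployment is a single declarative placement config: central functions run in a public-cloud region, while streaming and caching run at edge sites. Everything below – the format, the service names, the site names – is an assumed illustration, not any vendor's schema:

```yaml
# Hypothetical placement config: central control plane in the Cloud,
# CDN-style streaming/caching on purpose-built TV servers or COTS boxes.
# Container images are portable, so the cloud provider is interchangeable.
delivery:
  central:                      # deployed to a public-cloud region
    provider: any-cloud         # swap vendors without repackaging
    services: [subscriber-api, scheduler, packaging-control]
  edge:                         # deployed on-premises at each PoP
    sites:
      - name: london-pop
        services: [streaming, caching]
      - name: madrid-pop
        services: [streaming, caching]
```

Because each role is just a set of containers, adding a new edge site is a matter of appending an entry rather than commissioning new VMs per function.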


Categorising consumers

Are you a binge-watching Fanatic or a bookish Connoisseur? A new study by PwC into people's video consumption habits has identified five distinct consumer segments.

Indulgists want the latest gadgets and original content. They tend to be parents who buy DVDs, go to the cinema and spend money on video content, committing to pay-TV subscriptions for the next one to five years.

The next segment is the Engagers. Usually male, and probably gamers, they are "the future of immersive entertainment", looking forward to VR replacing movies and often posting on social media about the shows they're watching.

Connoisseurs are picky about their content, paying less for TV subscriptions year on year. They're the group most likely to have a college degree, and they believe that content should be more diverse and educational.

Fourth are the Fanatics. Generally female, Fanatics like having access to large amounts of content at any time, and they love to binge-watch. They're also likely to be cord-cutters and subscribers to multiple video services at once.

The final group are the Traditionalists. With an average age of 40, they favour linear TV and spend a lot on pay-TV content. This segment is expected to shrink as OTT streaming continues to grow.