TM Broadcast International #92, April 2021

Page 1

Photo © ACE | Studio Borlenghi

EDITORIAL

The broadcast industry is underpinned by a significant drive for innovation. We are not only referring to unprecedented news or surprising developments, but also to a clear commitment to relearning: analyzing our evolution with a critical perspective and rebuilding the pillars that make up a solid, courageous and productive sector. This month's edition of TM Broadcast International looks at players that, far from sticking to the standards of our world, seek to strengthen them in order to find new opportunities. Stability is a valuable asset, but the rewards of progress are immeasurable. The feature article created along with Circle-O, a joint venture established to revolutionize the America's Cup, is a great example of how the latest trends can go even further, offering a production that is much more logistically streamlined and undoubtedly more valuable for the end viewer. Our approach to OTTs, SVT Play among them, features specific views showing the result of the new ways of structuring these services by adapting to the market's evolution and target audience. A detailed history of the 8K Showcase allows us to appreciate how the concepts from previous tests have been shaped to approach the future of broadcasting. The rethinking of audiovisual asset management for the massive adidas Runtastic application gives us first-hand experiences applicable to our field. We cannot remain with our arms crossed. We should welcome novelties, but we also have an obligation to rethink how we got here. What better way to do this than to soak up the experience of pioneers in their particular areas? You now have another very complete issue of TM Broadcast International. We invite you to discover firsthand all the initiatives we have outlined, as well as the rest of the contents we have prepared. Welcome again!

Editor in chief Javier de Martín

Creative Direction Mercedes González

Key account manager Susana Sampedro

Editorial staff Sergio Julián, Marcos Cuevas

Administration Laura de Diego

TM Broadcast International #92 April 2021

TM Broadcast International is a magazine published by Daró Media Group SL Centro Empresarial Tartessos Calle Pollensa 2, oficina 14 28290 Las Rozas (Madrid), Spain Phone +34 91 640 46 43 Published in Spain ISSN: 2659-5966



18 OTTs: SVT Play and others

The world of VOD never fails to astound us. What was a latent market with very broad possibilities for the future has ended up being confirmed as a key element of the broadcast universe: not only for those stations that see in this model an adaptation to new audiences and consumption habits, but also for initiatives that found in digital distribution an opportunity to respond to viewer demands, thanks to the Internet and widespread global access to the network.

44 America’s Cup: A completely new 360 concept

At the beginning of March, before the final of the America's Cup took place, TM Broadcast International had a conversation via Zoom with Werner Eksler (Managing Director of Circle-O) and Tim Pushkeit, Riedel's Technical Director at the America's Cup, to navigate the technological concepts that shape this edition.


60 Andy Rydzewski: Vital lessons translated into images

74 AoIP: Management and Security

80 The Artificial Intelligence revolution: The technology behind deepfakes

92 IP-Ready solutions: 4 interesting options

94 Adidas Runtastic: Thousands of pieces of audiovisual content for 172 million users


Global 8K Live Streaming Showcase 2020: The technologies behind the scenes


Sennheiser amplifies its catalogue for vloggers and video creators

Sennheiser has announced the release of a new series of microphones and mobile kits for vloggers and video creators. The launch includes a new model of the MKE 400 shotgun microphone, the new XS Lav series of clip microphones and different mobile kits for vloggers, video creators and other purposes. The new MKE 400 adds utilities such as a headphone monitoring output with volume control, a rugged internal shock-mount, a clever internal windscreen and an automatic on/off function. The German company assures that the new model surpasses its predecessors in eliminating background noise, and that it also provides better wind protection and handling-noise rejection. The MKE 400 includes 3.5 mm TRS and TRRS locking cables to be used with DSLR/M cameras or mobile devices. The mic's cold-shoe mount with ¼-20 thread allows for universal mounting atop cameras, gimbals or even at the end of a boom pole.

The company also announced the release of its new series of clip microphones for vloggers, broadcasters and podcasters: the XS Lav. The new models are available as the XS Lav Mobile with TRRS connector, the XS Lav USB-C with USB-C connector, and the XS Lav USB-C Mobile Kit with an additional Manfrotto PIXI Mini Tripod and Sennheiser Smartphone Clamp. These new releases enter the market alongside some accessories that Sennheiser has just premiered. For the MKE 200 and MKE 400 models, the firm offers 3.5 mm TRS and TRRS cables for use with DSLR/M cameras and mobile devices, respectively. For the XS Lav, the kit adds the Smartphone Clamp and PIXI to Sennheiser's wired USB-C lavalier microphone. Finally, Sennheiser has released the CL 35-USB-C for all these kits.


Chyron premieres its PRIME Live Platform and a new software-based 2-M/E switcher

Chyron has announced the release of the PRIME Live Platform production engine, Version 4 of the company's PRIME product family. Furthermore, the company premiered its new software-based 2-M/E switcher with a multichannel audio mixer. According to the company statement, the new PRIME Live Platform can be deployed in different environments, such as custom hardware, a virtualized machine or entirely in the cloud. Chyron also assures that its new switcher supports 4K, UHD, HDR, HD and SD formats, with a wide variety of inputs and outputs (SDI, IP, NDI, H.264 streams).

PRIME Live Platform boasts industry-leading graphics, clips, and branding capabilities as well as the new software-based PRIME Switcher.

"With a comprehensive, specialized software suite that enhances production values and simplifies workflows, PRIME has evolved into a uniquely robust and versatile platform," said Ariel Garcia, CEO at Chyron. "The newest addition to the platform, PRIME Switcher, is ideal for generating professional-grade output for smaller broadcast productions, OTT delivery, direct streaming to social platforms, or disaster recovery while minimizing cost, complexity, and reliance on fixed physical resources."

The PRIME Live Platform also features enhanced PRIME CG design and playout capabilities, including robust Adobe After Effects import. These capabilities, along with PRIME's already rich toolset, allow designers to build natively or leverage assets from other applications. With these new CG features, PRIME brings even greater creative power and ease of use to Chyron's end-to-end workflows for graphics order and template management (via AXIS and CAMIO).


NEP inaugurates a new production centre in London

NEP Group announced the opening of a new production centre at 200 Gray's Inn Road in London. The new building includes control rooms, edit, sub-cut and media management facilities, among others, and it is connected to NEP's global network. The production centre is linked via fibre to NEP's extensive connectivity capabilities, which allows the installation to be in touch with any location in the UK, Europe and North America. With connection speeds of up to 100Gb, it is also connected to NEP's AnyLive network. NEP can also provide any of its in-house connectivity assets, including SNGs, hybrid satellite/fibre vehicles, satellite flyaway systems, and more.

This integration into NEP's global network is very important for Steve Jenkins, President of NEP UK & Ireland, who says: "London is integrated into NEP's global network of data centres and production facilities, which provides access to the full range and depth of our resources and media solutions around the world, in real time. We can also rapidly deploy new, innovative products and services as they launch. This flexibility, combined with the power of NEP's infrastructure, greatly expands the creative possibilities available to our clients."

Looking closer at the new building, it offers 3 production rooms, 2 sound control rooms, 3 multi-functional production spaces, 9 flexible control rooms, a green screen studio and a temporary equipment room. Conveniently located with easy access to public transportation and several major airports, the NEP Production Centre – London is in close proximity to London's many media institutions and infrastructure. The building also includes a reception, an atrium, breakout space and two meeting areas.


Sky Italia renews its studio cameras with Grass Valley

Sky Italia has selected Grass Valley as the supplier to upgrade its studio cameras. The model chosen by the transalpine company is the LDX 86N Series, which allows native 3G/HD and 4K UHD image capture. The new equipment will be premiered during 2021, and the installation is being undertaken by Video Progetti, systems integrator and Grass Valley reseller in Italy. Thanks to this, Sky Italia will have the opportunity to upgrade to IP routing and 4K UHD. Grass Valley's DirectIP functionality enables the XCU UXF base station to be located up to 20,000 km away, allowing remote or home studio management.

The LDX 86N cameras will go live in 2021 and give the Sky Italia team a choice between native 3G/HD and 4K UHD image capture. The DirectIP functionality means that base stations and cameras can be linked through an IP network, allowing any camera to connect to any base station. When combined with SMPTE ST 2110 IP infrastructure, this setup gives customers the freedom to optimize equipment usage and allocate camera resources flexibly as needed.

"The ability to use our existing studio set-up and move the cameras as needed among various studios is central to keeping our live news and studio production content flowing. We are also laying important groundwork for migration to SMPTE ST 2110-based IP infrastructure and can operate in hybrid environments in the meantime," said Enzo Paradisi, engineering director at Sky Italia.

"In this dynamic media landscape, our customers need to know that their investments today support future roadmaps for new technology upgrades," added Marco Lopez, Grass Valley's general manager for live production.


Red Bee Media develops the streaming platform of Ekstraklasa, the Polish football league

Red Bee Media has announced an agreement with Ekstraklasa, the Polish football league, to develop its streaming platform. Fans will enjoy the new solution toward the end of this season, and it will be fully operational for the 2021/22 season. More than 30 foreign and Polish companies submitted their projects for the development of this platform; Red Bee, owned by Ericsson, won the contract. According to the company statement, the platform will feature greater interactivity and personalization, further improvement of the user experience, and will implement an advertising model. The platform will be available in Polish via the web and mobile apps on iOS and Android. Furthermore, it will also be available on multiple Smart TV platforms, starting with LG, Apple and Android, and eventually Samsung.

The website was launched in 2019. It has 200,000 registered users and provides fans abroad with access to Polish football matches, except in countries covered by exclusive broadcasting licenses. Apart from this, the website offers multiple extra video contents related to the competition.

"We are happy to be a part of the evolution of Ekstraklasa.TV and we look forward to contributing to its development, streaming top-quality European football to international and Polish audiences through our OTT platform," says Steve Nylund, CEO of Red Bee Media. "We can see a clear growth in direct-to-consumer propositions like this, where brands such as Ekstraklasa are making the most of the opportunity to connect with their fans and monetize their content rights."

"Media consumption, particularly of video content, including sports, is rapidly changing. The extent to which the pandemic affected social life also expedited this trend. So, we're observing the growing market share of streaming platforms, a deepening of the multiscreen phenomenon, and a concurrent increase in popularity of short but exciting videos, especially among the youth. We are monitoring these changes and plan to take our OTT platform to the next level, with new functionalities, and more brief, attractive, and information-rich video formats," says Marcin Animucki, President of the Management Board at Ekstraklasa SA.


Caracol Televisión chooses RTS VLink virtual intercom solution

Caracol Televisión has more than 50 years of experience producing internationally successful shows, and continues to set a benchmark for TV production in Latin America. For the last 20 years Caracol has relied upon intercom solutions from RTS, starting with ZEUS matrices and evolving over time into their current state-of-the-art system setup, which includes ADAM and ODIN digital intercom matrices, KP-Series keypanels, ROAMEO DECT-based wireless beltpacks, and now VLink – a unique virtual intercom solution.

VLink is a user-friendly smartphone/tablet app that mimics an intercom user station. It enables remote users to interface with RTS intercom matrices via the Internet, allowing a new degree of control and flexibility from anywhere in the world – including from home. VLink has allowed broadcasters across the globe to work remotely during the COVID-19 crisis, as well as continue on-site operations while observing social distancing protocols.

Caracol's eight VLink licenses connect to their existing ADAM and ODIN hardware (four matrices in a multiframe configuration) via an RTS TM-10K Trunkmaster, allowing VLink to intelligently integrate and operate like another RTS matrix in the system. RTS technicians trained the Caracol team virtually, further ensuring COVID safety. VLink is now used exclusively for Caracol's news programming, including its primetime news broadcast, and provides seamless simultaneous communications between hosts, reporters and technicians.


NENT Group selects Primestream to automate and accelerate content workflows

Sweden's Nordic Entertainment (NENT) Group has automated and accelerated its content production workflows based on Primestream's Workflow Server and Xchange™ media asset management (MAM) platform. Based in Stockholm, NENT Group operates the video streaming services Viaplay and Viafree, a large portfolio of commercial TV and radio channels, and the Viasat pay satellite platform. In addition, NENT Group is one of the largest content producers in Europe, operating more than 30 production studios in 17 countries. Like any media company, NENT Group has a critical requirement to store, access, repurpose, and archive its tremendous library of video assets. Previously, NENT teams operated without an automated capability for


media asset management, instead having to search manually through thousands of folders on siloed computers. The company sought a faster and more efficient means of accessing and working with live and prerecorded content for streaming coverage of high-profile, international sporting events. At NENT Group, Primestream Workflow Server provides a platform for production and playout integration. The system receives content via 30 ingest ports, encoding each file in XDCAM 1080i and generating a proxy version of the recording in real time. Workflow Server's smart integration with scheduling applications presents a clear overview of assets in the system or those that will soon be needed for playout. An additional integration with

Sweden's NENT Group automates and accelerates content production workflows based on Primestream's Workflow Server and Xchange™ MAM platform.

NENT Group's Pebble Beach Marina playout automation platform enables Workflow Server to perform automated restores of archived assets and issue notifications to managers if needed assets are not available in time. Workflow Server enables content to be played out even as it is being ingested, leading to rapid turnaround and time-delay playback. Primestream's native support for NENT Group's chosen Adobe Premiere NLE system enables editors to capture and edit video simultaneously for fast integration into live production workflows.


Radio Belgrade safeguards patrimony with NOA Radio Belgrade, part of public broadcaster Radio Television Serbia (RTS), launched its first broadcast in 1929. In an effort to protect its rich history and facilitate access to valuable content, the media house recently decided to take on a major database migration project. NOA — in collaboration with its Serbia-based partner and systems integrator Kompani DigiTV — thus embarked on a scheme to modernize Radio Belgrade’s entire archiving process, phasing out its various legacy databases and replacing them with a single NOA mediARC Archive Asset Management system. To achieve the required results, it was necessary to normalize all source databases before importing the information into the central mediARC archive database. The new mediARC system works in collaboration with Radio Belgrade’s already existing NOA Record workstations

and supersedes the jobDB workflow system. What's more, for some 50 years Radio Belgrade had transferred catalog information to Winisis and MS Access databases, and because of changing working patterns over such a long period, consistency differed greatly, necessitating considerable consolidation. Fortunately, thanks to mediARC's semantic and relational approach, consolidation and normalization were not only possible but resulted in a consistent data design that improves Radio Belgrade's archiving work, including separate item entities for persons, carriers, and content segments such as albums and titles. In addition, because much of the archival material was tagged insufficiently or inconsistently, a majority of the tagging process had to take place

Serbian National Radio

during the migration phase. A delicate job when dealing with ten different databases, especially considering that this legacy data from the last 50 years constitutes the broadcaster's complete, irreplaceable patrimony. It all translates into a huge digitization project, comprising the work of 10 cataloguers over many years.


Aviwest and CyanView join forces on remote operations capabilities

Aviwest and CyanView have decided to partner to offer an integrated solution for remote operations and expand coverage capabilities. The Aviwest technology will therefore integrate into CyanView's RCP multi-camera control panel. This integration will offer low-latency, IP-based control signals from multiple camera types and brands to the control room for live production. Both companies showed their strengths and worked together in the production of the PGA Tournament.

"With their experience in developing and deploying innovative live production solutions, AVIWEST provides our shared customers with valuable guidance and support in implementing more efficient, cost-effective, and robust solutions for live production," said David Bourgeois, founder and CEO at CyanView.

"We're excited to partner with CyanView and showcase the powerful benefits of our integration for broadcasters," said Ronan Poullaouec, chief technology officer at AVIWEST. "Recently, our solution was used by PGA Tour Entertainment as four of the world's best professional golfers faced off in a charity event at Seminole Golf Club near West Palm Beach, Florida. Because of COVID-19, the event was very exclusive, only featuring four golfers, six hand-held cameras, and zero caddies, but it went a long way toward satiating the thirst for live professional sports. The event also showed that remote workflows are possible for golf, as it was produced out of PGA Tour Entertainment's facility 250 miles away."



Signiant extends its presence in Germany with new partner Broadcast Solutions Produkte und Service

Broadcast Solutions Produkte und Service will work closely with the Signiant EMEA team to strengthen Signiant's presence in the DACH region. Broadcast Solutions Produkte und Service brings significant expertise in workflows and in innovative, sustainable solutions for integration in the broadcast community. The company has hands-on experience with Signiant products, including Signiant's Media Shuttle SaaS solution for person-initiated transfers; Manager+Agents, its enterprise solution for automated, accelerated transport of large files between geographically dispersed locations; and Jet, Signiant's newest SaaS solution.

“Media is a global and time-sensitive business, and with our customers in Germany now adopting new formats such as 4K and 8K, Signiant solutions are a must-have,” said Martin Schwöri, COO at Broadcast Solutions Produkte und Service. “We’re excited to partner with Signiant to bring its portfolio of market-leading products to media companies in the region.” Greg Hoskin, Managing Director, EMEA and APAC at Signiant, said:

“Germany is an important and expanding region for Signiant, and we’re excited to be extending our reach in the region by partnering with Broadcast Solutions Produkte und Service. The team’s knowledge of the industry, and close relationship with customers in the region will help elevate Signiant’s presence in Germany.”


The world of VOD never fails to astound us. What was a latent market with very broad possibilities for the future has ended up being confirmed as a key element of the broadcast universe: not only for those stations that see in this model an adaptation to new audiences and consumption habits, but also for initiatives that found in digital distribution an opportunity to respond to viewer demands, thanks to the Internet and widespread global access to the network. TM Broadcast International is devoting this month's special feature to three projects that share the identity traits of OTT platforms, yet have peculiarities that reflect the appealing diversity of the sector: the forerunner of hyperspecialization since 2008, a well-established player of reference in its field as a result of its extensive catalog and wide range of live events; SVT Play, an extension of the Swedish public broadcaster SVT with an attractive open format supported by smart encoding and a flawless UI; and a proposal with great alliances in the motor sports industry and an approach that invites users to immerse themselves in the offering for hours on end.








Concerts, ballets, operas, documentaries, master classes, educational resources, jazz and much more; all this, live and on demand for a global audience. The offering of this video-on-demand platform is the dream of any hardcore fan of classical music, but also a perfect introduction to this world thanks to a careful mix of editorial selections and AI-based recommendations. With an eye toward the highest audiovisual-fidelity formats, this OTT service presents attractive challenges. We do not only mean managing a huge catalog of contents with lots of criss-cross metadata, but also a growing commitment to live streaming. We found out about the beginnings and the present of the platform with the help of Hervé Boissière, Founder and Managing Director.



How does a platform like yours become a global reference?

Our key pillars are the quality of the content and the diversity and richness of our offer. Today the platform presents over 2,600 first-class programs dedicated to all genres of classical music, from concerts to operas, ballets, documentaries and masterclasses, to please different types of music lovers, from newcomers to connoisseurs. In addition, it is the only platform to offer 150 live events each year from the leading international institutions all over the world. Entertainment is by definition driven by content, and our added value definitely stems from our capacity to curate, select and promote the best content in the performing-arts community. Classical music is definitely international, as there are no borders for composers, performers and viewers! We have the opportunity to work with the best musicians of our time, and they have a global reach.

Hervé Boissière. Photo credit: Denis Rouvre.

What was the mission of the platform when it was born? Was it streaming live events? Was the VoD platform also planned from its origin?

Our mission is to serve our dream and belief, which is that art can save the world. At a time

when there are major disruptions, crises and real difficulties for human beings, art is a strong message of hope, of intellectual development, emotion, and pleasure. Our goal is to share those values with as many people as possible. To accomplish this mission,


we have to create very strong relationships with the artists themselves to build long-term projects, such as developments in terms of repertoire, in order to create the best possible conditions for filming and broadcasting them. We started very early, in 2008: at that time there was no real consumption of streaming services, due both to the technology and to consumer habits. We knew that being dependent on a single revenue stream such as subscription, which was then in its very early days, was too risky. We therefore decided from the inception of the project to create different revenue streams: B2C, B2B,


production and sponsoring. The idea of collecting the best possible catalog and streaming the finest events live was there from the beginning. It considerably helped us in gathering a faithful international community, since at that time it was a one-of-a-kind experience to “attend” a concert live at home from anywhere in the world. It was a fantastic democratisation of classical music! We started extending the reach of these remarkable artistic proposals, not only in terms of quantity (we usually multiply the live audience attending the performances by 20 or even 50 times) but also in terms of sociodemographic profiles. Classical music continuously suffers from its elitist reputation, and many people are still shy about going to a classical music event. The possibility of bringing this content to each home was the opportunity to create a link between the artist and a much bigger audience. The visual proximity we have created was also very important. People could enjoy a whole new experience of the performing arts: being, in a way, on stage with the performers, the interactions with the conductor, the soloist, etc. This helped us a lot to engage the public with the performances.

What do you consider to be your biggest challenge from a technology perspective?

There has always been a trio of challenges, which is putting together the best combination of content, product and technology. We cannot isolate the technology, because without the best content and a fantastic user experience, technology is redundant. Technology is just a piece of the puzzle! However, users are increasingly demanding with technology. They have access to outstanding online services every day (Uber, Netflix, Amazon…). These companies invest billions every year to improve their user experience. Today, everybody wants easy-to-use and effective services. As a small company, we of course get the same pressure from our consumers to deliver the same quality of experience. To be more precise, and again connected to the specificity of our content, one of the key challenges is to cover the diversity of usages, of streams and of expectations, due to the huge number of different situations: from the mobile phone in the subway, to the office, to the large home cinema for a three-hour-long opera. This is really demanding, challenging, exciting and fascinating in order to cater to every type of consumer.

The platform currently offers most of its content in HD, but is beginning to offer UHD content. Is there a demand for this? What challenges does this entail?

Obviously, we are very active in improving the quality of the events' videos. HD is of course the standard people expect today, and we are increasingly sourcing UHD, 4K or even 8K when it's available. We are definitely proactive on that; the key question is the quality and specification of our viewers' devices and Internet access. Again, there are large differences in the big community we have. Flexibility and adaptability should always drive our work.

Do you produce live event coverage? Do you have an internal team or do you work with external production companies?

We do progressively produce our own content to complete an already very good existing offer provided by our partners. It's however great to have the ability to be present in some territories or in some genres; for instance, competitions have become a very important development area for us. We have ourselves produced, twice, events such as the Tchaikovsky Competition in Russia or the Van Cliburn in the US, and these events have been a major step in our short history, with dozens of millions of people watching these competitions. We are really looking for a good balance between producers' proposals and our capacity to generate our own recordings.

What's the workflow for broadcasting one of your events? What technological elements are involved in the coverage?

On the artistic side we have a permanent team to manage the productions (negotiation of the rights, management of the technical teams...), but we always hire a third party for filming and recording. We operate all over the world, and we are looking for the best resources and people in each country. Of course, it would not make sense to have a permanent team based in Paris when you have to operate on five continents.

We are used to working to short-term timings, which means that compared to other media such as television we can react very quickly. Live streaming supposes being positively opportunistic, meaning that priority is given to the artist, the quality of the content and the editorial. If, for some reason, a project arrives at the last minute but really creates added value for our line-up, we have to be available. Flexibility is therefore a keyword in our operations; that's how we can combine much-anticipated major events such as festivals or competitions with a line-up of smaller forms like concerts. Regarding the technological elements involved in the coverage, today livestream broadcasts are getting easier with concert halls that have very good fiber internet connections. We have streaming partners all over the world


who can be physically on site and push the live stream to our servers. For this, they use various encoders such as vMix, Elemental, Wirecast… Then, we have several CDNs duplicating our live content everywhere, to allow our audience to get the best quality with the maximum reliability from all over the world.

What live video codec are you implementing for these types of events?

We still use H.264, since it is perfectly integrated in all standards for now. The streams are pushed from event locations through RTMP, but more and more over SRT (Secure Reliable Transport), a UDP-based protocol that is very good for both data safety and route reliability. Our audience receives an HLS playlist of 5-6 adaptive streams, which allows people to get the best quality their device and connection can play. We are working on implementing the HEVC codec and the DASH protocol very soon.
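The adaptive behaviour described here can be sketched in a few lines: a client reads the variant entries of an HLS master playlist and picks the highest-bandwidth rendition that fits its measured throughput. The playlist, bitrates and file names below are invented for illustration; they are not the platform's actual ladder.

```python
# A toy HLS master playlist: three variants, lowest to highest bitrate.
# Values are illustrative, not the broadcaster's real encoding ladder.
MASTER_PLAYLIST = """\
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=512x288
low.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
mid.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=6000000,RESOLUTION=1920x1080
high.m3u8
"""

def parse_variants(playlist: str):
    """Yield (bandwidth_bps, uri) pairs from an HLS master playlist."""
    lines = playlist.strip().splitlines()
    for info, uri in zip(lines, lines[1:]):
        if info.startswith("#EXT-X-STREAM-INF:"):
            # Attribute list looks like BANDWIDTH=800000,RESOLUTION=512x288
            attrs = dict(
                kv.split("=", 1)
                for kv in info.split(":", 1)[1].split(",")
                if "=" in kv
            )
            yield int(attrs["BANDWIDTH"]), uri

def pick_variant(playlist: str, measured_bps: int) -> str:
    """Return the best variant that fits the measured throughput,
    falling back to the lowest-bitrate one if none fits."""
    variants = sorted(parse_variants(playlist))
    best = variants[0][1]
    for bandwidth, uri in variants:
        if bandwidth <= measured_bps:
            best = uri
    return best

print(pick_variant(MASTER_PLAYLIST, 3_000_000))  # mid.m3u8
```

Real players refine this constantly (buffer occupancy, throughput smoothing), but the ladder-selection core is the same idea.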

How has the encoding of your videos evolved over the years? It has evolved a lot in 12 years! When we started, we even received physical video tapes and encoded them into a file type that was quite limited and very simple. Now, of course, we have dramatically improved our


capacity to be distributed on different platforms, in different formats. The streaming quality was tied to the possibilities of internet connections: in 2008 the stream was 512x288; in 2011 we streamed at 720x404; and now we have some concerts and operas available in 4K (4096x2160). We are now trying to lower our bandwidth use to save both money and the planet [laughs].

The audio bit rate, we suppose, is of great importance to Medici. How did you work in this area to meet the expectations of your audience? How can spectators recreate the sensations of being in the concert hall? Absolutely: being at the service of musicians and coming ourselves from the music world, we always pay maximum attention to the quality of our audio recordings and their restitution. We work with classical music experts who have a deep knowledge of how to create the best audio experience, depending on the acoustics of the hall and the particularities of the recording.

On video streaming we cannot use lossless codecs as audio streaming platforms do: video uses a lot of bandwidth and we need to keep a stream available for all connections. Our audio encoding is based on AAC, which all sound engineers agree is much better than mp3, and we offer the highest AAC bitrate available, 320 kbps. That way, our audience keeps all the sharpness of the audio recording, the hall acoustics and the artist's performance.

Of course, we cannot control the equipment of the people who watch and listen to our work. We constantly encourage them to get the best hi-fi system at home, and it is fascinating to see how devices, headphones and connectivity for casting to TV have improved in recent years. There is still room for improvement, but we believe we have already reached a very satisfactory level, one which restitutes the beauty and quality of this experience of classical music.

What system do you use to manage the enormous amount of content that this involves?

We have developed an in-house CMS thanks to our internal team of developers and product owners. They manage the data internally on a daily basis because of the very specific qualifications of classical music, which has to be much more granular than for pop music (you need to specify many data points to describe a full opera, for instance). We therefore created our own database with our data scientists, and we continuously improve the management of this key element of our business, especially in the context of recommendations, marketing specifications, profiling, etc.
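The granularity described above can be pictured with a small data model. The fields and names here are purely illustrative assumptions about what a classical-music catalogue might track, not Medici's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical, simplified model of how granular classical-music
# metadata can get compared with a typical pop track.
@dataclass
class Performer:
    name: str
    role: str  # e.g. "conductor", "soprano", or an opera character

@dataclass
class OperaRecording:
    work_title: str
    composer: str
    acts: int
    venue: str
    year: int
    performers: list[Performer] = field(default_factory=list)

    def credits(self):
        # A flat credits list is what a search index or an
        # interface might consume downstream.
        return [f"{p.name} ({p.role})" for p in self.performers]

recording = OperaRecording(
    work_title="La Traviata", composer="Giuseppe Verdi",
    acts=3, venue="Example Opera House", year=2020,
    performers=[Performer("Jane Doe", "Violetta"),
                Performer("John Doe", "conductor")],
)
print(recording.credits())
```

Even this toy example shows why a pop-music schema (artist, title, album) falls short: a single opera recording already needs work, acts, venue and a cast of role-specific credits.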

What is your cloud provider? We recently moved to AWS (Amazon Web Services).


What AI developments are you currently implementing to provide expanded services to your audience? We are focusing our efforts on personalized experiences, built on a smart and accurate recommendation system. It is not only based on overall consumption or previous viewings; it also integrates some editorial subjectivity. This means that we try to select and surface the most relevant content for each viewer's profile: is he or she a fan of piano, of ballet? A newcomer or a specialist? We believe that our responsibility is to create surprises, to recommend content which people may not have imagined watching. Music is emotion and art is a permanent discovery, so we must be very effective with data and strong marketing machinery, but at the same time we must never forget that entertainment is there to create unexpected encounters.

Medici is also available on TV via Chromecast and AirPlay, and on mobile devices. How do you manage to adapt to each of these different systems? Today's ecosystem means you can no longer distinguish between these different screens. A platform like ours has to accompany consumers in all circumstances; it is a logical extension to be available on all these devices. We will be available on Roku TV before the summer, and we are also working on new channels to distribute our content via partnerships with telcos and OTT companies. We try to fill every space where our content can be visible and contribute to our goal: to create the most connections between

artists and spectators.

Medici offers specialized content that is attractive to a global audience. How do you meet the great challenge of serving this demand? What is your reach? Even if there are mainstream names like Mozart, Beethoven, Callas or Rostropovich, classical music still suffers from being considered niche content. We have worked very hard since the beginning to enlarge the door to concert halls, operas and festivals. This is what we have achieved when we see that we now have a regular audience of half a million viewers every month, which is


significant for classical music. Of course, some events can be dramatically bigger: a single livestream can reach over 100,000 people, and some special projects, such as the Tchaikovsky Competition or certain festivals, can by themselves represent over 20 million connections. We are very happy with the fast growth of our audience; our other priority is its loyalty. We are really trying to establish a long-term relationship: of course acquisition is very important (conversion is by definition the key to our business model), but retention is another major priority. Indeed, classical music is so vast and rich that you need time to discover and explore the quality and richness of its repertoire. It is very important to establish loyalty and trust between the viewers and ourselves.

Medici also has two parallel developments: Japan and EDU. Do both share your main technological platform? These are two very

different projects. EDU has been, since the very beginning, a major development of the company, aimed at serving the huge community of students worldwide. Today it is considered a reference among educational digital resources, and we work with close to 400 very prestigious universities, conservatories and libraries all over the world (Stanford, Columbia, MIT, McGill, the Juilliard School...). It is a fantastic success and we expect strong development of this educational channel in the coming years; it uses the same technology as the B2C platform. Japan, on the other hand, is a totally different initiative. Japanese consumers expect a localized service, so we made a partnership with Avex, which created this Japanese sub-website with a lot of editorial content connected to what is happening in classical music in their market, to promote and complete our main service

with local editorial content.

What's the technological future of the platform? What will be your next step? Virtual reality and AI will definitely impact our business in the coming years. We don't even know in which way, nor can we measure the next steps in those phenomenal disruptions. Medici has to be on the right track and has to jump on this bandwagon in order to reap the benefits of these fantastic improvements and new opportunities. We therefore try to be very selective and agile in adopting these new developments. For instance, the capacity these technologies will give us to enrich the viewing of programs is definitely a great promise. You can add information during the viewing to explain who is playing in the orchestra, or who a character in the opera is; whatever added value we can offer, as we always try to make music more available, more accessible, for all generations.





When technology is approached from the outset in a multidisciplinary way, everything becomes more fluid, intelligent and accurate. The strategy of SVT Play, the OTT platform of Swedish media entity SVT, is to streamline every single process in order to offer what is no doubt one of the most fluid and versatile public television platforms in the world. Completely open, it offers superb viewing quality, a full range of accessibility services, and an ever-evolving development strategy. We spoke to Annika Bidner, Product Owner at SVT Play; Stefan Berggren, Infrastructure Engineer; and Gustav Edling, Product Owner at SVT Play, to unveil all the keys.



Stefan Berggren, Infrastructure Engineer

What's the history of SVT Play? When was the platform born? In 1996, we started experimenting with small-scale streaming on our main page for news and program information. Ten years later, in 2006, the brand SVT Play was created, initially as part of the main site. In 2009 SVT Play was launched on its own domain and became a great success.

Nowadays, what differentiates SVT Play from other alternatives on the market? SVT Play is a video service that caters to everyone. With a focus on compatibility and accessibility, we deliver a wide range of content.

Annika Bidner, Product owner at SVT Play

Gustav Edling, Product Owner at SVT Play

Everything from minority programs to large entertainment programs and movies, often with a focus on current affairs and learning, locally and nationally. You do not need to log in to SVT Play; everything is open over the internet.

As you just mentioned, the SVT Play website is a fast, open, cleverly designed and powerful platform. What are the keys that allow you to offer this fluid and light service? We use API-driven design, which means that we can put a lot of logic in cacheable, easy-to-use APIs outside the Play application. We iterate constantly, try new techniques, optimise, and follow up the results carefully. It is a tricky challenge to be a fast and responsive service with many large and beautiful images on every page.

One of our secret weapons is that we build a lot of our tools in-house. This gives us a lot of freedom to invent and evolve. For example, we run our own CDN in a hybrid mix with carefully selected external CDNs, optimised for our target market.

Furthermore, the quality of viewing is truly exceptional. Does it offer more than 1080p? Are you considering offering content in UHD in the next few years?


What have been the milestones in the tech history of SVT Play?

1996 - Experiments with small-scale streaming.
2006 - The SVT Play brand is created.
2009 - SVT Play is launched on its own domain.
2010 - SVT Play is launched as an app for iOS after a major campaign aimed at Steve Jobs. At the same time, the mobile site is launched.
2011-2012 - SVT goes from RTMP to HTTP streaming, with adaptive streaming on all our material.
2012 - Redesign with responsive design.
2014 - SVT Play gets its own app on Apple TV. A new transcoding platform allows us to encode 1h of video in 20 min; previously it took 6h.
2015 - Chromecast support added.
2016 - The first HbbTV app (connected TV app) is released. New native apps for iOS and Android, rebuilt internally.
2016-2017 - SVT builds its own CDN.
2018 - SVT Play receives support in Google Home.
2019 - 50 fps for live begins to be tested. Packaging of video starts to be done internally instead of on the fly. SVT replaces its commercial coding solution with self-built video coding based on FFmpeg.

We always strive to be as transparent to the source material as possible, regardless of resolution. We do not want our users to think about how many pixels are displayed; we want to provide the best possible experience: affordable streaming with little impact on the environment and on distribution costs. As a public service company, we have a fixed budget regardless of the number of users, so we always struggle with cost versus quality. Therefore, we always need to be innovative and creative with our existing resources. We are constantly exploring new formats. UHD will be relevant when we feel that it brings a big benefit to our viewers. New codecs and optimisations in our players are an important piece of the puzzle. Even if we only distribute 1080p to users, we want to work with UHD as long as possible in the production chain. We see that UHD as source material provides exceptional advantages even if we don't distribute in UHD.

Why did you decide not to let the user choose the display quality, an option available, for example, on YouTube? We provide adaptive streaming, so we always give the user the best possible streaming quality, depending on their device and current connection. We don't think more options here would improve the viewing experience. However, we also offer a low-bandwidth


alternative for web and mobile apps, for users who are concerned about spending too much data or who experience too many resolution switches. We recently moved this setting from the settings page to the web player, which increased its usage by 126%.

There are many TV OS and mobile platforms out there. Where can you watch SVT Play? What are the complications of adapting SVT Play to such a variety of platforms? We have native support for Apple's and Google's ecosystems and follow the standard for HbbTV. Where there is no native app, our responsive web version should be customizable and cover narrower platforms where we cannot offer native solutions. If a platform emerges that we cannot support, discussions are held about how large the platform is and how many users are affected. By strictly following standards, we have generally succeeded well

in reaching many of our viewers.

What is the cloud provider of your choice for SVT Play? We use a multi-cloud architecture with both in-house and external cloud providers. The roadmap for next year is to diversify even more into some form of multi- or hybrid-cloud approach. For video streaming (live & VOD) we have an in-house CDN that takes a large chunk of the bandwidth. External CDN providers are used to complement it and to offload huge traffic spikes.
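A hybrid in-house/external CDN setup of this kind is often driven by a simple load-shedding rule: serve from the in-house CDN until it nears capacity, then spill over. The capacity figure and threshold below are invented for illustration, not SVT's actual logic:

```python
# Hypothetical traffic steering between an in-house CDN and external CDNs.
IN_HOUSE_CAPACITY_GBPS = 400  # assumed capacity, purely illustrative

def route_request(current_in_house_load_gbps, spike_headroom=0.9):
    """Return which CDN tier should serve the next viewer.

    Traffic stays in-house while load is below 90% of capacity;
    beyond that, new viewers are offloaded to external CDNs.
    """
    if current_in_house_load_gbps < IN_HOUSE_CAPACITY_GBPS * spike_headroom:
        return "in-house"
    return "external"  # offload huge traffic spikes

print(route_request(250))  # in-house
print(route_request(390))  # external
```

Real traffic steering would also weigh per-region performance and cost, but the core idea of keeping a headroom margin for spikes is the same.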

Is SVT Play your own development? Yes, mostly. Some parts like the CMS and live video streaming bits are commercial software. In total, there are approximately 20 teams working with SVT Play.

The combination of big data and artificial intelligence allows developers to build a variety of services that streamline


recommendations, for example. How is this area developed, considering that SVT Play does not have a user system? Right now, we use local storage to create a user profile, with favourites, continue watching and recommendations. "Continue watching" is one of the most popular lists. This cookie-based profile works pretty well within each platform, but login has been discussed to make the experience between platforms more seamless.


We have a team that specializes in creating recommendations in our own recommendation engine, Balthazar, and in testing where and how they should be exposed. They work together with the editorial department to create a great mix of man-made tips and machine-learning smartness. We see that interest in recommendations varies a lot depending on the program genre, so we don't think the same principles can be applied everywhere.
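One common way to get such a mix of editorial tips and model output, sketched here as an assumption rather than a description of how Balthazar actually works, is to let editorial picks pin the top slots and fill the rest from machine-learned scores:

```python
# Hypothetical sketch: merge editorially pinned titles with
# machine-learned scores into one recommendation row.
def blend_recommendations(editorial_picks, ml_scores, slots=6):
    """editorial_picks: ordered list of program ids chosen by editors.
    ml_scores: dict mapping program id -> model score."""
    row = list(editorial_picks[:slots])
    # Fill the remaining slots with the highest-scoring ML suggestions
    # that editors have not already placed.
    remaining = sorted(
        (pid for pid in ml_scores if pid not in row),
        key=lambda pid: ml_scores[pid],
        reverse=True,
    )
    row.extend(remaining[: slots - len(row)])
    return row

row = blend_recommendations(
    ["drama-hit"],
    {"nature-doc": 0.9, "drama-hit": 0.8, "news-extra": 0.4, "kids-show": 0.7},
    slots=3,
)
print(row)  # editorial pick first, then the best-scoring ML suggestions
```

The per-genre observation in the interview would then translate into varying the `slots` split between editorial and model picks depending on the row's genre.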

Does your cloud partner provide some of the AI services? Yes, our choice to have multiple cloud providers allows us to choose from several different options.

How has the encoding of your video offering evolved? Is there a big difference between VOD and live-streamed video? Two years ago, we chose to move from a commercial transcoding solution to a home-built alternative (Encore) to gain more control over all parameters in video transcoding. This has meant that we have raised the quality while reducing the overall bit rate. We try to align VOD and live as much as possible for our users. The big difference is on the backend side, where, for example, our video manifests are much more complicated for VOD and thus more optimized for different devices. Examples of functions we have for VOD but not for live are 5.1 audio and HDR.

What are the main technological challenges



you face in your day-to-day? Broad support for devices. As a public service broadcaster, we want, and are required, to reach as many people as possible regardless of the end-user device. We want to be playable on an old inherited tablet while at the same time offering the highest possible technical quality to the latest home-cinema system. A particular challenge is connected TVs, where the smart part can be obsolete after a few years. The user has probably paid a lot of money for a product that is no longer supported. Then we may have to filter out material from that platform because of an encryption scheme that is no longer supported.

What are the main technological concerns of the users? Findability is the biggest concern for the users. On some platforms, people don't think it's easy enough to find their content. This spring, the search function will be a focus.

The users also care a lot about SVT Play being playable on many platforms, about programs starting when they should, and about sound and video playing smoothly without buffering. SVT Play got the second-highest grade for "video/sound" in the last big survey comparing streaming providers in Sweden, after Netflix. But we know that many people ask for better image quality in live events with a lot of movement, for example sports. We are working on improving that. One of the most common complaints to Customer Service is that the background music in programs is too loud, which makes it hard to hear the dialogue. We'll delve into that later. We are currently adding better tracking on all seven Play platforms to be able to spot our weaknesses in streaming quality. One of the changes we started making as early as summer 2019 was to move over to our own video coding engine, Encore,

with an optimized encoding profile for all animated content. It was carefully beta-tested on thousands of users. We can now calculate that this change saves us around 110 tonnes of CO2e on a yearly basis. Many users want to use the big screen, and we have put a lot of effort into lifting the experience on these platforms. There has been a great increase in users, loyalty and viewing time there as a result.

We've seen a variety of accessibility solutions implemented in your service. Could you tell us more? Accessibility is very important to SVT. We recently let the accessibility institute Funka audit us, and they gave us a very positive review. SVT Play subtitles almost all VOD content, and a lot of live events too. Subtitles are used in 22% of the starts. A high number here is an indication that a program has dialogue that is hard to hear. We also have speech-to-text


subtitles for local news, where there is not enough time and resources to subtitle everything manually. There are also some functions even more specialized for the hard-of-hearing and deaf communities: programs in sign language and sign-language-interpreted versions of selected programs. A new addition is Tydligare tal (Clear speech), an alternative soundtrack with higher volume on the voice channel and lower volume on the other channels. This has become very popular in our police series "Tunna blå linjen". It is a good example of how sound complaints from the public can lead to a show getting clear speech. Furthermore, for

blind or visually impaired people, we offer audio description for several shows. Another function, supporting both visually impaired people and people with dyslexia or other cognitive disabilities, is spoken subtitles, which we are rolling out for VOD this March. On all shows in a foreign language, the user can have an automated voice read the Swedish subtitles aloud. We are conducting tests to find the most pleasant voice. On smart TVs, SVT Play has a unique service where you can watch a broadcast program and, from there, select an alternative version in a TV app, with audio description or sign-language interpretation.

What's next for platforms such as SVT Play? What will, in your opinion, define the future of streamed video? We want to improve the areas that the users find most important: finding relevant content quickly and getting a great viewing experience. Some teams are experimenting with new ways to delight viewers, such as graphics that can be turned on by the user or the possibility to watch together in our companion app Duo. We are adding A/B testing and survey possibilities, so we can evaluate experiences on all platforms. We are also putting a lot of focus on analysing user segments and publishing in an optimized way, to make sure that all target groups have something new and relevant to watch. The future of streaming video will be defined by learning and adapting quickly to the changing needs of the users, while automating what can be automated, fixing technical problems early and finding efficient and fun ways to collaborate between teams.




5,000 hours of video on demand, over 1,000 live events a year beyond the main competitions, and more than 125 racing series on its radar: this OTT evolution of Motorsport Network's editorial platform is a huge service where users can stray for hours watching engaging content of all kinds. Its main weapons, leaving aside the extensive content catalog, are a versatile user interface and a determined commitment to broadcast quality. We discovered the pillars supporting the platform from the hand of its chairman, Éric Gilbert.



How was the platform born? It was a natural extension of our #1 global editorial platform for racing news content. We started working on it in 2017 and launched in 2018, so it's still a very young platform. Since we launched in 2018, it has been digital-only, with a presence on the web, on mobile devices (iOS and Android), and in connected-TV apps released in 2019 and 2020 (Apple TV, Roku, Fire TV, Android TV). More connected-TV apps are in the pipeline, too, as well as more distribution channels.

What differentiates it from other platforms? The platform is optimized for motor racing content and has a tight integration with the Motorsport Network ecosystem, but also with the racing community as a whole. From a distribution model standpoint,

Éric Gilbert

it is also quite different in the sense that we support different business cases and monetization models: subscription, ad-supported free-to-view and pay-per-view. We aim to offer a 360-degree approach to all our different partners in the racing and automotive business. Also, our distribution goes well beyond the platform itself: through our embed

player, we manage and distribute video content for all the Motorsport Network editorial platforms and many others, as well as for out-of-network platforms.

You offer a free-to-view plan and a monthly subscription plan. What are the differences between the two? Does



the quality of viewing change too? The most important difference between our free-to-view and our subscription (monthly and annual) plans is content. Paid subscribers have access to original and

premium content that is simply not available anywhere else for free: premium live racing (FIA WEC, for instance), full archive content (the full history of the 24 Hours of Le Mans, and our own racing archive under the brand "Duke Archives", which represents more than 1,000 hours of racing documentation), and exclusive content such as "Racing Files". Free-to-view content is high-volume content, easier to monetize with advertising; in this case, content partners want the highest exposure possible. Free-to-view content is still high quality, but a bit less exclusive. In terms of quality of experience, the only difference between free-to-view and paid subscription is advertising: paid subscribers enjoy an ad-free experience. Otherwise, streaming quality is the same.

You’re currently offering HD content. Is there a demand for UHD content in motorsports? Is it something you have on your roadmap?

For live content, 4K is still a novelty. At the moment, we don't know of any racing series that livestreams in 4K, since this resolution requires a perfect distribution chain, which is rarely the case on location. We can fully support native 4K content for on-demand videos, of course. For live racing content, a higher frame rate, such as 50fps, is becoming more interesting than 4K. For fast-moving objects, the smooth viewing experience provided by a higher frame rate is even more relevant than UHD. This is something that is definitely on our roadmap.
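The trade-off can be made concrete with a quick back-of-the-envelope calculation (the frame rates are chosen here purely for illustration): doubling the frame rate of a 1080p stream costs far fewer raw pixels per second than stepping up to UHD.

```python
# Raw (uncompressed) pixel throughput for a few hypothetical formats,
# as a rough proxy for how demanding each is on the delivery chain.
def pixels_per_second(width, height, fps):
    return width * height * fps

p1080_25 = pixels_per_second(1920, 1080, 25)  # baseline live stream
p1080_50 = pixels_per_second(1920, 1080, 50)  # higher frame rate
p2160_25 = pixels_per_second(3840, 2160, 25)  # UHD at the baseline rate

print(p1080_50 / p1080_25)  # 2.0: doubling fps doubles the pixel rate
print(p2160_25 / p1080_25)  # 4.0: UHD quadruples it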

Your user interface differs from most motorsport platforms by providing a YouTube-like approach. What is your objective behind this decision? Was one of your goals to create a user-friendly interface that encourages agile viewing? Exactly. While the platform started as a traditional OTT (à la


Netflix, if you will), our platform was designed to quickly evolve into a UGC platform, hence the user-friendly look and feel. The upcoming version of our platform will lean even more to the UGC side, with full self-service UGC functionalities. We want it to be fully used by the racing community (race teams, drivers, organising bodies, manufacturers, etc.).

The platform features custom channels such as 24 Hours of Le Mans, Audi Sports, Dakar, Esports, Ferrari, FIA Karting and more! How do you handle the AV content contribution? Does each channel have its own tools to share content, or do you centralize all asset management? Our CMS and asset management are

centralized. But each channel owner has its own view and specific permissions.

What platform helps you manage all the assets? Is it an external development? Our whole platform is developed internally. Motorsport Network defines itself as a media and technology company, so developing



our own platforms has always been part of our strategy. The OTT platform follows the same strategy, with its own internal team of developers and IT specialists.

What are the technological partners that make the platform possible? Our main technology partner is Amazon Web Services. We have a few other partners for middleware services and advertising technology.

What cloud platform are you implementing? Do you use it to provide additional services such as big data, AI recommendations, bitrate optimization, and more?

AWS offers a number of those services, of course. For more advanced functionalities such as AI recommendations, we are investigating other partners and even developing the features ourselves.

Now you offer content for North America, Russia, Europe… How do you handle and manage the viewing rights of live competitions depending on the region? We have developed our own geo-management tools in our platforms. Knowing that viewing rights are critical for the different racing series, geo-management was developed and integrated when we launched, back in 2018, and we have made various improvements to the system since. We can geo-manage at the channel level, at the program/show level, and even at the episode/video level.

The service can be viewed via TV, web, Chromecast, Roku, mobile devices… What

are the difficulties of adapting your platform to each of them? Do you adapt the application internally? All those applications have been developed internally, yes. The difficulty is purely management and finding the right people. Another challenge is to make sure the experience and overall look and feel are similar across devices, while at the same time adapting to all those different environments from a UI standpoint.
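The channel/program/episode geo-management described earlier naturally forms a hierarchy in which the most specific rule wins. A minimal sketch of that idea follows; the rule structure and region codes are illustrative assumptions, not the platform's actual implementation:

```python
# Hypothetical hierarchical geo-restriction check: an episode-level rule
# overrides a program-level rule, which overrides a channel-level rule.
RULES = {
    ("channel", "endurance"): {"allow": {"EU", "NA", "RU"}},
    ("program", "le-mans-24h"): {"allow": {"EU", "NA"}},
    ("episode", "le-mans-24h-2020-race"): {"allow": {"EU"}},
}

def can_view(region, channel, program=None, episode=None):
    # Walk from most specific to least specific and apply the first rule.
    for level, key in (("episode", episode), ("program", program),
                       ("channel", channel)):
        rule = RULES.get((level, key)) if key else None
        if rule is not None:
            return region in rule["allow"]
    return False  # no rule at any level: deny by default

print(can_view("NA", "endurance"))                    # True
print(can_view("NA", "endurance", "le-mans-24h",
               "le-mans-24h-2020-race"))              # False
```

Resolving from the most specific level outward is what lets a single episode carry tighter rights than the series it belongs to, without duplicating rules across the catalogue.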

What's the future of OTT platforms? More specifically, how will the platform evolve over the next few years? I believe the next big thing will be fan and viewer integration, at many levels: how they engage with the content through comments and live chat, but also how they actually take part in the video experience with their own content. Based on this, we will definitely work on functionalities that enable fans and viewers to do just that.




America's Cup: a completely new 360 concept. Since 1851, the America's Cup has provided spectators with an exciting competition that has managed to maintain, despite its history, its own distinct identity. The technology associated with its dissemination, however, has not stopped evolving, now offering possibilities that were inconceivable 170 years ago. Circle-O, a joint venture formed by Riedel Communications and WEST4MEDIA, has been the company in charge of the production of the latest edition of this competition. It has done so by offering a new 360-degree proposal that covers all areas of the competition, exploiting essential advances in the field of communications. The outcome is possibly the most comprehensive, complex and viewer-friendly coverage in the Cup's history. At the beginning of March, before the final of the America's Cup took place, TM Broadcast International had a conversation via Zoom with Werner Eksler (Managing Director of Circle-O) and Tim Pushkeit, Riedel's Technical Director at the America's Cup, to navigate the technological concepts that shape this edition.

We're talking in early March, and sadly the America's Cup's first weekend of racing has been postponed for a week due to Auckland's latest Covid-19 Level 3 lockdown. Was everything ready for

your coverage or do you actually appreciate these extra days? Werner: (Laughs) The short answer is: no, we are not appreciating the days! The event organization has done a tremendously good job preparing for

such situations. They have a plan in place for every situation, for every alert level, starting from alert level one up to level four. All of these plans are aligned with the New Zealand government. For every alert level, there is really a

Photo © ACE | Studio Borlenghi



specific plan for how we deal with it, working just within our bubbles, etc. The plan does not only cover TV production, but also the teams, the event organization… So, let's say it was not unexpected. As it was well planned, it was nothing that hit the organization and us by surprise. In principle, we would have been ready for this weekend.

The 36th America's Cup is the first finish line of a path that you began as Circle-O in 2018. What's the origin of this joint venture between West4Media and Riedel? Was innovation in America's Cup production among your main goals? W: The principle of the joint venture was that Riedel and West4Media came together to offer what we call a 360-degree concept for the entire production. This means race management, all the technology and infrastructure to run a race, the entire TV production, and also the entire infrastructure for

Werner Eksler (Managing Director of Circle-O)

broadcast in the event area. As Riedel and West4Media had successfully worked together in the past, it was pretty obvious for us that we could join forces.

How did you think the 35th America's Cup coverage could be improved? What were the critical points where you thought: "OK, we can do this better, or we can change one thing or the other…"? W: Honestly, I have to say that when we saw the coverage of the 35th Cup we were all thrilled by the incredible quality they had and by all the innovation they delivered. So it was really difficult for us

Tim Pushkeit, Riedel's Technical Director at America's Cup

to say: "Can we match that level, or can we go beyond it?" That is the first level. The second level came when we saw that we would face a totally new boat class with totally new challenges and totally new race-class rules. That meant we would have to face totally different challenges in the field of boat setup or race-management setup. Therefore, I wouldn't call them improvements on what we did; I would call them developments in how you need to install equipment or run race management on this new boat class; developments in how we can add new features, for example in the field of AR; or how can


we add something in the field of audio that’s coming from the boat...

In a few questions we're going to delve into the technical aspects, but first we want you to sum up the key features of the 360-degree concept that Circle-O has prepared for the America's Cup. In addition, is what we saw in the Challenger Series by PRADA a few weeks ago the same coverage that we will see in the main America's Cup? Tim: We are adding features event by event. For example, we added a second helicopter in the Round Robin; we are adding a second camera chase boat to give different angles of the race boats and more impressions from the water; we are also adding


Photo © Riedel



more experts on the water and in the helicopter, like at the Prada Cup… The same goes for the AC75! We started with 10 cameras. Afterwards, we added features like a 360-degree camera for social media, a body-worn camera worn by the helmsman, and biometric solutions. So, there was a roadmap to add features that would later be combined in the 36th America's Cup. Back to the first question, the key feature of the concept is that the solution we offered is delivered completely from one source. It's the first time that we are not doing the classical broadcast, in which, for example, we add some cameras on a boat. We integrated our whole technical solution into the boat, which means into their system. And that's not only the broadcast part, meaning audio and video. We are also handling all the data, for example from the race management system. So the 2-cm-accurate GPS, which the umpires need to make a call or to announce a penalty if a boat has crossed the boundary, for example, is all combined and linked. We are receiving the biometrics, the GPS data, the heading, the speed… We are receiving everything! This is the reason why we can bring the boat closer to the audience. For example, through these links we have the opportunity to dial in and get crystal-clear audio communications between the team. But it is not done with an additional microphone for broadcast. The microphone we are providing is used for us, as broadcast, but also for

them, as team communications. We combined lots of things. We have matched the different interests and provided teams with a solution that they can use for themselves and that we can use for broadcast as well. As I told you, this is a 360-degree concept. We cover the race area, which is everything on the water, including marshals. We track up to 200 devices, for example marshal boats and press media boats… Of course, this also includes the marks and the positions that define the start lines. Then we have the racing itself, which encompasses everything about the races. For example, with that we provide dedicated



Photo © Riedel

onboard feeds to team hospitalities, so Team New Zealand can see the feeds and hear the audio from their boat out on the water during training; and the same goes for Ineos, Luna Rossa and American Magic. And then, the third area combines press conferences, the stage, the event village, the team hospitalities… That is all integrated into one big MediorNet network. And that even

integrates the local broadcaster Television New Zealand (TVNZ), which has a studio here in the village but also does a lot of remote production from its main studio. All of that happens through our infrastructure. So we combined all three areas: race, event and broadcast.
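As a rough illustration of the umpire boundary call mentioned above (the race management system knows each boat's position to about 2 cm and flags a boat that leaves the course area), a geofence check can be sketched as follows. All names, coordinates and the polygon test are our own illustration, not Circle-O's actual system:

```python
# Hypothetical sketch of an umpire boundary call: the race management
# system receives a high-accuracy GPS fix for each boat and flags any
# boat whose position falls outside the race-box polygon.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is the point (x, y) inside the polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def boundary_call(boat, fix, race_box):
    """Return an umpire alert string if the fix is outside the box."""
    if not point_in_polygon(fix[0], fix[1], race_box):
        return f"{boat} outside boundary: penalty check"
    return None

# Race box as local east/north coordinates in metres (illustrative).
box = [(0, 0), (2000, 0), (2000, 3000), (0, 3000)]
print(boundary_call("NZL", (2100.0, 1500.0), box))  # flagged
print(boundary_call("ITA", (900.0, 1500.0), box))   # None
```

In the real system the same positional data drives broadcast graphics and the umpires' calls alike, which is the point of the "all combined and linked" approach.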

There are two central challenges in covering the America's Cup. We're probably simplifying too much, but there are two critical parts that seem the most difficult to handle. One of them is communications and contributions to and from the yachts. How did you handle this? T: The biggest challenge we have here in Auckland is that we don't have only one race box. We have five race boxes in


total, and this covers quite a big area, to be honest. The most important ones are race boxes B, C and D, which cover the so-called "stadium courses". But then, of course, due to extreme weather conditions, we also have race boxes A and E for high wind and low wind and things like that. Consequently, every race box needs to be live at all times, because the race officer can change the race box in the morning. So, what we have here is a big network. We have four receive points across the whole city to cover all race boxes here in the Hauraki Gulf. And from there, we are receiving the whole network: the IP links, the video, the audio... We receive everything and then stream it over a wireless link into the broadcast centre. That's how we cover our race boxes. There is also redundancy, because a few race boxes are covered from two receive points. That's the whole intelligence we're using behind it.
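The receive-point redundancy Tim describes (five race boxes, four shore receive points, some boxes covered from two sites) can be sketched as a simple coverage lookup. The coverage map and site names below are invented for illustration; the real network is, of course, far more involved:

```python
# Hypothetical sketch of receive-point redundancy: four shore sites each
# cover a subset of the five race boxes (A-E), and every box is reachable
# from more than one site, so a failed site still leaves the box on air.

COVERAGE = {
    "north_head": {"A", "B"},
    "city_tower": {"B", "C", "D"},
    "east_coast": {"C", "D", "E"},
    "gulf_relay": {"A", "E"},
}

def receive_points_for(race_box):
    """All shore receive points that can carry the chosen race box."""
    return sorted(site for site, boxes in COVERAGE.items() if race_box in boxes)

def select_path(race_box, failed=()):
    """Pick a working receive point for today's box, skipping failed sites."""
    candidates = [s for s in receive_points_for(race_box) if s not in failed]
    if not candidates:
        raise RuntimeError(f"race box {race_box} has no working receive point")
    return candidates[0]

# The race officer picks box C in the morning; one site is down.
print(receive_points_for("C"))                   # ['city_tower', 'east_coast']
print(select_path("C", failed=("city_tower",)))  # 'east_coast'
```

The design point is that the routing decision can be made every morning, after the race officer picks the box, without re-rigging anything on shore.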

About communications, that's quite new, honestly. We are using Bolero products throughout. Every boat is a stand-alone Bolero island… and the teams use it even when we are not around, for testing and training! For example, Ineos has been using it for two years. They have already done tests with Bolero

in Portsmouth for training, and even when they had their team base in Cagliari. From this, they can communicate over their own Wi-Fi link to their team chase boats, for example. During the live show, we also pick up this link and send it over IP into the broadcast compound,


which gives us the chance to hear it, and which explains, for example, why a helmsman like Jimmy Spithill wears only one microphone. He can press the PTT to speak to the race officer or the umpire if he has a protest after a penalty. But then, we also have this audio line in our audio

Photo © Riedel


mixer. We just need to open this line or fade it in, so we can hear it right away. We can even link helmsmen or characters from different boats to each other. For example, the last time we did it was when Jimmy Spithill won against Ineos Team UK, and Jimmy Spithill spoke to Ben Ainslie in high broadcast quality through our technology.

We're still on the AC75. What video capture systems, communications solutions and microphones are included on each boat? T: We have different kinds of cameras, both from us and from DreamChip. One is mounted in a position that gives you the front shot from the

boat. Then we have three PTZ HD cameras. Two are mounted on the mast and one is on the media post at the rear of the boat. And more! These cameras were designed just for the Cup, by us or in collaboration with external suppliers. Regarding audio, we have eight microphones on the boat, which are custom units from Sennheiser. They prepared, delivered and provided a cage and a basket around them, as well as waterproof foam, to make sure these microphones deliver good audio quality under these rough conditions. For communications, we're using Bolero products. The rest is a completely customized system which was


designed, developed and implemented in Wuppertal at Riedel.

Furthermore, you've deployed camera boats and aerial coverage. What systems do these "vehicles" include? We assume you need a reliable stabilization system for this, are we right? W: We have two helicopters and what we call chase boats or camera boats. These boats were specially developed for the America's Cup because they need to be very fast. Nonetheless, despite being very fast, they are not as fast as the race boats! In total, we have four stabilized cameras: two on the helicopters and two on the boats. T: The gimbal cameras we're using are Shotover M1 systems. Inside, we use a Sony P50 camera with a Fujinon 46 lens. That's the setup. The colleagues who operate these gimbals have a lot of sailing experience and we have

already cooperated with them in other events.

Another major challenge is gathering all feeds and delivering them in real time, since that's the core of the race production. How do you manage this process? T: Officially, we provide five feeds that we send into the world feed, compliant and standard. Then we have two onboard feeds, which means you can watch the race from one dedicated boat only, and a data feed, which provides more detail for the sailing experts interested in the technical side. That is the output. The input, of course, is also quite interesting. At the moment, we receive four cameras in parallel from each boat, so we can select which camera is transmitted to shore. With two boats, that gives you eight signals in total, but we have 20 cameras in use at the same time, and we can live-switch them and

remote-switch them from the IBC. Everything can be controlled from the main production room, which means we have one dedicated camera operator per boat sitting in the production room, who can control, pan, tilt and zoom these cameras. Then we have

Photo © ACE | Studio Borlenghi


the two helicopters, we have the two camera chase boats, and then, of course, we have a few other cameras in the village for audience shots, press conferences or the stage. So, we are talking about 20-25 streams coming in all the time. Hardly a single camera is wired: everything comes in wirelessly, which is also quite challenging in terms of frequency management. I think we have only one wired camera, which is a little camera in the umpire booth. Everything else comes in wirelessly. Then, we produce the

streams with two vision mixers. The big vision mixer does the main feeds, the world feed and all the graphics, and the smaller vision mixer does just the dedicated streams: the onboard feed and the data feed.
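The split between the two vision mixers can be pictured as a simple routing table: onboard sources feed the small mixer's dedicated streams, while everything else goes to the big mixer building the world feed. The source labels below are our own invention, not the production's actual router names:

```python
# Hypothetical sketch of the feed split between the two vision mixers.
# Roles and source labels are illustrative only.

SOURCES = {
    "nzl_onboard_1": "onboard", "nzl_onboard_2": "onboard",
    "ita_onboard_1": "onboard", "ita_onboard_2": "onboard",
    "heli_1": "aerial", "heli_2": "aerial",
    "chase_1": "water", "chase_2": "water",
    "village_1": "event", "stage_1": "event",
    "umpire_booth": "wired",  # the single wired camera
}

def assign_mixer(role):
    """Dedicated onboard streams go to the small mixer; the rest feed
    the main mixer that builds the world feed and carries graphics."""
    return "small_mixer" if role == "onboard" else "main_mixer"

routing = {src: assign_mixer(role) for src, role in SOURCES.items()}
print(routing["nzl_onboard_1"])  # small_mixer
print(routing["heli_1"])         # main_mixer
```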



Are you opting for onsite production or are you delving into remote production? W: I think Tim can say more about the technical side. In my understanding, it could be a totally remote production from the technical point of view. The reason why we have a really big IBC onsite is that this is not a production like a football game, where everything is standardized and you have a clear timetable of what happens and when. This is an event that changes every minute. The race course can change, the start can change, the teams may have an issue because they have been training and broke a camera... You need close contact with the event and the teams. Otherwise, it's not possible to run such a complex event in such a short time with such flexibility in terms of organization. Therefore, I think the reason for being on site with a larger group is not so much the technical infrastructure, which could be run standalone or

remote. It's really the communication that is key, because if one thing changes in the 360 concept, it has an impact on other things. You need to have everybody onsite, react at short notice and talk to people face to face. Also take into account the time

difference between Germany and New Zealand… It makes more sense to have everything on site. Technically, I would say, Tim, it could also be run remotely. T: Yes, everything could be run remotely, and it was also discussed at the beginning. Nonetheless, at


Photo © ACE | Studio Borlenghi

Both interference on the water and the way salt water affects technical equipment are things you shouldn't underestimate. Also, it's quite busy on the water, and getting spare equipment or doing repairs on the water… It's not like you can just walk from the grandstand to the football field, as Werner said. So it needs a somewhat bigger crew.

the beginning, the scenario was a bit different. We were not scheduled to be here in Auckland for five months. That is due to Covid. Originally, it was planned to have World Series events in Cagliari, in Portsmouth or elsewhere. We had different venues and

then the Auckland event was planned for just a couple of weeks. And then, as Werner already said, sailing is a bit more intense, especially technically. You have bigger challenges with the technology on the water. It's not a plug-and-play solution.

Remote support, yes, it would be possible, definitely. I think the technology can do this nowadays; it's just a question of where you are and how good the link back to the remote production centre is, wherever that may be. We have Wuppertal connected these days, but they support us mainly with engineering. They are not supporting us with things that influence the live show and broadcast performance. Instead, they are doing a lot of paperwork and jobs in the background. Especially at


the beginning, as you can imagine, due to Covid. We were constantly connected to Wuppertal, with all of product management and development, who released new software or firmware, or added features for us. They also monitor and could have access to the cameras on the boat. For example, the person responsible for programming the remote control of our own PTZ cameras did it from Wuppertal. So, there was an engineering backbone. They could see the signals, they could see the data, they could have had access to all systems when required, and they were a backup scenario.

Photo © ACE | Studio Borlenghi

By the way, are you producing all the races in HD? W: Yes, 1080.

In addition, you already mentioned that AR will play a great role in the coverage of the America's Cup, as we've also seen

in the Challenger Series. What kind of implementations are we going to enjoy? Who is your graphics provider for this project?

W: We are working with a graphics provider, ARL: Animation Research Ltd., a company based in New Zealand with extensive experience in


doing sailing coverage. They have been involved in the last Cups as well. They are real experts in the field of augmented reality and in 3D. Their graphics are a combination of creativity and advanced technology, so they are a really great partner. As you can see, the outcome is really great. We are constantly developing, so you will see more features. We will try to implement a cockpit view with some graphics, similar to what can be seen in F1 and similar productions. You will see more data, like wind information, currents and stuff like that. The main goal is to really give the viewer an understanding of what's happening on the boats.

Photo © ACE | Studio Borlenghi

Who are Circle-O's technology partners for this project? W: There are several companies involved. ARL, doing the graphics, and Amis, for the Shotover cameras, are doing an excellent job; both are experts in their fields. We are also working with Telstra as a distribution partner. In total, we provide seven feeds via fiber, distributed by Telstra to the world.

T: The whole race management system is developed, defined and designed by Igtimi, a Riedel company based in Dunedin, on the South Island of New Zealand. They also have other maritime products like the YachtBot, tracking devices and wind measurement devices. We also link to our research and development hub in Porto, where our fluid sensors come from; installed underneath the buoys and marks, they can give you the water current. This is then shown in the race management system, but the technology and the sensor were developed at Riedel's research and development hub in Porto. From Lawo we have the big mc²56 audio mixer. We are doing everything IP-based here. Combined with our MediorNet technology and Boleros, this gives us great opportunities. On the boat, as I said, we use Sennheiser, who customized the microphones for us to


make sure that we can use them as waterproof microphones (as waterproof as a microphone can be; if it were completely waterproof, you wouldn't hear anything). To get the data off the boat we also use iXblue, an established brand for providing accurate data. Furthermore, we decided to go with Simplylive as our slow-mo system. This system is not only doing the live replays and live slow-mo; it is also doing all the clipping and snippets, everything for social media. It's also the interface to our EditShare, where we store all data so that postproduction can access the whole footage. After the live show, Simplylive can also pull clips, like video news releases or highlights, from postproduction through EditShare and play them out straight away. And then, of course, for postproduction we're using EditShare as a server. From there, we

upload everything into the America's Cup cloud. One more thing! For postproduction we have seven edit suites, and we are using Premiere.

Are there any other key technical developments worth mentioning? T: As you said, every department and every technical device needs to fit as one piece of the

Photo © ACE | Studio Borlenghi

puzzle. Otherwise, it wouldn't work. Yes, we had some highlights. What I said about the comms to the boats, I think, is a new benchmark. Especially because everybody said it would be challenging, and maybe not really doable, which makes it quite a win. For example, we have a water reporter 25


kilometres away on a camera chase boat, at 40 or 45 knots, using a Bolero for live on-air commentary… and the same in the helicopter! I think we have had great experiences with our mesh technology and the IP network technology. But as I said, it all needs to come together at the right time, and it all has to work at the right time. The IBC

centre could be technically superb, but if you don't receive a single one of all these wireless signals from the boats, then you can't send anything. So, all departments need each other. I think that was the key attitude here on site.

What's next for Circle-O? Are you going to continue working on the America's Cup for the next few years? Are you working to develop new advanced coverage for other sporting events? W: The America's Cup winner organizes the next America's Cup. So, at this stage, with no winner yet, it's not clear who the next organizer will be. We can't say whether we'll work on the next Cup, but we certainly have the appetite and we would love to. This is something that will be decided in the next half year, once the current America's Cup is done. As Circle-O, we also see this as a role model for other sporting events and sporting authorities. We think we can offer quite a cool package, with advanced, tailor-made technology and a lot of experience in broadcast and events. So, yes, there is definitely a goal to continue working as Circle-O, but at the moment we are really focused on delivering this America's Cup.




Vital lessons translated into images



We know from other conversations that you have lived a fascinating life, and that those experiences ended up creatively defining you. How much of your work as a cinematographer is defined by your life experiences, and how much by your film education and references? When I was in film school I heard a piece of advice Akira Kurosawa once gave to aspiring filmmakers: "Read a lot and live an interesting life." For whatever reason, that stuck with me. While my reading habits still need work, I've always tried to prioritize new experiences and adventures. This adventure-seeking may have delayed the start of my film career, but I think it was worth it. It shaped who I am as a person, so it follows that it influenced who I am as a filmmaker. Whether speaking about my film education or my overall approach to life, curiosity is an important pillar. I worry that too

Andy is a quirky director of photography. When some people talk about the pace of a television production, he highlights the importance of meditation. When discussing his experience, he underlines the value of his mistakes and continued learning by quoting Jerry Seinfeld. His path does not involve putting down roots, although he especially values the people who have been with him through different stages. Throughout his career he has explored humorous formats for digital platforms such as 'Funny or Die'; he has created numerous episodes for 'Go90' productions; he has flown to Albania to take on the ambitious comedy 'Skanderberg'; and he has recently found important recognition with 'PEN15', broadcast on Hulu. We do not know what his next venture will be, but we are sure it will be exciting, considering his history to date...

much comfort leads to stagnation, so I try to stay curious and keep myself hungry in whatever way I can. I want to learn, experiment and push myself. This is something I look for in my collaborators, too, particularly directors. Sam Zvibleman is a director I've worked with a lot; he and I push each other quite a bit. We are both ambitious and strive to

think creatively about making bold choices that enhance and support the emotion of a particular scene, story or character. That kind of approach gets my heart pumping and helps make the long hours and months worthwhile. Film school brought many lessons, but one of the biggest was having an opportunity to fail. At the time I didn't see it that


What was it like working with (I guess) these limited resources? Did you bring this experience to the latest features you’ve worked on?

way, of course. When I made mistakes on short films, be they mine or someone else's, it felt tragic. In hindsight, I realize that failing is an important and inevitable part of filmmaking. Being able to fail without huge repercussions (mistakenly deleting footage for a paying client, for example) is a key element of growing as a filmmaker. Mistakes are extremely valuable because they

help you learn at an accelerated pace. I recently heard Jerry Seinfeld say that "pain is knowledge rushing in." The older I get, the truer that seems to be.

You got started in the business on webseries such as 'Beats Per Mnet'; later, you worked on 'The Earliest Show' for Funny or Die. What did you learn from these experiences?

Smaller-scale jobs have been an extremely important part of my development, and they continue to be. I still shoot shorts when I can, especially if I see something creatively stimulating in them, be it the visual approach, an interesting director, a strange location, etc. In general, limited resources force you into a state of creativity. You can't throw money at your problems, so you start to think outside the box. I've used white t-shirts and cardboard covered with aluminum foil as bounce cards. I've taped tablecloths to walls for negative fill and placed cameras on scraps of cardboard to create the poorest of poor man's dolly movements. Necessity is the mother of invention, and low-budget shoots will teach you wonderful tricks.


Photo © PEN15

Also, being around different sorts of talent is immeasurably valuable. You not only learn how other people work, but how you work. I learned very early on that I am not interested in people who yell and that being combative is not a strength of mine. The way I react now to various personalities is colored by my years of dealing with all manner of people. Experience, hours logged

on set, is something you just can't cheat.

Afterwards, you were part of several series that were screened on the now discontinued platform Go90. How did this stage of your life develop? We guess these shows were made with more resources. Were you able to evolve your cinematographic skills thanks to technology?

In 2017 I shot 32 episodes of half-hour TV, all for Go90. It was one of the most important years of my career. I shot a thriller, a comedy and a mockumentary, so I was able to develop and flex different creative muscles and lighting approaches. It was the first time I was given resources like Fisher dollies and larger-output lights to work with. It was also the first time I worked with multiple directors on


a single series, and I had to learn how to keep the visual language of a show consistent even when the directors were rotating in and out. In a lot of ways, that year was an entirely new education in film. Perhaps the most valuable aspect of Go90 for me was what I learned about coverage. Shooting that many episodes meant I had the opportunity to film literally hundreds of scenes; I think it actually approached one thousand scenes in that one year. In addition, I worked a lot with a director named Charles Hood on 12 of those 32 episodes. Charles is perhaps the most dedicated director I know when it comes to coverage. He is extremely thoughtful about the craft of blocking and camera choreography. He wants to make creative decisions on set, not in the editing room, so it is important to him that scenes develop within shots and are not broken up by constant edits. Charles taught me to be

more thoughtful about how scenes and sequences are put together, and now it is one of my favorite aspects of filmmaking. The way scenes are covered affects the mood and tone, similarly to lighting and color design. The other big takeaway from that experience was about life balance and the importance of keeping my mind and body up to the task. By the end of the year, I was running on fumes. I was scared I had burned myself out very early in my career. Now I pay extremely close attention to my sleep and diet when I am on longer shoots. Film production is physically and mentally punishing, and I want to make sure I'm prepared. I have strict rules about

what I eat, how much caffeine I drink and how much I sleep. The goal is to be able to perform at a high level when it’s Day 35, hour 15 and I’m working in the cold rain at 3am.

Also, if IMDb isn't kidding us, you were involved in the Albanian comedy Skanderberg. How was the experience? Did you notice a big difference from working in America, from a technological perspective? I have been lucky enough to work in Europe a handful of times, and I hope to do so much more in the future. The Albania show was a unique one, though. Our lack of






resources on that show was extreme. We had no C-stands. It got to the point that we were using literal toothpicks to hold backdrops together. We had exactly three HMIs, and when one bulb burnt out, that was it; we then had two. It was an extremely ambitious show, though. Our writer (Koloreto Cukali) and director (Adam Pray) didn't let our budget limitations limit their imagination, so we swung for the fences every day. We built huge sets and set out to make the best

looking TV show in Albanian history. I'd like to think we succeeded. The two most important lessons I learned on that job were: 1) Meditation. I was so stressed the first two weeks of that job that I almost had a nervous breakdown. Luckily, my director, who had experience working in Albania, helped me slow down and learn to let go of certain expectations. Most importantly, he got me meditating. I started meditating every day before arriving on set and haven't stopped since. It has become one of the most important filmmaking tools I have. I cannot recommend the habit strongly enough. 2) Communication. I was working with many people who did not speak English, and I didn't speak a word of Albanian. I



had previously worked in Barcelona, where the language barrier was similar, but this was much more extreme. I learned to communicate with smiles, gestures and sounds. I started to describe the quality of light with hand gestures and mouthed sound effects. It was hilarious to observe, but also encouraging, as it actually worked. It taught me the value of non-verbal communication, which has helped me in both life and work since then.

Whether they are webseries or formats for VOD platforms, you're close to shows linked to television. What do you like most about working on these shows? Do you prefer this to feature films? I grew up very focused on feature films, but over the last decade TV has really stepped up, in terms of both quantity and quality. When I was first pursuing film, I was always drawn to independent films. These small films have become more difficult to pursue, financially. I once heard Ed Burns say something about TV and independent film that helped shift my perspective on both. To paraphrase, he said, "Independent film has moved to TV. That's where the writing is. That's where the characters are." This golden age of television is an extension of the independent films of the 90s that I loved so much. The filmmaking, cinematography in particular, has gotten so exciting in TV. There aren't the aesthetic limits on television that there used to be. I'm grateful that I am often allowed to experiment and push the boundaries of the look of a show.

Some people agree that VOD platforms have brought unprecedented

creative freedom to television. What do you think about this? One of the most thrilling things to me about television right now is how bold it is making people. We are seeing filmmaking techniques that we never would have seen ten years ago. Filmmakers are not only being allowed to explore and push their styles; they are encouraged to do so. In order for things to stand out now, shows are leaning into stylistic choices, whereas a decade ago they were often watered down out of fear of being "too weird." Another wonderful aspect is that more filmmakers are working now, because there is more work out in the world. We are seeing more


filmmaking voices and, slowly but surely, more diverse voices. This is not only exciting, but imperative for the growth of our medium and craft.

How would you define your cinematographic style? Hopefully my style changes from project to project and is inspired

entirely by the story and the creative voices behind it. That said, I have noticed a trait in my work that comes up again and again: wabi-sabi. This is a Japanese term that, loosely translated, means "finding the beauty in the flaws." As camera sensors become cleaner and sharper, I find myself constantly battling them. I use vintage lenses, filtration and higher ISO values to beat up the image a bit. There are exceptions to this, but often when I see overly clean images or perfect camera movements, it feels fake to me, as though I can see the artifice of filmmaking behind it. Something about a dirtier image feels more true to me. The flaws make it feel real. Imperfections somehow feel honest to me.

What's your approach to technology? What are your favorite gadgets on set? This is difficult to answer


because there have been so many advancements in filmmaking technology over the past decade: the low-light sensitivity of camera sensors boggles the mind; RGBW LED lights are suddenly quite common and their output continues to rise; the color and depth information being captured by our cameras gives a whole new world of control in post; camera bodies themselves are becoming smaller and lighter and can be used in ways we never imagined. The list goes on and on. With all that said, the tools I probably love the most are in the world of wireless follow focus and monitoring. Monitors and focus peaking have become so good that focus pulling is now a mixture of using one's eye as well as marks. Actors now have more freedom to alter their performances and are less constricted, while camera operators have more freedom to improvise and adjust. It is not a methodology for every

Photo © PEN15

scene or every project, but it allows for happy accidents and a constantly living, changing shot. Each take can be a bit more alive. My 1st AC, Bryant Marcontel, and I have worked together for thousands of hours. We’ve learned one another’s rhythms and habits. If I see something I want to chase as we are rolling, I can lean into it and trust that

he will be able to maintain focus on what is relevant to the moment. It's a lovely way to work, though it makes life a bit more challenging for my ACs. Sorry, guys! I love you.

We love how PEN15 is defined as a “cringe comedy”, even though it is accurate! We can say that this was a big step for you in terms of recognition. What was requested for the show from a DP’s perspective, and what did you contribute? I am extremely thankful for PEN15. It certainly brought me some recognition, which is important only in that it has given me more opportunities and opened

more doors. As far as the show itself, the mantra from the creators was, “Real, not beautiful.” They felt the same way I do: If the show looks too pretty or perfect, it won’t feel honest. This is a show that strives to be honest about the awkward years of growing up. First loves, feeling like you don’t fit in, puberty, etc. None of those things felt beautiful to any of us as we were growing up, so we wanted to reflect that in the visual approach. For us, that manifested itself in a number of ways: Minimal camera movement, getting rid of beauty lights, shooting at a higher ISO to add noise into the image. We also filtered everything and added a bit of grain in post. If it’s about the messiness of growing up, we wanted the palette itself to reflect that.

What was the camera + lenses package? We shot with two Alexa Minis on both seasons, but our lens packages changed from season one to season two. In season

one, we shot primarily with the Fujinon 19-90 and also used some Cooke S4 primes on our B Cam. In season two we wanted to quietly mature the look. We shot on a mixture of Zeiss Super Speeds and Ultra Primes. Another major shift was our lighting. In season one, we lit with HMIs for our window/sun work, but all LEDs for close-ups and the like. In season two, I wanted to shift our entire lighting package to tungsten. I am very happy with what it did to our actors’ skin tones and skin texture. The subtle differences between light sources were a great way to quietly alter the feel and texture of our show between seasons.

Everything in the show is so early 00s, aesthetically speaking. How did photography help recreate this feeling? I’ve touched on some of this already with filtration and “dirtying” our image, but I will add that we had a mantra. When we were unsure about the look or


feel of a scene, from lighting to performance, one of the creators (Sam, Maya or Anna) would inevitably ask, “How would they do this in Welcome to the Dollhouse?” That film by Todd Solondz was always our touchstone. It’s a mid-nineties, low-budget, independent film that was the dominant influence on PEN15. It was our compass when we felt lost. It is a film that uses minimalism and naturalism to great effect. That was our North Star.

We’d also like to know more about your approach to lighting. We’ve seen a naturalistic and intimate approach in several scenes, but we don’t know if this is a preference or a circumstance. Can you tell us more about this? I love lighting and am always looking to improve and expand. Each project brings different aesthetic needs and challenges, so I try not to have one dominant approach. For PEN15, naturalism was the

goal from the start. In season two we let mood dictate light more than we did in season one, but it was always subtle. Season two has very specific pops of red and blue that we never would have added in season one, but we felt it fit the characters’ journeys. This type of visual expression interests me a great deal. Using color to enhance a character’s state is something I am finding myself more and more drawn to. It is also important to note that performance is always key to any project, and on this project that was particularly true. We worked really hard to give actors space to work and even avoided marks when we could. It was challenging for our G&E team, as well as our camera team, but it felt worth it. Having lead actresses who love to improvise, as well as a cast of young actors, meant we needed to be constantly flexible. That led us to lighting through windows and from above

copyright PEN15.

as often as we could so actors weren’t trapped by lights and grip equipment. I’ve been leaning more and more toward simpler lighting setups so we can move quickly and get some of the gear out of our way. But “simple” never means less important. Quality of light is often where I start my creative work.


What’s next for you? Will you stay close to TV series or will you move on to feature films? In 2020 I was attached to several projects, but the pandemic changed all that very suddenly. As we move into 2021 there are a number of projects I have lined up, but I have learned not to assume anything will happen until I’m on set, shooting the first shot. My main excitement for 2021 is that, after four years of shooting almost exclusively TV, I now have several feature films on the schedule. I’m excited to get back to features and see how it compares to the pace of TV. My main hope is that I can explore shots and sequences that linger and develop slowly. In TV the pace needs to be adjustable in the editing room, which means you need to shoot more coverage and make sure showrunners, producers, editors and directors have the ability to speed up or slow down various scenes. In features there is a bit more freedom to let shots and scenes play out. I am eager to push myself toward patience in the way we (the director and I) design shots and sequences. Ultimately, though, after a year of being stuck at home, I’m mostly just excited to get back on set.

I hope to work in Europe again this year or next. Working overseas feels like I am given fresh eyes. Seeing the difference in light quality in Barcelona versus Albania versus southern California is inspiring. The texture of the streets, the variation in facial structures and skin tones… These are some of the things I hope to explore in 2021 and beyond.




In the first part of this AoIP series we reviewed the advantages of using this technology in our production environments and its main differences compared to traditional audio, whether analogue or digital. In the second part, we addressed its practical applications and the features necessary for AoIP systems to achieve these distinct functionalities. In this third part, we will focus on how to securely control and maintain our AoIP network: concepts so far unheard of in linear digital or analog audio environments, but common in IT environments for more than 30 years now. By Yeray Alfageme

In traditional linear systems, be they SDI, AES or MADI, monitoring where signals were going was fairly easy. After all, they were point-to-point connections. The problem inherent to these systems was scalability. If we wanted to introduce redundancy in these linear systems, it was ‘as simple’ as duplicating equipment, signals and connections. But what happens if our routing matrix is full, if we do not have any additional equipment, or if no more cables simply fit? Then we have a problem. This complexity when increasing the size of the systems led to two extremes: a static, rather rigid system designed for a specific use and a given dimension; or an oversized, expensive system, with the aim

of gaining in flexibility at the cost of increasing its size disproportionately. Neither of these situations was desirable.

IP and redundancy are almost synonymous In IT environments, redundancy is part of the design and is intrinsic to the features of the equipment. Achieving said redundancy does not involve duplicating cabling or equipment, since the connectivity itself is used to convey signals of any kind bidirectionally, and in greater numbers than in linear systems: IP is data-agnostic. Whether it is a video signal, an audio signal, an Ethernet frame or any other


data packet intended for transfer, the IP infrastructure supports it and offers the same features, including redundancy. In fact, increasing the capacity of a network, either in terms of number of signals or bandwidth, does not usually require duplicating equipment or cabling: trunk connections are established between pieces of equipment and their capacity is shared, as simple as that. The ‘issue’ may lie in the fact that one additional layer of abstraction is added compared to linear systems. There, it was enough to follow the cable to know how a system was connected, but not in IP. A single cable conveys multiple signals in a multiplexed, bidirectional manner, and between several devices, which means that ‘following the cable’ is no longer valid. However, the redundancy we have just gained is almost unattainable, in practice, in linear systems.

Sharing resources In addition to achieving the desired redundancy, sharing the same equipment, cabling, or simply the overall capacity of the system between different productions, be they studios, mobile units or similar, is natural and intrinsic to the system. There is no need to worry about signals crisscrossing each other; the system takes care of that for us. And this is one of the great benefits of IP systems, whether audio or otherwise: doing more with the same, or even with less. There is no longer a need to trim equipment down to size, or to oversize it based on guesses about future use; scaling up and sharing resources between productions is not just easy, it is simply the right way to go. With all this maelstrom of shared resources, the following question always comes to mind: what if an operator uses a resource that does not correspond

to his/her own production, thus impacting another production? In linear systems this would be nearly impossible, unless someone pulled out the wrong cable or got into someone else's control panel; but in IP environments it is enough to choose the wrong crosspoint in the software to wreak havoc. That is why user authentication, beyond security, is essential in order to properly control these shared resources and establish strict rights and limits on their respective use. As you can see, the concern is not so much someone unauthorized accessing our resources, but rather a person with operational capacity making a mistake and creating chaos, not only in the production in which they are involved but in productions running in parallel. Because, in addition to the impact of such an occurrence, detecting the source of the problem and correcting it is usually neither quick nor easy.


Attention, unauthorized access

Before dealing with security, let's delve a little more into access control. This requires two things: authentication and rights control. The former is provided by the standard corporate IT environment, typically through the LDAP protocol. In a very simplified way: it is a centralized system where all the credentials used throughout the corporate environment are stored; all subsystems can connect to it to learn the specific credentials of a particular user.

Once authentication has been dealt with, let us move on to rights control. This part must be addressed system by system. In LDAP we specify the user credentials and which systems can be accessed, but the rights within each one are so specific to that system that it is essential to do so on a case-by-case basis.

By doing this, adding or removing users from a system, or from several systems at the same time, is much easier. Normally, a user not only accesses a specific system to operate it, but also needs access to a microphone, audio processor, mixer, or a recording and encoding system, for example. Having to recreate the same credentials in all of them is very tedious, not to mention updating, maintaining and even deleting them. Materially unfeasible.

This solves issues relating to user management, maintenance and access control in a simple and practical way, so that sharing resources is neither a headache nor a risk.
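As a hedged illustration of this split, here is a toy sketch in Python: a central directory answers "who is this user?" (the role LDAP plays), while each system keeps its own rights table, checked system by system as described above. All names and groups are made up for the example.

```python
# Illustrative sketch only -- not a real LDAP client. A central
# directory handles identity; each system keeps its own rights.

# Stand-in for the corporate directory (the LDAP role):
DIRECTORY = {"aperez": {"groups": {"studio2", "audio-ops"}}}

# Per-system rights table, defined by the mixer itself:
MIXER_RIGHTS = {"audio-ops": {"route", "fade"}, "studio2": {"monitor"}}

def can(user, action, rights_table):
    """Authenticate against the directory, then check local rights."""
    entry = DIRECTORY.get(user)
    if entry is None:
        return False  # unknown user: authentication fails
    allowed = set()
    for group in entry["groups"]:
        allowed |= rights_table.get(group, set())
    return action in allowed

assert can("aperez", "route", MIXER_RIGHTS)       # within their rights
assert not can("aperez", "reboot", MIXER_RIGHTS)  # out of scope
```

Removing the user from the directory revokes access to every system at once, which is exactly the maintenance benefit described above.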

More IP... I mean, IT In addition to integration with user control systems like LDAP, there are many more things that AoIP systems can benefit from. Monitoring, for example. Due to the typical high complexity of IT systems, it is already a standard procedure to use systems such as Nagios, PRTG,

Grafana or Datadog to discover and monitor the entire network in real time. You can set alarms of almost any type and establish patterns to predict when a system will fail or exceed a certain capacity threshold, thus being able to foresee when to expand or replace part of the network. Another very practical protocol is SNMP, which allows discovering and 'auto-configuring' new systems connected to the network, making it even more scalable and easy to expand.
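At its core, the kind of capacity alarm these tools provide can be as simple as comparing a polled metric against a threshold. The sketch below is illustrative only (not the API of Nagios, PRTG, Grafana or Datadog), with made-up link names and figures:

```python
# Toy capacity monitor in the spirit of the tools above.
# Raises an alarm for any link above a utilization threshold.

THRESHOLD = 0.8  # alarm above 80% of link capacity

def check_links(samples, capacity_gbps):
    """samples: {link_name: current_gbps}; returns alarmed links."""
    alarms = []
    for link, gbps in samples.items():
        utilization = gbps / capacity_gbps
        if utilization > THRESHOLD:
            alarms.append((link, round(utilization, 2)))
    return alarms

# Example poll of three AoIP trunk links on a 10 Gb/s network:
poll = {"trunk-a": 4.2, "trunk-b": 9.1, "trunk-c": 7.8}
assert check_links(poll, capacity_gbps=10.0) == [("trunk-b", 0.91)]
```

Real systems add trend analysis on top of this, which is what allows them to predict when a trunk will need expanding rather than merely reporting that it is full.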

What if we make it even more virtual? Finally, we are going to introduce another


disruptive concept in broadcast environments, but quite widespread in the IT world: virtualization. We are not referring to the use of cloud environments (although that too) but to virtual environments. Let us go a bit deeper into it. A virtual IT environment is one in which computing resources do not exist discretely in the way they are used. That is, there is no dedicated processor, RAM, hard disk and motherboard for each machine that is running, but a pool of these that is split or combined in order to virtualize as many machines, whether small or large, as needed within the total limit of available resources. In more pragmatic terms:

there are a number of computers, usually powerful ones, interconnected, with a management system that allows them to be split and combined in order to provide users with a certain number of configurable machines. For those of us who come from a pure broadcast environment, in which machines did what they did and could be physically operated, this is something that requires a second or even a third thought. However, when combined with everything previously seen, this virtualization notion provides us with a level of flexibility, capacity, resource sharing and security unheard of in linear environments, and with an optimization of equipment costs that was just unthinkable a few years ago.

Conclusions AoIP is not only taking our signals from a balanced cable to an Ethernet one, but also breaking with the

traditional concepts of signal processing in order to embrace the great benefits offered by IT environments. It is not easy, especially at a logical and conceptual level, but if we manage to break away from the established paths and adopt these 'new technologies' (which have been around for 30 years) we will realize the great advantages they provide. Special care must be taken with security and planning, since these are distributed systems, more abstract than traditional ones, and any error can be much more costly and far more difficult to correct. However, there are great experts and good practices that our IT colleagues can offer us so that this does not become too overwhelming. Let us not get overwhelmed by everything new, let us go little by little; but if we stay in wonderland, we will discover how far the rabbit hole goes.


The technology behind deepfakes By Alejandro Pérez Blanco Alejandro Pérez Blanco is a VFX artist. In 2018 he began to experiment with the application of Artificial Intelligence in a professional environment. Ever since, he has taken part in about 30 projects in film, TV, internet, advertising and corporate, and he has done all kinds of deepfakes.

A brief introduction Let's imagine that a doctor sets out to create a dictionary of symptoms in order to diagnose any disease. He will collect thousands of clinical pictures and start writing: "If the patient has a fever, loss of appetite and red pimples -> chickenpox"

And so on with each disease. After putting in some effort, he discovers that the job can be made easier by creating a graph of connected dots, thus saving a lot of time and space (Figure 1). But a mathematician comes along, sees his graph and says: “this could be done by a computer. If we convert diagnoses into numbers, the machine can learn through trial and error.” The essence of this system is trying out random connections: when a correct result is achieved, those connections are reinforced; whenever a mistake is made, they weaken and eventually disappear.
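That trial-and-error reinforcement can be sketched in a few lines of Python. This is a made-up toy example, not the doctor's real dictionary: a single artificial "neuron" learns which combination of two symptoms maps to a diagnosis.

```python
# Toy sketch of the trial-and-error idea above: connections are
# strengthened on correct answers and weakened on mistakes.

# (fever, red_pimples) -> chickenpox? (1 = yes, 0 = no)
cases = [((1, 1), 1), ((1, 0), 0), ((0, 1), 0), ((0, 0), 0)]

weights = [0.0, 0.0]  # connection strength for each symptom
bias = 0.0

def predict(symptoms):
    s = bias + sum(w * x for w, x in zip(weights, symptoms))
    return 1 if s > 0 else 0

# Reinforce connections when right, weaken them when wrong.
for _ in range(20):
    for symptoms, target in cases:
        error = target - predict(symptoms)  # 0 when correct
        for i, x in enumerate(symptoms):
            weights[i] += 0.5 * error * x
        bias += 0.5 * error

assert all(predict(s) == t for s, t in cases)  # all cases learned
```

After a handful of passes the connections settle: only the combination of both symptoms triggers the diagnosis, which is exactly the reinforced-connection behaviour described above.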




This is the basis of Machine Learning. Mathematicians began to think about all this when Santiago Ramón y Cajal saw, for the first time in history, neurons through a microscope. The brain was no longer seen as a mysterious thinking machine, but as a network of neurons that, as they are interconnected, are somehow capable of thinking, recognizing the world around them, reaching conclusions and taking action. How this was achieved still remained a mystery, but being able to see the network opened endless doors to progress. Physics, chemistry, medicine... And mathematicians have been trying to translate into numbers everything we gradually learn about neurons. Machine Learning creates algorithms like the ones a mathematician or a computer scientist might work out, but instead of programming them, what ML does is design a digital simulation of a blank brain to see how it fills up with content. Because a

Figure 1

Figure 2. A machine can learn to connect values by giving them different degrees of importance. It is the basis of Artificial Intelligence and we believe that human neurons work in a similar way.

brain is a computer that is not programmed, but trained (Figure 2).

In our example, when the mathematician finishes his system to train his model,


Figure 3. Deep Learning is about placing intermediate layers that we can't understand without a thorough analysis. The machine learns by trial and error and begins to draw conclusions on its own.

it seems to render promising but incomplete results. The doctor examines them, does some tests and notices a problem: "there is no correlation between symptoms," he says. "Weighing 90 kg when you are two meters tall is not the same as when you are one meter fifty, but this scheme does not understand that." The mathematician then thinks about this and finds a way to create a series of dots in the middle and let the trial and error learning scheme trickle down to

the middle layer. "This column of points will serve to create new combinations of symptoms", he replies. But he also warns that “this middle row is very difficult to decipher. It can get it right, it can get it wrong... anything could happen. It can get it right for the wrong reasons, and it can discover new things that you didn't know about. Whenever you get a layer of neurons in between, this electronic brain turns into a black box."

Those dark, deep layers of neurons, which are hard to understand with the naked eye, are known as Deep Learning. And Deep Learning is what most closely resembles the human brain that we have managed to create so far, and this has been done without really understanding how two human neurons connect with each other... But the fact that we can get results from them indicates that perhaps we are on the right track (Figure 3). After many years of testing and setbacks, Artificial Intelligence is now experiencing an unprecedented explosion. This is due to the gradual development of the mathematics behind these neural networks, and also because of Big Data (how to take advantage of the large data collections that different institutions and companies have gathered) and the paths that technology has taken towards parallel processing (3D, video games, mining bitcoins).


Deep Learning Models Science has behaved in quite an exemplary manner in the AI field. Some of the main stakeholders in this area, such as Google, Facebook or NVidia, developed proprietary technologies and released them to be used by everyone free of charge. And the world responded accordingly: universities, research centers, companies and individuals began to publish new ways to connect digital neurons to each other, and to connect them to images and sound, to language, to meteorology, medical diagnostics, the resistance of bridges or protein structure. Then, whoever finds a use for these findings compiles them and develops their own programs. The technology used to analyze the composition of the coronavirus is essentially the same as the one that detects whether you upload a nude to Instagram. Let us see some rather simplified examples of

Figure 4.

Artificial Intelligence models applied to the world of image (Figure 4). The first big hit of today's AI was the detector/classifier. If we connect the pixels of an image to a neural network, add several deep layers to it, and end up with a couple of output options, we can create an image classifier. Dog or cat? Porn or not porn? Benign or malignant tumor? And the choice does not necessarily have to be limited to two options. Detectors have been created with hundreds or even thousands of outputs, capable of detecting all kinds of different categories. Through a structure that

analyzes the differences between adjacent pixels (convolutional neural networks or CNNs), each new layer means a greater degree of abstraction, so that the first layers can detect whether there are lines and corners, then rounded shapes, then circles, and then confirm whether, for example, these correspond to pupils, balls or wheels... A few more layers and the machine will manage to create categories. And always by the same trial and error procedure: if a correct result is achieved, the structure is maintained; if it fails, a change in connections is forced.

Conversely, if we start from a few numbers and set the mission to produce images with them, we can create an image generator based on initial values that we indicate in the first few neurons (Figure 5). But how can a model like this be trained by trial and error? This is where the exciting advances seen in recent years come in. To determine whether a hit or a miss is achieved, a detector like the one we have just seen is added at the end. The detector is fed with real images and with random images from the generator, turning the virtual brain into a cat-and-mouse game. If the detector spots a fake, the generator must then learn. If the forger misleads the detector, it is the detector that modifies its structure. Gradually, through millions of attempts, totally new images are created, with a quality that depends on the quality achieved by both generator and detector (Figure 6).

Figure 5.

Figure 6. Faces generated by the website through a complex generator-detector system called generative adversarial networks (GAN). The most powerful version of this model existing at present was created by NVidia. Despite the enormous progress seen in recent years, some defects in pupils and teeth can still be detected.

Autoencoder: If we combine the ideas of a detector and a generator the other way around, we get a bottleneck-type structure. From an image, information is passed through neurons until a center with few of them is created, and then we force the machine to rebuild it. The detector, in this case, does not follow any criteria we have set (dog class or cat class) but creates its own automatically. The name comes from this idea of creating data encodings on its own. The first part is usually called the encoder and the second the decoder. And little by little the decoder learns to reconstruct the images it is asked for, and also to create new images if the numbers right at the bottleneck are changed (Figure 7).

Figure 7
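The bottleneck idea can be illustrated with a minimal linear autoencoder in NumPy. This is a hedged toy sketch with made-up sizes (real autoencoders use many nonlinear, often convolutional, layers): 8 input values are squeezed through a 2-value bottleneck and rebuilt on the other side.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear autoencoder: encode to a small bottleneck, decode back.
X = rng.random((50, 8))             # 50 toy "images" of 8 pixels
W_enc = rng.normal(0, 0.1, (8, 2))  # encoder weights (8 -> 2)
W_dec = rng.normal(0, 0.1, (2, 8))  # decoder weights (2 -> 8)

def loss(X, W_enc, W_dec):
    rebuilt = (X @ W_enc) @ W_dec   # encode, then decode
    return float(np.mean((rebuilt - X) ** 2))

start = loss(X, W_enc, W_dec)
for _ in range(200):                # trial-and-error training loop
    code = X @ W_enc                # bottleneck values
    rebuilt = code @ W_dec
    err = rebuilt - X               # how wrong the rebuild is
    W_dec -= 0.01 * code.T @ err / len(X)
    W_enc -= 0.01 * X.T @ (err @ W_dec.T) / len(X)

assert loss(X, W_enc, W_dec) < start  # reconstruction improved
```

The network is never told what the bottleneck should contain; it discovers its own compact encoding simply by being forced to rebuild the input, which is the "creates its own criteria" behaviour described above.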

Deepfakes arrive The autoencoder was designed to try to compress information, but then it was seen that it

was no more efficient than formats like JPEG. The problem was that it didn't work well as a general tool, but it did yield acceptable results when working with specific image categories, such as clouds, cars or faces. Ten years ago, this model was being used as an example in Artificial Intelligence courses because it was very simple to understand, but in practice it was not that useful. Until someone came up with a way to alter the model so that it would learn two different faces, and deepfakes were born. In fact, that someone was an anonymous Reddit user whose name was precisely "deepfakes". This is the structure that was proposed (Figure 8).

The encoder learns to read two faces, but at the bottleneck the information is forked into two decoders, so that if face A is fed, only decoder A is trained to rebuild it. The theory is that, with enough training, data such as the direction of the gaze, the turn of the head or the height of the jaw will concentrate at the bottleneck... and that data will be valid for one face and for the other; decoder A reaches different conclusions about "where to place that spot on the cheek if the face is smiling" than decoder B, so that the two become interchangeable. That was the theory. And it did work. The first area where it was massively applied was pornography. A huge market quickly and unexpectedly emerged in which there were people willing to pay for videos according to their interests, as well as many programmers willing to try their luck.

Figure 8.

Little by little, research published by the AI laboratories of technological universities began to be implemented. Currently, the best public-access innovations are being gathered by Ivan Perov in DeepFaceLab, a program that uses Google technology and is available on GitHub, and some studios (us too) have started to develop private solutions to streamline the workflow when faced with the difficulties that are discovered through experience.
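The shared-encoder, two-decoder structure described above can be sketched numerically. This is an illustrative toy, not DeepFaceLab's actual architecture (which uses deep convolutional layers); every size here is a made-up assumption, and no training is shown, only the swap at inference time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy deepfake structure: one shared encoder, one decoder per face.
PIXELS, BOTTLENECK = 64 * 64, 128   # made-up toy sizes

W_enc = rng.normal(0, 0.01, (BOTTLENECK, PIXELS))    # shared encoder
W_dec_a = rng.normal(0, 0.01, (PIXELS, BOTTLENECK))  # decoder, face A
W_dec_b = rng.normal(0, 0.01, (PIXELS, BOTTLENECK))  # decoder, face B

def encode(face):
    # Gaze direction, head turn, jaw height etc. concentrate here.
    return np.tanh(W_enc @ face)

def decode(code, W_dec):
    return W_dec @ code

# Training would update (W_enc, W_dec_a) on face-A images and
# (W_enc, W_dec_b) on face-B images. The swap at inference time:
frame_of_a = rng.random(PIXELS)     # a frame showing face A
code = encode(frame_of_a)           # extract pose and expression
fake = decode(code, W_dec_b)        # ...and render them as face B

assert fake.shape == (PIXELS,)      # same image size as the input
```

Because both decoders learn to read the same bottleneck code, feeding face A's code into decoder B reproduces A's pose and expression with B's identity, which is the interchangeability the text describes.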

The problems of changing a face The goal of big tech companies is to create and sell automated services with a single click. For the time being, their work is focusing on services for casual users who play with their mobile phones, such as Instagram filters, face or fingerprint detectors, camera optimization, photo tagging... Facebook's,

Google's, or Apple's methodologies consist of creating very complex models that require entire buildings filled with computers to train for weeks, and then attempt to make them work universally for all users with a quality that fits with their mobile screen. On the other hand, the latest version of Photoshop has higherquality experimental tools that run AI models from the cloud in order to modify age or gestures in photographs. But with today's existing technology, you cannot 87


create a compelling broadcast-quality deepfake at the push of a button. You have to train your own model, and in order to achieve a good result this training requires a high degree of computing power. NVidia graphics cards measure their capacity in this regard by two main elements: VRAM and CUDA cores. The memory required to work with Artificial Intelligence depends on the image resolution and the number of neurons we want the model to have (generally, the higher the better). For their part, the CUDA cores are responsible for the simultaneous

mathematical calculations of each neuron, and the more we have, the faster the training will go. This is one of the biggest problems facing the technology: the result improves with larger models and higher resolution, but the speed needed by television and the 4K definition that cinema requires are still at the technological limit. A very powerful card will be able to work with small models very quickly or with large models very slowly. In the "Entrevistas por la cara" (Cheeky Interviews) section of the Spanish TV program El Intermedio, typically two weeks elapse

between the recording and the broadcasting of a sketch, depending on the difficulty involved. This means a degree of coordination and foresight that also includes the Costume, Photography and Makeup teams in order to link it properly with the live broadcast, in addition to having to prepare a script that does not expire during that time. At the Filmmaking department each sketch is planned separately, combining the scenery and the camera shots so as to minimize problems that could arise, as for example, that the substitute nose is flatter than the original; or if the background has to be

Figure 9. The Pope's finished deepfake created for the section "Entrevistas por la cara" of El Intermedio.



Figure 10. © HBO Nordic AB Eduard Fernández in the series 30 Monedas, with a 4K master. Six months were required to make 250 rejuvenation takes.

rebuilt when the program's presenter, Gran Wyoming, appears sideways; and said planning always strives to give the performers the utmost freedom during their work (Figure 9). In the series 30 Monedas (30 Coins), involving post-production in 4K, 250 deepfake shots were made to rejuvenate two protagonists during a flashback sequence. Currently there is no universal solution for projects of this size, and different techniques had to be designed for each single deepfake. It took about five months, with some of that time overlapping between

training on one face and post-production of the other face. Fictional work leads to needs that are not found in an HD comedy show, such as returning grain and texture with the exact quality of the original shots and producing a 10-bit logarithmic output ready for color grading. Realism, in humor, can sometimes be sacrificed in favor of comedy. In fiction, that is just unacceptable (Figure 10). In addition, the selection of faces that the model is fed can significantly modify the outcome. If there are only left-facing faces, a right-facing deepfake cannot be

achieved. If there are no faces available in 4K, a 4K deepfake cannot be created. A similar deficiency occurred with the series. Eduard Fernández and Manolo Solo gave us a broad collection of images from their youth, but neither of them quite fitted. With Eduard, given the extraordinary physical change that he developed for the character, his images as a young man did not work, so we worked on a model that would seek an intermediate point between his youth and his physiognomy in the series. As for Manolo, having old images of poorer quality


Figure 11. © HBO Nordic AB Actor and comedian Julian Genisson lent his image to take years off Manolo Solo in 30 Monedas.

and nearly always wearing glasses, it was necessary to do a deepfake with an image stuntman, designing a model that would maintain Manolo's proportions and features as much as possible while keeping the youthful features of the stuntman (Figure 11).

And here comes the other big problem with deepfakes: There is no single result. There is no single way to train. It can be done in many different


ways, either by changing learning algorithms or by altering the collection of images being fed. And the results can be better or worse, and even cause rejection problems in viewers. The human brain tends to get used to imperfections as it deals with them. For VFX technicians, this creates a very complex compromise, because on the one hand they must trust their own judgment and on the other

be wary as they dive into the myriad of images they face. For projects of this magnitude, it is necessary to design a workflow that emphasizes supervision, so that colleagues involved in the project who are not used to these images can contribute their criteria before presenting them to the director who, being in possession of the original vision of the work, but at the same time dividing his attention between all the post-production

departments, becomes the perfect judge. Training AI is a trade in itself, almost a craft, of learning how to ask the machine for what we want from it. It is vital to spend time investigating neural networks and gaining experience, because our natural brains must also be trained through trials, attempts and many mistakes in order to develop an intuition about what to expect from artificial brains.

Parody of a carol by Raphael (Spanish singer) for El Intermedio. The original, on the left, becomes Wyoming's face by using different techniques. The training system and preparation of the sources can lead to results within an immense range, from aberrant to correct, or even improve some aspects of the original image.



IP-Ready solutions: 4 interesting options

Nevion VideoIPath

Nevion's VideoIPath convergent orchestration and SDN control software hides the complexity of media networks, enabling production staff to stay in control of their production workflows: within facilities (LAN), across locations including remote and distributed production (WAN), and beyond (5G and cloud). Built on industry standards (SMPTE ST 2110 and NMOS), and interoperable with other vendors' equipment (e.g. Arista, Cisco, Mellanox, Sony and more), Nevion VideoIPath offers connection management, workflow planning, maintenance planning, monitoring and analysis, in both IP and mixed IP/baseband environments. While equally suitable for small deployments, Nevion VideoIPath has been proven in real live deployments to scale massively, handling hundreds of thousands of end-points and tens of thousands of simultaneous multi-format media flows. Nevion VideoIPath is used daily for a variety of applications by broadcasters, service providers and other organizations across the world, including BT, BBC, CBC Canada, Discovery, France Télévisions, Optus, Singtel, TV 2 Norway, and many others.

Nevion Virtuoso

Nevion Virtuoso is a standards-based (JT-NM tested), virtualization-ready, software-defined media node that can perform a variety of real-time functions in the converged IP LAN/WAN network. With its functionality easily modified in the field through software, Nevion Virtuoso's versatility enables a faster time-to-production and greater cost-effectiveness. Virtuoso's capabilities include transport adaptation and protection (SDI, SMPTE ST 2022-6 and SMPTE ST 2110); monitoring and switching; video and audio processing (e.g. compression with various codecs such as JPEG XS, JPEG 2000 and TICO, and audio embedding, shuffling, mixing, delay, gain and switching); NMOS compliance; and much more. Virtuoso is in use in hundreds of facility, contribution, remote production, distributed production and terrestrial distribution deployments. Broadcasters, service providers and other organizations across the world, including BT, Discovery, CCTV (China), Riot Games, and many others, rely on Virtuoso for their live productions.


OMS (OMNEO Main Station)

OMS (OMNEO Main Station) and DBP (Digital Beltpack) mark the beginning of a major new product family: RTS Digital Partyline. OMS provides a bridge for legacy analog partyline users who wish to migrate to IP functionality while continuing to use their existing equipment. OMS is an incredibly versatile solution for a wide range of applications: a communications multi-tool for theaters, houses of worship, industrial facilities, broadcast and event venues. Using OMNEO, OMS interconnects with RTS digital intercoms, including the new digital beltpacks, keypanels, and the ROAMEO wireless intercom system. OMS can also serve as a stand-alone base station for ROAMEO and is available in five licensed models: Analog Only, Analog Plus, Basic, Intermediate and Advanced, for increased capacity and functionality as business needs grow.

OMS has the easy-to-use RTS digital icon-based front panel display, along with a simplified menu structure that allows system configuration and control directly from the front panel.

Panasonic KAIROS

KAIROS, the next-generation IT/IP live video production platform, is based on software and runs on CPU and GPU architecture, allowing users to allocate processing power with 100% efficiency and enabling an extremely low system latency of just a single frame. KAIROS is resolution and format independent and supports baseband and IP signals such as SDI, ST 2110 and NDI, in any combination. It also supports PTP (Precision Time Protocol) synchronization. As a native IP system, KAIROS is well suited for remote video production as part of a completely IP-based environment. Through the "KAIROS Alliance Partners" program, Panasonic is establishing KAIROS as the de facto standard for next-generation video production platforms. As well as operating with Panasonic's own broadcast camera systems, projectors and displays, it will integrate and link with other partner products. Using the system, Panasonic is promoting the concept of "Smart Live Production".



adidas Runtastic: Thousands of pieces of audiovisual content for 172 million users

The traditional broadcast world, the one that includes television, has been forced to manage a growing volume of audiovisual assets. Both the explosion of new channels and the structuring of different target audiences have led to ever-increasing efforts by corporations to generate customized, adapted pieces. This trend has also pervaded the applications we find on our mobile devices, which serve as windows to thousands of audiovisual items, in cases like this one created specially for a given purpose. adidas Runtastic, although it started as a small startup, is now a giant in the world of mobile applications. With more than 323 million downloads and 172 million registered users, it generates a significant volume of pieces produced entirely in-house every week. We found out more about how the company manages its media operation thanks to Andrzej Kozlowski, Workflow Manager for Media at adidas Runtastic.

adidas Runtastic is a really popular app with millions of followers around the world. How many people work creating content for the app and related services?

What started as a small start-up in 2009 is now an international company in the digital health and fitness space with more than 280 employees from over 41 different nations. Together we work on creating the best products, content, and experiences. We can proudly say that our products are available in 10 languages. And although we've been working remotely for almost a year now, our amazing company culture hasn't suffered under these circumstances.


What's your particular role in adidas Runtastic, Andrzej?

I've been working as a Digital Media Designer at Runtastic for over 6 years, which is basically an all-round role including video editing, motion graphics, and even some 3D work here and there. However, as our marketing demands and team grew, I was more and more engaged in streamlining and automating processes and workflows. That's what led me to step into my new role one year ago as Workflow Manager for Media. One of my first steps was researching and procuring an Asset Management system that would cover our needs.

What makes adidas Runtastic special compared to other apps in terms of content production and distribution?

We've been a proud part of the adidas family since 2015 and follow the same vision: through sport, we have the power to change lives.
- More than 323 million app downloads.
- With personalized training plans and a commitment to motivate and educate, Runtastic wants to create the best possible running and training experiences for the more than 172 million registered users. The goal is to inspire every individual to live a more conscious and active lifestyle, leading to a longer and happier life.
- We have various in-house experts, such as dietitians and fitness professionals, and as we're part of the adidas family, we have access to amazing adidas athletes (e.g. Haile Gebrselassie, Karlie Kloss).
- Motivating features in our app that make us special.
- From videos and texts to localization and more, everything you can see in our apps and our (social and content) channels is produced in-house. I think that's what makes our company very special: we don't outsource anything to agencies.

Does adidas Runtastic have an on-site studio to produce the content we view through our smartphones? If so, could you tell us more about that setup?

Recently, we've set up a brand new content studio, dubbed "The Foundry", in which we can produce all our content in-house. It is already equipped with state-of-the-art cameras, lights, and revolving sets for our different content styles, and we've had a couple of successful shoots there already. Upgrades and improvements aside, we are planning to introduce live broadcasting in the near future.

We know that adidas Runtastic publishes lots of content via social media. Is there a way to quantify this? How many hours of content (or number of pieces) do you work on monthly?

We are very active on social media: we post content daily on Facebook and Instagram, and weekly on YouTube, most often localized in six languages. Although I don't know the exact number, it's a lot. From the Media Team's perspective, I've noticed a significant increase in the number of shoots we've had over the last couple of years.

adidas Runtastic manages a large number of assets to provide original content to its customers and social media fans. What system do you use to manage these assets?

Until we partnered up with IPV, we'd been struggling with proper asset management. All footage was spread out across network storage and external hard drives, hard to find and hard to screen. With IPV Curator, however, we've found a perfect fit for our needs. Many asset management systems deal with finalized assets, but our requirement was to have a proper overview and catalogue of our raw footage as well, plus support for our editors and designers during the post-production stage. Curator covers all of that.

What software do you use to create the pieces that are then shared by the adidas Runtastic team?

Our main tool is the Adobe Creative Cloud, with Premiere Pro and Photoshop being the main applications, but we also make frequent use of After Effects and Illustrator.

Do you have a unified system within your facilities to produce the pieces?

As far as asset generation goes, especially video post-production, we're relying on IPV Curator's production features. Before that, we had defined processes in place, on paper, but they were far from unified.

Covid-19 has transformed the way media houses work, especially when it comes to remote work. How did you adapt to remote workflows? What system did you use to continue working? Did you use tools in the cloud? Did you opt for a VPN connection?

Adapting to remote workflows was not too complicated from the general standpoint of the company, as we already had everything online (from Google accounts, Wiki, Slack, to VPN and Jira). Why? Because we have 3 offices in Austria (Vienna, Linz, Salzburg) and many employees are able to work from other countries. Therefore, a lot of communication between the offices and amongst the teams was already online before the remote set-up. That made it easy to adapt ourselves to the new circumstances.

However, for our media department it was not that easy. We had to resort to sending footage to our editors via postal service (on hard drives). IPV Curator is game-changing for us in this regard. Not only can our editors stream proxies from the cloud directly into Premiere, we can also conform remotely, as all of our footage is now stored in the cloud. No need for shipping hard drives any more.

On a general company level, we had to find new ways to connect as a team. We have more meetings to see each other virtually, connect with each other via virtual coffee chats or have regular online pub quizzes to support our vivid company culture. We even hired lots of talent (over 50 team members) remotely last year and did the whole onboarding experience virtually.

What's the biggest challenge regarding media at adidas Runtastic?

It was definitely a tedious challenge to have a good overview of our assets and the delivery thereof before we switched to Curator. Now it's very easy. However, our plan to grow content production is an exciting challenge, as well as creating an internal platform to work with and distribute our content, which Curator is a big part of. These challenges are well under way and it's an exciting time of growth for us.

What's the future of media at adidas Runtastic? Will you surprise your followers with Augmented Reality content, for example? Are you currently working on innovative formats for your fans?

One of our more immediate goals is definitely to create more content for our users and followers that is meaningful and helpful, and with the Foundry studio and Curator on our side we're just getting started! But our teams are also always researching innovative formats and concepts, be it in our adidas Running and Training apps or on our blog and social media.





Authors:
Mauricio Alvarez-Mesa, Sergio Sanz-Rodríguez (Spin Digital Video Technologies GmbH, Berlin, Germany)
Maciej Glowiak, Eryk Skotarczak (Poznan Supercomputing and Networking Center - PSNC, Poznan, Poland)
Ravi Velhal (Intel Corporation, Portland, Oregon, USA)

In December 2020 a number of technology companies came together to demonstrate 8K live streaming over the Internet. The demonstration consisted of an end-to-end 8K live video showcase from live production to encoding, streaming and playback on 8K TVs. Intel, NHK Technologies, Poznan Supercomputing and Networking Center – PSNC, Spin Digital, and Globo were supported in the event by Sony, Astrodesign, and Zixi.



The demonstration presented a viable workflow for 8K live streaming over the Internet and included 4 main technology components:
- 8K live production system with multiple cameras and live switching and mixing
- HEVC real-time encoder delivering low-latency, broadcast-grade 8K video quality at low bitrates
- Low-latency live streaming over the public Internet
- 8K HEVC software decoder and media player for 8K TVs with HDMI 2.1 interface

The 8K live TV programme was produced at PSNC's 8K studio in Poznan (Poland), with the video signal encoded using Spin Digital's HEVC encoder and streamed to multiple locations across the world, where it was received live and played on 8K screens. Viewing locations included NHK Technologies in Tokyo,

Figure 1: Global 8K live streaming over the Internet


Globo in Rio de Janeiro, Intel in Portland, Oregon, and Spin Digital in Berlin. The press release of the original event can be found here: ts/global-8k-live-2020/ A video summarizing the demonstration is available on YouTube: watch?v=pPDENsqsNcQ&t=54s

An 8K HEVC-encoded video with the complete program is available for download: rive/u/0/folders/1wrqiwt-nkimh94HzTaegRZ9ZgQZkGLX

This document presents further technical details on the different technologies used for the demonstration and how all the components were integrated into a complete workflow for 8K live streaming. It starts with a historical analysis of 8K live transmissions in order to put the presented demonstration in perspective and to highlight the current technology readiness of 8K live solutions. The demonstration is the result of the partners' multi-year joint R&D efforts on different technologies for 8K, and it is a significant milestone towards the adoption of 8K technologies for large-scale live events.


8K Format: An Historical Perspective

With a total resolution of 7680x4320 pixels, four times the number of pixels per frame of 4K/UHD-1, the emerging 8K video format (also known as UHD-2) is designed to provide a stronger sensation of reality and a considerably more immersive experience, with users being completely absorbed by the audiovisual content (Sugawara and Masaoka 2013). The 8K standard offers not only a higher pixel count but also improved image quality with High Dynamic Range (HDR), Wide Color Gamut (WCG), and optionally High Frame Rates (HFR) for an overall improved visual experience (Putman 2020). Beyond these technical improvements in video quality, recent studies have shown that 8K brings additional benefits to the subjectively perceived quality, such as an increased perception of depth and 3-dimensionality, which results, in general, in an increased sensation of immersion and realness (Masaoka 2013), (Park, Kim, and Park 2019), (Ogura 2018). When used for live applications, 8K aims at providing users with a highly immersive view of live events such as concerts, sports, and lectures occurring at an event site. In addition to traditional broadcasting for home users, 8K live events can be experienced at large-screen immersive spaces, with the aim of sharing a more realistic and emotional experience of the live event with group audiences. The Japanese broadcaster NHK has pioneered the development of 8K, having presented the first satellite and broadband transmissions as early as 2008 (Shishikui, Fujita, and Kubota 2009), and having launched the first 8K broadcasting channel ten years later (Sujikai and Ichigaya 2021). In the meantime, from 2009 to


2017, NHK continued to carry out 8K transmission experiments over satellite, terrestrial, and IP networks (NHK 2019). 8K live pilots were showcased at some of the most important sporting events of the last decade, such as the 2012 Olympics in London (BBC 2012), the 2014 FIFA World Cup in Brazil (Stanton et al. 2015), and the 2016 Olympics in Rio (Kerschbaumer 2016). Apart from NHK, other companies have been performing 8K live trials in the last few years. In 2017 PSNC organized an 8K live streaming demonstration for the TNC17 conference using Sony IP-Live production and a 100 Gbit/s network (Binczewski et al. 2017). In 2018 two demonstrations were performed: one via a DVB-S2X satellite platform by SES, Spin Digital, and Sharp (SES 2018); and another one by TV Globo and Intelsat during the World Cup in Russia (Kurz 2018). In 2019 three 8K live demos were performed

within the context of the EU-funded Immersify project. One of these demonstrations was presented at InterBEE, in which an 8K live program was streamed from Poznan (Poland) to Tokyo (Japan) via the open Internet using HLS (Immersify 2019). In the same year, SES and Spin Digital continued their efforts to validate 8K satellite transmission (SES 2019). France TV, Spin Digital, and Advantech conducted an 8K live streaming test of the Roland Garros tennis tournament via a local IP network (France TV 2019). BT Media also presented one of the world's first live 8K sports broadcasts (Hernandez 2019).

Although the Tokyo Olympic Games, which were to be broadcast in 8K, were postponed by one year due to the COVID-19 pandemic, a few 8K transmission pilots and demos were conducted in 2020. BT Sport and Samsung presented the UK's first public live 8K sports broadcast (Coy 2020). In the Immersify project, a demonstration of 8K live streaming over the Internet from Poznan to Linz (Austria) was realized once again in a different scenario (Immersify 2020). The Spanish broadcaster RTVE tested the 8K broadcasting workflow over the DVB-T2 terrestrial infrastructure (RTVE 2020). And, as presented in this paper, in December 2020, a group of technology companies, including Intel, PSNC, Spin Digital, NHK Technologies, and Globo, organized a global demonstration of 8K live streaming over the Internet (Spin Digital 2020).

Table 1 summarises the most important events related to 8K live transmission, along with the video coding standard and bitrate used for signal distribution.


Table 1. Summary of key 8K live transmission events




End-to-End 8K Live Streaming Workflow

The demonstration presented in December 2020 consisted of an end-to-end 8K live video streaming workflow from live production to playback on 8K screens. This live workflow consists of 4 main components:
1) 8K live production: An 8K live program was produced at PSNC's 8K studio in Poznan using multiple 8K cameras, a live switching system, and 5.1 audio.
2) 8K live encoding: The resulting live 8K program was encoded using a real-time software HEVC encoder

Figure 2: End-to-end 8K live streaming workflow


developed by Spin Digital. The encoder is able to provide broadcast-grade quality in a low-latency configuration at 120 Mbit/s, and the same level of quality was possible at a bit rate of 48 Mbit/s in a high-efficiency encoding configuration.
3) 8K low-latency streaming: The 8K content was distributed to multiple viewing points across the globe using the open Internet. The streaming module of the encoder produced a low-latency stream that was transported over the public Internet using the Zixi protocol.
4) 8K playback: The 8K live stream was played back using a software HEVC decoder and 8K media player developed by Spin Digital. This software media player has been optimized for 8K resolutions and live streaming with low latency.

Figure 2 shows a high-level architecture of the 8K live streaming workflow used for the demonstration, highlighting the aforementioned components: production, encoding, streaming and playback. The next sections provide more detailed information on each of these components of the 8K live workflow.


Figure 3: The 8K live production system including 5.1 live audio

2.1. 8K Live Production

One of the main goals of the demonstration was to create a prototype scenario for real live 8K television production. The basic idea was to use three 8K cameras as well as additional sources of 8K video, graphics and animations that could be overlaid onto the 8K image. The demonstration took place in the New Media Laboratory of the Poznan Supercomputing and Networking Center in Poznan, Poland. The lab and recording studio are both equipped with 8K video equipment, video processors, an 8K display wall (also usable in an 8K 3D configuration) and a sound system, and, importantly for streaming, are connected to the Internet via a 100 Gbit/s broadband network. For the demonstration we used three Sony F65 CineAlta cameras connected via SMPTE fiber to three BPU-8000 Baseband Processor Units (Ichikawa et al. 2014). Each of them produced video at a resolution of

7680x4320 at 59.94 fps with 10-bit depth, which was sent via 16x 3G-SDI links to the heart of the system: two Barco "E2" live units stacked together and working as a unified video processing system. The "E2" video processors were used to switch and mix all video streams. In addition to the 8K camera inputs, they also received graphics from an overlay-graphics server, an 8K playout system, and a computer handling the video-conference interface with remote demonstration participants.


Figure 4: Artist on stage and 8K camera in action at PSNC 8K TV production lab

The overlay-graphics server was running vMix software and produced two synchronised signals: fill (the actual color graphics) and key (premultiplied alpha channel). These two signals were processed and composed in the "E2" processors, allowing animated graphics such as artist signatures, lower thirds, animated partner logos and clocks to be overlaid on the final 8K-resolution program output. There were also two additional signal

sources: one from a PC-based media server running Spin Digital's 8K media player (Spin Player), which provided additional 8K video content, and the other feeding the view of the video conference with project partners at the receiving locations. With these additional sources it was possible both to play back pre-recorded videos in 8K and to include the view of the remote partners. Each video signal was controlled and mixed from

the management console, where the TV switching engineer decided which signals and graphics to show in the final 8K program stream. Since the output of the "E2" video processors was 16x HDMI, while the encoder and streaming server operated on 4x 12G-SDI 2-Sample Interleave (2SI) source signals, a chain of additional converters was needed. As there was no single device in the Lab that


could perform all the necessary conversions, several separate converters had to be used. First, all 16x HDMI signals were converted to 16x 3G-SDI and then to 4x 12G-SDI Square Division (SQD). The conversion between SQD and 2SI was performed on a PC workstation equipped with two Blackmagic DeckLink 8K Pro video cards. Another task of the PC converter machine was embedding SDI audio into the first of the four 2SI output links.
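The difference between the two four-link mappings can be sketched in a few lines. In Square Division each 12G-SDI link carries one 3840x2160 quadrant of the picture, while in 2-Sample Interleave each link carries a full-frame, quarter-resolution sub-image. The simplified model below (treating 2SI as a per-pixel 2x2 phase, whereas the standard actually interleaves sample pairs) shows which link a given pixel travels on:

```python
W, H = 7680, 4320  # 8K raster

def sqd_link(x, y):
    """Square Division: each 12G-SDI link carries one 3840x2160 quadrant."""
    return (1 if x >= W // 2 else 0) + (2 if y >= H // 2 else 0)

def tsi_link(x, y):
    """2-Sample Interleave (simplified): links carry phase-shifted 2x2
    subsamples, so each link sees a full-frame quarter-resolution image."""
    return (x % 2) + 2 * (y % 2)

# The top-left pixel travels on link 0 in both schemes...
print(sqd_link(0, 0), tsi_link(0, 0))              # 0 0
# ...but a bottom-right-quadrant pixel stays on link 3 in SQD,
# while in 2SI its link depends only on its 2x2 phase.
print(sqd_link(7000, 4000), tsi_link(7000, 4000))  # 3 0
```

This is why losing one link degrades a 2SI picture gracefully (every region keeps three quarters of its samples), whereas in SQD a whole quadrant goes dark — and why converting between the two requires reassembling the full raster, as the PC workstation described above does.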

The sound was recorded using a Sennheiser EW 500 G3 wireless microphone system and sent to a Yamaha QL5 Digital Mixing Console. The 6-channel (5.1) audio was then transmitted using the DANTE network protocol to an AES3 converter, and finally embedded in an SDI link and delivered to the converter PC.

2.2. 8K Live Encoding

One of the key components in an 8K live workflow is the distribution encoder. Its main task is to compress the 8K signal at a relatively low bitrate, so that the resulting bitstream can be transmitted over current distribution networks (Internet, satellite, cable, etc.), and to do so at a very high quality in order to ensure the quality of experience expected from an 8K service. Moreover, the encoder has to compress the 8K signal in real time and at low latency in order to guarantee a maximum end-to-end

Figure 5: PSNC control room. Audio and 8K video production



latency for the target live applications. Among the available distribution video coding standards, such as H.264/AVC, HEVC/H.265, AV1 and VVC/H.266, HEVC currently represents the state of the art in UHD (4K and 8K) compression for live workflows. Recent codecs such as AV1 or VVC can provide lower bitrates for 8K video compared to HEVC, but at the cost of a very high computational complexity; currently, there are no solutions available for 4K or 8K live encoding with AV1 or VVC. As a result, HEVC is the best choice for 8K live workflows: it offers very high compression efficiency, and implementations have achieved very high performance for both 8K decoding and encoding in live applications. The licensing issues affecting HEVC, which hampered the initial deployment of the codec, have been addressed by the licensors and are now well understood, at least in the broadcast industry.

Spin Digital has developed a high-quality, high-performance HEVC live encoder, known as Spin Enc Live (Spin Digital 2020), which is tailored to high-end video applications such as 4K and 8K broadcasting and live streaming using a software CPU-based solution. The encoder is based on Spin Digital's advanced decision algorithms for fast and high-quality encoding, combined with a highly optimized software implementation for Intel Xeon Scalable processors, in order to achieve a very high level of quality and performance. The encoder includes SIMD processing using Intel's AVX-512 instructions and Intel DL Boost technologies, and scalable multithreading for systems with hundreds of CPU cores. As a result, Spin Enc Live is able to produce quality and compression levels similar to best-in-class offline encoders (typically used for VoD)

while, at the same time, providing the performance required for processing 8K signals at 60 frames per second in real time. Spin Enc Live can be used in high-efficiency and low-latency configurations. The high-efficiency configuration provides the highest quality and compression at the expense of higher latency, and is typically aimed at broadcast and Internet distribution applications. Subjective tests have shown that 8Kp60 broadcast quality can be achieved at a bitrate of 48 Mbit/s (Sanz and Nikrang 2020). The low-latency configuration guarantees a minimum end-to-end latency of 750 ms (assuming an ideal transmission channel); in order to reach a quality similar to the high-efficiency configuration, the bitrate has to be increased to 120 Mbit/s. The low-latency configuration is aimed at broadcast contribution and immersive live experience applications.
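To put these bitrates in perspective, a back-of-the-envelope calculation (assuming 10-bit 4:2:2 sampling and ignoring blanking) shows the compression factors involved:

```python
W, H, FPS = 7680, 4320, 60
BITS_PER_PIXEL = 20  # 10-bit 4:2:2: 10 bits luma + 10 bits half-rate chroma

# Uncompressed 8Kp60 video bitrate
raw_bps = W * H * FPS * BITS_PER_PIXEL
print(f"raw 8Kp60: {raw_bps / 1e9:.1f} Gbit/s")  # ~39.8 Gbit/s

# Compression ratios implied by the two encoder configurations
for name, mbps in [("low-latency", 120), ("high-efficiency", 48)]:
    print(f"{name}: {mbps} Mbit/s -> {raw_bps / (mbps * 1e6):.0f}:1")
```

Even the 120 Mbit/s low-latency configuration therefore represents a compression ratio of roughly 330:1, and the 48 Mbit/s high-efficiency configuration around 830:1.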


Figure 6: System diagram of the Spin Enc Live 8K real-time encoder

In addition to the HEVC encoder, Spin Enc Live provides the I/O modules needed for complete live applications, such as 12G-SDI video (and audio) capture, TS over IP (TSoIP), and chunk-based live streaming (HLS). Figure 6 shows a high-level view of the encoder architecture used during the 8K live demonstration. The 8Kp60 production signal was sent via 12G-SDI to the encoder, which was located at PSNC's Media Lab. The low-latency configuration of the encoder was used to compress the video at 120 Mbit/s in broadcast-grade quality. The encoder was also tested using its high-efficiency configuration, achieving the same level of quality at a bitrate of 48 Mbit/s. The 5.1-channel audio signal was encoded in AAC-LC at a bitrate of 384 kbit/s. The encoded audiovisual signal was encapsulated using MPEG2-TS and RTP and streamed over the Internet using the Zixi protocol, as described in the next section.
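The MPEG2-TS encapsulation mentioned here splits the stream into fixed 188-byte packets, each beginning with a 0x47 sync byte and carrying a 13-bit packet identifier (PID). As an illustration of the layer's simplicity, a minimal parser for the 4-byte TS packet header (field layout per ISO/IEC 13818-1) could look like this:

```python
def parse_ts_header(packet: bytes) -> dict:
    """Parse the 4-byte header of a 188-byte MPEG2-TS packet."""
    if len(packet) != 188 or packet[0] != 0x47:
        raise ValueError("not a valid TS packet")
    b1, b2, b3 = packet[1], packet[2], packet[3]
    return {
        "payload_unit_start": bool(b1 & 0x40),   # start of a PES packet
        "pid": ((b1 & 0x1F) << 8) | b2,          # 13-bit packet identifier
        "adaptation_field": bool(b3 & 0x20),     # e.g. carries PCR samples
        "has_payload": bool(b3 & 0x10),
        "continuity_counter": b3 & 0x0F,         # detects lost packets
    }

# A hand-built null packet (PID 0x1FFF) for demonstration.
null_pkt = bytes([0x47, 0x1F, 0xFF, 0x10]) + bytes(184)
print(parse_ts_header(null_pkt))
```

The continuity counter and PID are what allow the receiver described in the next section to detect losses per elementary stream, while the adaptation field carries the PCR clock samples used for synchronization.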

2.3. 8K Live Delivery over IP Networks

One of the objectives of the 8K live streaming demonstration was to deliver the 8K live signal over the public Internet without using expensive dedicated networks. This required the use of state-of-the-art live encoders capable of reducing the bitrate to levels acceptable for current Internet bandwidths, as shown in the previous section. In addition, the 8K live signal had to be provided at the lowest possible latency in order to enable a sense of participation from the remote audiences in the live event. Combining low-bitrate Internet distribution and low-latency streaming required a carefully designed interaction between the live encoder, the streaming module, and the decoder at the receiving end.


Encoder streaming module: TS over IP

The encoder includes a streaming module that encapsulates the encoded audio and video signal using MPEG2-TS and produces an IP (UDP or RTP) stream for low-latency delivery over IP networks (TSoIP). In addition to encapsulation, MPEG2-TS provides information to enable synchronized decoding of multimedia information over a wide range of reception conditions. The method of synchronization in MPEG2-TS is based on the System Target Decoder (STD). The STD is a buffer management model used by encoders and decoders for interoperability. It specifies the decoder buffer sizes to hold coded media data, the time when a frame can be removed from the buffer and decoded, as well as any reordering before display. The STD is very similar to HEVC's hypothetical reference decoder (HRD) model and preserves the constraints given by the HRD model on picture timing, buffer size and buffer handling (Schierl et al. 2012).

The Spin Enc Live streaming module produced an HEVC video (and AAC audio) stream compliant with the STD and HRD models given certain buffer constraints. For the low-latency application scenario used in this demonstration, the HRD buffer was set to 200 ms (for the high-efficiency configuration, the HRD buffer was set to 1000 ms).

Loss recovery and low-latency retransmission using Zixi

Delivering MPEG2-TS or RTP packets directly over the public Internet would result in significant quality degradation due to packet losses. Using TCP-based protocols such as RTMP or HTTP (DASH, HLS) would guarantee correct delivery, but at the expense of very high latency. A third option is to use protocols that support Automatic Repeat reQuest (ARQ), such as Zixi, RIST or SRT (Wånggren and Thyresson 2019). For the 8K live streaming demonstration we

selected the Zixi protocol. All of these protocols address two main requirements for low-latency streaming over the Internet:
- Loss recovery: using ARQ techniques to recover packets lost during Internet transmission.
- Low latency: performing the selective retransmission in a low-latency manner.

The MPEG2-TS stream produced by the encoder was transmitted over the Internet using the Zixi protocol in a proxy configuration. A Zixi sender was connected to the encoder and a Zixi receiver to the decoder. The latency and buffering of the Zixi sender and receiver were configured to ensure reliable transmission at the lowest latency possible (see the section on latency analysis).

Low-latency streaming player

On the receiver side, the Zixi receiver provided the 8K decoder with an MPEG2-TS stream free of


packet losses. The decoder then used the MPEG2-TS time stamps to synchronize with the encoder at the lowest possible latency.
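The ARQ mechanism shared by Zixi, RIST and SRT can be illustrated with a simplified sketch (not the Zixi protocol itself; class names are invented for illustration): the receiver reports sequence-number gaps, and the sender retransmits only the missing packets from a bounded history corresponding to the configured latency window.

```python
from collections import deque

class ArqSender:
    def __init__(self, history=256):
        self.buffer = {}          # seq -> packet, bounded history
        self.order = deque()
        self.history = history

    def send(self, seq, packet):
        self.buffer[seq] = packet
        self.order.append(seq)
        if len(self.order) > self.history:
            self.buffer.pop(self.order.popleft(), None)
        return packet

    def retransmit(self, seq):
        return self.buffer.get(seq)  # None if outside the latency window

class ArqReceiver:
    def __init__(self):
        self.received = {}

    def on_packet(self, seq, packet):
        self.received[seq] = packet
        # NACK every gap below the highest sequence number seen so far.
        top = max(self.received)
        return [s for s in range(top) if s not in self.received]

# Simulate: packet 2 of 0..4 is lost in transit.
tx, rx = ArqSender(), ArqReceiver()
nacks = []
for seq in range(5):
    pkt = tx.send(seq, f"frame-{seq}".encode())
    if seq != 2:
        nacks = rx.on_packet(seq, pkt)
print(nacks)                       # [2]
rx.on_packet(2, tx.retransmit(2))  # recovered from sender history
print(sorted(rx.received))         # [0, 1, 2, 3, 4]
```

The size of the sender history and the receiver's willingness to wait for retransmissions are exactly the latency/reliability trade-off that the Zixi sender and receiver buffers were tuned for in this demonstration.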

2.4. 8K Decoding and Playback

The 8K live stream was played back using Spin Digital's software media player, Spin Player. A new version of the media player, specially designed for low-latency 8K live streaming, was used. The media player consists of three main components:

- Streaming reception: The source module receives the encoded TS stream, demuxes the audio and video streams, and estimates the master clock from the timestamps present in the MPEG2-TS stream. This master clock is then used for the remaining player tasks, such as decoding and rendering.
- HEVC decoding: Spin Digital's optimized HEVC decoder was used for decoding. The decoder is a software solution optimized for recent CPUs: 8Kp60 at 120 Mbit/s can be decoded in real time on a PC system with at least 10 high-performance x86 64-bit CPU cores.
- Rendering: The media player includes a flexible rendering and output module with support for GPUs and SDI devices. Depending on the 8K screen, different interfaces can be used, such as 4x HDMI 2.0 for legacy devices, HDMI 2.1, which supports 8Kp60 over a single cable, and 12G SDI for professional players requiring maximum quality.
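As a quick sanity check on the decoding figure quoted above (8Kp60 in real time on at least 10 high-performance cores), the raw pixel throughput can be worked out directly from the 8K frame dimensions:

```python
# Back-of-the-envelope check of the decoder throughput quoted in the text:
# 8Kp60 (7680x4320 at 60 fps) decoded in real time on 10 CPU cores.

width, height, fps = 7680, 4320, 60
cores = 10

pixels_per_second = width * height * fps   # 1_990_656_000, i.e. ~2 Gpixel/s
per_core = pixels_per_second / cores       # ~199 Mpixel/s per core

print(f"total: {pixels_per_second / 1e9:.2f} Gpixel/s")
print(f"per core: {per_core / 1e6:.0f} Mpixel/s")
```

Roughly 200 Mpixel/s has to be sustained per core, which illustrates why a heavily SIMD-optimized software decoder and recent CPUs are required.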

Figure 7: 8K media players with support for different 8K AV interfaces



The stream reception, decoder, and renderer were configured for low-latency operation by minimizing the input buffering, the number of frames in flight, and the rendering buffers. Low-latency operation depends on tight coupling between the encoder and decoder, which was guaranteed by the low-latency streaming and selective retransmission implemented in the Zixi protocol. Figure 7 shows the player configurations with the different interfaces used during the demonstration.
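The selective-retransmission mechanism that makes this tight coupling possible can be sketched in a few lines. This is not Zixi's actual implementation, only a minimal NACK-based ARQ model under assumed names: the receiver tracks sequence numbers, reports only the gaps back to the sender, and delivers packets to the decoder strictly in order.

```python
# Minimal sketch of NACK-based selective retransmission (ARQ), the general
# technique used by protocols such as Zixi, RIST and SRT. Class and method
# names are hypothetical; real protocols add timers, pacing, and FEC.

class ArqReceiver:
    """Tracks sequence numbers and requests only the missing packets."""

    def __init__(self):
        self.received = {}       # seq -> payload
        self.next_expected = 0   # first seq not yet delivered in order

    def on_packet(self, seq, payload):
        """Store an arriving packet and return the current gaps (NACKs)."""
        self.received[seq] = payload
        # Every sequence number below the highest seen that has not arrived
        # is a loss candidate and is asked for again (selectively).
        highest = max(self.received)
        return [s for s in range(self.next_expected, highest)
                if s not in self.received]

    def deliver_in_order(self):
        """Hand contiguous packets to the decoder, keeping latency low."""
        out = []
        while self.next_expected in self.received:
            out.append(self.received.pop(self.next_expected))
            self.next_expected += 1
        return out

# Packets 0, 1, 3 arrive; 2 is lost, NACKed, and then retransmitted.
rx = ArqReceiver()
rx.on_packet(0, b"p0")
rx.on_packet(1, b"p1")
nacks = rx.on_packet(3, b"p3")   # -> [2]
rx.on_packet(2, b"p2")           # sender retransmits packet 2
ordered = rx.deliver_in_order()  # -> [b"p0", b"p1", b"p2", b"p3"]
```

Because only the missing packets are resent, the extra delay is bounded by one round trip per loss rather than by TCP-style full in-order retransmission, which is what keeps the end-to-end latency low.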


Latency Analysis

By combining the low-latency configuration of the encoder with the low-latency settings of the transport protocol, it was possible to transmit the 8K live video program with a glass-to-glass latency between 1 and 2 seconds, depending on the location. The end-to-end latency from Poznan to the different playback locations across the world was measured using synchronized clocks. Table 2 shows the end-to-end latency measurements and the contribution of the different components in the live workflow. The latency contributions are divided into three groups:

- Live production: includes the 8K camera processing, live switching, and HDMI and SDI conversions.

Table 2. End-to-end latency contributions and measurements


- Encoder + playback: includes SDI capture, HEVC encoding, MPEG2-TS muxing, demuxing, decoding, and rendering.
- Transmission (Zixi + network): includes the network delay as well as the selective retransmission performed by the Zixi protocol.
- End-to-end measured: the total latency measured using the synchronized clocks. It is approximately equal to the sum of the three previous components.

As shown in the table, the main contributors to the overall latency are the encoding and decoding processes. The playback system installed in Berlin


Table 3. Summary of the technical specifications of the demonstration.

was able to reduce the playback latency by 50 ms by using an SDI renderer instead of the GPU-based renderer used in the other players. The transmission delay, as expected, depended strongly on the physical distance between origin and destination, the lowest being from Poland to Germany (185 ms) and the longest from Poland to Brazil (1251 ms).
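The additivity of the latency components can be illustrated with a small budget in the spirit of Table 2. Only the transmission figure (185 ms to Berlin) is quoted in the text; the production and encode/playback numbers below are hypothetical placeholders chosen to land inside the reported 1-2 s glass-to-glass range.

```python
# Illustrative glass-to-glass latency budget. Only transmission_berlin (185 ms)
# comes from the text; the other two values are assumed for the example.

budget_ms = {
    "live_production": 150,      # cameras, switching, conversions (assumed)
    "encode_playback": 900,      # capture, HEVC encode/decode, render (assumed)
    "transmission_berlin": 185,  # Zixi + network, Poland -> Germany (from text)
}

glass_to_glass = sum(budget_ms.values())
print(f"estimated glass-to-glass: {glass_to_glass} ms")  # 1235 ms
```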

The encoder was also configured in high-efficiency mode at 48 Mbit/s. As mentioned before, this mode allows for higher compression but requires higher encoding latencies. The obtained end-to-end latencies were: 3591 ms to Berlin, 4285 ms to Tokyo, and 3922 ms to Portland.


Technical Specifications

In Table 3 we summarize the main technical specifications of the signal and systems used in the demonstration.




Conclusions

In this paper we presented the technical details of the workflow used for an end-to-end 8K live video streaming showcase, including a detailed description of the live production, real-time 8K encoding, low-latency Internet streaming, and playback on 8K TVs. 8K live streaming was first demonstrated by NHK in 2008 using a dedicated network and an MPEG-2 codec at 600 Mbit/s. Twelve years later, 8K live streaming was performed over the public Internet using a state-of-the-art HEVC encoder at 120 Mbit/s. We presented all the components needed for a real 8K live production, starting from an 8K live production in a TV studio with multiple 8K cameras, graphics overlays, live switching, and live audio production. The live feed was compressed using a state-of-the-art HEVC software encoder to reduce the bitrate

needed for 8K live applications, making possible the delivery of broadcast-grade quality video at low latency. An optimized streaming module based on TS over IP (TSoIP) and an Automatic Repeat reQuest (ARQ) protocol, Zixi, was carefully integrated with the encoder and decoder. This allowed us to recover from packet losses during Internet transmission and to provide low-latency streaming over the public Internet. A software decoder and media player optimized for live, low-latency streaming was used for 8K video playback, connecting to 8K screens through multiple interfaces. Low-latency, high-quality 8K live streaming over the public Internet opens possibilities for new immersive live

experiences, in which events at venue locations can be shared with audiences around the world in a way that gives remote viewers a stronger sense of shared experience and connection with the performers. We will continue the development of the production, encoding, streaming, and playback systems, as well as performing pilots, tests, and validations, with the objective of making 8K live streaming a practical reality in applications such as live sports, concerts, lectures, and other areas where this new format can offer a new and more immersive experience to end users.




References

- Balderston, Michael. 2020. “Zixi, AWS Collab on 4K and 8K Tests Over 5G.” TV Technology.

- BBC. 2012. “The Olympics in Super Hi-Vision.” BBC Research & Development. velopment/2012/08/the-olympics-insuper-hi-visio.shtml.

- Binczewski, A., M. Glowiak, M. Stróżyk, and T. Szewczyk. 2017. “8K live streaming using IP Live in 100G SDN network on distance over 1.000 km.” PSNC Technical Note, (June).

- Coy, Kevin. 2020. “BT Sport And Samsung Demo First Live 8K Sports Broadcast.” News on News.

- France TV. 2019. “8K Experiment at Roland-Garros 2019.” France TV Lab.

- Hara, S., A. Hanada, I. Masuhara, T. Yamashita, and K. Mitani. 2018. “Celebrating the Launch of 8K/4K UHDTV Satellite Broadcasting and Progress on Full-Featured 8K UHDTV in Japan.” SMPTE Motion Imaging Journal 127, no. 2 (March): 1-8.

- Hernandez, Kristian. 2019. “IBC 2019: BT Media Conducts one of the World's First Live 8K Sports Broadcasts.” Sports Video Group.

- Ichikawa, K., S. Mitsuhashi, M. Abe, A. Hanada, M. Kanetsuka, and K. Mitani. 2014. “Development of Super Hi-Vision (8K) Baseband Processor Unit ‘BPU-8000’.” SMPTE 2014 Annual Technical Conference & Exhibition, 113.

- Immersify. 2019. “8K Live Streaming from Poland to Japan at Inter BEE 2019.” Immersify Website.

- Immersify. 2020. “Final Immersify Demonstration in Poznan and Linz.” Immersify Project Website.

- Kerschbaumer, Ken. 2016. “Live From Rio 2016: NHK Takes 8K Production to New Level.” Sports Video Group.

- Kurz, Phil. 2018. “Intelsat, Globo Team Up for 8K World Cup Demo At Rio Museum.” TV Technology. -globo-team-up-for-8k-world-cupdemo-at-rio-museum.

- Masaoka, K. 2013. “Sensation of Realness From High-Resolution Images of Real Objects.” IEEE Transactions on Broadcasting 59, no. 1 (March): 72-83. 10.1109/TBC.2012.2232491.

- NHK. 2019. “An Introduction to NHK's 8K UHDTV (Super Hi-Vision) Research and Development Process.” NHK BS4K8K.

- Ogura, T. 2018. “Advantage of 10,000 cd/m2 with 8K Full-Spec HDR TV.” International Display Workshop IDW 2018, (December).

- Park, D., Y. Kim, and Y. Park. 2019. “Hyperrealism in Full Ultra High-Definition 8K Display.” SID Symposium Digest of Technical Papers 50, no. 1 (June): 1138-1141. 10.1002/sdtp.13130.

- Priestley, Jenny. 2021. “Telestream Monitors Live 8K UHD TV Tests over 5G.” TVB Europe.

- Putman, P. H. 2020. “What's All the Buzz About 8K Video.” SMPTE Motion Imaging Journal 129, no. 6 (July): 12-14. 10.5594/JMI.2020.2995253.

- RTVE. 2020. “La Cátedra RTVE en la UPM Presenta a Nivel Mundial la Primera Emisión Piloto de Señal UHD 8K en DVB-T2.” RTVE Website.

- Sanz, Sergio, and Ali Nikrang. 2020. “Report on QA and Content Preparation Guidelines.” Immersify Project Website. s-reports/qa-and-contentpreparation/.

- Schierl, T., M. M. Hannuksela, Y. Wang, and S. Wenger. 2012. “System Layer Integration of High Efficiency Video Coding.” IEEE Transactions on Circuits and Systems for Video Technology 22, no. 12 (December): 1871-1884. 10.1109/TCSVT.2012.2223054.

- SES. 2018. “SES Showcases its First Broadcast of 8K Television.” SES Website.

- SES. 2019. “Samsung, Spin Digital and SES Showcase 8K Content via Satellite.” SES Website.

- Shishikui, Y., Y. Fujita, and K. Kubota. 2009. “Super Hi-Vision Demos at IBC2008.” EBU Tech.

- Spin Digital. 2020. “Whitepaper: HEVC Real-time Software Encoder for 8K Live Video Applications.” Spin Digital News & Tech Blog.

- Spin Digital. 2021. “Joint Press Release: Global 8K Live Streaming Showcase 2020.” Spin Digital News.

- Stanton, Michael, Leandro Ciuffo, Shinichi Sakaida, Tatsuya Fujii, Hiroyuki Kimiyama, Junichi Nakagawa, and Hisao Uose. 2015. “8K Live Television Coverage of Global Sports Events in Brazil.” TNC 2015 Porto, (June).

- Sugawara, M., and K. Masaoka. 2013. “UHDTV Image Format for Better Visual Experience.” Proceedings of the IEEE 101, no. 1 (January): 8-17. 10.1109/JPROC.2012.2194949.

- Sujikai, Hisashi, and Atsuro Ichigaya. 2021. “Source Coding and Transmission Technology of 4K/8K UHDTV Satellite Broadcasting.” IEICE Communications Society Global Newsletter 45, no. 1 (March): 1-12.

- Wånggren, M., and L. Thyresson. 2019. “Live Cloud Ingest Using Open Alternatives RIST & SRT.” SMPTE 2019, 1-25. 10.5594/M001874.