TM Broadcast International #102, February 2022





EDITORIAL In this issue we wanted to take a closer look at the development that audiovisual content production is undergoing. For this reason, we have contacted ITV, one of the main television networks in the United Kingdom, with the aim of getting to know the details of one of the longest-running productions on Western television: Coronation Street. In 60 years of broadcasting, a great deal has happened! However, our sector is undergoing an intense and increasingly rapid evolution. The means of production are changing, as they are in full transition towards the workflows of the future, and, along with them, all the other stages involved in the process of creating and broadcasting audiovisual content are also being transformed. Coronation Street is old, but it is in great shape, and all the years that this soap opera has been running have not been an impediment to embracing IP systems, remote graphic editing, offshore production and even flirting with the idea of virtual production and 5G. The present of our industry is exciting indeed. The processes that are changing a format as popular as Coronation Street are also affecting the European public broadcasting arena. We have interviewed SRG SSR, the Swiss public broadcaster, about their present and future, and they have assured us that they are working on digitization above everything else. What can make a public service that has broadcast over traditional media throughout its existence rethink this way of working to prioritize digital services? Well, of course, their audience, which is more used today to consuming content on demand through digital platforms than to turning on their TV sets and watching programs in the traditional fashion. In a country with such mountainous terrain and four national languages spoken, the digital transition promises to be effective and efficient. In the search for efficiency and effectiveness in which both ITV and SRG SSR are immersed, we want to look to the future. And in this issue that we bring to you, our contributors have thoroughly researched techniques and features that our industry has in store for us. Some of them have already been implemented in our daily lives, such as the production of news and entertainment content through smartphones, and others are about to be implemented and will further transform our way of moving towards digitization, such as the JPEG XS standard.

Editor in chief Javier de Martín

Creative Direction Mercedes González


Key account manager Susana Sampedro

Administration Laura de Diego

Editorial staff

Published in Spain

ISSN: 2659-5966

TM Broadcast International is a magazine published by Daró Media Group SL Centro Empresarial Tartessos Calle Pollensa 2, oficina 14 28290 Las Rozas (Madrid), Spain Phone +34 91 640 46 43




NEWS SRG SSR Public service content in four languages SRG SSR is Switzerland’s public television company. But this simple definition does not do justice to the activities it has developed throughout its history, nor to the plans that this corporation (which employs around 7,000 people, offers a public service and strives to find the best way to reach every Swiss person) intends to achieve in the future.


“Coronation Street” The technological present of the oldest soap opera in the world Coronation Street is one of the most beloved shows in England, “from the professionals who make it all the way up to the Queen,” admits our interviewee Gary Westmoreland, Director of Technology, Drama & Continuing Drama at ITV Studios, in charge of the technological strategies of soap operas like Emmerdale or Coronation Street itself. How have they managed it?



JPEG XS, or should we say XL? The JPEG XS standard, specifically ISO/IEC 21122, is a codec developed by the Joint Photographic Experts Group (JPEG), as its name suggests, which promises to deliver interoperability, low latency with no visual loss of quality, and very low processing requirements for high-quality formats.


Cinema Smartphone Audiovisual creation at your fingertips


Taking off with Cloud in 2022: Thriving in an evolving mediascape


Elevating live production with 5G By Net Insight


“How to Mojo” by Ivo Burum (Part 1) What is mobile journalism and is my smartphone mojo-ready?



Blustream launches SDVoE IP500UHD-TZ to address the need for uncompressed 4K video delivery Blustream has recently announced a new addition to its video-over-IP family: the SDVoE IP500UHD-TZ 4K multicast UHD transceiver and ACM500 control module. The new device is already on the market and delivers uncompromised 4K 60Hz 4:4:4 video over a 10Gb network with low latency and seamless switching. The ACM500 Advanced Control Module operates with all major control systems and delivers uncompromised performance in systems of all sizes.

Blustream introduces the SDVoE IP500UHD 4K multicast UHD transceiver, delivering uncompromised, zero-latency 18Gbps UHD video over a 10Gb network.

The Blustream IP500UHD-TZ supports the HDMI 2.0 18Gbps specification, encompassing distribution of HDR, 10- and 12-bit color content including Dolby Vision and multi-channel HD audio signals, including Dolby TrueHD and Atmos, DTS-HD Master Audio, and DTS:X. It also provides features including an industry-leading 1Gb Ethernet pass-through, and transmission of multiple control and data signals (IR, RS-232, and USB KVM) concurrent with audio and video. It supports PoE+. The solution is designed for 18Gbps HDMI installations where multiple transceivers can be combined with one or more 10GbE copper/fiber switches to form a distributed video matrix, multi-viewer, or video wall.


“Never before has the need for flawless, uncompromised AV over IP UHD signal distribution been as important as it is today,” said Martyn Shirley, Blustream General Manager. “The IP500UHD-TZ 4K transceiver addresses this challenge head-on. It’s a simple-to-install yet feature-rich and flexible solution that can be configured as a point of transmission or as a receiver to deliver uncompressed 4K ultra high-definition video over the network infrastructure.”




Bridge Technologies upgrades ASI Monitoring with VB246 Expanding on the abilities of the existing VB242, the new VB246 module is designed to accommodate customers who require higher ASI density by providing six ASI inputs in parallel. Monitored concurrently and with continuous ETR290 analysis, the VB246 represents an invaluable tool for ensuring the maintenance of optimum broadcast quality and transmission.

This new release represents a significant leap in ASI monitoring capacity from the original VB242, moving from round-robin, sequential monitoring of six channels to continuous monitoring of the same. This means that in a typical 1RU chassis, two VB246 units could be integrated to achieve monitoring over 13 channels in parallel (with the 13th channel constituting an ASI input from either VB120 or VB220 controllers).

The VB246 also represents an expansion of the standards and functionalities of the original VB242 blade, supporting DVB-ASI according to EN 50083-9 Annex B, Burst mode, Spread mode and Legacy M2S, as well as accommodating both 188-byte and 204-byte packet formats. The net result is significantly increased performance and stream count for broadcasters who work with high-density ASI transports, allowing for more detailed performance metrics to be tracked on a continuous basis, thus delivering more in-depth, insightful and usable data on network performance.

These metrics can be viewed flexibly, either as part of a wider system via the VBC Controller, as a stand-alone unit using a regular web browser (allowing for fully remote monitoring), or even through a third-party management system. In this way, users are able to integrate monitoring in a manner which best suits their operational workflows.





BBC Sport relies on Vizrt’s graphics solutions to create a virtual environment and bring Olympic sports to the audience

Vizrt is working with BBC Sport to deliver a virtual set graphics ecosystem in Salford to cover the Beijing Winter Games. BBC Sports uses Vizrt’s Fusion Keyer, all powered by the company’s virtual set controller, to create a system that gives sports fans access to the sports action unfolding in China. One of the challenges the BBC needed to solve with this hybrid rendering


pipeline of Viz Engine 4 in conjunction with Unreal Engine 4 was to enable an easy-to-manage workflow and streamline content changes. The Viz Engine offers integration of the Unreal Engine and the Viz Arc control application. “Prior to the summer, we converted a small studio space at Media City into a green screen area with a virtual design (by Jim Mann and Toby

Kalitowski) and enhanced rendering technology to deliver an immersive, enhanced experience for audiences. The studio, with five different presenting positions that can house a variety of sports output, will be a key presentation location for BBC Sport this year, including the Winter Games for which we have added more design and development,” states John Murphy, Creative Director


and Head of Graphics for Sport at the BBC.

“2022 is not just the year of sport for the industry, but it’s the year of sport for Vizrt. We’re ramping up our offering across sports to support our customers in achieving their production needs; BBC Sport is a great example of that. We were honored to participate in last year’s celebration of the UEFA Euro 2020 championships and the long-awaited Summer Games, and for us, the BBC sets the standards in sports both in looks and functionality across the market,” remarks Gerhard Lang, CTO for Vizrt.



Northern Ireland’s Accidental Theatre relies on LiveU to expand its streaming capabilities Accidental Theatre, a charitable organisation based in Belfast, Northern Ireland, has recently implemented LiveU’s multi-camera solution to provide live streaming capabilities with the aim of extending the reach of its performances on its website. Accidental Theatre is a non-traditional arts company that provides creative opportunities and performance space to artists. Its Artistic Director, Richard Lavery, explained that “We had always designed this building to be very internet accessible. Prior to the pandemic we were already streaming a couple of shows a month, usually in quite a low-key way, in terms of both promotion


and the technology we used. Then the pandemic hit and we, of course, had to close every in-person aspect of our business. When we realised this would be for longer than we initially thought, we started to reach out to festival organisers, other artists and businesses to explain what we were planning to do in terms of live streaming and new possibilities.” Accidental Theatre expanded its live streaming capabilities, turning to LiveU to do so. Lavery and his team created a control room and editing space in the theatre and two small studios in what were the bank vaults. They also selected and installed a Blackmagic ATEM production system and moved from a two-camera setup to a six-camera one, including multiple PTZ cameras. LiveU’s LU800 unit is at the core of the live streaming setup and is connected to three PTZ cameras that can either be controlled onsite or remotely. On top of that they use an audio interface that takes a mix out of the desk – or they can build their own mix – into an audio embedder into one of the camera feeds that then comes back to a LiveU server in the control room where they do the final mix. They have one or two operators onsite and one or two in the control room.


Globecast and Arabsat launch TiVi5MONDE across the MENA region in SD and HD Globecast and Arabsat have announced that TiVi5MONDE has chosen this partnership to launch the channel in SD and HD across the MENA region. TiVi5MONDE, owned by French network TV5MONDE, is a 24/7 French-language children’s network, featuring cartoons, educational shows, and teen series. Yves Bigot, TV5MONDE CEO, said, “We are very pleased to launch this new channel with our long-time partners Arabsat and to expand the TV5MONDE brand in the MENA region. We anticipate Tivi5MONDE driving interest among young north African Francophones and Francophiles while further promoting French culture and language in this area.”

The channel is launching on the Badr satellite at 26°E, which already hosts a strong DTH neighbourhood. Arabsat already provides satellite distribution services over MENA for TV5MONDE Maghreb Orient HD and TV5MONDE Style HD, which are on the same transponder as the new service. For TiVi5MONDE, Globecast is supplying the channel, with full redundancy and channel monitoring, to Arabsat’s European teleport, from where the channel is uplinked.

Dr. Badr bin Nasser AlSuwaidan, Arabsat Acting CEO, said, “Arabsat is always honoured to be one of the partners and satellite distributors of TV5MONDE’s network of TV channels in the Middle East and North Africa region. Working with Globecast, we’re very pleased to have been selected as the exclusive satellite distributor of the new kids’ channel for our viewers in the Arab world. Arabsat has always been proudly classified by our viewers as the preferred satellite video neighbourhood of the Arab family. We pay close attention to the valuable and distinctive TV channels that are particularly relevant to the customs, traditions, and privacy of the Arabic family in our region.”


The Canadian Humber Institute’s ATSC 3.0 laboratory is equipped with Triveni Digital solutions

Triveni Digital has announced that the Humber Institute of Technology and Advanced Learning, based in Canada, has established an ATSC 3.0 living lab and test bed using a solution from Triveni Digital, including the SkyScraper® XM ATSC 3.0 Datacasting System. The lab will enable multinational industry stakeholders to experiment with delivering advanced television and datacasting applications, such as remote learning, personalized advertising, and automotive applications. The B²C Lab is set to be the first of its kind in North America to include an ATSC 3.0 system and a 5G core network, offering R&D pathways to achieve broadcast and broadband network convergence.

“Humber College is thrilled to explore the new business models and innovative services enabled by ATSC 3.0 and NextGen TV,” said Orest Sushko, director of the B²C Lab at Humber College. “We chose to partner with Triveni Digital because of their innovative leadership in ATSC 3.0 technology, real-world deployment experience, and outstanding end-to-end solution. With Triveni Digital, we have created a cutting-edge lab environment where academics, students, and industry stakeholders can learn and experiment to push the boundaries of what is possible with both television and datacasting applications.”

The B²C Lab is developing an OTA experimental RF transmission system test bed with multiple transmitters and antennas covering the Toronto area, allowing for developing an ATSC 3.0 intertower communication network and a single-frequency network. Humber College is using Triveni Digital’s SkyScraper datacasting system in the B²C Lab to support non-real-time use cases with optimized bandwidth utilization. In addition, the B²C Lab has incorporated Triveni Digital’s ATSC 3.0 Broadcast Gateway scheduler, GuideBuilder XM ATSC 3.0 Transport Encoder, and StreamScope XM Analyzer for ATSC 3.0 datacasting. The GuideBuilder XM generates ATSC 3.0 metadata associated with the content being transmitted by SkyScraper. Using the StreamScope XM Analyzer, research teams will analyze and troubleshoot the ATSC 3.0 datacasting streams.


Qvest and Trade4Sports combine skills to enable automated and digitalized marketing of advertising spaces in sport As a provider of content advertising technologies, Trade4Sports specializes in a real-time digitalized marketing solution for the sports and entertainment sector. The company has established “T4S Desk & SSP”, a booking platform for media agencies and advertising clients, and developed “T4S Marketing Cloud”, an online-based inventory and playout system for rights holders. The platform “T4S Desk & SSP” serves as the digital bridge between rights holders and media agencies. Thanks to standardized preparation with learned media

planning key figures, the LED screens – called “Digital Sport Screens” – can be selected for the first time in classic media planning via the “T4S SSP” service and booked automatically via self-service using “T4S Desk”. This programmatic and intuitive ad-tech solution can be used at a national and international level to extend media campaigns that use digital advertising spaces in the sports and entertainment sector. As an all-in-one solution, the “T4S Marketing Cloud” covers all requirements from digital marketing to delivery and bundles

the technology expertise of Trade4Sports and Qvest. This SaaS solution gives clubs and rights holders a convenient, automated, and cloudbased way of managing their advertising spaces, such as LED boards at the sidelines or other virtual or physical displays in the stadium. The inventory management function therefore digitalizes and standardizes the marketing processes. The delivery of advertising media is made much simpler thanks to an integrated asset management. Assets can be controlled, modified, and delivered in real-time via a cloud-based multi-channel playout system. In addition, as an established media technology company, Qvest, together with Trade4Sports, offers holistic and individual support in the logistical and technical planning, delivery as well as commissioning of the LED technology in the stadiums. 



Flemish-Belgian public broadcaster VRT goes live with the OOONA Integrated platform VRT has signed a contract for a multi-year license of the OOONA Integrated platform. VRT is the Belgian-Flemish public service broadcaster.

The broadcaster’s goal is to promote pluralistic debate and strengthen democracy by propagating the Flemish identity through a cultural and language experience. VRT achieves this through a host of localization services. These include live and offline subtitling for both hearing and hard-of-hearing viewers, audio description and audio subtitling for the visually impaired, plus Flemish language voiceover.

VRT has gone live with the OOONA Integrated platform, which is now available for use by its in-house localization team. An API connection provides the link for VRT localization orders to be created on OOONA. Resource planning includes the customary ADFS security authentication. Once localization production is complete on the OOONA platform, content is pushed back to the VRT servers.

“VRT has always been on the cutting edge of developments in terms of innovative solutions for media localization,” says OOONA co-founder and CEO Wayne Garb. “Being the localization platform of choice for this future-looking broadcaster is an acknowledgement of our ability to deliver a platform that provides users with all the functionality required for today’s complex media localization workflows.”

“We are very pleased to be working with the OOONA team,” adds VRT project manager Erik Phalet. “They are great at listening, understand the market well and have responded quickly to our request to customize their platform to our exact needs.”



Dalet launches two partner-focused programs to expand its services to be deployed on third-party platforms Dalet has recently announced two partner-focused programs that scale Dalet distribution channels and expand innovative third-party technology options available for Dalet workflows. These are the Dalet Channel Sales Partner and Technology Alliance Partner programs, and they are scheduled to roll out in Q1 2022. The objective is to expand local points of contact for customers around the world and to build technology partnerships with providers such as Amazon Web Services, Microsoft Azure, Alibaba Cloud or OVH Cloud. Leading the partner strategy and programs is industry expert Ewan Johnston, who joined Dalet recently.

The multi-tiered program enables existing and new sales-focused partners to build a long-term sustainable business working with Dalet. Dalet’s Technology Alliance Program recognizes key technology applications and prioritizes collaboration to expand customers’ workflow options. Ewan Johnston, Director of Strategic Alliances and Channel Partners, adds: “More and more companies outside the traditional media and entertainment space need enterprise video management capabilities, whether to improve the efficiency of their video production, enhance brand awareness or engage audiences at a deeper level over digital channels. Dalet’s cloud-native media workflow solutions, which are used by media companies globally, have been packaged under SaaS and subscription options, making our first-rate video management accessible to a wider range of customers. Thus, we are deepening our relationships with our current partners while simultaneously creating new ones to have a wider partner network that can support these expanded opportunities.”



Medialogy Broadcast expands in South Asia and West Africa

Left to right: Habeeb Ibrahim, Medialogy Broadcast’s new SAARC HQ office building, Manoj Sharma and Dharamveer Grover.

London-based TV production equipment and service provider Medialogy Broadcast has announced the opening of a newly formed sales and support company in India plus an addition to the recently established Nigerian branch which serves the West African region. Manoj Sharma joins the India office as senior manager, sales, with Dharamveer Grover as pre-sales support engineer. Both are based in New Delhi. Habeeb Ibrahim joins Medialogy Broadcast as solutions architect and support manager, operating from Lagos, Nigeria,


alongside technical sales manager Etukudo Etebom Akpan who joined the company in Q3 2021. “These new appointments build on a very successful 2021 during which we completed a series of new broadcast system deployments and introduced our service-based archiving platform, ArcPaaS,” says Medialogy Broadcast sales director Ananth Sam. “Manoj Sharma, Dharamveer Grover and Habeeb Ibrahim are all highly accomplished media specialists with more than 40 years of broadcast-related industry experience between them. “Medialogy Broadcast is a fast developing group of people who understand how to turn customer aspirations into cost-effective reality based on operationally efficient hardware, services or hybrid solutions,” adds Manoj Sharma. “Their knowledge of computer technology is particularly relevant to today’s IT-centric broadcast industry.” Medialogy Broadcast’s newly incorporated SAARC HQ office is at the centre of New Delhi’s commercial, business and communications district: Connaught Place, a British-era zone in the heart of the city, opposite the British Council building.


Riedel Communications creates Customer Success Department and appoints Craig Thompson as its director Riedel Communications has recently announced that Craig Thompson has joined the company as executive director of the new Customer Success Department. Thompson’s objective is to oversee operations for the department and to expand the company’s services portfolio and associated revenue streams. The Customer Success Department was created to deliver a good experience for customers. The department will coordinate every project from the initial

demo and system design to training, delivery, and postsales support. “With the creation of our Customer Success Department, we’re realizing our vision of customercentric operations, while preparing a solid foundation to support a product portfolio of hardware and software products,” said Rik Hoerée, CEO Product Division at Riedel Communications. “With an impressive record of improving customer satisfaction, exceeding sales targets, and managing

world-class services teams, Craig is a natural fit to lead our Customer Success team, and we’re thrilled to welcome him to Riedel.” Thompson has over 20 years of management experience in the media production and broadcasting industries. He comes to Riedel from Grass Valley, where he served as vice president of customer success and business transformation. “It’s an exciting time to be joining Riedel, as the company delivers on its commitment to our customers with the new Customer Success Department,” said Craig Thompson. “Riedel already offers a robust line-up of services — whether it’s consulting, technical support, or training courses — and I’m looking forward to expanding our portfolio even further to ensure a seamless customer experience.” 





SRG SSR is Switzerland’s public television company. But this simple definition does not do justice to the activities it has developed throughout its history, nor to the plans that this corporation (which employs around 7,000 people, offers a public service and strives to find the best way to reach every Swiss person) intends to achieve in the future. Its structure is divided into five companies: one for each of the four languages that have national language status in Switzerland, and a fifth that functions as an online information outlet. In the words of our interviewees, Bakel Walden, SRG SSR’s Director of Development and Offering, and Marco Derighetti, SRG SSR’s Chief Operating Officer, each of these four companies has tended to develop its own way of operating. However, it is now, at the dawn of digitization, virtualization and the transition to IP installations, that they are working hand in hand to offer distinctly national content in German, French, Italian and Romansh that reaches the highest mountain and the furthest valley in the country. The paradigm shift is undeniable: from satellite, cable, linear TV or FM to digital platforms, on-demand content or social networks. SRG SSR has embraced the change and the public service it offers is in the middle of the transition. Let’s see how they are developing it.



Interview with Bakel Walden, Director of Development and Offering SRG SSR In recent years, what have been the main technological challenges faced by SRG SSR? As a media company and as a public broadcaster, we have faced challenges concerning our connection with the audience. It has changed profoundly over the last couple of years. Technology plays a much bigger role now. We have been strong in broadcast, but if you look into distribution on third-party platforms, like social media or streaming platforms, we need a different set of technology and, sometimes, new partnerships. For us, this has been a challenge. At the same time, we are experiencing some challenging situations trying to have tools available so that our entire workforce can be ready for the COVID situation, basically favouring working from home. We have relied on cloud services and changed


Bakel Walden, Director of Development and Offering SRG SSR


our internal infrastructure. The technical teams have been very busy over the last couple of years.

You mentioned adapting your content broadcasting to new ways of delivering content; how does your OTT platform work and which company have you partnered with?

We are working very closely with SWISS TXT for some of these services. We had a catch-up player for each region. Since 2018, we have been developing a dedicated streaming service. Play Suisse is a platform that has a login and is based on personalization. As a content consumer, you get a different offer than someone else.

Regarding partnerships, we work with SWISS TXT and You.i TV. One of the main things we do with them is the whole subtitling process. They have very good knowledge of access services for people who cannot hear or see well. We’re doing all the subtitling processes automatically with a very interesting approach. You.i TV is a third-party provider for the infrastructure. Its contribution has been related to the back-end for the technological infrastructure.

And thirdly, for the front-end experience, we have developed solutions ourselves. The growth of our platform has been guided by the know-how of third-party companies and the expertise we gained during the process. How did you perform your part? We set up an expert team who joined our company and we worked in a very user-focused way. We had a lot of user feedback when

Technical room in Comano



we were developing the navigation, the look and feel of the platform, etc. At the same time, we were creating everything else. We had quality testing regarding the programming of the platform. We started out with a streaming platform team formed by around 17 people and we are almost 30 now. However, compared to large companies, it’s still a very small workforce. Also, from user experience up to marketing, our know-how is very important. We put the resources together. We’re doing it now and we’ll do it in the future as well. We combine the expertise we have developing a streaming service with our knowledge of the catch-up platform we have in each of our regions. So we have a real pooling of resources for developers to work on different projects at the same time, therefore we can share resources. Actually, the whole change over the last two years has been about working closely within the different divisions of the company, which I


Multifunctional and automated studio in Comano.

think is important because it gets companies to use as many internal resources as possible. Play Suisse is one of the most successful streaming services in your country, how have you achieved this popularity? I think one of the main ingredients of this success is a very clear Unique Selling Proposition (USP). We focus on Swiss-made films, documentaries and series. We will never be able to compete with the big ones, however we offer

that kind of content which our people want to see. We can do this because Swiss content is a small market and that makes it not so interesting for the big players. And Swiss people who want Swiss content find it at Play Suisse. At the same time, we’re trying to invest as much as possible in technology because – this is something we learned – you have to be as good as the big ones. In terms of navigation, technology, and the quality of the streaming service, you need to give the same impression.


The third point, which comes from our position and is probably a big advantage, is that we use the reach of our broadcast services to the fullest to promote the streaming platform. We have worked hard with our editorial teams to use the power of the broadcast world to drive and promote the streaming world.

Regarding Swiss content, how would you describe the essence of this particular content?

First of all, I’ll mention the main parts: setting, context, or culture, obviously. These ingredients make stories that matter to Swiss people. I think we can tell Swiss stories like no one else. However, it’s also true that big companies from abroad are starting to invest in local content. Today there is a discussion in Switzerland regarding a law that compels them to allocate a percentage of their income to invest in local markets. Despite this investment from the big players, I think we still have some advantage there. We have been developing Swiss content for our whole life. We now have that large volume of offerings, and we’re not going to stop.


How did you manage the COVID-19 situation? At the outset, let me say a big “thank you” to our technical team because they got us through the early days of the pandemic without knowing that such a major disaster was coming. Our company was ready to carry out work from virtually anywhere from the end of 2019. We deployed a new batch of laptops for all workers, introduced remote office and collaboration tools and also managed to create simple and easy access to the network. By a miracle, we were ready from day one. On the other hand, the production side was quite a challenge. We have exchanged a lot of knowledge with other broadcasters across Europe to see how they have handled the situation. Very



strict measures had to be put in place, and we could work quite well. Now I think that, after almost two years of the pandemic, there is a certain routine. It's still a very serious situation, but I think the teams are now capable of handling it. We hope to be able to get back on site as soon as possible, although we'll always have a mix of work at home and in the office. It's also important for people to come back together. As COVID-19 remote workflows have become established in TV, what is going to stay in your line of work? Before the pandemic, we were looking to reduce the space we were using in our company by 25%, because we knew that people are not in the office all the time. We have just achieved that goal, and now we're looking to see if we can go further, towards 30% less. Obviously, that applies more to editorial teams or administrative staff. If they are in production, they have to be in local facilities.


However, we have looked at the possibility of remote production. You don't have to go in person to every alpine ski event. Occasionally, depending on the project, we may have only the commentator and most of the team sitting here in Switzerland while covering, for example, the Winter Olympics. Are you testing cloud solutions to incorporate remote work? We have been moving many of our services from our own infrastructure to the cloud: from our administrative side, with Office 365, to our streaming services. And we have been investing a lot in making them secure and legally compliant, which has honestly been a big challenge. If you have editorial data in the cloud, which most of the time is confidential, you need to be sure what happens to the data and from where it can be accessed. I think many of the companies we work with have realized in recent months the importance


of data security and very positive steps are now being taken to truly achieve it. But it has been a challenge to bring all the agreements together. Switzerland is a multilingual region, how has SRG SSR created its infrastructure with this in mind?


New News Facility in Zurich.

Switzerland is probably the most decentralized country in terms of media and broadcasting space, because the four linguistic regions have their own production centers. Each center has historically tended to be independent; however, there have been some joint operations, such as the big investment in OB trucks that we have recently made and in which all the centers have participated. Still, the centers have been independent and have used different technologies. For example, there are different editing

systems depending on the production centre. And we want to standardize them, because we have learned that it is better to unify them. If you look at metadata, for example, you really have to have the same systems, otherwise it doesn't work. There have been big projects, for example, in the editing system, but also in newsroom technology, etc. Now, with resources under pressure, we have to achieve cost savings of about 10% compared to 2018, which is 150 million francs; quite a lot for a public broadcaster. So we have to save these costs.

It remains a challenge. Even if you think that it's enough to deploy a system and repeat it, there are different approaches and different needs in each of the regions. And there must always be a balance in the project so that all the production centers can undertake it together. I think we have come a long way in this regard: before we buy something, we look at whether we can do it together, whether it wouldn't be better to do it together. I think this is a big change from 10 or 15 years ago, when there was more money and more freedom to do what you wanted to do. With resources under pressure, we have learned to work better together and for a common good. You have told us that a specific region may have particular needs. Why? For example, in the German-speaking part of Switzerland, where you can find our biggest operation center, there is more ground to cover and a lot of different TV stations, radio stations, etc. This



means more resources are needed. And, of course, the greater the needs, the better the technology and the bigger the investments you will have to make. If you look, on the other hand, at the Romansh-language region, with about 40,000 speakers and 120,000 people who can understand the language, they cannot afford to have a big playout center. Their technical infrastructure needs to be very efficient and rather small. And it is not only language that differentiates some production centers from others. You also have to take the cultural orientation into account. Let's say that German-speaking Switzerland probably always looks a bit more towards Germany or Austria and considers what they do there. Also, the French-speaking part might have a different approach to technology, because they mirror France and adopt a lot of technological changes from there. As we said, other production centers may be more conservative, depending on their needs and cultural approach.


Visual Radio in Comano.

And it happens around the whole world. Some cultures have a much more structured approach than others. Other regions and cultures can take detours and the approaches are much more flexible. If you take a look at a broadcaster in Sweden and one in Spain the difference will be big. And it’s not a question of getting it wrong or right, it’s a question of how much flexibility each one assumes.

Switzerland also has very characteristic orography; in fact, it's the country with the highest proportion of mountainous territory on its surface. How does this affect your signal distribution? It has been a challenge. If you look at the FM network, for example, we needed to deploy a lot of antennas to reach all the valleys, and that meant quite a high distribution cost. That is


why we have been in favor of the FM switch-off, which will take place at the end of 2024. I think only Norway has totally or, let's say, mostly switched off its FM coverage. Other countries are still hesitant to introduce DAB at full capacity. Our switch-off will allow us to reduce distribution costs. And this will also allow us to have coverage in many tunnels, not only over valleys and mountains. From an audience point of view, DAB offers more choices and better quality.


It also gives you more information, such as titles. From the company's point of view, this technology has a lower environmental impact, as it uses less energy and needs fewer transmitters, so you can save a lot on FM contribution costs. What is the technological strategy for SRG SSR? For us, the right approach is not to be the first to implement technological changes, but to watch closely where the market is going. It seems, for example, that the big players are moving towards full IP infrastructure for broadcasting, and this has been an objective we have been developing over the last couple of years. We launched a major newsroom project in 2018, in Zurich, in the new News and Sports Center, fully built on IP. As for the mix between, let's say, classical broadcasting and new technologies, we have set a goal for the next three or four years. We want to strike a balance between broadcasting and non-linear usage. We are focused on how people

will perceive our content offerings. We want them to ask themselves: "Are they a broadcaster or are they a digital distributor?" And also on how the audience consumes our offer. Our goal is to achieve a 50-50 split. Broadcasting is still important today and should not be underestimated. But at the same time, the use of on-demand content is becoming more and more important. By the mid-2020s we should reach the goal of making non-linear increasingly important. No one knows when the last broadcast signal will be switched off, but it will remain relevant for a while. That said, linear content will stay forever, because people will definitely be happy to have a linear choice alongside on-demand content. Just look at the option Netflix has incorporated into its service that saves consumers from spending 20 minutes trying to decide what they want to watch. It is the "I don't know what I want to watch" button, and what it offers when pressed is, basically, content streamed in a linear way.



Interview with Marco Derighetti, COO at SRG SSR

Over the past few decades, what have been the main technological challenges facing the company? Since I began working for SRG (in 2002, as CTO of the Italian-speaking production unit), I have been confronted with a continuous digital transformation. We began with the introduction of a digital news and sports center, including a digital archive system for audio and video, in 2004. At that time, each digital production center or system was an independent entity. The main challenge was therefore the introduction of new workflows inside these systems. Once a sufficient degree of digitalization of SD technology was achieved, we wanted to connect these systems and accelerate the interconnectivity between them. At the same time, the HD format was gradually introduced (roughly between 2008 and 2012).

Marco Derighetti, COO at SRG SSR.

HD was a big achievement for the consumer, but for the broadcast industry, due to the bigger bandwidth, it slowed down the digital development of functionalities and interconnectivity. The technology was not ready (or too expensive) to allow full HD 1080p50 systems, and therefore HD was introduced with (good enough) format compromises like 720p50 and 1080i25.

The introduction of HD (with very high video bandwidth) in the broadcast world caused a tremendous gap between consumer online tool and device development and that of the broadcast world. Broadcast systems were many years behind mainstream consumer systems: no common standards for integrating different suppliers' devices, no continuous software upgrade strategies and product development, and no standard hardware devices to avoid tricky end-of-life situations.

The classical broadcast production systems had first to build up a sufficiently powerful basic infrastructure (a network with high bandwidth) that even today is often dedicated and runs on specific standards, slowing down the development of new tools. Nevertheless, today's media production systems are fully digitalized, but they do not use IP-based standard technologies everywhere. This step is in development and will need some time. Other current challenges are: the data and system migration process; dealing with the new degree of freedom that IP technology brings (systems have to be defined in advance and architectures must now be accurately rethought); and new security and business continuity issues due to increased interdependence and networking.

How many production centres do you have across the country, and how is the work organized between them? Is each of them specialized in any kind of content?

Switzerland has four national languages (German, French, Italian and Romansh), and SRG has the duty to produce in all of them. Therefore, we have many production centres. The bigger ones are in Zürich, Bern, Basel, Geneva, Lausanne (radio), Comano, Lugano and Chur. Besides these, there are many local news production centers or offices. In the past, these venues were organised by medium (radio and TV); today we are concentrating processes, which will reduce the number of buildings occupied (between 2015 and 2025 we will reduce the occupied surface by approx. 25%). We produce all genres except fiction, which is produced externally. Could you give us an overview of the technical equipment in your studios? Is there technological coherence between them? About half of the technical infrastructure is commonly operated by national competence centers (in the



domains of IT, Production and Enterprise services); the other part is built and operated independently, but with strong coordination in terms of standards and products. For instance, we are in the final steps of building the same video production system for news and sport for all SRG venues, we purchased three identical middle-sized OB vans (one in each region), and we usually purchase studio and ENG cameras jointly and define common standards (manufacturers) for all kinds of equipment. What technology could we find in one of your main studios? On the imaging side, you can find remotely monitored and controlled HDTV cameras and, increasingly, remote PTZ cameras, all connected by fiber optics. For video management we rely on Sony Hive for file-based workflow management of news and sport. A broadcast controller, on the other hand, manages the live video feeds, and we have another editing and post-production


Studio in Geneva.

system for audio and video. We have been deploying new graphics systems relying on AR and VR experiences. Our main studios are configured with LED-wall technology, and AR solutions are mainly used in sports productions. Audio management is more complex at SRG SSR

due to the four national languages. For this reason, it's not possible to operate a common archive system; synergies are used for media storage. To control video and audio resources, for example, a new IP-based master control room will go on air this year in Zurich. It


will be interconnected with those of Lugano (RSI, 2023) and Geneva (RTS, 2024). The playout facility is based on automated workflows; manual processing is mainly performed during preparation or during large live events (mainly sporting events).


And finally, in terms of our distribution outputs, some content is produced specifically for the main social networks, and content that is broadcast traditionally is also distributed digitally in the form of live streaming (linear) and video/audio on demand (non-linear). SRG operates several regional applications for news and sports, and Play Suisse as the national one. Do you rely on extra telecommunications technology to achieve that reach? Are you considering 5G Broadcast somehow? In terms of TV consumption, more than 90% of the Swiss population is connected



New News Facility in Zurich.

via IP or cable; only a minority is therefore consuming through a broadcast satellite signal. This is the reason why we switched off our DVB-T network four years ago. Beyond that, we are not considering 5G broadcasting for TV. 70% of radio consumption in Switzerland is digital (35% DAB+ and 35% digital), while the remaining 30% is still consumed through FM. The FM switch-off in Switzerland is planned for the end of 2024. It is still not clear if 5G broadcasting for radio will provide any advantages compared


to the combination of DAB+ (free-to-air) and digital. We are actively observing the evolution and opportunities, but the prospects for 5G broadcast in Switzerland are long-term. 5G is much more interesting for the media industry for the development of powerful PMSE applications. For remote production, 5G could provide valuable solutions; some trials are planned for 2022. What resolution does SRG SSR TV provide? Is every broadcaster

working in HD? Are you considering UHD? What technological changes would this imply? Currently, we continue to distribute in 720p50 and produce mainly in 1080i25. We are considering moving our production and distribution to 1080p50 around 2025, which is easily scalable to UHD. Nevertheless, for live events we already offer a digital UHD channel. It will certainly be developed further in the future. A general introduction of UHD for all production


locations is not yet under discussion. The bandwidth costs are not affordable; it is also not necessarily a "must-have" for the main formats we produce (there is no obvious need for UHD in news or studio productions), and the added value for the consumer is not really a given: most of them clearly cannot benefit from a UHD production distributed at 1080p50. What's next for SRG SSR? What is your technological future?

We have already made great progress in technological virtualization, the digitization of our infrastructures and the automation of our production processes. But we need to take further steps in order to exploit the enormous potential of these new technologies. The experience we gained with the introduction of an all-IP facility in Zurich showed us that a lot of work needs to be done to achieve the necessary maturity for both the

suppliers and the users of the system. We are currently reviewing our production processes. This is a major cultural change that needs time. Another promising "new" technological territory is the generation and management of all kinds of data and processes, from procurement through production and distribution to end users. We are convinced that AI will play an essential role in these fields.

Visual Radio in Comano.



Interview with Robin Ribback, Head of International Projects, Research & Business Development at SWISS TXT To begin with, how did SWISS TXT come about and what has been your relationship with the Swiss public broadcasting association? SWISS TXT was founded in 1983 with the aim of creating and operating teletext for Swiss television. After three decades of


experience in digital transformation, SWISS TXT was appointed by its parent company SRG as a center of excellence for ICT, video and accessibility services in 2019. What role does SWISS TXT play today, and what is its current business model? SWISS TXT is a subsidiary of SRG SSR. With 270 employees at six locations (Biel, Berne, Geneva, Lausanne, Zurich, Comano), SWISS TXT has a multilingual presence in Switzerland and has been offering services to

both internal and external customers. SWISS TXT is involved in several EU-funded research projects and works with renowned partners from all over Europe on the future of digital media. SWISS TXT became the ICT department of SRG SSR; how did this transition develop and what was the purpose of this union? SRG had decided to strategically implement the digital transformation and the journey to the cloud. For this reason, it needed a reliable internal partner for


its implementation. More than 30 years of experience in digital transformation have helped SWISS TXT to become this competence center. What services are provided by SWISS TXT, and for SRG SSR? SWISS TXT offers SaaS, PaaS and IaaS solutions for digital media operations. In addition, SWISS TXT offers a full range of accessibility services such as subtitling, audio description and sign language. To produce subtitles, SWISS TXT uses its own in-house machine translation platform, which has been specially designed to recognise and translate film content automatically on the basis of the soundtrack or existing subtitles. The platform produces a translated text, which is then manually corrected and edited by specially trained staff. Thanks to the platform, they can subtitle videos around the clock wherever they are. Our automatic subtitling platforms and solutions enable customers to deliver any audio-visual content in any language or publication form, at the desired quality, price and time.

You also provide HbbTV for SRG. What technology is behind this service? The frontend application is built with ReactJS, and SASS is used for styling. Due to compatibility issues on older devices, the JavaScript is transpiled to ES3/ES5 using Babel. The APIs used by the frontend application are written in C#.
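As a rough illustration of the transpilation step described here (SWISS TXT's actual build configuration is not public, so the preset options and browser targets below are assumptions), a Babel setup that compiles a ReactJS app down to ES5 for legacy HbbTV devices might look like this:

```javascript
// babel.config.js — illustrative sketch only; real targets depend on the
// browser engines shipped in the HbbTV devices being supported.
module.exports = {
  presets: [
    [
      "@babel/preset-env",
      {
        // Older smart-TV browsers are roughly comparable to very old
        // desktop engines, so compile down to ES5-level output.
        targets: { ie: "9" },
        // Pull in core-js polyfills only where the code actually uses them.
        useBuiltIns: "usage",
        corejs: 3,
      },
    ],
    // JSX support for the ReactJS frontend.
    "@babel/preset-react",
  ],
};
```

Strict ES3 output would additionally need plugins such as `@babel/plugin-transform-member-expression-literals` and `@babel/plugin-transform-reserved-words`, which quote property names and identifiers that are reserved words in ES3 engines.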

What is the purpose of strategies and projects such as Citizen Journalism? Can you explain its conception and how it will change the way we understand the transmission of news? SWISS TXT operates an initial prototype of the mobile citizen journalist app, which was developed as part of the EU's HELIOS research project. The app allows users to store recorded videos and captured images on the Internet in a decentralized way, and uses blockchain technology to ensure anonymity. A marketplace for sharing content enables publishing houses and media organisations to make further use of it. Citizen journalists can then be rewarded in cryptocurrency.



“Coronation Street” The technological present of the oldest soap opera in the world



Coronation Street is one of the most beloved shows in England, "from the professionals who make it all the way up to the Queen," admits our interviewee Gary Westmoreland, Director of Technology, Drama & Continuing Drama at ITV Studios, in charge of the technological strategies of soap operas like Emmerdale and Coronation Street itself. How have they managed it? Maybe being the oldest soap opera in the world gives them credit; they aired the first episode in 1960, and in 2010 they entered the Guinness Book of Records holding that title. However, it also has much to do with the fact that it has touched the hearts and souls of many generations of British people, through characters and plots that speak directly to the purest feelings we all share.



Sixty years in production is a lifetime. Gary Westmoreland grew up with the image of Coronation Street on his television and joined the team in 2015, but he has assured us that all the people and technology that contribute to the production of the show, from the first sentence written in the script to the last color tweak before sending it to air, work like a very well-oiled machine. A perfect system that has not been stopped by the transition from tape to archive, nor by the change from SD to HD, nor by COVID-19 more recently. Let’s see what the future holds for it, with surely as much success as its past.

Gary Westmoreland, Director of Technology, Drama & Continuing Drama at ITV Studios.

How would you summarize 60 years of history? It's Britain's best-loved program. Really, I think that's true. It's loved by everyone, even the Queen. It started from very humble beginnings in Quay Street and has grown into the machine it is now, with the loyalty it receives from its fans. As a program, Coronation Street appeals to the real feelings of a working generation and the ins and outs of normal life. We still see it today: Coronation Street is one of our most important programs, and it continues to grow year after year. What are your functions associated with the show? I'm Head of Technical Operations for both Coronation Street and Emmerdale, its sister soap. My role is to examine the processes that are carried out in our studios and technical facilities and try to make them as efficient as possible for the future. I look at both productions to make sure that we get economies of scale, and have that ability to, where




appropriate, be able to use the equipment from both shows to maximize the resources of ITV Studios; the parent company. I look at the equipment that we’ve got, how we work, and where things might go in the future strategically. It’s a pretty interesting place because you have to keep an eye on all the deliveries that we do, an issue that has expanded over time, and will certainly continue to expand in the future. It was very easy in the days when you just delivered to the network and that was it, but today, with our multiple deliveries around the digital content hub, VOD services

and whatever else, it's a challenging task. Why was it easier in the old days? I think easy is probably the wrong word; it was different. The volume of what we offer has certainly increased. I guess the technology of the time also made things a challenge. Just look at the tube cameras that were used back then, how light interacted with them and what we could and couldn't do because of that. We also have to take into account live recording on videotape, tape editing and that whole linear world, that good old stuff. [Laughs]

I think we've come a long way since then, but there are still challenges related to equipment acquisition and costs. For example, back in the day, acquiring 10 channels of cameras for the number of studios we covered, the technology behind it, the way we shot, the way we edited and the way we presented it to the network, I'm sure was a big challenge. As we went along, we changed cameras to the Ikegami HDK-E79, which were SD but with HD lenses. We were still producing SD, but with elements of HD equipment being adopted ahead of the switchover, which finally came with a camera and post-production upgrade. More recently, when we've replaced those cameras, we've gone to 4K. Although, there is a thing with the soaps: there are lots of producers doing standard UHD these days. We're not there yet, and this is because of the volume we produce and the risk to ensuring Coronation Street's high production standards.



Now we are moving more than ever to be a company that has content on your Netflix, on your Sky, or Disney, as a global player. What we do in the U.K. is broadcast over DVB in high definition. When we renew our technology, as we did a couple of years ago, we look at that: are they suitable to make the jump to 4K or UHD? We make sure we are ready for the future.

The cameras that we've got are a couple of years old now, but they're Sony HDC-3500.



Are you involved in any renovation process these days? A lot of the innovations are not related to television. A lot of the things that we do in the background have more to do with things like how we move scripts around the building, and how we try to keep our carbon footprint down. There's a big machine in terms of technology that sits behind the office process that also needs to be well built. For example, the Google Docs environment is built for that, but we also need to be able to add images and other digital content into the documents, be able to

annotate them, have that database management, and ensure there's no corruption when a person opens the same document from somewhere else. And achieving that is a challenge. We have deployed a couple of systems: one around crew and facility scheduling, and the other around how the scripts are written and how they are distributed. There is a lot of security around that, because that is our pre-transmission storyline and information. We have implemented relatively new software, which has taken a bit of work over the last couple of years, and we have been thinking about how to integrate it with our

other systems. We are also looking at how we can move away from the standard UK delivery file format, AS-11. We are wondering whether we should look to the IMF standard to bring together all our essence in a single package with a playlist. Those are the areas we are working on these days.

These cameras gave us the ability to go to a higher resolution than we currently transmit. Could we do UHD in HDR? Yes, we could. Do we? At the moment, no.



Do you own the equipment that you have? We own all the technological elements and devices we use. At the end of the day it’s equipment that we have used, that we use and that we will use for sure, because Coronation Street is a machine that’s been around for a long time, so it makes sense to own the core of the equipment. On the other hand, it is also beneficial to be part of ITV, as it has its own rental house that provides us with the extra camera, lights or sound equipment we need.

Apart from the cameras you mentioned above, what brands have you relied on? We are an Avid house, so we have Avid Nexis with Avid FastServe Ingest devices. We have implemented ISOGroup via Avid & JellyBean Media to manage the front end of the studio recording. This allows us to group clips and import sequences directly into the editing environment, with the ability to input additional metadata as required. Media Composer for editing, Pro Tools and S6 mixing consoles for

dubbing. Another thing we have deployed is Media Central Cloud UX. That gave us the opportunity to be more flexible as our production team are now able to review, comment



and edit content remotely on various devices such as an iPad. And that is where the challenge has arisen. The introduction of an iPad works well when it sits on the foundation of our network, in our building, over which we have total control. But it's not so comfortable when you get out of that environment. Nor as secure. Now, broadcast is becoming more IT-driven than ever and, in that transition from traditional broadcast engineering to IT, lies the real challenge.



What are your production methods? We do the episodes in blocks of four. This is because of the volume we handle, to make the process more efficient. The system is very much tailored to make sure we can handle the volume we need to produce on a daily basis. It's also true that we are in a bit of an unusual position, because


both Coronation Street and Emmerdale have been running multi-camera for many years. Recently, we altered our production methods: we used to shoot vision-mixed multi-camera, and now we just do multi-camera shooting. We utilize up to four cameras at once, all shooting the same scene, we record the ISO feeds, and we catch the sound with Fisher

booms; and this allows us to increase our creative options. We would love to hear about the great challenges that the Coronation Street format has experienced over the decades. I think the biggest challenge from the beginning would have been the


increase in volume, from two episodes a week to the number of episodes we're airing now. I think the volume of content, while maintaining the quality of the writing, the production, what we see on screen and the production values, is probably the biggest overall challenge that Coronation Street has faced. Apart from that, one of the biggest technological challenges we have faced was the change from SD to HD. And it has to do with the volume increasing as well. That transition was


made at the same time we were going file-based. How do you then back that up? You have to trust this entire computer environment, and that can be an uncomfortable feeling, because we are very close to transmission in terms of our deliverables. How would we continue to produce content and remain on air if we lost key elements of our infrastructure? Nowadays we are looking at the equipment that we have on premises, the possibilities of doing the same using cloud providers, and which workflows are suitable for that. We're looking for efficiency, as we said, and the ability to put our security backup systems in the cloud is very interesting. You don't have to deal with egress charges and migration between platforms, and you can reduce your on-site presence and move to a more efficient way of working. It's secure, and it allows us to do all those things we still have to do in a way that's efficient and greener.

Where are your facilities? We moved from Quay Street to our current site in Media City, Manchester, UK. We had been at the old Granada Television site on Quay Street from the show's origins. The project required a complete rebuild of Corrie's sets. We built four studios, then rapidly built another two, so we've got six studios on site. Obviously, we go off on location to shoot certain scenes where the storyline requires it, but a lot of our production is done on that site. We changed location to achieve flexibility. We have a duct network with point-to-point fiber throughout our site, and there is the possibility of connecting to that fiber when necessary and adopting a different solution in the coming years. Today, much of our infrastructure works as follows: we have a box in the studio to which we connect our equipment. That then talks to a piece of equipment in our apparatus room, which distributes the feeds both out to the studio and incoming. This is all



done automatically and is due to the fiber technology that has come along in the last two decades. It’s becoming more compact, easier to use and allows us to do things we couldn’t imagine before. Sometimes we experiment with things. When we built our additional studios, Studios 5 and 6, we basically built them as stand-alone studios without galleries. We have a mobile gallery that we move wherever we need to have flexibility in our shooting


schedule. It’s like being in the building. I’m sure in time that will develop and probably we’re not that far away from being able to go to a place that’s not on our Media City site and do the same thing using 5G and cellular bonded. Are you looking at this possibility? It’s an area we’re studying right now for the sister soap opera Emmerdale. The town of Emmerdale is a 30-minute drive from the base. It has limited connectivity just

because of its location. Eventually we will have the ability to shoot from there directly on site or in the cloud. The days are here when we do a color grading session in Manchester, our operator is in Plymouth and the producer is looking at it in London, so remote working is very much our future. Speaking of the devil, how much COVID has affected your workflows? The pandemic changed the way we had to operate


our galleries, the way our production staff in the studio was, we had to keep people socially distant, we had to protect our crew and our cast. That meant there were some changes on set. Sometimes that meant different techniques in terms of how you could approach people and how you physically shot it to make it work. The technical part of the remote workflow; obviously, in post, it’s relatively easy to do and it was relatively easy to do with the tools we already had to let our editors work from home. It’s



more about when people have to come into the building to do the high-level technical finishing. This is always going to be the biggest challenge. I think that it is very easy to lose that collaboration because you’re not physically in the room. Nine times out of ten, communication is body language rather than what’s actually being said. However, and apart from that condition that final adjustments have by involving decision-makers coming to the building, the remote work has been deployed, in general, perfectly considering the starting point and the procedures we were using, which are very traditional. Actually, very quickly, we were able to turn it around with very little additional effort. That’s been great. Is there anything of the remote work process you have developed that you are going to maintain to the detriment of traditional production methods? Personally, I don’t think we’re going back to a stage where all the editing is


done in the building. It’s really interesting, because we have to balance the well-being of our workers and also the fact that they want to reconcile home and work. If that really works for people, then we will get a better product from our staff. If people are happy and satisfied with the worklife balance, then we will encourage this element and try to maintain it because it is a really healthy thing to do, as well as reducing our carbon footprint. The curious thing is that, given the expansion of demand and, consequently, of content that we have experienced historically, it would be logical to consider creating more office space to accommodate the expansion of staff. But, COVID has turned this thought on its head - do we really need that extra space? Couldn’t we build it for other purposes? I think those are the elements that we will maintain, because it is beneficial for our staff and also for our workflows. We will rely on that part of collaboration, whether the technology supports

it or not. For example, could we get into virtual production environments and would that help us being more efficient? Well, if this technology was more mature it certainly would. Are you investigating the possibility of introducing


Virtual Production into your pipeline? It’s not something we have actively studied yet. It’s a theory that we’re playing with. What you can deliver on television today is, for example, what we usually see on a sports

commentary show. A lot of graphics flying around the presenters, a virtual set, and so on. Elements that, in reality, are not there. We wouldn’t produce it that way, but for drama it sure is a cheaper and more efficient way to do it. The ordinary way is to

shoot with physical sets and then edit and add all the VFX. Compared to virtual production, with the possibilities that it can come to offer, the traditional way is much less efficient. But that’s just a consideration we have in mind, for the moment. 



JPEG XS, or should we say XL? The JPEG XS standard, specifically ISO/IEC 21122, is a codec developed by the Joint Photographic Experts Group (JPEG), as its name suggests, which promises to deliver interoperability, low latency, no visual loss of quality and very low processing requirements for high-quality formats. Put that way, it sounds really thrilling, but let's look in a little more detail at what this means.



By Yeray Alfageme

The three pillars of JPEG XS
As we have mentioned, JPEG launched the JPEG XS specification back in 2018 with three main features in mind:
• Zero visual quality loss: Content compressed in JPEG XS is visually indistinguishable from the original content.
• Low latency: Total latency in a complete chain of encoding, transport and decoding under JPEG XS is minimal, between 1 and 32 lines of latency.
• Lightness: JPEG XS is designed to require very little computing power, thus allowing efficient implementation in both software and hardware.

Each of these three essential pillars has more implications than a mere headline suggests. With regard to zero visual quality loss, it allows replacing the original baseband or uncompressed video signal - which for HD environments means between 1.5 and 3 Gbps, unmanageable for many environments - with an equivalent JPEG XS signal which can still be regarded as an original signal. This entails spectacular bandwidth savings. Compared to its older sibling, JPEG 2000, which has a typical bandwidth of around 150 Mbps, JPEG XS offers 3-4 times better throughput, which would bring the original 1.5 Gbps down to about 38 Mbps. Not bad, huh? And if we translate this to 12-Gbps UHD-HDR signals, it means such a signal can be carried in just 300 Mbps without loss, thus allowing it to be managed over 1-Gbps Ethernet links, something simply unthinkable with other formats.

The fact that it is a low-latency codec also makes it easy to use anywhere throughout the transmission chain. If it only takes a few lines to encode and decode this format, nothing will prevent us from using it to compress the output signal of our camera chains on the way to our production center in a remote production environment, for example. If we add to the two previous points the fact that enormous computing capacity is not a requirement - as is sometimes the case with heavier or older codecs - we can have compression capacity anywhere. For example, a drone broadcasting high-quality 4K50p HDR images wirelessly by means of JPEG XS, without quality loss, with low latency and a controlled bandwidth, enables coverage and range to be reliably extended, making it possible to operate in environments that were not feasible up to now.
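As a quick back-of-the-envelope check on those figures, the arithmetic can be sketched in a few lines of Python. Note that the 40:1 ratio used here is inferred from the article's own numbers (1.5 Gbps down to roughly 38 Mbps), not a codec specification:

```python
# Illustrative bandwidth arithmetic for the figures quoted above.
# The 40:1 ratio is an assumption derived from 1.5 Gbps -> ~38 Mbps.

def compressed_mbps(uncompressed_mbps: float, ratio: float) -> float:
    """Bitrate left after applying a compression ratio (40 means 40:1)."""
    return uncompressed_mbps / ratio

hd = compressed_mbps(1_500, 40)    # HD baseband at 1.5 Gbps
uhd = compressed_mbps(12_000, 40)  # UHD-HDR baseband at 12 Gbps

print(f"HD:  {hd:.1f} Mbps")   # 37.5 Mbps, close to the ~38 Mbps quoted
print(f"UHD: {uhd:.1f} Mbps")  # 300.0 Mbps, fits a 1-Gbps Ethernet link
```

At 40:1, the UHD figure lands exactly on the 300 Mbps mentioned above, which is why a plain 1-Gbps link suddenly becomes a viable transport for visually lossless UHD.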


But let's not just scratch the surface; let's go into a little more detail about what this codec offers, as it is bound to replace JPEG 2000, and many manufacturers have already implemented it throughout their entire equipment ranges.

Is it really that good?
Video codecs are generally used for a wide range of applications, mainly aimed at distribution. For production, derivatives of these codecs with specific - although not streamlined - parameters are used.

The JPEG XS codec was specifically designed for use in production environments where high quality is the primary factor, far more important than the compression ratio, although the latter must also be kept in mind. In production environments, more capacity is typically available for both streaming and storage. For transmission, private LANs or WANs with high bandwidths are typically used, and for storage, a SAN environment of several hundred terabytes is very typical in any production center. However, working uncompressed is difficult, so mezzanine compression really caters to these needs. JPEG XS is used in instances where 4:2:2 or 4:4:4 subsampling with up to 12 bits per component and a compression ratio between 2:1 and 10:1 are needed, coupled with a low latency below 32 lines and multi-generation robustness. A typical example is streaming 4K50p, which requires about 10 Gbit/s uncompressed. Reducing that bandwidth by a factor of between 3 and 4 enables secure transmission of multiple streams over a 10-Gbit Ethernet line.

And this has great implications on a creative level. As this codec needs very few resources, it can be implemented in FPGA chips for use in devices or environments that require low power consumption or higher compression speed and reliability, as well as in software by means of a conventional CPU, thus offering greater flexibility in editing workstations, for instance. A state-of-the-art workstation is not required for viewing a post-production result in 4K50p JPEG XS in real time; a fairly modern graphics card can support it without problems using Premiere Pro, for example.
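To make that stream-packing argument concrete, here is a small sketch. The 90% headroom factor is my own assumption for "secure" transmission, not part of the article or the standard:

```python
import math

def streams_per_link(link_gbps: float, raw_gbps: float,
                     reduction: float, headroom: float = 0.9) -> int:
    """Whole compressed streams that fit on a link, leaving some headroom."""
    per_stream_gbps = raw_gbps / reduction
    return math.floor(link_gbps * headroom / per_stream_gbps)

# Uncompressed 4K50p at ~10 Gbit/s, reduced by a factor of 3 or 4:
for reduction in (3, 4):
    n = streams_per_link(10, 10, reduction)
    print(f"{reduction}:1 -> {n} streams on a 10-Gbit Ethernet line")
```

With a 4:1 reduction, three full 4K50p streams fit safely on a link that previously could not even carry one uncompressed stream with headroom.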

But that's not all
JPEG XS is not only a good, nice, cheap codec; it meets many of the technical requirements to be considered an industry standard like JPEG 2000, MPEG-2 or even MPEG-4. However, it is difficult to compare it with other existing new-generation codecs, as it is geared towards both production and broadcast environments. To get a better idea, let's look at some features in more detail:
• Truly constant bitrate: CBR (Constant Bit Rate) has always been associated with higher bandwidth consumption, a drawback offset by a lower encoding cost and lower latency. All the above being true, the up-to-10:1 compression ratios achieved by JPEG XS make it possible to set very precise, optimized bandwidths, because JPEG XS allows us to choose the exact bitrate, unlike ProRes, for example, where we move between fixed bitrate steps. In this way, all our bandwidth can be used, with more than one signal if needed.
• Multiple passes without visual loss: JPEG XS allows up to 10 encoding cycles without much image degradation. We have to be somewhat skeptical about this; even so, this range is much higher than for other codecs, so we can rest assured and use it in a production environment where, typically, an image is encoded between 4 and 6 times.
• Mathematically lossless coding: Whenever visual quality loss is involved, a subjective factor comes into play, although from the EBU to the ITU many studies have been carried out to objectively determine what a visual quality loss really entails. However, as an engineer, bringing it into the mathematical realm always gives me confidence. And there are JPEG XS profiles that actually allow you to reconstruct, pixel by pixel, the same original image from one end of the encoding chain to the other. This is lossless indeed.
• Actual support for new formats: JPEG XS allows up to 16 bits of depth per component, which is above the 12 bits currently used in HDR signals, for example. This means it can be used with image formats such as UHD and higher without straining the codec's specifications. In addition, we will be able to continue using it in environments with limited resources or low-latency needs, even with very high resolutions, BT.2020 color gamuts and others yet to come.
There are other important features, such as bitrate control, but I believe they do not have as important an impact as the previous ones in our work environments, nor do they weigh as heavily in JPEG XS being adopted as a new standard worldwide.
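The "latency below 32 lines" figure quoted above is easier to appreciate in time units. A minimal sketch, assuming a 1080-line frame at 50 fps and ignoring blanking intervals:

```python
def latency_ms(latency_lines: int, frame_lines: int, fps: float) -> float:
    """Convert a line-based codec latency into milliseconds."""
    line_time_ms = 1000.0 / fps / frame_lines  # duration of one video line
    return latency_lines * line_time_ms

# Worst-case 32 lines on a 1080-line, 50 fps signal:
print(f"{latency_ms(32, 1080, 50):.2f} ms")  # 0.59 ms
```

In other words, a full encode-decode hop adds well under a millisecond, which is what makes it painless to insert the codec mid-chain, between a camera and a remote gallery, for instance.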


Let's discuss standards a bit
When it comes to new formats, making reference to standards is a must. Although we very well know that it is the most boring part, they are important, since knowing that a system complies with a certain standard gives us the assurance that the requirements we need are met and, in addition, allows us to guarantee interoperability. They are boring but necessary.

As we have already mentioned, JPEG XS is the ISO/IEC 21122 standard and was released by the JPEG group in 2018. The first editions of the standards came out in 2019, in separate parts, as is customary.

Part 1, formally designated as ISO/IEC 21122-1, describes the core coding system of JPEG XS. This standard defines the syntax and, similarly to other JPEG and MPEG image codecs, the decompression process for reconstructing a digital image from its data stream. Part 1 provides some guidelines for the reverse process - compressing a digital image into a data stream, more simply called the encoding process - but leaves implementation-specific optimizations and options to implementers.

Part 2 (ISO/IEC 21122-2) builds on Part 1 to split different applications and uses of JPEG XS into subsets of encoding tools with more stringent restrictions. Defining profiles, levels and sublevels makes it possible to reduce the complexity of implementations in specific applications while ensuring interoperability. Let's remember that less complexity generally means less energy consumption, lower production costs, simpler constraints, etc. In addition, Part 2 also specifies a buffer model, consisting of a decoder model and a transmission channel model, in order to guarantee low-latency requirements at a fraction of a frame.

Part 3 (ISO/IEC 21122-3) specifies the transport and container formats for JPEG XS signals. It defines transport of important metadata, such as color spaces, display metadata for mastering (MDM), and EXIF, for easy transport, editing and viewing. In addition, this part defines XS-specific ISOBMFF tables, an Internet media-type registry, and additional syntax to allow XS to be embedded in formats such as MP4, MPEG-2 TS, or the HEIF image file format.

Part 4 (ISO/IEC 21122-4) is a JPEG XS support standard that provides compliance testing and buffer model verification. This standard is crucial for XS implementers and device compliance testing.

And finally, Part 5 (ISO/IEC 21122-5) represents a reference software implementation (written in ISO C11) of the JPEG XS Part 1 decoder, in compliance with Part 2 profiles, levels and sublevels, as well as an exemplary encoder implementation.

All of these parts were released between May 2019 (Part 1) and October 2020 (Part 5), and all of them are under review. Part 1 was foreseen to be revised by January 2022 and is now in a final stage before release. After this, revisions of each of the subsequent parts will follow throughout the year, so we will have to be ready.

What are the profiles available in JPEG XS?
As we have mentioned before, one of the features of JPEG XS is that it allows us to choose the exact bandwidth for our encoded signal. This does not mean that we can encode a 12-bit 8K100p HDR signal at 100 Mbps - it would not be properly displayed anyway - so the standard offers certain profiles to make life a little easier (see Table 1).

Conclusion
I think JPEG has done a great job with JPEG XS. JPEG 2000 is a codec that - the evidence is there - has become and will remain a standard in many applications, but with JPEG XS they have managed to offer much more flexibility without overcomplicating the implementation. This shows in the codec's design and in the text of its standards; both reflect the fact that increasingly fewer dedicated signal-processing devices are being used in the industry, which makes JPEG XS a lightweight codec that can be implemented in hardware, in software or even on mobile devices. In addition, it has been kept as separate as possible from current image formats such as UHD and the like. Although it may seem contradictory, this allows the codec to be durable and innovation-proof: if a new VR format that we know nothing about today arrives, or a new definition or aspect ratio is created, JPEG XS will not care; the codec will simply apply its algorithm to achieve extremely high compression rates with "lossless" quality. From 2019 to the present, more and more manufacturers have added JPEG XS to their catalog of supported codecs, and I believe it will become a widely used production codec as soon as production formats such as UHD require it. I think we are currently in a time of transition from HD to new formats; all this will eventually stabilize and, almost certainly, JPEG XS will be one of the few chosen codecs, as AVC-Intra, MPEG-2 or JPEG 2000 are nowadays.


Table 1.



Audiovisual creation at your fingertips By Carlos Medina Audiovisual Technology Expert and Advisor

The purpose of this article is to get an insight into a piece of equipment that brings together many technologies and has pervaded many facets of our social context: the smart mobile phone, or smartphone. Our lives have become very dependent on this device's use and operation: from communication between people via telephone calls to sending multimedia messages, not to forget the possibility of carrying out banking, commercial and administrative operations by connecting to a 3G/4G/5G network, or downloading any type of app.

Only fairly recently has research on mobile addiction begun to be carried out. This phenomenon is known as nomophobia and consists of an irrational fear of leaving home without the mobile phone or of not carrying it with us. It is not our aim to delve into this addiction, but it is clear that the presence of mobile phones has changed the future of society as a whole and of people in particular.

Our approach to smartphones is from the audiovisual creation point of view. This increasingly sophisticated equipment is making it possible to obtain images and videos of high technical quality. It is a very interesting tool for achieving audiovisual productions that can be used in different environments, from the most personal sphere to content for social networks or YouTube, and also for the professional audiovisual industry (short films, feature films or advertising).




In 2015, Samsung's marketing division published information on the most important aspects that consumers had pointed out when purchasing a smartphone, highlighting that camera quality was the third factor considered, only after 4G connection capabilities and device durability.

The first thing that comes to mind, before learning a little more about the elements and technologies of a smartphone that enable recording images or videos, is the time-related aspect of this technology's arrival on the mobile phone. We mean the fact that, in a truly short time, we have gone from being able to take photographs (still images) to recording videos (moving images).

The historical origin of cameras in mobile phones is as uncertain as with other technologies such as cinema, television or radio. Establishing who was first - the first patents, the first tests, experiments and inventions - is not an easy task. We could start with the first commercial appearance of a cell phone featuring a built-in camera, from manufacturer Kyocera: the VP-210 VisualPhone model (1999), which was sold only in Japan for the PHS network and equipped with a single front camera of 110,000 pixels. Samsung then launched the SCH-V200 in Korea in June 2000, a model with a very peculiar solution: the phone's camera and components were separate devices that shared a common body. To access photos on the SCH-V200, the phone had to be connected to a computer. In addition, it was only capable of taking 20 photos at a resolution of 0.35 megapixels. In November of that same year, 2000, Japanese company Sharp and J-Phone released the J-SH04 phone model. This camera-phone was capable of taking pictures with a resolution of 0.1 megapixels for a 256-color display. In 2002, Sanyo, together with carrier Sprint, brought the first camera phone exclusively to the US: the Sanyo SCP-5300 model (a "clamshell" phone). This device had some upgrades compared to the J-SH04, such as the option to use a flash, white balance, digital zoom, the possibility of assigning photos to phone book contacts, and filters in sepia, negative colors and black and white. Three major mobile phone manufacturers - Motorola, Sony Ericsson and Nokia - then committed to capturing and recording images through their new models.

Sony Ericsson launched the T68i model (2002), a phone with the peculiarity that it did not have a camera built into its body but instead required an external accessory (the camera) that had to be connected to the terminal's charger area. The T68i featured a 640 x 480-pixel VGA resolution, and its greatest novelty was that it could take color photos. The accessory itself had enough memory to store 14 photos at full resolution, although resolution could be decreased to 80 x 60 pixels to hold up to 200 files. Nokia marketed the 7650 model (2002), the company's first phone ever to come with a VGA camera. And Motorola released the E365 (2003), also with a VGA camera. In 2003, we find the first video recordings on a smartphone, which came from Nokia with the 6600 and 7600 models, naturally equipped with a camera and the processing power necessary to achieve this goal.

That same year, 2003, the first front cameras reached the market, from Sony Ericsson (the Z1010 mobile phone) and Siemens (the U15 mobile phone). The years 2004 and 2005 marked the beginning of an 'endless' battle to improve image quality under the almighty megapixel figure. Nokia took camera phones to a new level by launching the N90 model, capable of achieving 2 megapixels. A determined commitment from Apple and Samsung towards smartphone features made them absolute leaders in the marketing of smartphones equipped with high-performance cameras. Simply by way of historical data: in 2007, Apple introduced the original iPhone, with a 2MP camera that had neither LED flash, nor autofocus, nor video recording capabilities. But it was the iPhone 4S in 2011 that stood out, with its new 8-megapixel camera.



For its part, Samsung introduced the i8510 (2008), known as INNOV8 (for 'innovate'), the first terminal to feature an 8MP camera. But it was Nokia that set the bar very high in terms of megapixels with its Lumia 1020 (2013) model, featuring 41MP as well as Carl Zeiss optics, optical image stabilization, auto/manual focus and a xenon and LED flash. The increasingly sophisticated features of the cameras built into smartphones opened up plenty of opportunities for new manufacturers: LG, HTC, Acer, Google, Huawei, OnePlus, among others. In 2011, both LG and HTC launched smartphones equipped with 3D cameras, the HTC EVO 3D and the LG Optimus 3D. It was a system that made use of two 5MP cameras to create stereoscopic images, enjoyable on their 3D screens without the need for 3D goggles.


Acer, the renowned laptop manufacturer, not very successful in the smartphone market, presented its Liquid S2 model (2013), the first mobile with 4K video recording. Google, in collaboration with LG, ushered in the era of HDR in smartphones with its Nexus 5 (2014) model. Huawei, with its P20 Pro model in early 2018, stood out from the rest with a new triple-camera system, a novelty that would later be implemented by many of its competitors, such as OPPO's Find X3 Pro model, which features up to five cameras (four rear and one front). In January 2022, OnePlus presented its OnePlus 10 Pro model, equipped with a 48MP main rear camera. Manufacturer Xiaomi markets the Mi 11T model, which comes with four cameras, one of them - the main rear (wide-angle) camera - reaching 108MP.

Nowadays, every year we see news of a new model or version, or even the appearance of some other manufacturer that surprises us. But one conclusion is clear: as far as capturing and recording images and videos is concerned, having a quality camera has become one of the main arguments in smartphone sales, promotion and marketing campaigns.

It is now time to share those aspects, parameters or technical specifications that we must consider for a smartphone to qualify as a Cinema Smartphone:

Camera or cameras: We refer to the number of cameras that the mobile device has, even reaching up to five in some models.

Placement on the device's body: front (for lovers of selfies, video conferencing or online meetings) or rear. Normally, the main camera - and the one offering greater performance - is located on the back. Knowing the possibilities of each camera, as well as the relevant viewing angles, allows for greater shot versatility. Currently, there are smartphone models that already feature wide-angle optics, telephoto and even macro optics.

Image sensor: It is important to know the sensor's size and the technology used when capturing the image. Sensor sizes in the mobile sector are small compared to the cameras we find in professional audiovisual production. The most common sizes are 1/1.7″, 1/2″, 1/2.3″, 1/2.4″, 1/2.55″ and 1/3.4″, among others. As for sensor types, CMOS is the prevailing choice.

Number of megapixels: This figure is essential for image capture, as it allows higher quality and improves the possibility of enlargement. At present, we are seeing a minimum of 12MP and up to 108MP in some models.

Video resolution: We refer to UHD (2160p) and 4K qualities, but we can already find some smartphone models with 8K UHD (7680 x 4320).

Frames per second: The most common values in images per second are 24fps, 30fps and 60fps.

HDR: An abbreviation for High Dynamic Range. This technology improves image capture for a better result across the various exposure levels that we can record in our photographs and videos. The most common standards used are HLG, HDR10 and HDR10+, although some smartphone manufacturers are already including Dolby Vision. In February 2017, the Ultra HD Alliance announced a new standard for mobile devices, the so-called Mobile HDR Premium. The Ultra HD Alliance is made up of movie producers and technology companies seeking to set a standard for next-generation entertainment.

Video codecs: The possibility of supporting ProRes video recording, HEVC and/or H.264 formats.

Diaphragm or f-number: This tells us the camera optics' maximum aperture, which determines how much light can pass through to the image sensor. Some examples: f/1.7 or f/1.79 on the main rear camera; and other f-numbers on the front camera or secondary rear cameras: f/2.24, f/2.2, f/2.4, etc. In this regard, lenses achieve their maximum performance in aperture and in the correction of chromatic aberrations thanks to collaborations with companies specialized in photographic and cinematographic optics: Sony launched its CyberShot technology; Nokia worked with Carl Zeiss; Leica optics appear in Huawei or iPhone mobiles; and Hasselblad collaborates with OnePlus.

Camera modes: Each manufacturer gradually includes configurations for capture and recording that make handling easier for users. Some of the most important are: night mode, beauty mode, slow-motion mode, time-lapse mode, cinema mode, and even the ability to apply effects and filters to the resulting image.

Other features: white balance adjustments; color temperature changes; the ability to adjust ISO, shutter speed, aperture and EV; optical zoom (x6, for example) and digital zoom (x30); continuous autofocus...

Other key aspects in smartphones for achieving quality recordings have to do with the mobile phone's screen type, internal processor and sound capture and monitoring.

Regarding the mobile phone's screen, we find Super Retina XDR technology with ProMotion, OLED, P-OLED and/or Dynamic AMOLED 2X screens in various sizes (from 6.1 or 6.55 inches up to 6.7 inches), a typical 5,000,000:1 contrast ratio, color gamuts from sRGB up to and including DCI-P3, and a brightness of around 500-1,000 nits.

The device's processor is essential for many operations and features, but also for recording still and moving images. We must take into account the number of CPU cores and their speed: from a minimum of 6 cores at 2.2 GHz up to 8 cores at 2.9 GHz.

Finally, when it comes to audio, it is interesting to have the following features in our smartphone: a built-in microphone, stereo speakers and an in/out minijack. Although built-in audio is not the best of the technologies that mobile phones carry, we should add external microphones (with minijack and/or USB-C connection) in order to achieve better sound recording - solutions like the Rode SmartLav+, Zoom Am7, Zoom iQ7, Rode VideoMic Me, Comica CVM-VS08 or Saramonic SmartMixer. Some models currently support Dolby audio.

Last but not least, there is the device's battery life, data storage (either in internal memory or on a dedicated external card) and endurance to splashing, water and dust (IP68 rating according to the IEC 60529 standard). Also worth considering is the use of ancillary equipment such as tripods, gimbals and stabilizers, among others, and even attached external optics, like PhotoJoJo, Selvim or Apexel kit lenses. We should not forget the use of apps that improve our recording process and image results, such as Cinema 4K, Filmic Pro, iMovie, UltraKam Pro, Camera Plus Pro, Cinema FV5, iSuper8 or the Adobe Clip pack, among the most popular.

All the technical parameters mentioned above for a cinema smartphone open up a world full of possibilities to make excellent video or film recordings. Some well-known directors have already done so, for example Steven Soderbergh, first with his film Unsane (2018), shot using three Apple iPhone 7 Plus units, and later with the Netflix production High Flying Bird (2019), shot with an Apple iPhone 8, an anamorphic lens and Filmic Pro as a recording app. Another example: Park Chan-wook won the Best Short Film award at the 2011 Berlinale for Night Fishing, shot on an Apple iPhone 4.

Apple iPhone XS, iPhone 13 Pro, iPhone 11 Pro and iPhone 13 Pro Max; Samsung S20 Ultra 5G, Galaxy Z Fold 2, Galaxy S21 Ultra, Galaxy Note 20 Ultra and Galaxy S10; Oppo Find X3 Pro and Reno 6 Pro; Google Pixel 4XL; Huawei P20 Pro and P40 Pro; OnePlus 7T Pro and 6; Xiaomi Mi Note 10 Pro and Mi 11T; Vivo X60 Pro; Sony Xperia 1 III; LG V30 - these are many of the models we could find on the smartphone market (2021) to make movies. A fact that reveals the success of some brands compared to others is the worldwide smartphone sales ranking released in 2021: number one is Samsung, followed by Apple, Xiaomi, Oppo, Vivo and Huawei. All the above-mentioned are renowned and recognized manufacturers that deserve attention every season for new-product offerings that will continue to showcase improvements in the field of image capture. Not to forget the arrival of other manufacturers such as Realme, Umi Digi (UMI), Doogee, ZTE and Wiko, among others.

Apple's TV commercial for the iPhone 13 Pro model in September 2021 made it very clear: Hollywood in your pocket. A spectacular 30-second spot showing the fascinating possibilities of being able to shoot your story with a mobile phone. Many of us have become audiovisual professionals whenever given the opportunity to work in film or television. But we have always had the need to tell a story that sometimes did not come true due to the difficulty of accessing technical means. Today, this technology is remarkably close to us. For this reason, training in audiovisual language and narrative, a little enthusiasm, the desire to tell stories and a cinema smartphone are the necessary ingredients to achieve an audiovisual creation. Never before has it been so close at hand, literally speaking, thanks to cinema smartphones.



Taking off with Cloud in 2022: Thriving in an evolving mediascape

By Tim Banks, VP Sales EMEA, Grass Valley

The live content market has adapted to tackle new challenges head-on over the past couple of years. And while the pandemic continues to cause disruption, innovation is ever-present, enabling media companies to break new boundaries and thrive in a complex landscape. This year we’ll see evolving consumer habits continuing to reshape the way media companies create content across Europe and globally. In 2022, two key trends will remain front of mind for our market: skyrocketing demand for a wider array of content across more platforms than ever before, and the use of innovative cloud technologies to help media companies adapt to this shifting consumer landscape. The growth in cloud adoption will be driven increasingly


by the age-old question of ‘how to do more with less?’ as media companies seek greater efficiency, flexibility, scalability and resiliency – all with the aim of enhancing their yield per asset.

Diversifying live content through multi-platform distribution

Following another tumultuous year when the pressure to stay at home fuelled demand for fresh video content, European audiences remain hungry for always-on live sports, news and entertainment — and they are turning to multiple sources to get it. Streaming services have been the big winner so far and are positioned to keep gaining subscribers even after a greater sense of ‘normality’ has returned.


However, streaming’s rise is not all about cord-cutting, with many consumers adding additional subscriptions to existing pay-TV packages. Ampere Analysis research showed that 80% of pay-TV households in Spain, Italy, France, the UK and Germany now also subscribe to at least one streaming service, with the total paying for multiple subscriptions increasing in each market in 2021 compared to 2020.

Consumers’ seemingly insatiable appetite for more content across more platforms will continue to impact live production in 2022. The shortage of new live premium content during the pandemic due to cancelled and delayed events has highlighted the potential value of programming previously considered niche. The rising demand for a broader live content choice has caused an increase across the board in ‘alternative’ programming, such as lower league, regional and specialised sports, as well as more live shoulder programming such as pre-game shows. With slimmer budgets, this niche and complementary content must be produced as cost-effectively as possible and delivered to the broadest possible audience through global and largely IP-based multiplatform distribution. Despite the pandemic, European media companies have bolstered their live content portfolios, with Canal+ Sport and Matchroom just two of

several launches in Europe that have expanded sports content through a multiplatform approach.

Accelerating toward remote production and cloud-based workflows

Consumer demand for more varied content has neatly intersected with media companies’ ability to shift towards more remote and cloud-based production – an approach that has been tried and tested throughout the pandemic. The wider uptake of remote and cloud-based approaches has been driven out of necessity as restrictions around social distancing forced content producers to look at ways to deliver live content while reducing onsite staff and travel. Social distancing measures also impacted media centres, where skeleton crews for news networks were bolstered by talent and technical staff contributing remotely. Looking back to 2020, we saw how the creation and deployment of new ways of



working enabled the safe and successful resumption of the 2019/2020 Premier League season with Project Restart. Grass Valley’s close collaboration and workflow innovation with the likes of Sky, BT Sport, NEP, EMG, Timeline and Gravity Media was central to bringing live sports back on air amidst ongoing disruption. Even last year, coverage of flagship events such as the 2020 Tokyo Olympics and UEFA Euro 2020 was adapted to a largely remote footing. Using high-tech studios and remote production capabilities, many rightsholders were able to bring the immediacy of up-close live sports action to these events despite restrictions and precautions. For example, while the BBC’s Olympics hosts were presented against a Tokyo backdrop, they were in fact appearing on set in the pioneering dock10 television facility in the UK. Increasingly, a seamless integration of onsite and remote elements will be seen less as an exception and more as another viable option with its own pros and cons.


The uptake of cloud has accelerated across many application areas, with audio, video switching, clipping, editing and highlight reels now easily managed in the cloud. Esports organisations such as Electronic Arts (EA) have harnessed the potential of cloud technology to enable truly global remote productions — last year, distributed production teams using a cloud-based workflow supported high-profile esports events for Apex Legends. Utilising Grass Valley’s cloud-based Agile Media Processing Platform (AMPP), EA’s production staff seamlessly delivered live competitive esports competitions from locations in Europe and North America to a global fanbase. The efficiency and innate flexibility of this type of innovative, cloud-based approach mean that in 2022, we’ll see even more esports companies and premium live content providers realising the transformational potential of cloud for live production.

Driving momentum in 2022

I think we can all agree that necessity truly has been the mother of invention over the past two years. Remote, cloud and distributed production methods and technologies have been proven to help media companies create high-quality live content. In 2022 and beyond, there seems to be little reason the media and entertainment market won’t continue to embrace the innovative techniques developed to work within a pandemic environment, even as conditions get better – the benefits are too hard to ignore.

Clearly, the demand for live content across sports, news and entertainment will continue to grow – and cloud-based production and delivery has a big role to play in supporting that growth. The status quo of sports viewing is evolving as streaming pioneers such as DAZN and Amazon Prime compete with leading media companies for top sports rights, with all players exploring new ways to engage with audiences and increase global viewership.

Over the next few years, the European live content market will likely be characterised by a lot more competition, partnerships, challenges and surprises – but the lessons learnt over the last two years will help the media landscape adapt and thrive. Companies are looking to get ahead of the curve and design future-proof technology roadmaps that align with the evolving mediascape — the scalability, flexibility, resiliency and cost-effectiveness of cloud-based technologies will represent a hugely appealing proposition, both today and in the long run. 2020 and 2021 certainly accelerated both the awareness and acceptance of cloud for live production. Building upon this momentum, this year we’ll see a rapid rise in the adoption of cloud-based workflows — and greater advocacy for cloud as media leaders begin to showcase their cloud-driven innovations for premium live content production on the biggest stages.



Elevating live production with 5G

By Per Lindgren, Co-founder and CTO at Net Insight

Consumers increasingly want viewing experiences that are as good as – or better than – in-person: instant replays, more camera angles, access to highlights, and immaculate quality on any screen. Today, live production has grown to be a vast and complex operation, with broadcasters and production companies balancing increased consumer demand for more content and access to it at any time and from any location with shrinking budgets. Thanks to 5G, the broadcasting industry gains more flexibility and can streamline workflows. Remote and distributed production of live sports and news will benefit from 5G’s high bandwidth and low latency capabilities. 5G-enabled remote production workflows will become increasingly


important in the staging of events of all tiers, from major to smaller events. Fast connectivity is an intriguing alternative for media companies wishing to increase production output while lowering expenses without the logistical problems and costs associated with deploying fixed equipment. As the 5G rollout accelerates, broadcasters will require a proven end-to-end solution to take advantage of 5G’s potential to alleviate bottlenecks in their live production workflows. There’s no better time to get ready for 5G than now.

Remote production boosts productivity

The live content supply chain has changed, resulting in significant cost reductions and improved overall efficiency. The broadcast industry was able to bounce back despite the challenges brought about by the pandemic

by moving away from onsite production practices. 5G will accelerate the trend toward remote and distributed workflows, allowing more sites and events to be integrated into a single high-quality, low-latency production environment. By having only the necessary technical manpower and hardware equipment on-site, remote production speeds up the shift from CAPEX to OPEX. Scalable workflows are built and paid for just when they’re needed, eliminating costly hardware investments.
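The CAPEX-to-OPEX shift described above can be made concrete with a toy break-even calculation. This is only an illustrative sketch: the function and every figure below are invented for the example, not real production or cloud pricing.

```python
# Toy CAPEX-vs-OPEX break-even sketch for remote/cloud production.
# All figures are illustrative assumptions, not real vendor pricing.

def breakeven_events(capex: float, opex_per_event_owned: float,
                     cloud_cost_per_event: float) -> float:
    """Events needed before owning fixed kit becomes cheaper than renting."""
    saving_per_event = cloud_cost_per_event - opex_per_event_owned
    if saving_per_event <= 0:
        return float("inf")  # cloud is cheaper per event too; owning never pays off
    return capex / saving_per_event

# Hypothetical: $500k of fixed kit with $4k per-event running costs,
# versus $12k per event of on-demand cloud capacity.
events = breakeven_events(capex=500_000, opex_per_event_owned=4_000,
                          cloud_cost_per_event=12_000)
print(f"Owning breaks even after {events:.1f} events")  # -> 62.5 events
```

Below that event count, paying per event wins — which is exactly the “built and paid for just when they’re needed” argument.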

5G raises the bar for live production

5G allows for the low-cost capture and distribution of mobile media from anywhere in the world, with high bandwidth, low latency, and high quality of service. This expands the possibilities for events like marathons, golf, cycling, and other long-distance activities that are challenging to broadcast. Once



fully implemented, 5G has the potential to become the major networking link between the event site and the centralized production facility. Until that happens, it can be used as a backup connection to the primary connection, providing a variety of high-quality feeds and removing the inefficient practice of varied routing via fixed connections. Slicing capabilities in 5G networks increase connection reliability by allowing a select group of users or services to reserve a section of the network. This component is critical when live production requires high connectivity and ultra-low latency.
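To see why a reserved slice with guaranteed bandwidth matters, a rough estimate of a multi-camera contribution load helps. The numbers here are assumptions for illustration: 10-bit 4:2:2 video (about 20 bits per pixel) and a 10:1 mezzanine-style compression ratio; real codecs and ratios vary widely.

```python
# Back-of-envelope bitrate for a compressed camera contribution feed.
# bits_per_pixel=20 approximates 10-bit 4:2:2 sampling; the 10:1
# compression ratio is an illustrative mezzanine-codec assumption.

def feed_bitrate_mbps(width: int, height: int, fps: float,
                      bits_per_pixel: int = 20,
                      compression: float = 10.0) -> float:
    raw_bps = width * height * bits_per_pixel * fps  # uncompressed rate
    return raw_bps / compression / 1e6               # compressed, in Mbps

per_cam = feed_bitrate_mbps(1920, 1080, 50)  # one 1080p50 camera
total = 5 * per_cam                          # a five-camera event
print(f"{per_cam:.0f} Mbps per camera, {total:.0f} Mbps total")  # -> 207 / 1037
```

A five-camera remote production already approaches a gigabit of sustained uplink, which is why reserving a section of the network, rather than relying on best-effort connectivity, is so attractive for live production.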

Creating unique and engaging viewing experiences

While 5G is intended to provide users with more cost-effective and agile contribution links, it also has the potential to change and improve the viewing experience. It allows more camera feeds and new and more immersive camera


angles to be included in the primary production, whether it’s player or referee cams, 360-degree or AR/VR cams from moving vehicles, or simply connecting broadcast cameras and sensors at event venues. When used in conjunction with remote

workflows, it will also allow content owners to distribute more immersive content from events and even allow viewers to choose their own camera angles or players to watch. This means that 5G will have a revolutionary impact on viewing experiences, with gaming and esports being at the heart of it.

5G has the potential to change television production and event staging quite dramatically. By connecting cameras to production facilities wirelessly, camera operators will be more mobile and able to follow the action. 5G can be used by production teams to set up “pop-up” production capabilities that send multiple camera signals back to a central production center. Producers may simply mix video from traditional cameras with video from 5G-enabled smartphones, creating multi-camera experiences and unique perspectives not previously available.

Shaking up the media industry

As 5G grows and gains traction in the market, it has the potential to drastically transform the media industry. With connectivity becoming better, faster, cheaper, and more readily available, even smaller or niche events can gain more global audiences, opening up new revenue streams for broadcasters. The growing demand for content, alongside the changing consumer viewing patterns, means that having flexible and reliable solutions for live production is critical for long-term success. 5G is a key tool that will open the door to more exciting live production workflows and will deliver viewing experiences that were not possible before. 



“How to Mojo” by Ivo Burum (Part 1)

What is mobile journalism and is my smartphone mojo-ready?

Ivo Burum is a journalist and award-winning television executive producer – and a mobile journalism pioneer. His company Burum Media provides mojo training in remote communities and to many of the world’s largest media groups. Dr Burum has written five books on mobile journalism. His latest, The Mojo Handbook: Theory to Praxis, published with Focal/Routledge, has been chosen as one of 12 must-read books for investigative journalists in 2021. Ivo lectures in mobile and digital storytelling and in television production and is discipline convener of Media Industries at Latrobe University, Australia. Twitter: @citizenmojo





Author doing a PTC.

Mojo Defined

More than three billion smartphone users create and upload 500 hours of video every minute. Journalist Charles Feldman calls this 24/7 convergent clickstream


an “information tsunami.” I see it as an opportunity, and so does Sennheiser, who has developed the series of mobile-friendly microphones featured in this series of articles on how to mojo.

Simply put, Mojo is a combination of digital storytelling skills and tools used to capture and transform raw user-generated content (UGC) into complete user-generated stories (UGS). A


mobile journalist is trained to shoot UGC on their smartphone and to edit that into powerful, life-transforming UGS, and if required, to publish the finished story from location. Some mojo definitions also include using a DSLR (digital single-lens reflex camera) or another hybrid approach to record stories. Being a mojo is more about knowing how to tell stories and balancing the right combination of literacies, techniques and tools to produce a story. Mojo’s holistic story-centric approach is key to determining the level of technology required and whether a smartphone is the right tool for the job. An assignment recording wild animals might mean mojos need to use a DSLR

Ivo’s tip
Before you throw that old phone away, learn to use all its features.

with a long lens, or a video camera, or a drone for an overhead perspective. On the other hand, a mobile journalist working in Syria may, for safety and cultural reasons, require a minimalist approach, which might not even include a microphone for their smartphone. Mokhtar Alibrahim, a Syrian investigative journalist, says, “Mojo is very important, particularly in areas where photography is considered ‘taboo’…I prepare many reports for BBC and use my iPhone because it’s smaller and not so visible.”

Author using MKE 400 Mobile Kit.



GIJN mojo workshop at conference.

“A journalist in today’s digital world will have no edge if he or she can’t use mojo,” says Rana Sabbagh, former director of Arab Reporters for Investigative Journalism. Sabbagh sees these journalists, who are often first on the scene, as bearing “witness to a first draft of history.” She believes their mobile investigations can provide “the prima facie evidence at international criminal tribunals,” potentially providing a level of editorial control essential to achieving transparency and accountability. Mojo is like using the best of our story-telling skills married with digital tools, mobile and other relevant technologies, to create a


more responsive news and storytelling eco-sphere. Creating powerful visual UGS, especially from the marginalized world, creates a local voice and potentially a more dynamic and diverse content sphere. The key to new sustained localized experiences that mojo can provide is understanding digital economics, relevant training, local focus and necessary scaling. How we consume news on our phones — on the home, work, play, home continuum — is central to that experience. Those of us who work in the varied mojo spaces (citizen, professional or education) believe that local will play a key role in the new news eco-sphere.

Ivo’s tip
Buy a good phone on a platform that has all the apps and peripherals you need to create mojo content.

We record, edit, publish and read our news on mobiles. Designing this new mobile eco-sphere is not about replicating newspapers digitally, it’s also not about throwing the convergent baby out with the digital bath water. We need to remain open to digital possibilities including new skills, tools, styles and workflows.


Ivo’s tip
If you are thinking of upgrading your phone, ask whether you’ll have to change your cradles/rigs, adaptors, and microphones. One reason I use the Beastgrip Pro cradle/rig system is that it works across all phone types.

Irrespective of the type of mojo you do, there’s no denying that mobile tools will continue to play a central role in all forms of journalism. Something Sennheiser and other hardware manufacturers are betting on. The list of equipment, mobile devices, apps, styles, literacies and skills that are described below and in the next articles will help journalists get their mojo working from almost anywhere on the planet.

New Smartphone or Old?

One of the first questions I’m asked during my training workshops is, “Can I do mojo on my phone?” “Not if you don’t have memory!” The next often-asked question is, “Okay, which phone do I buy?” Android is like a coalition winning the race on numbers, while iOS is a rich boutique shop going it alone. Are the cheaper Android phones any good? Absolutely. But do you need a new phone? The simple answer is, if you can

use your smartphone to produce your mojo story — shoot in low light, run good camera and multi-layered video apps — and it has lots of memory, save your money. If not, you might need a new phone for mojo. Here’s my checklist that debunks some myths around smartphones.

Platform: Android vs. iOS

When I began training people to mojo there was a big difference in what phones could do. iOS was a clear winner because of

Ivo Burum using Beastgrip Pro.



its workflow, lack of viruses, lots of 3rd party peripherals such as microphones and cradles, and there were more iOS mojo apps (see Mojo Tools). The Android operating system (AOS) benefits from Google’s algorithmic power, but handset manufacturers are often at loggerheads with app developers. Android’s open platform enables manufacturers to create a different user interface (skin) across Android brands. Android app developers say this can make it difficult for them to scale apps across multiple manufacturers. Another issue is creating updates for dozens of phones. While the top range of phones get their updates fast, it can take time for them to arrive for cheaper devices, which can lead to security risks.

Apps

The best platform for mojo apps is iOS. I can’t find as many free Android apps as good as the free iOS mojo apps (e.g. iMovie, VN, Ferrite, Vont; see “Mojo Tools” for apps). The other issue that may arise in countries like China is that, because there’s no Google Play store, you probably need to use OEM (manufacturer) stores.

Phone basics

Effective mojo begins by deciding on story, which informs process, before technology. Notwithstanding this, once you decide if you’ll edit on your phone, tablet, or your laptop, consider the following:

- An easy-to-use phone with a fast chipset that integrates across platforms, workflows, high-end broadcast-quality apps, and peripherals
- Multi-core chipsets to spread load, increase app speed and reduce power consumption


- Whether you need SD card features or whether you intend to export content to laptops via transfer devices like iXpand or Airstash
- Whether the battery is long lasting
- Microphones are not cheap so make sure yours connect to your new phone

My favourite phone is still my old iPhone 6s Plus, which scored 81% in a recent review in Australia, where the top-rated phone scored only slightly more at 84%. It shoots up to 4k (I mostly shoot in 1920 x 1080 resolution) and it has a 3.5mm microphone input. If you use a phone with a Lightning microphone input, you’ll need a mini jack to Lightning cable, or a mini jack to Lightning adapter from Apple. If you use the adapter workflow, you might also need a TRS to TRRS adapter.

Ivo’s tip
When might 24 megapixels be better? When the sensor doubles in size. If you have a terrible camera with a terrible lens with more pixels you will end up with more terrible-quality pixels.

The truth about pixels

Pixels are square Red, Blue and Green representations of trapped light on an electronic sensor (film). A pixel is like a bucket for trapping water (light). The larger the bucket (pixel), the more water (light) it collects. Former Apple camera engineer Nikhil Bhogal explains that when it comes to smartphone cameras, pixel quality matters more than the actual number of pixels. What he means is that a large sensor supports larger, more light-sensitive pixels. On a one-inch 12-megapixel sensor, pixels are twice as big as those on a one-inch 24-megapixel sensor and should capture more light in low light conditions.

Rendering of continuous lossless optical zoom.

One lens or more?

In 2016, LG revealed dual cameras and in 2017, the iPhone 7 Plus had two independent cameras (12 megs each), the second featuring a 2x optical zoom. A digital zoom degrades image quality by zooming and cropping the edges of the photo and increasing the pixel size to simulate an



optical zoom. Here’s a link to a video on a comparison between three wide-angle smartphone lenses. Chinese manufacturers OPPO and Huawei are planning 10x periscopic zoom lenses on their cameras. If the optical aspect works, these will enable better sports, wildlife and other coverage where a long lens is required. Now, that might be worth the wait.
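Bhogal’s “bucket” point about pixel size can be checked with simple arithmetic. This is only a sketch: the sensor dimensions are approximate (a 1-inch-type sensor is roughly 13.2 mm x 8.8 mm) and the helper function is made up for the example.

```python
# Same sensor area, different megapixel counts: halving the pixel count
# doubles the area (the light-gathering "bucket" size) of each pixel.
# Sensor dimensions are approximate and for illustration only.

def pixel_area_um2(sensor_w_mm: float, sensor_h_mm: float,
                   megapixels: float) -> float:
    sensor_area_um2 = (sensor_w_mm * 1e3) * (sensor_h_mm * 1e3)
    return sensor_area_um2 / (megapixels * 1e6)

a12 = pixel_area_um2(13.2, 8.8, 12)  # ~9.7 um^2 per pixel at 12 MP
a24 = pixel_area_um2(13.2, 8.8, 24)  # ~4.8 um^2 per pixel at 24 MP
print(round(a12 / a24, 6))  # -> 2.0: each 12 MP pixel has twice the area
```

The same arithmetic also explains the tip above: doubling the sensor area at a fixed megapixel count doubles each pixel’s area again, which is when more megapixels stop hurting.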

The iPhone 13 Pro now ships with four lenses — even the front selfie lens is 12 megs. The advantage of this phone, apart from being able to switch between lenses, is that the wide lens enables the mojo using the camera, or shotgun mic, to be closer while still retaining the MCU (medium close-up) frame, resulting in cleaner audio (see Recording Mojo Stories).

Lens Speed

Another important factor is the speed of your smartphone lens. You should look for the widest (fastest) possible aperture. For example, the new iPhone 13 Pro has a primary lens with an aperture of f1.5 and on the ultra-wide lens it’s f1.8. The aperture, or f stop, denotes how wide the lens will open in low light conditions. The wider the aperture (lower f stop) the more expensive the lens. The picture below, taken using the iPhone Xs, shows the camera’s ability to alter bokeh (depth of field focus) after the shot is taken. One picture shot at f16 has a large depth of field; the other, shot at f1.4, has a shallow depth of field.

Bottles Depth of Field Bokeh.

Transfer Devices

These are essentially Flash Drives designed to connect to your phone. There are generally two types: one creates a local WiFi link with your phone; the other is a plug-in type (Lightning, mini USB or type C). I find the connector type faster, yet the WiFi version can connect to numerous devices simultaneously.

Ivo’s tip
You don’t need to spend lots of money on a large-capacity Flash Drive with lots of storage — they are for transferring content and then they can be wiped.

Airstash Transfer drive.

Summary

If you are buying a smartphone to use for mojo work, consider the following:

• Fast chipset — critical for running multi-layered apps, which are a key feature of mojo work. In the past, cheap Android phones didn’t run some high-end edit apps, but that’s not always true today
• On-board memory — more storage and RAM is best because it keeps your phone running fast during edit
• SD memory — if you need removable SD card features, Android is your best option; many Android phones also offer SD card memory
• 5G — is your phone 5G-ready?
• Microphones — connecting microphones is critical; most companies like Sennheiser sell smartphone-specific mics with 3.5mm and USB-C connectors
• Camera — a good camera with a fast lens is critical. You want both front and back cameras to be as close in megapixels to each other as possible, to enable you to switch seamlessly between them
• Large sensor — with more, larger pixels
• Apps — apart from your skill as a mobile journalist, accessibility of apps is critical

In short, your smartphone needs to be able to connect to professional microphones, record in at least 1920 x 1080 HD (we are using more 4k, but not always), have enough grunt to run high-end camera apps (FilmicPro, Camera+ 2), run advanced yet easy-to-use audio apps (Ferrite) and multi-layer edit apps (iMovie, Kinemaster, Luma Fusion, Power Director and others).

Does your current phone do all that? If the answer is yes, hold on to it. BTW, if you do change up, don’t throw your old phone away, take a look at this DIY radio microphone video before you do. Go mojo… 



