TM Broadcast International #95, July 2021


EDITORIAL

We recently commented that the market, given the postponement of IBC, was somewhat calmer than usual. Despite this, the amount of news reaching our newsroom daily is relentless, an obvious sign that the market continues to move strongly. In the coming days and weeks, several top-level sporting events will take place, such as the Olympic Games and the Ryder Cup. Both tend to generate a flood of technical information as far as audiovisual aspects are concerned. Naturally, we will cover these events and give you full information on these pages about the technologies deployed and their protagonists.

In this issue you can find an extensive report on CBC/Radio-Canada, an excellent broadcaster facing two big challenges: the enormous extent of its territory and the added difficulty of two official languages, English and French. In an exclusive interview, Maxime Caron (Senior Director, Architecture & Strategic Development) and François Legrand (Senior Director, Core Systems Engineering) were kind enough to tell us in detail how they are addressing these and other issues.

David Katznelson and Kate McCullough, two Directors of Photography acclaimed for several television series, talked with us about how they approach their work from the point of view of optics, cameras and audiovisual equipment in general. We are also publishing the latest installment of the VoIP trilogy that has aroused so much interest, this time focusing on the practical application of many theoretical questions raised in previous issues. Finally, the issue closes with a detailed benchmark: an in-depth analysis of the Canon EOS C500 Mark II, a camera that has surprised us very positively.

Some readers will be lucky enough to read these lines while relaxing on their vacation. Others will be about to enjoy a few days off. We wish all of you a happy holiday and invite you to recharge your batteries for the season that begins in September, which will be full of opportunities for all those willing to make the most of them.

Editor in chief: Javier de Martín
Creative Direction: Mercedes González
Key account manager: Susana Sampedro
Administration: Laura de Diego


TM Broadcast International is a magazine published by Daró Media Group SL Centro Empresarial Tartessos Calle Pollensa 2, oficina 14 28290 Las Rozas (Madrid), Spain Phone +34 91 640 46 43

Editorial staff

Published in Spain ISSN: 2659-5966




CBC/Radio-Canada
CBC and Radio-Canada’s first television stations began broadcasting in 1952. By 1955, CBC/Radio-Canada’s television services were available to 66% of the Canadian population. Since then, CBC/Radio-Canada has become the largest public broadcaster in North America, facing the technological challenges of an immense country and transmissions in both French and English. Nowadays, this public broadcaster is moving all its infrastructure to IP. We had the opportunity to talk about these current challenges and the future ones with Maxime Caron, Senior Director, Architecture & Strategic Development, and François Legrand, Senior Director, Core Systems Engineering.

VoIP: from theory to practice

To close this Video over IP trilogy we are going to start by reviewing a little bit of history in order to reflect on the past and what has brought us here. Without this perspective, it is difficult to understand the current environment and technologies and their raison d'être. In short, IP is somewhat disruptive but it will coexist with current SDI environments and it should be this way for a long time.


David Katznelson

David Katznelson has become one of the most renowned directors of photography in British audiovisual fiction production. Since 2000, when he graduated from the National Film and Television School, he has shot a significant number of audiovisual productions for which he has won several awards, including an Emmy, a BAFTA and an RTS Award. Technology writes the film and is as important as the script, and David Katznelson knows that very well.


The Next Hurdle for Sports Broadcasters, by Vizrt Group



Kate McCullough, filming the

Kate McCullough was born in Ireland and from a very young age she was attracted to film cameras. Since then she has never stopped trying to capture the power of light.


Test Zone: Canon EOS C500 Mark II


Teradek releases a new wireless camera controller, the Bolt 4K Monitor

The Bolt 4K Monitor Module TX is a new SmallHD-integrated system with which camera operators can take remote control of an ARRI camera from up to 750 feet away. The Bolt integrates into any SmallHD Smart 7 monitor, unifying everything in one assembly.

Powered by Teradek’s BB3 chipset, the Bolt 4K Monitor Module TX is fully compatible with the entire Bolt 4K series of products. Existing users of SmallHD’s camera control software can take advantage of the new wireless functionality with an upcoming firmware update, as long as they are using their monitors with a Bolt 4K Monitor Module TX/RX set.

“The launch of the Monitor Module TX is the culmination of a multi-year R&D effort to completely overhaul our Bolt wireless ecosystem based around our new 4K Chipset”, said Greg Smokler, GM of Cine Products at Creative Solutions. “The Bolt 4K Monitor Modules are a breakthrough in simplifying on-set workflows. For the first time, we’re announcing a revolutionary new feature: wireless camera control”.


Broadpeak achieves world-first cache mobility through 5G networks

Broadpeak has recently announced that it has completed the industry-first trial of its Cache Orchestrator. Automatic cache instance deployment and termination, edge cache relocation, and the first inter-cache user mobility tests were successfully achieved. This tool offers a CDN solution for cache allocation on 5G networks.

“The key to delivering the best video experience over mobile networks is to bring content as close as possible to the end user. We are very excited to have accomplished the first phase of our Cache Orchestrator trial, aimed at further optimizing the quality of experience on every screen”, said Jacques Le Mancq, CEO at Broadpeak.

Broadpeak’s Cache Orchestrator offers a complete orchestration framework that is video-service-aware, Kubernetes-based, 100% virtualized, and configurable through a standardized API. Broadpeak’s solution automatically and seamlessly reallocates video sessions when instances shift around, and their configurations are constantly fine-tuned through a centralized process based on real-time usage analytics collected at every tier of the system.

“We are thrilled to complete the first step in this groundbreaking project and are already working on the next phase of Cache Orchestrator, which will focus on testing the solution based on the latest 5G edge-related standardization”, said Guillaume Bichot, principal engineer and head of exploration at Broadpeak.


ETV Bharat adds Grass Valley’s Stratus and EDIUS to its facilities, on premises and remote

ETV Bharat has adopted a wide range of Grass Valley solutions to underpin its upgraded production and editing operations. The new setup includes GV Stratus with the nonlinear editing software EDIUS. GV Stratus is a video production and content management tool that gives the broadcaster future-ready capability, allowing it to scale its operations both on premises and remotely off-site.

ETV Bharat delivers information services across mobile apps and web portals to 24 Indian states with programming in 13 languages. GV Stratus will give the central production facility in Hyderabad a combination of tools and customization for fast ingest-to-on-air, efficient asset management, integrated social media publishing and metadata management. GV Stratus manages media for on-premises and remote broadcast applications. All ingested content is formatted into lower-resolution proxies on the fly by Stratus transcoders. For on-premises use, ETV purchased 400 EDIUS Pro licenses for editing, while 200 remote users utilize EDIUS XS for secure editing via VPN and user access control while rendering their edits from the field.

“We pride ourselves on innovation, and the ETV Bharat service is a first-of-its-kind offering in India in terms of diversity and depth. To remain productive during the COVID-19 pandemic, we wanted to implement a reliable and trustworthy production and asset management solution that could enable our operators to work on-site and at home as needed. With its proven reputation, Grass Valley was the perfect choice for us, and its future-proof product features enabled the digital platform to manage and operate 16 language channels — across multiple platforms — with greater scalability, flexibility and efficiency”, said Bharath K, CTO, ETV Bharat.


Bahrain Radio modernizes with Lawo VSM and radio technology

Built in 1980, Bahrain Radio, with seven radio stations, was long due for a revamp. The recent USD 6.5m turnkey project, which is the first phase, has launched this facility into the digital realm. The entire facility has been refurbished, from nine radio studios and control rooms to the MCR (Master Control Room) and CAR (Central Apparatus Room), with a parallel overhaul of furniture, equipment, automation systems, radio library and acoustics. By deploying digital solutions in all areas of the facility, the radio section of the Ministry of Information Affairs (MIA) has essentially migrated out of an analogue environment. Dubai systems integrator GloCom was tasked with the challenge of undertaking this overhaul right in the midst of the 2020 lockdown, and it delivered, ensuring that the existing service was not disrupted at any point in time, and also digitizing the department’s radio library archive. While the main centre of the project was in Isa Town, related civil work also took place at various other sites.

“Bahrain Radio is one of many projects the MIA has completed in recent months, with more due for completion this year and next. Much of this renovation has been possible thanks to the support of Prince Salman bin Hamad bin Isa Al Khalifa, Crown Prince and Prime Minister of Bahrain,” says Eng Abdulla Ahmed Abalooshi, Assistant Undersecretary for Technical Affairs at the MIA.

“Our radio station and studios were built in 1980. They were really old, and we used to have the occasional breakdown with no support available for them. All our FM and AM stations are processed in these studios and go through the MCR; our radio channels are also available on satellite and OTT. With this project, we have transferred our entire radio technology to a digital platform and have added a few elements that will make life easier for the production people in our radio department,” he explains.

At the core of the set-up is a Lawo-based MADI architecture that covers all seven on-air studios and allows the control room to serve as a self-operating studio. These on-air studios can also be connected to two of the production studios for music or drama. A third production studio has been redesigned for mix-mastering. All seven on-air studios are designed to enable any FM station to log in and go live from any studio. The MCR, the heart of the station, includes an automation system for 15 FM stations with full redundancy, enabling the department to scale up in the future and add another six FM stations. It also includes four 80-inch LED walls, a brand-new Lawo Vistool for audio monitoring and a Lawo VSM monitoring solution. A big part of the project was the replacement of a legacy Dalet system with the latest radio automation system from RCS. The studios are connected with the MCR through fibre (MADI), with physical AES/analogue cables for redundancy. The CAR was also designed with a centralized audio router (MADI) from Lawo.


Logic Media Solutions orchestrates an IP migration for German broadcasters ZDF and ARD

German broadcasters ZDF and ARD have upgraded the essential core components of their mobile production units to full IP functionality. Logic Media Solutions has been in charge of this IP migration process. These mobile production units are used jointly by ZDF and ARD for events which, due to their size and complexity, cannot be handled with conventional OB vans or on-site studios. The system offers flexibility and scalability, making unexpected expansions and the temporary integration of additional rental equipment possible. The system is currently in use at UEFA EURO 2020, providing the production infrastructure at the National Broadcast Centre (NBC) in Mainz after receiving signals from the International Broadcast Centre (IBC) located near Amsterdam. For the tournament, the system has been scaled up with additional EVS servers.

During the configuration for the European Championship, Logic implemented components from its partners as well as third-party devices. The MPE racks operate based on SMPTE ST 2110 with SMPTE ST 2022-7 redundancy, including NMOS (IS-04/IS-05). They are PTP-compliant and can handle 1,500 video connections and up to 5,000 audio connections in their entirety. In detail, Logic implemented Grass Valley IQUCP systems as gateways and Grass Valley MV821 multiviewers in the racks. Other third-party devices were also used as gateways, multiviewers and processing devices, such as 1080i-to-1080p converters. Nevion Virtuoso devices serve as gateways and for advanced monitoring of incoming and outgoing lines. The audio matrix is also realised with two additional Nevion Virtuoso units, which allow 2048×2048 MADI connections. Interestingly, Nevion’s VideoIPath is used as an SDN orchestration layer, providing a control layer for the broadcast controller between the end devices and the network.
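To make the NMOS side of such a setup concrete, here is a minimal sketch of how a broadcast controller might list the ST 2110 senders registered in an IS-04 Query API. This is not Logic's actual implementation: the registry address and sample resources below are invented for illustration, though the URL path and the `urn:x-nmos:transport:rtp` transport identifier follow the AMWA IS-04 specification.

```python
# Hypothetical IS-04 registry host; the /x-nmos/query path is per the spec.
QUERY_BASE = "http://registry.example.net/x-nmos/query/v1.3"

def senders_url(base: str = QUERY_BASE) -> str:
    """Return the IS-04 Query API endpoint that lists sender resources."""
    return f"{base}/senders"

def rtp_senders(senders: list[dict]) -> list[str]:
    """Filter sender resources down to RTP (ST 2110) senders, returning labels."""
    return [s["label"] for s in senders
            if s.get("transport", "").startswith("urn:x-nmos:transport:rtp")]

# Sample resources shaped like IS-04 sender objects (values are invented):
sample = [
    {"label": "CAM-1 video", "transport": "urn:x-nmos:transport:rtp.mcast"},
    {"label": "CAM-1 audio", "transport": "urn:x-nmos:transport:rtp.mcast"},
    {"label": "Legacy feed", "transport": "urn:x-nmos:transport:websocket"},
]

print(senders_url())        # the endpoint a real controller would GET
print(rtp_senders(sample))  # only the ST 2110 RTP flows remain
```

In a live facility the controller would GET that endpoint over HTTP and then use IS-05 to patch connections; the filtering logic, however, is exactly this simple.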

TOKYO 2020

Technology changes Olympic storytelling

The world turns its eyes to the Olympic Games every time a new edition takes place, and not only to enjoy elite sport: the Games also attract the attention of all of us who have broadcasting as our way of life. Each edition is a step forward in terms of the technology used for broadcasting. In this magazine we are going to show you how OBS has once again changed the production standards.

The technological advances that are part of the postponed Tokyo 2020 Olympic Games are based on five fundamental pillars: the first Olympic experience in HDR, the transition from HD to UHD, the implementation of cloud solutions for production and broadcasting, 5G connections for communications and Artificial Intelligence applied to sport.


An HDR Olympic experience: this standard captures content that contains four times more pixels than current High Definition. For the audience, this will translate into more realistic detail, richer and more lifelike colours, and greater contrast and sharpness. OBS will implement this technology at 42 of the Olympic competition venues. Broadcasting content in HDR and UHD combined requires the adaptation of mobile units and workflows. Overall, the broadcasting services will use a total of 31 OB vans and 22 flyaway systems that have been prepared for the occasion.


Ultra High Definition sports: the challenge has been huge, and OBS aimed for many technological improvements. The first was developing and delivering a single HDR/SDR workflow, which in turn required a full IP migration across the whole OB fleet. OBS is also implementing immersive audio for every broadcaster without sacrificing any quality standard. In addition, to guarantee the content, OBS will contribute an extra SDR signal to every broadcaster, although even the 1080i signal will be different because it is really a conversion from the UHD feed.



OBS goes cloud: in collaboration with Alibaba Group, OBS has created a suite of cloud services designed specifically for data-intensive broadcast workflows. Times are changing, and the need to be able to run production and distribution workflows remotely has become pressing. For that reason, OBS has launched OBS Cloud, and this has become an inflexion point for many broadcasters who, in this edition of the Olympics, will not need to create or move infrastructure.


5G-connected production teams: 5G has the capacity to handle large volumes of data, including UHD transport, with ultra-low latency and higher video quality. This connection allows production teams to be located virtually anywhere. The wireless connection provided by 5G also means that cameras can be untethered and placed in more creative positions and angles and, just as importantly, 5G will reduce camera set-up times. For broadcasters, 5G connectivity opens the door to UHD contribution thanks to its high bandwidth and ultra-low latency.


AI, the biggest documentation provider ever: Artificial Intelligence (AI) is the use of computer capabilities to automate manual tasks and drive efficiency. For Tokyo 2020, OBS will run an Automatic Media Description (AMD) pilot project based on athlete recognition and combine it with speech-to-text technology to complement and enhance the tagging of media assets. This technology will also help OBS deliver more personalised experiences: instead of Rights-Holding Broadcasters (RHBs) searching for content, the AI will automatically push content to them. Overall, AI-driven technology is making the content discovery process faster and more accurate.

These five major technologies are changing the global landscape. In previous issues of our magazine, you have seen how we have dedicated many pages to IP migrations, the HDR/UHD transformation and the advantages and disadvantages it brings, and the implementation of cloud solutions for storage, production or broadcasting processes. The world is changing and we want to tell you about its technological change. To that end, we are already working on a very complete report on the Tokyo 2020 Olympic Games. In the next issue, you will be able to enjoy a detailed chronicle of the technological innovations that this edition of the Olympic Games has brought to broadcasting, and we will share with you the testimonies of the great minds that have contributed to making them possible. In addition, you will enjoy our exclusive interview with Sotiris Salamouris, CTO of OBS, who has been kind enough to share a conversation with us.




CBC/Radio-Canada was founded in 1936 as the public radio station of Canada. CBC and Radio-Canada’s first television stations began broadcasting in 1952. By 1955, CBC/Radio-Canada’s television services were available to 66% of the Canadian population. Since then, CBC/Radio-Canada has become the largest public broadcaster in North America, facing the technological challenges of an immense country and transmissions in both French and English. Nowadays, this public broadcaster is moving all its infrastructure to IP. We had the opportunity to talk about these current challenges and the future ones with Maxime Caron, Senior Director, Architecture & Strategic Development and François Legrand, Senior Director, Core Systems Engineering.



François Legrand, Senior Director, Core Systems Engineering

Maxime Caron, Senior Director, Architecture & Strategic Development

Over the past few decades, what have been the main technological challenges facing the company?

Maxime Caron: We have been transitioning all our equipment from an SDI system, which was introduced in the 1990s, to HD, which was introduced in 2006, to the point where everything is now HD in terms of news and TV production. The biggest impact on our activities in the past ten years has been our digital transformation; we offer digital services through our video streaming services, CBC Gem and ICI TOU.TV, and our websites. Recently, transitioning to IP is the biggest project we are undertaking.

IP is more than a lift-and-shift operation, because we are turning circuit switching into packet switching. This gives you flexibility and agility, but it's also very complex. This transition to IP has been a very interesting journey.

François Legrand: We went through a bunch of different transitions. I was there for the transition from analog to SDI, which was easy because, although SDI is digital, in the end it doesn't change the workflows or the core technology.

We have a large project right now in Montreal to move to IP in our newest facility, the new Maison de Radio-Canada, where most of the production and distribution for CBC/Radio-Canada’s French-language services happens. Our radio operations have almost completed their move into the new building, and our TV operations are following suit. There has been a lot of learning during the deployment of the 2110 technology (SMPTE 2110) on this scale.

How many production centers do you have across the country and how is work organized between them?

MC: We have 76 production centres across the country and a few around the world. We also have foreign bureaus in Paris, London, Washington, New York, Beijing, and Delhi. These centres are all interconnected in terms of video, audio, phone systems, internet and IT type traffic flows through a network we call NGCN (Next Generation Contribution Network). We have two main facilities: one in Toronto for English services and one in Montreal for French services. We also have a large site in the nation’s capital, Ottawa.

Canada is a very large country. What are the challenges you face in terms of content distribution and workflow? Do you reach the entire country?

MC: The answer is yes. Our radio, TV and digital coverage extends from major cities to very small towns with less than 500 residents. Our radio stations reach 98% of the Canadian population. We have transmitters in Northern Canada and, actually, they are key to providing people in the North with the only signal that they can get. We have a strong presence from coast to coast to coast, and we are the only Canadian broadcaster in more than 60 locations across the country. Most of our radio transmitters are fed via satellite dish because fibre optic is not available and it would be cost-prohibitive to install it.

FL: Providing our staff in the North with proper internet access or means of communication for their work is also challenging. We have a very close eye on the new LEO constellations to help us, but right now they're not fully available in that part of the world; it's still too far north. We have to rely on our own satellite means to bring connectivity and internet to our employees in the North, because without that there's just no connection available there.


CBC/Radio-Canada is a public broadcaster. How does this affect its broadcasting activities? What are the main challenges associated with this?

FL: Being a public broadcaster means that we are governed by very specific rules defined in federal legislation, namely the Broadcasting Act. We also face competition from networks in the United States.

Because Canada is a bilingual country, you offer content in English and French. What technological challenges does this entail?

FL: There is one corporate technology team and a common set of infrastructure that delivers functionality to both of our networks, CBC (English) and Radio-Canada (French). Each network creates content in various locations throughout Canada. Both networks use the same editing tools, but each has its own process for distribution.

MC: Each network’s distribution is centralized in one location: for CBC, it’s Toronto, and for Radio-Canada, it’s Montreal.

FL: Most regions have their own local TV and radio channels. For example, Radio-Canada’s main TV channel, ICI TÉLÉ, has 13 local channels across the country: there’s ICI TÉLÉ Montréal, ICI TÉLÉ Québec City and so on. Also, Canada spans six time zones across four regions. We therefore need to create time-shifted versions of the content. Each region has local news and advertising. In some cases, we create local or regional programming that is then broadcast nationwide, while other shows air only in certain regions.

What is the main equipment you are using in your studios? Could you give us an overview of the main providers the company uses in terms of cameras, microphones, etc.?

FL: We have 76 production sites. Although we try to standardize equipment as much as we can, it's not always possible. For the new Montreal building, our main studio camera provider is Sony, and that's pretty much what we are using across the country, with some exceptions. The production switchers we use are Grass Valley, Ross and Sony; we use all of them depending on the location and the needs of our teams. With respect to the audio consoles, we use Calrec and Lawo, depending on the location. For microphones, it's a mix of Shure and Sony. Our playout system is iTX from Grass Valley. The IP infrastructure is managed by VSM from Lawo over Arista networking equipment.

What is the technological state of your graphics section? Have you tried augmented reality developments?

FL: Our graphics are a combination of Vizrt and Ross XPression.


MC: We have virtual sets, but most of our studios are conventional studios.

What’s your opinion on 5G and how will it affect the industry?

FL: Many TV/radio production use cases don't need 5G to work. There is already technology available that can be used with the existing cellular or wireless infrastructure to make it happen today. 5G is going to help us improve our workflows, but it's not going to be a game-changer, because a lot of what 5G will enable is already possible today. Yes, with a bit more latency. Yes, maybe with only HD quality, not 4K. But for 90% of the applications, it works. Lightweight technology to easily transmit video or audio signals more cheaply than via satellite or truck is here to stay. But I believe that we could be doing a lot more now with just 4G. We don't need to wait for 5G.

MC: Adding to François' point, the 5G announcements of the telecommunications companies in Canada are in densely populated centres such as Montreal, Toronto and Vancouver. What we are also watching closely are the LEO (Low Earth Orbit) satellite constellations, such as the Starlink service from SpaceX. If we can leverage that for mobile and fixed remote applications, it's going to make it easier for us to cover other parts of the country for news gathering, especially in the North.

What do you think about cloud-based productions? Is that something that could happen in the future?


MC: 98% of our digital client-facing services are running in the cloud. Our websites and our video streaming platforms for content distribution are all running in the cloud, either in Azure, Google Cloud Services or AWS. When an editor is working remotely on the system, it uses a low-resolution version in the cloud with fast access, but the final result is a high-resolution version. It's a hybrid model that we tested, that works well, and we are looking at deploying this concept for a large-scale project.

Which distribution standard are you applying?

FL: We use more than 700 transmitters to cover the entire country.

MC: We use 27 ATSC transmitters. The fraction of the population that receives it through the airwaves is quite low because most people leverage cable/satellite distribution for home use. That's where the core of our TV distribution happens. We have direct connectivity to the cable distributors to give them the signal.

FL: Something interesting is that we reach a substantial number of viewers through our own video streaming services. There are two, one in each official language: CBC Gem in English and ICI TOU.TV in French. Through those platforms, we distribute a lot of file-based content, and each one of our live OTT feeds is also available.

Which image format does CBC/Radio-Canada use? Are there any plans for 4K? What is your opinion regarding the deployment of this technology?

MC: Two elements should be taken into consideration: the format we use internally for production and what gets distributed to viewers. Those might be two different formats. I will give you a good example. In SDI, our old building in Montreal was built with 1.5-gig equipment. It could not even pass 3G 1080p everywhere, so production was done in 1080i, but for various reasons, distribution was done in 720p. In the new Montreal facility with IP, everything becomes bytes and a question of bandwidth. We decided to keep 1080i for the production path, mostly to remain compatible with our nationwide file-based workflows, but we have many pockets of 1080p and 4K/UHD.

FL: Another great example is the output of the multi-viewers. You create a lot of small tiles, but if you want to maintain good picture quality, it makes sense to use them in 4K. It is HD in, but it is 4K out. With IP, you get the flexibility to have a mixed infrastructure where everything can work together nicely.

In terms of content creation, everything we bought recently is 4K-ready. For the new building in Montreal, the cameras and the production switchers are 4K-native. Of course, the IP network has plenty of bandwidth to carry those 4K signals.
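The "everything becomes bytes and a question of bandwidth" point can be made concrete with a back-of-envelope calculation. This is our own illustration, not CBC/Radio-Canada's figures: it estimates the uncompressed active-video bitrate of an ST 2110-20 style flow at 10-bit 4:2:2.

```python
def active_video_gbps(width: int, height: int, frame_rate: float) -> float:
    """Approximate uncompressed active-video bitrate in Gbit/s for
    10-bit 4:2:2 (20 bits per pixel), ignoring the few percent of
    RTP/IP overhead an ST 2110-20 flow adds on the wire."""
    bits_per_pixel = 20
    return width * height * bits_per_pixel * frame_rate / 1e9

# 1080i59.94 carries 29.97 full frames per second; 2160p50 is a UHD flow.
hd_1080i = active_video_gbps(1920, 1080, 30 / 1.001)
uhd_2160p50 = active_video_gbps(3840, 2160, 50)

print(f"1080i   ≈ {hd_1080i:.2f} Gbit/s")     # roughly 1.24 Gbit/s
print(f"2160p50 ≈ {uhd_2160p50:.2f} Gbit/s")  # roughly 8.29 Gbit/s
```

A handful of such UHD flows would already saturate a 10 GbE port, which is why ST 2110 facilities mixing HD and 4K are typically built on 25 GbE or faster links, with ST 2022-7 duplicating each flow over a second path.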



Looking ahead, where is the TV broadcast industry heading? What will be the main trend for the next few years?

MC: Part of our strategy for technology is making tools available and accessible from anywhere. Any type of web-based remote editing tool is a big part of our strategy going forward. That includes our IT-type applications, our editing platform and these production tools.



More IT-type tools and software are a big driving force of the future. One of the reasons why we went IT was to leverage software-based applications so that we have quicker and more flexible access to the market. That comes with the fact that our staff need to shift to more of an IT-type approach.

Another one is leveraging data as much as possible: making better business decisions and leveraging our technology tools through data. I'll let François talk about the challenge around security. Security is probably the topic we talk about on a daily basis.

FL: Security is a big concern for us, and I believe it should be a concern for the whole media industry. We've seen a few broadcasters attacked in recent months and years, but I believe it's just the tip of the iceberg. I believe that the media industry doesn't treat security with enough rigor. Last year, we spent a lot of time working with vendors to bring good IP solutions to the market. Now we want to work with them to bring secure products.

Also, I believe that with COVID we were forced to experiment very quickly with remote workflows, and we found out that we can do a lot more than we thought with an infrastructure that was not necessarily built to enable work from home. At least in our case, it took a few days, or a week in some cases, and people were back to normal, working from home. In the future, by building systems natively designed to be remotely accessible, we'll be able to get a lot of very efficient and very interesting workflows. We have servers running software to execute demanding workloads, as well as hardware solutions. iTX, our playout system, TAG, our multiviewer, and various others are software-based. At least for us, we will always prefer a software-based solution over a hardware-based one for the same application.


To close this Video over IP trilogy we are going to start by reviewing a little bit of history in order to reflect on the past and what has brought us here. Without this perspective, it is difficult to understand the current environment and technologies and their raison d'être. In short, IP is somewhat disruptive but it will coexist with current SDI environments and it should be this way for a long time. By Yeray Alfageme





A difficult choice It seems that now we are all faced with the dilemma of having to choose between SDI and IP. SDI is considered the most robust, reliable and interoperable video signal exchange mechanism available, the result of the digitalization of linear analog video signals. If we go back a little further in time, we find the really ancient standard SMPTE ST-125 as a predecessor of SDI, which laid down the standards for the exchange of 525-line and 625-line 4:2:2 digital signals. Sounds archaic, doesn't it? Well, even in those days ST-125 serial interfaces coexisted with SDI for a while, even though the latter was a clear step forward. In the case of IP, it is not an evolution of SDI but rather and adaptation of an existing technology to our signal exchange environment. SDI has not been redesigned to make it evolve, but we have looked instead at other existing technology in order adapt it and adapt 34

ourselves to it. That is why choosing between SDI and IP is not really a choice at all: we must instead know and understand both worlds in order to make the correct combination, not the correct choice, and benefit from the advantages that both technologies offer.

Adopting SDI, back in the day

As happens now with early adopters of IP technology, things were not easy for the early SDI daredevils either. It started with problems related to cabling: not all coaxial cables worked well and their length was highly limited. There were even quality losses caused by

incompatibilities in chrominance information processing. The beginnings were not easy. Doesn't this ring any bells? It is almost the same thing that has happened with the adoption of IP: compatibility issues, signal formats, cabling, etc. So not everything is new with IP, as the challenges are similar to those of years ago. Halfway through the life cycle of SDI, HD appeared, imposing new needs and forcing a transition to 1.5 Gbps and even 3 Gbps interfaces (when dealing with progressive signals), which introduced yet a further degree of


complexity. And close to the end of the development of SDI (at least for now), new “standards” such as 6G-SDI and even 12G-SDI were necessary in order to accommodate UHD signals over coaxial cables, so interoperability was once again a problem.

IP in broadcast

As we have already analyzed, not only in this trilogy but also in countless previous articles at TM Broadcast, IP offers great potential for our industry, but we must know how to make the most of it and not get carried away by fashion. IP is just a transport method, nothing more. It is neither a technology of our own nor does it offer added value beyond conveying the signal through standard equipment and data links between systems. But that last sentence has great implications and a lot, quite a lot, of potential. The ST 2110 standard

really made it possible to take advantage of all this potential and put at the service of our industry everything that a packet-based signal transport method can offer us. IP technology is very, very mature. It has been evolving for over 50 years, adding new capabilities, which has now turned it into an option for the transport of video signals. And the thing is that our technology, seen from the

standpoint of the amount of data and information being handled, is not trivial at all, but entails a great challenge. Who has a 1.5 Gbps Internet connection at home? Nobody, or hardly anybody. Well, all that bandwidth is necessary to convey a single uncompressed HD video signal. Not to mention UHD, which starts at 12 Gbps, and even more if we move up to definitions such as 8K. What can we do with Internet


connections featuring those speeds? I remember that my university connected the entire campus to the Internet by means of a 1 Gbps connection, and we were already 10,000 students back then. The Internet connectivity needed for an entire university campus is what is required to convey a single video signal, and at EURO 2020 more than 35 cameras per game were used: almost the bandwidth of a small town.
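The arithmetic behind these figures is easy to sketch. The serial rate of an SDI link is the full raster (active picture plus blanking) multiplied by the frame rate, the bit depth and the two interleaved 4:2:2 sample streams; the raster totals below (2640 samples per line, 1125 lines) are the standard values for 50 Hz HD systems:

```python
def sdi_link_rate_gbps(total_samples_per_line, total_lines, frame_rate,
                       bits=10, channels=2):
    """Serial link rate including horizontal and vertical blanking.
    channels=2 covers the interleaved luma (Y) and multiplexed chroma
    (Cb/Cr) sample streams of a 4:2:2 signal."""
    return total_samples_per_line * total_lines * frame_rate * bits * channels / 1e9

hd_interlaced = sdi_link_rate_gbps(2640, 1125, 25)   # 1080i50 -> 1.485 Gbps (HD-SDI)
hd_progressive = sdi_link_rate_gbps(2640, 1125, 50)  # 1080p50 -> 2.97 Gbps (3G-SDI)
```

The same formula explains the 6G and 12G steps: quadrupling the pixel count for UHD quadruples the link rate.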

Compression, our most feared enemy

I know I am trivializing, but I think this is a good reflection in order to understand why it is reasonable (as long as quality is guaranteed) to contemplate compressed signal exchange environments. But there is no need at all to pull one's hair out over this. Nowadays the available codecs avoid the variable latencies of the past, since most of

the compression is done in hardware rather than software. Furthermore, these codecs deliver such good quality that it is visually almost impossible to tell an SDI signal from a compressed one. If we use the latest H.266 codec at a bandwidth of 30 Mbps (100 times less than 3 Gbps) we obtain, for instance, an image quality for 1080p50 SDR that allows us to process the image in post-production without any major problems. Production environments including contribution and distribution of compressed signals are no longer the future but the present, and therefore they should be considered for adoption as soon as possible.
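As a sanity check on the "100 times less" figure, dividing the 3G-SDI link rate by the 30 Mbps rate quoted in the text gives roughly a 100:1 reduction:

```python
# 1080p50 over 3G-SDI versus the 30 Mbps compressed rate quoted in the text
uncompressed_bps = 2.97e9
compressed_bps = 30e6

ratio = uncompressed_bps / compressed_bps  # ~99, i.e. roughly a 100:1 reduction
```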

Timing and interoperability

Still, all that glitters is not gold. IP has some way to go before it can be considered as robust and interoperable as SDI is today. In view of such fast-paced developments, standards are not able to keep up


with the speed, and manufacturers struggle to implement all these improvements in their products. All this rapid evolution leads to an uncertainty that sometimes delays the launch of new developments and products, due to an unwillingness to run the risk of being the first one and then failing. Which is completely understandable. In this environment NMOS emerged, a family of open specifications that makes the exchange of signals between equipment units over IP feasible, thus increasing interoperability. A big step forward, but one that still has limitations, and not all


manufacturers have adopted it. Other proposals, such as NDI, are equally valid and have to be considered. Time will tell which is the correct option, or whether a new one will appear that encompasses them all. We all want that.
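As a rough illustration of what this interoperability looks like in practice, an NMOS IS-04 Query API exposes each registered sender as a JSON resource. The record below is a simplified, invented example; the field names follow the IS-04 schema, but the values and the parsing code are purely illustrative:

```python
import json

# Simplified sender resource, as a registry might return it from
# GET /x-nmos/query/v1.3/senders (values are illustrative only)
sender_json = """
{
  "id": "a4bafe52-0001-4b9a-8d4c-000000000000",
  "label": "Camera 1 video",
  "transport": "urn:x-nmos:transport:rtp",
  "manifest_href": "http://example.com/sdp/camera1.sdp"
}
"""

sender = json.loads(sender_json)

# A receiver can check the transport type before attempting a connection
is_rtp = sender["transport"].startswith("urn:x-nmos:transport:rtp")
```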

Controlling hundreds of signals simultaneously

It is not uncommon to find (in fact it is most usual) that hundreds of signals are combined in a given production environment: cameras, signals produced from various sources such as graphics and audio, as well as control signals. All of these

combined within an IP network result in increased complexity and therefore in more demanding bandwidth requirements. This is what is new for IP environments: not the data to handle, but handling it all together with such high bandwidth requirements. That is why certain protocols, and even equipment such as switches and routers, have had to be adapted to become "media-capable". The environment is now much more mature than before, something to be thankful for.
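To put numbers on that, the EURO 2020 figure quoted earlier already implies tens of gigabits per second from the camera feeds alone, before audio, graphics and control signals are added; a minimal sketch, assuming uncompressed HD streams at the 1.485 Gbps rate:

```python
def aggregate_gbps(n_signals, gbps_per_signal=1.485):
    """Total bandwidth for n uncompressed HD-SDI-equivalent streams."""
    return n_signals * gbps_per_signal

cameras = aggregate_gbps(35)  # the 35 cameras per game cited in the text, ~52 Gbps
```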


Finally, there is the whole area corresponding to signal synchronization. What began as our famous black burst or tri-level sync signal is now PTP, distributed throughout the network to synchronize all our streams and avoid problems when using them together.
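At its core, PTP is a four-timestamp exchange between a leader clock and a follower. Assuming a symmetric network path, the follower's clock offset and the one-way delay fall out of simple arithmetic; this is a textbook sketch of the IEEE 1588 calculation, not a full implementation:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """IEEE 1588 message exchange:
    t1: leader sends Sync, t2: follower receives it,
    t3: follower sends Delay_Req, t4: leader receives it.
    Assumes a symmetric network path."""
    offset = ((t2 - t1) - (t4 - t3)) / 2  # follower clock minus leader clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay
    return offset, delay

# Example: follower clock 1.5 us ahead, one-way delay 10 us
offset, delay = ptp_offset_and_delay(0.0, 11.5e-6, 20.0e-6, 28.5e-6)
```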

Conclusions

SDI exists and will continue to exist for a long time, as it remains the best technology for many uses and daily productions. And although

it may seem to us that it is easier to use than the IP environment, this impression is only based on our experience working with it, nothing else. The technology is mature enough by now; we just need to finish adopting it, and for the still-rare profile of the broadcast IT engineer and technician to become widespread as we acquire the knowledge needed to standardize it as much as SDI, or even more so. There is no longer any problem in handling networks carrying 400 or 800 Gbps with standard equipment to

accommodate any production environment. It is evident that SDI has limitations in basic aspects of signal and installation management that IP does not, but the latter needs to mature some more and we have yet to acquire the knowledge needed to adopt it; therefore they will both coexist for a long time. SDI is not better than IP, just as IP is not better than SDI. They are different environments that must coexist and be used optimally wherever needed. Clearly the best approach is getting to know both, which will be a must in the near future. Let’s go for it.



David Katznelson, of Danish origin but settled in the United Kingdom, has become one of the most renowned directors of photography in British audiovisual fiction. Since 2000, when he graduated from the National Film and Television School, he has shot a significant number of productions for which he has won several awards, including an EMMY, a BAFTA and an RTS Award. His work has touched all genres: from the documentary "The Village at the End of the World", in which he told through the camera the lives of probably the most isolated citizens in the world, to one of the most remembered episodes of the acclaimed series "Game of Thrones", "The Climb". We cannot forget his participation in the first season of "Downton Abbey", his work as cinematographer recognized with a BAFTA award on "Shoot the Messenger", or his latest production, "It's a Sin", about which we interview him in these pages.




British cinematography from “Downton Abbey” to “It’s a Sin”


Photo by Tine Harden



We want to talk about your work on two of your productions: “Downton Abbey” and “It’s a Sin”. They are very different from each other in terms of environment and production. The differences are huge: what are they, and how did you adapt to these productions? The biggest difference between these two productions is the period and the location. That makes a big difference to

Downton Abbey.



cinematography. For me “Downton Abbey” was a very different show, for two reasons: I was less experienced and, equally important, because, at least at the beginning of the series, it was set in the pre-electricity era.

It's a very different kind of lighting challenge; it's all about oil lamps and pretending that you're in the world of the old days. Also, the technology back in 2010 was slightly different. I think it was the first drama I shot on a proper digital camera,


It’s a Sin.

which was the Arriflex D-20. For that reason, it was also a big change in terms of technology back then. “It’s a Sin” is something different. I like to be able to switch between styles. The journey as a cinematographer is always to find something new. I like exploring and seeing things anew. If I just did one period drama after another, set in exactly the same era, I'd get bored. I

like to do different periods: contemporary, old days, new days, fantasy. It's nice to be able to change the style a little bit. In that respect, they are very different productions and very different challenges, but enjoyable on their own terms.

We want to stop first on “It’s a Sin”. Could you describe the main equipment used for this

production and why did you choose it? “It’s a Sin” has two different styles, if you could say so. They became clear when we were reading the scripts. We had this very young, vibrant gay community with parties, a lot of movement and a lot of fun. Then there was a more conservative, if we may call it that, parents' generation, and officials that were more set in their



ways. For me, it felt like I could split up the story into two in that sense. For the young and the vibrancy, we chose spherical lenses, the Canon K35 rehoused lenses. Those lenses were

just much smaller, so they were better suited for some handheld sequences we did. Also the close focus allowed us to be closer to the characters and experience the world with them. They were not

crisp and sharp like today's modern lenses. They could give a little bit of a period feel, I suppose, as we were trying to recreate the London of the '80s. We contrasted that with Cooke anamorphic lenses


for the settled landscape of the parents and officials. Here, we kept our distance a little bit more and we were always on a dolly or a Steadicam to do those slow moves, instead of hand-held. Of course, there were moments where those two worlds met each other and we always ended up having this debate: “Whose story point is it? Whose world or whose moment is this?” It had two separate sides, but they met in the middle sometimes as well. We shot with Sony Venice cameras, which I had used very briefly before on a feature documentary about the Thai cave divers who got stuck in 2018. I was introduced to the Sony camera on that project and I really liked it, so we went the same way to get a good image. I thought that I would shoot on the 500 ISO setting and then occasionally go to 2,500 ISO, but we ended up shooting 95% of it at 2,500 ISO. I really liked the flexibility of the camera.

The A camera was a conventional setup, which we could change to handheld. We could also put it on a Steadicam. Then the B camera was rigged in Rialto mode, where you take the front element of the camera and separate it from the back element, with a cable between the two. That allowed us to have a camera that was more flexible if we had to get into a car or into a corner for handheld. I like operating it because it's small and you can move it and take it anywhere.

What was the main challenge you faced during the “It’s a Sin” production? Part of the show was shot on a stage set, where we used Rosco SoftDrop backdrops. If you want light for day, you light them from the front, and for night you light them from the back. It was the first time I had used them. For me, that was a new thing which I really enjoyed, and I thought it worked really well.

Another challenge was one scene towards the end, in the flashback section in the last three minutes of episode 5. The five main characters are in this landmark place called Primrose Hill in London. The secret is that we didn't actually shoot there, because the whole show was shot in Manchester. We had to make Manchester look like London. The script said, "This is a very sunny day and everybody's happy". We were on some little hill in Manchester and it was drizzly rain and foggy, the middle of December, up in the North of England, a really horrible day. The biggest challenge there was to make it a little bit sunnier. We had this huge crane with 200 kilowatts of lamps, just shining down to try to give a little bit of a summer feel. It was the first time I'd used such big lights and it was a fun technical challenge.

How challenging was it to recreate the atmosphere of '80s London? And how did you do it?


It was not easy. I was a teenager in the '80s and it seems like they were yesterday, not so far away. But when you actually see what those years looked like, looking at reference photos and films, you realize that it was a long time ago. It was definitely not easy to do it in Manchester, because some of the architecture is slightly different to London. I think our location manager and our production designer Luana Hanson did a great job creating London in Manchester. Even filming in London for the '80s would have been tricky, because the cars, the satellite dishes, the shop fronts and everything are different. I suspect we would have had challenges in London as well. It has become a really hard city to film in. It's expensive because they don't want filming there. I remember some of the scenes with Roscoe, one of the characters, played by Omari Douglas. He has this affair with a politician and he lives in a flat overlooking the Thames in

London. You go to some skyscraper in Manchester and the view is nothing like London. It's amazing what you can do with CGI and careful shooting. You can put in Big Ben or something to give the feeling that you're in London. I believe that with modest means you can manipulate things, which I suppose is what cinema is about on many occasions.

With more than two decades of experience, which project in your career was the most challenging? Why? The biggest challenge was shooting the “Game of Thrones” episode “The Climb”, in which Jon Snow and the wildlings climb the ice wall. When I first joined the production, I thought the biggest challenge was in


my episode, because the other DPs were doing two episodes for that season. I only had one episode to do and a lot of preparation beforehand to try to get it right. I can't remember how high The Wall was meant to be, but it was about 300 meters tall or so and, of course, it doesn't exist. To begin with, I thought, "OK, we should go to Iceland and film it," or somewhere where we would have a part of the natural world to deal with. As “Game of Thrones” had filmed in Iceland before, it would have been a natural place to go, but it very quickly became clear that it was not going to work: it would be too dangerous, too cold, and we still wouldn't find an ice wall that high. We put that idea aside and, with the art department, started to work out how to do it. There is a great VFX guy called Joe Bauer, who was working that year on “Game of Thrones”. We started

Photos by Tine Harden



to design in previz what the sequence would look like. Then the art department started to build samples of the ice wall. Ice is a very complex thing to work with because it's water, it's transparent, it reflects the lights in a certain way, and it has structure and texture. It's very hard to build that. We started out with small sections of wall built out of polystyrene, salt and things that would reflect or have some granular texture. After that, we shot camera tests and changed the construction time after time. We kept going back and forth with it for months, actually. Once we were happy with that, we tested how it would look with ice axes going into the ice and whether it was safe. We ended up with an ice wall 10 meters high and 10 meters wide, with green screen everywhere around it. We built it all in previz, finding out which kind of crane we would need to use, and built the tower for the crane. There was a section of

the wall that we had to drop down. That was a special effects element within the design. We rehearsed all of these things for a week or two and then shot some tests again. Then we eventually shot the sequence in three or four days. A lot of work for just a minute of screen time. Luckily, “Game of Thrones” was very good at giving you a challenge, but also giving you the opportunity to solve it the right way. It was such a pleasure and it was a challenge for us. Photo by Tine Harden.

How do you feel working with a big production like that? Do you feel the pressure? There is a lot of pressure, for sure, and of course, the bigger the production, the more pressure there is in many ways. For me, it's really about getting the right circumstances in place to solve the task and preparing as much as possible, so that when the time comes to do the job, you know it's going to work. The worst pressure comes when you are not



sure whether things are going to work. That really makes me very nervous. That pressure can come on small productions and big productions, because you want to deliver something which is good, which is quality, and very often I think the pressure is in having enough time to do it. If you use “The Climb” as an example, there was a lot of pressure, but once I'd done all the testing, been there, and found out that everything was working before shooting, the pressure was still there but, somehow, you could manage it.

How has the arrival of platforms like HBO influenced your work? I think there has been a huge change in the last

five or ten years. I used to do British TV dramas, where you work with the BBC, ITV or occasionally Channel 4. They always had a certain amount of money, and the budget was always too small for what they wanted to achieve. All of a sudden, we had these huge players like HBO or Amazon that are just as ambitious as big feature films, and they're competing with each other to become the best and to provide the next hits. For that, they need to give you quality. That has been a big game changer. The difference between cinema films and TV is very small now. We have many incredible television productions, which I think are often more inspiring than 90% of the feature

films you can see in the cinema. I used to look up to the DPs who were doing the big films and I would go, "Oh my God, I would love to do that". I think less so now, because now you watch “The Queen's Gambit” or “Game of Thrones” and you go, "Well, actually they look just as good, if not better". I think it has really changed. It's great, of course, for a cinematographer that TV has become so much more valued and that you get a chance to make it more visual.

For a cinematographer with a good background, what's the difference between going to the cinema and sitting in your living room watching alone on TV? I definitely prefer to see things in the cinema. I think you just can't compete with it. I like seeing things at home as well. It's great to sit there and watch one episode and then the next and so on. It's a hugely enjoyable thing to do and it's very flexible. Equally, I really like going to the cinema and turning off my phone,


knowing that I'm just going to watch this and this is all I'm going to do now. I think a world without distractions is very hard to find now. It's so rare to find anywhere where you are not constantly distracted by other things. I think cinema is one of those places. It's a bit like going for a walk in nature where you haven't got mobile reception. You go in and you're committed to something for two hours. I don't think you can beat that, really.

We were talking about platforms, and they often require larger formats, like 4K or 8K. How do you feel about working with those formats? Platforms are dictating 4K or 8K due to delivery requirements. I can see that the quality is better, but I don't know that I care too much. It is more about what the delivery actually is. Of course, we want whatever we shoot to be future-proof to some extent, but my first experience was that I

Photo by Tine Harden.

really wanted to shoot on the ALEXA cameras. I wasn't allowed to, because I had to deliver 4K at the time. The first thing I had to do in 4K was an American series called “11.22.63”. The ARRI camera could only deliver 2K, so I was forced to shoot on a RED camera and I wasn't really very comfortable. Once you get going with a project and you shoot, it doesn't really matter what you're shooting on. You just get used to it, and you

get on with it. On the other hand, it's great that the image quality is getting better and better. As cinematographers, we can try to do different things. You can push your development in a different way to change the look. I think many people are now trying to do different things to put their fingerprints on whatever they shoot. I think it's a balance between the two, really. It's amazing when I watch some nature program and you see a frog or a monkey or whatever in 8K. The details are just unbelievable.

We've talked to many CTOs and DPs about HDR as a revolution, but we've been told that cinematographers have yet to learn how to use it properly. What do you think about this? How was your experience with HDR? I think it's true. I have to admit I haven't got much experience with HDR, to be honest. We did deliver HDR on “It's a Sin” and we were meant to grade in HDR, but because of the pandemic I ended up sitting at home grading on an iPad while the colorist was in Soho watching on the big monitors.

When I had a look at the HDR version of the series, I saw these incredibly bright lights that were completely burnt out on camera as far as I could tell. When I had seen it before, they were burnt out, but in HDR you could actually bring back some texture and really play with the density of the image. It looked like it had potential, but I don't have enough experience to really comment more on it.

What is going to be the next technological revolution or advance in the industry? I really like the fact that we can become a little bit more environmental by using LED lights. There has been a great technological advance, with lights becoming

smaller, more compact and using less energy. We still need that for bigger lights, and I think that will be a huge technological advance. When LED lights replace big HMIs, instead of drawing 12 kilowatts of power, they will use 10% of the power and give more output. I'm looking forward to that. In terms of cameras, you can get them smaller, but once you build them up, the production camera is still the same size it has been for the last 30 years, with more things on it. However, I think that is changing too. I can go for a walk in a park at night with my iPhone and I can actually expose and take a picture when it's completely dark. It turns out that my phone sees more than my eyes do. In that sense, the sensors and the sensitivity of cameras have completely changed in the last few years. The fact that we have a Sony camera at 2,500 ISO, capturing things that I can barely see with my eyes, is incredible. I think those are big advances.


The Next Hurdle for Sports Broadcasters

As we cautiously reemerge from a year of empty stadiums, socially distanced productions, soft starts, and “bubble” mandated seasons, we can begin a review of the work we've completed over the past 15 months. For the world of broadcast focused on live sport, this will be a time to review our practices and workflows of the past year and make some more permanent decisions. What new approaches are here to stay? What practices will we leave in the past? What are we excited to bring back to the fore of production?

Rethinking Staffing

The idea that broadcasters must be able to smell the grass at the stadium is now under forensic analysis by the accountants. It has long been a belief held by many that teams of play-by-play commentators, producers,

and other staff are needed to be in-stadium to experience – and convey – the atmosphere of an event. Pandemic-era productions have tested this convention. Credit to the professionalism of production teams over the past year who needed to work from home. They produced compelling, effective, entertaining, and professional events. The vast majority of viewers had no idea staff was working remotely. That said, for those who enjoy taking trips to such marquee events, the bad news is they may have performed their way out of a few flights going forward!

Rethinking Production Techniques

These professional productions were also enabled by software-based solutions that empower visual

By Jonathan Roberts, Global SVP of Sports at Vizrt Group

storytelling. And, largely, these systems have shown that they hold up in remote environments. The ability to direct, live switch, and commentate on sporting events in near-real time with robust production tools from home is now a tangible and proven capability. Further entrenching this new workflow are the cost savings and efficiencies of such practices. By making a team available remotely, you don't just save on travel and lodging for staff members, you also make


your best workers available for more premier events. Moving to a remote, software-defined visual storytelling workflow is a cost-saving approach that helps counteract rising rights costs.

Rethinking Advertising

Live sport has an immediate, real-time, and perishable value tied to it. When an event ends, its value as a marketing

opportunity sees an abrupt drop off. That is joined by the fact that viewership numbers of live sport are somewhat down year-over-year (although, encouragingly, it appears the UEFA European Championship is bucking that trend). This is largely due to the fact that we have so many screens, devices, and content options fighting for our attention. Now, that said, just because a viewer on the

west coast of the United States missed the live feed of the Portugal v Hungary fixture doesn't mean they didn't later see Cristiano Ronaldo secure his spot as the tournament's all-time leading goal scorer – nor does it mean they didn't hear about his press-conference snub of a soda bottle in favour of water. These types of moments raise an intriguing case study for advertising in sport. That soda sat upon the press conference table



for a reason: to get more airtime for the brand. It is a play to provide more value to the sponsors. In this instance, however, it backfired. So, what other advertising real estate might be offered that also features one of the world's biggest stars? What about his two goals during the match? One of the discussions

being had pre-COVID was how best to offer the on-screen advertising that is digitised onto the field of play. Now that we are emerging from a challenging year for advertisers of live sport (they lost millions of eyes when in-stadium attendance went away), broadcasters need to reignite these discussions with new ideas.

Among the potential offerings are dynamic advertising inventories that display based on viewer region, language, and perhaps even fanship. Device-by-device viewing also needs to be explored. An on-field advertisement can look magnificent on the 65-inch UHD display in a living room yet be unreadable on a mobile device. How can we


deliver effective advertisements to these mobile viewers as well?

Manufacturer Innovation

The answer to this question is rooted in the previous innovations we've noted: software-defined visual storytelling tools that allow for flexible and dynamic solutions.

Just as software helped solve remote production, it too can offer new ways for viewers, sponsors, athletes, teams, and broadcasters to interact within a rich media environment. Thus, it is on manufacturers to develop virtual advertising insertion technology and increasingly software-based tools that make production easier and

more effective. By working together, the sponsors, rights holders, broadcasters, and manufacturers can continue to evolve the way we present live sport to the public. And that will result in more stories, better told – a win for all involved in the joy that is live sport.



filming the light


Kate McCullough was born in Ireland and from a very young age she was attracted to film cameras. Since then she has never stopped trying to capture the power of light. Before finishing her studies at the National Film School in Łódź, Poland, Kate McCullough was already shooting the Irish hit documentary “His and Hers”. The film won the World Cinematography Award in Documentary at Sundance in 2010. Since then, many of her productions have received awards and recognition. In 2018, she received the Golden Frog for Best Cinematography in a Docudrama for “I, Dolours” at Camerimage. In the same year, Kate McCullough was nominated for an EMMY Award for her cinematography on “The Farthest”. More recently, for the same film, she was nominated for the 2019 IMAGO award. Her series “Normal People” was nominated in 2020. Kate McCullough is a cinematographer specializing in television, and we are now living another golden age of television. Cinematography in television is improving and refreshing visual languages, as well as renewing stories with an ambitious goal: exploring the image possibilities that emerge from the script and pass through the camera. And she has much to say about TV series.

When and how did you discover cinematography as a young person? Has that part of cinema caught your attention ever since, or did that feeling evolve during your life? I was always attracted to image making as a kid, first through painting and then later, as a teenager, through photography. When I got interested in films my eye

was drawn to composition mainly, I was not so aware of light. Understanding the power of light followed along with the idea of moving the camera. Since going to college I have become much more aware of the cinematography community and its potential. It’s a continually evolving thing, discovering new cinematographers, new techniques, new equipment and sometimes digging up old tricks too.

Is it the purpose of a cinematographer to find the perfect shot? Is there a perfect shot? What you want to communicate to the audience in that moment will dictate where the camera should be, the ideal position for that moment. Of course there are times when you feel like you have captured the perfect moment but this is a synthesis of the



performance, the cinematography, the production design, the costume and all departments coming together to achieve this. It's a delicate balancing act for these to all work in unison.

What’s your vision of the profession? What’s your goal when you get involved in a project? My focus is always to serve the story and to collaborate with a Director to develop the appropriate visual language.

How is Kate McCullough on set? How do you like to get involved in filming? I'm generally calm; I like to work on a quiet and respectful set. I get very mono-focused on my work. I like to shoot with purpose and rigour.

Generally, DoPs are influenced by cinema cinematographers; after all, that's part of their film education. Still, we live in times when we find brilliant creativity in TV series and on VOD platforms. Has a television cinematographer influenced you? Can you recall a particular TV series that caught your attention? What Marcell Rév brought to "Euphoria" was incredibly fresh and dynamic, and although it was loud at times it was always anchored in the emotional journey of the characters. Jacob Ihre's choices on "Chernobyl" brought a real weight and a sense of the audience as witness to this traumatic material.

What’s, in your opinion, the most challenging part of your job? Time.

Is there room for creativity in current TV-series cinematography? Do you think there are ways to be innovative when approaching, for example, an intimate scene from a cinematographic perspective? Sure, I wouldn't be in this business if I felt I could not express myself in some way through my cinematography. It's central to my curiosity and passion for filmmaking. There are opportunities for innovation, but you have to be clever about when to use them. For me they should not draw attention to themselves for the wrong reason; then the story is just serving the technology as opposed to the reverse. There has to be an honest motivation for the approach, anchored by the script. Sometimes it's planned, sometimes it comes through testing and of course sometimes it evolves on set.

The look of TV series has evolved dramatically over the past decades.

When did you first think that your work in cinema and documentaries could have a relevant role on the small screen? The small screen is no longer the small screen, and the big screen is being watched on the small screen. The two worlds are cross-pollinating. It's an exciting time. I've spent many years shooting documentaries for TV and cinema, and my recent segue into fiction came in the form of a TV series, so I'm certainly grateful for that.

You've been linked to documentaries throughout your entire career, but then you switched to television. What part of your nonfiction experience did you bring to the series you have worked on? I think a sense of economy is learnt from documentary shooting; I've carried this over with crew and equipment needs. Do we really need this? Is it serving the story? You tend to develop a keen sense of coverage with cinéma vérité shooting too. I've also learnt to keep my eyes open to what's happening with the actors, particularly the spaces between dialogue.

‘Can’t Cope, Won’t Cope’ has a really naturalistic feel, similar to what we find in ‘Normal People’, but ‘Blood’ has a really ‘cinematic’ touch. What style do you feel more comfortable with?

I'm happy to explore many different approaches; again, it comes down to what the story asks for, what the accent or tone of the piece is. Finding a specific look for each new project presents its own challenges, and that ultimately leads to very satisfying work.

You worked together with Suzie Lavelle on 'Normal People'. How was the experience? Did you define the style of the show together? Suzie, in collaboration with Lenny Abrahamson, established the look for the series. It was fundamentally rooted in naturalism: a small footprint with crew and equipment, giving the actors agency and a safe space to perform. I shot episodes 7-12, which dealt with slightly darker themes, so it was exciting to develop and build the look, taking it a little darker and a little further while keeping within the overall look of the series.

What was the camera + lenses choice? We shot on the Alexa Mini with K35s, and I brought in some Master Primes to flesh out the gaps in focal lengths. It was my first time working with Hettie MacDonald, so it was important I had that flexibility there.

What was the biggest challenge of this TV series? Trying to convey the seasons and a sense of time passing while shooting in the summer! Shooting Dublin for Sweden, but thankfully we got to shoot Italy for Italy.

Do you consider yourself a 'techy' cinematographer? I like discovering technology when I'm trying to solve a particular technical challenge. I'm not interested in it solely for technology's sake.

What's your favourite camera right now? The Sony Venice. I'm very excited about the use of full frame right now; it's a whole other way of framing and of looking at the world: the proximity to the character, the sense of 3D it brings to your subject. The tethering system also opens up so many possibilities with the camera in that modular mode. And of course it's super sensitive.

What gadgets cannot be missing from your kit? A sun-path app, Astera tubes, bongo ties! It really depends on the requirements of the gig.

What’s your view on industry standards such as 4K or HDR? Do you think HDR should be a creative option rather than a must in almost all VOD productions? I think HDR is an incredible tool for cinematographers. I can’t see why we would not

want it to become available right across all platforms. It’s a very significant step in the development of technology available to us, allowing for more choice in terms of exposure and lighting and ultimately more control. HDR shooting and posting is something I would very much like to explore further.

What would you ask of technology manufacturers? What's the solution you wish existed? An Easyrig for a greater variety of bodies.

What's next for you? Do you plan to continue betting on VOD platforms and TV series? I completed another feature under the Cine4 scheme (Arracht, Ireland's submission for Oscar consideration in 2021, was funded under the same initiative). I'm certainly interested in leading the look of a series. I've met for a couple of new projects and we'll see where the next adventure takes me.






Modularity + flexibility + efficiency = C500 versatility Packing the latest technology into a camera body is a very sensible move, but doing so with careful design and the aim of offering maximum adaptability and versatility turns the result into an outstanding tool for creation. Lab test performed by Luis Pavía

The Canon EOS C500 MkII that we bring to our pages today was first showcased last fall, so its main features are by now well known. As on other occasions, we will try to provide an overview that goes a little beyond mere specifications, one that covers all the details needed to decide in which instances this will be the "perfect camera". Our view is well known: the "perfect camera" does not exist, but we firmly believe in the knowledge and ability of the professional to choose the ideal tool for each job. And for that purpose, we need to

know not only the collection of features, but also the entire set of factors that influence this decision. To start with, its technical features place it at a very high level within Canon's cinematography range, even featuring some improvements over its C700 sibling, a higher model in the range. It is not the first time we find a "little sibling" surpassing its elders in some respects, simply because of the fast-paced progress of technology. But do not get us wrong: we are not talking in general terms, only about some specific features that we will highlight in due course.

Reviewing its most significant features, worth noting are the same sensor as in the C700, a dynamic range of 15 f-stops, an integrated 5-axis stabilizer, internal RAW (light) recording and an excellent autofocus system, which positions it without doubt in the upper-middle range of digital cinematography equipment. There is of course much more than this, and that is what we will delve into throughout this analysis. We will pay attention to design as well, because it is one of the decisive aspects that cannot be measured through figures alone. We think this is important because


it directly affects the uses for which the camera may be most suitable. And we are not just talking about physical design, but everything related to its usability and potential. Starting with the sensor, we find the same Bayer-type, full-frame CMOS technology with a count of 6062 x 3432 pixels on an active surface of 38.1 x 20.1 mm. The size actually used varies depending on the format selected for recording: for example, 5952 x 3140 pixels for 4K and 2K formats, or 5580 x 3140 pixels for UHD and HD formats. That is, in full-frame modes the sensor offers quite noticeable extra resolution. The fact that the sensor's resolution is higher than the recorded format brings significant advantages: the resulting image is cleaner and sharper and, all things being equal, noise is lower. As expected, there is also a recording format that captures, pixel by pixel, the maximum capacity of the sensor:

5952 x 3140, allowing subsequent reframing without loss of resolution.

The next item that directly affects the image is the processor. While it is usual to refer to the physical size and resolution of the sensor, the processor is not mentioned so often, yet its performance is a decisive factor. Its function is comparable to that of a small computer responsible for processing all the data received from the sensor before storing them on memory cards. For example, it is directly responsible for the interpolation algorithms that convert the Bayer signal into an RGB signal. In this case the result is excellent.

But what about RAW? In theory, the processor would not matter much here, since the RAW signal is, by definition, the raw dump of the sensor's data. But in this case it does have an impact, since our C500 MkII is capable of internally recording a "light" RAW format. What does this mean? Storage systems with very high sustained transfer rates are normally required to handle the huge volume of data that a video image generates. These are usually external, very specific devices, with an impact on production costs. The traditional way to reduce that volume is through the different types of processing and compression that give rise to the various recording formats we are all used to dealing with.

To give an idea, a simple 8-bit 25p UHD image generates 4,976,640,000 bits per second (3840 x 2160 pixels x 25 fps x 8 bits per color channel x 3 color channels). Almost 5 billion bits per second, rising to more than 12 billion (12,740,198,400) when recording 4K at 60p and 10 bits. While this is


just theoretical data, in one way or another this is the kind of volume of information that the processor must handle uninterruptedly in order to interpolate, compress and transfer while we keep recording. In this case, thanks to the capabilities of its DIGIC DV7 processor, Canon has found the balance that allows recording its own Cinema RAW Light format on CFexpress cards at transfer rates of up to 1 Gbps, or 2.1 Gbps for 120 fps frequencies. That is more than 1 and more than 2.1 billion bits per second, respectively. But not only is resolution

important. Color depth is also critical, especially in projects where postproduction plays an important role. Keeping in mind that each additional bit doubles the precision of color reproduction per channel, color depths of up to 12 bits are possible. That is more than 68 billion colors (4,096 levels per channel),


absolutely indistinguishable by the eye, but very convenient for offering enormous color-grading possibilities, and necessary in order to avoid any type of artifact after intense postproduction processes. A token of its high capabilities is the camera's compatibility with the ACES color system, allowing direct import of the captured images into compatible systems.

For easy viewing and management of these types of images, especially during capture, the camera not only has the most common in-display color conversion profiles (LUTs), such as 709, Cinema and BT.2020, but also enables the user to create and load up to 15 LUTs of their own, and even to apply different conversion tables to the different outputs simultaneously.

By combining sensitivity, color depth and processing capability, we come to another important aspect of any equipment: dynamic range. In order to improve it, manufacturers use systems that allow the recordable range to be expanded: the well-known gamma curves. As expected, this equipment is fully HDR, achieving, when the Canon Log 2 curve is used, a value exceeding 15 stops (f-stops).

What does all this translate into? This camera offers truly increased versatility and is perfectly suitable for highly demanding productions. All cameras offer different formats for internal recording, with regard to both resolution and compression, always with some type of compression. Some also offer direct sensor output to make recording in uncompressed RAW formats on external devices possible. Our C500 MkII also offers a "lightened" RAW without the need for external devices, while maintaining an excellent 5.9K resolution.

Naturally, depending on the purpose of our project, this format may not be sufficient in itself, but it certainly entails an improvement over the usual formats, providing one more option when working without having to resort to an increased investment, while still keeping the full possibilities of an external device if necessary. To get an idea of the final result: at 2.1 Gbps, a 512 GB CFexpress card stores up to 30 minutes of video in 5952 x 3140 Cinema RAW Light format. Reducing the resolution to 4K and the transfer rate to 1 Gbps increases that time to 65 minutes, and 2K at 250 Mbps reaches 256 minutes. In addition to this format, it is also possible to record in XF-AVC/MXF with a wide range of transfer speeds. Depending on the different resolution/transfer-rate combinations, recording times on a card of identical capacity will range between 79 minutes for 4K 10-bit 4:2:2 at 810 Mbps and 401 minutes for 2K at 160 Mbps.
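The uncompressed bit rates, color counts and card capacities discussed above can be checked with a few lines of back-of-the-envelope arithmetic. This is our own illustrative sketch, not a Canon tool; the helper names are ours, and the times are theoretical (decimal gigabytes, no filesystem or metadata overhead, which is why the quoted figures are slightly more conservative).

```python
# Back-of-the-envelope checks for the figures quoted in the text.

def uncompressed_bps(width, height, fps, bits_per_channel, channels=3):
    """Raw data rate of an uncompressed RGB video signal, in bits per second."""
    return width * height * fps * bits_per_channel * channels

def minutes_on_card(card_gigabytes, rate_gbps):
    """Theoretical recording time on a card (decimal GB, no overhead)."""
    card_bits = card_gigabytes * 1e9 * 8
    return card_bits / (rate_gbps * 1e9) / 60

# UHD, 25p, 8-bit -> the ~5 billion bits per second quoted in the text
print(uncompressed_bps(3840, 2160, 25, 8))   # 4976640000

# 12-bit depth -> 4,096 levels per channel, ~68.7 billion colors
print((2 ** 12) ** 3)                        # 68719476736

# 512 GB CFexpress card at the 2.1 Gbps Cinema RAW Light rate:
# about 33 theoretical minutes; the quoted 30 min leaves headroom
print(round(minutes_on_card(512, 2.1)))
```

Running the same function at 1 Gbps and 250 Mbps gives roughly 68 and 273 theoretical minutes, consistent with (and slightly above) the 65 and 256 minutes quoted once real-world overhead is accounted for.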


A must in a piece of equipment such as this one: we have two CFexpress slots that can be configured for parallel recording (so as to have an instant backup) or relay recording (for recordings of theoretically infinite duration). There is a third, SD/SDHC-type slot for storing video proxies and 2K/HD-resolution photos, sharing configuration data and facilitating updates.

But it is not only figure-related features that have an impact on versatility. Design is another aspect that sometimes does not get enough attention. And we

are not only referring to physical design, but also to conceptual design with all its implications, as we will see shortly. On the physical side, we find the type of body that has become most popular in recent years, and one we love: a "box" that houses the sensor with all its electronics and the minimum essential elements, such as the lens mount, memory cards, battery and keypads. Onto it we attach all the necessary elements, completing the final setup of our camera according to each particular need. But not all boxes are the same, and in this case it seems that, once more, lightness and versatility have been the deciding factors in achieving a distinct result. Furthermore, the concept of modularity has been taken a bit beyond what is usual, with distinctive features such as the ease with which users can now change the lens mount. There are three options available: the usual Canon EF mount; the EF Cinema Lock mount, which allows mounting and dismounting the lens by just turning the anchor ring; and, working in the same way, the traditional PL mount as the third option. Thus, it is possible to have a wider range of optics by adding all PL lenses to Canon's standard and

Prime Cinema EF ranges, including the Cooke /i models, plus the whole extensive range of B4 optics by using the relevant adaptor. Following the path of the light once it has passed through the optics, we find the 2-, 4- and 6-stop ND filters, which can be extended with two additional levels, 8 and 10, by combining them with a second ND filter. And now back to the sensor which, surprisingly, we were not done with yet. In addition to the full-frame modes, it is also possible to record using the two available crop modes, in which not the entire surface of the sensor is used: Super 35mm and Super 16mm. These modes not only optimize the use of optics designed for those formats, but also allow replicating the appearance of images recorded with other types of cameras, strengthening the versatility and applicability of our tool. Still dealing with the sensor -and this is a

favorable differential aspect as compared to its elder sibling- the camera features new 5-axis stabilization which, combined with the exchange of data with compatible optics, provides outstanding results. When using optics that do not provide this data, the focal length can be entered manually so that the stabilization system performs at its best. As for frame rates, we recommend referring to the compatibility tables, given the enormous number of possible combinations and the fact that, of course, not all are available in all formats. In short, in nearly all cases there are 15 to 60 fps rates for frequencies of 59.94P / 29.97P / 50.00P / 25.00P, and 12 to 60 fps for frequencies of 24.00P / 23.98P. They can go up to 120 fps when the image resolution falls to 2K or lower. The range of gain values that can be handled is enormous: from -2 to +42 dB in normal mode, and


from -6 to +54 dB in expanded mode. Sensitivity values span an equally huge range: between ISO 160 and 25,600 in normal mode, and between ISO 100 and 102,400 in expanded mode. And here, when facing the usual question -how far can you record without noise?- comes the usual answer: what is your level of demand for the job at hand? To finish with the sensor, processor and card storage, we will briefly mention two features. One is compatibility with anamorphic optics of factors 1.33x and 2.0x. The other is a pre-recording function that, once enabled, keeps the 3 or 5 seconds prior to pressing the record button; the interval is menu-selectable. Let's go back to physical design to continue stressing versatility. On the body itself we find 15 customizable buttons, a knob for the iris, another for selecting options and a small joystick, as well as threads to attach accessories and

connectors. All are accessible and well placed, in addition to those on the handle: the concept of adaptability and manageability is unmistakable. This is reinforced by features such as the screen being an independent element, the eyepiece viewfinder being optional, and the handle, with its collection of functions, being dispensable, attached by just a thread and a small connector. It goes a step further with separate options such as an eyepiece viewer, or two different types of connection extensions, so as not to swamp the body with unnecessary elements depending on the intended use. But this does not mean that we need extras to use the camera. In fact, the body has two XLR inputs, a 3.5mm mini-jack audio input and output, HDMI video output, USB input and 4 independent BNC connectors for monitor output, 12G-SDI output, timecode input/output and sync output. In addition to the

proprietary connector for display screens, whether the supplied standard LM-V2 or an optional one.
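The gain range quoted a little earlier (-2 to +42 dB in normal mode, -6 to +54 dB expanded) is easier to appreciate in linear terms. The sketch below uses the standard video convention (20·log10, where every +6 dB doubles the signal, i.e. one extra stop); the function name is ours, not a camera API.

```python
# Standard signal math, not a Canon spec: dB gain to linear amplification.
# Every +6 dB doubles the signal, i.e. one additional stop of exposure.

def db_to_factor(db):
    """Linear amplitude factor for a gain expressed in decibels."""
    return 10 ** (db / 20)

for db in (-6, -2, 0, 6, 42, 54):
    print(f"{db:+3d} dB -> x{db_to_factor(db):.2f}")
# +6 dB is x2.00 (one stop); +42 dB is roughly x126; +54 dB about x501
```

Seen this way, the expanded mode's +54 dB ceiling amplifies the sensor signal by a factor of about 500, which is why the "how far without noise?" question has no single answer.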


The equipment supplied with the camera is enough to start working with it without difficulty. It includes an excellent-quality touch screen (on which we only miss a lens hood), a large handle with more threads and shoes, a good-capacity battery and a charger/feeder. We recommend checking with distributors, though, since different markets may offer different contents in their basic packages. Only for certain uses will the additional connectivity provided by the two available extension options be necessary.

Another aspect where we have seen progress compared to the C700 is power consumption: the BP-A60 battery that comes as standard gave us well over one hour of actual use on the C500, and we estimate that in continuous recording it could exceed two hours without difficulty.

But let's move on to our impressions. Analyzing its versatility, we discover a combination of modularity, flexibility and efficiency, linked by balance. We have found a camera that offers extraordinary image quality, is very suitable for a large number of uses and, we would say, has been rather


designed for a single operator or a small team. This is probably the biggest difference with its older sibling, the C700, which seems more oriented towards productions involving larger crews. We have reached this conclusion based on some features we have deliberately not mentioned up to this point, so as to avoid repetition. The first, and the one that seems most significant to us, is the autofocus system. It is unquestionable that in a large-format production, where everything that is going to happen is predetermined and where an assistant is dedicated exclusively to this task, the narrative result that can be achieved has no equal. But those are not always the prevailing conditions: there are many situations in which we need to get results without all these resources. By combining different technologies and help tools, it is especially easy to ensure that our images have the plane of focus, the center of attention, right where we decide. Traditionally, autofocus systems have not been big favorites, because it was "the camera" that decided where to focus, based on criteria such as contrast or brightness, although these systems have proved increasingly fast and accurate. Like us, surely many readers have gone through the times when focus was kept in manual mode so as not to lose the focus plane, or when tightening the shot, using the "Push AF" function and reframing before recording was the working method. But that is past now.
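To see why those traditional contrast-based systems behaved as they did, here is an illustrative hill-climb sketch of contrast-detection autofocus: the lens hunts back and forth, halving its step each time sharpness drops. This is our own simplified model (the sharpness function is a stand-in; a real camera measures image contrast, and the C500 MkII uses phase detection instead), not Canon's implementation.

```python
# Illustrative sketch of classic contrast-detection autofocus:
# hill-climbing on a sharpness metric until the step size is tiny.

def sharpness(lens_position, true_focus=0.62):
    """Stand-in sharpness metric that peaks at the true focus plane.
    A real camera would measure image contrast (e.g. edge energy)."""
    return 1.0 / (1.0 + (lens_position - true_focus) ** 2 * 40)

def contrast_af(start=0.0, step=0.1, min_step=0.001):
    """Move the (normalized) lens position toward maximum sharpness."""
    pos, score = start, sharpness(start)
    while abs(step) > min_step:
        candidate = min(max(pos + step, 0.0), 1.0)
        cand_score = sharpness(candidate)
        if cand_score > score:   # still climbing: keep going this way
            pos, score = candidate, cand_score
        else:                    # overshot the peak: reverse and refine
            step = -step / 2
    return pos

print(round(contrast_af(), 2))  # converges near the true focus plane: 0.62
```

The back-and-forth hunting visible in this loop is exactly the behavior that made older autofocus systems distracting; phase detection, by contrast, measures in which direction and how far to move in a single step.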

First of all, we must confirm that thanks to the Dual Pixel CMOS phase-detection technology available in our C500 MkII, the autofocus function is extremely fast and accurate. Add to this several operating modes. To begin with, there are focus aids in manual mode, such as the focus guide, which not only shows the object in focus but also gives us a visual reference of focal-plane deviation towards front or back.

Among the automatic modes, tracking focus allows us to select an object on the touch screen, with the camera taking care of keeping the focus on that object. The next level is face-priority autofocus, in which the camera uses a face-detection system to prioritize focus on people's faces and keep them in focus as long as they remain in view. One level further and the camera will focus exclusively on faces and not on objects, allowing our protagonist to leave the frame and re-enter without changing the plane of focus.

Naturally, combining both is as simple as selecting a face on the screen to make the focus stay on that face, even if there are others in the frame. As if all this were not enough, it is possible to customize the behavior of the autofocus via the menu, fine-tuning tracking response and adjustment speed, although this function


requires the lenses to have certain compatibility features. Also making life easier for the operator, we have -in addition to the zebra and peaking markers- an internal/external waveform monitor (only available on some outputs) and false color for accurate exposure evaluation. The menu is impressive at first sight. It seems very well organized, and its presentation is simple and clear, but with so many options, sub-options and possibilities that we recommend spending some time with it before starting to shoot. And, of course, use the personal menu option to have quick access to a reduced list of the options you need to modify most frequently. Some features are especially appreciated and help create a high-quality image very effectively when a project has tight production times or none at all. In this case, worth highlighting are a good number of options to

soften the degree of detail in skin tones, and another set just as wide for selective noise reduction. In these cases, the signal is recorded with the established conditions already applied, making the subsequent process easier as some steps are already taken. Before finishing, we should highlight some other details we especially liked, such as switch-on speed: it is surprising that the camera is completely ready to record only about 4 seconds after being turned on. We also invite you to take a look at the list of optional accessories available, among which a GPS positioner or a Wi-Fi wireless transmitter, combined with the additional connection sets, will expand the possibilities of use depending on the needs of each individual project. In short, a combination of excellent image quality, wide dynamic range, speed of operation, a precise autofocus system and enormous customization capabilities, both by

means of the different accessories and through the extensive menu, makes it a suitable tool for a large number of applications. A range that expands further when we consider its ability to generate 12-bit RAW files internally without the need for any external device. Its modular, compact and lightweight design, which allows the accessories to be reduced to a minimum by configuring the camera to suit different needs, expands its possibilities of use on hot heads, cranes and drones, as well as in any environment where size and weight are determining factors and we are not willing to give up its excellent image quality. If we also take into account that its price is significantly lower than the C700's, it becomes a means of creation that not only delivers excellent-quality results but, thanks to the reduced investment cost, also lets us be more competitive in a large number of projects.