EDITORIAL

Within a month's time we will have the NAB Show again. The organizers have done their best to hold the event in the safest way possible and they require vaccination for all attendees, but many are still considering whether or not to visit the fair. Sony and Ross Video have communicated this week their intention not to attend in person, and between now and October we will probably see some other exhibitors follow suit. Of course, it does not seem that this edition will draw the crowds usually seen at the Las Vegas show but, on the other hand, many want to restore normality in dealing with clients and suppliers. In any case, this will be a good indication of what can happen later with IBC and of the extent to which the market is prepared for big events or whether, for the time being, it is preferable to organize smaller events local to each country. The next issue of TM Broadcast will include an extensive article about this edition of NAB and what can be seen at the fair. We are sure there will be many novelties.

In this September issue we have a lot of content that we trust will be of interest to you. We open with an exclusive interview with AMWA, an association of manufacturers and broadcasters that helps organizations move to IP. We tell you the ins and outs of the Tour de France, a sporting event with huge requirements and difficulties on the technical side. We discover the range of possibilities for backpacks, an essential piece of equipment in outdoor broadcasts. We review in detail the possible accessories for cameras, from optics to adjustment and color charts, including recorders, batteries, torches... In the next issue of the magazine you will find part two of this extensive report. And we end with the second installment of the article 'The Colorist', with which we discover everything that this fundamental figure implies in post-production.

Editor in chief Javier de Martín firstname.lastname@example.org
Creative Direction Mercedes González
TM Broadcast International #97 September 2021
Key account manager Susana Sampedro email@example.com
Administration Laura de Diego
Editorial staff firstname.lastname@example.org
Published in Spain
TM Broadcast International is a magazine published by Daró Media Group SL Centro Empresarial Tartessos Calle Pollensa 2, oficina 14 28290 Las Rozas (Madrid), Spain Phone +34 91 640 46 43
NEWS AMWA The AMWA provides a meeting point for cooperation between companies that, outside the AMWA, are direct competitors. But in their workshops and meetings they strive to find the best common approach in the IP transformation race. TM Broadcast International has had the pleasure of having the testimonials of Brad Gilmer and Neil Dunstan.
CAMERA ACCESSORIES How are we going to dress our new camera? Is it reasonable to make the choice of a camera conditional upon its accessories? Are they really that convenient or necessary? Are we acquainted with every feature in all elements available on the market? And how can we use them to get the most out of them?
TOUR DE FRANCE Christophe Barrier is a Production Director for France TV. His job, during the Tour de France season, is to coordinate the production teams so that everything runs smoothly. To do this, he must attend each stage and lead the teams on site. For these reasons, at TM Broadcast International we have interviewed him and thus obtained the firsthand testimony of such a titanic effort.
THE COLORIST (II) Technique and creation blended.
NEWS - PRODUCTS
Telos Alliance and Grass Valley release Infinity VIP: a cloud-based intercom platform
Grass Valley has released Telos Infinity VIP (Virtual Intercom Platform) on Grass Valley AMPP (Agile Media Processing Platform) in partnership with Telos Alliance. It is a professional intercom solution for cloud-based media production workflows. The addition to GV AMPP supports intercom functionality such as partylines, IFBs, groups and peer-to-peer communication.

"As a Grass Valley Media Universe Alliance Partner, our technology is now tightly integrated with AMPP, putting our intercom solutions in prime position for productions using the cloud-based platform. Through seamless access to our solutions, we look forward to gaining new customers as well as pleasing existing users looking to move as much functionality as possible to the cloud", commented Scott Stiefel, Chief Operating Officer of Telos Alliance.

Telos Infinity VIP on AMPP is a native SaaS solution, tightly integrated from both an operational and commercial standpoint. The cloud server and virtual panel apps are available from the AMPP app store and deploy in the same way as any AMPP application. Operators can support live productions from anywhere, as long as they can access the application from a connected browser. The microphone and speakers of the connected phone, tablet, or computer allow operators to communicate with team members while using the same device to perform their job functions.

"The undoubted advantages of cloud-based production, even in a studio environment, mean that the methodology behind GV AMPP is a hugely positive paradigm shift for the video production industry", added Tim Shoulders, CEO and president of Grass Valley. "The concept of cloud production is widely understood throughout the industry. It's now the job of the GV Media Universe to make available to production teams all the professional functionality they need through the cloud".
RED Digital Cinema releases its new creature: the V-RAPTOR 8K VV

RED Digital Cinema has recently released the V-RAPTOR 8K VV, the brand's first entrant into the next-generation DSMC3 platform. V-RAPTOR features a multi-format 8K sensor (40.96mm x 21.60mm) with the ability to shoot 8K large format or 6K Super 35. The sensor capabilities are wide: the new video camera allows users to switch between capture formats, shooting 8K full sensor at up to 120 frames per second (150fps at 2.4:1), 6K at up to 160fps (200fps at 2.4:1), and 2K (2.4:1) at 600 frames per second, while still capturing over 17 stops of dynamic range.

The new DSMC3 camera is built on a robust professional I/O array that includes two 4K 12G-SDI outputs and XLR audio with phantom power capability via adapter. A built-in USB-C interface allowing for remote control and ethernet offload, an RF lens mount with locking mechanism, wireless control and preview via Wi-Fi, phase-detection autofocus and a newly designed interface are some of the functionalities highlighted by the company.

V-RAPTOR harnesses RED's proprietary REDCODE RAW codec, which allows users to capture 16-bit RAW while leveraging RED's latest IPP2 workflow and color science.

Additional features include data rates up to 800 MB/s using RED-branded or other qualified CFexpress media cards; an integrated micro V-mount battery plate; a 60mm fan for quieter and more stable heat management; and wireless connectivity via the free RED Control app, which is available now for iOS and Android.
Teradek introduces new Bolt 4K Monitor Modules 1500 TX/RX

Teradek has launched the Bolt 4K Monitor Modules 1500 TX and 1500 RX, giving users a long-range option with the 1500 monitor modules and a short-range option with the original 750 monitor modules. "Scalability has always been a huge benefit of the Bolt 4K family," explained Colin McDonald, Product Manager for Cine Products at Teradek. "With the launch of the 1500 monitor modules, Bolt users have even more kit options and, for the first time, long range camera control over the Bolt wireless link."

Both the Bolt 4K Monitor Module 1500 TX and 1500 RX can seamlessly integrate with SmallHD Smart 7 Monitors and the entire Bolt 4K product series. When paired, they transmit and receive zero-delay wireless video signal from 1080p up to 4Kp30. The new longer-range devices offer 1,500 feet of line-of-sight range and support for 10-bit 4K and HDR.

The Bolt 4K Monitor Module TX 1500 offers an integrated hardware connector for SmallHD Smart 7 Monitors (Cine 7, Indie 7, and 702 Touch), optional wireless camera control with a Smart 7 Monitor and SmallHD license (sold separately), and easy setup and signal management with the Bolt App for iOS and Android; it also supports Broadcast Mode to unlimited receivers.

The Bolt 4K Monitor Module RX 1500 includes a V-Mount or Gold Mount Battery Plate for power pass-through and an integrated hardware connector for SmallHD Smart 7 Monitors. It can receive metadata, timecode, and start/stop flags from most camera manufacturers and can also be managed via the Bolt App for iOS and Android.

Wireless camera control is available now for ARRI cameras, and a beta can soon be downloaded for RED® KOMODO®.
M2A Media, InSync and Hiscale release the first cloud-based frame rate converter

M2A Media has announced a collaboration with InSync Technology and Hiscale. The union of these three companies has brought to market a new cloud-based frame rate conversion service: M2A Connect | Cloud Frame Rate Converter. The service offers broadcasters and OTT platforms flexibility and scalability without the capex costs usually associated with traditional, hardware-based frame rate conversion. Many broadcasters find that they need frame rate conversion to handle unexpected peak demand or planned additional services for special events, and keeping proprietary hardware on premises for irregular use is costly and limiting. That is the reason for the creation of this service. The converter makes it possible for global broadcasters to accept content in any frame rate and to convert it to a high-performance feed for local output.

M2A Media and InSync Technology launch M2A Connect | Cloud Frame Rate Converter

M2A Connect | Cloud Frame Rate Converter is orchestrated through M2A Connect scheduling, which means live events can be scheduled for frame rate conversion as required, without manual intervention on the day.

Marina Kalkanis, CEO and Founder of M2A Media, commented: "As more and more broadcasters transition their workflows to the cloud, the demand to move traditional hardware-based services to the cloud also grows. We immediately saw the value of a cloud-based frame rate conversion service and were delighted to collaborate with InSync Technology to be the first service providers to offer exactly that. M2A Connect | Cloud Frame Rate Converter is market-leading and the perfect addition to M2A Connect's innovative, software-defined offering".
NEWS - SUCCESS STORIES
MOG and SWISS TXT announce a new partnership: Diving into the New Era of Crowd Journalism

SWISS TXT is a subsidiary of SRG SSR (Swiss public television) responsible for DMO and accessibility services, and one of the top 30 language providers worldwide. The partnership aims to further explore the technical and commercial potential of a revolutionary citizen journalism and social media exchange platform based on blockchain technology, and also to test this platform's potential for new business models and applications.

Within the project's scope, MOG has developed a collaborative production platform where anyone with a mobile phone, and without needing additional installations, can use a browser to stream live event videos in a wide variety of scenarios such as sports, news, music shows, and cultural events, to name a few. The platform allows citizens watching a live event to actively join the coverage by capturing the events with their own phones and easily streaming the recordings through the platform.

SWISS TXT supports such live-streaming with a mobile app named Citizen Journalist App. Due to blockchain technology, this app allows journalists to store videos and images decentrally in IPFS, to work in anonymity if necessary and to be rewarded for their content in ether. The app thus supports journalists in their mobile work and in distributing content on social media. Through a decentralized web platform (DApp), the contributions can be used for the dissemination of content.

The platform aims to empower journalists into gaining more control of their pieces with a wider library of live resources, while encouraging the audience to monetize their streams through a digital marketplace.
With seamless integration, Unreal Engine 4 is easily utilized as an additional render-blade within the Vizrt-controlled workflow.

Vizrt unveils details of its Augmented Reality set for BBC Sports

Vizrt has recently revealed details of the advanced virtual set graphics ecosystem it has built for BBC Sport in Salford, UK, where a small, simple studio has been turned into the broadcaster's "Pres 2" set and home studio.

"We had a small studio space at Media City which was not being used as much as we would have liked, so we decided to convert it into a green screen space. With the virtual design and rendering technology we now have a studio that has 5 different presenting positions and is able to house a variety of our sports output", commented John Murphy, creative director and head of Graphics for Sport at the BBC.
Since April 2021, BBC Sport has been using the Vizrt ecosystem to broadcast a variety of sporting events, including chosen programs from Match of The Day, Euros 2020 and the Summer Games, providing audiences with dynamic graphics to accompany sporting analytics and highlights during the show.

The Vizrt system that BBC Sport has used includes Viz Engine 4 with its integrated Unreal Engine 4 render pipeline and Fusion Keyer, coupled with the Mo-Sys StarTracker 6-axis optical tracking system and set designs from BK Design. The studio provided an operator-friendly workflow.

"Vizrt was proud to participate in BBC Sports' celebration of the UEFA Euro 2020 championships and the amazing summer games, and for us the BBC set stands out both in looks and functionality. With the hybrid Unreal 4-Viz Engine 4 studio we were able to solve many challenges for the BBC. The set responds to content change and provides viewers with meaningful information throughout the show", said Gerhard Lang, CTO for Vizrt.
Celebro chooses Quicklink ST500 (Studio-in-a-box) as part of Global Studio Project

Celebro Media, a provider of live TV services, has partnered with Quicklink as part of its new global studio project. Quicklink's ST500 (Studio-in-a-box) solution has been chosen to provide high-quality contribution facilities across Celebro's television studio spaces around the world.

As part of the global studio project, Celebro plans to open 80 new television studios across the world over the next two years, which will complement its much larger state-of-the-art facilities in London, Washington D.C., and Los Angeles. Roll-out has already begun with Miami and New York, with Paris, Brussels and Moscow next to open. In these locations, Celebro Media has chosen Quicklink's ST500 (Studio-in-a-box) to facilitate contributions back to its MCRs in Washington D.C. or London.

The ST500 (Studio-in-a-box) is a compact unit with built-in lighting and camera that can be controlled easily and efficiently from any remote location. The camera, lights and audio can be controlled from a Chrome web browser using the Quicklink Manager Portal, to ensure the highest-quality, in-frame broadcast content is received.

In order to facilitate Celebro's studio roll-out, the company has partnered with Regus - part of the IWG group, an organisation providing flexible office spaces in over 1,000 towns and cities across the world. As a result of this partnership, Celebro can very easily open studios and bureaus at locations where a news event is happening, such as a sporting event.
Broadcasting Center Europe (BCE) is going to cover the Skoda Tour de Luxembourg

BCE will produce, transmit and live stream the 81st edition of the Skoda Tour de Luxembourg 2021. BCE will be on site with its Outside Broadcast Van connected to multiple HD cameras. The action following the cyclists will be covered with motocams as well as aerial means (helicopter and airplane). Using BCE's mutualized playout services, the Skoda Tour de Luxembourg will be broadcast live over IP from BCE's broadcasting centre on RTL Télé Lëtzebuerg, Eurosport and L'Equipe. The live streaming of the Tour will be available in high definition (HD) on RTL.lu.
NEWS - BUSINESS & PEOPLE
Chauvet acquires Kino Flo In a move that will significantly expand its capabilities to serve the global film, studio, and broadcast markets, Chauvet has acquired Kino Flo, a manufacturer of lighting systems engineered for cinema and television production. “This represents an important step for our company in our ongoing mission to expand our broadcast, studio and film lighting capabilities,” said Albert Chauvet, CEO
of Chauvet. "Kino Flo has a longstanding tradition of innovation and a commitment to excellence, which makes the company an excellent fit with Chauvet's own philosophy. We are excited to invest in the future of Kino Flo and build on their technologies, while adding resources to accelerate product development and commercialization." Kino Flo will continue to serve its customers from its Burbank, California offices.
Riedel hires Oliver Zimmermann as Director of Manufacturing

Oliver Zimmermann has been appointed to serve as director of manufacturing for Riedel Communications. A physicist, supply chain expert, and change management specialist with more than a decade of experience in the manufacturing business, most recently in data and telecommunications, Zimmermann will focus on Riedel's supply chain and production processes to enable even greater efficiency and agility.

Zimmermann earned a doctorate in physics from Ruhr University Bochum and since then has leveraged his knowledge of models and processes to analyze systems, derive meaningful measures, and refine internal structures to help manufacturing organizations become more efficient. He applies state-of-the-art management models and optimization tools, such as digitization and artificial intelligence, to drive improvement.

He most recently served as director of operations for Telegärtner Karl Gärtner GmbH, with process responsibility for the supply chain of purchasing, logistics, warehousing, production, work scheduling, and industrial engineering. Zimmermann implemented agile and lean management methods and digitized and optimized the production and supply chain with tools such as SAP S/4HANA. In earlier management roles with Poligrat GmbH and Steinert Elektromagnetbau GmbH, Zimmermann likewise led digitization and process optimization initiatives to transform sales, production, and administration.
Telestream acquires Sherpa Digital Media

Telestream has announced the acquisition of Sherpa Digital Media, a provider of a live event hosting and distribution platform. Sherpa Digital Media is headquartered in the San Francisco Bay Area and has been serving its customers with its streaming platform for 10 years. The team will be fully integrated into the Telestream family.

The fully featured live and on-demand event platform enables customers to create live events and webinars easily. The platform includes breakout rooms, video hosting, marketing automation integration, secure streaming, a customized look and feel, and a high range of scalability.

"Sherpa Digital Media has built a solid platform that many rely upon to stream their live, interactive events, and we intend to continue its development to expand into new areas such as using our Wirecast product to produce events distributed on the platform", commented Dan Castles, CEO of Telestream.
Democratizing the IP transition

Running a cable from a camera to a mixer is simple. But what happens in an IP-based infrastructure, when it's no longer just a camera and a mixer? What goes on when you have a system where each device is designed by a different manufacturer? How do you make that system work seamlessly when there's no room for error in a broadcast environment?

IP migration times are long and the processes are tough. But competitiveness is pressing, and the need to grow and modernize is very important for all content creators. It's a complicated race but, fortunately, there are some shortcuts to complete it. Twenty years ago the AMWA was born: an association of vendors and users created with the democratic spirit of making the use and understanding of technology accessible to everyone in the broadcast production process.
The AMWA provides a meeting point for cooperation between companies that, outside the AMWA, are direct competitors. But in their workshops and meetings they strive to find the best common approach in the IP transformation race. TM Broadcast International has had the pleasure of having the testimonials of Brad Gilmer and Neil Dunstan. Brad Gilmer has been part of AMWA (Advanced Media Workflow Association) since the beginning, holding the position of Executive Director since that time, as well as in VSF (Video Services Forum). Mr. Gilmer is also at the heart of the JT-NM (Joint Task Force on Networked Media), which includes the SMPTE, VSF, EBU and AES associations. Neil Dunstan has ten years of experience in AMWA and is the Director of Membership and Marketing. His main task is to communicate the work of the association to the world and encourage media companies to contribute to its developments.
Brad Gilmer, Executive Director at AMWA

Neil Dunstan, Director, Marketing and Membership at AMWA
What is the AMWA?
What is its mission?
NEIL DUNSTAN: The AMWA has a worldwide membership, of which 25% are end users and 75% their suppliers. Together with four other organizations - AES, the EBU, SMPTE and VSF - the AMWA forms the "Joint Task Force on Networked Media".
NEIL: The whole point of the AMWA is to provide a place where there can be collaboration between end users and their suppliers. What I found really surprising when the NMOS work started was how companies that compete with each other to sell products had their engineers and developers sit down and work together on technical solutions that everyone could use.
The NMOS work, the preparation for IP-based infrastructures, is the third big project that the AMWA has taken on. First came the AAF (Advanced Authoring Format) work and then MXF (Material Exchange Format), both of which are widely adopted. There is still lots of work to do on NMOS (Networked Media Open Specifications).
That happened because everybody wants to have things that interconnect properly. The product developers build unique features into their products to compete with their competitors. The
THE WHOLE POINT OF THE AMWA IS TO PROVIDE A PLACE WHERE THERE CAN BE COLLABORATION BETWEEN END USERS AND THEIR SUPPLIERS
interoperability between all these products is the area where we can help. Our work is very practical. From the very beginning, when NMOS started up, our workshops were set up. At the first one we had 16 or 20 companies who came together in one room to connect their products and to make sure they all worked on a shared IP system. They left that
three-day event with much greater confidence that their products were what the market needed.

How do your workshops work?

BRAD GILMER: The workshops were first developed to address a problem. "The smartest people in the room" used to collaborate and develop solutions together, but
sometimes they tend to develop a solution that is maybe too complicated to be built. Or perhaps they just do not write down enough of the detail for regular humans to understand it and actually be able to build it. So the workshops that we created were a way to be sure that a number of people could actually do what is described in the specifications and standards. The first objective of the workshops is for people to test the documents they have written to ensure that others can understand them. The second is to be sure that what is written down actually can be done.
THE FIRST OBJECTIVE OF THE WORKSHOPS IS FOR PEOPLE TO TEST THE DOCUMENTS THEY HAVE WRITTEN TO ENSURE THAT OTHERS CAN UNDERSTAND THEM.
How can the AMWA help an organization move to an IP infrastructure? NEIL: With an SDI system, you always have one cable from one device to another. IP-based systems have much more in common with IT systems. Because of that, you have a challenge of understanding what is on the system and making sure that everything that is on it knows what else is there. Our industry now is moving much more quickly than it did when SDI came along. With the many cloud providers; plus COVID, where people are now having to make programs using cloud services;
AMWA NMOS workshop, January 2020.
AMWA booth at NAB 2019
suddenly, the media environment is changing in days and weeks instead of months and years. There is a whole education exercise for the media industry to properly understand how to go about building systems of this type, because it has never been done before.
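To picture the discovery challenge Dunstan describes, here is a toy sketch (ours, not from the interview) of the registry idea behind the AMWA IS-04 Discovery & Registration specification: every device registers what it offers in a common registry, and any controller, from any vendor, can query what is on the system. The function names and resource fields are simplified illustrations; a real IS-04 registry is an HTTP service with a much richer data model.

```python
# Toy in-memory sketch of the IS-04 registry idea: devices register
# themselves, and controllers query the registry to learn what exists.
# (Illustrative only -- real IS-04 resources carry versioned UUIDs,
# hrefs and many more fields, and the registry is an HTTP service.)

registry = {"nodes": {}, "senders": {}, "receivers": {}}

def register(resource_type, resource):
    """A device announces a resource, keyed by its id."""
    registry[resource_type][resource["id"]] = resource

def query(resource_type, **filters):
    """A controller asks what is on the system, optionally filtering."""
    return [r for r in registry[resource_type].values()
            if all(r.get(k) == v for k, v in filters.items())]

# Two different vendors' devices register independently...
register("senders", {"id": "cam-1", "label": "Camera 1", "format": "video"})
register("senders", {"id": "mic-1", "label": "Mic 1", "format": "audio"})

# ...and a third vendor's controller can still discover them.
video_senders = query("senders", format="video")
```

The point is the decoupling: the camera vendor and the controller vendor never need a private protocol between them, only the shared registry API.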
NMOS has been going for five years now. There's still work to be done to educate people. That's why articles like those in your magazine help more people to understand.

What is NMOS?

BRAD: NMOS is not really a product but a set of API
specifications, and it’s up to vendors to put those APIs into their products. Sometimes people talk about our work as if it’s a product. I am not in charge of a company that makes NMOS products. They are made by Sony, Grass Valley, Imagine, Telestream, and any of the normal vendors you know. They make
products, and they can decide to use it or not. They can use it or they can use their proprietary solutions, or a mix: they can support some of the NMOS work and do some things in a proprietary way. How are you going to tell the facility, take this stream to this monitor, take this camera to this monitor? How are you going to do tallies? How are you going to show that the video stream and the audio stream from this camera belong together? These are the things that the AMWA started to address with NMOS. It really is an industry collaboration and there are lots of different organizations supporting the work.

NEIL: I think it's relatively easy to see what the benefits are for a customer or an end user, because they can pick products from any of the suppliers using NMOS APIs and assemble their system with them. The suppliers can worry about the features their products have, make their features better than their competitors' and sell their products because they have a more complete feature set. The interoperability between their products and their competitors' is not something that customers ever need to worry about. Where I felt that the AMWA was doing the correct thing was when many new members joined to be part of the discussion because they recognized that
NMOS Workshop 2018.
the open specifications - the Networked Media Open Specifications, NMOS - and the "open" part of it were really very important to them.

How does it work, and who benefits?

BRAD: To illustrate that, you can imagine that it's quite easy to connect different products using SDI, of course. Well, if you take two IP-based products and
you put them on a network, how does that receiver know to join the stream from that sender? Let’s say you’ve got a camera and a monitor. Well, if the camera and the monitor are from the same manufacturer, they could have some sort of private communications between the two of them, and the sender could say, “I’m over here”, and a control system could say to the monitor, “Now,
join that stream from that camera”, in some closed way that nobody else could understand. It would work perfectly fine, just like you were hooking two SDI devices together. Now, what happens if you add another monitor from another manufacturer to the mix, and another camera from some other manufacturer, and you add a playout server from another manufacturer?
The manufacturers saw that this was going to be a real barrier to adoption of IP. The basic thing of connecting a sender to a receiver is fundamental, and if there's not a common way to do that, there's not going to be any market for IP equipment. For example, the AMWA IS-05 specification is exactly that: a common API that any manufacturer
THE BASIC THING OF CONNECTING A SENDER TO A RECEIVER IS FUNDAMENTAL AND IF THERE’S NOT A COMMON WAY TO DO THAT, THERE’S NOT GOING TO BE ANY MARKET FOR IP EQUIPMENT.
can implement, and if a controller tells a receiver using this API to join the stream from some other camera, it's all going to work. Now, in that environment where you have several different vendors, they all can do one software development rather than many different software developments, just to get that base level of connections working. That's a very concrete example of where NMOS specifications are helping the industry move toward IP. It is about the customer's business needs and the practical implementations. If you are in a vision control room and you select Camera 1, you want to make sure that on Camera 1 a red light comes on, so the cameraman knows that it's on air and the person who's looking at Camera 1 knows it's on air. In an IP infrastructure, that did not exist automatically. It had not been done before, and we had to find a way of making it work.
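To make the sender-to-receiver example concrete, here is a sketch (our illustration, not code from AMWA or the interview) of the kind of request an NMOS controller stages against a receiver's IS-05 Connection API. The host name and UUIDs are invented; the URL path and field names follow the published IS-05 "staged" endpoint, and a real controller would send this body in an HTTP PATCH.

```python
import json

# Invented identifiers -- in a real facility these come from IS-04 discovery.
receiver_host = "receiver.example.local"
receiver_id = "11111111-2222-3333-4444-555555555555"
sender_id = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"

# The IS-05 "staged" endpoint of a single receiver.
url = (f"http://{receiver_host}/x-nmos/connection/v1.0/"
       f"single/receivers/{receiver_id}/staged")

# Body of the PATCH: stage the sender and activate the connection at once.
patch_body = {
    "sender_id": sender_id,
    "master_enable": True,                         # take the connection on air
    "activation": {"mode": "activate_immediate"},  # apply staged params now
    "transport_file": {
        "type": "application/sdp",
        "data": "v=0",  # in reality, the full SDP from the sender's /transportfile
    },
}

payload = json.dumps(patch_body)  # what the controller would PATCH to `url`
```

Because every vendor exposes this same endpoint, one controller implementation can connect any IS-05 sender to any IS-05 receiver, which is exactly the base level of interoperability Gilmer describes.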
How did you perceive that NMOS was necessary?

BRAD: Another gentleman named Richard Friedel and I were at a conference maybe five years ago and we were having a discussion. We said that the investment in IP and also in IT seems tremendous: billions of dollars. I used to work at Cartoon Network, TNT and other Turner organizations, Richard was working at FOX TV, and both of us were always being asked for more content on more devices. "You can't use SDI for me to send Cartoon Network to your phone; Netflix is not going to use SDI to do so". We had this conversation in front of everyone at the conference and we asked everybody: "How many of you at this conference think you will be buying an SDI router in one year?" A lot of hands were raised. "How many of you think you will be buying an SDI router in three years?" This time fewer hands were raised. "How many of you think you will be buying an SDI router in five years?"
No hands were raised at all. And we thought: "If we're going to move to IP, we have to start now, because there's no way to do this over IP right now".

Is there any other action that the AMWA is taking to increase the educational level of the whole industry?

NEIL: From an AMWA point of view, NVIDIA, Riedel, Sony and the BBC all worked together to put together a package where the tools you need to make NMOS work can be downloaded and used by suppliers who have not been part of the discussion so far.
On our website, there is information about where you can go to download the content and how to use it.

BRAD: Sony made a major contribution to this project. They have been a major force behind open-source software that people can download to get started with NMOS. It worked like this: NVIDIA opened a "docker container", and all of the Sony open-source software, plus some open-source software from the BBC and others, is there and ready to go. It's an example of how the AMWA is a place where different organizations, who may be competitors, come together to collaborate to
help people learn more about IP facilities and the transition to IP.

Do the AMWA developments help markets other than traditional broadcast?

NEIL: In the beginning, broadcasters only bought expensive broadcast products. We're realizing now that, quite apart from theaters, churches and educational establishments which buy ProAV, there is a need for this technology beyond the most expensive products. The benefits of the NMOS work can reach a much larger audience. Although
ProAV products might cost 10% of the cost of a broadcast product, there are hundreds of thousands of them sold. If that development work can be used cost-effectively in ProAV products, that will be great for everyone. So NMOS can have a long future in ProAV, a market that has nothing to do with broadcast.

BRAD: The other thing I would add is our work on security. If you have a set of APIs that can be used to control a facility, that obviously poses a security challenge in an IP-based world. We recognized that early on. We got a group of chief security officers from media companies and media experts together and have published specifications that, if followed, tell you how to securely access the APIs we use to connect a camera to a monitor, for example, using current best practices, so broadcasters can be confident that those systems are secure.
Summary

The broadcast industry has changed a great deal since SDI appeared 25 years ago. The move to IP-based systems is necessary, but it is very big and very complicated. The industry has to go back to the beginning and rethink how the technology works, and that is a lot for both suppliers and their customers to fully understand. The truth is that systems over IP are no longer the future, but the present of broadcasting. As associations such as AMWA have exemplified, the democratization of knowledge is important for there to be real competition between all members of the market. If manufacturers and end users do not agree to collaborate, the transformation to systems over IP will be impossible. This is why the work of the AMWA is so important, and not only of this organization in particular, but of all those who strive to promote joint work and to create a common and free language for broadcast technology.
How are we going to dress our new camera?
Is it reasonable to make the choice of a camera conditional upon its accessories? Are they really that convenient or necessary? Are we acquainted with every feature of all the elements available on the market? And how do we use them to get the most out of them? Text: Luis Pavía
We will endeavor to answer all these questions and some others. First of all, we would like to establish some premises. The first is to clarify that our main purpose is to explain the usefulness of most of the accessories currently available, something that is not always as clear as it seems at first glance. In addition, we will attempt the complicated exercise of minimizing mentions of brands or models, so as not to build a commercial catalog; this way, the content can also serve as a reference in the future
while maintaining its validity over time. And finally, we have grouped the diﬀerent elements into functional blocks. Let’s get to it! It is very common to find situations in which a lot of attention is paid when choosing a camera for a project or to turn it into our main tool, and very little to its versatility or ability to adapt to changing circumstances. Sensor size, resolution, and dynamic range are usually the main and unchanging parameters, but we will discover other details that
CLIENT PROFILE AND SATISFACTION WILL BE KEY TO MANY OF OUR DECISIONS
can make a big difference in the future. Is it because our cameras are not good enough? Of course they are! So why do we need the accessories? Because they are precisely what allows us to enhance their capabilities and, on some occasions, we should even take them into account when choosing our model. The reason is that minor features may or may not enable adding devices that expand a camera's capabilities far beyond what we would have initially imagined, or occasionally need, such as, for instance, RAW output or several outputs with different signals for different monitors in parallel.
This enhancement possibility makes special sense in cases in which we only occasionally need to cater to certain needs for certain clients. By renting the right equipment for these specific situations, we can deal with them without having to invest in elements that we may never use again.
In such cases, client profile and satisfaction will be key to many of our decisions. It is not about convincing them of the success of our approach, but about knowing when a client is able to distinguish and assess the different types of results that can be achieved, how much they are willing to pay for them, and when their priorities differ.

Due to the length of the topic to be covered, we will split the content into two parts. And, as you know we like to do quite often, we will try to provide content that is far from the typical collection of brands and models, focusing instead on the key features offered by each range, so as to identify what we should look for and what they contribute towards achieving what we are aiming for.

Optics
In this section we could publish not only a feature article but a whole encyclopedia, so summarizing this topic will be quite a challenge. Features such as the mount or bayonet on the one hand, and the adaptation to the size of the sensor on the other, will be the first physical characteristics that obviously constrain their feasibility in our cameras. The need for power supply and software compatibility will be part of the specifications that we must also check carefully when making our choice.
As for the mount, each manufacturer usually has its own and, in some cases, they also build to meet some standard, like the well-known and firmly established PL mount from the world of classic cinematography. But even with the same mount from the same manufacturer it is necessary to ensure that the chosen optics are
suitable for the required sensor size, that is, checking that the image circle so created perfectly covers the entire sensor area without vignetting. This happens because it is very common to find optics made for small sensors that would be useless in larger sizes.
A GOOD NUMBER OF CURRENT OPTICS CAN INCLUDE INTERNAL SERVOMOTORS FOR CONTROLLING ALL OR SOME OF THEIR MECHANICS: IRIS, FOCUS AND ZOOM.
It does not mean that they are of worse quality, although it is true that the greater difficulty and demands of building larger optics tend to reserve the extra quality found in the higher ranges for versions designed for larger sensors, while among those designed for smaller sizes it will be easier to find clearly cheaper models. Let us also remember that an optic is suitable for any sensor equal to or smaller than its nominal size; note that in such cases we must apply the adequate correction factor to properly interpret its focal length. Once physical compatibility is assured, it will be critical to be clear about what type of images we intend to create and then decide on the appropriate optics for
our visual narrative. That can also change for each project. In general, and any other features being equal, a fixed focal length will offer more brightness, less distortion and better contrast and sharpness than a zoom. And among zooms, the greater the range of focal lengths, the more compromised all its other features will be. As always, features are never good or bad; they are simply adequate to our needs or not. There are excellent zoom optics that offer a wide range of focal lengths, but more often than not they are not the brightest, and we will only find them available for small sensor sizes, such as the classic 2/3 inch in broadcast cameras fitted with B4-type mounts.

It is also important to consider electrical compatibility. A good number of current optics can include internal servomotors for controlling all or some of their mechanics: iris, focus and zoom. This is the most common situation when working with optics and camera bodies from the world of photography. It would be very unusual to find compatibility problems in these cases, as long as the camera and optics are from the same manufacturer. Although what should really concern us in these cases
is software compatibility. Software…? Yes, that is correct. Most modern lenses are not just optics and precision mechanics: in many cases their interface is much more than a simple electrical control of the optics' servos from the camera body and, in addition, they communicate through sophisticated information exchange protocols. In these cases, we must be extremely cautious, and not only with initial compatibility, but also with the consequences of a potential firmware upgrade (the internal control software, something like the operating system of each device).

Finally, let us open a small parenthesis in this section to deal with adapters. These are rings manufactured to facilitate the use of optics from one mount on bodies with a different one. In this case the possibilities are very limited since, due to the build of each mount, combinations that are virtually impossible do exist. There is usually a perceptible loss of luminosity and, in addition,
in many cases some or even all of the features relating to communication or control between the optics and the body are lost. There are some of recognized quality and benefits but, in general, they entail a more than reasonable compromise solution when the budget (or any other conditions) does not allow access to the optics which we would like to work with to ensure that the client will be satisfied with the final result.
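The correction factor mentioned above for optics used on sensors smaller than their nominal size is simple arithmetic. Here is a rough sketch; the sensor dimensions below are typical published figures, used purely as illustration and not tied to any specific camera:

```python
# Rough sketch: interpreting a lens's focal length on a smaller sensor.
# The crop factor is the ratio of a reference diagonal (full frame,
# 36 x 24 mm) to the sensor's diagonal; the "equivalent" focal length
# is the nominal one multiplied by that factor.
import math

def diagonal(width_mm: float, height_mm: float) -> float:
    return math.hypot(width_mm, height_mm)

FULL_FRAME = diagonal(36.0, 24.0)  # ~43.27 mm reference diagonal

def crop_factor(width_mm: float, height_mm: float) -> float:
    return FULL_FRAME / diagonal(width_mm, height_mm)

def equivalent_focal(nominal_mm: float, width_mm: float, height_mm: float) -> float:
    return nominal_mm * crop_factor(width_mm, height_mm)

# A 50 mm lens on a Super 35-sized sensor (~24.9 x 18.7 mm) frames
# roughly like a longer lens would on full frame:
print(round(crop_factor(24.9, 18.7), 2))        # ~1.39
print(round(equivalent_focal(50, 24.9, 18.7)))  # ~69 mm
```

The same ratio works in reverse when comparing optics designed for different formats, which is why a focal length alone says little until the sensor behind it is known.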
Recording media (cards and disks)
We could say that these are the celluloid of the 21st century. They are the physical media on which our recordings will be stored. They range from the most generic and economical memory cards, such as CompactFlash or the different variants of the SD family, to the most widespread proprietary XQD, SxS or P2 cards, or others even more specific. Each camera or recorder will require its own, and
all of them have different ranges that we will have to reckon with and select from based on our needs. The most striking specification is usually capacity which, funnily enough, will be limited by the management capacity of our camera. It is not very common but, as a consequence of the different file systems required for managing certain capacities, there may be cases of cameras that do not recognize the latest-generation or large-capacity cards. In these cases, a firmware update for the camera could solve the issue, but let's make sure that this will not generate undesired side effects or other incompatibilities, as we have just seen could happen with the optics. Yet we believe that the most relevant specification of a card is not its capacity but its transfer rate, or speed. We must read the specifications carefully: some cards feature fairly high rates, but only when reading, and the one that really has an impact for us when
recording is writing speed, since it will have an effect on the maximum quality at which we will be able to work. In this case, we must base our decision on the specifications of the camera or recording device that we are using and, therefore, be acquainted with the different rates required for the qualities we want to work with, and then choose the card that meets or exceeds those specifications. Exactly the same is true for disks. Nowadays it is unlikely that mechanical hard drives will be used, in view of their sensitivity to movement; the majority are solid-state drives (SSDs). We will approach the concepts of capacity and transfer rate in the same fashion. Although these parameters will generally be significantly higher than for cards, it will also be common practice to use them in more demanding situations, such as in external recorders.
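The check described above, matching a card's sustained write speed to a recording format, is a one-line conversion. A small sketch follows; the bitrate and headroom figures are illustrative assumptions, not taken from any particular camera's spec sheet:

```python
# Sketch: does a card's *sustained write* spec cover a recording format?
# Video bitrates are quoted in megabits per second (Mbps), card speeds
# in megabytes per second (MB/s), so divide by 8 and add headroom for
# filesystem overhead and bitrate peaks.
def required_write_mb_s(bitrate_mbps: float, headroom: float = 1.25) -> float:
    """Minimum sustained write speed (MB/s) a card should guarantee."""
    return bitrate_mbps / 8 * headroom

# e.g. a hypothetical 400 Mbps 4K codec:
needed = required_write_mb_s(400)
card_sustained = 90.0  # from the card's sustained *write* spec, MB/s
print(needed)                     # 62.5
print(card_sustained >= needed)   # True
```

Note that the comparison uses the card's sustained write figure, not the peak read figure that packaging tends to advertise.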
External recorders
Very much in line with the previous point, these devices are one of the items that allow us to
REGARDING EXTERNAL RECORDERS, THERE ARE A FEW CONSIDERATIONS TO TAKE INTO ACCOUNT. SIZE AND PORTABILITY ARE THE FIRST DECISION ELEMENT.
significantly expand the capabilities of our camera. As a result of the processing speed and the high transfer and recording rates required, most cameras are generally not capable of recording RAW-type formats internally. But they do have a direct data output from the sensor, a signal that can be recorded on these devices. And not only in RAW format, but in some others as well, always depending on the needs of each specific project. That
is the reason why in our introduction we mentioned the importance of certain features as a decision driver, not for using them right away or permanently, but to count on them should the need arise.
In these cases, there are a few considerations to take into account. Size and portability are the first decision element. Then follows connectivity, which is usually via SDI or HDMI links: from the simplest devices, which come with only one input in one of the formats, to the most sophisticated and complete ones, which feature both types of inputs (bridged to the respective outputs for possible additional cascaded devices) and can even offer the possibility of switching formats between them.
They are powered by one or two batteries or by an external power supply, which may or may not allow long recording sessions. It is usual that, if they have a connection for two batteries, these work in relay mode, thus allowing permanent powering for as long as we have enough charged batteries.
The same goes for the number of slots for cards or disks. If there is only one, our continuous recording capacity will be limited by the capacity of the card or disk, while if two are available, continuous recording for long periods, or even recording in two diﬀerent formats simultaneously, will be possible. Last, it is common for them to be equipped with a screen to monitor the signal. Here again, we find big diﬀerences. From screens which -due to size, resolution or color
precision constraints- only allow for information on the existence of a signal, to those that provide image monitoring by applying LUTs or assessing it with tools such as a histogram, vectorscope and/or waveform monitor.
Typically, they record the audio that arrives synchronously and embedded with the video, and the only thing that needs to be done is simply to check that all the channels sent by the camera can be recorded. This requirement is usually fully met, since it is very common for them to handle between four and eight audio channels.

Monitors (Video assist)
In this case, specificity is much higher. There is a wide range of possibilities, from small portable monitors to medium or large field monitors, which enable very different uses. But much the same as in the previous case, size, portability and connectivity will be the key elements when choosing.

Unlike in the previous instances, these focus all their functionality on a reliable recreation of the incoming signal; either as a mere presence check in which quality is not decisive, or up to the highest precision in sharpness and color. In these cases, and even more so in current times when RAW recording is so widespread, it is important that LUTs can be applied. In this way, and given this format's unique exposure requirements, a signal similar to the final result that will be achieved after the color grading process can be provided, thus facilitating the evaluation of the signal.
Hollyland Mars 400 PRO
Wireless transmitters
The purpose of this equipment is to get the signal generated by the camera to a more or less distant place, dispensing with any type of wiring. The relevant range can be from a few meters up to the other side of the Earth, literally. To achieve this, there are different technologies and possibilities depending on the needs. Thus, we have autonomous wireless equipment in which the transmitting unit collects the camera's signal and sends it to the receiver through its own point-to-point link. At the point of reception, the
signal is reconstructed as if it were the other end of a cable. In these cases, the technology is usually supported by several channels on free, public frequency bands such as Wi-Fi or radio, splitting the signal between them and then recomposing it using a technique known as bonding, without the need to rely on any additional elements or services. In these cases, signal quality is usually guaranteed. Range is limited, and decreases with barriers such as walls or structures between the transmitter and the receiver. Even latency is an aspect that is quite under control. This delay suffered by the signal from the moment it leaves the camera until it becomes available at the receiver's output does not usually have an impact on feasibility, but it will be an important aspect to watch out for when selecting the ideal equipment for our production. As an alternative to cover theoretically infinite
THE RELEVANT RANGE CAN BE FROM A FEW METERS UP TO THE OTHER SIDE OF THE EARTH, LITERALLY. TO ACHIEVE THIS, THERE ARE DIFFERENT TECHNOLOGIES AND POSSIBILITIES DEPENDING ON THE NEEDS.
distances we have other systems, colloquially known as backpacks. These devices rely on data transmission technology through mobile phone providers and use one or more channels to deliver the signal to the receiver. The big diﬀerence in this
case lies in the fact that transmission is achieved through the standard public Internet. The great advantage here is that range is theoretically unlimited, because the receiver will be in a place, such as a production company or a television station, that will foreseeably have plenty of connectivity. There are limitations, though: the range is restricted to broadcasting from a place with mobile data coverage, quality will be conditional on network availability, and latency can be significant. And this can limit certain possibilities. Take, for example, a remotely controlled camera: a delay of just a few seconds runs the risk of causing inaccurate movements.
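The bonding technique mentioned above can be pictured with a toy sketch: the transmitter spreads tagged chunks of the stream across several links, and the receiver restores the original order by sequence number. Real systems add retransmission, per-link pacing and forward error correction; none of that is modeled here:

```python
# Toy model of channel bonding: split a stream round-robin across
# several links, then reassemble by sequence number at the receiver.
from typing import Dict, List, Tuple

Packet = Tuple[int, bytes]  # (sequence number, payload)

def split_round_robin(chunks: List[bytes], n_links: int) -> Dict[int, List[Packet]]:
    links: Dict[int, List[Packet]] = {i: [] for i in range(n_links)}
    for seq, chunk in enumerate(chunks):
        links[seq % n_links].append((seq, chunk))  # tag before sending
    return links

def reassemble(links: Dict[int, List[Packet]]) -> bytes:
    tagged = [pkt for link in links.values() for pkt in link]
    tagged.sort(key=lambda p: p[0])  # restore original order
    return b"".join(chunk for _, chunk in tagged)

stream = [b"frame0", b"frame1", b"frame2", b"frame3", b"frame4"]
links = split_round_robin(stream, n_links=3)
assert reassemble(links) == b"".join(stream)
```

The sorting step is why latency matters: the receiver can only release data once the slowest link has delivered the chunks that precede it.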
Optics remotes
These elements also offer a wide range of possibilities. Common in the world of broadcast studios and cinema, optics remotes have seen their use extended to lower-budget productions. The simplest
ones are mere mechanical controls that, by means of a gear, handle the ring to which they engage in a smooth and precise manner. The most common use is for focus, but they are also practical for controlling the iris or even the zoom. In cinema-type optics the necessary toothing is part of the rings themselves, with standardized measurements to mesh with the continuous thread of the relevant remote, while in optics not fitted with this, a toothed part has to be attached to the relevant ring. Naturally, quality of construction and finishing varies between the various ranges of the different manufacturers. With more comprehensive features as we move up to the higher end, we find models that include physical stops to facilitate, for example, transfocal operations. For more sophisticated operation, there are models controlled by a servomotor. Among their main benefits: they allow memorizing various positions,
controlling the speed or making it variable with great precision without the need for an expert hand, and even with the possibility of remote control. This remote operation can be relatively close, with the control unit connected with a cable, or from a large distance when using a radio link connection for communication between the control and the servo systems. These are the most common options, for example for hot heads or cameras fitted in places that are hard to access while filming. Again, the various ranges oﬀer more or less benefits in terms of functions, distance, precision, etc. Although they will be reviewed in their relevant section, we must mention that all these controls require a rig-type support. This support is normally a pair of cylindrical bars running from the base of the camera to the front of the optics and whose purpose is precisely to provide a mechanical support for this and other types of accessories.
Continuing with the range of models, it is worth mentioning the electromechanical controls that actually operate the servomechanisms fitted on the optics through an electrical connection. Similar to the ones mentioned above, their main feature is that the servo systems are fitted into the optics, and we only need the appropriate electrical connection for operation. These are normally found attached to the tripod handles in broadcast studios for large studio or event optics. Naturally, it is an essential requirement that they be the right model for the optics that we want to control.

Microphones
Again, we are dealing with an area that would itself provide content for another encyclopedia. Summarizing as much as possible, these are the well-known devices that allow us to capture sound, but all their advantages and possibilities are not always clear.

The first thing that we must take into account is the number of input channels and the connector type, XLR being the most common in the professional field and the 3.5 mm mini jack the typical one in cameras from the photography environment. The reliability, mechanical and electrical robustness, and performance of an XLR connector are unquestionable. But once plugged in, how is sound picked up?
There are several classes of microphones based on construction, although we will focus on two. In the first place, condenser microphones, usually of higher quality and more sensitive, but more delicate and traditionally vulnerable to humidity. In addition, they require power to function, hence many cameras have a selector to correctly manage their input, both with regards to level and power supply, but only in XLR connections. Secondly, the dynamic microphones, which oﬀer somewhat lower sound quality, but are extremely robust and reliable in
WE CONSIDER THAT THE MOST IMPORTANT ASPECT IS DIRECTIONALITY, THAT IS, THE DIRECTIONS WITH RESPECT TO ITS AXIS FROM WHICH SOUND IS CAPTURED WITH GREATER OR LESSER AMPLITUDE.
practically any environment and situation. Attention, when we mention quality we are referring to microphones that are comparable in all other aspects. Let us not make the mistake of comparing ‘the best’ of one type with ‘the worst’ of the other.
More features to take into account are sensitivity, which indicates the minimum signal at which it will produce a correct recording; response curve, which indicates the range of frequencies to which it responds under optimal conditions; and signal/ noise ratio, which indicates the ability to separate clean sound from internal electronic noise.
But beyond physical characteristics, we consider that the most important aspect is directionality, that is, the directions with respect to its axis from which sound is captured with greater or lesser amplitude. The most common patterns are: omnidirectional, when sound is picked up equally from any direction; bidirectional, when sound is captured from both sides but very little from the front; directional, when it mostly picks up sound coming only from the front; and cardioid, which picks up from the front and the sides, but whose sensitivity diminishes as we move away from the central axis. As this sensitivity curve is lengthened forward, we find the supercardioid and hypercardioid variants.

By physical shape we distinguish three large groups: shotgun, the type typically mounted on ENG cameras; handheld, usual on stage and in interviews; and tie or lavalier microphones, the tiny ones that are usually attached to clothing near people's necks.

Almost all combinations of these specifications can be found on the market, so our sound technician will have to choose based on each particular scenario, purpose and need.
Microphones must always be connected to the camera or to the recording device chosen for our production. The connection to the camera is always wired and needs its own audio input, since even the wireless models end in a receiver that connects to said input. And in this sense, there is a caveat which we do not always take into account: in the case of wireless connections, it is common practice to perform a double conversion. The microphone
transmitter makes a digital transmission that the receiver converts into analog in order to connect to the camera’s XLR, which again converts into digital when recording. To avoid this, there is already a wireless receiver model that allows digitally connecting to the camera body without going through the XLR connector, thus preventing such dual conversion and facilitating access to a greater number of independent audio channels.
Audio recorders
This type of equipment, dedicated exclusively to audio recording, makes production much more flexible, since it allows increasing the number of independent channels, which can later be treated with much greater efficiency in the post-production stage. Being able to control each channel separately ensures that the final mix can be rendered much cleaner, by emphasizing or attenuating each source separately at all times.

The simplest models already offer several independent channels, and the most sophisticated ones even have time code capture, which is very convenient for subsequent syncing with the image. If you don't have it, don't forget to use a clapperboard at the beginning of each take, which also makes subsequent synchronization much easier. And don't let anyone say they don't have one because, in the worst-case scenario, just a clap in front of the camera is an emergency solution that has been around for decades.

The range of models is wide, and we will only have to check the power options if we want autonomous operation, the number of channels available to suit our needs, the storage options, which are usually memory cards, and the connections, which will typically be XLR.
Matte boxes
This device is that flapped frame placed in front of the lens to act as a lens hood and prevent stray light from entering, thus avoiding the effects caused by flaring and undesired haloes. We will choose a model that comes with one, three or
four flaps depending on the directions from which we need to block the stray light. We will have to adjust the flaps depending on the focal length of the optics in use so as to avoid darkening the image. They are supported on the same type of rig as mentioned for the optics remotes and, depending on the manufacturer and model, they can also work as filter holders. In this case, it will be important to make sure that the size is sufficient to avoid vignetting effects. It is advisable to choose an oversized matte box, since the filters, once mounted, will be at a certain distance from the front lens.
Filters
Another endless range. Build quality is absolutely critical, since it is of little use to invest a significant part of the budget in excellent optics if they are going to be fitted behind glass that adds unwanted imperfections. Fortunately, the range is very wide and there is a lot to choose from.
In this section it is interesting to classify the options by construction: round, to be screwed directly onto the optics and which must have the exact diameter or use a larger one with an adapter; and square or rectangular ones, without frame, to mount on the filter holder or matte box. In this case, close attention must be paid to the diﬀerent standard measures, always
considering the exception mentioned in the matte box section regarding filter holders. And in terms of types, the most common ones are the ND, so as to dim the amount of light and work with more open diaphragms, and those for color temperature conversion, polarizers, gradients, color... The options are endless.
Just one last note before closing: We must take into account that the filters also exist in a flexible format, in rolls of about one meter wide and several meters long; and they are the ones that we will use to control or tint the diﬀerent light sources, thus opening up a new and enormous range of creative possibilities.
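The strength of an ND filter is usually quoted as an optical density, and converting between density, stops of light reduction and transmitted fraction is straightforward arithmetic. A small sketch of the relationship, using standard photographic definitions:

```python
# ND filters in numbers: optical density, stops of reduction and
# transmitted fraction are three views of the same quantity.
import math

def stops_from_density(density: float) -> float:
    """An ND of density d transmits 10**-d of the light, which equals
    a reduction of d / log10(2) stops (each stop halves the light)."""
    return density / math.log10(2)

def transmission(density: float) -> float:
    return 10 ** -density

print(round(stops_from_density(0.9), 1))  # 3.0 stops for an ND 0.9
print(round(transmission(0.9), 4))        # 0.1259 of the light passes
```

So the classic ND 0.3 / 0.6 / 0.9 series corresponds to roughly one, two and three stops, which is why densities can simply be added when filters are stacked.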
Batteries
This is yet another of those sections for which we could create a full feature. Like all electricity storage devices, their two main technical features are voltage (in volts) and capacity (in Ah, or ampere-hours). But in reality, they are very easy to classify, because their distinguishing feature is always the form factor and connection type. Each manufacturer has its own proprietary format for the smallest models, while the large ones, similar in size (and weight) to building bricks, tend to meet some standards. In this case, we would dare to say that one of the most common standards does not belong to a camera manufacturer,
IN THE CASE OF SMALL BATTERIES, GIVEN THE SPECIFICITY OF THE MODEL REQUIRED FOR EACH CAMERA, WE ARE FACED WITH A CURIOUS DILEMMA: THERE IS NOT MUCH TO CHOOSE FROM FOR EACH CAMERA MODEL, BUT WE COULD NEED DIFFERENT BATTERIES IF WE USE DIFFERENT RANGES OF CAMERAS, EVEN IF FROM THE SAME MANUFACTURER.
but to a company that is mainly dedicated to the design and manufacture of batteries. In the case of small batteries, given the specificity of the model required for each camera, we are faced with a curious dilemma: there is not much to choose from for each camera model, but we could need diﬀerent batteries if we use diﬀerent ranges of cameras, even if from the same manufacturer. Once the required model for our camera(s) is known, the dilemma will be whether to use the original brand model or a generic model from other manufacturers. Our experience tells us that original ones tend to have the best compatibility, performance, reliability, and long-term charge and discharge life cycles. In third-party manufacturers it is possible to find batteries that come close to such specifications at somewhat lower prices. But if we go for models with a clearly lower price, we can be sure that the
difference in features will be significant. And there are situations that can be very awkward, such as working in environments with temperatures or humidity other than the comfortable ones. The turning point here will be the balance between budget and reliability needs. When dealing with large batteries, it is much easier to find models with excellent features and benefits that equal (and even exceed) those offered by the camera manufacturer itself. Another aspect to keep in mind is that batteries are not only "camera accessories": they are also "accessories for accessories". This is because many of the accessories discussed in this article also need them to function, such as the external recorders, wireless transmitters or torches that we will see below.
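The two specifications mentioned above, voltage and capacity, combine into a back-of-the-envelope runtime estimate: multiply them to get watt-hours, then divide by the load's power draw. The figures below are illustrative assumptions only, not taken from any specific battery or camera:

```python
# Back-of-the-envelope battery runtime: capacity in watt-hours
# divided by the load's power draw in watts.
def watt_hours(voltage_v: float, capacity_ah: float) -> float:
    return voltage_v * capacity_ah

def runtime_hours(voltage_v: float, capacity_ah: float, load_w: float) -> float:
    return watt_hours(voltage_v, capacity_ah) / load_w

# A hypothetical 14.4 V / 6.6 Ah "brick" powering a 25 W camera rig:
print(round(watt_hours(14.4, 6.6), 1))         # 95.0 Wh
print(round(runtime_hours(14.4, 6.6, 25), 1))  # 3.8 h
```

In practice the usable figure will be somewhat lower (cold weather, battery age, inverter losses), so the estimate is best treated as an upper bound when planning how many bricks to charge.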
Torches
Torches are small sources of continuous light that we generally use to fill in the illumination of subjects at close range. They were more necessary in the times when sensors were less sensitive than the current ones, but their high consumption made the power supply heavy and limited. Since the advent of LED lighting technology, with a light output that allows significant amounts of light for reasonably long periods of time and with a wide range of adjustment in both color temperature and intensity, a meaningful use has been found for them once again.

Their main purpose will not be to illuminate the scene since, being relatively close to the subject, they tend to generate very harsh highlights and shadows. But in ENG or feature report environments, they are very suitable for filling shadows, attenuating the effect of an intense side source, and even providing that sparkle that we like so much in the eyes of our protagonists.

Ideally, they should be lightweight and offer good performance so that the batteries provide us with the maximum usage time. Other features that we should assess are whether they allow adjusting intensity or color temperature, and whether they come with some type of diffuser or color filter.

There are camera manufacturers that also offer this device as an accessory. In these cases, we find some other interesting additional features worthy of assessment, such as the possibility of turning the torch on and off in synchronization with the recording, or taking power from the camera itself. Although this feature is very handy, it will not always be the best solution; as usual, it should be considered based on the needs of our projects. Since this is another one of those devices that requires power to operate, and given the huge number of manufacturers currently available, each one often chooses to build in battery holders that are compatible with the standard models of the main camera manufacturers.
Adjustment charts and color charts
These easy-to-use items are part of a number of little things that we can't live without once we've tried them. They consist of a holder with patches of different colors, and they exist in different sizes. The purpose is to optimize the exposure setting and color
reproduction with minimal effort. Once we have our scene set up, with the camera angle and lighting properly determined, at the time of filming they help us establish the correct exposure, making it easier to adjust it to the most important parts of the scene, or to those on which we are trying to focus attention. Later, while the takes required for our production are
being shot, other takes are shot with these cards in the same position as our protagonists. These are the takes that we will later use in grading to adjust the color. To do this, and as long as we have the appropriate monitors and suﬃcient experience and training, we can use a mere comparison to adjust or divert the color towards the result we want. And making it even easier for those without all the experience and tools, some of the current color correction applications can identify the patches from those cards and make, nearly instantantly and with extreme accuracy, a first correction adjustment of color that in many cases will be enough to oﬀer a very good result to our clients. This has been part one. In the second and final part we will continue to expand our catalog including a large number of specific elements for the broadcast world and we will also leave some space to cover oﬀ-camera lighting equipment.
Unlimited Live Broadcasting
By CARLOS MEDINA Audiovisual Technology Expert and Advisor
Every audiovisual content provider must be very aware of the possibility of broadcasting live so as to meet the demands of the audience or end users. In the field of broadcast television, on Internet channels and/or on social media, the "live" technique is increasingly common, allowing immediacy, on-site presence and customization for whoever is willing to tell a story; the here and now of any event: news, sports, conferences, corporate presentations, web tutorials, outdoor live streaming, lifestreaming, IRL (In Real Life), live podcasts, tabletop games (TTG), events, concerts and much more.
LiveU has received the Frost & Sullivan 2021 North American New Product Innovation Award for its LU800 unit and won the 71st Annual Technology and Engineering Emmy Award in recognition of its innovation and achievements in video over cellular IP technology (VoCIP).
The techniques vary, but they all have the same purpose: conveying a video/audio signal from one place to another through transport and dissemination by cable (fiber optic), by satellite systems (C-band, Ka-band and Ku-band), via terrestrial microwave links or via the Internet. The audiovisual signal transmitted is a broadcast master signal, either from a multi-camera production or from the central control of a TV station. Sometimes the setup is complex and expensive, as when using a Mobile Production Unit (MPU) together with a Mobile Broadcast Unit (MBU). But the new audiovisual systems simplify this: the signal from an autonomous video camera is sent directly to a production control where the camera can be "tapped" live, thus obtaining a live signal in real time.
The development of technology in digital encoding and computing, the progress seen in mobile communications (3G/4G/LTE/5G) and tight budgets have given way to broadcast or link backpacks or, as they are known in professional audiovisual slang, simply 'the backpack'. In 2006, LiveU invented the backpack system itself. This company has a long history in the field of broadcasting high-quality live video from anywhere in the world. With
IN 2006, LIVEU WAS THE INVENTOR OF ‘THE BACKPACK’ SYSTEM ITSELF. THIS COMPANY HAS A LONG HISTORY IN THE FIELD OF BROADCASTING HIGH-QUALITY LIVE VIDEO FROM ANYWHERE IN THE WORLD.
over 3,000 customers in more than 130 countries, LiveU's technology is the solution of choice for global broadcasters; news, sports and entertainment agencies; and live video streaming to TVs, mobile devices, online and social media. LiveU received the Frost & Sullivan 2021 North American New Product Innovation Award for its LU800 unit and won the 71st Annual Technology and Engineering Emmy Award in recognition of its innovation and achievements in video over cellular IP technology (VoCIP). Broadcast or link backpacks are not a mere transport bag padded to protect the equipment from impacts, but a broadcast solution that makes the most of the capacity available in mobile phone networks. This technological advance makes it possible, from anywhere with mobile coverage, to send a video and audio signal without the need for satellite support, as well as to carry
out live and FTP broadcasts in a nearly instantaneous fashion. The origin of these backpacks lies in the very essence of being at the forefront of the events taking place around us: getting the news first and exclusively. As far back as the 1980s there arose the need to be with the video cameras at the news spot. Specifically, in 1985 Sony launched on the market a type of camera known as the camcorder. These camcorders allowed images to be captured and stored on 1/2" tape in a recording format known as BETACAM. The good thing about this type of camera system was that the equipment was free from any ties and allowed the camera operator to enjoy great mobility, both when operating the equipment and in news coverage. It was so important that it saw the birth of a new denomination and a new professional profile: the ENG (Electronic News Gathering) camera operator.
A camera operator specialized in capturing and recording outdoors independently. The television environment was thus reaching one of its great communication objectives: being at any place of interest and getting recordings of events or news. The only thing missing was a lightweight solution for making a live broadcast with the camera operator alone. And that is what a broadcast or link backpack offers us: getting to the news, turning on the equipment and broadcasting live. It is fitting to point out here that backpacks have not squeezed out other broadcasting means such as, for example, the combination of an ENG camera together with a DSNG (Digital Satellite News Gathering) vehicle. Backpacks have simply opened the possibility of live broadcasting to more communication agents; they have even become ancillary means to the more traditional fiber or satellite broadcasts, reaching
IMPLEMENTATION OF 5G TECHNOLOGY WILL ENTAIL A DEFINITE ADVANCE IN THE WIDER MOBILE BROADBAND, WITH A HIGHLY RELIABLE COMMUNICATION SYSTEM PROVIDING COVERAGE CLOSE TO 100% AND LOW LATENCIES
places that require greater technical complexity, always in real time. We have moved from DSNG and DENG to DMNG (Digital Mobile News Gathering) with the appearance of mobile networks, which allow live broadcasting in HD quality without the need for a mobile unit, that is, directly from the camera to broadcast studios. A camera-studio broadcast is made possible through three alternative solutions. First, the video camera itself has the built-in technology required to generate a WiFi network; second, the camera has an interface or slot to “tap” a USB WiFi dongle, a plug-and-play device that allows access
to the Internet; and third, there is a connection between the camera and the broadcast or link backpack via a physical, wired connection between both pieces of equipment. These backpacks allow a direct contribution through the technical means housed within their own bodies and under the operation of a single person. The term 'backpack' is perfectly appropriate because it is exactly what the camera operator wears on his back, housing the equipment necessary for such live broadcasts. The main elements that make up a backpack are: a video/audio encoder,
modems with their corresponding antennas, and universally compatible SIM slots so that SIM cards from any provider or telecommunications company are supported. We are referring to 3G technology at its inception, 4G+ or
LTE (Long Term Evolution) nowadays, and 5G in the near future. Each of those figures and acronyms stands for a generation of mobile communication technology. The International Telecommunication Union (ITU) created a committee, IMT-Advanced, to lay down the specifications; it sets forth the requirements a standard must meet to be considered part of the 4G, 4G+/LTE and 5G generations.
The French company AVIWEST is shaping the future of live and deferred video contribution over bonded unmanaged IP networks: cellular, WiFi, satellite or the public Internet.
The main differences between 4G, 4G+ or LTE, and 5G lie in parameters such as data transfer speed (Mbps), latency (milliseconds) and territorial coverage, but also in the bandwidth available for data and in energy consumption. The implementation of 5G technology, foreseen between 2020 and 2030 in Europe, will entail a definite advance in mobile broadband, with a highly reliable communication system providing coverage close to 100% and low latencies (between 1 and 2 milliseconds). Thanks to this network, data transfer speeds of up to 20 Gbps will be achieved. It will allow more than a million connected devices per square kilometer and energy savings of almost 90%. Due to the great technological impact that the 5G network is expected to have, the Spanish Ministry of Economic Affairs and Digital Transformation designed the National 5G Plan, which defines the lines of action to develop this technology from 2018 to 2020.
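To put those figures in perspective, a quick back-of-the-envelope calculation shows what the jump in data rate means for moving broadcast material; the 50 Mbps 4G figure and the 10 GB clip size below are illustrative assumptions, while the 20 Gbps peak is the 5G figure cited above.

```python
# Rough transfer-time comparison for a hypothetical 10 GB UHD clip
# at an assumed 4G rate versus the 5G peak rate mentioned in the text.

def transfer_seconds(size_gb, rate_mbps):
    """Transfer time in seconds for size_gb gigabytes at rate_mbps Mbit/s."""
    bits = size_gb * 8 * 1000**3          # decimal gigabytes to bits
    return bits / (rate_mbps * 1000**2)   # Mbit/s to bit/s

print(round(transfer_seconds(10, 50)))      # assumed 4G at 50 Mbps: 1600 s
print(round(transfer_seconds(10, 20000)))   # 5G peak at 20 Gbps: 4 s
```

Even without the near-zero latency, the raw throughput alone turns a transfer of many minutes into a few seconds.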
After the approval of this Plan, the ministry tendered the first licenses to use the 3.6-3.8 Gigahertz (GHz) frequency band, a priority for the deployment of the 5G network. Understanding the development of the generations of mobile telephony is essential in order to assess the benefits of a broadcast or link backpack. We should not forget that inside the backpack there must be a battery power system and the necessary wiring for correct operation. The backpack can work as an FTP (File Transfer Protocol) system, a network protocol for transferring files between systems connected to a TCP (Transmission Control Protocol) network, based on a client-server architecture. That is, we can record a video file on camera and then send it to a receiving server. This offers us a widely used system called Store-and-Forward, which is not live; it is the equivalent of a 'false live'. It consists of recording on the camera while sending the video continuously in parallel. In about 3 or 4 minutes the material is already available in the studio or production control room where the live broadcast is being produced.
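The store-and-forward workflow described above can be sketched as a small producer-consumer pipeline: one thread "stores" recorded segments in a local buffer while another "forwards" them to the server in parallel. The segment names and the buffer are illustrative; a real backpack would upload via FTP where the comment indicates.

```python
import queue
import threading

# Hypothetical store-and-forward pipeline: the camera writes recorded
# segments to a local buffer while an uploader drains it in parallel,
# so the material reaches the studio a few minutes behind real time.

segments = queue.Queue()       # local storage buffer (the "store")
delivered = []                 # segments that reached the studio server

def record(n_segments):
    """Camera thread: store each recorded segment locally as it completes."""
    for i in range(n_segments):
        segments.put(f"segment-{i:03d}.ts")
    segments.put(None)         # end-of-recording marker

def forward():
    """Uploader thread: send stored segments to the server in order."""
    while True:
        seg = segments.get()
        if seg is None:
            break
        # a real implementation would do the FTP transfer here
        delivered.append(seg)

rec = threading.Thread(target=record, args=(5,))
up = threading.Thread(target=forward)
rec.start(); up.start()
rec.join(); up.join()

print(delivered)   # all five segments arrive, in recording order
```

Because the upload runs while recording continues, the only lag the studio sees is the buffer depth, which is why the material is available only a few minutes behind the event.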
There are several manufacturers of broadcast/link backpacks, but they all have a few features in common that are worth discovering:
Video resolutions: 1080p50/60/25/30/24, 1080i50/60, 720p50/60/25/30/24, PAL, NTSC, as well as the most up-to-date units, ready for UHD and 4K content.
A/V encoding: refers to the codecs supported by the device, such as HEVC/AVC (H.265/H.264); audio: AAC-HE/LC.
Video/data interfaces: SD, HD, 3G-SDI (BNC), 12G-SDI (BNC), HDMI 2.0, HDMI 1.4, USB 2.0, RJ-45 Ethernet, Micro SD card slot, audio jack (in+out).
Supported technologies: 3G, 4G LTE, 5G, HSPA+, HSUPA, HSDPA, UMTS, CDMA EVDO Rev 0/A/B, Mobile WiMAX, and external WiFi 802.11 a/b/g/n support; IP satellite (Ka/BGAN).
Measurements and weight: compact in size and a weight of around 1.5 kg including battery.
Temperature: -5°C to +45°C.
The WMT UltraLink Enterprise family of encoders from manufacturer Mobile Viewpoint is the first bonded backpack transmitter to deliver true 4K.
ONE OF THE MOST INTERESTING FEATURES THAT SOME 4G/5G BACKPACKS HAVE IS THE POSSIBILITY OF BROADCASTING HD VIDEO THROUGH BONDING OR AGGREGATION OF NETWORKS, INCLUDING 3G/4G/LTE, WIFI, BGAN, ETHERNET AND FIBER.
One of the most interesting features that some 4G/5G backpacks have is the possibility of broadcasting HD video through bonding or aggregation of networks, including 3G/4G/LTE, WiFi, BGAN, Ethernet and fiber. Bonding (short for Bandwidth ON Demand INteroperability Group) is the name given to a method for joining or aggregating multiple physical links to form a single logical link. Applied to transmission of images, it offers the possibility of using several channels for transmission of content between two
American manufacturer Teradek designs and manufactures highperformance video solutions for general imaging, film and broadcast applications.
Intinor, a Swedish company, presents the Direkt Link 500 and Link 600 backpacks.
points in order to improve the reliability of the link, its quality, or both. Broadcast of HD signals and, to a greater extent, UHD signals demands bandwidths that allow the transfer of large amounts of data. Outdoors, where mobile telephony is typically used to send information, the possibility of joining or aggregating networks notably improves the transmission result.
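One simplified way to picture bonding is a scheduler that spreads video packets across the available links in proportion to each link's measured throughput, with the receiver reassembling the stream by sequence number. The link names, rates and credit-based scheduler below are illustrative, not any vendor's actual algorithm.

```python
# Toy sketch of channel bonding: packets are spread across several
# links weighted by capacity, then merged back into one ordered stream.
# Link names and rates (Mbps) are hypothetical.

links = {"4G-modem-1": 12.0, "4G-modem-2": 8.0, "wifi": 20.0}

def schedule(n_packets, links):
    """Assign packet sequence numbers to links, weighted by capacity."""
    total = sum(links.values())
    assignment = {name: [] for name in links}
    credit = {name: 0.0 for name in links}
    for seq in range(n_packets):
        for name, rate in links.items():
            credit[name] += rate / total   # each link earns its fair share
        best = max(credit, key=credit.get)  # link with most accumulated credit
        credit[best] -= 1.0                 # spend one packet's worth of credit
        assignment[best].append(seq)
    return assignment

def reassemble(assignment):
    """Receiver side: merge per-link packet lists back into one stream."""
    return sorted(seq for pkts in assignment.values() for seq in pkts)

plan = schedule(100, links)
assert reassemble(plan) == list(range(100))
# The wifi link (20 of 40 Mbps total) carries about half the packets.
```

The reliability benefit comes from the same mechanism: if one link's measured rate drops, its weight shrinks and the traffic shifts to the remaining links without interrupting the stream.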
In order to establish a stable transmission, backpacks feature Adaptive Bit Rate (ABR), Constant Bitrate (CBR), Variable Bitrate (VBR) and Automatic Forward Error Correction (FEC) options. LiveU has patented its own reliable transport protocol and algorithms, LiveU Reliable Transport (LRT™). Finally, any 4G/5G backpack that is to be favored by camera operators must be exceptional in ergonomics in order to avoid fatigue, as well as shock and water resistant, and must have a quiet and efficient cooling system such as, for instance, active cooling.
The combination of these features must offer optimal results for live broadcast: the bitrate supported (for example, up to 30 Mbps); the latency (the time it takes a packet to cross the network; the lower, the better the transmission); and an effective boot-up system (less than 20 seconds). The most relevant manufacturers and developers of broadcast backpacks are the aforementioned LiveU with its LU800, LU600 5G
ANY 4G/5G BACKPACK THAT IS TO BE FAVORED BY CAMERA OPERATORS MUST BE EXCEPTIONAL IN ERGONOMICS IN ORDER TO AVOID FATIGUE, AS WELL AS SHOCK AND WATER RESISTANT, AND MUST HAVE A QUIET AND EFFICIENT COOLING SYSTEM SUCH AS, FOR INSTANCE, ACTIVE COOLING.
HEVC and LU300 5G HEVC models or the LiveU Solo, which we cannot fail to mention. The latter model has a video encoder that offers easy, one-touch wireless live streaming straight from the camera to popular online platforms like Facebook Live, YouTube, and Twitch. It also features LiveU Xtender, an integrated antenna solution that increases network reception, providing additional strength for live video broadcast in extreme scenarios such as crowded areas. Xtender offers broadcasters the flexibility to use cellular connectivity as part of their existing SNG/ENG trucks, bridging cellular and satellite connections for the best live video performance or for remotely connecting to the LiveU dongle and handheld uplink units. American manufacturer Teradek designs and manufactures high-performance video solutions for general imaging, film and broadcast applications. From wireless monitoring, color correction and lens control to live
streaming, SaaS solutions and IP video distribution. Professionals and amateurs alike use this technology throughout the world to capture and share engaging content. Regarding backpack solutions, its product portfolio includes the Bond 659 Backpack AVC + MPEG-TS with high-gain antennas and the possibility to choose between Gold or V mounts, or without a battery adapter for portable batteries. Each backpack supports up to 5 USB or Teradek Node dongles, ensuring connectivity wherever you go. The Link Pro Radome products and Node modems, both from Teradek, are ideal solutions for facing any location with a successful live broadcast. The French company Aviwest is shaping the future of live and deferred video contribution over bonded unmanaged IP networks: cellular, WiFi, satellite or the public Internet. The PRO3 series integrates up to eight cellular 3G/4G
The TVU One 4K HDR model from manufacturer TVU Networks can broadcast up to 4K 60p at 3 Mbps in 10-bit HDR image quality.
modems or six cellular 3G/4G/5G modems, compatible worldwide with a custom and patented high-eﬃciency antenna array. The device can also be remotely connected to external Aviwest QUAD antennas in order to strengthen signal transmission in critical environments. It natively supports additional links such as built-in WiFi and
Dual Gigabit Ethernet for relaying on LAN/WAN, BGAN, GX, or Ka-band satellite networks.
The Aviwest AIR series is a solution available in three versions: without built-in cellular modems (for wired installations), with two built-in 3G/4G modems, or with two 3G/4G/5G modems. All models come with additional interfaces such as Ethernet, WiFi, and dual USB ports.
The WMT UltraLink Enterprise family of encoders from manufacturer Mobile Viewpoint is the first bonded backpack transmitter to deliver true 4K. It is capable of delivering true 4K and Ultra HD video quality at 50/60
frames per second from the field using 4G link technology. In addition to its 4K capabilities, UltraLink increases the input connections to four 3G and one 12G for enhanced performance and flexibility. The units are capable of supporting up to 40 MB/sec over bonded networks and can stream back to a studio or to the cloud by using up to eight internal 3G/4G/5G, WiFi, LAN and/or satellite modems, and can link all of these connections together to make a single high-bandwidth connection. By using an adaptive bit rate, the UltraLink product line streamlines video transmission based on the available bandwidth. This ensures high-quality video even in the most demanding situations.
Intinor, a Swedish company, presents the Direkt Link 500 and Link 600 backpacks. These pieces of equipment are used for live broadcast and distribution of video and audio over the Internet and other IP networks. This unique backpack solution is based on the Easyrig vest, which is ergonomic and can of course be combined with the camera gear.
The TVU One 4K HDR model from manufacturer TVU Networks can broadcast up to 4K 60p at 3 Mbps in 10-bit HDR image quality. It offers unmatched performance with six internal modems plus four external modems, WiFi, and a built-in hotspot; two-way VoIP communication; and internal recording for up to seven hours. TVU transmitters use less data and fight packet loss to achieve broadcasts with up to a 0.5-second delay, even in a moving vehicle.
Each and every one of these broadcast backpack-based solutions offers several advantages worth highlighting:
Allow every camera operator to be a live broadcast point at any time and in any location.
Guarantee the reliability and quality of broadcasts through the 4G LTE and 5G networks.
Integrate the signals relayed with platforms such as Facebook Live, YouTube and Twitch.
Decrease live-related costs.
Start a live broadcast in nearly no time.
Any user willing to work in the audiovisual market as a camera operator or as a content creator must have proper audiovisual training and some basic technical equipment: a video camera, a tripod, a handheld microphone and a lavalier microphone (wired and/or wireless), power batteries, a portable lighting torch, various storage media for recording, the relevant video and audio cables and, certainly, a complete 4G/5G backpack for unlimited live broadcasts.
TOUR DE FRANCE
Live broadcast through valleys and mountains
Image: Tour de France 2021 – Stage 8 – Oyonnax / Le Grand-Bornand (150.8 km) – Dylan Teuns (BAHRAIN – VICTORIOUS), winner at Le Grand-Bornand. Credit: Charly Lopez.
Christophe Barrier, Production Director for France TV
The Tour de France is a truly demanding competition. It is so at a sporting level due to the number of kilometers and slopes traveled by the cyclists who compete and, for those same reasons, also for the teams that try to make history. A competition as exciting as the Tour de France deserves to be told on an international level. On occasions like this, technology and human teams come together in the best possible way to show the entire world the effort, satisfaction and enthusiasm of all the cyclists who relish this competition.
Insurmountable ascents, vertiginous slopes, brakeless descents, hidden valleys and the highest peaks in France, covered on a motorcycle, camera on shoulder, following the tremendous effort of great athletes. This is how the Tour is broadcast, without 100% reliable wired connections and even without the telecommunications technology that promises to change all known broadcasting models: 5G. How do they do it, then? Christophe Barrier is a Production Director for
France TV. His job, during the Tour de France season, is to coordinate the production teams so that everything runs smoothly. To do this, he must attend each stage and lead the teams on site. For these reasons, at TM Broadcast International we have interviewed him and thus obtained the first-hand testimony of such a titanic eﬀort.
How is the Tour de France broadcast? You are climbing up a valley. You are keeping a good pace, generating
a lot of power with your pedaling. Even so, you climb slowly. The wall in front of you is a challenge, and you have to do your best to overcome it. And yet you are enjoying your passion as you travel up mountain paths lined with pine trees and steep cliffs. As a cyclist, your job is to overcome each stage by means of
THE PRODUCTION PLAN IS BASED ON THE IMAGES CAPTURED FROM THE MOTORCYCLES AND HELICOPTERS THAT CONSTANTLY FLY OVER THE HEADS OF THE CYCLISTS DURING THE COMPETITION
Tour de France 2021 – Stage 8 – Oyonnax / Le Grand-Bornand (150.8 km). Credit: A.S.O./Charly Lopez
sheer concentration and superhuman effort. The job of the professional who follows you, camera on shoulder as a passenger on a motorcycle, is to tell your feat, and to do it with all the passion their soul can show and all the knowledge they have gathered throughout a professional career. The technical effort that each edition of the Tour entails is titanic too. Mr. Barrier assured us that the production plan they follow is based on the images captured from the
motorcycles and helicopters that constantly fly over the heads of the cyclists during the competition. “These production units generate a signal that is sent to an airplane or another helicopter that acts as a booster antenna.” The reason is that the natural environments through which cyclists pass have a great handicap: the steep terrain makes communication diﬃcult. Land-based and fixed infrastructures cannot be used because of their limited range. So what is the solution provided by
the France TV production teams and the organization of the cycling competition? Nothing more and nothing less than having the infrastructure go after the pack. It is both as simple and as complex as it sounds. Mobilizing such a large infrastructure in this way is as daunting a task as the one ahead of every cyclist facing a mountain pass. The flying infrastructure, once the signal is relayed, "sends that information to the TV infrastructure that does have its reception points on the ground." Christophe Barrier added that the way of broadcasting this mobile signal varies depending on the duration of the relevant stage, and assured us that in this edition the stages will have been broadcast in full. The change depends on the terrestrial reception point. "The first of the reception points on the ground relays the images through their arrival by satellite signal. The second point does it through microwave links." It seems simple, right? In a display of both modesty and humor, Christophe
Tour de France 2021 – Stage 19 – Mourenx / Libourne (207 km). Credit: A.S.O./Charly Lopez
Barrier admitted that it is a simple task. But that is where the real beauty of the system lies: behind its apparent simplicity, a large number of technological infrastructures and human teams make the complex simple. The workflow of the Tour de France signal, from the moment a camera captures a rider pushing ahead in a sprint until viewers enjoy that excitement on their devices, is as follows, according to
the Production Director from France TV: "Mobile cameras on motorcycles generate a signal that reaches, as explained above, a mobile unit (OB van) sitting at the finish line. There a signal is produced: the international signal. The mobile unit's staff and technology combine the signals from the twelve cameras, the finish line signal and the graphics, and all of this is broadcast around the world via the EBU (European Broadcasting Union) network. Afterwards,
France TV, in this case as a co-host of the Tour, relays a private signal to a second control room via satellite telecommunications."
Technological novelties
The accumulation of fatigue in the legs of a cyclist is gradual. When climbing a mountain pass, each pedal stroke takes more effort than the previous one. A fitting simile is the improvement of technology with each edition of the Tour de France. Every year it is about offering an upgrade to the viewer as technology progresses. And technology in bicycles also progresses every year in order to reduce the effort of cyclists and improve their experience so as to reach the finish line faster.
Christophe Barrier told us that the technological implementations that each edition of the tour carries out are always based on “improving broadcast quality”. In 21st century television, viewers have become active users, capable of influencing in real time each of the contents they are exposed to through their TV sets, mobile phones, tablets or computers. This being the case, it is important that viewers feel rewarded with the best possible quality. As Christophe Barrier mentioned: “We always try to improve it through new codecs and modulations.”
"THE CAMERAS ON MOTORCYCLES AND ON FIXED SPOTS AT THE FINISH LINE HAVE BEEN IMPROVED TO CAPTURE 4K AND SUPER-SLOW MOTION IMAGES."
Like equipment for cyclists, technology for broadcasting the Tour de France has also progressed over time. Christophe Barrier shared with us the items they have changed for this latest edition. The first thing the Production Director highlighted is the use of cameras with 4K capability and technology suited to super-slow motion in each and every camera position. "The cameras on motorcycles and on fixed spots at the finish
line have been improved to capture 4K and super-slow motion images." In this way, the commitment to offer the best image quality and the most spectacular shots of the competition has been fulfilled in this new edition. In addition, new technologies have been implemented in keeping with this commitment. Mr. Barrier highlighted the "replacement of the Cineflex by the GSS Gyrocam on the helicopter for broadcasting purposes." This improvement in equipment, which comes from the same company, has achieved higher broadcast quality in aerial takes. And, not least, it has managed to give a distinct view of the French landscapes, another major feature of the Tour de France. This is in addition to implementing other technologies that we will discuss later. On the other hand, we had the opportunity to speak with Christophe Barrier about cutting-edge technologies that are already part of the daily routine of professional
broadcasting on a global level. We asked our interviewee his opinion on two of these technologies so in vogue: augmented reality and 5G connectivity. Regarding augmented reality technology applied to the Tour de France, Christophe Barrier explained to us that "they have attained good progress in the use of augmented reality". They have achieved this precisely through the implementation of the Gyrocam GSS in the helicopter. "The possibilities offered by this new capture technology have allowed us to offer our viewers information based on tracking technology." He also shared with us the plans of the French television network for the future regarding these new possibilities: "We hope to offer information on places, borders and other elements in the future thanks to the additional possibilities offered by augmented reality."
However, broadcasting over 5G networks has not gone that far in this edition of the Tour de France. The advance that promises to be a total revolution in terms of connectivity is not yet mature enough to be a key part of use cases such as the Tour. Precisely this competition would greatly benefit from its possibilities for the transmission of large amounts of Ultra High Definition (UHD) images, involving quite significant volumes, over networks where
such transfer would not cause any kind of delay thanks to the nearly zero latency achieved. In this way, the demanding remote live production required by the Tour de France would be comfortably met. But the time has not yet come, according to Christophe Barrier. "We have performed tests with a camera capable of 360-degree recording that communicates over 5G networks. But the coverage is not sufficiently developed to apply this in the Tour."
Content Management
The 108th edition of the Tour de France covered a total of 24 days, from June 26 to July 18, 2021. In total, 184 riders from 23 different teams participated. 21 stages were covered, totaling almost 3,415 km across 757 municipalities in two different countries: France and Andorra. The coverage generated 120 hours of live content. How all that content is managed was a question we could not fail to ask our interviewee, Christophe Barrier. To all of us, even those familiar with the technology, it is amazing how it is possible to manage such an amount of content so effectively. For Production Director Christophe Barrier "it remains a mystery." But immediately afterwards he confessed to us that "during the live show they draft a script that allows them to index all the events that take place during each stage".
Tour de France 2021 – Stage 15 – Céret / Andorre-la-Vieille (191.3 km) – Caravan. Credit: A.S.O./Aurélien Vialatte.
“We use a specific naming convention for each file, taking into account the type of event we want to refer to, the stage in which it occurred, and so on. The files are stored and managed on an EVS and Avid network under the supervision of a Media Director”.
The added value of France TV
As we have mentioned, France TV deployed the infrastructure required to broadcast the Tour, and two signals have come out of this work: one for the international feed through the EBU network, and a private one that has been received, edited and broadcast through the France TV network. Christophe Barrier told us about the added value that the television company he works for has provided, but above all he emphasized the value of France: "The added value that we provide is the discovery of the architectural, historical and natural legacy that we take into account when broadcasting the Tour de France." Moreover, for France TV this is
so important that they broadcast in advance, before the pack of cyclists reaches the strategic points. "We anticipate and show, with steadycams and drone cameras, the most beautiful places along the routes through which the Tour de France passes."
The impact of the Tour de France in figures According to the oﬃcial figures provided by the organizers, more than 150 million Europeans have enjoyed this event. The oﬃcial European broadcasters have been
France TV Sport, official host and responsible for the production of the competition, and Eurovision Sport (EBU). The official Tour de France media have shared on their websites that more than 100 hours of live television were generated across all French television channels. All stages were broadcast from beginning to end by France 2, France 3 and France 4. This content was enjoyed by 3.8 million TV viewers per stage, according to data from the France 2 network, reaching a 39.4% share.
Pyramide du Louvre. Credit: A.S.O./Aurélien Vialatte
THE ADDED VALUE THAT WE PROVIDE IS THE DISCOVERY OF THE ARCHITECTURAL, HISTORICAL AND NATURAL LEGACY THAT WE TAKE INTO ACCOUNT WHEN BROADCASTING THE TOUR DE FRANCE
Records have also been set. 42.2 million viewers have watched the Tour on French televisions, almost two and a half million more viewers than in the previous edition of the Tour de France. Another record: young audiences, between fifteen and twenty-four years old, have increased by one million viewers as compared to the same age group in the previous edition.

In addition to content intended for a TV audience, the Tour de France organization has also taken into account digital audiences. The Tour de France fan base on social media has grown by 700,000 members since last year’s edition, bringing the total to 9.4 million fans. Social media interactions related to Tour-generated content have reached nearly 850 million people at the time of writing this feature article. The total digital audiences have been counted in two different ways: through the official Tour page and through mobile apps. Regarding users of the official website, two out of every three users are foreigners; the total number of visitors has been 14.5 million and the number of cumulative visits 42.5 million. Through mobile applications, a traffic of 12 million people has been recorded.

The Tour de France is a competition that raises passions on a sporting level and also with respect to its technological broadcasting capabilities. The reasons behind this are clear. Traveling so many kilometers while broadcasting a high-definition signal without connecting a single cable deserves the admiration of anyone who is dedicated to capturing and telling content. The Tour de France has always been able to provide a consistent signal of what is going on in those French valleys and mountains. In the future, when the official Tour de France broadcasters benefit from much faster, more accurate and more capable networks, the excitement of this historic competition will be conveyed through quality and technology never seen to date. And TM Broadcast International will be there to tell you all about it.
(II) Technique and creation blended
By CARLOS MEDINA Audiovisual Technology Expert and Advisor
Fiction films, series, advertising, television programs, video clips, short films, documentaries and even audiovisual content for social networks in their different versions are part of leisure and entertainment. The scenario in which we currently operate requires any image professional worthy of the name -be it a colorist or not- to know how to use the possibilities offered by the various software packages available. All these software solutions and computer applications share common knowledge and enable us, by means of different tools, to modify, adjust, balance and define the visual side of things. Therefore, it is now time to go over some parameters (the ones that seem essential to me) to be handled when tackling a color correction or color grading session or, even, to be mastered before becoming a qualified colorist:
Resolution: The number of lines or pixels that comprise an image both horizontally and vertically (width and height); hence, the number of lines/pixels that cover a given space, which is why it is also known as spatial resolution. The higher the number of lines/pixels, the higher the level of detail, the better the image’s definition, the more visual information and, therefore, the higher the quality. It is important to be clear about this parameter because of what it currently means to mix images of different resolutions in the same project: different qualities and textures when working on the visual side. It should be taken into account right from the moment of choosing the recording/filming camera.

Aspect Ratio: It is calculated by dividing the width by the height of the image that can be viewed on the screen, and it is normally expressed as ‘X:Y’. The visual outcome depends on the aspect ratio, resulting in the so-called square images or wide-view images. Depending on the professional environment they originate from (television broadcast or cinema), images have different proportions between width and height. In the era of SD television, until more or less 2009, the aspect ratio was 4:3, which is expressed in cinema as 1.33. High definition (FHD) images have a 16:9 aspect ratio (in cinema, 1.77:1, that is, 1.77 times wider than high).
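The width-by-height arithmetic described above can be verified with a short sketch (the resolutions used are just common examples):

```python
# Aspect ratio = width / height, e.g. 16:9 -> 1.77... ("1.77:1" in cinema notation).
def aspect_ratio(width: int, height: int) -> float:
    return width / height

print(round(aspect_ratio(4, 3), 2))        # 1.33 -> classic SD display ratio
print(round(aspect_ratio(1920, 1080), 2))  # 1.78 -> 16:9 FHD
print(round(aspect_ratio(4096, 2048), 2))  # 2.0  -> a 2:1 frame
```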
Other aspect ratios used in TV/cinema at present are 18:9 (2:1), 14:6 (2.35:1), 11:4 (2.76:1), and even 360° all-round vision (12.00:1 in cinema). In some instances this parameter is also referred to as dimensional relationship, aspect relationship, aspect proportion or aspect rate.

Frame Rate: The number of frames per second in a moving image; its acronym is FPS. A highly relevant piece of data throughout the
process, from on-camera capture to mastering for distribution within the various production environments: cinema, television, streaming and/or social media.

Luminance (Y): It is the component of the video signal that carries the information on the black and white levels it contains; in other words, it is the photometric measure of brightness in a video image. It therefore tells us the white, black and gray values in an image. It is a very technical
parameter, and one that requires proper assessment in order to achieve a correct output of the master broadcast/exhibition material.

Chrominance (C): A technical term that designates and defines the color components in a video signal, containing information on the primary colors R (red), G (green) and B (blue) in additive synthesis. It is as important as luminance (Y) for a correct technical processing of the image/video to be delivered.

Contrast: A fundamental parameter for adjusting the level of intensity between black and white in an image, by fine-tuning the values in a gray scale. Tweaking this parameter results in images that are more washed out or more contrasted. It is closely related to two other concepts, dynamic range and latitude, which are essential to take into account when choosing a type or model of video, film or TV camera. The values that contrast can offer us allow us to
generate very different visual sensations in relation to the audiovisual narrative received by viewers.

Saturation: One of the properties of color, combining light intensity and the distribution of the different wavelengths in the color spectrum; it refers to the intensity/purity of a color. Knowing the effects of modifying the saturation values of an image is key, because a highly saturated color looks vivid and intense, while a less saturated color appears more washed out and gray.

Hue: Another of the essential properties of color. It defines the shade, the qualitatively differing aspect of the color experience that is related to differences in wavelengths or to mixtures of different wavelengths. Different tools allow us to modify the color hue of the image and, therefore, modify the real image captured.

Brightness: Also called clarity, it is the third attribute of color. The darker the color, the
weaker the brightness. This term is sometimes associated with the concepts of value, luminance and/or lightness. Along with saturation and hue, this value expresses -within the meaning of color in the aesthetics and narrative of the audiovisual work- the psychology of color and the sensations and emotions that it can cause in viewers.

Color sampling and subsampling (Chroma Sampling): A digital video processing technique that has an impact on luminance and chrominance information. The higher the sampling values, the more color information is retained and, therefore, the more room there is for color correction and treatment. Its current maximum value is 4:4:4:4, which corresponds to values for Y:R:G:B.

Color Depth (Bit Depth): The amount of binary data (bits of information) used to represent a color. The greater the color depth, the greater the
ability to achieve realistic colors and a better color rendering. We deal with images/videos at 8 bits, 16 bits or higher figures when quantifying the digitization of a video signal.

Color Models: Abstract mathematical models that allow colors to be represented in numerical form, typically using three or four color values or components. Knowing the RGB, HSL and HSV models is a must here.

Color Spaces or Color Gamut: A representation of a specific range of colors established within a color model. In 1931, the Commission Internationale de l’Éclairage (CIE), after various studies and experiments, laid down the rules of the color “pitch” by means of mathematical values and coordinates, in what we know as the colorimetry triangle or CIE Diagram. This diagram represents the spectrum of human vision, and point D65 is the reference center for white. Thus, each new contribution/innovation by agents in
the image sector (film, television, photography, graphic arts, design, Internet…) has generated its own color space or gamut within the CIE Diagram. It is essential to take this parameter into account, since a given video material can have different color spaces depending on the regulations it is compliant
with. For example, ITU-R BT.709 in the implementation of UHDTV-1 Phase 1 production environments; or the P3/XYZ gamut applied in the digital cinema sector (DCI 4K/2K).

Legal range (legal signal or data video): A level established by broadcast radio and television technical standards to define the maximum and minimum brightness levels allowed in this professional environment. It varies depending on the image’s color depth: thus, an 8-bit encoding has a legal range of 16-235, versus 64-940 for a 10-bit encoding.

Extended range (extended signal): A level established by digital cinema technical standards to limit the maximum and minimum brightness levels allowed in that professional environment. It also varies depending on the color depth of the image.
In this case, an 8-bit encoding has an extended range of 0-255, versus 0-1023 for a 10-bit encoding.

Gamma Curves: The level of response of the image obtained in camera, for a better adaptation of midtones in the video signal. The heyday of color grading arrived when logarithmic curves became available at shooting. This results in a low contrast, washed
out image, but with a greater dynamic range, requiring subsequent correction. Thus, the main broadcast and digital cinema camera manufacturers generate their own curves: CANON’s C-Log, C-Log2 and C-Log3; Cine-like, V-Log and PanaLog from PANASONIC; SONY’s S-Log, S-Log2 and S-Log3; RED ONE’s RedLogFilm; ARRI’s Log C or BLACKMAGIC’s Film Mode, among others.

Metadata: The information that is available from a video clip taken from the camera. It can be structural or descriptive. The former is an essential part of the entire visual treatment process carried out on the resulting image, especially when working with a RAW video format.

RAW: As its name suggests, this term refers to the raw data of the digital file generated as the image/video is captured by the video, cinema and/or photography camera’s digital sensor.

Workflows: This aspect is already familiar to everyone
coming from the worlds of video editing and post-production. It involves knowing all the possibilities that exist throughout an audiovisual production to reach a broadcast or exhibition master. It is essential to know how to handle file exchange protocols such as EDL, XML and AAF.

LUTs (Look Up Tables): These are table-like files with values that modify our input colors (camera material) in order to achieve a specific output (what we see on the screen), thus allowing us to modify the colors in a color grading session. There are two kinds, 1D LUTs and 3D LUTs; the difference lies in the accuracy of the color transformation and the number of colors that can be described. Without a doubt, there are more concepts, terms, processes and protocols that must be handled by any colorist who wants to show their mettle in the audiovisual industry. In this sense, it is essential to know the image/video analysis tools, known as scopes:
Waveform Monitor (WFM): It displays the values of the video signal that correspond to measurements in volts and/or IRE (or in percentages) for the luminance parameter.

Vectorscope: A tool for measuring chrominance information values in a circular graph, both in the angle of the vector (the hue) and in its length (related to the saturation of each color wavelength found in the image).
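As a toy illustration of the luminance values a waveform monitor plots, the sketch below uses the luma weighting defined in ITU-R BT.709 and expresses the result on a 0-100% scale:

```python
# Rec.709 luma: Y' = 0.2126 R' + 0.7152 G' + 0.0722 B', with R', G', B' in 0..1.
def rec709_luma(r: float, g: float, b: float) -> float:
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(round(rec709_luma(1.0, 1.0, 1.0) * 100, 1))  # 100.0 -> white at the top line
print(round(rec709_luma(0.0, 1.0, 0.0) * 100, 1))  # 71.5  -> pure green
print(round(rec709_luma(0.0, 0.0, 1.0) * 100, 1))  # 7.2   -> pure blue sits very low
```

The uneven weights explain why a frame can look colorful to the eye yet read low on the waveform when most of its energy is in the blue channel.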
RGB Parade: It presents the waveform information for the RGB channels separately and in parallel, for a better comparison of levels per channel.

Histogram: A tool for viewing the distribution of the amount of information contained in each RGB channel across dark, medium and highlight areas; that is, a map of tonal RGB values in reference to the measurement of the real exposure made in the take
of the recording/filming process. These scopes are key, because each one presents us with objective technical data contained in the video signal and, therefore, they are the best “X-ray” depicting the errors/hits offered by the image/video. These tools can be found in a myriad of applications and programs used for correction, color grading, editing, post-production, composition and visual integration.

To finish this article on the figure of the colorist, let’s try to classify the modus operandi of the colorist when approaching a routine in his color room; the room or facility where it seems that magic makes -visually speaking- “almost everything” possible:

1. Calibrate and fine-tune the room’s equipment.
2. Analyze the received video material in a first viewing.
3. Classify and sort out the material based on the adjustments to be made.

• Global adjustments: Those that address image adjustments in a broad sense, as a whole, allowing for correction of specific technical parameters (gain, range, saturation, color temperature, dynamic range...). They are modified with tools like color wheels, vertical sliders (primaries/RGB mixer), horizontal sliders, edit or numeric boxes, curves, 3-way color, levels…
• Range settings: Applied to “tackle” certain ranges of color in images. They allow stressing, modifying or decreasing color in areas of an image. The qualifier is one of the tools used for this task.

• Selective adjustments: These adjustments allow you to isolate parts of an image where the relevant corrections are to be applied. The relevant tools are masks, windows, shape tools, key, alpha channel, shapes.

• Metadata settings: This deals with the structural
and descriptive data that come from a RAW camera.

• Tracking adjustments: They allow working with the procedures that allow adjustments on the moving image; they are the adjustments with the video card reference in the take.

• Positioning adjustments: They refer to reframing, rescaling, rotations, etc. Tools like sizing, pan, zoom, tilt, rotate, transform…

• Filter and effect settings: These are the settings that are included by the looks, camera patches, or by some developer of plugins.

• Presets: Changes that are made with looks and LUTs, both generic (filmstock, cine looks, style) as well as standard (encoding color, 709, sRGB, DCI/P3...) and custom (settings look). The Da Vinci Resolve PowerGrades are well known, but in this section we are including all those such as sepia, vintage, B/W, posterize.

• Special settings: Correction solutions for very specific areas such as stereoscopic images, lens corrections and implementation of the different HDR variants (in the not too distant future it will be essential to know how to work with moving images in them).

• Output settings (deliver): Knowing the output settings to gear and finish our work. Output formats: frame sequence, AVI, QuickTime or MOV. And output files/codecs: Kodak Cineon, SMPTE DPX, CinemaDNG, Cinepak, DV, Cineform, H261, H263, H264, JPEG 2000, PNG, TGA, AJA Kona…

In the audiovisual sector we can find basic tools to start practicing as a colorist in software applications specializing in video editing (Final Cut by APPLE, Premiere CC by ADOBE, Media Composer by AVID or Vegas by SONY). Also, at a higher level, Lumetri Color by ADOBE or the Magic Bullet Suite plugin by RED. Advanced and highly-specialized solutions are DaVinci Resolve from BLACKMAGIC, Assimilate Scratch, Rio from GRASS VALLEY, Mistika from SGO and Baselight from FILMLIGHT.

In all this learning and professionalism in being or becoming a colorist, we cannot miss the reference of the Academy Color Encoding System (ACES), created by the Hollywood Academy of Motion Picture Arts and Sciences. ACES is a proposal for color management throughout a film or television production: from capture, to editing, through VFX, archiving and any future use.

2021 will see the first edition of the FilmLight Color Awards. Wolfgang Lempp, CEO of FilmLight, comments: “We believe that, for all sorts of historical reasons, colorists don’t always get the recognition they deserve. As part of our responsibility and commitment towards the industry and our customers, our goal is to remedy this by means of these awards”. The program presents four categories: colorist in a motion picture released in theaters, colorist in a production not released in theaters or in a television series, colorist in advertising or music videos, and the award for the most innovative use of technology aimed at achieving a creative result.

These awards, as an example, and the continuous recognition that the audiovisual industry is giving to color grading are the result of a balance between technique and creation in the hands of the colorist.