NEWS - PRODUCTS
EDITORIAL

These days we are experiencing something we had been missing in the international newsroom of TM Broadcast: a flurry of launches, press releases and presentations reaching our devices every day and at all hours. The reason is that nothing less than the NAB Show is just around the corner. One of the two major global fairs in our sector will be held in Las Vegas from April 23 to April 27, days in which the market will recover some of the shine lost because of the pandemic and get back on a more determined pace towards a promising future.

In this issue of TM Broadcast International we wanted to take a close look at the British Isles to find out the reason for the fervent growth that both studios and spaces available for shooting professional content are experiencing. The truth is that SVOD/OTT platforms have a lot to say in this regard, and the thrust they provide is such that players like Garden Studios or RD Studios are strongly committed to the next great revolution in our world: virtual production.

Romain Lacourbas, director of photography of French origin and well known internationally, is also a first-hand witness to the growth of SVOD/OTT platforms and the innovative, avant-garde influence that these technologies will provide. We chatted with him about his work for Netflix on The Witcher. Don't miss out on his experience with sorcerers.

This issue could not leave out a first-hand account from an SVOD platform like Joyn. Born in Germany and striving to become the number one choice for every German, this platform has told us about its business model, current situation and plans to meet a goal as ambitious as the one they have set themselves.

We would like to highlight the collaboration we have undertaken with Maria Courtial, founding member of the VR and VFX studio Faber Courtial. They are creators of worlds, because they teleport us, by means of technology, to places that are extremely far from ours and that we would not be able to envision otherwise.

Last but not least, in this issue you will find the second part of Ivo Burum's special feature on Mobile Journalism and the SVTVD TV3.0 Forum project in Brazil.

Editor in chief Javier de Martín email@example.com

Creative Direction Mercedes González

Key account manager Susana Sampedro firstname.lastname@example.org

Administration Laura de Diego

Editorial staff email@example.com

TM Broadcast International #103 March 2022

Published in Spain

TM Broadcast International is a magazine published by Daró Media Group SL, Centro Empresarial Tartessos, Calle Pollensa 2, oficina 14, 28290 Las Rozas (Madrid), Spain. Phone +34 91 640 46 43
NEWS Garden Studios Towards virtual production and beyond We spoke with Rich Philips, who is in charge of the studios’ technology area, and he showed us how they have designed the place to take into account the needs of such a demanding world as the one we are talking about. But he also showed us how Garden Studios is changing ways and means through virtual production.
RD Studios Film facilities to tell incredible stories Located in the Park Royal area of London, an area of the metropolis heavily focused on the film industry, the facility offers the first-hand knowledge of a company that has spent more than ten years creating content and is now capable of creating film, photography, websites, virtual reality and augmented reality features.
Faber Courtial Creators of Worlds
Romain Lacourbas, ASC, AFC Cinematographing witchers
Joyn One platform for all of Germany
Rethinking Infrastructure Implementation: Brazil’s Next-Generation Digital TV System
WELT Switches to a New Software-Defined Vision with Vizrt
“How to Mojo” by Ivo Burum (Part 2)
Telos Infinity® IP Intercom Helps Progressive American Flat Track Save Time and Money
Dream Chip presents one of the smallest PTZ cameras on the market: the AtomOne Mini Zoom

Dream Chip has recently launched its AtomOne Mini Zoom camera. It is a miniature camera that introduces Zoom functionality in a device measuring just 60x80mm and weighing only 267g. It offers up to 1080p60 resolution captured on a 1/2.5″ sensor.
The Zoom functionality has been designed to facilitate more effective shot reframing. The base aperture stands at F1.6 at 2.9mm, providing for a wide-angle 130° shot, and allows for the frame to be reduced to a 66° angle shot at 9mm. The goal of the entire AtomOne range has always been to bring audiences closer to the action and, with the Zoom ability, these images can become dynamic and provide another tool in a creative professional's arsenal. Production teams are able to maintain full creative control over every element, as the AtomOne Mini Zoom maintains full iris control on a remote basis and can be colour matched with other cameras in the production setup using multi-matrix colour correction.

In addition to the Zoom function, Dream Chip has partnered with Bradley Remote in order to add the potential for pan-tilt functionality, turning the AtomOne Mini Zoom into one of the smallest PTZ cameras on the market.

Speaking of the evolution of the AtomOne Mini Zoom, Christian Kühn, Product Sales Marketing Manager, said: "The AtomOne range has always been about providing creative possibilities that can't be achieved with larger cameras. Whether it be mounted on a net, a drone, or the front bumper of a car, our cameras bring audiences closer to the action and allow for the communication of excitement and emotion. Developing these elements is always the focus of our continued, intensive research and development process."
NEWS - SUCCESS STORIES
Nordic pay-TV company Allente relies on Kaonmedia, 3SS, Nagra and Broadcom to launch Android TV Service
The Nordic pay-TV service Allente has recently launched an Android TV service on satellite, IPTV and OTT based on hybrid technology from KAONMEDIA and 3 Screen Solutions (3SS). Hybrid KAON BCM72180 PVR STBs (set-top boxes) will be available to Allente's over 1 million subscribers, enabled by KAON middleware, NAGRA Media CAS and the Broadcom 72180 SoC, and based on 3SS UX (user experience) technology.

The service will have a phased rollout throughout the Nordic region in 2022, starting with Sweden. PVR functionality will be released later in 2022. With KAON STB middleware integrated with the Android TV software stack, new pre-certified Android TV capabilities are enabled, such as Custom Over-The-Air Update. In 2021 Allente became one of the world's first operators to complete Google Common Broadcast Stack integration on an Android TV Operator Tier STB. The integration was completed in six months thanks to close collaboration between Allente, KAONMEDIA, NAGRA, Broadcom, 3SS and Google. Google created its standardized Common Broadcast Stack (CBS) to help more TV viewers get the next-generation app-rich services enabled by the Android TV operating system (OS).
Allente was formed from the merger of Viasat Consumer and Canal Digital in May 2020. The operator offers TV and broadband services to customers in Norway, Sweden, Denmark and Finland. Jon Espen Nergård, CTO of Allente, said, "We are very pleased to now offer our next-generation service to subscribers via satellite, IPTV and OTT
streaming. We thank our amazing partners KAON and 3SS, and all of our technology partners, for helping us provide our customers with a rich array of content, including their favorite apps, all wrapped in a world-class experience enabled by 3SS and delivered on KAON's powerful STB," he added. "By making its super-aggregated service
available to consumers in so many ways, Allente is once again demonstrating its ongoing commitment to providing a superior entertainment experience to all subscribers, on satellite, OTT and IPTV,” says Kai-Christian Borchers, CEO of 3SS. “Allente is at the forefront of technological innovation and customer focus, and we are very proud of our long and ongoing partnership.”
RTL Today Radio is born and goes visual with BCE’s StudioTalk
RTL Luxembourg has recently given life to a new web radio in English: RTL Today Radio. For this launch, they have counted on BCE's StudioTalk to produce and broadcast its visual radio. The station will bring together news, local events, insights, traffic, weather, sports and much more. The essence of the station, in its representatives' words, "is fun, friendly, interactive, and always a part of the community". StudioTalk is an all-in-one solution designed for automated or manual video production, channel branding as well as monitoring features. RTL Today Radio wished to integrate the full branding of their new media, synchronize with their radio automation and broadcast on their website while being fully automated. Broadcasting Center Europe (BCE) deployed the solution in less than three months, including the cabling, infrastructure, and platform configuration.
Geared up with four PTZ cameras and multiple screens, StudioTalk is synchronized with the radio automation to broadcast the music and news alongside the content on the visual radio screens (in the studio and on the website). StudioTalk also triggers the different live shows, manages the camera production, with intelligent identification of the speakers, and automatically adds the graphic titles with the data encoded in the system. "StudioTalk is constantly evolving, allowing the radio world to enrich its programs and create new experiences for the viewers. Radios can continue to deliver their audio programs and at the same time produce high-quality live video programs," explains Olivier Waty, Technology & Project Director at BCE. "StudioTalk is an amazing solution; it completely transformed our way of working. The solution takes care of everything, so we can concentrate on our content and live shows," concludes Gerard Floener, Head of Digital Development Radio & Deputy Head of Radio Programmes at RTL Luxembourg.
PlayBox Neo and eMAM, through PS and Sons, upgrade playout equipment at Thai TV5 headquarters

Thai TV5 has recently chosen a PlayBox Neo playout system. The aim of this selection was to replace equipment at its studio headquarters in Bangkok. The project was supervised by PlayBox Neo partner PS & Sons (Thailand) Co Ltd and includes fully integrated eMAM. "Thai TV5's legacy third-party system had become impractical and costly to maintain," comments David Srikalra, Managing Director at PS & Sons. "We recommended PlayBox Neo on the basis of its successful operation at many broadcast networks and playout facility companies around the world." "As Thai TV5 is one of Thailand's highest-profile broadcasters, the system is configured to function as main and backup with the additional protection of a cool spare," adds PlayBox Neo Asia Pacific General
Manager Desmon Goh. “Our servers and software are closely integrated with eMAM from initial ingest, quality control, graphics and scheduling, right through to pre-transmission storage.” “The media asset management solution chosen for Thai TV is based on our eMAM Enterprise platform,” details EMAM, Inc. VP of Business Development Chuck Buelow. “eMAM gives operators the tools they need to locate and preview content managed from an unlimited number of locations including local storage, cloud storage and archive systems.” In this solution, eMAM
Enterprise runs on five or more servers to support 50 active defined users, but the package can scale to user counts in the thousands, all maintained through LDAP/Active Directory. Editors and designers can use panels inside Adobe Creative Cloud apps or an extension inside Apple Final Cut to access all of eMAM's media management tools and collaborate seamlessly with eMAM web users. Thai TV5 deployed AirBox Neo-20 for playlist scheduling; Capture Suite to optimize the ingest workflow of television networks; and PlayBox Neo Multi Playout Manager (MPM) to operate channels remotely via IP.
Iralta VR relies on Mistika VR software for European Investment Bank advert series

The European Investment Bank has recently released an advert video series that tells how European engineers and scientists tackle climate change. The series is called The EIB 360º: Quest for Climate Solutions; it was created by Iralta VR and post-produced with Mistika VR. "For large organisations such as the EIB, it is sometimes challenging to explain some of their activities in a visually attractive manner to a broader audience, which is why the 360º format felt like a natural choice for this project. It takes the narration to a much higher level, offering the viewer the possibility of visiting the EIB projects in an immersive way and meeting the people behind them," shared Ramón Verdugo, Director of Photography at Iralta VR.

The team at Iralta VR had to plan all the production and the minimum resources needed very carefully, because it is a long-term episodic project that requires travelling to difficult-to-access areas. The first four chapters of the EIB series include 360º video and photo, standard 4K video footage and graphics that alternate in the script of each video. The shooting was carried out with Kandao, Insta360 VR and Sony 4K cameras, underwater equipment and drones.

The first three episodes had a very tight schedule from the shoot to final delivery and, for Ramón and his team, it was extremely important to establish a workflow that would allow them to move back and forth between different stages of post-production. "For this, Mistika VR has been fundamental, since it allowed us to go back to the stitching and make the necessary tweaks easily and very quickly." Many 360º shots were taken from a drone, which involved more complications due to the reduced agility and shaky footage. "The quick and easy corrections of image swings, twists and tilts were possible due to the QooCam and Obsidian inertia sensors, and their compatibility with Mistika VR, reading the metadata and stabilising the shots immediately".
Game Creek Video chooses TSL's MPA1-MIX-DANTE for audio mixing in its remote broadcast kits

Game Creek Video, a provider of mobile production units, has announced that it relies on GCV Anywhere IP remote broadcast kits to produce sports. GCV Anywhere is a remote interface for OB trucks that allows production teams to produce and control live broadcasts while off-site. When the company was looking to build these kits, they decided to achieve reliable audio mixes with the MPA1-MIX-DANTE from TSL Products. The MPA1-MIX-DANTE is designed for use at an outside broadcast or headend where fast audio QC of multiple sources is needed. TSL's MPA1-MIX-DANTE enables the creation of a custom mix on the fly using the independent source gain encoders, from any combination of the 64 Dante/AES67 and 64 MADI sources available. This audio monitor features a source label and a monitor mix label display.

"TSL audio monitors are reliable and sound great. TSL offers both analog and digital connectivity options," says James Piccirillo, Broadcast Network Systems Manager, Game Creek Video. "We love how we can monitor up to 128 audio channels if necessary, and it's easy to use and interface with. All of the sources can be customized, making it a flexible and intuitive solution for anyone using it."

The first GCV Anywhere kit was deployed for a major golf tournament in 2020. Since then, the company has created several more kits, which are used across both major national and regional sports broadcasts, and Piccirillo believes it is something that will continue well into the future. All of TSL's MPA1-MIX-DANTE user features are accessible via the web UI, with a built-in web server designed to enable remote configuration and monitoring of networked units. Source labels can be automatically read from the Dante network, or manually entered using the web UI. MPA1-MIX-DANTE also offers an internal full-range loudspeaker system.
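Conceptually, the custom mix the unit builds is a gain-per-source sum: each selected Dante/AES67 or MADI source gets its own gain before the signals are combined. A minimal sketch of that idea in Python (the function and source names are illustrative, not TSL's API):

```python
def db_to_linear(gain_db: float) -> float:
    """Convert a gain in decibels to a linear amplitude factor."""
    return 10 ** (gain_db / 20.0)

def mix(sources: dict[str, list[float]], gains_db: dict[str, float]) -> list[float]:
    """Sum equal-length sample blocks, each scaled by its own per-source gain."""
    n = len(next(iter(sources.values())))
    out = [0.0] * n
    for name, samples in sources.items():
        g = db_to_linear(gains_db.get(name, 0.0))  # default: unity gain (0 dB)
        for i, s in enumerate(samples):
            out[i] += g * s
    return out
```

Turning a hardware gain encoder simply updates one entry in the gain table; the running sum then reflects the new balance on the next block of samples.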
Straight Arrow News uses Clear-Com solutions to maintain comms between its production teams

Straight Arrow News, an Omaha-based start-up in Nebraska, USA, has as its business model to offer unbiased approaches to news. Their recently launched website has been drawing on this capacity, more than doubling in size thanks to the Clear-Com wireless intercom systems.

Straight Arrow News is using the Clear-Com Eclipse HX Digital Matrix System, LQ Series of IP Interfaces and the Agent-IC Mobile App as its new communications infrastructure as its production workflows continue to develop.

"We're not building for now, we're building for the future," said Chris Childs, Director of Technical Operations. "What we have right now is more than we need, but we wanted something that could scale with us as we grow. We've invested a lot to get this site off the ground, so the last thing we want is to go back to management in two years and say we need to upgrade our equipment. Eclipse HX is highly scalable, and it will easily accommodate our communications needs as they grow in complexity."

Straight Arrow installed a Clear-Com Eclipse HX system to interface with any 2-wire, 4-wire, IP (Dante, AES67, native) and wireless systems. They also included an LQ device to ensure future system expansion. For now, the Eclipse HX system is mainly used for in-studio productions, handling the communications among the control room and three camera positions within the Omaha location. The team also uses the system for mix-minus feeds to and from its news bureaus in New York and Washington, DC, where the morning show interviews are recorded.

The production team is initially focused on pre-recorded segments such as studio shoots with its anchors and morning news updates, all produced through their main facility and control room in Omaha. Their objective is to expand its workload, adding live shows, remote productions and streaming to Facebook Live and other platforms.

The next phase of Straight Arrow's growth includes more live programming, which will give the team an opportunity to put the Agent-IC mobile app to use. During one of Straight Arrow's first tests with Facebook Live, the Agent-IC app allowed their Steadicam operator to communicate with the control room without wires. "We didn't want to wire him up any more than was necessary," said Rusty Havener.
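The mix-minus feeds mentioned here follow a simple rule: each bureau receives the full program mix minus its own contribution, so a remote guest never hears their own voice coming back with a delay. A minimal sketch of the principle (names are illustrative, not Clear-Com's API):

```python
def mix_minus(sources: dict[str, list[float]], exclude: str) -> list[float]:
    """Sum all equal-length source blocks except the excluded
    contributor's own feed, yielding that contributor's return mix."""
    n = len(next(iter(sources.values())))
    out = [0.0] * n
    for name, samples in sources.items():
        if name == exclude:
            continue  # leave the recipient's own audio out of their return feed
        for i, s in enumerate(samples):
            out[i] += s
    return out
```

In this sketch, the Omaha control room would send `mix_minus(feeds, "new_york")` back to the New York bureau and `mix_minus(feeds, "dc")` back to Washington, DC.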
Ninetnine and BluTV develop a linear entertainment TV channel: Helwa TV

Ninetnine, a content provider to digital platforms, has signed an agreement with Turkey's BluTV, one of the largest SVOD services in the country. The deal involves the launch on 15th March 2022 of a new live linear entertainment TV channel supported by both companies: Helwa TV.
BluTV, where Discovery is also an investor, is Turkey's local subscription video-on-demand service and was founded in 2016. It's a major content producer offering a wide range of local and international TV series, movies, shorts and documentaries as well as live TV broadcasts, and is home to all Discovery content and live TV.

Helwa TV, which will broadcast in Arabic and French, seeks to entertain the Arab and North African diasporas living in Europe and North America. It includes the most popular Turkish dramas and originals from BluTV, movies from Ninetnine's 'Le Bouquet Maghreb', cooking shows, children's content and Ninetnine's originally produced and licensed content, including Les Amoureux Voyageurs, Family Football Club and other examples yet to come.

Mustafa Alpay Guler, BluTV Director and Board Member, says, "BluTV sustained a significant growth both in Turkey and abroad in very competitive territories. We became the largest original content producer in the region as the leading pay-TV service and the first SVOD platform in Turkey. Now we are looking to engage our audience with a linear TV offer and are very delighted to partner with Ninetnine for this new project. The company has a wealth of experience reaching Arab and North African diasporas as well as original content that is important to this market. This is also a perfect time for both companies to start thinking of future co-productions."
NEWS - BUSINESS & PEOPLE
Seagate and EVS collaborate to provide a mobile storage and data transfer solution to the media industry

Seagate Technology Holdings plc and EVS have recently announced a new collaboration to provide storage and data transfer services for media and entertainment.

This collaboration is centered around Seagate's Lyve Mobile edge storage and data transfer service, and EVS' live production and replays solution LiveCeption Signature. Use of the EVS solution supports live shows, addressing production needs while transporting multi-camera content and metadata through the Lyve Mobile offering to the intended landing destination. The solution is built to ensure mass-data mobility: from data capture, archive and copy, to access needs for remote on-set production and post-production.

"The full value of a digital workflow depends on technologies which are able to deliver the right content at the right time and to the right audience at any time, without compromising on quality at any stage. Today's M&E businesses need to implement new data solutions that can handle increased volumes of data (UHD, multi-camera) while at the same time enabling the frictionless movement of their workflow," said Melyssa Banda, Seagate vice president for Lyve Mobile Solutions. "Adding Lyve Mobile to EVS live production solutions addresses evolving industry needs and empowers businesses with the latest innovative storage technologies."

"With the significant amount of multi-camera sources being recorded and the use of higher resolutions, customers are increasingly preoccupied with bandwidth constraints," said Laurent Petit, SVP Products & Solutions at EVS. "The use of Seagate's Lyve Mobile Service with EVS' LiveCeption Signature solution allows our customers to move content securely and more efficiently," he adds, before concluding: "the feedback we've been receiving from early adopters is very promising."
disguise adds cloud production capabilities by acquiring Polygon Labs
disguise has announced that the company has recently acquired Polygon Labs, a broadcast data and content visualisation solutions platform. The objective of the purchase has been to reach another milestone in disguise's move towards cloud-based production.

Over the years, Polygon Labs has helped some of the biggest broadcasters turn data into engaging stories through powerful graphics and data visualisation. Polygon Labs' platform is trusted by broadcasters such as CNN, Univision, The Weather Channel and TV Globo. Their services provide high-end Unreal Engine graphics with real-time data visualisation via their cloud or on-prem solutions. By adding this solution to disguise's portfolio, users will gain access to a wider range of broadcast workflows which empower remote production and content creation via xR studios.

"I am very proud and delighted to welcome Polygon Labs to the disguise team! This is a key turning point for our user communities, partners and customers, and a huge step in our direction towards cloud and new media production. The future is cloud-based remote collaboration, and our work with Polygon Labs will unlock a whole new level of productivity and connectivity for all our users," says the disguise CEO.

"Joining forces with disguise will take us into the next chapter of accelerating our journey towards graphics production workflows in the cloud," says Grigory Mindlin, CEO at Polygon Labs.
Globecast relies on Ateme's BISS-CA encryption solutions

Globecast and Ateme have recently announced their collaboration to add Ateme BISS-CA to Globecast's security solutions. BISS-CA is an open, royalty-free, secure and interoperable conditional access encryption standard that can be used on production equipment to transmit content securely. It allows equipment entitlement and revocation in real-time for content streams over any network.

Globecast has been using this standard since summer 2021. The company is expanding its efforts to combat the illegal piracy of premium subscription-based content, and using the BISS-CA standard is key to this.

François Persiaux, Head of Contribution, Distribution & Events at Globecast, said, "Content security is an absolute priority for Globecast, not least as we work across premium level sports events. The BISS-CA protocol used in Ateme's encoder is the perfect match for high-quality video transmissions as the secure encryption tool enables broadcasters to protect themselves against piracy. Ateme's encoders and decoders can be used across a variety of solutions and tracking software to determine the origin of an illegal stream with content being watermarked. Media rights holders can also grant and revoke receiving rights in real-time, securing broadcasts from the source to its end destination."

With content piracy expected to represent a $51.6 billion loss of revenues for the TV and movie industry this year, as stated in Statista's report on global online TV and movie revenue lost through piracy from 2010 to 2022, Julien Mandel, Senior Director at Ateme, said, "BISS-CA has drastically increased the level of content protection available to broadcasters. The standard not only allows media rights holders to safeguard their content; it can also be enhanced with additional safety measures such as watermarking. And it has three powerful key advantages over a private solution: it's interoperable, secure and simple to use."
BISS-CA was developed by Ateme alongside an alliance of public service media and other network equipment vendors and first came to market in late 2020. It uses 128-bit encryption, with the encryption key being changed every 10 seconds.
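The 10-second key change described above can be sketched as a simple crypto-period scheme: the stream's timeline is divided into fixed 10-second windows, and each window gets its own random 128-bit session key. This is only an illustration of the rotation principle, not the actual BISS-CA entitlement protocol; the class and constant names are invented for the example:

```python
import secrets

KEY_BITS = 128        # BISS-CA uses 128-bit encryption
CRYPTO_PERIOD_S = 10  # the key is changed every 10 seconds

class SessionKeyRotator:
    """Hands out one fresh random 128-bit key per 10-second crypto-period."""
    def __init__(self) -> None:
        self._keys: dict[int, bytes] = {}  # crypto-period index -> key

    def key_for(self, t_seconds: float) -> bytes:
        period = int(t_seconds // CRYPTO_PERIOD_S)
        if period not in self._keys:
            # lazily generate a cryptographically random key for this period
            self._keys[period] = secrets.token_bytes(KEY_BITS // 8)
        return self._keys[period]
```

Because any intercepted key is only valid for one 10-second window, a leaked key is worthless moments later, which is the practical benefit of short crypto-periods.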
Garden Studios is an example of this need for production space. It is also an example of the need to embrace technology in order to face the changes that multimedia content production is undergoing. We spoke with Rich Philips, who is in charge of the studios’ technology area, and he showed us how they have designed the place to take into account the needs of such a demanding world as the one we are talking about. But he also showed us how Garden Studios is changing ways and means through virtual production.
How was Garden Studios born and what is its history?

Garden Studios' origin started with trying to find a new home for the Metropolitan Film School, a business that is also owned by our founder and CEO, Thomas Hoegh. In the course of the planning around that, we discovered the Park Royal area. We realized how well suited it was, with other film-related companies nearby. We encountered a couple of buildings that were ideally suited for conversion into film production studios, and then we decided to make our home there in Park Royal.

Why this area? What makes it so good?
Because there are studios such as Shepperton and Pinewood on that side of London, a good support infrastructure has been built in the corridor to those facilities. There are all
sorts of props companies, machine shops, and lighting and camera rental providers. Film is one of the major industries in Park Royal, actually. It is also populated with some industrial buildings that, I assume, were originally built for logistical purposes. Many of them are quite well suited to conversion. We have been in the area for some time and are now buying land to build
THE CORES OF OUR FACILITIES ARE THREE HIGH-QUALITY BUILDINGS THAT WE HAVE CONVERTED INTO SOUND STAGES. THE LARGEST OF THOSE IS 23,000 SQUARE FEET.
extra facilities. We are doing this because we have experienced an increasing demand for more space. Aside from customers asking for more space, a critical mass of space is also needed to accommodate a major Hollywood production. In fact, right now we are tracking something like 18 or 19 additional properties in the area for a future expansion.

Speaking of which, we would like to call attention to the fact that many new studios are being built in the U.K. Why is this happening?

You are right, there are a lot of new studio projects and studio expansion projects in the UK at the moment. The reasons for this, in my view, are as follows: there is an underlying demand that is mainly driven by the generous tax breaks from the UK government; we also have to take into account the availability of skilled personnel, which is second to none in the UK, and in particular around London; and we also have to take into account the cultural
adjacency between the English and Americans; American producers also come because we speak the same language. In addition, demand has skyrocketed further with the explosion of streaming providers and their insatiable need for high-quality content. This was aggravated during the pandemic: shutdowns, delayed productions and the public's consumption of a lot of content all contributed to its growth. They needed, and still need, to maintain activity on their channels to avoid a drop in subscribers and keep them on their platform.

What are your facilities like? Could you give us an overview?

Sure. The cores of our facilities are three high-quality buildings that we have converted into sound stages. The largest of those is 23,000 square feet. We built the supporting facilities around them: the green rooms, the hair and makeup facilities, the meeting rooms, the production offices, all
of that. Then, clustered around those, is a number of other buildings that provide production spaces, workshops, rehearsal spaces, and car parking. We also built a virtual production LED stage. In fact, it was the first facility we opened to our customers. It's something we perceive as one of our main activities going forward. In the last ten months, since it opened, we have received a steady amount of work through that facility.

Speaking of this great revolution, do you think it will replace the
traditional modes of production?

I think "replace" is too strong. There will always be projects for which you have to go out on location or for which it makes more sense to shoot in a physical production space in the studio. However, many film production jobs can be much more efficient and much more sustainable with virtual production workflows. We are very excited about the possibilities that lie ahead.

Going back to the studios, what equipment, such as cameras, lighting and rigging, can we find in them?
The traditional studio model is a dry-rental model. We rent the space, but we let our clients choose who they want to use for lighting, for camera, for rigging, for whatever they want to do. That isn’t always the case for studios. Some of the studios in the UK
have exclusive deals with providers. That is something we try to avoid. We believe it is more important to give our clients the possibility to choose and let them work with whom they want to work with. In addition to the equipment, our technical
infrastructure is based on a large power supply, the truss structures, and an IP network. When we built the stages, we gave them enormous power. We have megawatts of power on each of the sound stages. That is way more than a regular production needs. But we
did it because we already had an eye on the virtual production and you can’t even imagine what the LED walls and ceilings can consume. We built additional steel and aluminum structures on the sound stages to support the lighting and sets. One of the things you
have to take into account when retrofitting an existing industrial building is that the structure of that building was not designed to support a large amount of weight, so you have to put in additional engineering to counteract that. Apart from that, we have also invested in a fairly substantial IP network. All the facilities are linked together. We provide WiFi throughout the venue and the ability to wire
network devices wherever necessary.
We have taken a slightly different approach than some studio operators, because I think the most common way to handle this is to contract with a service provider, but we don't want to force our clients to use the same service providers that we use. It's the same idea we try to maintain when we talk about technology.
Can you specify a little more about your IP network?
We have invested in building our own network, which we manage ourselves. Right now we have a gigabit of Internet capacity with a diverse backup line, but we have linked all the buildings with fiber or wireless links. We can share that Internet capacity throughout the campus, but we can also configure it as we need to meet the needs of the customers. We offer free WiFi, a standard these days, and we have a member of staff who is a great network engineer, and we can set up private networks and do whatever the client needs on that backbone.
WE HAVE INVESTED IN BUILDING OUR OWN NETWORK, WHICH WE MANAGE OURSELVES. RIGHT NOW WE HAVE A GIGABIT OF INTERNET CAPACITY WITH A DIVERSE BACKUP LINE, BUT WE HAVE LINKED ALL THE BUILDINGS WITH FIBER OR WIRELESS LINKS.
We are looking to the future in this regard, and I believe that next year we will be ready to offer a ten-gigabit infrastructure. And we are making it happen because we expect IP networks to be an important pillar of other technology offerings that we develop over time. We've also been experimenting with the cloud. We've been working with Amazon Web Services and Unreal to see what parts of that workflow can be offloaded to the cloud to improve performance. Things like light management can be put in the cloud and run on many parallel machines to take some of the latency out of that process.
Don't you have equipment for hire?
No, we are not renting equipment at the moment. It's something we're looking into. Virtual production is different, of course, because it's not a dry rental. It's a package because it includes all the equipment, and also a qualified team to operate it.
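The cloud experiments described above rest on a simple idea: a light build can be cut into independent pieces and run on many machines at once. A minimal sketch of that idea in Python, with entirely illustrative function names and a toy "bake" standing in for any real Unreal or AWS API (here threads stand in for cloud workers):

```python
# Hypothetical sketch: split a lightmap into independent tiles, "bake"
# each tile on a separate worker, then reassemble. Nothing here is a
# real Unreal Engine or AWS call; the names are illustrative only.
from concurrent.futures import ThreadPoolExecutor

TILE = 4  # tile edge length, in lightmap texels

def bake_tile(tile_id, texels):
    """Pretend light bake: brighten every texel in the tile, clamped to 1.0."""
    return tile_id, [min(1.0, t * 1.2 + 0.05) for t in texels]

def split_into_tiles(lightmap, tile_size):
    """Cut a flat list of texels into fixed-size tiles keyed by tile index."""
    return {i // tile_size: lightmap[i:i + tile_size]
            for i in range(0, len(lightmap), tile_size)}

def bake_parallel(lightmap, workers=4):
    tiles = split_into_tiles(lightmap, TILE)
    baked = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for tile_id, texels in pool.map(lambda kv: bake_tile(*kv), tiles.items()):
            baked[tile_id] = texels
    # Reassemble the tiles, in order, into a single lightmap.
    return [t for tid in sorted(baked) for t in baked[tid]]

lightmap = [0.1, 0.2, 0.3, 0.4] * 4  # 16 texels -> 4 independent tiles
result = bake_parallel(lightmap)
print(len(result))  # 16
```

Because each tile is independent, the same dispatch loop could hand tiles to remote machines instead of threads, which is where the latency savings come from.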
Do you have facilities ready for live television production? It is curious, because there are not many television facilities left in London. Those that were there have either been relocated elsewhere or no longer exist. It's something we're keeping an eye on precisely because of that: there's not much supply left and there's still demand. In the past, some clients have done live production supported with Outside Broadcast vehicles, so the need is there. What makes your studio special? I'll start with the location. We are as close to Central London as any studio can be, really. We're 20 minutes away from Soho by public transport. The name is Garden Studios. That name has its origin in the green field that I talked about earlier. We've taken that theme to Park Royal. We want to make a place that's really attractive to the staff. We don't want it to be a jumble of sheds in
an industrial wasteland. We want it to be a place where people feel connected and happy to work. We are fortunate to be located in a quiet location adjacent to the Grand Union Canal. We are also embracing technology. We see it as an important part of our future. Most studios are facilities that rent space or equipment. We are looking at how we can use technology to offer added value to our clients. Obviously, virtual production is an important part of this. We are also looking at motion capture, 3D asset capture or 3D environment creation, data workflows and image pipelines. We're trying to bring all of that together. Last but not least, we try to ensure that our ecological values are reflected in the studio itself. The film industry is very polluting. Our studios, which receive part of their energy from the sun and which sort their waste for recycling, have to be an example of that change in the industry. In fact, we also have to take into account virtual production, which is more eco-friendly, as it
avoids the waste of fuel in the travel and construction of stages. What is your virtual production stage like? It is a relatively modest one. It's not the size of the stage that they shoot The Mandalorian on. It is a 12-meter-wide wall, four meters high. We wanted to build something that could democratize virtual production. This matter, until quite recently, has been the
domain of the big-budget Hollywood blockbusters. They build the volume specifically for that project and rent the equipment to do it. This is very expensive, but their budgets can support it. Our virtual production stage offers possibilities all the way down the industry chain. Whether for episodic television work, music promos, commercials or independent films, we believe we have built something affordable for these tighter budgets.
I led that project. We were fortunate to come in contact with a partner called Quite Brilliant, who comes from the advertising world, and who was already very interested in virtual production. They helped us understand how to design the space, how to use it and how to do business with it. Other than that, we looked at what others were doing, and talked to experts, either manufacturers or cinematographers who had some experience with this. We compiled all that information and came up
with a design that, in our opinion, fit the needs and was affordable, both for construction and for rental for the clients. You mentioned the Met Film School earlier, are students also involved in virtual production? The School has a long-term agreement with us to sublease part of our space. They have also moved the technology, visual effects and post-production courses to Garden Studios. We encourage them to learn about virtual production with us. We have taught a couple of virtual production courses for them at our facility, and we are building a virtual production training facility for their use and for broader industry use. Really, this is one of the bottlenecks at the moment. There is a lack of people with the right experience and expertise to understand and work with virtual production. So we participated in that collaboration because we saw that education is really important to overcome this problem.
What are the most outstanding projects you have hosted in your studio? Since we got up and running last year, we've had a lot of projects that have been interesting in their own way. I guess the biggest one so far has been a feature film by Matthew Vaughn that was shot on all three sound stages and took up a lot of the rest of the company's facilities. We've done other things. The BBC comedy series Toast of Tinseltown was shot at Garden Studios last year. It was a lot of fun. We did a live event for YouTuber and rapper KSI. Coming soon, during the first half of this year, our studios will be occupied by a major streamer, one of the big Hollywood studios' streaming channels. I can't talk about what it is, but it's an exciting and significant project for us. In terms of virtual production, we've done a lot of things like music and advertising, but I think the highlight was an event we
did in collaboration with Epic and The Mill. We had 150 people in stadium seats around the edge of the stage and did a live demo. It was very entertaining and engaging, and it showed the confidence we have in the technology. What are your current plans? Last year was focused on building the facilities and establishing a line of work through those facilities. Coming out of that period
of company evolution, this year we are going to start investing more in research and development. We are looking at what is available in terms of R&D funding, because if we want to make technology a cornerstone of the business, we have to invest in it. Could you give us a preview of some of those development areas you mention? Right now we are looking at a few things. We're looking
at high-speed capture. It's something that has come up as a requirement from our advertising clients for slow motion work. We know it's already a challenge to get a virtual production LED screen to play back in real time at 24 or 25 frames per second. Even more challenging is getting it to run at 100 or 200 frames per second. That's an area of research we want to advance. We're also looking at the human-machine interface
and how it relates. At the moment, the way it works is that a cinematographer comes in and instructs the Unreal Engine operators to modify the look of the scene, move things around or change the direction of the lighting, for example. We would like to be able to offer our customers the opportunity to participate directly in that process and through some intuitive way. It would mean that, without having to understand the Unreal user interface, they can take control and scale an environment or move things themselves. We are working on that.
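The playback challenge mentioned above for high-speed capture can be made concrete with a quick check. A common rule of thumb (an assumption here, since panel processors differ) is that the LED refresh rate should divide evenly by the camera frame rate to avoid banding:

```python
# Back-of-envelope flicker check for shooting an LED wall at high frame
# rates. The 3840 Hz figure is an illustrative high-refresh processor
# setting, not a spec of any particular studio's wall.
def is_flicker_safe(panel_refresh_hz, camera_fps):
    """True if every camera frame sees the same whole number of panel refreshes."""
    return panel_refresh_hz % camera_fps == 0

panel = 3840  # Hz
for fps in (24, 25, 100, 200):
    print(fps, is_flicker_safe(panel, fps))
```

With an illustrative 3840 Hz refresh, 24 fps divides cleanly but 25, 100 and 200 fps do not, which is one reason high-speed work needs refresh rates and sync chosen per shoot.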
We are also looking at workflows related to asset reuse, because we think it's a big opportunity. Virtual production already offers great potential in terms of sustainability compared to traditional production. If someone has created a particular environment for virtual production, will they be willing to share it with the community, perhaps in exchange for a contribution to the cost of creating it? What systems and mechanisms are needed to track usage, bill it and share the costs? How do you make it available to the wider community? How do you protect it? This also needs to be worked on. And it does not only concern environments; it also applies to 3D objects, virtual props. For example, we have recently been capturing props with photogrammetry and translating them into 3D virtual environments. This research can have very interesting applications.
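One possible shape for the usage-tracking and cost-sharing mechanism discussed above is a simple ledger that records which productions used an asset and splits its creation cost across them. This is purely a sketch of the idea, not an existing system; all names and figures are hypothetical:

```python
# Minimal sketch of cost sharing for reusable virtual production assets:
# register an asset with its creation cost, record each production that
# uses it, and split the cost evenly across recorded uses.
from collections import defaultdict

class AssetLedger:
    def __init__(self):
        self.cost = {}                 # asset name -> creation cost
        self.uses = defaultdict(list)  # asset name -> productions that used it

    def register(self, asset, creation_cost):
        self.cost[asset] = creation_cost

    def record_use(self, asset, production):
        self.uses[asset].append(production)

    def share_per_use(self, asset):
        """Even split of the creation cost across all recorded uses."""
        n = len(self.uses[asset])
        return self.cost[asset] / n if n else 0.0

ledger = AssetLedger()
ledger.register("rome_forum_env", 90_000)
for prod in ("commercial_a", "music_promo_b", "indie_film_c"):
    ledger.record_use("rome_forum_env", prod)
print(ledger.share_per_use("rome_forum_env"))  # 30000.0
```

A real mechanism would also need the protection and rights questions raised in the interview; this only illustrates the bookkeeping side.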
Film facilities to tell incredible stories
RD Content is the origin of these newly created London studios. RD Content was founded in 2009 by Ryan Dean, the company's director, who has also given the green light to the initiative to create RD Studios. Located in the Park Royal area of London, an area of the metropolis heavily focused on the film industry, the facility offers the first-hand knowledge of a company that has spent more than ten years creating content and is now capable of producing film, photography, websites, virtual reality and augmented reality features. It also has the know-how of those who understand the dedication required to tell great stories.
What is the origin of RD Content? RD Content was founded by myself in 2009. I founded the business in a bedroom. In 2012 I hired our first permanent member of staff and by 2021 we had over 100 permanent employees, based in four offices around the world. At the beginning, it was set up at a time when brands were beginning to use social platforms and channels for video content.
We saw a gap in the market to take the best bits of a production company and an advertising agency and bring them together. It has been a slow and steady process building the business. It has all been built on doing great work and keeping our guiding principle at the forefront of everything we do: telling great stories. We have long-standing contracts with some of the biggest clients
globally and now produce films, photography, websites, virtual reality and augmented reality campaigns. Why would an advertising creation company set up a space for filming content such as RD Studios? We are a vertically integrated creative production agency. That means we come up with campaigns for clients,
have producers, directors, writers, DOPs, editors, animators and many others all on the payroll. We own post-production facilities in central London and also own millions of pounds worth of cameras, lenses and lights. A natural next step for our business was asking ourselves, what would we do if we took on the construction of film facilities? How could we make a facility that was the
best for telling incredible stories? Part of our inspiration was the challenge RD Content faced in finding great-quality places to produce our content. So many of the studio facilities were either completely booked up, or didn't quite offer the type of environment we wanted. We spent a number of years building our own construction company whilst also working with the property sector to find the
perfect location. I'm pleased to say the result of that is RD Studios. What are the reasons RD Studios is where it is? We believe that West London, and in particular Park Royal, is the best possible location for studios of this type. It is very close to central London, less than half an hour from Soho. It is also very close to Heathrow and other major transport routes. I believe
that the surrounding area is home to the highest concentration of film-related companies in the whole of Europe. The major camera, lighting and prop companies are located within minutes of the studio. It is also very close to other major studios such as Pinewood, Elstree and Shepperton. We think this is fantastic. When productions there need to find a location closer to central London, we hope to be the ones they turn to. We are also very close to some of the biggest and best creative colleges in the
UK. We are in discussions with several of these schools about how we can partner with them and facilitate new opportunities for young talent. Most importantly of all, it is very convenient for the film crew, many of whom are based not only in the South East of England, but also close to West London. How have you managed to make your studios a green option for production? We chose the building we are in for its green
credentials. In 2020 it was refurbished by Segro and has since won awards for the design work on its green credentials. For example, we have a lot of solar power thanks to the large number of panels on the roof. The building also has rainwater harvesting systems to supply water to the toilets. The parking area has been fitted out to allow electric charging of vehicles and there is ample space for bicycles to encourage people to cycle to the studio. In managing the studios
BRITAIN HAS BECOME A BEACON FOR INTERNATIONAL CREATIVE TALENT. WE’VE BEEN FORTUNATE ENOUGH TO HIRE OUTSTANDING PEOPLE OVER THE YEARS FROM ALL CORNERS OF THE WORLD, AND I STRUGGLE TO THINK OF OTHER COUNTRIES WHO CAN CALL ON THAT RICH DIVERSITY OF CREATIVE THINKING.
we will continue with this idea of being eco-friendly. The studios will be supplied with renewable energy and we are taking steps to ensure that the way we deal with waste is good for the environment. We are particularly concerned about how to ensure that waste from the set (including props) is recycled, and we will be revealing more details about this in the coming months, which is very exciting. Why is there such a growing need for the construction of recording studios for film, television and other media in the UK? Over the past twenty years I believe there has been a growing momentum for the creative industries in the UK. As a country, the UK punches well above its weight in almost every creative field, music, film, fashion, gaming, advertising and so on. Britain has become a beacon for international creative talent. We’ve been fortunate enough to hire outstanding people over
the years from all corners of the world, and I struggle to think of other countries who can call on that rich diversity of creative thinking. It's my belief that this rich melting pot of ideas is what drives the creative industry. Naturally, the demand to create more content here has followed. I think the most visible driver is the world's biggest streaming companies moving their productions to the UK from other countries. More and more are setting up shop in the UK. They want access to the best talent, world-class facilities and the latest technology. All of which the UK provides! Could you give us a detailed description of the basic infrastructure of your recording studio? RD Studios is composed of five sound stages. All of them are equipped with fully automated trusses; Studio 1 alone has nine of them. This studio has a surface area of around 8,500 square feet, second only to Studio 3, which has a surface area
of around 9,250 square feet; and a height of 7,700 millimetres, a common ceiling height shared by all the studios. This studio has a dedicated area for resting, make-up and hairdressing, and toilets within the sound stage itself. Studio 1's power capacity is up to 400 amps. Studio 4, as an example of the extra capabilities our studios offer, includes a floor-to-ceiling green screen. But in summary, we have five stages that are all soundproofed to NR25. We have ample production space, as well as a large lot for parking, or for the art department as we have planned for some productions. What makes RD Studios special, and how does your technical infrastructure differentiate you from other studios dedicated to content creation? The spaces are designed to be flexible. While the immediate assumption
may be that they will accommodate feature films or high-end television, we have designed the studios so they can be used for TV (shiny-floor shows) as well as, of course, for commercial content. They are soundproofed. Even better, they are in an area that allows teams to work at any time that suits their production (as there are no residential areas nearby). This is despite the fact that they are in London's Zone 3 and very close to Soho and other central city locations. We supply a very well stocked studio facility as standard; it is very rare for a facility to include these things as part of their base package: large acoustic curtains fitted in all studios; fully automated mechanical trusses with 1-ton hoists allowing installation of up to 18 tons in the ceiling; all studios have bathrooms and showers, as well as production space, kitchen and hair and make-up facilities; and the studios will also be incredibly connected, offering 10 Gbps fibre connectivity and
WE INTEND TO OPEN A PERMANENT VIRTUAL PRODUCTION STAGE IN 2023 AND CURRENTLY HAVE PARTNERS FOR TEMPORARY INSTALLATION OF VIRTUAL PRODUCTION STAGES.
gallery spaces, lighting desks and edit suites to be installed in them. Through IP connectivity, all studios will be connected to our main server room, giving them 10 Gbps downstream and upstream. Which brands have you relied on? Regarding brands we trust, Arri would be a good one to mention. We use their cameras and their lights a lot. We have a number of partners to support with everything from live TV through to OB trucks. However, these contracts are not finalised with our dedicated supplier and so we can't state specifically which product will be in position when we launch.
What technological tools and solutions have you relied on in your studios' post-production facilities? We utilise a mixture of tools as we offer an end-to-end post-production service across our facilities in London and New York. They include the full Adobe suite, DaVinci Resolve, Pro Tools, Logic, Unreal, Cinema 4D, Maya and many more.
Do you have plans to implement virtual production tools? What are your technology plans for the future? We intend to open a permanent virtual production stage in 2023, with galleries for live production, and currently have partners for temporary installation of virtual production stages. Interestingly, we have received a lot of comments and requests about setting up stages much earlier than we expected. We are already planning our next stage and will, absolutely, think about how to ensure that there is adequate coverage at all sites to support virtual production. However, one area we are incredibly excited about is our plans for VR and AR. We believe that studios are potential environments to develop and produce incredible experiences in this field. I believe that, in the next ten years, this will become a much more important part of our studio offerings than it is today.
VIRTUAL REALITY AND VFX
Faber Courtial is a creator of worlds. Well, it is literally not possible for human beings to have such a creative capacity, but today technology has brought us closer than ever before. Software and hardware can take us to worlds very different from our own. Worlds that were, that could have been or that will never be, but that the tools and professionals at Faber Courtial have put within our reach. Faber Courtial can bring to life the splendor of 2nd century A.D. Rome, it can take us to The Moon, or it can breathe life into the origin of German civilization. And best of all, these creators of worlds can explain how they do it.
Interview with Maria Courtial, Producer & Co-Director First of all, how was Faber Courtial born and how has it grown over the years? It all started way back at university. While studying industrial design (at that time there was no such thing as media design to choose from), I met Joerg, my later partner ‘in crime’ and husband. We got on well, worked on a few study projects together and decided to open our own, joint studio shortly after graduation. At first, we mainly created visualizations and animations for designers and architects and worked on commissions for the industrial sector, until we decided to venture out into the world of film and realized we had hit home. We started creating our trademark Faber Courtial Worlds, which set new standards in visualization, and so we quickly built a strong reputation as leading experts among
the broadcasting and the exhibition sector. In 2014 we added the important area of virtual reality to our portfolio, which gave us an additional boost in terms of scope and creative expression. Our immersive experiences soon gained international recognition at major festivals such as the International Film Festivals in Venice and Cannes, the Tribeca Film Festival, Siggraph, Stereopsia, FIVARS and SXSW. Currently we operate with a dedicated team of 15 and a large network of freelancers.
Who is Maria Courtial and how did she get to where she is? My two favorite areas of interest have always been technology and the creative crafts. I started out studying physics but switched to industrial design after two years to pursue my desire to create something by myself. After graduating and founding Faber Courtial, Joerg and I each had a lot of free space in our very own “biotope” to thrive and indulge in our ideas and visions. This was particularly true for the time from 2014 onwards when we
reduced the number of commissioned projects to pursue the creation of independent Faber Courtial productions. Creating strong and emotional VR experiences by striking a close and poetic balance between
the important elements of camera, music, narration, and editing is an absolute matter of the heart to me. Today, I am mostly the wholehearted producer of our products, and the reality is that the more the company grows, the more time “I have to” spend as CEO, shaping Faber Courtial’s path within the industry and towards the future… with a tiny little grain of melancholy that there is not more time to also work creatively.
The world of content creation seems to be made up mainly of men. What is your opinion, and how do you think this situation could change? I don’t necessarily agree. There are several women in the field of XR who are producing great content. In fact, I believe that the fascination with the expressive possibilities of XR and the ability to convey relevant topics with a broad array of emotions and intensity is an area,
women particularly excel in. Overall, I believe that with the industry growing and evolving we will see far more visionary and technology-savvy content by fabulous women. What are the targets and objectives of content such as yours? Ideally our content should quite simply overwhelm. We aim to offer a completely new experience, and when people take off their glasses and have tears in their eyes, then we know we’ve done it right. Our studio creates fascinating worlds that
WE AIM TO OFFER A COMPLETELY NEW EXPERIENCE AND WHEN PEOPLE TAKE OFF THEIR GLASSES AND HAVE TEARS IN THEIR EYES, THEN WE KNOW, WE’VE DONE IT RIGHT.
used to exist or that open a window into possible future worlds. Worlds that have long gone or that are not easily accessible. Still, everything we do is based on authenticity and has a true core which we want to convey in the most touching and emotional way possible. As such our work resonates with many of us, be it across space travel, science, history or the past and future of humanity. Our focus has always been on the highly realistic and emotional realization of the worlds that we create, whether it’s our long-standing
work in traditional film or, more recently, our own immersive experiences. What projects has your company been involved in and which ones would you highlight as the most challenging? Why? I guess we would have to start with the documentaries we have worked on, such as The Germans with ZDF in 2010, The Greatest Race with Channel 4, Smithsonian Channel and ZDF in 2018, or Planet of Treasures with Christopher Clark with ZDF/ZDF Enterprises and Interscience in 2020.
Then there is our work for museums and exhibitions. We were actually the first company to introduce emotionally gripping film installations and immersive experiences to the German museum landscape, such as: Time Travel Vienna 4D for the Experience Museum Vienna (2012), The Popes for the rem in 2017, or our most recent project Time Machine for the LWL Antiquity Commission. And last but certainly not least, there are our own VR productions like Volcanoes, Gladiators in the Colosseum, Time Travel
Cologne Cathedral, Follow Me – Rome, 1st Step, 2nd Step and GENESIS. In the ranking of most challenging projects, I would certainly name some of our own VR projects. We’ve always strived to deliver the highest quality possible, so we often work on the edge of what’s possible, making our own work at times rather challenging and rocky, as in Gladiators in 2016. That was the first time we integrated real actors into a virtual environment in VR, and we went straight down the hard way: Real gladiators
need a sandy ground to fight on, so we built a huge 180° green screen and installed it at a local indoor riding arena, where the shoot took place. Back then we also had to build our own camera to record everything in perfect stereo 3D quality. GENESIS is another example: In the middle of the production, we
decided to make a hard cut and switched from our traditional pipeline to a realtime engine workflow. A bit insane to do so without experience and with a tight project deadline to meet, but it was absolutely the right decision. Now we have our worlds, the right pipeline and our expertise across all forms of immersive experiences and also virtual production.
And this leads us to the biggest challenge we have had to overcome to date: we just developed Phalanx, an in-house toolbox that allows us to scale down the photorealistic and detailed data of our settings so that they can be immersively experienced even on mobile devices. As far as I know, there is nothing like this available anywhere in the world.
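Phalanx itself is proprietary, so the following is only a generic illustration of the kind of reduction such a toolbox performs: vertex clustering, one standard way to shrink a dense mesh for mobile delivery by snapping vertices to a coarse grid and merging those that share a cell:

```python
# Generic vertex-clustering sketch (not Faber Courtial's actual method):
# vertices falling into the same grid cell are merged into one, and a
# remap table records which reduced vertex replaces each original.
def cluster_vertices(vertices, cell_size):
    """Merge vertices that share a grid cell; return the reduced set and a remap."""
    cells, remap = {}, {}
    for i, (x, y, z) in enumerate(vertices):
        key = (round(x / cell_size), round(y / cell_size), round(z / cell_size))
        if key not in cells:
            cells[key] = len(cells)  # assign the next reduced-vertex index
        remap[i] = cells[key]
    reduced = [None] * len(cells)
    for key, idx in cells.items():
        reduced[idx] = tuple(c * cell_size for c in key)  # cell center position
    return reduced, remap

verts = [(0.0, 0.0, 0.0), (0.02, 0.01, 0.0), (1.0, 1.0, 1.0)]
reduced, remap = cluster_vertices(verts, cell_size=0.1)
print(len(verts), "->", len(reduced))  # 3 -> 2
```

Coarser cell sizes trade detail for smaller data, which is the same trade a real scale-down pipeline tunes per target device.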
On regular basis, what are your workflows? Do you design and create everything needed for virtual reality content; from sketches to animation? Creating a 360° film is as exciting as it is frustrating. It starts with the delightful phase of daydreaming, sketching, and storyboarding, continues with juggling vision and technology, and ends with the painful realization that in the VR space, you have no frame, no cropping, and no close-ups to convey your vision and intent. Instead, you have a room with an insane amount of detail that requires flawless perfection for the viewer to be fully immersed. You can push and accentuate a little, but ultimately, it’s up to the viewer to experience and hopefully enjoy their version of the story. Luckily, with the more recent software solutions, we can now – even in VR - bring the vision to reality at a very early stage. This can be quite sobering at times, but it also gives you more time and flexibility to achieve your intended vision after all.
CREATING A 360° FILM IS AS EXCITING AS IT IS FRUSTRATING. IT STARTS WITH THE DELIGHTFUL PHASE OF DAYDREAMING, SKETCHING, AND STORYBOARDING, CONTINUES WITH JUGGLING VISION AND TECHNOLOGY, AND ENDS WITH THE PAINFUL REALIZATION THAT IN THE VR SPACE, YOU HAVE NO FRAME, NO CROPPING, AND NO CLOSE-UPS.
Since we have built many VFX worlds over the years, we now rarely start from scratch for a VR project. Still, the amount of work always rises exponentially
when we arrive at the above-mentioned critical stage and the creation of insane amounts of detail. What software do you rely on to develop your work? To achieve the output quality we envision, the number of programs we are using is steadily increasing. Our main software is Unreal these days. For modelling we work with the likes of 3ds Max, Blender and Houdini and, in addition, with programs like Substance Designer and Nuke. What companies have you worked with, and could you tell us about an experience that has been really interesting for you? Over the years we have worked with companies from a wide variety of fields. We had a long focus on VFX and animation work for film production companies, like Gruppe 5 Film production and Interscience, as well as TV channels such as Channel 4, Smithsonian Channel, ZDF, WDR and arte. For our VR film productions, we
collaborated with partners such as ZDF, WDR and Deutsche Telekom. In our collaborations, we were privileged to always have our partners’ full trust and consequently to be granted an exceptional degree of freedom throughout the entire creation and implementation process. With such a degree of trust, you will never want to disappoint; quite the contrary, you are all the more eager to achieve perfect results. One example that sticks out is the work on the VR film ‘1st Step’ about the Apollo missions. We started by scanning through the NASA archives and selecting original photographs with particularly impressive positions from the Apollo 17 mission, which we then stitched together into a 360° panorama (while obviously taking care of the many holes in between the picture material). We also spent a lot of time on intensive research for the best and highest-resolution satellite and elevation data of the moon to create a high-quality 3D
moon model. Since NASA also provided the exact photo position of the astronaut, the team was able to do something really exciting: position our virtual camera at the exact position of the astronaut at that time. This camera then projected the stitched panoramas onto the digitally accurate altitude data (collected from satellites and lunar cameras). We had two completely different sources here, but the match was perfect. It was breathtaking to realize that this perfect match actually delivered further proof that the astronauts were indeed on the moon! What challenges are Virtual Reality and VFX facing today? Virtual Reality and VFX are two very different areas which I would look at separately. As for VFX, the Unreal Engine has opened a broad spectrum of great new possibilities. You can now work much faster and more intuitively through all the phases of your project, from early planning until final
implementation. Virtual production will continue to be the important keyword here that will significantly shape the future of the film industry. Virtual Reality, on the other hand, has received a lot of momentum from the “Metaverse”, which is currently on everyone’s radar, yet the “hardware issue” is still preventing VR from going mainstream. Developments and innovations are being made: a connection to a powerful computer is no longer a prerequisite, and there have been improvements in VR headsets (with more to come later this year). Still, everyone is kind
of waiting for the super device. In terms of content production, VR is far more complex than VFX. While in traditional film you look at a framed piece of a world, in VR the entire world is visible and requires flawless perfection for the viewer to be fully immersed. The challenges you face in XR are much higher, which is why we have only seen slow but steady increases in quality over time.
How can software and hardware be enhanced? Putting content development aside for a moment, an important improvement to make Virtual Reality experiences even more impressive is the ability to view and interact with very high-quality content on mobile devices, which is currently not easily possible or only to a very limited extent. With Phalanx, our in-house developed toolbox, we have taken a significant step closer to this option, as we have managed to scale down data-rich, high-quality content so that it can be immersively experienced even on mobile devices. VR has this impressive ability to fully immerse you in a space: imagine walking through ancient Rome, visiting Mars, or exploring the moon with an intensity that makes you believe it’s reality. And just imagine how fascinating it is that all of this is happening right in your own living room.
What are the ways of improvement to make Virtual Reality content even more impressive? We will certainly overcome streaming limitations in 1-2 years, but until then our tool Phalanx can bridge that time. By then we will also see a significant increase in good content. For outstanding experiences, it is however not just a matter of high image quality. There is still so much to explore in terms of good storytelling, new use of the medium and innovative types of experiences. A truly exciting field for many years to come.
What is the future of Faber Courtial? We will continue to create the Faber Courtial worlds. Worlds you crave to see once in your lifetime, and we will work on making these worlds accessible to everyone. The experiences, products, and ways to get there can be very diverse: from traditional film to AR and VR apps, to direct experiences as social events in the metaverse.
Searching for living technology

Romain Lacourbas' skills in film and television photography have been well proven for 15 years now. Originally from France and a graduate of a film school in Paris, this cinematographer had the good fortune to work with Luc Besson and Olivier Megaton. Godfathers like these gave him the impulse to become what he is today. He has just finished shooting several episodes of the second season of The Witcher, a Netflix series of worldwide interest, and we asked him about his experience with the witchers, how he feels shooting with green and blue screens, and how he sees the future of cinematography thanks to the implementation of techniques such as virtual production; in his own words, "the great and current game changer, because it allows you to feel the space where you are recording as only a real space could."
Who is Romain Lacourbas? What has been your progression?

I entered a cinema school in Paris in 2000. Then, I was lucky enough to be hired as a trainee on a feature film, and the DOP was Pierre Aïm. It was an incredible opportunity to meet Pierre Aïm. He then became a kind of mentor and we also became very good friends. He basically taught me lighting and how to use a camera. Together with him I became a second AC, a first AC, and then I also worked together with other cinematographers. Some time later, Pierre trusted me to be a camera operator with him, until he let me become DOP of the second unit. And that's basically how I came to light. I'm from the camera department. In the meantime, I was trying to do as many short films as I could in between the films where I was an AC. Later, I did my first feature. It was with director Lola Doillon and that was the first feature for actually everyone. It was quite a lot of fun. After that, I did other small movies, small in terms of budget, obviously. At some point, I met Olivier Megaton and Luc Besson, and I started doing a lot of commercials and videoclips with the first one. After a year, he offered me my first big international movie, which was Colombiana. That's more or less how it all started, making that film. Obviously, it opened up the possibility of working on more international projects.

How did you get involved in Colombiana?

As I said, I was doing a lot of commercials with Olivier Megaton. And because of that, I had the opportunity to join Transporter 3 as second unit DP. Again, sometimes it is all about luck. At the end of that film, there were many reshoots scheduled, some in France, some in Ukraine; and the principal cinematographer who was doing the entire film, Giovanni Fiore Coltellacci, was no longer available during the
reshoot. Since I had worked a few days on the second unit with Olivier, he said to me, "Okay, why don't you come and do all the reshoots with me." I said, "Yes, of course." It was supposed to be like three or four days of reshoots, and it became four weeks… something crazy, but also amazing for me.
Colombiana was set to start a few months after those Transporter 3 reshoots. We were getting along very well together and he was thinking of me; however, he was not sure at all. We went to do the scouting together, time passed, we did the prep together as well and, suddenly, it became logical for him to trust me with that project. Also, Luc Besson was involved in that project and he was really into trusting young people and young DoPs in his movies. It depends on multiple factors, but it started with Olivier Megaton, basically.

Would you say there is a difference between becoming a cinematographer today and 20 years ago?

I do not know if it is harder or easier. Probably, in a way, it's more difficult because there is so much competition in the market, and there are so many talented people from all over the world. That's because it's so easy to get a camera these days. Everyone can start training with amazing images as long as they have an iPhone. So access to learning is much easier today than it was 20 years ago, because back then you had to shoot on stock and that used to imply a great sum of money. But, on the other hand, there is a lot of competition.
What is the main difference you find between Netflix's big features and smaller projects?

As anyone might expect, the first and main difference is the budget and, related to that, the scale and crew of the project. From a technological standpoint, I would say the main difference is how the whole workflow is thought out. On a Netflix project there is a good amount of preparation that is really about making sure the pipeline works: from the shooting on set to the HDR deliveries, for example. A Netflix production is one of many being developed simultaneously. In a smaller project there is no need for such a structured workflow. Nor does it usually exist in a very large production where there is also a lot of money involved but no associated streaming platform. I think the big technological difference is the structuring of the pipeline. Of course, just because it's Netflix, it doesn't have to be a big project. You can also have a smaller budget on a Netflix production, but there you will still encounter that structure. And that's very interesting because it gives you the opportunity to try things, because they have the structure for it.

Do you have preferred camera and lens equipment?
I do have some preferred formats; however, I think it really depends on the project. You can think that a particular piece of gear fits well, but if it does not fit the story or does not really make sense with it, it will not work.
To tell you the truth, I really like anamorphic, in general, and I'm passionate about Panavision lenses, the old anamorphic glass lenses, like the C, the E series and the hybrid B series. They immediately tell the audience, "you're looking at a story", because they maintain a distance between the audience and the screen, simply because of the distortion, the bokeh and that typical depth of field. Reality through the camera is no longer real, so there is a distance, but at the same time it catches you as a viewer.
Now that large-format lenses have come along, I think 1.5x anamorphic lenses are amazing. What Bob Richardson used in the Tarantino film The Hateful Eight, for example, is just incredible. Even the large-format spherical lenses: we shot The Witcher with the DNAs, and I find it very interesting because it's
another way of including the characters in an environment. With the same field of view, you use slightly longer lenses, which brings the character closer to you, closer to the camera, and at the same time, you still see a lot of the set and the environment. That helps include the characters in the situation. So, answering your question, for my sensitivity, I would go for anamorphic, and when I shoot spherical, I tend to use lenses that are not too perfect, not too clinical. That's also
why I really liked the DNAs, because you have accidents. I like accidents and I try to look for them. As far as cameras go, I had a great time with the Alexa LF, obviously, but I have to say that I love the color that the Venice produces, especially at the bottom of the curve. The color palette it offers in low-light conditions is really amazing.
Do you consider yourself to have a very personal style?

I hope not. I think we all have our style and that's true for all creative departments. But, to be honest, I don't think someone watching The Witcher and Taken can say, "Oh, it's the same cinematographer." I always try to renew and adapt myself, which is probably the most difficult thing to do. For me it's more interesting to start from a blank page and try to find the right visual aesthetic for each project you do.

Did you have to adapt to the style of The Witcher because of the video game?

I didn't have to adapt to the visual environment of the video game. I saw a lot of footage of it, obviously, at a very early stage of preparation, which was when I was offered the job, and I watched it just out of curiosity and to see whether it inspired me or not. I got involved during the preparation of the second season and, obviously, I
watched the first season twice: first, to understand the characters and the stories and, second, to also see what the other cinematographers had done. However, the second season takes place mostly in Kaer Morhen, and I had to adapt to the work of the production designers and the showrunner, but on the other hand, I had a lot of freedom to explore new colors, different densities, and different types of contrast. That gave me a lot of freedom to try different directions. I was mostly influenced by the production design work: because it's the look of the sets, the textures of the sets, and the color of the costumes that influence me, and the script, of course. I wasn't forced to do anything very special. However, you obviously keep in mind that you're working on the second season of the series, so the goal of all this is that the audience has the feeling that they're watching the same series.
When did you become involved in the series?

I got involved in November 2019. That's a long time ago. And we finished shooting in April 2021. It was a long adventure which, obviously, was even longer because of COVID. We had to take a four-month hiatus in the middle of pre-production. When we resumed, around mid-July, I had one more month of prep. It's a long prep time, which I love. It gives you more time with the director, with the producer, with everybody, to find the right aesthetic and to choose the right equipment. Basically, it also helps save money. Preparation is the key, because you start on set with a plan. Then you can always change the plan, but the most important thing is to have one.

Did you notice any differences in terms of the long pre-production stages between larger and smaller projects?

From what I know from my experience, you can tell the difference between projects in the preparation
because it is always tied to the budget. It's really understandable that on a small project you can't have three months of preparation. That would be nonsense, wouldn't it? However, I always try to participate as much as I can. Even if the budget doesn't allow it, you try on your part to do a little bit of research and start working a little bit in advance.

Surprises happen, so did you have to change your preparation plans during the filming of The Witcher?

You have to adapt all the time, yes. There are always surprises. I remember a scene in one episode outside the mansion. We were shooting it on stage. Vereena turns into a bat and attacks Geralt. A long fight ensues in that courtyard. She gets killed at the end, gets speared through, and then she's supposed to turn back. It's night, it's snowing, and the scene has a lot of other parameters to take into account. You have to combine all the elements:
the snow, the consistency of the snow and, also, finding some trick to make it believable that only her head does a 180 while the body doesn't turn. There are so many situations that a plan never takes into account, no matter how much you plan. Other times you simply change the plan because you realize, looking through the camera, that what you had prepared doesn't work.
What was the most important technological challenge you faced during your time on The Witcher?

I guess it was dealing with the visual effects of the last few episodes. There are all these big basilisk-style dinosaurs coming out of a portal, and that leads to a very long fight between the warlocks and those monsters. Even though we started with storyboards
and then moved on to previs, stunt previs and VFX previs, and we had all these different documents so we could know where we were going and where we were, it was a big challenge, because in the end you just shoot witchers fighting a guy in a blue suit holding a fake foam head. You don't quite know what it's going to look like in the end because it's pure CGI. To get through it, everyone involved has to
be very communicative and collaborative.

You've already mentioned it, but what was your equipment for The Witcher and why?

We had a multi-camera setup consisting of three Arri ALEXA Mini LFs and a set of DNA lenses. We also had a set of Signature Primes and a couple of zoom lenses. I also added a 58-millimeter Petzval that I
used for very, very specific moments. I think I used it only four or five times over the course of the entire series. We used DNA lenses for the reasons I gave you above. They're spherical lenses, but they have these weirdnesses and accidents. There are not really distortions, but they're not perfectly clean, and there's a little bit of poetry, life and organic things that happen with those lenses. Also, it's interesting because you can detune them a little bit. During prep, we did a lot of work trying to misalign them by shifting the edges and investigating how we could make those accidents happen. Also, I used a lot of wide lenses, like the 15-millimeter and 18-millimeter Signatures. Those focal lengths are impressive. DNA lenses only start at 21. I needed to have shorter and wider lenses as well.

Have you ever had the opportunity to shoot in HDR? What do you think of this technique?

We don't have the ability
to shoot in HDR yet; all you can do is shoot for HDR, displaying in SDR on set, and then composite it in post-production. Today we are not able to properly monitor HDR on set yet. It is starting, but it is still very heavy. We monitored in Rec. 709 and our color space was ACES. The final grading was done in HDR, and then we did a trim pass for SDR. The problem with HDR is that it may not be for all types of shows. Those very intense lights and those very low, deep blacks may not be right for every story. And, precisely for The Witcher, I think it's been a very good tool that I'm liking more and more. Anyway, I think the good use of HDR is not to push it too hard because, in theory, you could set the lights to 4,000 nits or something as exaggerated as that. For the audience this becomes aggressive and, even, disturbing. Although if you adjust it so that it's punchy, nice, dynamic, but not aggressive and not hard to read or see, then it's an amazing tool. I must say that we also did HDR post-production for Marco Polo, the first season. At that time, I also liked it, but it was the beginning of HDR. Now, I had a lot of fun doing The Witcher with that technology.

What do you think about Virtual Production? Is it going to change the way we all produce content?

I don't see how it could not change. I haven't shot with that technology yet. I saw a lot of tests and I tried a little bit of it, but I'm not very experienced in Virtual Production. However, just looking at The Mandalorian and productions like that, it is a real game changer, especially during COVID times. It's going to take a little time because it's still an expensive technology, but it's going to represent the end of the green and blue screen. From a technological point of view, it's incredible, and also for the actors, who can be on set and not see a green screen, but feel the environment. It makes a big difference.
Apart from price, which is the main limitation at present, I don't see many other limitations. Although I have to say that space will limit you as well. For example, imagine you want to photograph someone framed in full height walking across a really wide space, or someone running for a long time. I am sure that more and more productions are going to use it. And, indeed, there are already many companies investing in this system and building suitable stages with it. I don't know what the next challenge will be in the future, but right now we are taking steps towards developing all the possibilities of this amazing technology.
Joyn is a VOD content platform designed for and by Germans. Under its two subscription modes, which combine free access to content with the enjoyment of a more select portfolio of titles through a premium subscription, it aspires to become the number one platform in Germany, according to our interviewee Benjamin Risom, CPO at Joyn. In this interview, he explains how the platform is financed, the technological development it has undergone throughout its life and the fundamental characteristics of its foundations. Check out the foundations of an OTT that aspires to become the first choice of all Germans.
How were Joyn / Joyn PLUS+ born?

Joyn launched in June 2019 as a joint venture between the broadcasters ProSiebenSat.1 and Discovery. In December our premium offer Joyn PLUS+ was added. Consolidating different entertainment offerings into one app and under one roof was something that a lot of people were looking for, but that didn't exist on the German market until that point.

How has it developed through the years?

We are continuously improving and developing our product. In the beginning we worked on the availability of Joyn on different devices and platforms, so that we are now available on all common devices. We also integrated a lot of partners and on-demand content. With more than 24 million app downloads and around 6.4 million monthly active users, we are very happy with the development of Joyn and look forward to everything to come.
What is the business model of Joyn / Joyn PLUS+?

With Joyn, we offer a wide range of free content with unique Originals, media libraries, live TV, and catch-up of many popular TV shows. This extensive offering is ad-financed and virtually the gateway to the Joyn world. With Joyn PLUS+, we also offer a subscription-financed streaming service that includes additional international exclusives, previews, and premium originals.

What makes Joyn / Joyn PLUS+ different from other OTT / VOD platforms?

Joyn claims a unique positioning in the German market: with our freemium model, we are clearly positioned in the German market and thus have an important differentiating feature. The combination of a free offering with live streams and a media library with catch-up, preview content and local originals, as well as a premium area with live streams, including
HD pay-TV, even more exclusive international content and premium originals, as well as movies and series, has not been available in the German TV market before. And we are constantly building out our proposition. Our content portfolio is evolving and we are currently adding more short-form content for snackable entertainment moments. Moreover, we will have over 100 exclusive previews in 2022 from our shareholder ProSiebenSat.1, which will air on Joyn before they are shown on live television. Additionally, our platform itself is developing into a more personal experience with amazing features.

Benjamin Risom, CPO at Joyn.
What criteria do you use to choose the titles you offer on your platform?

We focus on variety in our in-house productions. We give young talent and bold ideas a chance. But we particularly try to close the gap between YouTube and classic TV. We produce storytelling content with well-known personalities from the social web or create our own formats around social media stars. In addition, we receive a lot of content from our content partners and our shareholders. This content is so varied that on Joyn we can truly offer something suitable for everyone.

What is the technological infrastructure of your platform?
Joyn is built using microservices and a message-driven architecture. We are cloud native, which helps us with speed, agility and resilience. We build most of the services in house and use SaaS products where it makes sense.

What is the cloud platform that hosts Joyn / Joyn PLUS+? What services does it offer to you?

We have a multi-cloud (GCP and AWS), multi-region and multi-CDN approach for achieving the best quality for our customers and the best tech product fit.

How much of your content is broadcast in HD or 4K? Are there plans to continue growing in this aspect?

The private broadcaster live streams and media libraries are in SD within our free offering. In the subscription product, live streams and on-demand content are offered in HD. We have a clear distinction between the tiers here, in alignment with our business model. With regards to our own productions, we produce the originals in 4K, even though they are shown in HD. The future will certainly move towards a 4K standard, even on TV, and we plan to increase the share of high-resolution content.

Do you have any goals to achieve?

We focus on the local market in Germany and on offering our users the best experience. The German market is very unique and we believe we have found a great mix to excite the users here. Our goal is to take a leading position in the German market, and this demands all our efforts, also in terms of ad monetization. The multimedia content market is growing a lot and the offer is huge for the viewer.

What do you think the future of OTT / VOD platforms will be like?

There will be a consolidation of content and offerings. Users will demand more specific topics and an experience tailored to their habits and interests. Whether it is news, local creators, sports or blockbuster movies, users expect that their needs are met on one platform. We feel well positioned as an aggregation platform for this. We already bundle the content of various providers and believe that our content and product strategy will pay off in the future. Our goal is to be the number 1 local streaming provider in Germany.

What is the technological next big step for Joyn / Joyn PLUS+?

We will be focusing on leveraging the power of our data in order to provide the best user experience. We aim to be state-of-the-art in terms of orchestrating individual user journeys, especially in regards to recommending content.
Rethinking Infrastructure Implementation: Brazil's Next-Generation Digital TV System

With a rigorous, evaluation-based selection model for the country's TV infrastructure, the TV 3.0 project in Brazil sets the benchmark for technology selection and evaluation.

Authors: Elena Burdiel and Adrian Murtaza, from Fraunhofer IIS
Brazil is currently implementing a major technological upgrade of its digital TV system. The country is looking to move to the most technologically advanced solutions available today in order to provide the best possible experience to its viewers for decades to come. Recognizing the massive scale of such an undertaking for all stakeholders, the project lead was taken on by the SBTVD (Sistema Brasileiro de Televisão Digital / Brazilian Digital Television System) Forum,
which "was created by the Brazilian Presidential Decree […] to advise the Brazilian Government regarding policies and technical issues related to the approval of technical innovations, specifications, development, and implementation of the Brazilian Digital Terrestrial Television System (SBTVD). The SBTVD Forum is composed of representatives of the broadcasting, academia, transmission, reception, and software industry sectors, and has the participation of Brazilian Government representatives as nonvoting members." (https://forumsbtvd.org.br/wp-content/uploads/2020/07/SBTVD-TV-3-0-CfP.pdf)

A language selection menu can be included in MPEG-H Audio TV content. It allows users to seamlessly switch between language options if the content creator and broadcaster provide the option.

The SBTVD Forum determined that the best way to achieve a smooth changeover was to increase the life span of the existing system through a backward-compatible evolution (a project called
"TV 2.5"). Parallel to the running TV 2.5 project, the TV 3.0 project, that is, the development of the next-generation Digital Terrestrial Television system, was started. It includes the introduction of cutting-edge video and audio technologies as well as even more advanced options to benefit audiences, broadcasters, and producers alike.
After issuing a Call for Proposals for next generation technologies in 2020, the SBTVD Forum selected the most advanced technologies from the submitted proposals. The decision for each technology selected for TV 3.0 was based on the results of very strict and transparent testing and evaluation conducted by multiple independent laboratories involving about
70 researchers from seven Brazilian Universities. With the Brazilian Ministry of Communications’ approval, the SBTVD Forum published the results of their selection in January 2022. For the audio component of TV 3.0, three audio technologies were proposed and all of them were evaluated with the same test procedure by the audio experts from the test lab in charge of the audio component. The test used an end-to-end production and broadcast chain and only the MPEG-H Audio system met all mandatory requirements defined in the TV 3.0 Call for Proposals, demonstrating its
maturity and unmatched capabilities. Accordingly, MPEG-H Audio was selected as the sole mandatory audio codec for the future TV 3.0 broadcast and broadband applications.
A menu for dialog enhancement options can be created with pictograms (for example, for an audience that has trouble with the original language or cannot read it).

What a Future-Proof Audio System Needs to Deliver

In their Call for Proposals, the SBTVD Forum had specified several technology requirements an Audio System had to deliver on to become part of the new broadcast system. These included immersive or 3D Audio as well as personalization options. The latter make for a customized experience
of programs like sports events, documentaries, and concerts. But they also ensure a higher degree of accessibility for people with visual and hearing impairments, allow the easy choice between several languages, and ensure a comfortable listening experience with dialog enhancement options. Such advanced accessibility options are indispensable for modern and inclusive broadcast systems that have to meet the demands of a diverse audience. Fraunhofer IIS has previously demonstrated the capabilities of the MPEG-H Audio system on several occasions around the world and in Brazil: The technology enables content creators and broadcasters to deliver a broad range of easy-to-use customization options to their audiences in a single data stream and at the same time ensures that the creative vision as well as the optimum sound delivery frame are maintained. During the technical evaluation of all proposed audio systems, MPEG-H
Audio also stood out due to its capabilities to handle metadata in live production and live broadcast. Metadata is essential for offering these new interactivity features to the audience and it has to be created on the fly during live events. Additionally, during advertisement breaks, the audio format usually changes from an immersive and interactive representation to an ad with only stereo audio and back. The MPEG-H Audio system proved during the test phase that it contained the richest metadata set, thus ensuring that the broadcaster has full control over all features. It was also the only NGA system capable of handling such configuration changes in an existing live broadcast infrastructure. These advantages are expected to contribute significantly to an improved quality of experience, to its realism, and to the audiences' feeling of immersion into the content.

Menu with advanced audio options that can be integrated into an MPEG-H data stream.

The Bigger Picture: What Can be Learned from the Brazilian Model?

The TV 3.0 project could have an impact across Brazil's borders. As the largest economy in South America, the country often plays a leading role in the adoption of new technologies and standards: In 2006, it was in the front line of adopting ISDB-T and several South American countries followed the lead (Chile, Argentina, Venezuela, Ecuador, and Peru, to name just a few). Adopting the same broadcast technology again now could have several benefits for them: First of all, using Brazil as a Best Practice example and deciding on systems that were thoroughly tested for TV 3.0 saves resources during the costly implementation of a new infrastructure. The production and exchange of content also becomes much easier, faster, and more cost-efficient, and border regions do not suffer from lack of service due to incompatible systems – a case in point is the comparable situation at the Canadian and Mexican borders to the USA.

Looking at Brazil's approach to implementing a new TV infrastructure from a more general point of view, some
aspects of the selection process are worth noting for replication in similar projects. The importance of appointing a central managing body responsible for the definition and implementation of the entire process seems so obvious that it hardly needs mentioning – nevertheless, it is a key success factor of TV 3.0 that is not in place in every market. The next point that needs to be mentioned is the structured definition of the requirements for the entire TV system and the creation of detailed “spec sheets” for all technologies needed. Based on this, the SBTVD Forum was able to create a transparent selection process that considered all relevant stakeholders including the audience and government. For the testing itself, independent test labs were assigned and financed by the Brazilian Ministry of Communications. Such measures serve to choose the best possible technologies based on their evaluation and to build trust in the choice as well as support from all stakeholders.
A highly relevant aspect of the new infrastructure chosen for Brazil was the inclusion of advanced accessibility and personalization options in the Call for Proposals. These features are crucial to enable media access for people who are at a disadvantage due to a disability, their age, language barriers, or other factors. New technologies make it possible to create an inclusive broadcast landscape, and countries that are interested in implementing such features could benefit from Brazil's testing of mature systems that truly deliver on such points. In many countries, several of them in Europe, there is a significant demand for accessible, interactive, and immersive sound on TV. Broadcasters and content creators are looking to provide content that meets this demand. In many cases they are slowed down by infrastructure that does not support such formats or by unclear responsibilities and schedules of upgrade
projects. One example of this is Spain, where the entertainment industry itself has taken the lead in updating the TV infrastructure: in 2021, the broadcast community and major stakeholders created the UHD Spain forum in order to achieve an orchestrated adoption of UHD technology. The group’s goal is to create the most useful pool of knowledge about UHD and all relevant technologies, as well as a network to share and advance this knowledge. In 2022, the group is focusing on audio technologies, and the learnings from Brazil’s TV 3.0 project can help it advance these plans in an efficient manner. All in all, the Brazilian TV 3.0 project can serve other countries that plan to make changes to their infrastructure as a blueprint for a transparent process and implementation that involves all relevant stakeholders and makes decisions based on technological maturity and suitability.
WELT Switches to a New Software-Defined Vision with Vizrt
The free-to-air news channel runs 24/7 and provides regular news updates to other German channels. The WELT team, led by CTO Thorsten Prohm, Technical Lead Ralf Hünefeld and Senior Director Philipp Kern, took the move as an opportunity to completely reinvent its infrastructure and, at the same time, make its studio presentations graphically stunning.
Running a broadcast news channel is a huge undertaking. It needs many components all working in lockstep, responding to changes in the news programming cycle, able to bring in media from diverse sources and react fast to breaking stories, all the while keeping an audience with a lot of other distractions interested.

WELT’s requirements: creativity enhancement with simplified technology

“WELT had several ideas, both creative and technical,” says Philipp Kern. “The creative ideas included a hybrid augmented reality (AR) studio and a physical studio with different creative layers, such as moving LED screens with content that can be in different positions and configurations around the studio, a cycloramic LED backdrop and AR graphics.”

WELT wanted a unified workflow that would simplify and control the many inputs and outputs for production at the touch of a button. It needed redundant systems for operational security but didn’t want the rack space crowded out with extra hardware. In short, it wanted to have more flexibility while depending on fewer components. “We wanted to give the day-to-day production the possibility to take all the media elements and put them together in the form they wanted, and give the journalists the ability to put their ideas directly on screen,” adds Kern.
Automation

WELT wished to have a studio that offered maximum automation through robotic cameras and IP-controlled lighting systems. The moving LED panels, which could be arranged to form different backdrops, would need to be filled with content as well as maneuver according to the needs of production. WELT also wanted to be able to switch over to a completely virtual studio newscast in a matter of seconds, or transition seamlessly when adding live sources as contributors to a current broadcast and redesign the screen on the fly. All this would demand precise control and a lot of material to be generated: not just the normal news show media such as video feeds and graphic overlays, but also the content for the screens and backdrop, the virtual studio and the AR
graphics. WELT wanted more scope for creativity but also required less complex routes to deploy and update its graphics output.
The challenges

“We had the problem of constantly moving visual data, such as that originating from the creative department using Cinema4D, moving that into Vizrt or another graphics system, and then moving that into Vizrt Mosart, which then fed the video switcher. This meant the original creative assets were being
translated a second or third time,” says Kern. It was plain to see that modern storytelling might end up involving a lot of extra complexity, and a lot of hardware.
Solutions

However, the WELT team were thinking outside the box. Rather than have fixed hardware components routed through copper SDI to a hardware mixer in the gallery, they saw the potential in software, namely an SMPTE ST 2110 NMOS-based networked solution to build on the
automation already in place. “We had the idea to input the graphics directly into software and control all the broadcast media in one real-time software environment,” says Kern. Vizrt offers an advanced range of software solutions for media management, automation and graphics that the broadcast engineers at WELT were already very familiar with. A long-established relationship has seen Vizrt develop specialised solutions for WELT over the past twenty years. Thanks to tools such as Viz Mosart, daily production was completely automated and each pre-planned news show required only two to three people in the gallery.

“We wanted maximum flexibility for modern storytelling.”
Customised unified workflows

“They wanted to have more unification,” adds Lang. “So, we had to create unified feeds for the LED panels and monitors inside the studios, to drive the virtual set and define the look and feel of the programmes, as well as deliver the graphics that are needed to tell their stories. We wanted to streamline their operation to use very few components, so they were not trying to orchestrate a huge amount of technology to get to a certain look and feel.”

“Their idea was to integrate a lot of the workflows into one centralised solution and replace the hardware-based studio mixer with a software-based mixer,” says Adrian Fiedrich, R&D Project Manager at Vizrt. “We saw how we could modify and extend components to meet the new ST 2110-based workflow, and of course add a lot of new features to fit the customer’s needs. It’s a very tailored solution for WELT. Certain software components had to be introduced because those devices didn’t exist before, and other products were updated, but everything, all the various components, all the routing, is controlled by Vizrt products.”
In a traditional live studio environment, a vision mixer panel allows the operator to bring in and cue up media from video servers, graphics servers, clip servers and live camera sources, all routed via copper cables. “Viz Engine and our other solutions, such as Pilot Edge for journalist workflows and Viz One for asset management, can do the work of standard default studio workflows, combining them within one or two software components instead of five or more,” says Fiedrich. “Our engine can handle everything internally, so we refer to it as a ‘Mixerless’ or ‘Dynamic Input Handling’ workflow.”

Viz Engines driving event-based logic

Everything runs on event-based logic, with Viz Engine providing the power. “Each gallery has at least four Viz Engines independently switching network streams, all the time, whenever they need,” says Fiedrich. “A new plug-in architecture was developed for the engine which takes care of requesting the source from the network; the plug-ins talk to the system orchestrator, which talks to the routers, and it gets proper feedback as soon as the sources are switched.” “It allows
WELT to have as many as 100 sources across the whole building ready for broadcast within a few seconds; operators just drag them into a template-based rundown on the UI,” he adds. “The plugins take care of creating the commands and prepare the source for routing. The operators get notified as soon as the source is ready, to avoid any content issues.” The whole system has one-to-one redundancy. For example, it contains two vision mixers running in parallel, always producing the same output, so at any time WELT can switch over to the backup and keep producing the same show with the same functionality. Plugins listen to hardware inputs and will switch things over if something goes wrong. The system doesn’t rely on hardware with a fixed number of inputs or outputs. Instead, it can have many ‘instances’ running on the same machine to provide different kinds of output, such as a different aspect ratio. It can thus be easily scaled up not only in size but also in functionality.
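The one-to-one redundancy described above — two mixer instances rendering identical outputs in parallel, with an automatic switch to the backup on failure — can be sketched as a simple health-checked selector. This is purely an illustrative sketch, not Vizrt’s actual implementation; the names (`MixerInstance`, `heartbeat_ok`, `RedundantMixerPair`) are hypothetical.

```python
import time

class MixerInstance:
    """Hypothetical stand-in for one software vision-mixer instance."""
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.last_heartbeat = time.monotonic()

    def heartbeat_ok(self, timeout=0.5):
        # Alive if it flagged healthy and signalled within the timeout window.
        return self.healthy and (time.monotonic() - self.last_heartbeat) < timeout

class RedundantMixerPair:
    """Both instances run in parallel and produce the same output;
    the selector only decides whose output goes to air."""
    def __init__(self, main, backup):
        self.main, self.backup = main, backup

    def on_air(self):
        # Fall back to the backup the moment the main instance looks dead.
        return self.main if self.main.heartbeat_ok() else self.backup

main = MixerInstance("mixer-a")
backup = MixerInstance("mixer-b")
pair = RedundantMixerPair(main, backup)

print(pair.on_air().name)  # mixer-a while the main instance is healthy
main.healthy = False       # simulate a fault on the main instance
print(pair.on_air().name)  # mixer-b: the show keeps producing the same output
```

Because both instances render continuously, the switch is only a routing decision, which is why (as described above) the changeover does not interrupt the show.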
One Viz Engine supports WELT’s seven cameras because of the way the mixer-less workflow can switch them. “Camera control is something we’ve been doing for years,” says Fiedrich. “This system just asks them to select the cameras they want to have, and the camera positions are handled by the robotic interface.”

“I believe we are the first in the world to build a facility of this size using software only.”
Software-driven automation

Due to the software-based environment, previously discrete devices can communicate with each other far more. The vision mixer can talk to the camera tracking system as well as drive functions like tallies. It’s a similar set-up for the IP-controlled lights: the system just needs to know which lights to use and the brightness levels, and everything is automated. The massive cyclorama video wall is also driven by the Viz Engine. It can be controlled directly via Viz
Multiplay or automated by Viz Mosart. “You can have the whole programme rundown completely automated or have an operator hitting ‘next’. But there is no hardware vision mixer, and no panel and no copper-based routing, because the switching at the backend is all now completely dynamic,” says Fiedrich. “As soon as you accept the rundown, then all you need to do is press play,” he continues. “You no longer need the person who controls the audio mixer because this is done by Mosart; there is no video mixer because the engine itself knows what comes next and mixes it. It handles all the content,
such as combining graphics and clips and compositing different layers of graphics on top. It all just switches.”

“WELT was the first to think about these particular dynamic workflows at this scale,” says Fiedrich. “So we had to be the first to have to find the solutions. There was already a huge level of trust in us to achieve the things they wanted to do. It was good to have such a strong partner, because what we are doing at WELT hasn’t been done anywhere before.”

Tried and tested

The full development period was one and a half years, much of which took place during the lockdown and social distancing restrictions Germany was operating under due to COVID-19. However, work advanced greatly from October 2020 to April 2021, with Vizrt and WELT teams working together with solution integrator Qvest.

The journalists have more direct responsibility for the show and it gives them more possibilities when putting the show elements together. “We are now working with colleagues to integrate the virtual environment in Studio 2 with a real environment.”

“But the best thing we’ve heard is some of the users saying there’s no difference between a real video switcher and the Vizrt software.” “This 24/7 solution has run since April without any problems at all. It’s incredible.”

An eco-friendly practice

Sustainability concerns are also satisfied. “If you look into the WELT server room where the hardware now sits, there are just standard HP machines pulling a few hundred watts, rather than the power required for hardware video and audio servers, clip servers and huge mixer panels,” Fiedrich adds. “They only need a rack, instead of a whole room.”

Although the system very much reflects what WELT originally requested, and so is tailored to that customer, the Vizrt components have been developed to be easy to adapt to the needs of another customer. “We see this project as the prototype for this way of working,” says Fiedrich. “There are already other customers in Germany who are starting to create some test setups on a smaller scale, and we are looking at how the system can work for them.”
“How to Mojo” by Ivo Burum (Part 2)

Which tools do I need and how do I develop mojo stories?
Ivo Burum is a journalist and award-winning television executive producer – and a mobile journalism pioneer. His company Burum Media provides mojo training in remote communities and to many of the world’s largest media groups. Dr Burum has written five books on mobile journalism. His latest, The Mojo Handbook: Theory to Praxis, published with Focal/Routledge, has been chosen as one of 12 must-read books for investigative journalists in 2021. Ivo lectures in mobile and digital storytelling and in television production and is discipline convener of Media Industries at Latrobe University, Australia. Twitter: @citizenmojo
Ivo Burum using kit with Sennheiser MKE 400, Panel Mini light, Beastgrip Pro (with Kenko lens) and Manfrotto PIXI tripod (Photo credit: Ivo Burum).
Once you have decided whether your phone is mojo-friendly, you might consider the following mojo tools.

If you are close enough to your subject, with little or no background noise, the microphone on your smartphone, or the one on your headset, should get you out of trouble. However, using external microphones can dramatically enhance audio quality.

Microphones

For this article we looked at a number of Sennheiser microphones, some that I’ve been using for mojo work for almost a decade, and a couple of relatively recent additions.

(1) SHOTGUN, or directional, microphones with super-cardioid patterns that pick up the sound in front of the microphone capsule are essential for all handheld, close-quarter, run-and-gun mojo filming. I’ve been using an industry standard, the Sennheiser 416, for about 25 years. However, it’s a bit big for mojo work, and a better option for on-camera run-and-gun work might be the Sennheiser MKE 400 shotgun microphone ($199.95USD). A metal-bodied mic with a super-cardioid pattern, it has attenuation and a low-cut filter and runs for an incredible 100 hours on two AAA batteries. I have been using the earlier version for about 6 years.

Previous Sennheiser MKE 400 and the new version.

The Sennheiser MKE 200 ($99.95USD) is a cheaper, smaller version of the 400: no battery required, the same in-built mesh wind protector, internal shock mount, and TRS-to-TRS and TRS-to-TRRS cables. A stubby metal-bodied microphone, its small size makes it ideal to use with smartphones and small-bodied DSLR cameras. Its sturdy design and the fact that it doesn’t use a battery make it fail-safe and ideal for schools and student kits. Both microphones come with a dead cat (furry windshield) for use when recording audio in extreme wind in outdoor locations.

Sennheiser XS Lav USB-C Mobile Kit.

Ivo’s tip
Choose a lapel mic with a longish cable, or buy a short extension, so that the cable can be hidden out of shot.
(2) LAVALIER microphones with an omni-directional pattern are excellent for sit-down interviews. They are clipped to an interviewee’s lapel 6-8 inches from their mouth, or can be placed on a desk between two interviewees. Sennheiser’s XS Lav Mobile ($49.95USD) is a lapel microphone that you can wear on your shirt. Intended for mojo work, it comes with a mini jack that fits most smartphones, or in a USB-C configuration ($59.95USD). It also comes in a combo kit that includes a phone clamp with multiple fixing screws and a Manfrotto PIXI tripod, all for under $100USD. I like the XS Lav because its cable is longer than most lapel mics’ and can more easily be hidden.

(3) WIRELESS microphones are used to record audio where the source is some distance from the smartphone,
like on talent moving in a demonstration or a walkand-talk interview. My advice is to buy the best wireless or radio microphones you can because you’ll keep them for some time. Here are a couple that Sennheiser produce that I use: Sennheiser AVX Combo Set ($1049USD) is a favourite because of its functionality and its compact size.
Norwegian journalist recording a stand-up using an iPhone and Sennheiser AVX Combo Set wireless system (Photo credit: Ivo Burum).
This combo kit includes a handheld microphone as well as a bodypack and lapel microphone. The reduced size makes the AVX wireless system ideal for mojo work; range is over 100 metres. I use this set a lot and recommend it highly for mojo work. Here’s an article and video on the AVX: http://smartmojo.com/2015/09/17/sennheiser-avx-combo-firstlook/
Sennheiser AVX Combo Set.
The Sennheiser XSW-D ENG Set ($470USD) is a small digital kit in the 2.4GHz band that includes plug-on transmitters for any dynamic mic and the included ME2-II omnidirectional lapel condenser mic. Its rechargeable batteries give five hours of run time. I have tested it effectively to beyond the 75m range that Sennheiser recommends. Recently, Sennheiser has added the XSW-D Portable Lav Mobile Kit ($329.95USD) to its portfolio, which includes a mobile-optimised connection cable.

Sennheiser XSW-D Portable Lav Mobile Kit with optimised 3.5mm TRRS cable.

Ivo’s tip
Sit your subject with their back to the wind to shield the lapel microphone. Or, if using a shotgun-type microphone, put the mobile journalist’s back to the wind when recording an interview to shield the microphone.

Camera Cradles and Lenses
Cradles enable a steadier shot when working handheld and provide attachment points for lenses, microphones, lights, and tripods. Attaching a wide-angle lens to the cradle adds stability and enables mojos to get closer to their subject while maintaining a medium close-up (MCU) interview frame. Being close to the sound source improves audio quality. New smartphones ship with effective stability optics and electronics and a variety of onboard lenses, so cradles are not always needed. If you require attachment points and don’t need additional lenses, then a cheaper clamp, like those in the Sennheiser Mobile Kits, might be for you.
Ivo’s tip
When cradles are loaded with a smartphone, light and microphone, they make the mojo look more professional. In conflict regions, though, all that gear can cry out ‘target’.
There are many cradles, or rigs, on the market. I use the Beastgrip Pro bundle ($248USD), which includes their Kenko Pro series 0.75 wide-angle lens. I like the additional weight, which I’m used to: it gives my smartphone a heavier feel, making it easier to control when shooting handheld. The Beastgrip Pro is a strong contender because it takes a variety of phones.

Tripods

I use a short Beastgrip BT 50 tripod ($60USD) that can be attached to a cradle and used as a handle to help stabilize shots. I also use a Manfrotto PIXI tripod ($35USD) and love it because of the red articulation button that makes it quicker to adjust than the BT 50. If you need extra height for a stand-up, stick it on a car roof, a wall, or a filing cabinet. I also use a ProMaster Professional XC525C Carbon tripod ($390USD) because it’s light, folds relatively small and has a removable monopod leg.

Lights

On-camera lights come at various price points, from $30USD to $250USD. I look for the ability to choose intensity settings because interviewees can find bright light distracting. I use a Manfrotto Lumimuse 8 ($140USD). I also use a Lume Cube Panel Mini ($59.95USD). It’s the thickness of a stack of 10 credit cards, has adjustable colour temperature from 3200K to 5600K and adjustable brightness from 1% to 100%, so you can avoid blinding your interviewee.

Gimbals

There are a number of gimbals on the market that enable the operator to walk
or run smoothly with the smartphone. I use the DJI Osmo Mobile 3 ($99USD), because it’s cheap (most are a similar price), folds up, and is easy to use for occasional operators. Find one you like and buy that.
Mojo Kit

You can make up your mojo kits depending on your needs and budget. Here are examples of a basic, intermediate and advanced mojo kit. You can of course mix and match the level of equipment and brands.

Osmo Mobile 3 gimbal (Photo credit: Ivo Burum).

Basic Kit With iPhone XS, MKE 200 Mobile Kit (includes Manfrotto PIXI and Smartphone Clamp), XS Lav Mobile clip-on mic, and TRRS to Lightning adaptor (Photo credit: Ivo Burum).

Apps

The app industry began in 2008 after the launch of the iPhone. Today, there are more than six million apps for Android, iOS, and other platforms. I use the native camera app that ships with the iPhone, except when I need
a higher level of control, in which case I use the following:
• Filmic Pro ($19.99USD) for iOS and Android is the most used advanced video camera app, with separate white balancing, light metering and focus points, variable frame rates and bit rates, and real-time audio monitoring.
• Camera+ ($2.99USD) is arguably the best stills camera app on the market, with high-level image control, a stabilizer, separate exposure and focus settings, white balance, and control over brightness, colour and sharpening.

Intermediate Kit With iPhone XS, XS Wireless Digital Portable Lav Mobile Kit (includes Manfrotto PIXI and Smartphone Clamp), MKE 200 vlogging microphone, XS Lav Mobile clip-on mic, Airstash, Lume Cube Panel Mini, and TRRS to Lightning adaptor (Photo credit: Ivo Burum).

Ivo’s tip
Learn to shoot with the camera app that ships with your phone and concentrate on exposure, framing, and recording shots that tell a story. When you run out of functionality, look for a more powerful app.

Advanced Kit With iPhone 12 Pro Max, MKE 400 Mobile Kit (includes Manfrotto PIXI and Smartphone Clamp), AVX wireless microphone system, XS Lav Mobile clip-on mic, Manfrotto Lumimuse Light, Airstash, TRS and TRRS to Lightning adaptors (Photo credit: Ivo Burum).
Sound Apps

I record audio using my camera app. If you need 96kbps audio, Lossless (ALAC/CAF) or Wave formats, you might try one of these two excellent audio apps:
• Ferrite (iOS) is probably the best audio app I have used: it is very user-friendly, and the free version gives you so many features that you may never need the paid one.
• RecForge II is an excellent full-featured Android audio recording app that I find a little complicated, as do my students.
Edit Apps

The following are the most functional edit apps on the market for advanced mojo work:
• iMovie ships with the iPhone and was one of the first apps to offer multiple video tracks. It provides all the features you need to edit professional stories quickly but lacks a powerful titling tool and keyframe audio ducking. It includes green screen (chroma). A great app to learn on (free).
• LumaFusion is the relatively new kid on the iOS block and probably the most powerful of the edit apps. It has six video and six audio tracks (plus six embedded), slip-trim and anchored edit features, colour correction, layered titles and keyframe audio, and is optioned like a professional edit suite. It’s a fully featured edit app that trumps many desktop editors ($29.99USD).
• Kinemaster was the first professional smartphone edit app loaded with features that works across iOS and Android platforms. It includes chroma key, an easy-to-use titling tool, blur, audio manipulation and lots more. Kinemaster is free; a subscription of $39.99USD adds features and removes the watermark.
• VN is another multi-track cross-platform edit app that enables good control over vision and audio, with four video and multiple audio tracks, an easy-to-use titles tool, keyframes, animation and much more, all for free and without a watermark. VN also has a PC version.
• Power Director is another fully featured cross-platform edit app that enables multiple video and audio tracks, powerful effects, on-board grading facilities and playout at 4K. Free to install, then a $56.99USD per year subscription. Has a desktop version.

Key features to look for in edit apps

Multitrack Video: An app with at least two video tracks.
Multitrack Audio: Look for at least three audio tracks, plus the in-video audio.
Audio Mixing: All five apps include an audio mixing tool. The least effective is iMovie.
Audio Ducking: Involves selecting audio keyframes to lift or lower specific audio levels at key points.

Audio Ducking (Photo credit: Ivo Burum).

Transitions: An ability to make a variety of transitions between video clips and audio tracks.
Chroma Key: A number of the apps described above offer chroma key facilities. Below is an example of Chroma Key (Green Screen) in Kinemaster. The composite was made by shooting the journalist against a green screen, then shooting a Wide Shot (WS) of the city before keying the city into the green screen. Having worked in TV for years, these facilities available on a smartphone are game changing.

Kinemaster Green Screen (Photo credit: Ivo Burum).

Titles: Kinemaster and LumaFusion include powerful title tools with a variety of fonts. iMovie titles are rudimentary, and when using iMovie I generally import the finished video into Kinemaster, LumaFusion or VONT to do a titles pass.
Subtitles: I often use the Kinemaster and LumaFusion titles tools to create subtitles. However, I also use two discrete subtitling apps:
- DIY Subtitles is a manual app that creates titles relatively easily.
- Mixcaptions creates subtitles automatically, and it works well enough to save time. It offers several font styles and three sizes. Once the app has created the subtitles, you can alter those that are incorrect.
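The audio ducking described above — keyframes that lift or lower specific audio levels at key points — boils down to interpolating a gain envelope between keyframes and applying it to a track. Here is a minimal sketch of that idea; it is not taken from any of the apps above, and the keyframe times and gains are invented for illustration:

```python
def gain_at(keyframes, t):
    """Linearly interpolate gain between (time, gain) keyframes.
    Before the first and after the last keyframe, the gain is held."""
    keyframes = sorted(keyframes)
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, g0), (t1, g1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            return g0 + (g1 - g0) * (t - t0) / (t1 - t0)

# Duck background music under a voice-over between roughly 2s and 6s:
music_keys = [(0.0, 1.0), (2.0, 1.0), (2.5, 0.2), (5.5, 0.2), (6.0, 1.0)]
for t in (1.0, 4.0, 7.0):
    print(f"t={t}s gain={gain_at(music_keys, t):.2f}")
```

Multiplying each music sample by the gain at its timestamp produces the dip-under-the-voice effect that editors set with ducking keyframes on the timeline.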
Video Grading: I use Video Grade ($6USD), which has 14 functions for fixing underexposed or incorrectly balanced media. Many edit apps have on-board grading functionality.
Intermediate graphics: On iOS, I use Vont for video and Photo for stills; for Android, I use Kinemaster’s on-board graphics. For advanced motion graphics, the Alight Motion app takes time to learn but is relatively intuitive and incredibly powerful. It includes grading features. It is free with a watermark and basic features; a subscription gives added features and no watermark.
Ivo’s tip Do not delete media from your smart device until you have rendered your edit project, your timeline, into a video!
Battery Pack

I use the Powerstation AC at 22,000mAh ($199USD), which gives me between 24 and 100 extra hours of power.

Transfer Devices

Smartphones get clogged with media and we often delete valuable content to free up space. Don’t delete your media — you might need it to update your story, or to sell. I use transfer devices to shift and store media and clean out my smartphone. A Lightning version for iOS devices is iXpand by Sandisk; Airstash by Maxell is a WiFi version that works across platforms.
Developing Mojo Stories

In 2007, the advent of the iPhone was a game changer. It became apparent that mobile technology would redefine the way we produced TV, even the way we did investigative video journalism. It was a lightbulb moment.
By 2010, we were completing the whole production process, even the all-important edit, on the phone. This gave mojo stories their own diverse and unique voice. Nearly all types of video stories can be shot and edited using a smartphone. Journalists might consider the following development and workflow points when working in the mobile story space.

The Story Focus

Story is everything; without it you’ll lose your audience, and without an audience you may as well do something less demanding. Philip Bromwell, a video journalist from Ireland’s RTÉ who works in the mobile space, says: “My job requires me to tell stories, and we have to remember the audience doesn’t really care how content is created, but they will engage with a good story.” Having a story focus enables the mojo to choose the right gear. So, how do you find the story? First, you need to
The Author working as a mojo doing a piece to camera (PTC) in Yangon.
follow your passion. Then decide if you have an audience for your idea. Next, you need to know who the good guys and bad guys are, understand their journey and have access to all the players. Before you develop your idea, choose a topic, know your market and ask the following:
• Who’s the audience? Why is the story being told, who’s watching, what is the demographic, and what’s the political and cultural imperative?
• What’s the angle? What’s the focus and how
will this be best achieved creatively and editorially?
• What’s the style? Is it a current affairs piece, a series of short stories, or a long investigative exposé; and is it narration-driven or interviewee-driven?
• What’s the structure? What are the beginning, middle and end; who will speak when; and what elements will be required to support the story structure?
As the concept gets more of a story focus, I complete a more specific SCRAP checklist:
• Story: what is it and why is it being told?
• Characters: who is best to tell it and why?
• Resolution: what’s the structure and why?
• Actuality: what will be filmed, when and why?
• Production: where is it happening, what are the logistics and how will I produce it?
SCRAP helps answer journalism’s five Ws — who, what, when, where, why — and how. To better understand the relationships between your story and its audience, consider the following:
• Currency of the story. • Significance of the story will be impacted by its Currency. • Proximity of the story will impact Significance. • Prominence of story talent can trump Proximity. • Human interest can increase with Prominence.
A Character Focus

Mojo news stories may be breaking and unplanned, so it’s even more important to quickly determine a story focus and the best characters/interviewees to help tell the story. Knowing who to film is important when planning a mojo video production. Unlike print, you can’t just call everyone on the phone — mojos need pictures, so managing time and expectations is critical. For example, who do I interview at the scene of an accident and, importantly, who do I interview first? Can I ask the ambulance driver for a comment while he
is inserting an IV line into an injured person’s arm? If not, how do I cover this as B roll, so that I can ask my questions when the paramedic is free? What’s most relevant in a breaking story at a Bosnian war grave? More bodies, longer interviews with a witness, or the man with photographs of his missing family, who he believes are among the dead? Do I need it all? If so, how will I record it all and how will I include it all in a short video story? The best way to decide who’s relevant is to: • Map your potential interviewees against your structural plan (see below) and your audience expectation. Once you firm up interviewees, you might need to rejig your structure and how you will use your talent: • Will the story begin with actuality from the interviewee or B roll with narration? • When in the story will we hear from the main interviewee?
Ivo’s tip If it’s a choice between a smart interviewee and an engaging one, choose the latter, if you can’t have both. You can always add the factual smarts using narration and B roll.
• Who will I use to tell the climax and how will I create and build to this? • Who will I use to close the story? • Do I have clearances? • Do I have B roll to introduce my interviewees and cover their edited interview moments and my narration? You can generally work the above out once you begin mapping your structure.
Structural Focus

Next, I create a simple five-point structural road
Structural matrix by Ivo Burum.
map of how my story might go. This guide does three important things: (a) it articulates the various plot points, (b) it acts as a story and character checklist, and (c) it is the beginning of my edit plan. Because real research happens on the ground, and because the story changes, you need to understand your options and be prepared to shift your views on location. That’s where a simple structural plan helps. Rather than being prescriptive, a plan keeps the story focused and the mojo open to new ideas. A plan is also a checklist to help decide if everything is covered before wrapping a location.
The Story Character Matrix

Story is built around an EVENT that impacts our CHARACTER. Through a CHARACTER’S eyes we witness action, strife, emotion, and visual and dramatic development, corralled in the placeholder we call an EVENT. Video works best when it’s centred around EVENTS (which provide currency), told through the eyes of a CHARACTER (which makes it real, personal and emotive), who RISKS losing everything (the stakes and the drama). One of the mobile journalist’s jobs is to design a STRUCTURE around
EVENTS that tell a STORY and give rise to certain EMOTIONS. In essence, what we as mojos do in telling STORY is capture a CHARACTER’S emotive JOURNEY. A story event creates change in a character that’s expressed in terms of value (impact) and often achieved through an attempt to overcome an obstacle (a conflict). As mobile journalists we need to look for that CHANGE and, in particular, the RISK experienced by our CHARACTER(s). Ask: what does our character stand to lose if s/he does not get what they want? Go mojo…
Telos Infinity® IP Intercom Helps Progressive American Flat Track Save Time and Money

Progressive American Flat Track (AFT) is the world's fastest-growing motorsport. AFT attracts more than 300,000 viewers per event, attending live races or watching via telecasts or live streams.
The Challenge: Complicated Setup and Difficult Integration

Live event comms are critical for sophisticated live television productions. AFT's comms were consistently hard to set up with an overly complicated software interface. Additionally, some device settings were only available on the hardware itself, and there were multiple areas in the signal flow to set input and output gain, making for difficult troubleshooting. Integration with any of AFT's current equipment also presented challenges. With AFT utilizing a rack full of Blackmagic Design ATEM hardware, they needed an intercom system that could handle integration with
enough play levels to make comms crystal clear in the control room and in the field. In 2017, AFT had outgrown their comms setup and purchased a digital intercom system; however, they experienced even more problems as settings in the software weren’t available on actual devices; they were next to impossible to set up on the current network infrastructure; and they didn’t integrate with Blackmagic ATEM Talkback Converters.
The Solution: Telos Infinity IP Intercom Platform for IP-based Live Event Comms

At NAB 2019, AFT visited the Telos Alliance booth and experienced the Telos Infinity IP Intercom. Working together, AFT gave the Telos Infinity a test drive at a one-off live event. Impressed with Telos Infinity's easy set-up and Telos Alliance's hands-on support, AFT now uses six Telos Infinity master panels and two Telos Alliance xNodes
to tie in camera comms with analog ins and outs for two-way radio intercoms. Now, the announcers' booth, pit talent and race control have full communication from the field back to the production control room, ensuring a smooth event broadcast. AFT's broadcast liaison to race control is always located in the grandstand tower, sometimes more than a half-mile away from the control room in a production truck. Because Telos Infinity is IP-based, AFT can piggy-back off the camera's fiber network back to the SFP ports of the dedicated intercom network switch. All comms via UHF two-way radios, fiber party-lines, and announcer consoles can now be tied into the Telos network. The Race Control Kit now has two fiber media converters: one an NDI-to-HDMI decoder providing a multiview video feed for the Race Director, and another strictly for the broadcast liaison's Infinity Beltpack. When he keys up his Beltpack, it goes into the xNode out and into a UHF base station transmitter so
anyone in the paddock, TV control room, or on track can communicate.

AFT production truck control room.

The Benefit: Telos Infinity for Live Event Comms Saves Time and Money

In short order, AFT began pushing the capability of Infinity's Dashboard and Software and created multiple partylines and IFB mixes, and really began to dial in the settings and features of the Infinity system. The Yamaha Atlanta Super TT at Atlanta Motor Speedway race was one of the first quick-turnaround live-to-tape events where AFT was able to save a significant amount of time in post-production thanks to the capability of Infinity's xNode and Link Licenses, which gave them the ability to communicate with the off-premises Control Room and Announcer booth. By using Telos Infinity IP Intercom, AFT can go live to tape with less than a 24-hour delay with interconnectivity on the LAN.

Since they first began with Telos Infinity in 2019, AFT's needs have changed, partly due to the pandemic. During COVID, for example, AFT was able to VPN into the production truck with the ability to control everything but the intercom. Luckily, Telos Alliance was at the ready with a new solution for that in the Telos Infinity VIP Virtual Intercom Platform. AFT has plans to implement the new Telos Infinity VIP Virtual Intercom panels next year. That way the AFT team can not only have a virtual key panel in its competition trailers, but also, if a vital team member is not able to be at track, they can still communicate with the trackside team and help from home without requiring travel.

Zach Prescott, Production Manager for AFT, states: "In terms of service, training, and support I have received from Telos Alliance, I have saved countless hours of troubleshooting, configuring, and setting up the Infinity system over traditional live event comms. When we are in the most mission-critical times of our live broadcast, the Infinity system simply works, and it's now easier than ever to make changes on the fly. The time saved with this system is constantly putting money back in our pockets, and I could not have made a better decision than to go with the Telos Infinity IP Intercom system for our live event comms."
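As a rough mental model of the IP signal flow described in this case study, the comms chain can be sketched as a simple directed graph. This is purely illustrative: it is not a Telos configuration file or API, the node names merely echo the devices named in the article, and the routing is a simplification of what an actual Infinity deployment would do.

```python
# Hypothetical sketch of AFT's comms paths as a directed graph.
# Node names echo the article's devices; edges are illustrative only.
SIGNAL_FLOW = {
    "infinity_beltpack": ["xnode_out"],            # liaison keys up his beltpack
    "xnode_out": ["uhf_base_station"],             # analog out feeds the UHF transmitter
    "uhf_base_station": ["uhf_two_way_radios"],    # reaches paddock and track
    "camera_fiber_network": ["intercom_switch_sfp"],   # piggy-back path
    "intercom_switch_sfp": ["production_control_room"],
    "ndi_feed": ["ndi_to_hdmi_decoder"],           # multiview for the Race Director
}

def path_exists(graph: dict, start: str, goal: str, seen=None) -> bool:
    """Depth-first check that a signal can reach `goal` from `start`."""
    seen = seen or set()
    if start == goal:
        return True
    seen.add(start)
    return any(path_exists(graph, nxt, goal, seen)
               for nxt in graph.get(start, []) if nxt not in seen)
```

The appeal of an IP-based intercom, as the article describes, is exactly this graph-like flexibility: new sources (UHF radios, fiber party-lines, announcer consoles) become edges into one network rather than separate point-to-point wiring.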