TM Broadcast International #91, March 2021


Photography Credit Three UK, Pretty Green, photographer Matt Alexander



EDITORIAL We usually refer to our field of activity as 'the broadcast world'. But what if we started calling it a 'broadcast universe'? An infinite space, in continuous expansion, reshaping its boundaries with unprecedented developments, with a halo that keeps us inevitably captivated. It is difficult to reflect in a few pages the endless diversity that such a vast landscape implies; however, as on so many other occasions, we have designed a magazine that clearly captures the constellations that define the present and future of the sector. This issue heads towards highly interesting areas. We embark on virtual production with two representatives who are rolling out amazing applications in our field: Dimension Studio and Orca Studios. We tackle remote production flows that were unthinkable a decade ago with Extreme E,

a challenging motorsport competition that takes place in five remote locations. We analyze the latest technological possibilities of the OTT platform of ITV, one of the UK's leading broadcasters. We speak with Dan Stoloff, director of photography for The Boys, a series showing that the aesthetic barrier we used to notice between television and film no longer exists. As if all this were not enough, we continue to unravel the keys to AoIP, we address the latest trends in the cloud landscape with AWS, and we put an extremely interesting piece of broadcast equipment to the test in our laboratory. Next month we will be heading to new galaxies within our particular broadcast universe; we are sure they will be as exciting as the ones we are bringing you today. After all, this journey is as exciting as it is timeless. Are you coming with us?

Editor in chief Javier de Martín

Creative Direction Mercedes González

editor@tmbroadcast.com

design@tmbroadcast.com

Key account manager Susana Sampedro

Administration Laura de Diego

ssa@tmbroadcast.com

administration@tmbroadcast.com

Managing Editor Sergio Julián press@tmbroadcast.com

TM Broadcast International #91 March 2021

TM Broadcast International is a magazine published by Daró Media Group SL Centro Empresarial Tartessos Calle Pollensa 2, oficina 14 28290 Las Rozas (Madrid), Spain Phone +34 91 640 46 43 Published in Spain ISSN: 2659-5966


SUMMARY

6 News

22 Virtual Production: Dimension Studio and Orca Studios

44 TRT2 brings a new perspective to virtual studio production for broadcast

48 Extreme E: A challenging production from the harshest environments
The Extreme E approach is as attractive for viewers as it is complex for broadcast engineers: remote production, more than 70 cameras, FHD, AR, VR… We discovered how these groundbreaking races will take place thanks to Alejandro Agag, Founder and CEO; Ali Russell, Chief Marketing Officer; and Dave Adey, Head of Broadcast and Technology.

56 Dan Stoloff: Successfully overcoming the creative challenges of 'The Boys'

64 AoIP. Practical applications

70 Test, QC and monitoring: 4 future-proof solutions

72 ITV's Journey with Server Side Ad Insertion

76 Looking to the cloud to scale up production

80 Migration of the broadcast sector to the cloud

84 Take on All Remote Production Workflows with Matrox Monarch EDGE

88 Test Zone: Panasonic Lumix DC-BGH1


NEWS - PRODUCTS

Ross Video introduces Ultrix Acuity, a “hyper converged production platform”

The evolution of the innovative Ultrix routing and AV processing platform and the Acuity switcher is here. Presented as a "Hyper Converged Production Platform", the Ultrix Acuity converges the functionalities of these two solutions. Ultrix Acuity takes routing, audio mixing, MultiViewers, trays of frame syncs and audio embedders/de-embedders – all solutions that have traditionally filled multiple equipment racks – and compresses them all down to a single 5RU chassis. Ultrix Acuity has therefore been designed for environments where size really matters, such as OB vans and mobile units.

As with the current Ultrix solution, Ultrix Acuity is based on Ross Video's Software Defined Production philosophy. The Software Defined Production Engine – SDPE – from Ross removes the need for costly 'forklift' upgrades by providing base hardware that can grow via software licenses. The flexible architecture of Ultrix Acuity means that format and connectivity challenges "simply disappear". "Transition from HD to UHD with a simple software license. Mix SDI and IP sources in the same frame transparently. Use sophisticated tie-line management tools to incorporate the system into a larger distributed routing fabric", states Ross Video in its press release.

Commenting on the new platform, Nigel Spratling, VP of Production Switchers, points to an ongoing emphasis on workflow efficiency. "Our customers have been focused on increasing efficiencies, reducing costs and embracing new signal transport and management schemes. By bringing together an entire suite of necessary production solutions, we're providing an answer to this need and delivering a compelling new platform that works for 2021 and beyond."



NEWS - PRODUCTS

Riedel's Artist and Bolero Give Flexibility, Scalability and Reliability to NVP's Flagship Van

NVP S.p.A selects Riedel's Artist-1024 and Bolero Intercom for its latest 4K-HDR OB Van

Italian provider of advanced production facilities and services NVP S.p.A has chosen Riedel's Artist-1024 digital matrix intercom node and Bolero wireless intercom to provide seamless crew communications on board OB 7, the company's brand-new, 4K- and HDR-capable outside broadcasting (OB) van. "Riedel's Artist and Bolero were the obvious choices for OB 7, the flagship of our fleet. Highly requested by our broadcast clients, these solutions represent the state of the art in IP-based, high-performance wireless communications networks," states Ivan Pintabona, Chief Technical Officer, NVP. "With 1024 non-blocking ports in only two RUs, Artist-1024 is ideal for the space limitations of an OB van. And Bolero is a truly groundbreaking wireless intercom system offering standard-setting

flexibility, scalability, reliability, and range. Both of these outstanding products reflect Riedel’s industry-acclaimed excellence and innovation in signal transport and media networking technologies.” With its higher port densities and full SMPTE 2110-30/31 (AES67) compliance, Riedel’s Artist-1024 node is the high-end solution in Riedel’s Artist intercom ecosystem. Together with



the Artist-1024, OB 7 is equipped with five Bolero antennas and 20 intercom beltpacks. The new van has already provided 4K/HDR coverage of several high-profile sporting events in Europe, including Ferrari Challenge Europe and UCL football matches. “As one of Europe’s most prestigious production companies, NVP is the

latest in a range of prominent OB providers that have embraced the powerful combo of Artist-1024 and Bolero," comments Giuseppe Angilello, Riedel Sales Manager, Italy. "The system's industry-leading flexibility and scalability to support production of events of all types and sizes, and its decentralized

communications, are huge plusses for live broadcasting. And, because Artist and Bolero are so widely deployed among the European OB community, NVP has discovered that they can interconnect the intercom system easily with other infrastructures. That has been a crucial success factor.” 


NEWS - PRODUCTS

Rohde & Schwarz welcomes new member of the SpycerNode media storage family: R&S®SpycerNode SC

Rohde & Schwarz, a supplier of broadcast media technologies, has announced the expansion of the SpycerNode media storage series. Completely Ethernet based, the R&S®SpycerNode SC targets the storage requirements of both broadcast and post-production workflows. SpycerNode SC is designed to fulfil the requirements of modern post-production teams, where performance is a higher priority than redundancy, but reliability, compact design and affordability are key requirements. Based on the latest HPC technology, SpycerNode SC is capable of delivering a data rate of up to 22 GB/s within a single device. Rohde & Schwarz's production asset management software, SpycerPAM, can run natively on SpycerNode SC, eliminating the need for separate infrastructure or further interoperation. All SpycerNode products use 100 Gb Ethernet connectivity to enable both SAN and NAS operation within a single system.

"Creative teams need to collaborate without fear of technology limitations dictating awkward workflows," says Andreas Loges, Vice President Media Technologies at Rohde & Schwarz. "The powerful, unique performance and redundancy features of SpycerNode SC ensure that talented content creators can access assets fast and work with confidence."

SpycerNode SC can perform as a stand-alone unit or be configured within larger clusters of a SpycerNode system. Using Rohde & Schwarz's metro cluster technology, SpycerNode SC can be used in environments with high redundancy requirements, such as broadcast production and playout workflows.

SpycerNode SC can accommodate up to 60 drives, configurable in sets of 30, spinning or flash. Fully equipped, the net capacity in a single unit is 800 TB spinning or 430 TB flash.


NEWS - SUCCESS STORIES

Century Han Tang leverages AJA Technology to streamline 4K HDR Production

Based in the Chaoyang District of Beijing, China, Century Han Tang services the local broadcast community as a rental equipment provider and production company. On average, the production outfit creates and delivers more than 100 hours of content per month, a majority of which is programming for China Central Television (CCTV). Century Han Tang prefers to work in 4K HDR. Even though its workflow varies from project to project, every production harnesses a range of equipment from AJA Video Systems, including the FS-HDR real-time HDR/SDR frame sync and converter; an HDR Image Analyzer HDR waveform, histogram and vectorscope monitoring tool; KUMO 3232-12G and KUMO 6464 routers; and Ki Pro Ultra Plus digital video recorders.

AJA HDR Image Analyzer 12G

A Swiss Army knife for 4K HDR production, AJA FS-HDR gives Century Han Tang the flexibility required to handle a range of HDR standards and camera formats, and meet each production's unique needs. For instance, in color and gamut conversion for multiview monitoring, they often use FS-HDR to down-convert HLG HDR content to the BT.709 color space and adjust the brightness in the BT.709 environment. Century Han Tang also leans on the device to ensure all of its video footage matches the intended color style when capturing multi-cam footage from ARRI AMIRA cameras in ARRI Log C. Working with 16-bit files, the Century Han Tang team uses a color correction and non-linear editing program to create a 33-point 3D LUT, then imports it into the FS-HDR. With the program recorded in 1080p25 and four channels of processing accessible via a single FS-HDR, they can then feed all signals, including the PGM (clean and embedded) and multi-view, through the FS-HDR.

While working on set for a recent 4K HDR production, Century Han Tang also tapped FS-HDR to assist with monitoring needs. Unable to output a 4K HDR PGM for real-time monitoring on set given monitor budget limitations, the team used the FS-HDR to down-convert an UltraHD signal to 1080p and convert the color space from HLG BT.2020 to 2.2 Gamma BT.709. After conversion, they used the FS-HDR's proc amp controls to ensure the image's look appeared similar to that of HLG HDR's color style for on-set monitoring.
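For readers curious about what that 33-point cube actually does: a 3D LUT is simply a 33x33x33 lattice of output colours, and the converter interpolates between the nearest lattice points for every pixel. The Python sketch below is a generic illustration of that interpolation; it is not AJA's implementation, and the identity LUT stands in for the real HLG-to-BT.709 grade built in the grading suite.

import numpy as np

def apply_3d_lut(rgb, lut):
    """Map one RGB value in [0, 1] through an (N, N, N, 3) LUT, here N = 33."""
    n = lut.shape[0]
    pos = np.clip(rgb, 0.0, 1.0) * (n - 1)      # continuous position in the LUT grid
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = pos - lo                                 # fractional distance to the next lattice point

    # Trilinear interpolation across the 8 surrounding lattice points
    c00 = lut[lo[0], lo[1], lo[2]] * (1 - f[0]) + lut[hi[0], lo[1], lo[2]] * f[0]
    c10 = lut[lo[0], hi[1], lo[2]] * (1 - f[0]) + lut[hi[0], hi[1], lo[2]] * f[0]
    c01 = lut[lo[0], lo[1], hi[2]] * (1 - f[0]) + lut[hi[0], lo[1], hi[2]] * f[0]
    c11 = lut[lo[0], hi[1], hi[2]] * (1 - f[0]) + lut[hi[0], hi[1], hi[2]] * f[0]
    c0 = c00 * (1 - f[1]) + c10 * f[1]
    c1 = c01 * (1 - f[1]) + c11 * f[1]
    return c0 * (1 - f[2]) + c1 * f[2]

# Identity LUT as a stand-in for a real HLG-to-BT.709 conversion cube
grid = np.linspace(0.0, 1.0, 33)
identity_lut = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
print(apply_3d_lut(np.array([0.18, 0.40, 0.75]), identity_lut))   # ~[0.18 0.40 0.75]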


NEWS - SUCCESS STORIES

Planeta TV adopts 4K UHD with PlayBox Neo

Planeta TV, one of PlayBox Neo's longest established customers, is upgrading to 4K UHD @50p at its studio headquarters in Sofia. Neo-20 channel-in-a-box production, scheduling and playout will be used across all three of the broadcaster's channels, which transmit round the clock to viewers in Bulgaria and around the globe. "With 4K displays now in widespread consumer ownership, we are confident that a high proportion of our viewing audience will appreciate the very high image quality which UHD has to offer," says Radostin Petrov, Planeta TV's Technical Director. "We have upgraded all our SD channels to HD and invested in an additional AirBox Neo-20 for our new UHD channel. PlayBox Neo playout systems have performed excellently over the years and are very popular with our staff."

The AirBox Neo-20 GUI allows scheduling, graphics preparation, video editing and text manipulation to be performed "creatively and quickly". Channel branding and character generation are handled using TitleBox Neo-20. Its template-based structure also ensures "adherence to a consistent branding theme". Ingested programs, commercials, interstitials and images are managed using DataBox Neo, which provides access to a large library of indexed records. Planeta TV has also invested in PlayBox Neo's ListBox Neo-20 option, which allows the broadcaster to create and edit schedules days or weeks before transmission.

"Planeta TV has proved a very loyal customer over the years and is a showpiece example of efficiently managed server-based multichannel broadcasting," adds PlayBox Neo CEO Pavlin Rahnev. "UHD presents great opportunities for television broadcasters to deliver high-impact content to their viewers, not least in the area of musical entertainment where Planeta TV already enjoys a reputation for excellence."


NEWS - SUCCESS STORIES

Pebble playout technology selected to drive Asharq News

Asharq News is headquartered in Riyadh with central offices in the Dubai International Financial Centre in the UAE and Washington in the US, and key hubs and studios in Cairo and Abu Dhabi. It also has regional offices and correspondents across key Arab countries and in major global cities, in addition to access to Bloomberg's worldwide network. A winner of the prestigious BroadcastPro ME Innovative Project of the Year award, and recently nominated for the Excellence in Technical Installation and Supplier Innovation award in the esteemed Digital Studio ME Awards, it features many state-of-the-art broadcast components and is one of the first SMPTE 2110-compliant stations operating in the region.

As a pure IP deployment, the Asharq News project illustrates the power and flexibility of IP systems when it comes to establishing new broadcast channels in the 2110 era. At its heart is a complex transmission pipeline with a large number of inputs and outputs, featuring approximately 80 'flows' (video, audio, and ancillary data) per integrated channel device. 2110 inputs can come from any source – studio, camera, elsewhere – while outputs include the main program output, dedicated outputs for YouTube (with and without graphics), clean outputs for monitoring, and more. The system also features DVEs, native Pebble Integrated Channel graphics and 2110 Vizrt external graphics, NDI monitoring outputs, 2110 live voice-over, SCTE signaling support, loudness processing, media recording, and more. Multiple integrations were required and were managed by the Pebble development team overseeing the installation, including Vizrt, Avid MAM, Evertz, and BTS. All changes to the system installed at Asharq News can be undertaken via a single user interface, ensuring the channel has an agile setup that is both responsive to market needs and future-proofed.


NEWS - SUCCESS STORIES

TV 2 Norway chooses Phabrix QxL for 25G IP UHD test and measurement

Svein Henning Skaga, Solutions Architect, TV 2 Norway

TV 2 Norway, the country's largest commercial broadcaster, has purchased a PHABRIX QxL rasterizer to provide 25G UHD IP ST 2110 video and audio generation, analysis and monitoring at its Bergen headquarters. PHABRIX's authorized distributor Logical AS supplied the system.

Svein Henning Skaga, Solutions Architect, TV 2 Norway, said, "We are highly impressed with the QxL's fully flexible architecture, covering all our UHD signal monitoring and generation requirements in both 12G-SDI and 25G IP ST 2110. We will be using the QxL within our test network, but also on UHD sports productions. With its value for money, easy access via VNC functionality and compatibility with all the devices in our media IP network, the QxL is an excellent solution for high quality signal generation, analysis and monitoring."

The PHABRIX QxL is a flexible, compact 25GbE UHD rasterizer designed to meet the requirements of 10GbE/25GbE IP workflows suitable for professional broadcast. Out of the box, this IP-enabled rasterizer supports generation and analysis of JT-NM TR 1001-1:2018, 2110-20 (video), 2110-30 (PCM audio), 2110-31 (AES audio), and 2110-40 (ANC) media flows, all with 2022-7 Seamless IP Protection Switching (SIPS), and independent PTP slaves on each media port for fully redundant media network operation.

The QxL is one of the first devices of its type for which SDI is an option – not part of the core – making it significantly better value for HD IP broadcast operations today. Licenses can be added to accommodate UHD over IP and HDR/WCG tools as options to meet future needs. The QxL comes as standard with a comprehensive range of audio tools (up to 80 channels of audio per 2110-30/31 flow), Quality Control tools, and Closed Caption and Loudness monitoring.

PHABRIX CEO Phillip Adams said, "The QxL was created to address the needs of professional broadcast media IP networks at an unparalleled price performance. TV 2 Norway are at the forefront of IP deployment, so we are delighted they have chosen QxL as they continue to invest in their impressive industry-leading studios in Bergen."
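The ST 2022-7 seamless protection mentioned above is conceptually simple: the same RTP packets are sent over two independent network paths, and the receiver keeps the first copy of each sequence number it sees, so a loss on one path is masked by the other. The following is a deliberately simplified, illustrative Python sketch of that idea; it is not PHABRIX's implementation, and real receivers also manage buffering, path skew and reordering in real time.

def merge_redundant_streams(path_a, path_b):
    """path_a / path_b: lists of (rtp_sequence_number, payload) received on the two paths."""
    seen = set()
    merged = []
    for seq, payload in sorted(path_a + path_b):   # offline simplification of the real-time merge
        if seq not in seen:                        # keep only the first good copy of each packet
            seen.add(seq)
            merged.append((seq, payload))
    return merged

# Path A lost packet 3, path B lost packet 5 - the merged output is still complete.
a = [(1, "v1"), (2, "v2"), (4, "v4"), (5, "v5")]
b = [(1, "v1"), (2, "v2"), (3, "v3"), (4, "v4")]
print(merge_redundant_streams(a, b))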


NEWS - SUCCESS STORIES

Creative Technology implements Quicklink ST500 for professional contributions

Creative Technology, part of the NEP Group, has utilised Quicklink's ST500 (Studio-in-a-box) for multiple professional contributions to deliver virtual events for its clients, with consistent, low-latency video and audio feeds from numerous remote contributors in a variety of locations. Recent events the company has delivered using the technology include a virtual event for an international blue-chip company, which ran three simultaneous 90-minute live virtual shows twice over one day in six different languages. Creative Technology also pre-recorded videos with multiple contributors, to be streamed across two online events. Quicklink's secure guest URLs were used to link remote presenters back to the event control rooms, providing high-quality content.

"With the COVID-19 lockdown, we are connecting to people within their homes and there is varying quality to those solutions. We are relying on people's own electronics like their laptops, cameras, lighting, and their own soft microphone. This is a system that is sent out to people's homes to replace that and it absolutely elevates the quality of that contribution capture. All the people at home have to do is plug the microphone into the box, establish it on their local Wi-Fi and they are ready to go. All of those elements are controllable from our production hub here at Creative Technology," said Steve Purkess, Head of Sport & Broadcast at Creative Technology UK.

The Quicklink Studio-in-a-box (ST500) is a compact unit with a built-in camera and lighting. The camera, lights and audio can be controlled from any Chrome browser using the Quicklink Manager Portal to ensure in-frame broadcast content is received. The central operator fully manages the broadcast and can choose both file recording as HD-SDI and live output to air.


NEWS - SUCCESS STORIES

Viaccess-Orca powers Megacable's OTT platform arrival to Android TV set-top boxes

Viaccess-Orca (VO), part of the Orange Group and provider of OTT and TV platforms, content protection, and data solutions, announced that Megacable, one of the largest cable TV providers in Mexico, is expanding the delivery of its Xview+ OTT service to Android TV set-top boxes using VO's TV Platform solution. With unified content management and multi-device delivery capabilities, VO's TV Platform will streamline the delivery of Xview+ to Megacable subscribers, providing them with access to live, VOD, time-shift TV, catch-up TV, and recorded content as well as premium OTT offerings on any device. "As we expand the reach of our Xview+ service nationwide across Mexico, it is critical to partner with a technology provider with extensive OTT deployment expertise and on-the-

ground support," said Gerardo Seifert, Chief Marketing Officer at Megacable. "VO's TV Platform is supported by a rich ecosystem of partners (including Dotscreen, Broadpeak, Technicolor, ZTE and Oregan Networks) and a dedicated local engineering team that will simplify the rollout of this exciting service."

Megacable is expanding the delivery of its Xview+ OTT service using VO's TV Platform solution.

With VO's TV Platform, it will be "simple" for Megacable to manage, deliver, and monetize Xview+. The platform includes an end-to-end security solution for content protection, with content encryption, device authentication, license management, and parental control. Additionally, VO's platform offers analytics-based content discovery and TV app features that will allow Megacable to deliver personalized content recommendations.

"Xview+ is available on Android TV set-top boxes, bringing an unparalleled video streaming experience to millions of Megacable subscribers in Mexico," said Philippe Leonetti, CEO of Viaccess-Orca. "As the most complete solution for operating OTT services, our TV Platform simplifies content preparation, management, delivery, protection, and personalization. With our platform, Megacable can deliver a seamless cross-screen user experience, increasing their viewer engagement and revenues."


NEWS - SUCCESS STORIES

Mediahub Australia upgrades to IP with EVS

MediaHub Australia, a specialist in broadcast playout, content delivery, data handling and archiving solutions, has selected EVS' Media Infrastructure solutions as the cornerstone of its upgraded in-house master control room (MCR). Following an extensive 12-month RFP process, the current SDI-based environment located inside the Ingleburn facility will make way for a SMPTE 2022-6/7 and 2110 IP platform based on EVS' Cerebrum control and monitoring solution and Neuron, its IP stream processing platform. The deployment will be undertaken by EVS alongside local partner Magna Systems. In addition, MediaHub Australia is also one of EVS' newest customers following the acquisition of Axon, confirming the strategic importance of the deal that was signed in May 2020. With Cerebrum, MediaHub Australia will be able to master multiple workflows,

MCR MediaHub Australia

offering total control and customization over complex setups designed by the many broadcasters it works with. In addition to its customization and flexibility in managing SDI environments, it will also be able to adapt and scale to future infrastructures using NMOS IS-04 and IS-05 specifications, ensuring that it can manage and control future production workflows in native-IP formats. The new setup demanded flexible gateway devices with a small footprint and all-round processing capabilities for a wide range of audio-video needs

in IP environments. EVS was able to meet these specific requirements by deploying its Neuron solution. Neuron's Ethernet-based architecture supports various advanced video processing capabilities such as frame synchronization, up/down conversion, color correction, HDR conversion, audio (de)embedding and audio shuffling. This ensures that any crew located inside MediaHub Australia's MCR can easily bridge the gap between baseband SDI and any type of IP media streams and take advantage of the huge set of functions available to them in an instant.


NEWS - SUCCESS STORIES

Dome Productions chooses Calrec Apollo console for new SMPTE 2110 UHD OB truck

Canada-based Dome Productions has unveiled a new all-IP SMPTE 2110 OB mobile unit. Called Gateway, the truck is Dome's first large-scale, all-IP, UHD/HDR unit and is currently being used for TSN and Rogers SportsNet on their hockey broadcasts. The deal has been managed by Calrec's Canadian partner SC Media. The OB vehicle features a Calrec Apollo digital audio console. The decision to go with the Calrec solution came down to several features. Al Karloff, Manager of Engineering Services, Dome Productions, said, "In the North American sports broadcast market, there are a few key pieces of hardware you need to have in the truck to be at the top tier of productions, and a Calrec console is one of them. It's important for our clients to have the power of the Apollo's 1020 DSP paths, as well as the double layer of faders for direct


access on the control surface. There’s also a large A1 operator base that makes for better and more consistent productions for our clients, no matter where they are on the continent.” Dome has been using Calrec consoles for 20 years and all different generations of the consoles are still active in the company’s fleet. The Apollo was commissioned by Canada-based SC Media. Jean Daoust, SC Media Founder and President, commented, “The Apollo was the only console that met Dome’s requirements;

no other model could offer a surface with 144 faders, or the mix power of over 1,000 input channels. We’re proud to be partnered with companies such as Calrec and Dome Productions. The newfound addition of Calrec to our AV portfolio has opened new doors for us in the broadcast media vertical, as well as provided incredibly innovative solutions to our already valued customers. The Gateway truck is an incredible project, and we look forward to building upon this success for all parties involved.” 


NEWS - SUCCESS STORIES

SDNsquare to Install GRID on United's OB Vans

SDNsquare will provide EMG company United with its patented GRID Software Defined Network control software to support the Dutch media production service provider with the IP transition of a number of OB trucks. Having initiated their working relationship with SDNsquare two years ago, when the GRID software solution for IP network orchestration was first installed at United's Operations Centre (RoC) in Hilversum, United wanted to extend this relationship into the purchase and IP extension of several OB vans. Though the events of 2020 placed an inevitable impediment on progress, as both companies look forward to 2021 with cautious optimism and as business projects for both parties begin to restart, United is now in a position to finalise the contract for purchase. SDNsquare's GRID software solution will become the network controller for their entire IP-based fleet, as well as

remaining compatible with their existing RoC. The IP-based nature of GRID is key for United. The construction of United’s fleet will be modular in nature, centering around several ToR switches and calculating the capacity needed for different applications. The use of GRID will ensure that the hardware United deploys can be located wherever needed, in accordance with their ‘United Anywhere’ concept, in which United are able to offer fully decentralised, remote production and video editing anywhere in the world, no matter where the event recording takes place. Finally, thanks to the use of

GRID, United will remain vendor-agnostic and therefore free to pick and choose the most suitable components for their installation (as well as gaining a future-proof, scalable network management system that can accommodate any future upgrades). CTO of United, Paul van den Heuvel, said: "We have already had success with the deployment of GRID in our RoC back in 2019. It provided a level of reliability and flexibility in our network operations, and as such it will be key to the success of our OB rollout, even when we're producing from remote locations."


NEWS - BUSINESS & PEOPLE

disguise joins forces with Kwanda and Workfinder to address diversity in tech industry

disguise has launched a new internship programme designed to seek out, train and invite the very best talent from "under-represented backgrounds into the technology industry". disguise hopes to uncover the most promising young individuals through new partnerships with two initiatives that share disguise's values of equality and diversity: Kwanda, a non-profit organisation supporting enterprise and creativity within the black community; and Workfinder, which matches students with opportunities in start-ups and scale-ups. Specifically, by partnering with Workfinder, disguise is directly supporting their Women in Tech scheme, which aims to deliver increased gender equality in the STEM sector. The new internship programme simultaneously gives young people the chance to explore a technology career while addressing inequality in the technology industry. Diversity is something disguise actively seeks to play a part in furthering as it develops high-potential talent.


Chyron acquires DTN’s Weather Suite

DTN Weather Suite's Clients Can Now Access Chyron's Production, Service, and Support Capabilities

Chyron has finalized the acquisition of DTN's Weather Suite business (formerly MeteoGroup). "The acquisition of Weather Suite brings Chyron a weather platform with deep multiplatform, collaboration, and role-based access capabilities," said Chyron Vice President of Product and Strategy Mathieu Yerle. "We are excited to bring our in-depth production, service, and support capabilities to Weather Suite's clients." Weather Suite staff transfer to Chyron as part of the transaction, becoming a core part of the growing Chyron Weather team. With this acquisition, Chyron also enters into a partnership with DTN to provide customers with weather data. "We are excited to welcome Weather Suite employees and clients to the Chyron family," said Chyron CEO Ariel Garcia. "We have identified weather as one of our core areas for growth and innovation, and we look forward to working with the growing team to continue developing the best products for our customers."


NEWS - BUSINESS & PEOPLE

LiveU will distribute Derek Chauvin trial via LiveU Matrix

LiveU, in partnership with the Minneapolis Pool, will offer customers four camera feeds from the courthouse media center covering the trial of Derek Chauvin, the former Minneapolis police officer charged with second-degree murder and manslaughter in the George Floyd death case. The three courtroom camera iso feeds and a switched clean feed will be sent to the media center (located across from the courtroom), where a LiveU LU800 will transport four video streams and distribute them to global customers via the LiveU Matrix IP cloud video management and distribution platform. The highly anticipated trial will begin on Monday, March 8th. "LiveU has been the trusted transmission solution for broadcasters to deliver high-quality, high-profile breaking news to global audiences," said Mike

LiveU Matrix.

Savello, LiveU VP of Sales, Americas. “This trial is one of the biggest global news events in the last few years. With COVID restrictions, it is crucial for our customers to get instant access to the content and quality video for their newscasts.” “The introduction of the LU800 production-level field unit has enabled our customers to effectively cover more ground by supporting up to four fully frame-synced feeds in high resolution from a single unit. LiveU Matrix increases efficiencies by

letting news teams reliably share multiple live feeds across internal and cross-organizational endpoints through the public Internet. The combination of hardware and software tools provides a powerful all-in-one live transmission, content management, and distribution solution for our customers," added Savello. LiveU will offer the pool feed for anyone to pick up (current customers and other global broadcasters).


VIRTUAL PRODUCTION

DIMENSION STUDIO

Polymotion Stage Truck. Credit Nikon, MRMC, Dimension.



Virtualized production is becoming increasingly popular in productions of all kinds. It offers 'instant' solutions for creators who, after trying out this technique, think twice before returning to the classic, handy green chroma. Dimension Studio firmly relies on this technique, as well as on many others that will mark the future of broadcasting and are already finding surprising applications in it: volumetric capture, 3D scanning, photogrammetry... We reviewed in depth the studio's history and some of its most novel techniques to delve into all the possibilities of mixed reality in our world.

By Sergio Julián Gómez, Managing Editor at TM Broadcast International

Interview with Callum Macmillan, Co-Founder and CTO at Dimension

How was Dimension born? What's your story?

For the past 20 years, I've been interested in the evolution of multi-view, computational and volumetric imaging, 'virtualising' the real world as both still and moving image, to help content creators – from filmmakers to musical artists to brands – tell their stories in creative new ways.

In 2007 I took over as Managing Director at TimeSlice Films, founded by my brother Tim Macmillan in 1997, which became a leading multicam and array-based camera specialist. We've pioneered innovative capture solutions, including many generations of frozen-time and 'bullet-time' camera systems, and deployed the very first frame-synchronised GoPro video array in 2011 to Fiji

in a campaign shoot for Rip Curl. In 2014, having been focused on photogrammetry, 3D scanning and volumetric capture for a number of years, TimeSlice began working with Hammerhead, one of the first pure-play immersive entertainment studios, and together we began to create some of the first augmented and virtual reality experiences.



Working with the founders of Hammerhead, in 2017 we launched volumetric filmmaking studio Dimension, which was the first Microsoft Mixed Reality Capture studio licensee. In 2020, we merged both companies - TimeSlice and Hammerhead - into Dimension and expanded

our studios to create a new breed of XR entertainment studios. With a rapidly growing, global team of over 50 brilliant people based in London, Newcastle and North America, today we're creating cutting-edge XR content, digital humans and virtual production.

What differentiates Dimension from other companies? We believe that tomorrow's entertainment is going to be mind-blowing. From 5G to the metaverse - volumetric content, real-time technologies, computer vision, mixed reality

Fireworks Virtual Production - © Wilder Films, photographer Tom Oxley.




headsets, AI - filmmaking, gaming, music, fashion and sport will be totally redefined. With our diverse skill sets in capture solutions, real-time engines, character creation and AI/ML, Dimension is ready for this new era of XR.

What are the services and the solutions you offer? Digital humans – Dimension's volumetric capture studios operate worldwide and are redefining the creation of realistic virtual humans and avatars for broadcast, games, and immersive entertainment. We see volumetric video capture of humans and 3D scanning to make avatars as two sides of the same 'Digital Human coin'. This means that whether a client requires the natural believability and realism of a volumetric video or the great flexibility of an avatar or 'digital puppet', we‘ve got it covered. XR content - From superheroes and supermodels, magic dragons to love-crazed robots, we've been

pioneering high-quality virtual, augmented and mixed reality experiences, broadcasts, games, and live events for over a decade. We have a team of artists who are passionate about VR and the power of immersive storytelling.

LED Screen Shoot. Credit Dimension Studios.

Virtual production - My lens on Virtual Production is that it is a term to cover a broad range of technology solutions and innovation teams - for



example MoCap, VolCap, Cyber Scanning, LiDAR Scanning, V-Cam, Simulcam and Virtual Art Department - that converge on the content production pipeline, touching it at various

points. This includes previsualisation, scouting, tech-viz and scene creation for LED and XR stages. Our real-time workflow and industry-leading partnerships enable us to rapidly

respond to Virtual Production needs for film, drama and broadcast.

What technologies and procedures are defining the present of production? What are the solutions most requested by your customers? Real-time engines and volumetric capture are at the heart of productions. The Eurosport Cube mixed reality broadcast XR studio is a great example of our real-time programming team and art team coming together to create a cutting-edge, versatile virtual studio powered by Unreal Engine. It was able to ingest multiple data feeds in real time whilst representing spatial content, fully tracked in the 3D environment, blurring the boundary between real and virtual in a highly responsive fashion. The Balenciaga Fall 21 Collection video game was another technology first, where cloud infrastructure was not only used to compute Dimension's volumetric



videos on a large scale and in a short time, but was also used to remotely serve the final game (including our captures). This allowed for near-zero-latency real-time control of the virtual world, streamed to the user's device. I could stand in the middle of my local park using my mobile phone to play the game and view Dimension's volumetric captures, all without having to download a thing. No data-heavy assets or scenes, just a lightweight video stream.

And what about state-of-the-art workflows? What does the industry have to offer the most innovative companies out there?

From a virtual production point of view, our pre-viz pipeline lets you control all aspects of shots, including camera setups, actor placement, props, locations, lenses, lighting and atmospherics. Using game engines such as Unreal Engine, we are seeing a real-time revolution in the production process, enabling greater creative collaboration, planning, and faster decision-making. The traditionally linear production process is disrupted. Teams work together earlier and run in parallel, giving huge space for innovation. We're exploring the combined powers of virtual production with our volumetric video technology. Working with volumetrically captured actors in 3D environments creates a workflow that means you can edit camera angles around them, try out lighting and environmental effects on them, all in real time, seeing the final results as you work. This realises an entirely new content layer for Virtual Production in the form of natural-looking digital extras for LED walls or traditional VFX pipelines.

Who are Dimension's technological partners? Our volumetric capture stages use Microsoft's Mixed Reality Capture technology. Microsoft have invested millions of dollars over the last decade to further the capabilities and adoption of volumetric capture and their solutions. We have used other technologies, including many previous generations of our own hardware and software stacks. We are now very much on a collaborative journey with Microsoft to offer a scale, quality and resource to our clients that is far beyond what could be achieved alone. Working with Nikon Group company MRMC, in 2019 we launched the Polymotion Stage - a state-of-the-art mobile multi-solution studio capturing high-quality volumetric video, imagery and avatars at up to 4K, with integrated motion capture. The 'volume' which contains all these technologies is housed within a custom-engineered expanding chassis of an HGV lorry, similar to what you see with OB trucks for broadcast - but even bigger!

For virtual production, we recently announced a partnership with the Oscar®, BAFTA and Emmy award-winning visual effects and animation company DNEG. The timing of our partnership with DNEG is well placed to help serve the increasing demand for, and de-risk the hype around, Virtual Production. By being able to call upon and collaborate with such an incredibly mature visual effects house and the highly skilled talent within it, we can scale into the Virtual Production market as and when required.

Our XR stage is offered with technology partner 80six, providing a futuristic mixed reality studio for broadcasts, events and live performances.

Credit Dimension Studios.

Dimension has several studios in the UK and USA. Can you tell us more about them?

Dimension's first volumetric and 3D capture studio is based in Wimbledon, West London. It features a 110-camera array using hardware from IO Industries and Nikon, full sound and dynamic LED lighting, creating wonderfully detailed and realistic volumetric video performances, avatars, or static scans. Our Newcastle studio offers end-to-end real-time production services for XR and a 3D scanning studio capable of capturing volumetric stills for cyber body scanning and avatars. The Polymotion Stage mobile dome and truck can be deployed anywhere in the world to capture digital humans at up to 4K, along with marker-based and markerless motion capture systems.



Avatar Dimension, our Washington DC studio launched in 2020 with Avatar Studios and Sabey Data Centres, is an advanced volumetric capture studio focussed on creating digital humans for enterprise. It uses the very latest Volucam cameras from IO Industries. The XR studio, launched with 80six, has a permanent location based in Slough, UK, using ROE LED walls and Disguise media servers. The setup can be very easily reconfigured with different screen panels, real-time play-in systems and camera tracking hardware to suit any size budget and production. We custom build LED stages for virtual productions in locations around the world.

Let's dive into Virtual Production. How did you develop this area? Was it a central part of the Dimension concept from your origins? Working in entertainment and using real-time engines, virtual

production for film and drama is a natural progression of the Dimension team’s skills. We began to discuss virtual production services with filmmakers in 2019 and the industry has been accelerated in the past year by the global pandemic. It enables creative and efficient solutions during a time when productions are limited by social

distancing and travel restrictions.

What are the core technologies of your virtual production setup? We custom build LED stages for virtual productions in locations around the world. The technologies used depend on the requirements of the shoot. The cinematic, photorealistic, live 3D worlds running on the LED screens are created in real time in Unreal Engine or using Disguise and Notch. Our ROE screen setup is provided by 80six, and real-time camera tracking and positioning solutions from the likes of EZ Track, MRMC Bolt and Mo-Sys work with Unreal and with cameras from the likes of ARRI and Panavision.

Credit Three UK, Pretty Green, photographer Matt Alexander.

How has this area evolved in recent years? The virtual production workflow is ever evolving. Hollywood is taking notice, investment in the area is growing and new use cases are released every day. A diverse generation of storytellers is using a new set of tools, with ever more creative results. Fundamentally, the shift is from using Virtual Production tools to help with previsualisation, scouting and live-action framing guidance, i.e. 'off-camera', to a world where the real-time content is actually output to a high enough standard for 'final pixel', i.e. 'in-camera', filmed live.

How relevant are led walls in your virtual production workflows? LED walls are fundamental to creating the ‘in-camera’ capability of virtual production for principal photography. LED panel technology is constantly evolving, with the dot-pitch ever decreasing and visual fidelity / dynamic range getting better. In another

3 to 5 years we should have many of the current gotchas and limitations to filming on LED ironed out.

I’d also like to know more about your Mixed Reality studios. What is currently being done thanks to MR? You have a strong relationship with Microsoft in this area, is that correct? Dimension creates ground-breaking holographic and immersive experiences, from Madonna’s holograms at the Billboard Music Awards to Sky’s Britannia VR game and Eurosport’s mixed reality Cube broadcast studio. We are a Microsoft Mixed Reality Capture Studio, with Azure cloud capabilities key to the processing pipeline behind volumetric digital humans. Users view and interact with the digital humans and mixed reality experiences created by our team using mobile phones, desktop and headsets, including the Microsoft HoloLens.

Another impressive development is Volumetric Capture and



how you integrate it into entertainment and broadcast. Could you tell us more about the type of work you have developed for these areas? What are the critical technologies that drive your Volumetric Capture workflows? Broadcast understandably has a very high quality bar. Prior to Dimension entering the broadcast market there had been limited use of volumetric capture and 3D avatars, due to either the low visual fidelity of the assets or the unnatural movement and look of the digital humans shown to that point.

Specifically, Sky Scope is an amazing case study on this. How did this collaboration develop?

When it comes to technology, Sky has always been innovative. Early in my career, around 1999, I was lucky enough to work on some multi-camera VFX shots for a Sky Sports ident called 'Millennium'. Millennium visualised what the future of sport might look like and the technologies that Sky imagined could be harnessed to literally present "a new dimension in sports broadcasting". In 2019, the vision that Sky Millennium presented two decades earlier became a reality in the form of Sky Scope, using Dimension's combined volumetric capture and motion capture innovation.

Sky Scope is a Sky Sports broadcast segment using digital humans to power free-viewpoint swing analysis - allowing a full 360-degree inspection of a golfer's swing - presented in 'broadcast AR', using real-time camera tracking of the holograms into the live TV studio, giving the viewer unparalleled insight. Golfers, including Dustin Chapman, were captured using the Polymotion Stage on site at the Open Championship.

The key to Sky Scope's success and warm reception from the viewing public wasn't the fact that volumetric 'holograms' were used for the first time in such a production; it was the editorial value those digital humans unlocked. It enabled highly capable presenters such as Nick Dougherty to maximise how they told the story of a particular golfer and highlight the unique differences between each professional in a way never before seen.

This development is spectacular, but what is even more so is the fact that you can capture it on the road thanks to the Polymotion Stage. Could you introduce our readers to some key facts about this truck?

The Polymotion Stage truck is the world's most advanced volumetric capture studio on wheels. It's capable of creating volumetric digital humans, avatars and 3D stills, and comes complete with two fully climate-controlled rooms, which can be used as green rooms or for hair, makeup and wardrobe.

The stage is able to travel anywhere in the world, ideal for capturing teams at sports games, full casts on set and multiple looks for fashion shows – as seen in the Balenciaga Fall 21 Collection video game, which was shot on the truck in Paris.



Once stationary, the truck triples its footprint to present a green screen capture room of 46 m² in size, while automatic self-levelling hydraulics provide the stability that volumetric video capture demands.

There are two concepts mentioned on your website that will undoubtedly shape the future of broadcast: AI and 5G. How does this fit in the work Dimension does? Realistic digital humans and XR experiences can rely heavily on mobile bandwidth. Low latency, high bandwidth 5G means volumetric video can be streamed efficiently, often live and in a collaborative situation, either one-toone or one-to-many. In 2020 we worked with telco Three UK on a 5Gfuelled campaign at London Fashion Week. As part of a multi-sensory catwalk experience, supermodel Adwoa Aboah’s 46-metre volumetric capture was projected onto a wall overlooking the front row, with mobile handsets streaming an AR version of

the supermodel walking down the catwalk. Artificial intelligence is key to virtual production, and to the volumetric filmmaking we see as the future. From texturing, lighting and rendering both humans and scenes, to creating entire crowds from a single volumetric capture, we're already embracing this new, data-driven style of filmmaking.

Finally, what will be the future of Dimension? Where do you think technology and creativity will lead you? There’s still a huge opportunity for filmmakers, brands, broadcasters and more when it comes to XR. From virtual worlds to games to AR product-try-on, only small numbers are innovating in this space right now. With its effectiveness proved, we expect to see many more brands being brave enough to push boundaries and explore the possibilities of using XR. Beyond that, we believe vast persistent virtual worlds – the metaverse,

the holodeck, the Oasis – are coming. Brands and entertainment experiences will live and thrive creatively in these bold new worlds. We expect the virtual economy to boom, and our team has the skills to embrace it. On the large screen, in terms of filmmaking, the adoption of real-time will increase exponentially over the coming 5 years. Beyond that, as this real-time growth continues and the scenes and content used in production become largely high-quality three-dimensional assets, there will be a tipping point where the viewing medium of long-form film will finally open up from just 2D to having 'true volume', not brash, stereoscopic '3D'. It'll be a more subtle and natural experience. There will be a time when each person who sits down in a cinema or at home to 'experience' a film will literally have a unique viewpoint of the story being told - very similar to theatre.


VIRTUAL PRODUCTION

ORCA STUDIOS

Independent visual effects and virtual production company Orca Studios has been offering the latest technological solutions in areas such as cinema, television and advertising since 2015. At its core is a firm will to make the most of technology and state-of-the-art implementations in production. Innovation does not limit, but rather expands, the possibilities of an industry always characterized by progress. As this edition went to press, the company had participated in productions such as 'Down a Dark Hall' and 'Paradise Hills', as well as in an upcoming series for the video-on-demand platform Netflix. Adrián Pueyo, Compositing & VFX/VPX Supervisor, who has worked on renowned productions such as 'Captain Marvel', 'Star Wars: The Last Jedi' and 'Pirates of the Caribbean 5', brings us a vision of the present and future of virtual production.

What is the history of Orca Studios and what are the main areas of activity of this studio? Orca Studios was born in 2015 in Las Palmas de Gran Canaria, in the Canary Islands. It was founded by Adrián Guerra (producer and partner of Nostromo Pictures) with the aim of developing the possibilities of the new field of real-time technologies for cinema. A year ago, Orca Studios reinvented itself as an independent visual effects and virtual production company.

At Orca Studios we cover the entire cinematography process, from previsualization with real-time techniques and motion capture, through concept art, illustration, modeling and the creation of environments and assets, to complete post-production of visual effects, including virtual production on an LED set. At the core of our philosophy are filmmakers, and one of our top priorities is to guide them through this process with technologies that are constantly reinventing themselves, thus offering efficiency and creative control.

You have your own facilities where you can carry out your work. What spaces do your studios include? We currently carry out our face-to-face work mainly from two sites. For the virtual production part, we have a dedicated set in Madrid. It is equipped to meet all production needs (dressing rooms, offices, hairdressing and makeup areas, etc.) and it is where we have our fixed LED environment, with a curved, 100 m², high-resolution HDR screen and another set of high-



luminosity screens so as to create an immersive environment, in addition to all the technical means and human resources that are necessary to carry out the filming. On the other hand, our visual effects studio is located in Las Palmas de Gran Canaria, where we have a team of professionals and an infrastructure that, in

addition to supporting virtual production, has a full pre-production and post-production pipeline for visual effects. I would also highlight that our pipeline is largely built around remote work, which currently allows us to have a large number of our artists working from different countries.

What do clients of Orca Studios usually request? What area are they most interested in? Where is the trend moving to? The interest in the advertising area stands out, since virtual production (especially in these times) provides much more flexibility when traveling to various locations, in addition to

Adrián Pueyo at Orca Studios

37



benefits relating to project times. In large fiction productions, which are our main market, the learning curve is slower, and we have more demand from large international productions familiar with the technology, which have been delayed by the Covid situation. Domestically, fear of this new technology and its


possibilities is gradually dissipating, not only on the aforementioned production side, but also as a vital part of the story's visual style. This seems to be one of the trends that will be seen the most in the coming years: using virtual production not only as a technological means, but also as a narrative means.

One of your most interesting focuses is your virtual production system. What is the system you have implemented like? What are its uses? Orca's virtual production area (VPX) includes several different processes, comprising virtual scouting and previs (virtual cinematography), but the



most representative of them is perhaps a process known as 'in-camera VFX', the filming system in which we use a large LED panel as a background, capable of displaying a rendered image in real time for the position and orientation of the real

camera, creating an illusion of perspective and immersive background. This, beyond generating 'live' VFX, presents a wide range of advantages and new possibilities as compared to other traditional methods of

integrating virtual stages, such as the chroma key:

• In an LED environment, the panels that surround us, in addition to acting as a background, illuminate both the actors and the real props, greatly favoring the integration and credibility of the final result. This is especially important in more reflective areas, where generating a complex reflection of the environment on moving characters at the post-production stage is a very complicated task.

• By having an image instead of a chroma, any component of spill (an unwanted flood of green on the characters) also disappears, replaced, as we have seen, by the colors and shapes of the virtual environment we want to be in at every moment.

• It allows us to create, in a matter of seconds, a darkening effect on any part of the screens that generates an unwanted reflection or light. Or move part of the background to get a different reflection. Or create a small square of green chroma (or of any other color) behind the actors and props in case we want to use a conventional chroma key, but without wrinkles and without generating any front spill, as it is placed only on the portion of the screen located just behind them. Or perform live color corrections on the background, extra screens, or even on the materials of the 3D stage we are seeing. The possibilities are really vast.

• On the other hand, the fact that everyone involved in the filming (from the actors, the director or the director of photography to the lighting technicians or even the clients themselves) can see live the environment in which we find ourselves, and not just an abstract background, creates a much more immersive and fun working environment, which means that everyone understands much better the context of the final piece we are working on. Actors know where to look and lighting technicians know where to light, seeing the final result; and, for everyone else, there is much less room left to the imagination at a time when every wrong decision could be costly.

Adrián Pueyo teaching a workshop at Orca Studios

Also, when compared to filming in a real environment, this system offers clear advantages in certain situations, such as the following:

• A natural-lighting situation that we cannot maintain over time, such as golden hour, sunrises or any weather situation that is not very controllable or predictable.

• Environments where we have already shot and which we need to replicate perfectly, but where in reality that is impossible for us. For these cases, we can generate a virtual environment that replicates the desired scenario. A clever solution is to capture any real set at the time of shooting, so that if we need retakes at a later stage we have the virtual set as a 'backup copy'.

• Places where shooting is very expensive for us due to a number of reasons.

• Scenarios where moving the entire team is impractical for legal or logistical reasons, or where transferring talent is just not feasible.

• Scenarios where, if filming in the real environment, we would need a much larger team, whether for lighting, transport, catering or any other reason. This is especially important in the currently prevailing Covid-19 times, when limiting the number of people attending shoots plays an important role.

• In general, any environment impossible to achieve in reality!

What specific technologies are part of your virtual production system?

Referring now, by virtual production, only to the context of in-camera VFX (the direct on-camera visual effects we mentioned before), some of the pieces of the puzzle are the following:



• Real-time camera tracking, with systems like the Mo-Sys StarTracker.

• A real-time rendering engine, like Unreal Engine.

• LED panels with different lighting features and color rendition according to their function, plus the electronics, other hardware and wiring necessary to power the set and deliver the images.

• Machines in charge of translating the camera position into the equivalent image and mapping it to the necessary position on the relevant screens. Here we can directly use engine functionality such as Unreal Engine's, create our own tools, or combine it with external platforms like disguise.

• A sync signal generator between all parts: camera, machines and screens.

The above are those related to hardware. Regarding digital workflows, the number of possible technologies that converge increases significantly. We could categorize them as capturing environments (via photogrammetry, projection mapping, 360º photography, LiDAR scanning, etc.), generating purely digital environments (modeling, texturing, shading, lighting, etc.) or color management (required in all steps from capture to recording), in addition to those related to interaction and real-time rendering.
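To make the "translating the camera position to the equivalent image" step a little more tangible, the sketch below computes the off-axis (asymmetric) view frustum that a flat LED wall implies for a tracked camera position. This is generic projection geometry under assumed wall dimensions and camera coordinates, not Orca Studios' actual software; real systems additionally deal with curved walls, lens characteristics and latency compensation. Feeding such a frustum to the render engine each frame is what keeps the perspective painted on the wall consistent with what the physical camera sees.

# Minimal sketch: off-axis frustum for a flat LED wall seen from a tracked camera.
# Wall corners and camera position below are illustrative assumptions, in metres.
import numpy as np

def offaxis_frustum(eye, wall_ll, wall_lr, wall_ul, near=0.1):
    """Return (left, right, bottom, top) at the near plane for a camera at 'eye'
    looking at a rectangular wall given by its lower-left, lower-right and
    upper-left corners."""
    vr = (wall_lr - wall_ll) / np.linalg.norm(wall_lr - wall_ll)  # wall right axis
    vu = (wall_ul - wall_ll) / np.linalg.norm(wall_ul - wall_ll)  # wall up axis
    vn = np.cross(vr, vu)                                         # wall normal, towards camera
    vn /= np.linalg.norm(vn)

    # Vectors from the eye to the wall corners, and eye-to-wall distance.
    va, vb, vc = wall_ll - eye, wall_lr - eye, wall_ul - eye
    dist = -np.dot(va, vn)

    # Wall extents as seen from the eye, scaled down to the near plane.
    left = np.dot(vr, va) * near / dist
    right = np.dot(vr, vb) * near / dist
    bottom = np.dot(vu, va) * near / dist
    top = np.dot(vu, vc) * near / dist
    return left, right, bottom, top

# Example: a 10 m x 4 m wall, camera 1.7 m high, 4 m back, 1 m left of centre.
wall_ll = np.array([-5.0, 0.0, 0.0])
wall_lr = np.array([ 5.0, 0.0, 0.0])
wall_ul = np.array([-5.0, 4.0, 0.0])
eye     = np.array([-1.0, 1.7, 4.0])
print(offaxis_frustum(eye, wall_ll, wall_lr, wall_ul))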

The LED screen is a key device. How is it integrated into the workflow?

It is part of the shooting process, once we have all the assets created and the number of remaining variables in production is very low. We can consider that pre-production in a virtual production workflow is much more important than in a traditional one; it is the price to pay for all the aforementioned advantages, and for potentially leaving the shoot with a high percentage of takes captured in camera, ready to assemble and shape into the final piece.

In the same way, we see graphics engines originally from the world of video games becoming increasingly relevant and turning into authentic standards. The most illustrative example is the Unreal Engine. How important are they, not only in virtual production but in current post-production workflows?

The number of doors that these real-time render engines are opening for the creation of audiovisual content is huge and goes far beyond virtual production. When creating visual effects, for example, the rendering process has traditionally been a very expensive and time-consuming part of production, where iterating through 3D scenes in order to make changes would always tend to mean re-rendering, taking up a large number of machines and a lot of time. In any context that requires feedback on the final result of a 3D scene seen through a camera - and, as you can imagine, there are many - having the possibility of rendering that scene in real time gives us an incredible advantage in pursuing a result and being able to iterate as much as necessary, at full speed, to achieve it. And these render engines have become truly powerful and realistic in recent years, relying on both visual tricks and GPU processing, accompanied by rapid progress in the processing capacity of the GPUs themselves and in machine learning algorithms aimed at reducing noise, increasing resolution or generally improving the render results.

Could you tell us about any recent experience that you have undertaken in the field of 'Virtual Production'?

Since we started with virtual production a year ago, we have been able to use the set for a variety of projects: some involving more traditional techniques, such as the use of LED screens as a canvas to project previously shot material (with all the advantages of light and realism inherent to an LED environment), others where the real-time rendering of 3D environments was a central part… Recent projects have included the filming of a series for Netflix and several commercials. In one of them, for example, we did not only rely on static 3D backgrounds, but also triggered different real-time interactions, based on animations and transitions on the screens, to generate very specific, ad hoc effects for the production right when the director signalled to proceed.

In-Camera VFX Reel



VIRTUAL PRODUCTION

TRT2 brings a new perspective to virtual studio production for broadcast

Since 2016, Zero Density's Reality Virtual Studio has powered numerous live events, esports, commercials and episodic TV. TRT2, a culture and arts channel, is one of the prominent users of Reality with its virtual studio and AR workflow. Moreover, TRT2's programs have become the epitome of how photorealistic virtual worlds can enhance storytelling and immerse the audience.

TRT2 – Channel of Culture and Arts

TRT2 is a branch of the media giant and national television network TRT. It has focused on cultural and educational programming, arts, talk shows and documentaries as a separate channel for 2 years. In the process of shaping the identity and branding of the new channel, TRT2 decided to adopt the latest virtual studio technology as the cornerstone of its production workflow.

The Vision for Virtual

The way TRT2 uses the technology introduces a new perspective to virtual studio and augmented reality adoption in broadcast. The hyper-realism and flexibility of Reality provide ample opportunity to adopt AR pop-ups and CG graphics, but TRT's outlook is one of creating an ambience that feeds the story in a natural way. Additionally, while the channel's long-term strategy included live programming, it stuck to record-to-tape production from its launch at the beginning of 2019.

When the TRT2 team looked at the program lineup for the new channel, they realized the content was weighty and the hosts, as well as the guests, were important names in their respective fields, such as film and music. The creative team wanted to build worthy locations for all the special themes that would reflect the expertise of their hosts and guests and support the story. Shooting in different studios or building distinctive physical hard sets for each program was not realistic in terms of time, cost and the relative comfort of the hosts. So the team started looking for a solution among virtual studio options that would allow



them to create the atmosphere they envisioned. The name that the TRT2 team kept coming back to was none other than Zero Density. "The Reality platform is unmatched. There is no alternative to Zero Density in this area for its quality and usability," says Mustafa Taşçı, Creative Director at TRT2. Reality is an Unreal Engine-native, node-based compositing platform that works in real time. While using Unreal Engine as the renderer, Reality's special toolset tailored for live production offers photorealism, sophistication and ease of use for virtual studio and augmented reality. The Reality suite's pipeline is specifically designed to achieve the perfect blend of the virtual world with the physical one, as a consequence of effective handling of tracking data, a powerful keyer and intuitive control tools. TRT2 built its virtual studio, installed Zero Density products in late December and went on air on 22 February 2019. For a month, the crew learned everything about Reality and started shooting as early as February. Within two

months, TRT2 successfully realized its vision of soulful backgrounds for its thematic stories. The team started shooting a couple of shows from the virtual studio and increased the number over time. As of now, the in-house creative team has worked on 14 distinct designs for 14 shows. Every day there is at least one program on TV that is shot inside their virtual set. On a normal schedule, the TRT2 team uses the studio for production for 8 hours on weekdays, between 10 am and 6 pm, and for 3 hours at weekends.

The Studio

The L-shaped green box studio, which is 10 by 8 meters with a height of 3.5 meters (a corner studio similar to a rounded square), is equipped with 3 Engines for 3 Sony HDC-1400 cameras. The studio employs RedSpy camera tracking technology from Stype. As for lenses, TRT2 opted for 2 Canon HJ24ex7.5B and 1 Canon HJ14ex4.3B on a 5.5 m tall crane. 2 cameras are on Sachtler Vario Ped 2-75 pedestals and 1 is on the crane.

The studio employs 40 Kino Flo 4ft 4Bank portable lighting systems and 20 De Sisti high-frequency LED Fresnels in the F6 and F10 models. TRT2 has produced 14 different programs from this virtual studio. The director shoots each program in real time but also records to tape for minor color grading and cuts. Switching from one virtual design to another actually means switching from one studio to another, with an entirely different light setup if applicable. And this takes less than a minute, as each program can be saved as an rgraph to be loaded when needed. "As the variety of programs increased, traditional broadcast studios started to have difficulty in responding to demands coming from the broadcast industry," says Taşçı. "Especially when the topic touches on culture and art, a studio structure that accommodates wider opportunities to satisfy both the audience and the producer, which can be suitable for the shooting of many shows under difficult conditions, and which can


bear the weight of the content, eventually becomes a necessity. Therefore, conventional studios are unlikely to fit culture and art content that frequently needs surreal places." The surrealist outlook of TRT2 is supported by Reality's high-quality compositing capability and advanced keying technology. As a result, the shows with virtual designs draw their power from the naturalness of their locations: they do not stand out but melt into the background during the discussions. From time to time, TRT2 also makes the best use of virtual windows to support the story. Mustafa Taşçı explains: "Given that art itself is abstract, creating a scene that blends in can be seen as a challenging process for the concept of TRT2's programs. Culture and art programs are fragile, and their free-flowing structures should not contain any restrictions. They should have enough space to give refined information to the audience while protecting their principles and dynamics."


In this context, TRT2 produces multidimensional programs, from cinema to literature, each telling a story about a specific time in Turkish culture and art history. Of course, to use its creativity to the fullest and to be efficient in terms of time and cost, the channel turned to high technology and managed to keep up with the requirements of the age.

The Mystery of the Bosphorus

TRT2 drew in viewers visually and contextually from the first day it went on air. For the first time in the channel's history, the national culture and arts channel built a state-of-the-art virtual studio, leveraging the broad and non-restrictive nature of virtual sets, which fit like a puzzle piece with the channel's vision and theme. One of the landmark shows of the channel, Movie Like Lives, stood out, with a massive audience locked in to watch one of the most iconic movie stars of the 90s host equally epic names in a very polished location. The Bosphorus in Istanbul has numerous palaces and mansions along its coast. Renting such a space and getting permits for these ancient buildings are difficult tasks. To create this ordinary yet exclusive atmosphere in the background, TRT2 conducted many experiments. As a first practical solution, TRT2 thought they could use 360º spherical videos for the view. Although it worked in theory, it did not meet their expectations because it could not provide enough resolution for the details; the scene demanded more pixels. Resolution was not the only challenge to tackle, either. The data streaming from the rig, shot with three 4K cameras and stitched, was gigantic because of the panoramic video. To elaborate, when a panorama was created from these videos, the resolution increased significantly: it was not HD, not Ultra HD, not 4K, but a 10K video. Even though TRT2 was concerned about performance, the three panoramic videos side by side made a 10K video which ran seamlessly in the background, powered by Zero Density's Reality, without any compromises on quality or performance.

"Indeed, the result was dazzling. Not only the audience but also some of the professionals in the industry could not tell whether the scene was real. I received phone calls from these people in the business, asking for the location of the Bosphorus set. The set was so vivid that people started asking if they needed a permit to shoot at this location," says Taşçı. "We will continue to expand virtual studio coverage. We have plans to use virtual sets for other channels as well," he adds. "In addition to that, no real-time program will be in the upcoming season, because it is not our style, but we continue to change some of the existing designs. Also, there is minimal post-production need when shooting and recording with Reality, because effects are rendered in real time. Timewise, good things come slowly, but Zero Density offers TRT2 the option to be lightning quick when we needed to be."

TRT2, the culture and arts channel, has showcased an alternative perspective on virtual studio and augmented reality use in broadcast, both in application and in vision. TRT2 has also started to open its doors to live events in its virtual studios. The International Migration Film Festival opening and closing ceremonies were held in real time, with a special presentation by the Ministry of Interior. Last but not least, TRT World Forum was hosted in the TRT2 studios for 2 days and 20 hours of live broadcast.


PRODUCTION

Extreme E A challenging production from the harshest environments

Photos: Copyright MCH Photo


'Electric Odyssey': this is how the ambitious Extreme E project introduces itself to the world - a motorsport competition starring electric SUVs that will travel through remote and -of course- "extreme" places: the Alula desert in Saudi Arabia (April 3 and 5), Lac Rose in Senegal (May 29 and 30), Kangerlussuaq in Greenland (August 28 and 29), Pará in Brazil (October 23 and 24) and Tierra del Fuego in Argentina (December 11 and 12). The Extreme E approach is as attractive for viewers as it is complex for broadcast engineers: remote production, more than 70 cameras, FHD, AR, VR… We discovered how these groundbreaking races will take place thanks to Alejandro Agag, Founder and CEO; Ali Russell, Chief Marketing Officer; and Dave Adey, Head of Broadcast and Technology.


Background

How was the Extreme E idea conceived?

Alejandro Agag (AA): We started planning Extreme E around three years ago. I was going around in circles in my mind thinking of what to do after Formula E that could have a bigger link with the cars on the street and that could also have even more meaning and a stronger link to the fight against climate change. I started brainstorming and talking with a friend, Gil de Ferran, former Indy 500 champion, and came up with this idea of going to extreme places to do the races, which I think is the genius idea of Extreme E - it's not my idea sadly, it's Gil's. We started working on the project, and the first thing we did was buy the St. Helena, because we knew that we needed a specific ship for this project; we did that around three years ago and then we started building everything else for the championship. About two years ago, we really started pushing and now


we are not far away from the first race.

As the name implies, this is a competition that will take electric SUVs into extreme conditions. How do you plan to solve all the logistical challenges that this whole concept implies?

Ali Russell (AR): The series will use a ship, the St. Helena, to transport cargo associated with the championship to these remote locations, as this has a lower carbon footprint than air freight. The ship will also be home to a scientific laboratory, and we will have scientists on board conducting research into climate issues during its transit.

Extreme E has signed agreements with global broadcasters. Could you provide more details on how many viewers you expect to reach and how many broadcasters and VOD platforms you plan to distribute the content to? AR: The series has signed a variety of broadcast

agreements all over the world, from the BBC in the UK to Globo in Brazil. We plan to reach as many screens as possible through partnerships like these, but also through the championship's social media platforms.

Production

First of all, this will be a brand-new production. Have you had any particular reference



when designing the TV production of this new competition?

Dave Adey (DA): Because of the remote locations and the sensitivity around the sustainability of the Championship, we have opted for a remote production format. There will be a minimal number of production staff at each location – just cameras, sound and essential

infrastructure. Camera feeds and data will be sent to the main gallery in London where graphics, replays, Augmented and Virtual Reality and commentary are added to produce the final World Feed. The remote production actually involves 5 different companies operating over 4 locations.

Will you opt for UHD (4K + HDR) as production standard or will you start with a full HD production? DA: We are producing in full HD 1080p50, with


distribution of the finished programming available as 1080i50.

How many cameras will be part of the coverage?

DA: In the region of 70, including on-boards, track, POV (point of view), UAVs and handheld cameras. Only 10 will be manned, the remainder being remotely operated.

Are you going to implement on-board feeds (cameras, stats and so on)? How would you solve the transmission to the TV compound?

DA: Each car will be fitted with 4 on-board cameras to support the World Feed. The World Feed streams to our digital platforms will also show approximately 4 on-board camera feeds. All cameras and car telemetry data will be connected to the TV compound wirelessly.

In fact, that will be a really challenging part of the production. How will you connect the sources to the main production centre?

Extreme E ship.

What will your streaming solutions be? Have you considered introducing transmission units based on 4G/5G technologies?

DA: There will be minimal cabling onsite. There are 2 sets of nodes. Broadcast cameras will be picked up by a number of nodes placed around the track. These nodes interlink, bringing the camera feeds back to the broadcast compound onsite. The data feeds follow a similar methodology: we harvest telemetry data from the cars and connect it to an innovative wireless mesh network. A number of these nodes are placed around the track, and each car even acts as a node, creating the mesh network. These data nodes are completely self-sufficient, running off batteries topped up with solar panels, with no cabling. Once in the broadcast compound, the data is embedded within the picture and sound signals and back-hauled to the gallery in London via satellite and fibre. We have little or no internet at each of the locations.

Let's talk about audio. What sources will viewers be able to hear?

DA: Apart from the normal trackside audio, we will incorporate driver audio and communications talkback from team to driver within the broadcast production. Within the digital streams, we can offer a choice of each team's audio and talkback.

This huge production will require reliable communication between the production team. What intercom system do you plan to implement?

DA: Our communications partner is MRTC, who are one of only a handful of specialist motorsport communications companies. They will provide all radio comms, driver audio, Championship radio comms and frequency management.

Today's motorsport coverage includes all kinds of mixed reality and augmented reality graphics in real time.



What graphics system have you chosen for the competition? Are you planning to include AR + MR graphics in the production? DA: Our Timing and Telemetry Partner, Alkamel, will provide all data harvesting, Timing and networks on site. They also provide a full graphics service to Aurora Media Worldwide, our Host Broadcaster. Equally, we will be utilising quite a bit of AR and VR.

Does Extreme E have technology partners? DA: As Host Broadcaster, Aurora Media Worldwide. Al Kamel Systems will provide Timing and Telemetry. Last but not least, MRTC will handle Radio Communications.

What do you think will be the biggest challenge in producing Extreme E races? DA: Connectivity is the biggest challenge. There will be little or no existing Internet in the locations we will be racing. Our systems are designed to use little or no Internet bandwidth for remote


working. We are working with local internet service providers to investigate the feasibility of extending coverage to the event site, along with other technologies including 4G/5G and satellite.

There’s a great possibility that you will face adverse weather conditions in some of your races. How would

you adapt to this type of condition? DA: The clue is in the title – Extreme E. Extreme sport in extreme locations. The brief to all our partners is to design infrastructure that can operate within extreme locations, terrains, temperatures and weather. Expect the unexpected.



Distribution How will content be distributed to broadcasters? DA: Our Distribution partner is Red Bee Media, who will distribute the World Feed to our global Rights Holders via 2 completely resilient and diverse satellite paths. Red Bee Media will also

encode and distribute our digital feeds to XE platforms / OTT with flavours of HLS and RTMP feeds, also making them available to our rights holders.

Does Extreme E have plans to also offer additional content via a VOD/OTT platform? DA: Digital platforms

offer us the ability to include added-value fan engagement, whether that is additional camera feeds, fully immersive data or alternative audio sources. We have many more innovations planned over the next two seasons, including apps, esports and gaming.



DOP IN TV SERIES

Successfully overcoming the creative challenges of 'The Boys'

Since childhood, today's protagonist found himself 'infected' with the thrill of film shoots. He was always clear that this had to be his goal, and each step of his career headed steadily in this direction. He trod his path through short films, feature films and commercials; recently, 'stability' reached him when he landed in the world of TV fiction thanks to the effectiveness, creativity and passion he puts into his jobs. It is a world he is passionate about. Today's TV environment is not only capable of presenting a cinematic finish, as is the case in most films, but is also capable of adapting and evolving with each episode or each season. This chameleonic capacity is evident in the episodes he has photographed for the second season of 'The Boys' (Amazon Studios) -he is already shooting season three, by the way- but also in other renowned productions on which he has worked: 'The Americans' (FX), 'Zoo' (CBS), 'Suits' (USA Network), 'Memphis Beat' (TNT)... We chatted with Dan Stoloff to have him share with us his vision of the profession and the technical keys to his latest work.

Frequently, children who end up working in cinema begin by being fascinated by the silver screen or television, and then move on to a home recording device, such as a Super 8 camera. Nonetheless, there comes a time when the roads divide and they decide to focus on

direction, photography or any of the other areas of this amazing process. How did you end up choosing photography? What were the reasons for taking that path? I was first introduced to a super 8 film camera at the age of 10. Neighbourhood



friends and I made a series of little films. I was intoxicated by the process. Not excelling in traditional academics, I vowed to myself that if there was a way I could make a living doing this, that is what I would do. I'd had no exposure to anything related to the film industry and I

made it my mission to learn everything I could. I saved my money and purchased a subscription to American Cinematographer. I read every issue from cover to cover. It became my bible! In junior high school I took a class in animation, followed by filmmaking. In


high school I continued on this journey, taking college film courses in summer school. There was no doubt in my mind I would go to film school for college. I went to Ithaca College and my studies reaffirmed my commitment to the craft, and here I am.

What was the first TV Series whose photography amazed you, whatever the reason behind it? I came up in my career through Feature films and commercials without much interest in episodic television. I think my interest was first sparked by ‘The Sopranos’, not so much for the photography but for the cinematic potential of TV. It was the storytelling, in an epic and intimate medium that I saw potential for me and my career. It would be years later that I first got an opportunity to photograph a TV series. It was a show called ‘Memphis Beat’ and I found everything I'd learned previously prepared me for this. I


loved the pace, the energy, the thinking on your feet that TV required. I was hooked.

What is that spark that drives you to choose one project or another? Is it the script, the creativity behind it, maybe the technological

possibilities of the project…? What I am looking for in choosing a project is multifaceted. Primarily, the story must grab me. I ask myself what is my personal connection to the story and how can I access my own sensibility to make a meaningful



contribution. Variety is important to me as well as discovery. When

undertaking a project the world you enter is very much dictated by the subject. After spending months in dark alleys, or gritty basements, it’s nice to switch it up to high end glamorous locations. Personalities of those involved is a high priority too.

Is Dan a DP who loves to add his signature to every project he works on, or does he prefer to be a tool to the director of the film or series? Maybe you feel comfortable halfway? I don’t consciously try to put a specific stamp on a project. I try to let the story dictate the look of a project. That said, so much of what we do are decisions we make based on our own taste, so I suppose that would be the common denominator.

Do you consider yourself as a “techy” when approaching your profession? Does technology expand your creativity or inevitably limit it? I do not consider myself a “techy” or overly technical director of photography. I keep up to date with technology as one must, but I rely on my DIT and camera teams to really execute the technical.

How has working in 4K or 8K + HDR changed your way of filming? I don’t think working at higher resolution has affected the way I work at all. I tend to light primarily by eye; I know if it looks good to my eye, the camera will enjoy it too. I do love the expanded exposure range that HD and HDR afford, it is all getting the camera closer to the way my eye sees. I love shadow and highlight details… I want to see it all in the frame!

Although you have worked in several movies, you've been


specially linked to TV series during the last decade. What's so special about this field? What do you like the most about working in this area? How has your job in TV changed since you worked back in 1995 on MTV's The State, for example?

What I love about dramatic episodic television is that as we move through a season a TV series takes on a life that in feature films is a bit different. A feature is a finite project, all prepped and executed with a specific vision. I have found that TV has more room for growth in the look as well as in the characters. Just as actors shape a performance over a season, the look of a show evolves as well. In a perfect world, the show grows in its look, the efficiency of execution and depth of character, which the look helps to support.

Before talking about 'The Boys', the last big project you were involved in, you were responsible for lots of episodes of 'Suits' and, in my humble opinion, some of the best episodes of 'The Americans', among other projects. What is it like to take an already established show, such as the latter, and take it into its final episodes?

Taking over a show that has an established look presents its own creative challenges. Fortunately for me, the shows I've taken on have had a wonderful look already, so putting on those shoes was painless. The question becomes how I maintain the look of the show, keeping it consistent, and at the same time put myself into it. I found with 'The Americans' that following a few ground rules kept the show in the zone but also allowed me to introduce new locations and new characters and keep them in the same world. The lensing of the show was very specific and that, to me, became the common denominator. We stayed with wider lenses, close in with actors, to keep the environments sharp and present. I felt that this created a more intimate proximity to the audience. I would often juxtapose very wide shots, through doorways or windows, to create an outside perspective, like a spy might view the world, with the close, wide, intimate coverage, as though inviting the audience in on an intimate secret.




What were the greatest challenges in these two TV series?

'The Boys' has so much in the way of variety that I found every scene was a discovery. Eric Kripke, the showrunner of 'The Boys', told me early on to "play with it, have fun with camera movement, keep it dynamic but grounded in reality, and never let me

catch you lighting!” I thought this was the best coaching I'd ever had. I knew what he meant and although I never want to get caught lighting, what he meant was for the conceit of the show to work it had to be within the confines of reality. The challenges of these two shows were very different. ‘The Boys’ is a masterclass

in effects, visual effects, special effects, and stunts. Most days we do them all. ‘The Americans’ was a different kind of challenge. Being a winter show in New York, with a lot of nights is a tough go. It was a lot of fun keeping it dark and textured – not too dark, you've gotta see the wonderful actors –, but gritty realistic and edgy.


What was your choice of camera and lenses for 'The Boys' (or, at least, what was chosen when you arrived)?

When I arrived on 'The Boys' the show was being shot on a Red with Cooke SF anamorphic lenses. I loved the look of the lenses and the 2.35:1 aspect ratio, so I kept the lenses; however, I switched the camera to the Sony Venice, primarily for its color rendition and high speed. The base 2500 ISO was crucial to a lot of our night work. The aesthetics of the show really come from the writing. I don't think it needs a defined look, except to say that the world we create must closely resemble the grittiest of the real world as we peel back the layers to uncover what is beneath the surface of mass corporate manipulation. The elements within must reflect familiar visual tropes that viewers can identify with. For example, if there is a music video from the 90s embedded in a story line, that video must be visually


legitimate. A newscast must look like a newscast, and clips from a tentpole summer blockbuster superhero movie must have the production value of any Marvel project.

You’re also working on a brand new HBO Max project. Could you tell us more about this and

the technological / visual choices that have been taken? After finishing season 2 of ‘The Boys’ I was looking for something smaller and more intimate in scale. I got exactly what I was looking for in the HBO Max pilot ‘Julia’. It is the story of Julia Child's entry



into the world of television. Taking place in 1962, it was a period character piece with a lot of cooking. Coming from the world of TV commercials I was familiar with food shooting, and I love it! The care and detail in shooting the making of a French omelette uses different muscles than the planning and execution of a mass murder at a congressional hearing! Both are infinitely satisfying, however more so when juxtaposed together!

Finally, what do you think has been the most relevant technological upgrade for DPs and what technological solution would you love to see introduced in the near future? I think the most radical technological upgrade I've seen in my career has been the ascendance of digital cinematography. We had been using the same tech for 100 years and it was time for a change. No more late night calls from the lab.

No more conflicts over who scratched the film, camera assistant or lab. The dynamic range is vastly improved; the flexibility of the medium is improved. Essentially, we can shape the look of any filmstock or process with the push of a button. It has freed us up to try things and to dig deeper into the needs of a script to provide visuals that can make even greater contributions to storytelling.


TECHNOLOGY

In the first installment of AoIP we talked about the advantages of using this technology in our production environments and its main differences in comparison to traditional audio, whether it was analogue or digital. However, implementing a complete AoIP system entails new challenges that must be faced which different manufacturers have been gradually sorting out. On this occasion, we will see the practical applications that are necessary to implement these types of systems: automatic detection, administration, control and security. By Yeray Alfageme, Service Manager Olympic Channel

The plug-and-play concept is something already taken for granted by everyone, whether professionals or consumers, in any IT environment. Wi-Fi just works, USB just plugs in, and my headphones are directly audible when I plug them into my phone. But reaching this level of maturity does not happen overnight. I still remember the days when, in order to get


any new device connected to a computer one had to prepare the drivers, install them -the correct version, of course, depending on the operating system, device model, etc.-, connect the device in proper sequence, follow a protocol and manually configure everything and more... And this was not so long ago, only going 10-15 years back.



And what has made this change in the IT world possible? Open standards, of course. All operating systems, devices and protocols speak the same language, at least partially, which allows them to behave in this highly desirable automatic way that makes life so much easier for us today. Well, something similar has happened with AoIP.

Three decades of history

Yes, it has been 30 years already since AoIP started to be used in general multimedia production environments. And in the beginning everything was as tedious as in the IT world. Everything had to be configured by hand, nothing was compatible with anything, and the manufacturers seemed to be fighting like heirs in an inheritance dispute: there was no way for them to understand each other. For example, an IP microphone connected to a network needed to have an IP address and make use of multicast IP addresses known to the rest of the devices, which would 'listen' to the stream sent by the microphone and be able to 'read' it: understand its format, bit depth, sample rate and everything else. Well, all this had to be configured manually. It is easy to imagine the problems, the number of configuration errors that occurred and the time required to set up even the smallest AoIP production environment. This slowed its


deployment a lot, but it was the future and manufacturers and standards continued to evolve to make what we have today possible: AES67.

Achieving Plug-n-Play

To ensure that, when connecting a new device to our audio-over-IP production network, the device will be recognized and work correctly, two things are required: finding the device and controlling it. When we say finding the new device, it is not only about all the other equipment knowing that it is actually there, but also identifying how it is configured and what type of audio stream it is generating, if the device is a source. To get this done, an IP address must be allocated. This is easy to solve thanks to DHCP (Dynamic Host Configuration Protocol), through which the router is able to grant an IP address to a new device within the network's range. We already have it on the network then.

Going one step further, we need to tell the rest of the devices what audio settings it is set to. That is what AES67 is for. This protocol sets forth a specific format for the RTP (Real-Time Transport Protocol) payload through which the audio specifications are exchanged. This payload is acknowledged by all the other devices and they adapt to it. Now we have the device correctly discovered and in a proper format.

Moving on to the control side, in addition to the device's IP we must know the multicast address(es) used to transmit the relevant stream. These addresses are used so that the rest of the devices can 'listen' to the stream and request a copy of these multicast packets from the network's switch. This reduces the network's load, and all devices listen to the same stream at the same time without duplicating it. The problem is the large number of multicast addresses present in even the smallest network, which immediately complicates their control, although if done correctly they can be managed automatically.

Not everything about Plug-n-Play is good news, as it greatly complicates interoperability between devices. If all devices mutually identify each other, it is assumed they can also communicate among themselves. It is also assumed that other formats such as AES3 or MADI can properly communicate with AES67 devices. In addition to automated systems, control software applications always offer manual functionality so as to enable the different protocols to understand each other.
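As a rough illustration of the 'listening' mechanism described above, here is a minimal sketch of a receiver joining a multicast group and picking up an AES67-style RTP stream. The group address, port and payload format shown are assumptions for the example (in practice they are described in the stream's SDP rather than hard-coded), and this is not tied to any particular manufacturer's implementation.

# Minimal sketch: subscribe to an AES67-style multicast audio stream and read one RTP packet.
# The group address, UDP port and audio format are illustrative assumptions only.
import socket
import struct

GROUP = "239.69.1.5"   # example multicast group carrying the stream
PORT = 5004            # RTP port commonly used for AES67-style streams

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group is what asks the switch to forward us a copy of the packets (IGMP).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

packet, _ = sock.recvfrom(2048)
# Strip the fixed 12-byte RTP header; what remains is raw audio payload,
# e.g. 24-bit linear PCM at 48 kHz as advertised in the stream's SDP description.
payload = packet[12:]
print(f"received {len(payload)} bytes of audio payload")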

Increasing flexibility

Increasing the flexibility and ease of interconnecting any type of device comes at a price: complexity. Having all devices communicate with each other means that we have to carry out complex configurations such as, for example, identifying all the multicast addresses on the network. Even in the smallest of networks we find hundreds of multicast addresses, and their number grows exponentially. That is why standards such as AES67 do not include a particular specification for such interoperability and the flexibility that follows from it, leaving the network and the management software to take care of that. Nowadays these systems are mature enough and work quite well in managing an AoIP network.

Enhanced interoperability

As with augmented reality, a critical point is that all devices, regardless of manufacturer, should not only communicate with each other; management software, regardless of manufacturer, also has to be able to discover and control all of them. The problem is that each manufacturer develops their control system around their own products, obviously, and they invest a lot of development effort and money in the task, but they cannot carry out major interoperability developments. In addition, it would entail problems related to sharing certain secrets or source code between manufacturers, something that everyone is very reluctant to do. In the end, the source code is where the value of the intellectual property of the different developments lies, and it involves a lot of money, as we mentioned. That is why, if we want the flexibility of AoIP standards like AES67, we must allow for certain compromises. For example, having an AoIP network based on AES67 with a control system from a single manufacturer will provide us with all the advantages and flexibility, but within that manufacturer's ecosystem.

It is true that either manufacturers or regulators could work on open standards so that equipment and control systems are compatible with each other, but this is not going to happen out of the blue just like that. There are certain advances and initiatives in this regard, but they have a long way to go before they are ripe.

Conclusions

In the end we must get the best of both worlds. For example, combining the specifications of the ST 2110 standard with AES67 is a good compromise. The operations side is more than resolved in AoIP, but there are still challenges, such as interoperability of control systems between pieces of equipment from different manufacturers, or simplifying the IT network configuration needed when connecting devices to each other. Rather than fighting against the current limitations, it is better to adapt to them and keep them in mind in order to work much better and make the most of the benefits provided by AoIP, especially the flexibility it offers us. Undoubtedly, compromising somewhat on the variety of equipment gives us advantages that, with traditional audio -whether analogue or digital- we could not dream of.


PRODUCTS

Test, QC and monitoring: 4 future-proof solutions

BBright UHD-Decode

The BBright UHD-Decode high-density solution can now process 8K, UHD, HD and SD signals, up to 8 video channels, and provide advanced monitoring features. Capable of receiving contribution streams in transport-stream-over-IP or DVB-ASI formats, the BBright solution decodes HEVC, H.264 and MPEG-2 video as well as JPEG 2000 and JPEG XS, and routes them in all ST 2110 formats, SDI or HDMI. A true Swiss Army knife, UHD-Decode supports the latest generations of audio codecs (including Dolby AC-4 and Atmos) as well as High Dynamic Range (HDR10, HLG and Dolby Vision) and BISS-CA decryption. An intuitive GUI provides advanced monitoring information, such as video thumbnails, audio bar

graphs, bitrate graphs, ETR 290 alarms, Dolby audio metadata including Atmos, closed captions and DVB subtitles, HDR metadata and much more! All of this is accessible remotely through the web user interface. For more info please contact sales@bbright.com

GatesAir Maxiva StreamAssure

Maxiva StreamAssure provides broadcasters and network operators with a centralized platform for monitoring transport streams, transmitter parameters and modulated RF, to ensure full compliance with standards and customer Quality of Experience. This cloud-based system allows on-premises and off-site, managed service-based monitoring of physical, transport and audio/video streams from a single dashboard. This simplifies the traditional component-based architecture that typically employs disparate monitoring systems to understand network-wide performance.

Rapid isolation of performance issues across signal transport and RF infrastructure is another key value proposition for broadcasters. The ability to separately analyze multiple IP, ASI and other streams ahead of the transmitter versus the RF and decoded audio/video streams coming out allows users to quickly identify and address the source of the problem. The StreamAssure system is fully scalable from a single transmitter deployment to a large-scale network. Options include compliance recording and streaming.



OPNS Radiospy

We offer the easiest way to monitor your on-air radio stations everywhere! Simple and economical, Radiospy offers a unique way to track the status and quality of your broadcast signals. Remote monitoring with automatic alerting by email or SMS, remote listening, local recording… these are just a few of the rich features of our new Radiospy. Totally transmitter-vendor agnostic! So affordable that monitoring becomes accessible for all radio stations, regardless of the number of transmitters you have to manage. FM, AM, DAB or even HD Radio are supported by our built-in universal tuners. It can even be integrated into your existing SNMP-based monitoring solution or managed through our central Dashboard. Radiospy is available in different models: Micro (1 tuner), Mini (up to 4 tuners) and Maxi (up to 16 tuners). Radiospy can be used standalone, combined with its Central Dashboard or as one component of our Broadcast solutions suite. http://broadcast.opns.net

Riedel MediorNet FusioN Standalone IP Converter & Multiviewer

MediorNet FusioN is a versatile standalone processing frame that can be configured with a selection of inputs and outputs from our range of SFP I/O modules, as well as with a variety of processing apps. For instance, the software-defined frame can be configured as:
• A bulk gateway capable of handling up to 8 gateway conversions for HD/3G or up to 2 UHD signals
• A bulk gateway with black burst output and a 1GE control tunnel for POV camera capture and control
• A dual-channel JPEG 2000 encoder or decoder with SDI or ST 2110 I/Os
• A 16x1 IP multiviewer connecting right at the back of your display, with HDMI or SDI connectivity

The compact and quiet FusioN series is therefore perfect for sending IP signals to SDI or HDMI displays or for connecting the SDI signals of remote cameras.



BROADCASTERS

ITV's Journey with Server Side Ad Insertion

By Vinay Gupta - Senior Architect - ITV Video Platform

ITV is one of the biggest commercial broadcasters in the United Kingdom, and advertising is the core of our overall business. We offer a range of services, from multiple DTT broadcast channels across different regions to over-the-top catch-up products like ITV Hub and a bespoke SVOD service like BritBox. As a broadcaster, our strength comes from being in a unique position to commission and produce our content and to have an in-depth understanding of our viewers' viewing habits. Over the years, we have invested a lot in enhancing our technological capabilities for harvesting viewer data, which gives us a competitive edge and the scientific tools necessary to do effective and efficient addressable advertising.

Our journey with server-side ad insertion started about four years ago, when we began looking into potential technologies to replace the RTMP- and Flash-based simulcast streaming on the ITV Hub platform. Most of our vendors providing technology and support for RTMP had already started communicating end-of-life dates for their products. At the same time, we were cautiously observing the industry adoption of HTTP-based packaging protocols like DASH and HLS. In the early stages, we realised that to take full advantage of the bespoke standards and get maximum horizontal coverage across the platforms, we would need to go through a very focused strategic selection of vendors for this project. We wanted to work with companies which were as keen and participative as ITV in standards bodies and had a robust, proven engineering culture to create new technology from concepts and whitepapers.

The First Big Step: The most challenging part of a complex integration project is always identifying where to start, and a correct kick-off can make the whole experience much more straightforward. In our case, we decided to focus first on the legwork in our broadcast chain and automation schedule, and sketched a plan to light up our most-watched channels first with SCTE-104 markers. We also established the contracts to expose the scheduling and breaks data via APIs that could be consumed by the technical systems in our commercial domain.



Commercial needed this data to do a proper forecast, plan appropriate campaigns and be in a position to cash in on ad replacement opportunities in the future. It took us a good six months to spec this out, and a lot of back and forth with our playout partner, to get to the point where the format of the SCTE-104 messages could be deemed useful. What helped us the most during this phase was to stay focused on the task in hand (which we had scoped by having only a few channels to start with) and to keep a continuous discussion and feedback loop with our transcoding vendors to validate the format of the SCTE-104 messages to be conditioned to SCTE-35. Like every other commercial broadcaster, we also had our own semantics and instructions around content boundaries and segments that needed to be adhered to. Continuous validation testing helped us a lot to keep everything in line.

After this, the project's next phase was to take a macro look at our transcoding setup and start the right conversations with our preferred packaging provider. It made our task more manageable to have dedicated labs set up between our transcoding and packaging providers. We did this to build and test tricky integrations as early as possible and uncover any unknowns in the project's early phases. We organised many sync-up meetings and established processes to continually validate our joint understanding of standards and specifications during this time. It's worth noting that specifications and standards bodies sometimes leave some room for interpretation; the project integration team's critical role is to ensure that everyone is working towards the same vision. It took us a good few months before we could be confident about the streams' output format, and our transcoding and packaging vendors played a key role in getting us there. Once we were sure that SCTE-35 markers were present as CUE IN/OUT points in HLS and as DASH events in the output MPDs, we were ready to move on to the next phase.
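As a concrete, simplified picture of what such markers can look like downstream, the sketch below scans an HLS media playlist for SCTE-35-derived break boundaries. The tag conventions matched here (EXT-X-DATERANGE attributes SCTE35-OUT/SCTE35-IN, or the widely used EXT-X-CUE-OUT / EXT-X-CUE-IN tags) vary by packager, so both the tags and the example playlist are assumptions for illustration, not ITV's actual output.

# Minimal sketch: locate SCTE-35-derived ad-break boundaries in an HLS media playlist.
# Tag conventions vary by packager; the ones matched here are common but assumed.
def find_ad_breaks(playlist_text: str):
    breaks = []
    for line in playlist_text.splitlines():
        line = line.strip()
        if line.startswith("#EXT-X-CUE-OUT") or "SCTE35-OUT" in line:
            breaks.append(("break_start", line))
        elif line.startswith("#EXT-X-CUE-IN") or "SCTE35-IN" in line:
            breaks.append(("break_end", line))
    return breaks

# Hypothetical playlist fragment, for illustration only.
example = """#EXTM3U
#EXTINF:6.0,
seg100.ts
#EXT-X-CUE-OUT:30
#EXTINF:6.0,
ad_slate.ts
#EXT-X-CUE-IN
#EXTINF:6.0,
seg101.ts"""

for kind, tag in find_ad_breaks(example):
    print(kind, tag)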


Now the focus for the next phase was more on the delivery and architecture aspects of the end-to-end chain, and we went back to the whiteboards. We started talking about firmer requirements, project plans and considerations around the functional and non-functional requirements, and thinking about the overall architecture's resilience. After some essential conversations with the business and our vendors, we reached an agreement that we wanted to build the entire pipeline in the cloud, with an OPEX option for the costs. It's worth mentioning that we have a few high-profile shows, like Love Island, which drive a considerable amount of viewing traffic onto the technical front-end and back-end services. It helped the entire project to quantify and model the correct load profiles early enough and to share them as part of the NFRs with all the vendors and tech providers. We wanted to be sure that our content origins, DRM servers and ad servers could quickly scale to meet some of these more demanding use cases.

But one can never fully prepare. Even after all the prep work and due diligence, we soon realised that some of our vendors worked with multiple clients and broadcasters, which influenced their individual release cycles. We couldn't control everyone's roadmap, so we decided to ring-fence the project development around some of the marked GA releases. This approach saved us from many rounds of regression testing: once we had a release x.x of the packager that satisfied our business needs, it was expected to remain the same until we got to production. We uncovered some of the more delicate issues during this phase, and tracked and flagged them back to the respective vendors. By now, we were almost two years into the project. Finally, our architecture and infrastructure were ready to roll out the

streams with proper DRM and ad insertion. We have built the services to give us a lot of flexibility to turn ad replacement on and off or switch between environments, which gives us great operational control and options to do regular upgrades and maintenance without causing any downtime for viewers. It was then that we started to think about the launch strategy and any client-side work needed. We arranged many workshops between our player teams and our ad tech providers to get the correct client-side SDKs integrated, followed by a lot of testing to ensure that impressions and overlays were working correctly. All we needed now was to decide the target platform and a date to aim for the big bang to happen. Our management and commercial teams came up with the idea of focusing on the big high-profile event of the year, the Football World Cup 2018. Everyone welcomed it, and as a


platform, we decided on iOS and browsers to start with, as they were the most stable and the quality of service was generally high on these. We chose ITV1 and ITV4 to be the first to be migrated to the new architecture, to show most of the games. It was a big success for all of us, and we became one of the first few broadcasters to get SSAI working with DASH. The whole process was templated and soon replicated on ITV2, just in time for our biggest show, Love Island. At the time of writing this article, we are running a programme dedicated to the future of broadcast and specifically focused on IPTV and IP streaming.

We are also running active projects to launch SSAI on our Connected TV platforms and debating the merits of client-side versus server-side tracking for ads. Having run SSAI for a few years now and served millions of impressions, our tech teams have gained a lot of learning and confidence in this area. I am personally very optimistic about the future and our role in shaping it. Our vision is to be in a position where any content offering on our OTT platforms could be addressable. The connected TV market is still uncharted territory, but the devices are getting more and more MSE/EME compliant and providing support for HTML5 browsers. All the ingredients needed for successful SSAI are already there.

If I have to look back on the last few years and advise others from our journey, I would suggest focusing on the small wins first, getting the right experts on your side to integrate early and test as much as possible, and working with the third parties and vendors who understand what they are doing. More philosophically, our journey has just started, and we are very confident about rolling out our tech to all the remaining platforms. None of this would have been possible without our marvellous vendors' and suppliers' tech teams, who all played a crucial role in this project's success. We worked with Redbee, AWS Elemental, Unified Streaming, Irdeto, YoSpace, Akamai and BridgeTech.


OPINION

Looking to the cloud to scale up production

The demand for content is not going anywhere but up, and the challenge of rising production costs is front of mind for broadcasters and content producers around the globe. It is no longer enough to just up the level of output. In today's environment, our customers need to engage with global audiences that can no longer be reached through one or two platforms. Productions must scale vertically and horizontally across formats, versions, language, device types, and platforms to meet an increasingly diverse audience footprint. As the consumer has embraced on-demand content, so has TV technology – and many of the broadcast functions that were only really viable as highly specialized hardware can now be delivered as software-only implementations.

With benefits like scale, reach, flexibility and more, everybody in the TV industry needs a ‘cloud’ strategy. One of the most critical gains from a cloud approach is eliminating the need to maintain large infrastructure assets, such as datacenters, racks of servers and complex networking designs. In turn, customers are free to focus on their core line of business – creating and delivering stunning content. Flexible, cloud-based environments that utilize virtualization go one step further, offering a viable alternative geared towards on-demand and OPEX-focused models.

Accelerating towards the cloud
The potential to lower CAPEX, boost efficiencies and scale operations has meant that cloud and SaaS adoption was firmly in the TV industry’s sights even before the pandemic hit.

By Chuck Meyer, Technology Fellow, Grass Valley

The move towards cloud-based infrastructures and workflows has clearly accelerated due to the impact of COVID-19, with uptake really gaining momentum. Our industry has traditionally run on purpose-built systems specifically created to handle video, with the ‘as-a-service’ consumption model mostly used for tasks such as playout and other content distribution processes – but that was already changing before 2020. Data from the 2018 Devoncroft Media and Entertainment Cloud



Adoption Index projected that cloud usage across the sector would rise by 88% between 2016 and 2021. The pandemic and its restrictions have only served to speed up a process that was already underway – what many in the industry thought was three to five years away is already becoming a reality. In the summer of 2020, Los Angeles-based live entertainment and media company De Tune deployed our GV AMPP (Agile Media Processing Platform) cloud-native production platform to support completely virtual live event broadcasts. The De Tune team used GV AMPP Master Control to create a master control room (MCR) in the cloud with full redundancy, accessible anywhere globally to support a series of online events. Discovery's leading multi-sports brand, Eurosport, also chose our newly launched AMPP Playout application suite as part of a major technology transformation

to provide 'true cloud' architecture supporting playout across the network's sports broadcasts.

Doing more with less
The move to cloud addresses a perfect storm of circumstances – the need to produce more content to feed growing consumer demand and the need to distribute multiple versions of that content to a broader range of affiliates and platforms. This need to deliver greater volumes of content more efficiently is a major driving force behind the industry’s transition to more flexible and scalable IP-based infrastructures and workflows. This move also opens up new ways of working like remote and distributed production. The FIS Alpine World Ski Championships, the largest winter sports event outside of the Winter Olympics, is a great example. Normally, Sweden’s national broadcaster, SVT, would build a complete production “factory” at the

ski resort town of Åre to cover the 12-day event. However, for the 2019 Championships, SVT relied on extensive remote production capabilities instead, with up to 80 camera positions and a largely IP-based workflow across video, audio and data. Live signals were transmitted to SVT’s production facility in Stockholm, over 600 km away, for production and playout. SVT is not alone. Across the industry, we see an ongoing shift towards technologies and workflows that are designed to enable more remote production and achieve more while limiting or even reducing costs. This shift includes the widespread adoption of IP as a replacement for SDI and more use of software rather than dedicated hardware to reduce cost and enable more automation.
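To put that 600 km backhaul in perspective, a back-of-the-envelope latency budget (our own illustrative figures, not SVT's published numbers) shows that fibre propagation is only a few milliseconds; most of the glass-to-glass delay comes from encoding, decoding and buffering.

```python
# Illustrative latency budget for a ~600 km remote-production link.
# All figures below are assumptions for the sake of the arithmetic.
distance_km = 600
speed_in_fibre_km_per_ms = 200          # light in fibre travels ~200,000 km/s
propagation_ms = distance_km / speed_in_fibre_km_per_ms   # ~3 ms one way

encode_ms = 20                          # assumed low-latency contribution encoder
decode_ms = 20                          # assumed decoder at the production hub
jitter_buffer_ms = 10                   # assumed network de-jitter buffer

total_one_way_ms = propagation_ms + encode_ms + decode_ms + jitter_buffer_ms
print(f"Propagation: {propagation_ms:.1f} ms, total one-way: {total_one_way_ms:.1f} ms")
```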

Cloud in a broadcast world
The cloud has been entrenched in sectors like IT for several years, but for



the broadcast industry, tapping the cloud for TV is trickier. In the IT world, when an application takes an extra couple of seconds to carry out a process, the end-user does not really notice. In broadcast, two seconds of black screen during a live sporting event or out-of-sync audio is just unacceptable. The primary challenge of the cloud is that it fundamentally needs IP-based workloads, and the transition to IP in the TV world is very much ongoing. Although many key standards have been defined, many industry segments are still only partway through the journey. Added to this scenario, some of the most difficult technical issues that ensure cloud’s capacity to deliver the same experience as onsite production have only recently been solved. Delivering mission-critical solutions that overcome issues such as audio synchronization and acceptable end-to-end latency – all within a user-friendly operator

environment – are just some of the specific challenges that Grass Valley has worked with its customers to develop successful solutions for. As a result, early pioneers are starting to make that transition.

Finding the best fit
As cloud-based models become a more common feature of the production landscape, some will argue that throwing everything into the cloud is the solution. However, this doesn’t always make sense – either financially or operationally. Remote production via the cloud is certainly one application with many advantages, including more flexibility and the ability to scale up (and down) quickly within an OPEX model. However, it may not always be financially sensible; existing investments in broadcast TV technology run into the hundreds of billions, and broadcasters can continue to sweat these assets for a significant amount of time. Certain processes are still more efficient, faster,

cheaper, and more reliable via local, highly specialized hardware. At Grass Valley, we believe that cloud adoption in broadcast will be focused around three key areas: playout, production and live. In the cloud, common technologies can be used to support applications that previously were seen as having very little in common.

Playout presents the lowest technical challenge today and has been the industry’s starting point; however, given its high utilization of the underlying infrastructure, the economic value gained by moving playout into the cloud is optimized in applications where different renditions of a common program are required for simultaneous distribution, or when other encoding methods and quality levels can be exploited as “plug and play” services based on cost-optimized delivery to consumers.

Production, with its more demanding requirements and need for fast user experiences, can broaden economic value creation, but this is at an early stage. Grass Valley has made great progress with managing the user experience when either assets or processing are in the cloud. Trade-offs, in this case, include media asset security and acceptable human-factor latency for a given workflow.

Live, the most demanding of all applications, requires the lowest latency, the highest simultaneous signal counts and the highest number of operators to capture and create the program. Large sporting events for example, require significant coordination of these resources. In addition, human talent should be empowered to carry out their workflow using traditional control surfaces, yet also being able to adopt new

methods of controlling content creation. Transitioning live production to the cloud is a real game-changer for the industry. It offers the opportunity to optimize the number and location of any resource group, as well as exploiting the cloud for scale and elasticity to fit the requirements of a given production. There’s little doubt that virtualization of traditional hardware, such as on-premises chassis and cards, through cloud-first technology brings significant benefits to the media industry. It enables businesses to experiment creatively while providing efficient and scalable workflows for now and the future. We’re excited that GV AMPP is leading the way in unlocking all three of the principal workflows used in broadcast with flexible technology, allowing customers to be prepared for change as the world of TV continues to evolve. 


OPINION

Migration of the broadcast sector to the cloud
Consumer viewing habits and media business models have changed in recent years alongside the progress in smartphone technology and connectivity, coupled with the mushrooming of over-the-top (OTT) services. In order to keep up with the growing demand for content and adapt to rapidly changing viewer behavior, mass media companies are turning to the cloud to reinvent the media chain, from acquisition to first-class delivery of IP, adaptive-bitrate live and on-demand video across multiple devices. In line with this trend, broadcasters are working to refine channels and offload a good part of their traditional video infrastructure onto the cloud. In doing so, the flexibility, scalability, and security provided by the cloud are proving to be a great asset, opening up new technological and operational freedom of action, as well as resulting

in cost savings. Although a large majority of broadcasters have already started adopting cloud-based technology for these reasons, conducting a complete broadcast workflow in the cloud is only now becoming a reality, thanks to technological advances that facilitate the delivery of traditional broadcast content and multi-screen linear video within a unified virtual architecture. In recent years, we have begun to see more broadcasters prefer cloud-based resources for processing, packaging, and delivering live and on-demand video over traditional on-premises hardware, especially since cloud-based resources have become more affordable.

By Carlos Sanchiz, Manager Solutions Architecture at Amazon Web Services

This is in part because moving live linear workflows onto the cloud has significant benefits for remote production and collaboration, can greatly accelerate content production and delivery, and provides a platform for launching innovation faster. By using the cloud, all of these benefits can be attained with lower initial and operational costs. An example of this is the FOX network, which has opted for AWS cloud services across all its channels with the aim of achieving a more efficient distribution of its sports, news, and entertainment television content to multi-channel video programming distributors, more than 200 affiliate stations and over-the-top (OTT) providers.



FOX’s configuration readiness allows its teams to react faster to market changes, while making content more accessible to a wider audience. As cloud adoption by broadcasters such as FOX and others increases, technological innovation accelerates, laying the groundwork for the broadcasting industry to move the entire video channel to the cloud and gain significant efficiency along the way. As in other industries, in the audiovisual creation and distribution sector we are also seeing strong growth in the use of machine learning and artificial intelligence, as they allow these companies to simplify many tasks. For example, broadcast and OTT live-stream monitoring service providers perform a large number of quality control checks in order to avoid errors. These range from low-level signal errors to high-level issues such as content errors. Traditional live media analysis software focuses on

signal-level quality checks. High-level quality checks, such as verification of program content, subtitles, or audio language, are performed by human operators who constantly monitor the broadcast stream for problems. As the number of broadcast video streams increases, it becomes difficult and costly to expand the manual monitoring effort to support more channels and programs. The latest advances in artificial intelligence can automate many of the high-level monitoring tasks that were previously fully manual. With the help of AI-based detections, human operators can focus on higher-level tasks, respond to problems faster, and monitor more channels with higher quality. Amazon Rekognition uses AWS AI services to analyze the content of a live video stream and can perform a range of near real-time monitoring checks.

However, the mass media have not only used these technologies for internal organization purposes, but also in their broadcasts. An example is what Sky News did during the wedding of Prince Harry and Meghan Markle. Unlike sporting events, whose broadcasting rights are subject to licenses carefully granted for each country, a public event like a royal wedding allows multiple broadcasters to compete for a higher audience share, pushing each broadcaster to offer more value in terms of both quality and innovation. For Harry and Meghan's wedding, Sky News harnessed the power of AWS artificial intelligence to create “Sky News Royal Wedding: Who's Who”, a live streaming application, based on machine learning, that allowed viewers to identify the wedding guests and participants in real time. The live streaming experience allowed users to watch the arrival of guests up close, with the



machine learning infrastructure to detect attendees in real time as they appeared on the screen. Half a million people used this feature in more than 200 countries through mobile devices and web browsers.
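As an illustration of the kind of API behind such an application, here is a minimal sketch of Amazon Rekognition's celebrity recognition running on a single extracted video frame. Sky News' actual production pipeline (frame sampling, caching, the viewer-facing UI) is, of course, far more involved, and the file name and region below are assumptions for the example.

```python
# Minimal sketch: celebrity recognition on one extracted video frame with
# Amazon Rekognition (boto3). A real pipeline would sample frames from the
# live stream and merge results over time; this shows only the core API call.
import boto3

rekognition = boto3.client("rekognition", region_name="eu-west-1")

with open("frame_0001.jpg", "rb") as f:      # hypothetical extracted frame
    image_bytes = f.read()

response = rekognition.recognize_celebrities(Image={"Bytes": image_bytes})

for celeb in response["CelebrityFaces"]:
    # Each detected face comes back with a name and a match confidence score
    print(celeb["Name"], round(celeb["MatchConfidence"], 1))
```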

Sky News designed and created the machine learning functionality in just three months. The team built the service using cloud infrastructure for speed and growth, as well as to ensure access to scalable machine learning capabilities that are critical for the core functionality of the real-time guest identification application. The project combined AWS cloud video infrastructure with GrayMeta's data analytics platform and UI Centric's interactive user interface design. In this case, the Amazon Rekognition image and video analysis service was also used to identify celebrities in real time and tag metadata with related information, integrating the user experience in collaboration with UI Centric.

In addition, at AWS we have clients that use cloud technology applied to sports broadcasts, such as the Bundesliga, Formula 1 or the NFL, among others. For some time now, AWS has been working together with the Bundesliga to offer viewers – through cloud technology – a deeper and more analytical vision of what is happening on the pitch. In fact, just a few weeks ago, AWS and the Bundesliga announced three new match-related pieces of data (Match Facts).

Match Facts are generated by collecting and analyzing data from live match video streams as they are transmitted to AWS. Fans will see this information in the form of graphics during broadcasts and on the official Bundesliga app during the 2020/2021 season and beyond. These advanced statistics help the public better understand things like the strategy involved in making decisions on the field and the probability of a goal per shot. These three new pieces of match data better show the action on the field and provide spectators, coaches, players, and commentators with visual support to analyze a team's decision-making. The growth of advanced statistics in sports has led to a deeper understanding of in-game strategies and a better comprehension of athletes' performance. The new match data are: Player Most Pressured, which highlights how often a player experiences significant pressure during a match; Attack Zones, showing fans where their favorite team is attacking and from which side of the field they are most likely to score; and Average Positions – Trends, which shows how changes in a team's tactical formation can affect the outcome of a match. They join those already known: Speed Alert, Average Positions and xGoals. Thus, Bundesliga fans already have six Match Facts that enhance their match viewing experience.
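The "probability of a goal per shot" mentioned above is the familiar expected-goals (xG) idea: each shot is assigned a probability, and those probabilities sum to a team-level figure. A toy illustration follows; the per-shot probabilities are invented and the Bundesliga's actual model is proprietary.

```python
# Toy expected-goals (xG) aggregation: per-shot goal probabilities sum to a
# team-level xG figure. The probabilities below are made up for illustration.
shots = [
    {"minute": 12, "p_goal": 0.08},   # long-range effort
    {"minute": 34, "p_goal": 0.35},   # one-on-one with the keeper
    {"minute": 71, "p_goal": 0.76},   # penalty
]

team_xg = sum(shot["p_goal"] for shot in shots)
print(f"Expected goals from {len(shots)} shots: {team_xg:.2f}")
```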

Another of the sports organizations that AWS works with is Formula 1. As we all know, F1 is a battle between the best drivers in the world, but it is also a battle between some of the most innovative engineers. No other sport has been so dynamic in its evolution and adoption of new technologies. That is why AWS is proud to be the official cloud and machine learning provider for Formula 1. Furthermore, Formula 1 has been working with AWS for some years to create a statistics system based on artificial intelligence and data analytics, "F1 Insights powered by AWS". This system analyzes 70 years of historical data together with information collected in real time to create a series of statistics on the performance of drivers and cars on the track. This statistics system helps, on the one hand, drivers and teams, since it gives them a new view of and data on their work; and on the other, it provides both F1 fans and spectators with a new experience to better understand what is happening on the track during broadcasts. At present, data are being provided on: Starting Speed, Predicted Pit Stop Strategy, Pit Scenario, Battle Forecast, Pit Battle Strategy, and Tire Performance.

Last, it is worth highlighting the case of the NFL, which chose AWS to make the most of all its data through sophisticated machine learning analysis. The NFL uses the power of AWS to create new statistics for game broadcasts and to improve player health and safety by predicting potential injuries, thus creating a new experience for fans, players, and teams, all in real time. The NFL has created a number of new AWS-based statistics, each based on different data points and shown during game broadcasts. The most important among them are: expected rushing yards, expected yards after the catch, and route classification. 


ADVERTORIAL

Take on All Remote Production Workflows with Matrox Monarch EDGE

The essential platform for remote production (REMI) and at-home productions, the cutting-edge Monarch EDGE encoder and decoder technology delivers exceptional-quality, low-latency 4:2:2 10-bit video at resolutions up to 3840x2160p60 or quad 1920x1080p60 over a standard one Gigabit Ethernet (GbE) network. Monarch EDGE includes next-generation connectivity – including 12G-SDI and SMPTE ST 2110 over 25 GbE connections – while supporting the popular MPEG-2 TS, RTSP, and SRT streaming protocols. Equipped with tally signaling and talkback functionality for two-way communication between on-site and in-studio personnel, Monarch EDGE allows live event producers to optimize resources by keeping more staff and equipment in-house while providing uncompromised, broadcast-grade viewing experiences.
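To put the "UHD over one GbE" claim in perspective, a rough calculation (our own arithmetic, not a Matrox figure) shows why compression is what makes it possible: uncompressed 2160p60 4:2:2 10-bit video needs roughly 10 Gb/s, while an H.264 contribution encode fits into a fraction of a gigabit link. The 50 Mb/s encoded rate below is an assumed example value.

```python
# Back-of-the-envelope bandwidth check for 3840x2160p60, 4:2:2, 10-bit video.
# 4:2:2 sampling averages 2 samples per pixel (Y plus half-rate Cb/Cr).
width, height, fps = 3840, 2160, 60
bits_per_pixel = 2 * 10                       # 2 samples/pixel x 10 bits each

uncompressed_bps = width * height * fps * bits_per_pixel
print(f"Uncompressed: {uncompressed_bps / 1e9:.1f} Gb/s")     # ~10.0 Gb/s

encoded_bps = 50e6                            # assumed H.264 contribution bitrate
gbe_capacity = 1e9
print(f"Encoded at {encoded_bps / 1e6:.0f} Mb/s uses {encoded_bps / gbe_capacity:.0%} of a GbE link")
```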


Big productions, minimal resources
Monarch EDGE extends the production studio by transporting up to four synchronized HD-/3G-SDI camera feeds or a single 12G-SDI signal – in both native progressive and interlaced video formats – with glass-to-glass latencies as low as 100 ms. As a result, these portable-yet-powerful appliances help broadcasters minimize the number of production staff and resources required in the field, making live, multi-camera event productions more efficient and highly affordable. For events with higher-density camera requirements, the Monarch EDGE's compact design ensures that two units fit into a single 1RU rack space to easily meet the needs of any-sized production. Event producers can select between two encoder options: the 4:2:0 8-bit H.264 encoder version for programs destined for web or over-the-top (OTT) delivery, or the 4:2:2 10-bit

H.264 encoder model for demanding, broadcast-quality productions.

Webcasting encoder for an ever-evolving media landscape
The versatile Monarch EDGE encoder provides broadcasters with robust, low-latency, and dynamic H.264 encoding capabilities packaged in a compact, low-power, and portable appliance. Programs destined for web or over-the-top (OTT) delivery can leverage the 4:2:0 8-bit encoder version, while the 4:2:2 10-bit capable Monarch EDGE encoder version is ideal for demanding, broadcast-quality productions. "The broadcast industry has seen an increased demand for a wider range of both mainstream and niche content, and the Monarch EDGE platform provides broadcasters and live event producers with an efficient means to deliver more diversified, high-quality programming," said Alberto Cieri, senior director of sales and marketing, Matrox Video. "With Monarch EDGE enabling users to centralize many productions, resources can be redistributed to offer more multi-camera events featuring new, never-before-seen camera angles for more immersive and engaging viewing experiences. Monarch EDGE provides a tremendous amount of remote production flexibility, whether it's for sports, concerts, entertainment events, and more."

VizrTV Goes Live on Location(s) with Matrox Monarch EDGE
How do you make two people thousands of kilometers apart appear side-by-side in a virtual studio instantaneously? Teleportation technology. While it may sound like science fiction, Vizrt regularly brings people together from separate locations across the globe with help from a pair of Matrox® Monarch EDGE 4K/multi-HD encoder and decoder devices when producing its own VizrTV online series. The Monarch EDGE encoder and decoder pair, alongside Vizrt's software-defined visual storytelling tools for broadcasters, allowed Vizrt to 'teleport' resident experts Chris Black, Vizrt's Norway-based Head of Brand and Content Team, and Gerhard Lang, Vizrt's Austria-based Chief Technology Officer, into a virtual studio in Vomp, Austria – along with guest experts from anywhere in the world. With the ability to seamlessly execute both remote and cloud-based productions during different episodes, the Monarch EDGE has become Vizrt's go-to broadcast-quality encoding and decoding appliance for delivering low-latency, strikingly realistic virtual interviews.

No travel? No problem.
Due to COVID-19 travel restrictions, what began as a means of internal communication to unite and inspire more than 700 Vizrt employees in 30 offices around the world quickly became VizrTV: an external platform focused on issues currently impacting the broadcast industry while demonstrating how Vizrt's broadcasting tools can help solve them in real time. While broadcasters around the world were unable to return to their broadcast studios and forced to report the news from their own homes, Vizrt's in-house and guest experts found themselves in the exact same predicament. To show the world how Vizrt's software-defined


tools can help broadcasters create a more realistic live reporting experience for their viewers, Lang did not want to simply talk about the tools. Instead, he wanted to demonstrate their capabilities during live interviews. “We wanted to create a talk show format that looks natural and resembles what viewers are accustomed to seeing on TV,” said Lang. “Zoom-style interviews may become the norm, but with our technology, we can do better. We wanted to inspire our customers by showing them what is possible. How do you bring two storytellers together in one place? This is the question we were trying to answer when we decided to use the Monarch EDGE encoder and decoder devices. The 4:2:2 10-bit video that the Monarch EDGE provides was critical to allow us to achieve the quality of keying we needed to make the production truly convincing.”


Matrox Monarch EDGE enables Vizrt to deliver low-latency, strikingly-realistic virtual interviews

The Monarch EDGE encoder and decoder pair met all of Vizrt's requirements for a high-performance solution that would allow them to create the image of multiple individuals having a seamless interview in one location despite their physical distance from one another. Vizrt's 'location' for VizrTV is a virtual studio, exquisitely rendered by Vizrt's Viz Engine 4.1 compositing, real-time 3D rendering, and video playout platform. An essential tool to realizing this illusion is the Monarch EDGE, which is able to deliver 4:2:2 10-bit H.264 streams that translate to flawless green screen compositions. Furthermore, SRT support on the Monarch EDGE encoder and decoder pair allows for ultra-low-latency video transport over the public Internet – and ultimately – a smooth and

realistic viewing experience for those watching the VizrTV live stream.

Bringing storytellers together
For VizrTV episodes featuring a one-on-one dialog between Black and Lang, cameras capturing each individual are stationed in separate studios in Bergen, Norway, and Vomp, Austria, respectively. During these productions, the Monarch EDGE encoder was housed in the Norway studio, while the Monarch EDGE decoder was located in the Austria studio. SDI cameras send HD video along with embedded audio to the Monarch EDGE encoder –



which encodes at 6 Mbps and streams over SRT – with delivery latencies that allow for a natural flow of conversation. The Monarch EDGE decoder receives the stream, then outputs an HD-SDI feed with audio to the Viz Engine 4.1 system housed in the Austria studio. A second SDI camera in the studio sends SDI video with embedded audio to the same Viz Engine 4.1 system, which generates the final composition for delivery to a Viz Vectar switching system. This system provides cuts between clips and live, and encoded program delivery to Vimeo, Facebook Live, and LinkedIn, as well as proxy feedback to Black for program monitoring on his laptop. Although the Monarch EDGE encoder and decoder pair does have an independent, bi-directional analog audio circuit available to users, Black and Lang opted to use Microsoft® Teams for their real-time audio communications during these broadcasts.
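For readers who want to experiment with a comparable software-only SRT contribution path, the sketch below uses ffmpeg driven from Python rather than the Monarch EDGE hardware described above. It assumes an ffmpeg build with libsrt, and the source file, host name, port and bitrate are placeholders.

```python
# Software-only stand-in for a 6 Mbps SRT contribution link, for experimentation.
# Requires ffmpeg built with libsrt; this is not the Monarch EDGE workflow itself.
import subprocess
import sys

def run(role: str) -> None:
    if role == "send":          # run at the remote site
        cmd = ["ffmpeg", "-re", "-i", "camera_feed.mp4",            # placeholder source
               "-c:v", "libx264", "-b:v", "6M", "-c:a", "aac",
               "-f", "mpegts", "srt://decoder.example.com:9000?mode=caller"]
    else:                       # run at the studio / decode side
        cmd = ["ffmpeg", "-i", "srt://0.0.0.0:9000?mode=listener",
               "-c", "copy", "received_program.ts"]                 # save the incoming stream
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run(sys.argv[1] if len(sys.argv) > 1 else "receive")
```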

Productions involving three participants follow a similar setup, with an encoder required at each remote location. A single Monarch EDGE decoder can take up to four streams. In an example of a three-participant VizrTV production, Dr. Andrew Cross, President of Global Research and Development for the Vizrt Group, appeared on screen while being filmed in San Antonio, United States, alongside Lang and Black. The Monarch EDGE encoder was used to capture and encode the feed from the camera in the U.S. studio and transport it to the Monarch EDGE decoder in Austria. There, the device was also used to decode feeds coming from the other Monarch EDGE encoder in Norway. The end result was three individuals appearing side-by-side-by-side in a realistic virtual studio environment rendered by the Viz Engine 4.1. Despite participants being located in three separate cities across the globe, VizrTV was able to deliver a realistic live interview.

More than just great technology
The Monarch EDGE encoder and decoder pair's ability to encode multiple 4:2:2 10-bit video feeds and transport them using SRT over the public Internet with ultra-low latency has made it easy for Vizrt to create a realistic virtual studio setting for VizrTV, in which multiple individuals can come together from separate locations and in real time. “We wanted the end result to be a seamless viewing experience for the people watching,” said Lang. “We didn't want the viewers to say, 'Wow, this is great technology.' Instead, we wanted seamless interaction with the video so viewers can focus on the story. We could not have accomplished that without the Monarch EDGE encoder and decoder alongside our own solutions.” Lang noted that Vizrt plans to leverage the Monarch EDGE encoder and decoder for its upcoming VizrTV productions. “We are looking forward to being able to use the Monarch EDGE devices in future VizrTV episodes and more,” he said. “We are eager to see what else we can accomplish with this dynamic combination.” 


TEST ZONE


PANASONIC LUMIX DC-BGH1
Adaptability and versatility cubed
Making true the well-known maxim “less is more”, simplification has been taken to the limit to offer the most possibilities with the fewest constraints in this new camera, defined as a 'box-style' multipurpose model.
Lab test performed by Luis Pavía

It certainly seems like an exercise in innovation to us, and with excellent results, we must add. When it seems to us that everything has already been invented, a new space for innovation always ends up being discovered. And this is one of those cases that we are bringing to our pages today.

A camera that has much less than what we would have dared to imagine in the lightest of our daydreams. So much so that as soon as we have it in our hands, we wonder ... and what now? Yet it seems to have what it takes to meet the needs of a huge pool of potential users and customers. Had we been asked some time ago what the

use of a camera without any type of viewfinder or even a minimal information screen would be, it is likely that only security cameras would have sprung to mind. Well, imagine a camera that strips the essentials back to much less than what most of us would consider the essential minimum. And we will have in our


hands a Panasonic Lumix DC-BGH1. We feel that the GH name links it to the siblings with which it shares a sensor, while the B would refer to the “Box” format of its design. In fact, we will make some references to the well-known GH5s, a reference model in the Panasonic Lumix range, with which the sensor is shared. Although both are placed within the same price range, their configurations, capabilities, and standard components are quite different.


Very schematically, it is about housing an M4/3 (micro 4/3) sensor with its electronics in a box with a bayonet in the front, a compartment for the battery behind, a multifunction dial and 9 strategically distributed buttons, the entire set of connections on the back with protection caps, and 11 threads to attach accessories. A box-format camera with good cooling, in an aluminum and magnesium alloy casing that is virtually a cube (93 x 93 x 78 mm) and

lightweight (545 grams excluding optics or battery). Who is a camera like this aimed at? Paradoxically, at a huge number of potential customers and users. And this is one of the keys that make it a distinct piece of equipment. It has been conceived to be customized only with those elements that we really need, since it includes only the strictly essential ones. So much so that it is not even supplied with any type of



battery, which in this case makes sense.

It is true that we will always have to have the elements we need, but only those required for each production, hence its enormous versatility. Although it is important to note that we will nearly always need to add additional components to shape up our tool. But do not let this throw us off. In the same way that it is nowadays common to purchase cameras without any type of optics, why not do the same with other elements that can often be unnecessary?

We will then go into a detailed outline of its features, highlighting those that mark distinctive aspects, but we would like to give you a hint of our personal conclusion: this camera's design seems to cater to customization capabilities and minimalism as fundamental elements of its starting concept. Since often the equipment that strives to satisfy many needs ends up being ineffective due to

the number of unnecessary elements fitted, the approach in this case seems to have been just the opposite: integrate only the essentials, so that each client completes a customized design exactly tailored to their needs. The first aspect to determine when choosing a piece of equipment to tackle any job is the tool's suitability for the relevant purpose. And in cameras there are a few key concepts that determine such adequacy. The sensor


is usually one of the key elements, because its size determines the appearance of the final image – 'the look', depth of field, dynamic range, noise – but it also determines what kinds of optics are available. Size and usability are other decisive aspects that will be determined by shooting conditions: comfort of use when handling it by hand, size for placing it in difficult places, or lightness for putting it on a hot head or on a drone. We must also consider the type of files it provides and its connectivity to adapt it to different situations. These are not the same for making films, documentaries, reports, live shows or streaming, or for any of the other facets that we may need to face. In this case, the camera is designed as a video camera, so there are no limitations as to recording time. It is built around the same 10.2-megapixel M4/3 Digital Live MOS sensor as the GH5s, 17.3 x 13 mm in size, capable of working with DCI 4K and


UHD resolutions up to 60p, Full HD up to 240p, or anamorphic with a 3328x2496 resolution, also up to 60p. And as such, it has its two classic recording indicators on the front and back of the body. Since the size of the sensor is not particularly large, keeping a relatively low number of pixels clearly favors

performance and results in low-light conditions. It is the way of achieving a proportionally large pixel size in a relatively small sensor. Like the GH5s, it internally records 10-bit DCI 4K in 4:2:2 at 30p, or 4:2:0 at 60p. And when it comes to output, it handles the same samplings and frame rates over HDMI, improving on the GH5s in



that it comes with Genlock synchronization and SDI output, although limited to 3G/HD; furthermore, it features a standard BNC connector without an adapter for time code input/output. To finish with the recording possibilities, let us simply add that in HD formats frame rates of up to 240 images per second can be recorded.

Internally, the highest recording quality is achieved with 4:2:2 10-bit sampling in .mov containers, All-Intra compression, the H.264 codec, and 400 Mbps data rates. The next step down is 4:2:0 at 10 bits, Long-GOP compression and the H.265/HEVC codec at 200 Mbps. There are also different combinations with 4:2:2 10-bit or 4:2:0 8-bit sampling, Long-GOP compression and the H.264 codec at rates between 100 and 150 Mbps. Last, and already in MP4 containers, there are options with data rates from 20 up to 100 Mbps.
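A quick storage estimate for the top-quality mode just described (our own arithmetic, assuming the nominal 400 Mbps video rate and ignoring audio and container overhead):

```python
# Rough storage estimate for the 400 Mbps All-Intra recording mode.
# Audio and container overhead are ignored for simplicity.
bitrate_mbps = 400
megabytes_per_second = bitrate_mbps / 8              # 50 MB/s
gigabytes_per_hour = megabytes_per_second * 3600 / 1000

print(f"{megabytes_per_second:.0f} MB/s ≈ {gigabytes_per_hour:.0f} GB per hour of footage")
```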

Continuing with the sensor, it is Dual ISO with bases of 400 and 2000. The standard working range is 160 to 51200, which extends to 80 to 204800 when expanded. Panasonic assures us that the exposure latitude reaches 13 f-stops using the V-Log L curve. This one-stop improvement over the figures of its GH5s sibling is due to enhanced processing of highlights in the V-Log L curve, which is actually an improved version of the previous one. Thanks to the combination of both features, images with low noise levels and large working margins can be achieved in order to get spectacular results once the images are correctly color-graded. One of the features that provide great versatility is the ability to record internally and externally simultaneously at maximum quality, while also keeping the SDI output active for monitoring. Although through SDI we will only obtain an HD signal, it is noteworthy that we have three data streams available simultaneously: internal recording and HDMI output at maximum quality, up to 4:2:2 at 10 bits in 30p (or 4:2:0 at 10 bits in 60p), and the SDI signal for monitoring. We also find very interesting the possibility of loading and assigning different LUT curves to the HDMI and SDI outputs


simultaneously, thus facilitating workflows. This allows us something as sophisticated as recording internally without LUT, externally with a LUT applied through HDMI, and monitoring with a different one through SDI. Does it make sense to record with one LUT and


monitor with another? We understand that it does, so as to adapt the display to the different types of monitors that we may be using in each case. In these same outputs it is possible to rely on histogram and zebra superposition, although we do not have a

waveform monitor or a vectorscope. This is an aspect to consider carefully. If we are going to use an external recorder, the ideal configuration will be one that allows us to have these tools integrated into said external monitor/recorder, in this



case leaving the HDMI signal completely clean of overlaid data. The absence of these aids has not seemed to us a serious inconvenience, but we have missed the possibility of some kind of Raw format. At least in the firmware version on which we ran our tests, there was no possibility of obtaining this type of Raw data; the best result is obtained using the correct base ISO setting and the V-Log L curve. Since its GH5s sibling does have this possibility, we dare to dream that this functionality will be included in a future firmware update. It has a wide range of color profiles, such as Standard, Vivid, Natural, Landscape, three Monochrome variants, two Cinelike modes, 709, V-Log L, Hybrid Log Gamma, and several photo variants. The Cinelike and 709 profiles are the most suitable if we want to obtain usable results directly, without getting into color grading processes. A wide range of parameters can be

configured in most profiles, such as 'Contrast', 'Highlights', 'Shadows', 'Saturation', 'Shade', 'Hue', 'Filter Effects', 'Sharpness’, 'Noise reduction', 'Base ISO and ISO setting', and 'White Balance (WB)'. Although, of course, not all parameters are available in all profiles. For internal recording there are two SD card slots (UHS-II), which offer the usual behavior for relay recording (jumps to the next card as they fill up), duplicate copy (the same content on two cards), or selective (saving video in one and photos in another). Relay recording coupled with the absence of limitations as to duration of the clips to be recorded allow for uninterrupted recording of sequences of theoretically infinite duration, as long as memory cards are replaced alternately and power is kept on. Precisely, on the power side, we find another series of unique features. Available batteries advertise capacities of 43, 65 and 86 Wh. Considering

that the camera's consumption is 7.5 W, the declared running time of more than 500 minutes with the largest-capacity battery seems completely reasonable, although the price of this battery is not exactly cheap. There is also a dedicated 12V power input. But what is a completely novel feature this time is the possibility of power supply via USB-C PD (Power Delivery) or PoE+ (Power over Ethernet). In this way, especially in use cases oriented towards streaming, remote production or IP environments, installation and handling are greatly simplified, with a single cable being enough to obtain the signal from the camera while supplying power to and remotely controlling the unit.
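The running-time claim is easy to sanity-check with a rough estimate, assuming the quoted 7.5 W draw stays constant:

```python
# Sanity check of the quoted running time, assuming a constant 7.5 W draw.
battery_wh = 86            # largest advertised battery capacity
power_draw_w = 7.5

runtime_minutes = battery_wh / power_draw_w * 60
print(f"Theoretical runtime: {runtime_minutes:.0f} minutes")   # ~688 min, comfortably over 500
```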


Regarding its streaming capabilities, by means of the LUMIX Webcam software it is possible to use the unit as a video-conference camera with the most popular platforms on the market, such as Facebook Rooms, Google Meet, Line, Teams, OBS Studio, Skype, Webex, Whereby and Zoom. Camera control is one of the aspects worth pondering and delving into. Because, although we have a multifunction dial and a set of 9 buttons, only 4 are customizable: too few to use as shortcuts for all the necessary functions during certain shoots. In addition, there is no screen of any kind on the camera body – neither a monitor to view the images we are recording, nor a minimal auxiliary screen with status indications. Is this a good thing? As always, the correct answer is "it depends." If our expectation is to use it as a conventional-style recording camera in film or video-type environments, we will surely miss it. But if we intend to use some kind of external monitor/recorder, the missing screen will be irrelevant. And it will be


even more unnecessary if we intend to use it with a gimbal or hang it from a drone. What is more, in these cases we will be glad of the weight saved on any of those rigs. To manage control, we have two possibilities via software: LUMIX Tether for complete control from a computer, and the LUMIX Sync app to control it from a mobile phone. Although not yet available at the time of writing, an SDK has been announced to open the way for third-party remote control and management applications. We think that one of the uses in which this camera will excel is shooting situations in which the equipment is far from the operator's hands. And we reached this conclusion not so much because of the absence of a viewfinder, but because of the absence of a control screen. The only way to fully control the camera is

by external means, through the dedicated 2.5mm mini-jack connector, the USB-C port, through a Wi-Fi or



Bluetooth connection; or via an Ethernet port. In the same way, these are also the means available to view the camera's

configuration parameters. And in the case of Wi-Fi and Bluetooth, also to view the content being captured.

This does not mean that the camera cannot be used by hand, a perfectly viable situation once we have fitted the necessary


elements such as some type of monitor and some type of handle or support. The same goes for placing it in complicated or atypical places: its compact size and lightness allow us to shoot with it placed in remote spots, such as a corner inside a car, on a pole or in any other traditionally impossible place that our imagination may suggest. It is true that with a mere mobile phone we already have all the necessary functionality for control, parameter monitoring and a display screen. But this is a feature that we must carefully weigh when thinking about our final configuration. And, depending on the situation, the delay of a wireless transmission – minimal but nonetheless still there – is something to take into account. Especially in situations such as manual focusing, this small delay can be an added difficulty. Although, on the other hand, this also provides us with


other advantages. Thanks to the aforementioned LUMIX Tether software, it is possible to control up to 12 cameras simultaneously from a single computer over the network. In relation to the autofocus system, it seems that both the speed of the continuous autofocus and the contrast-detection system with face and eye recognition and tracking have been improved compared to what we find

in the GH5s. Focus can also be controlled manually through the LUMIX Sync and Tether applications, but only when using certain optics. It is important to bear in mind at this point that a viewing system connected over Wi-Fi or Bluetooth adds a certain delay. Another outstanding aspect, which in this case has a favorable impact on prolonged and intensive



use, is the cooling system, for which an internal air flow is established through forced ventilation from side to side of the camera's body, which in turn prevents temperature problems and keeps the camera operational for long periods of time. The grille on one side acts as an inlet for fresh air from the environment, while the one on the opposite side is responsible for venting the hot air from inside. The behavior of the fan, its automation options and the different speeds are configured from the menu, allowing us to adapt its operation to the needs of the different circumstances of use. The menus are very similar to those found in other LUMIX cameras, with the pleasant surprise of keeping the same filtering functionality. This is aimed at facilitating the selection of parameters for use by restricting the visible options as we limit frame rates, resolution, codecs, variable frame rate and Hybrid Log Gamma.

As for audio, there is no microphone built into the body: just the headphone and 3.5mm mic-in connections – useful for general purposes, or to have a reference signal. If we want to record higher-quality audio with our images, we need the DMW-XLR1 module, which provides us with XLR inputs for the two audio channels. There are multiple scenarios in which the audio is managed completely independently from video, so this does not seem like a problem, and we will only have to take care of this issue when, for whatever reason, the highest-quality audio must be generated by the camera together with the recorded files or live signals. Audio is always recorded at 24 bits in LPCM, at 96 or 48 kHz sample rates. The difference in quality will vary depending on the various features of the equipment capturing and transmitting the audio through one type of connection or another. Said adaptor is the same

as for other camera models. It provides us with two standard-size XLR connectors with the usual input controls for line, mic, or powered mic signals, as well as independent lowpass filters for each channel. The module's connection to the camera is made internally through the flash-type mount itself. In this case, audio would be recorded synchronously with the video takes on the aforementioned UHS-I / UHS-II SD cards, or embedded in the HDMI, SDI and Ethernet signals. As for the mount, it is the Panasonic-Lumix standard one for M4/3 lenses. We have the full range of native lenses available for this mount, plus all others that we may want to use with the corresponding adaptors. We will not go into detail concerning adaptors, as that is outside the scope of this article. But there are a couple of aspects to highlight related to the mount. On the one hand, in the short space between the mount and the sensor there is no


neutral density ND filters at all, which will force us to have them in front of the optics if we need to use them. And on the other, the camera offers us the possibility of using anamorphic optics, because we have different aspect correction values so as to obtain the image already corrected without the need for additional subsequent processes. In terms of build quality, finish and touch, our impression is that there is a good balance between solidity and reliability. Given the peculiarities of this camera, which we mostly find positive, there are some important details to consider when setting it up for each new assignment. Beyond the traditional optics and batteries, it will now be necessary to determine whether there is need for different accessories such as a screen, external recorder, a handle, or some type of remote controller. But it will even be interesting to establish a well-thought set-up of functions for the 4


configurable buttons, for instance to control the iris (speed is assigned by default to the dial) or the shooting function for still photos, thus determining which ones will be assigned to the physical button, or if they will be controlled from any of the already described remote

control means. To conclude, another point on which it is very close to its sibling GH5s is the price. This makes the decision between one or the other far more dependent on what surrounds them, or on what we will be using them for. Basically, the GH5s is a mirrorless photo camera that records video. The DC-BGH1, on the other hand, is a video camera specially designed to adapt to the most varied conditions of use, and it even delivers slightly better video results than its sibling.

We think this is a camera that is versatile enough to perform well anywhere: from film and broadcast environments in live multi-camera productions, to videoconferencing or drone operation. Small, lightweight, and versatile, it is adapted to be useful in the most varied circumstances. Furthermore, it is suitable for solving situations in which other models, due to their size, weight, or remote-control limitations, would be less suitable. All these kinds of considerations will be decisive when it comes to getting the most out of this camera, especially in this new creation environment, where more and more content is being generated via streaming and/or remote camera operation.

Among the excellent news that we have kept to close our analysis, it is confirmed that the camera carries the 'Netflix approved' seal, and is therefore classified as suitable for creating content for that platform. This alone is already a guarantee of significant levels of performance and quality – although it should not lead us to forget that no tool will ever go beyond where our knowledge and skills can take it. 


