TM Broadcast International 81, May 2020


Chase Center, photo credit Jason O'Rear



From HD to UHD, more than just a leap



DR: 15+ short questions with Mikkel Müller, CIO.

Rakuten TV


State-of-the-art technology ready for an ambitious 4K HDR catalogue


Test Zone: Duplication, recording and storage

Chase Center: A state-of-the-art, future-proof venue


Editor in chief Javier de Martín

Creative Direction Mercedes González

Key account manager Susana Sampedro

Administration Laura de Diego

Managing Editor Sergio Julián

Life after NAB


TM Broadcast International is a magazine published by Daró Media Group SL. Centro Empresarial Tartessos, Calle Pollensa 2, oficina 14, 28290 Las Rozas (Madrid), Spain. Phone: +34 91 640 46 43. Published in Spain. ISSN: 2659-5966

EDITORIAL

The global paradigm caused by the social, economic and operational crisis arising from the Covid-19 pandemic has profoundly affected our industry. We cannot escape this reality, which is defining our present and future. Both supranational associations and the governments of each country have implemented what they consider the best measures to safeguard the health and economic solvency of their societies. At the same time, manufacturers, production companies, broadcasters, news agencies and the media have demonstrated their commitment to tackling this extraordinary situation with determination.

It will take time to recover, but step by step we are firmly moving towards our goal. The horizon is getting closer, as evidenced by the numerous product launches and strategic deals announced in recent weeks. Fortunately, constant reinvention is part of our DNA. Webinars stand as meeting points for shared knowledge; web events create new business spaces in the cloud; and radio and television stations continue their activity in unprecedented circumstances, progressing with solvency and conviction.

TM Broadcast International continues to demonstrate its commitment to information and to all the companies that are part of our industry. In this issue, you can find an interview with Mikkel Müller, CIO of the Danish corporation DR, who made room in his busy agenda to tell us all about the news and future of the organization; we approach the OTT world, so important in this period, with Rakuten TV; we bring you the most relevant releases of these weeks in our special "Life After NAB"; we show you all the details of the NBA's technology-benchmark stadium, the Chase Center; and much more.

The situation is tough. But we are absolutely convinced that, with effort and collaboration between all parts of the broadcast world, we can win this battle.



Atomos AtomX SYNC module for Ninja V delivers wireless timecode for trouble-free multi-cam and dual system sound

Atomos is now shipping the new AtomX SYNC module, which brings wireless timecode, sync and control technology to the Ninja V HDR monitor-recorder. Available in early May, the module clips in between the Ninja V and your battery using the AtomX expansion system. It connects multiple AtomX SYNC-equipped Ninja Vs with cameras and audio recorders using reliable long-range RF wireless technology patented by Timecode Systems. At the same time, it can bring other devices into the synchronised system via in-built Bluetooth. On iPhone and iPad this works with apps like Apogee's MetaRecorder audio recorder, Mavis Pro camera and MovieSlate8 logging. It is also compatible with professional audio recorders like the Zoom F6 and F8n. Using the same RF technology used on major motion pictures, Ninja V users can now connect multiple AtomX SYNC-enabled units to form a single network over distances of approximately 200m. The Master Ninja V is also able to synchronise with any number of existing Timecode Systems sync devices, which allow connection to other cameras, sound mixers and sources.

In addition to using RF technology, each Ninja V on the network can also guarantee accurate sync via Bluetooth Low Energy (BLE) to additional devices. This has a shorter range of 10-15m line of sight and is designed so users can add up to 6 iOS devices to each AtomX SYNC module. Compatible applications include the popular Mavis Pro video camera app, the Apogee MetaRecorder iOS-based pro audio recorder app, the MovieSlate8 Pro advanced timecode slate and content logging app, and the free iOS timecode slate app UltraSync Blue Slate.


Ross Video updates Ultrix Platform to add more flexibility

The latest version of Ultrix firmware, V4.2, sees the introduction of a new SFP I/O module, ULTRIX-SFP-IO, for the latest generation of Ultrix-FR1, FR2 and FR5 chassis. This new module offers all of the powerful features customers expect from Ultrix (MultiViewers, frame syncs, clean/quiet switching, audio embedding/de-embedding, etc.), but the board features 16 SFP cages for standard video SFPs and 2 AUX ports for video/MADI connections. Ideal for customers with large fiber deployments, UHD systems (where cable distance is important) or where additional HDMI I/O is needed, this new module will be available this month.

In addition, a new optional ULTRISYNC-UHD license is being announced with V4.2, enabling customers to purchase software-enabled assignable 12G frame syncs (with up to half a second of variable audio delay per mono channel). These licenses can "float" inside the frame, allowing customers to move them between inputs as required.

Finally, the company has announced the launch of the Ultricore-TLX Tie Line Control Manager, control software to automatically manage routes between multiple frames. With Ultricore-TLX, single switch commands simplify the operation of routing systems, removing the requirement for an operator to understand the system design or architecture. Ultricore-TLX is an optional license for Ultricore-BCS and offers multi-hop automatic path-finding between all supported router types (Ultrix, NK and many 3rd-party matrices). It enables the seamless routing of both SDI and IP video signals, so customers can build sophisticated hybrid routing ecosystems while maintaining easy control of the system via a DashBoard interface.
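Conceptually, automatic multi-hop path-finding of this kind is a shortest-path search over a graph whose nodes are router frames and whose edges are available tie-lines. The sketch below is our own illustration of the idea, not Ross Video's implementation, and the frame names are hypothetical:

```python
# Illustrative only: Ultricore-TLX's actual algorithm is proprietary.
# Multi-hop tie-line routing can be modelled as a breadth-first search over a
# graph whose nodes are router frames and whose edges are free tie-lines.
from collections import deque

def find_path(tie_lines: dict[str, set[str]], src: str, dst: str) -> list[str]:
    """Return the shortest chain of frames from src to dst, or [] if none."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in tie_lines.get(path[-1], set()) - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return []

# Example: an SDI frame reaches an IP matrix via an NK router tie-line.
frames = {"Ultrix-A": {"NK-1"}, "NK-1": {"Ultrix-A", "IP-Matrix"}}
print(find_path(frames, "Ultrix-A", "IP-Matrix"))  # ['Ultrix-A', 'NK-1', 'IP-Matrix']
```

An operator issuing a "single switch command" only names source and destination; the control layer finds and reserves the intermediate hops.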


Agama launches new version 6.0 of its video analytics suite

This new version 6.0 of the Agama Video Analytics solution supports integrations with leading collaboration platforms, and the new Agama Widget SDK supports data integration from any source into the solution. With the new release, Agama also widens the scope for head-end monitoring and assurance with IP-SDI capabilities, enabling monitoring of baseband video services over IP, including in virtualized and cloud head-ends. This provides single-pane-of-glass transparency in all types of head-end scenarios, making every processing step transparent and verifiable. In the head-end visualization area, a new Penalty Box functionality is introduced to directly show thumbnails of impacted channels, for quick visual confirmation and highlighting of problems.


Nevion's Service Operations Center (SOC) supports and monitors TV 2's IP media network operations

Nevion, provider of virtualized media production solutions, has unveiled that TV 2, the largest commercial television broadcaster in Norway, is using Nevion's Service Operations Center (SOC) to provide support and monitoring of its IP media network operations. Having moved to IP production in 2017 with Nevion's help, TV 2 initially chose Nevion's advanced remote services "in order to leverage the expertise and experience of the SOC's staff" and focus on its own core competencies, including content creation. When the COVID-19 crisis developed, TV 2's decision to virtualize the support and monitoring of its IP media network operations proved to be "invaluable as very few staff would have been able to be on-site to maintain the network", according to the press release.

Nevion launched its SOC in Gdańsk, Poland, in September 2018, in response to growing demand from broadcasters for additional help with the day-to-day management of their media network solutions. The SOC is now Nevion's main operational unit for handling 1st-line remote support, backed by its 2nd- and 3rd-line support teams across the world.

Terje Amundsen, Head of Content Technology, TV 2, said: "We were an early mover to IP in the facilities, which has enabled us to gain unprecedented flexibility in our production. As a broadcaster, though, we need to concentrate our resources and efforts on the creation and delivery of compelling programming for our viewers, so it made sense to use Nevion's experts to monitor and support our IP media network. Nevion's SOC has enabled us to identify and fix potential issues before they occur, which has been extremely valuable to us. And of course, during the COVID-19 lockdown, having Nevion looking after our IP media network remotely has made our task of adapting to the new situation much easier."

Petter Kvaal Djupvik, Chief Operating Officer, Nevion, concluded: "Over the past few years, Nevion has invested heavily in its ability to deliver projects and support customers remotely, and this has ensured we can continue to serve our customers even when it is not possible to be on site. Our SOC, in particular, has proved to be of great help to our customers, such as TV 2, which we are obviously very pleased about."


CP Communications equips IRONMAN Group's latest streaming studio and control room

CP Communications, a provider of solutions and services for live event productions, has completed the systems integration of a custom-designed streaming media studio and control room that enables The IRONMAN® Group, a Tampa-based sports event management company, to deliver live coverage of its branded multisport and marathon events over-the-top (OTT) via its website and Facebook Watch. Owned by the Chinese Wanda Sports Group, The IRONMAN® Group produces more than 235 race events across 55 countries worldwide. These include the iconic IRONMAN® Triathlon, which combines swimming, biking and running into a single long-distance contest; the Rock 'n' Roll Marathon Series; and more. Besides bonded cellular for IP-based transmission, the design leverages Dante® to configure,


Ironman Control Room

manage, and route broadcast-quality audio over IP; and NDI®, an IP-based video networking platform that extends the HD-SDI workflow. A digital media room supports production of live-stream video podcasts and on-demand training videos for IRONMAN athletes. IRONMAN had already purchased some equipment before engaging CP Communications, including a 4-input NewTek TriCaster Mini video switcher and an analog Mackie audio board.

IRONMAN required a more sophisticated technical infrastructure to accomplish its production goals, and CP expanded IRONMAN's capabilities to maximize those capital investments. Within two months and on a tight budget, CP harnessed the combined experience and technical expertise of its designers, engineers and systems integrators, who built out the studio and control room in two adjoining offices at IRONMAN's Tampa headquarters.


The control room now includes the following gear to enhance the production workflow:

- An external 20×20 expansion router to expand the switcher's video inputs from 4 to 20

- A Dante license added to the TriCaster Mini to accommodate Dante-compliant audio

- A Yamaha TF-1 digital audio mixer with Dante sound card

- A Dante-enabled Unity intercom system, wireless mics, and in-ear monitors

- A greenscreen stage for immersive virtual sets and backdrops

- A Wowza Clearcaster for streaming to online portals like Facebook

- A NewTek TalkShow for bringing Skype callers into the live program

- Mobile Viewpoint 2K4 playout servers and Terralink bonded cellular encoders

- Two Sony PXW-100 (1080i) cameras with added zoom and focus control, fluid-head tripods, Dante communications, and NDI-based teleprompters

The 20-square-foot studio design added the following capabilities:

- A News Desk Production set

- A DMX lighting control system to control color temperatures and lighting set-ups


NC State University revolutionizes production facilities with IP Connectivity

Control room, Distance Education and Learning Technology Applications (DELTA), NC State University

North Carolina (NC) State University has transformed its live course production capabilities with a networked video infrastructure, based on the SMPTE ST 2110 standard for connectivity, across its campuses in Raleigh, N.C. Technology from Imagine Communications is at the heart of NC State DELTA (Distance Education and Learning Technology Applications), providing a

quantum leap in capacity and operational efficiency. DELTA already had dedicated fiber runs linking its centralized operations center, located in Ricks Hall Annex, with six other buildings on the main campus and two additional buildings located up to 10.5 kilometers (6.5 miles) away on its Centennial Campus. Fiber links were being used simply as

single-mode, point-to-point connections, providing the capacity for just four video signals per classroom transmitted back to the operations center. In addition to replacing systems that were beyond end of life, it was important for DELTA to provide much greater capacity if it was to meet its commitment to the wider university community for high-quality educational media supporting distance learning in an efficient, effective and service-oriented environment. For the 2018-2019 academic year, this included more than 130 live courses and more than 7,000 hours of classroom recordings, supporting almost 50,000 students enrolled in online and distance learning courses. The new system uses just 16 fibers to provide extensive networked capability, leaving 20 strands from the original


installation free for future expansion. By taking a networked IP approach, the new system gave the DELTA team the capability of 32 x 32 HD signals from each remote building. This is achieved thanks to bidirectional 100 gigabit ethernet capacity across the fiber network. Users can control and manage the entire network from a single

point of contact, using Imagine Communications' Magellan™ SDN Orchestrator, a field-proven software control layer. Each production area can design its own monitor wall, using Imagine's EPIC™ MV multiviewers under the control of a Crestron GUI. EPIC MV is a highly scalable solution for probing IP streams and

monitoring mixed signal types on a single canvas, while offering mission-critical system visibility across compressed and uncompressed IP workflows, and is an integral part of the monitoring solution at NC State. The Cisco Nexus 9336 router supports up to 7 terabits per second to ensure there are no data bottlenecks.
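As a back-of-the-envelope check (our own arithmetic, not a figure from Imagine Communications or NC State), 32 uncompressed HD streams fit comfortably within a 100GbE link, assuming the nominal ~1.485 Gbps HD-SDI rate per stream:

```python
# Rough bandwidth sanity check for the per-building figures quoted above.
# Assumes uncompressed HD video at the nominal 1.485 Gbps HD-SDI rate per
# stream; actual ST 2110-20 payload rates are slightly lower.
GBPS_PER_HD_STREAM = 1.485
STREAMS_PER_BUILDING = 32          # 32 x 32 signals, per direction
LINK_CAPACITY_GBPS = 100           # bidirectional 100GbE per building

demand = STREAMS_PER_BUILDING * GBPS_PER_HD_STREAM
print(f"Per-direction demand: {demand:.1f} Gbps of {LINK_CAPACITY_GBPS} Gbps")
print(f"Headroom: {LINK_CAPACITY_GBPS - demand:.1f} Gbps")
```

Roughly 48 Gbps of the 100 Gbps capacity is consumed in each direction, leaving ample headroom for growth.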


Full-IP OB twins usher in a new era for Belgium's RTBF

In early April 2020, RTBF, Belgium's public broadcaster for the country's French-speaking community, took delivery of the first of two groundbreaking, full-IP OB trucks. Due to the COVID-19 lockdown, the 12m-long trailer had to be configured remotely, using TeamViewer, a few web cameras and microphones for confirmation purposes, as well as via VPN, in order to be delivered on time. The second, identical OB truck will arrive on 1 July. Masterminded by Geert Thoelen of NEP Belgium and Dirk Sykora of Lawo, the vehicles, built by Broadcast Solutions in Germany, will be leased to RTBF for eight years. The twins come equipped with a 100Gbps Arista-powered network core (audio, video and matrix) that revolves around Lawo's V__matrix C100 platform for SDI- and IP-based video and audio input/output as well as processing. A Sony XVS-8000 IP vision mixer provides powerful video control for the 16 Sony cameras.

RTBF OB Production Control Area.

The Sony Live System Manager provides the control interface to Lawo's VSM and NMOS IS-04/05 protocols as required. The LSM allowed Sony to remotely access the switcher network to configure the IP routing and XVS setups from its engineers' homes during the COVID-19 pandemic and pan-European lockdowns. Each truck offers 36 vm_dmv heads delivered by V__matrix C100 blades for multiviewing purposes. Overarching control and orchestration is managed by Lawo's VSM IP control system.

Both OB trucks support all audio formats (analog, AES3, MADI with SRC, Dante with SRC and AES67/RAVENNA), which are mixed using a 48-channel-strip mc² 56 audio production console and 512 DSP channels provided by an A__UHD Core. Audio and video signals can be sourced from six video stageboxes equipped with V__matrix Silent frames and A__mic8 audio I/O edge devices. In addition, there are eight audio stageboxes containing maxed-out Power Core units as well as two stageboxes with one DALLIS I/O frame each, providing 128dB microphone inputs for exacting audio quality requirements. All of the above run in ST2110.

While reading through the tender documents, Geert Thoelen and Maxime Delobe of NEP Belgium realized that RTBF was expecting highly efficient and future-proof "dream trucks". Given that an IP backbone had been specified, Thoelen and Delobe suggested basing the two vehicles on ST2110-20 for video and ST2110-30 for audio. In response to a request for Dolby E signal compatibility, Lawo proposed to make all audio streams also available in the ST2110-31 format. "This way, the OB trucks cater to both 24-bit and 32-bit audio needs in any combination: the ST2110-30 or ST2110-31 audio stream versions can be selected at the press of a button on a VSM soft- or hardware panel," explains Lawo's Dirk Sykora. "This user-friendly approach eliminates time-consuming and costly system reconfigurations before an upcoming event," adds Geert Thoelen.
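For readers curious about the 24-bit versus 32-bit distinction: ST 2110-30 carries linear PCM (typically 24-bit/48 kHz), while ST 2110-31 carries full 32-bit AES3 subframes, which is what lets non-PCM payloads such as Dolby E pass through transparently. A quick sketch of the raw payload rates, using our own arithmetic and assuming 48 kHz sampling:

```python
# Raw audio payload rate per channel, ignoring RTP/UDP/IP packet overhead.
SAMPLE_RATE_HZ = 48_000

def payload_kbps(bits_per_sample: int, channels: int = 1) -> float:
    """Raw bit rate in kbit/s for linear audio transport."""
    return SAMPLE_RATE_HZ * bits_per_sample * channels / 1000

print(payload_kbps(24))  # ST 2110-30: 24-bit PCM -> 1152.0 kbit/s per channel
print(payload_kbps(32))  # ST 2110-31: 32-bit AES3 subframes -> 1536.0 kbit/s
```

The extra 8 bits per sample in the ST 2110-31 version carry the AES3 framing and status data alongside the audio word.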


Deutsche Welle boosts its remote production from home offices with LiveU technology

The international news channel Deutsche Welle, headquartered in Germany, is maintaining and expanding its global live coverage during the current coronavirus crisis with LiveU's mobile IP broadcasting technology. Deutsche Welle's worldwide reporters and correspondents are working under difficult conditions, with many of them locked down at home and unable to access a studio. This is why Deutsche Welle draws on LiveU equipment even more frequently than usual these days. The station has been building a comprehensive LiveU infrastructure with the help of Netorium, LiveU's local partner in Germany. For its mobile smartphone production kits, Deutsche Welle is taking advantage of LiveU's LU-Smart app, usage of which has grown significantly over the past weeks. The mobile application is currently used by more than 400 trained reporters within the worldwide Deutsche Welle network to broadcast live from any location.

Max Hofmann, Head of News at Deutsche Welle, said: "A couple of years ago nobody would have imagined that we wouldn't need to book an SNG truck anymore for broadcasts outside our studios. Being able to deliver high-quality live video simply from a smartphone means much more freedom for us. Now we can always be in whatever location to inform our viewers in a fast, direct and accurate way."

Using the LU-Smart app, reporters can deliver a live signal directly from their smartphones to the Deutsche Welle master control room. In addition to the LU-Smart app, Deutsche Welle has been deploying dozens of LiveU field units of various sizes and types, such as the LU300 HEVC portable encoder or the LU610 fixed-location encoder. The transmitted signals are received by one of the 20 LiveU servers worldwide that Deutsche Welle operates together with its partners, and can be managed and controlled via LiveU Central. This centralized web platform is used by Deutsche Welle on a daily basis to manage all incoming live feeds delivered with LiveU technology. Germany-wide support is taken care of by Netorium, whose sales representative Gunnar Schmidt said: "We are very excited to be able to help our long-time LiveU customer Deutsche Welle keep live coverage going

at times when many aspects of social life are being shut down. Deutsche Welle correspondents are spread all over the world, broadcasting in 30 languages, which makes this a truly global venture for us." Zion Eilam, VP Sales and General Manager, EMEA, LiveU, said: "It is fascinating for us to see how customers like Deutsche Welle have proven extremely agile in this challenging situation and continue to deploy our mobile technology in completely new production environments, such as reporters' homes. The Deutsche Welle example shows that our live video solutions, especially the LU-Smart app, are very easy to use and facilitate live broadcasts from virtually any location." In response to the latest developments, LiveU has recently launched its new Guest Interview feature for LU-Smart. This feature makes it "even easier" for guests to be added to a live show from remote locations.

Philippe Leonetti appointed as new CEO of Viaccess-Orca

Viaccess-Orca has announced that Philippe Leonetti has been appointed as CEO of the company, replacing Paul Molinier. He has established expertise in the field of security, with over 12 years of experience in senior management roles at Orange, one of the world's leading telecommunications operators. Philippe Leonetti was previously in charge of IT and service platform security for France Telecom and then the Orange Group until 2007. Since 2008, he has served as Vice President of Smart Access for Home and Mobility and later as Senior Vice President of Steering of Innovation at Orange Group.



Lutz Rathmann takes over Managed Technology Division at Riedel Communications


Lutz Rathmann joins the Riedel Group management team as director of the Managed Technology division, effective June 1. An engineering graduate, Rathmann most recently served as CEO of APS, a leading agency in the field of automotive events. He also previously held several management positions at Riedel, including Deputy Head of Global Events. Rathmann succeeds Kai Houben, who has moved to the management team at wige SOLUTIONS, a media company particularly well known in motorsports, after serving at Riedel for more than 25 years in various roles. Rathmann's tasks include the expansion of new strategic business areas and partnerships, as well as the continued development of the Riedel solutions portfolio.


Net Insight is now part of the Zixi Enabled Network

"To reduce the cost to serve and increase innovation agility", according to the press release, Net Insight is joining the Zixi Enabled Network, thus enabling interoperability with other Zixi customers and partners, as well as making it easier to acquire and distribute content over public clouds such as Amazon Web Services. The Zixi Enabled Network is constituted by more than 700 customers and 170 OEM and technology partners utilizing the Zixi protocol, "the preferred retransmission protocol" of Amazon Web Services when it comes to acquiring live content. "We are thrilled to announce Net Insight as part of the Zixi Enabled Network, facilitating interoperability of hundreds of devices, over 100 countries and 80,000 end-points across the world," says John Wastcoat, SVP Alliances at Zixi. With Net Insight joining the Zixi Enabled Network, Net Insight's customers can now ingest content through AWS MediaConnect, as well as interoperate with other Zixi customers through the Nimbra platform.


Tightrope and ENCO forge partnership to bring automated closed captioning to Cablecast community media workflows

Tightrope Media Systems – the pioneering developer of the Cablecast Community Media automation platform – and ENCO Systems have announced a new strategic partnership designed to help community media organizations easily and cost-effectively incorporate automated closed captioning into their broadcast, online and OTT workflows. The new agreement makes tailored configurations of ENCO’s enCaption automated captioning solution

available to Cablecast customers at special pricing. The Cablecast customer service team will offer one-stop support for integrated Cablecast and enCaption workflows, while the two companies will explore further opportunities for technical integration between the two product families. The Cablecast broadcast automation, playout and streaming platform lets community media organizations efficiently reach both 'traditional' and 'cord-cutting' audiences across

outlets ranging from cable television channels to OTT services. ENCO's software-defined enCaption platform enables users to effortlessly add closed or open captions to live and pre-recorded content in near-real time, helping them comply with regulatory requirements, including FCC captioning rules for television and ADA compliance for website content, while better serving hard-of-hearing viewers and making their content more accessible overall.



Danish Broadcasting Corporation



Does the saying "Brevity is the soul of wit" sound familiar? TM Broadcast has had the opportunity to put it into practice to bring you the exciting technology strategy of DR, the Danish Broadcasting Corporation. We take a look at the corporation's near future, its latest challenges and its particular way of understanding television production with its CIO, Mikkel Müller. Join us to discover the technical resources that support the Danish corporation.




What is DR's vision regarding technology? To support DR with the best technology solutions, with a focus on cost efficiency and excellent operations and stability.

DR is a public TV organization. Is "addressing technology innovation" among the public service objectives you must achieve? No. As a public organization, DR's objectives and regulations are focused on content, and technology objectives are mainly focused on distribution.

What TV and radio channels does DR include? What are your production facilities? DR delivers 3 linear TV channels, 4 FM radio channels and 3 DAB radio channels. These channels and their content are all available through our OTT/VOD platforms as well. We have a significant internal production operation with 20+ TV studios and around 35 radio studios.

In your opinion, do DR viewers demand and appreciate TV technology innovations? There is a strong demand for OTT/VOD, and continued work to enhance these platforms is the key area of technology demand from our viewers.

What have been the most critical technological changes DR has faced in recent years? The shift towards OTT/VOD as an independent platform has been the biggest change. Some years ago, our OTT/VOD solution served primarily as a catch-up option for viewers who

missed a program in a linear TV channel. Now, a significant and growing number of users use our OTT/VOD platform as their primary platform. We have invested significantly in our OTT/VOD platform to support this development.

Is DR currently undergoing renovation? Which areas do you need to develop in the near future? We are currently renewing our play-out facility

and we will also renovate our radio studios in the coming years.

What are the biggest challenges for DR in the coming years? The biggest challenge is keeping up with the shift from linear TV to OTT/VOD media usage, and maintaining DR's penetration and relevance amongst users in Denmark, where more and more people use digital media.

We are renewing our play-out facility currently and we will also renovate our radio studios in the coming years



TECHNOLOGY

What is DR's production standard nowadays? Are you currently working with HD or 4K? What about 8K and HDR technologies? We are working in HD in general production and have no current plans for 4K, 8K or HDR in our general production, but we monitor the development of these technologies.

What is the basic infrastructure you deploy to produce a standard program such as TV News? The basic infrastructure in our News studios is both SDI- and IP-based: the video equipment is still interconnected via SDI, all audio equipment is IP-based, and soon we will move intercom systems to IP as well.

What is the main equipment at DR's control rooms? Do you unify resources in all studios? The main equipment in our control rooms is

Photo copyright Bjarne Bergius Hermansen.

unified across most of the studios. We are using routers, switches, audio systems, multiviewers and intercom solutions from companies like NVISION, SAM, Grass Valley, Miranda, Solid State Logic and Riedel.

Graphics are getting more and more complex in TV. What graphics platform do you use on a regular basis? Are you using AR or VR at the moment? We use Vizrt as the main platform for TV graphics. We are using AR quite a lot in news production, and there is more coming.

Do you use a unified MAM system across DR's entire infrastructure? Yes, we have a unified MAM for TV and radio content. It has been in production for 10+ years and has been upgraded regularly to keep up with demands and developments in technology.

Are you considering adopting an IP infrastructure? Do you currently use SDI? We have been using IP infrastructure for audio for a few years. Now, we are extending IP infrastructure into our new play-out facility. SDI will still be used extensively, but IP is the future.

Regarding outside broadcasting, do you use DSNG, transmission backpacks, OB trucks or a combination of all of them? We use a combination of SNG, backpacks and OB, depending on the production type and needs.

Are you currently working on big data and AI tools to improve your archiving or even production workflows? We are doing small-scale proofs of concept with this, but nothing in production yet.

Photo: Uffe Weng, copyright DR.

The basic infrastructure in our News studios is both SDI & IP-based, the video equipment is still interconnected via SDI, all audio equipment is IP-based

What about distribution? What playout system are you currently deploying? We are renewing our playout facility, and the new platform is based on Pixel Power and IP infrastructure from Axon.

What is your strategy regarding OTT/VOD platforms? Are the days of linear TV numbered? OTT/VOD is the key strategic area and DR has invested significantly in this, but we still believe that linear TV will be here for many years to come.


Rakuten TV

State-of-the-art technology ready for an ambitious 4K HDR catalogue



The VOD platform landscape plays an important role in these uncertain times, when confinement is extending around the world. Rakuten TV, an initiative with global ambition and a current reach of 42 countries, aims to establish itself as a serious contender in the global competition for viewers' attention. The platform has some interesting features, such as its commitment to integration with the vast majority of Smart TVs and an impressive 4K HDR content catalog. TM Broadcast interviews David Villanueva, Deputy CTO, who kindly answers our questions. Interview by Sergio Julián



David Villanueva, Deputy CTO at Rakuten TV

I would like to start by providing our readers with some context. What's the origin of Rakuten TV? Rakuten TV is an evolution of a service that was one of the pioneers in the industry, nationally as well as internationally, in offering fresh cinema content directly to people in their homes via their Smart TVs. Only a few companies were doing this at the time, but the entertainment industry was rapidly evolving, and Rakuten TV was launched in response, to offer a more inclusive service via TVOD.

What are the main technical challenges involved in a platform like this? The biggest technical focus is ensuring quality for our customers, who look to us for the best home cinematic experience. Our platform is available in 42 countries and across multiple forms of access, including Smart TVs, set-top boxes, web and mobile, which means that we have to be prepared for numerous scenarios to provide the best user experience. In addition, we have very aggressive quality KPIs in terms of audio quality, video quality and performance. We are always striving for excellence in quality, and constantly exploring innovative ways to deliver the best available technology, through deeper integration with manufacturers and technical providers, in order to provide the best user experience. For us to be able to accomplish this, knowledge is key. We develop everything in-house, which means that everything is managed by our team of experts; hence, attracting the best talent is vital. We have a big team, and a fantastic one, fulfilling all functionalities from encoding, delivery and serving, through app development, to innovation.

Most people have already adopted HDTVs but the world is turning to UHD technology. How does Rakuten TV prepare for this?
One of Rakuten TV's goals is to offer the best cinematic experience at home through the best technology, and we are constantly focusing our technical efforts on delivering the highest quality service. As such, Rakuten TV has the widest 4K HDR catalogue of Hollywood new releases for Smart TVs in Europe, and we are set to continue expanding this catalogue to provide for our wide audience base.

Integration with Smart TVs is a big part of your core. What is it like to develop an application like this for a wide variety of televisions?


Yes, it is a vital part of our business to enable the end user easy access to the range of content on their televisions and home cinema setups. We are pre-installed in Smart TVs and, so far, we have launched a Rakuten TV branded button on the remote controls of Samsung Electronics, Philips, LG and Hisense. The more platforms we have to support, the more difficult it is to develop the app, since each TV manufacturer is different and has its own operating system and performance.

Are you offering any particular HDR standard?
We offer HDR10, HDR10+, Dolby Vision and IMAX Enhanced. Not all of them are always available, since it depends on different factors such as the studio and the device used to play back the video.

Recently, Rakuten TV has started offering an ad-supported service. Is there any difference in terms of video fidelity?
No, there is not. We offer the same video quality. For instance, most of our Rakuten Stories -our exclusive or original content- such as Matchday: Inside FC Barcelona are available in UHD. It may depend on the studio but, technically speaking, there is no restriction.

Variety magazine revealed a few months ago that you plan to provide users with original shows. Does Rakuten TV have its own production teams? Will you work with associated production companies?
Rakuten TV has already launched a variety of original and exclusive content, including Matchday -the official TV series of FC Barcelona- and the feature-length documentaries Inside Kilian Jornet and MessiCirque. We have a number of new exclusives coming to the platform in the next few months, including a documentary about footballing legend Iniesta, Andrés Iniesta: The Unexpected Hero. All of these productions are in collaboration with third-party production companies, and our team works closely with them, forging new partnerships to continue bringing new and exclusive content to Rakuten TV.

Last but not least, in your opinion, what's the future of OTT platforms?
The AVOD business model is an area we'll see more expansion into. It offers an alternative to traditional TV for both users and brands. The service allows users to continue watching their chosen content with some advertising, but fewer advertisements than on traditional TV; and for brands, it allows them to better target their customers through an emerging and powerful media channel. Rakuten TV is fully focused on this area at the moment, having become the first pan-European platform to offer an AVOD service, and the first European platform to combine both a TVOD and an AVOD service.




It has been more than a decade since the audiovisual environment last faced a major change: the move from SD to HD. Back then it was just a change in definition, we could say, in perhaps too simplified terms. But the transition from HD to UHD is somewhat more comprehensive. Time to reflect a bit on it. By Yeray Alfageme, Service Manager, Olympic Channel



Let us recall those times. The leap from SD to HD meant a roughly fourfold increase in the definition of the image being produced and received by TV viewers. Yes, just TV viewers, as back then digital content distribution platforms, either live or on-demand, were nearly non-existent and very little known. And yes, YouTube was already there, but hardly anything else. HD "simply" meant more pixels. Neither color space nor image scan type changed: progressive images still coexisted with interlaced images. Neither sound nor image luminance changed either. It is true, however, that with regard to audio a change did take place, as distribution of surround audio was enabled, either through Dolby or multichannel. As far as metadata was concerned, the signal became capable of conveying something more than just teletext. I know: I am oversimplifying.
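The arithmetic behind these leaps is easy to check; a quick sketch (the rasters are the common broadcast ones, and the exact ratio for the first leap depends on the SD raster used):

```python
# Pixel counts for common broadcast rasters. Each step from HD
# upwards is exactly a fourfold increase in pixel count.
RASTERS = {
    "SD (576i, PAL)": (720, 576),
    "HD (1080p)": (1920, 1080),
    "UHD-1 (2160p)": (3840, 2160),
    "UHD-2 (4320p)": (7680, 4320),
}

def pixels(raster: str) -> int:
    """Total pixels per frame for a named raster."""
    w, h = RASTERS[raster]
    return w * h

for name, (w, h) in RASTERS.items():
    print(f"{name}: {w}x{h} = {w * h:,} pixels")

# UHD-1 is exactly 4x HD; UHD-2 is 4x UHD-1.
print(pixels("UHD-1 (2160p)") // pixels("HD (1080p)"))  # 4
```

Incidentally, the SD row shows why "fourfold" is only approximate for the first leap: a PAL frame has 414,720 pixels, so 1080-line HD is actually a fivefold jump.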

Traditional TV, SDI, satellite and DTT

However, the move from HD to UHD is definitely a more substantial change: not only is definition -more pixels- increased fourfold once again, but so are color space, luminance, frame rate and, of course, sound capabilities, all of this resulting in better pixels.

HD-HDR images are already regarded as UHD signals. No definition leap is required for an image to be regarded as ultra high definition; keep that in mind. As laid down by the UHD Forum, different combinations of definition, luminance and chrominance are possible within a UHD environment, not only a jump in definition.
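That bundle-of-attributes view can be expressed as a simple check (a sketch under our own naming; the attribute and field names are illustrative, not the UHD Forum's formal phases):

```python
from dataclasses import dataclass

@dataclass
class Signal:
    width: int
    height: int
    bit_depth: int          # 8, 10 or 12
    transfer: str           # "SDR", "HLG" or "PQ"
    wide_color_gamut: bool  # BT.2020 vs BT.709

def qualifies_as_uhd(s: Signal) -> bool:
    """A signal can qualify as UHD either through resolution
    (a 2160p raster and above) or through better pixels: an HD
    raster carrying HDR at 10-bit depth or more."""
    has_4k_raster = s.width >= 3840 and s.height >= 2160
    has_hdr = s.transfer in ("HLG", "PQ") and s.bit_depth >= 10
    return has_4k_raster or (s.width >= 1920 and has_hdr)

print(qualifies_as_uhd(Signal(1920, 1080, 10, "HLG", True)))   # True: HD-HDR
print(qualifies_as_uhd(Signal(3840, 2160, 10, "SDR", False)))  # True: 4K raster
print(qualifies_as_uhd(Signal(1920, 1080, 8, "SDR", False)))   # False: plain HD
```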

At this point, I deem it convenient to draw a line between two worlds: OTT and traditional TV distribution, as they are dramatically different, not only in their ways but also in their nature, and transition from one environment into the other can -and must- be approached differently. Let us see in detail what I mean.

Moving from an SDR to an HDR image involves changing from 8-bit to 10-bit or even 12-bit depths, depending on the standard chosen. With regard to contribution -what we could simplify as an SDI environment- this means switching to 3-Gbps linear signals instead of the regular 1.5 Gbps. Most equipment units are capable of handling these types of signals, but this must be borne in mind particularly with regard to compression and distribution. Maybe the coder being used for compressing and sending out our signal for distribution will not be capable of handling depths higher than 8 bits, which will be a problem when handling HDR signals. A combination of these luminance and chrominance levels with a leap in definition up to 4K images makes increased bandwidth indispensable, both for contribution and distribution.

Back in the SDI linear world, a 4K-HDR image requires 12 Gbps for uncompressed transfer, so hardly any 1.5 Gbps equipment supports this standard. There are some pseudo-standards such as 12G-SDI that allow conveyance of these kinds of signals on a single coaxial cable, but they are just that: pseudo-standards. Some manufacturers have adopted them, but letting our infrastructure leap rely on non-standard technology seems a hardly acceptable risk.

What is indeed official is transmission through the use of four 3G-SDI signals for an effective 12 Gbps bandwidth. But this requires four coaxial cables, something for which many infrastructure setups are not ready either. Could a central routing matrix split its capacity in four in order to support these kinds of signals? Not in most cases. At this juncture we switch to the world of fiber -nothing but a different medium for SDI signals which, nonetheless, avoids having to use four cables- and to the IP world through SMPTE ST 2110, and this is of real help. The point is that, in my opinion, switching from HD to UHD in a traditional linear TV environment practically forces the move from SDI to IP. Obviously, a UHD production environment can be implemented on SDI infrastructure, especially on fiber, but I think this is a real constraint with a much shorter life cycle nowadays when compared to the implementation of an IP environment. The IP world is more uncertain, though. Technology has matured and standards are finally there, but manufacturers and their technologies are not yet up to the job in some aspects. That was the prevailing opinion not long ago, and doubts are increasingly dissipating, so I am quite convinced that investing at present in an IP production environment is a much better decision than just updating our SDI infrastructure to host UHD signals, even if the latter involves switching to fiber. As a final word on distribution, even though


coders and compression standards are fast improving, and now allow increased quality on the same bandwidth, it is still too early -about this I am even more convinced- to replace a traditional TV distribution network -either satellite or DTT- with UHD in just one go. The few experiences recorded in both instances are pilot projects, and the only existing commercial channels in UHD are delivered via satellite, as this makes it easier to control all parts of the distribution chain: coding, distribution channel and set-top box.
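The link rates discussed in this section can be reproduced from first principles (a rough sketch: it counts only active-picture payload, assumes 4:2:2 sampling, and ignores blanking, which is why the results sit below the nominal 1.5G/3G/12G link rates):

```python
def payload_gbps(width: int, height: int, fps: float,
                 bit_depth: int, sampling: str = "4:2:2") -> float:
    """Uncompressed active-picture payload in Gbit/s."""
    # Average samples per pixel: 4:2:2 carries one luma plus, on
    # average, one chroma sample per pixel (Cb/Cr alternating).
    samples = {"4:4:4": 3, "4:2:2": 2, "4:2:0": 1.5}[sampling]
    return width * height * fps * samples * bit_depth / 1e9

# HD 1080i25 (interlacing halves the delivered frame rate) -> 1.5G HD-SDI
print(payload_gbps(1920, 1080, 25, 10))   # ~1.04 Gbit/s
# HD 1080p50 at 10 bits -> needs a 3G-SDI link
print(payload_gbps(1920, 1080, 50, 10))   # ~2.07 Gbit/s
# UHD 2160p50 at 10 bits -> needs 12G-SDI, or four 3G-SDI links
print(payload_gbps(3840, 2160, 50, 10))   # ~8.29 Gbit/s
```

The last figure makes the quad-link arithmetic concrete: one 2160p50 HDR signal carries exactly four times the payload of its 1080p50 counterpart, hence the four coaxial cables.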

OTT and IPTV environments

I dare not say that the leap is any easier in the native streaming world, but I think that at least it is not as disruptive as in traditional TV: for an OTT platform that has so far been distributing HD content, switching to UHD distribution entails an update rather than an intrinsic, essential change in model and infrastructure.

Be it an IPTV model -in which the network is under control and streaming is not actually distributed over the open Internet but by the telecom company that owns it- or the OTT model distributed over the Internet, the quality standards under which content is distributed are already adaptive nowadays. The various platforms typically have a minimum of four and a maximum of nine quality profiles for carrying out distribution. Therefore, it can be inferred that increasing the number of profiles in order to support UHD images is at least feasible. One thing to bear in mind, and justifiably so, is the distribution codec. Under H.264, the most usual codec for distribution of streaming content, only certain profiles are capable of supporting HDR -and higher-definition- content, and not all decoders can successfully process these images. Furthermore, the bitrate required for

conveying these signals with adequate quality is really high. Two options are available nowadays: H.265 and AV1. It is true that MPEG is working on VVC, the alternative that this organization aims to present versus AV1, but it seems it will not be available until the end of the year. H.265 is obviously the natural evolution of H.264, already featuring numerous profiles capable of handling UHD signals comfortably and at reasonable bitrates. However, it falls a bit short in the processing of metadata as well as in


terms of flexibility when it comes to combining various pieces of content in varying resolutions or quality standards. AV1, on the other hand, has the advantage of being much more modern: it has been specified for much more flexible capacity conditions, in addition to allowing much higher compression rates while keeping a visual quality comparable to that offered by H.265, without requiring -as some might think- intensive CPU usage. This makes it particularly interesting for implementation in set-top boxes and low-performance connected devices. Giants such as Netflix or HBO have already set their sights on AV1 and included it in their technical specifications as an accepted distribution format. The only drawback of AV1 is that not all hardware offers native support for encoding and decoding this standard, although I think this is something that will be fixed sooner rather than later. One final consideration: starting to distribute UHD content through a different codec does not mean we should change our entire headend and content distribution systems or transcode all our content into the new

format. The good side of streaming is that it enables combining various formats in an easy fashion, provided that the systems and software applications used support this; the task is no doubt far simpler here than in traditional production environments.

In sum: in traditional production environments, the leap into UHD almost inevitably entails -if we want to go for a long-term investment- a switch to IP supported by SMPTE ST 2110, as this will enable us to handle signals in any format in a more flexible way; however, this change is a substantial one and it may prove impossible in some instances. In a streaming environment the change is less marked. Nevertheless, essential decisions regarding the distribution codec to be used will make our move permanent or temporary while we wait for the industry to mature, and our competitors may, in some instances, get ahead of us.
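The codec decision running through this section -serve AV1 where the device decodes it, fall back to H.265 for UHD-capable hardware, and keep H.264 renditions for everything else- can be sketched as a capability-driven ladder filter (an illustrative sketch: the ladder, codec names and capability sets are ours, not any platform's actual configuration):

```python
# Illustrative ABR ladder: (codec, height, HDR?) per rendition,
# ordered from best to worst.
LADDER = [
    ("av1",  2160, True),
    ("hevc", 2160, True),
    ("hevc", 1080, False),
    ("h264", 1080, False),
    ("h264", 720,  False),
    ("h264", 480,  False),
]

def renditions_for(device_codecs: set, max_height: int) -> list:
    """Filter the ladder down to what a given client can decode."""
    return [
        (codec, height, hdr)
        for codec, height, hdr in LADDER
        if codec in device_codecs and height <= max_height
    ]

# A 4K TV with native AV1 decode gets the full UHD ladder...
print(renditions_for({"av1", "hevc", "h264"}, 2160))
# ...while an older HD set-top box only sees the H.264 renditions.
print(renditions_for({"h264"}, 1080))
```

This is also why adding UHD to an existing OTT service is an update rather than a rebuild: the new renditions simply extend the ladder, and legacy clients keep receiving the profiles they already understand.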




Quality, capacity, speed and security


Throughout the history of audiovisual production, various solutions have been developed to log and record the content generated in the fields of cinema, television, video and advertising, among others. We are all well aware of the importance of the change from photosensitive recording to electromagnetic recording and then to optical media and/or semiconductors, and also of the move from analogue to digital. At present, innovations in the fields of engineering, electronics and IT are setting the direction and pace of changes in audiovisual technology as applied to the areas of production, post-production, distribution, storage and archiving. Solutions spring up for each production process and for a wide range of requirements from the audiovisual market, from the most professional environments to amateur contexts.


The first thing we found when we approached the subject of this article is the mumbo-jumbo of notions and terms that have accumulated and mutated throughout the evolution of the technology. And the second is the extensive information and documentation existing on this topic, mostly due to the enormous range of products and solutions that both manufacturers and technology developers are nowadays capable of making available to us.

We will try to contribute our tiny bit by presenting here content that is well-organized, easy to understand, and appropriate for the purpose of dealing with the most usual needs of the audiovisual production sector at present. The attractive feature of the times we live in is that each one of us can find -and we have a definite chance of finding- our own "technological recipe" to meet our professional goals within a given budget. Recording, duplication and storage all have an element in common: information. Information concerning image, sound and data. Information we work with and want to keep. It is what we call the 'signal', regardless of its

nature (video, audio, synchronism, metadata; be it analogue or digital) or its origin (a camera, a microphone, a recorder, a link…). Simply put, this is the so-called WHAT. The following aspect to bear in mind is the format. This term is used by the audiovisual industry to refer to a number of standards, parameters and encoding systems that have been laid down for efficient recording, duplication, storage and/or distribution of information signals. The format may differ (even if the signal type is identical) depending on the level and stage within the production and editing processes used to obtain the audiovisual work. This is the solution implemented by engineering, IT and pure


and applied science 'wizards' to materialize -give proper shape to- the audiovisual signal for further use. It is the HOW. There are specialist formats for capture/recording; others were designed for intermediate/post-production work; some others are intended for mastering/archiving; and, of course, for broadcast. In a digital environment we should discuss at more length the various codecs and containers that progressively shape video and audio formats together with their related synchronism information and metadata. Before going into the third basic aspect, we must make a distinction -within the video field- between the

above-mentioned notion of format and the (moving) image format that we often use and read about, which may in some instances lead to confusion. When the term image format is used in the video field, we are actually referring to the image's aspect ratio, that is, the ratio between the width and height of the images comprising what we know as video/film: the 4:3 and 16:9 of the TV/video environment; or 1.33:1, 1.77:1, 1.85:1 or 2.35:1 in cinema, among others. And finally, the third issue: the physical or 'virtual' location (some real location is always involved) in which we place the result of the production and encoding

processes (images, audio, data…). This is what we all know as the medium. It is the WHERE. Signal, format and medium must be identified separately. Within the audiovisual sector, all three must interact in order to provide each professional environment with efficient solutions, based on the needs inherent to each production stage, which are quite different at recording, during post-production, and in contexts involving the transfer and


distribution of audiovisual content. We need a clear purpose when generating a signal, applying a format and using a medium. This is the WHY. A good example of this, a classic in the history of audiovisual technology that many technicians either do not know or have never used: VHS. VHS was a home recording system for composite video signals under JVC's Video Home System format that used magnetic tape as its medium. This is a recording, duplication and storage solution developed back in the 70s and launched on the market in 1976 by JVC themselves and their parent company Matsushita (Panasonic). This novel solution came about for social and economic reasons: it was time for keen users to be able to view, record, play and store videos in a simple way and at an affordable cost. The important thing was to create the need for us to have video as part of our lives.

Either as professionals or amateurs, if we look through the history of the audiovisual sector, or even around our own reality, we can see the coexistence of various types of signals, countless formats and a wide range of media. These are the result of multiple factors such as quality (broadcast, semi-professional, home-user), quantity (bandwidth, recording/broadcasting time, bitrate...), use (cinema, television, documentaries, corporate video, concerts, advertising, among many others) and/or, in some instances, patents and conflicting interests. Once we have identified the WHAT, HOW, WHERE and WHY, we are in a position to become acquainted with current solutions, which translate into specific technical equipment for recording, duplication and storage from the various technology manufacturers. We need a general, globalizing vision. That is, a comprehensive

approach that includes all kinds of devices for visual capture (cameras), sound capture (microphones), recording and playback equipment or systems (known as recorders, or by typical acronyms in the audiovisual field: VTR -Video Tape Recorder- or, a bit more up to date, DVR -Digital Video Recorder- or simply decks), as well as a myriad of ancillary equipment such as encoders, converters, amplifiers, modulators... Leaving aside camera and microphone types -which are not within the scope of this article- we


HyperDeck Extreme 8K HDR.

now focus on the field of what are regarded as recording/playback devices. Thus, we find those that are stationary, that is, made part of a fixed technical installation; and the so-called external or portable units, which can be used in different work situations (such as outdoors) because of their mobility, compact size and weight. SONY and PANASONIC are ever-present manufacturers of both stationary and portable solutions, but worth highlighting is the presence of new brands

and manufacturers. Among stationary equipment, and concerning video, the solutions provided by BLACKMAGIC through its HyperDeck product line -in all its variants: Studio 12G, Extreme 8K, Studio Pro, Studio Mini- must be mentioned. The AJA brand also presents the KiPro Rack, KiPro Go, KiPro Ultra and KiPro Ultra Plus. As for external/portable devices -sometimes also called recording/monitoring extension units- mention must be made of manufacturer ATOMOS and its Samurai, Shogun

(4K, or Flame 4K HDR, or Inferno 4K HDR 60p), Ninja (Blade, Assassin, Flame or V 4K), Shinobi and Sumo 4K HDR 60p models. Back to AJA, this manufacturer maintains the KiPro Quad and KiPro Mini line; at BLACKMAGIC we find the HyperDeck Shuttle and the 5" and 7" Video Assist monitors; SHINING TECHNOLOGY has a wide range of CitiDISK HD solutions. As for sound, among stationary equipment worth noting are manufacturers such as DENON with its DN product line and TASCAM (CD and SS ranges). In portables, very high-level solutions are presented by MARANTZ and by ZOOM NORTH AMERICA through the H6, H5, H4n Pro, H2n and H1n models (the latter manufacturer being one of the most firmly established in the market); in mini rack format we have implementations by the TASCAM brand (DR models) and from other manufacturers such as FOSTEX, ROLAND and SHURE.


Atomos Samurai

With regard to duplication -understood as duplicators, equipment whose main task is the simultaneous copying of audiovisual material- we must once again mention manufacturer BLACKMAGIC and its Duplicator 4K device, which allows duplication onto 25 SD cards and long-duration recording. And now it is time to deal with storage. As is easy to imagine, we are dealing here with an item involved throughout the whole audiovisual production chain, and therefore a broad range of adapted

solutions exist. However, this is especially relevant when focused on something as specific as archiving and preservation. From the standpoint of what is most current nowadays, and leaving the evolution of the technology to the history books, it is now time to share somewhat more accurate information on the various media and systems (some of them more widely used in recording and others for exchange and storage):

 SD (Secure Digital) cards: This is a device adopting a memory card format developed by SanDisk, Panasonic and Toshiba and introduced in 1999 as an evolutionary improvement on the MMC. The standard is maintained by the SD Association, an organization in which several manufacturers take part, and it has been implemented in more than 400 product brands across dozens of categories. It includes four card versions: Standard SD, SDHC (High Capacity), SDXC (Extended Capacity) and SDUC (Ultra Capacity); and comes in three sizes: original standard SD, miniSD and microSD.


A constant innovation flow and a somewhat complex set of names have contributed to creating some confusion among SD card users. Let us try to shed some light on the features of this kind of card. First of all, with regard to storage capacity, values range from up to 2GB for SD cards to between 2TB and 128TB for SDUC cards. Secondly, we can classify SD card types by data speed -the so-called class- and therefore by their suitability for transferring video of varying quality, from SD (standard definition) up to 8K and 360º video:

- Regular speed mode: Class 2 (2MB/s), Class 4 (4MB/s) and Class 6 (6MB/s) cards
- High speed mode: Class 2, Class 4, Class 6, Class 10 (10MB/s), Video Speed Class V6 (6MB/s) and Video Speed Class V10 (10MB/s)
- UHS I (Ultra High Speed I) mode: Class 2, Class 4, Class 6, Class 10, U1 (10MB/s), U3 (30MB/s), V6, V10 and V30 (30MB/s)
- UHS II (Ultra High Speed II) mode: Class 4, Class 6, Class 10, U1, U3, V6, V10, V30, V60 (60MB/s) and V90 (90MB/s)
- UHS III (Ultra High Speed III) mode: Class 4, Class 6, Class 10, U1, U3, V6, V10, V30, V60 and V90

In summary, concerning the immediate present and short term, V60 and V90 SD cards are ready for FHD, 4K, 8K and 360º video recording.

 CF (CompactFlash) cards: This was originally a data storage medium used in portable electronic devices. As a storage device it typically uses flash memory within a standard case, and it was specified and manufactured for the first time by SanDisk Corporation in 1994. There are two main types of CF cards: Type I and Type II, the latter slightly thicker. Three card speeds exist: original CF, CF+/CFast 2.0 and CF 3.0.

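Matching a card's Video Speed Class to a codec is simple arithmetic against the guaranteed sustained write speeds of each class (a sketch; the bitrates in the examples are typical codec figures, not any manufacturer's specification):

```python
# Minimum sustained write speed (MB/s) guaranteed by each
# Video Speed Class.
VIDEO_SPEED_CLASSES = {"V6": 6, "V10": 10, "V30": 30, "V60": 60, "V90": 90}

def minimum_class(bitrate_mbps: float) -> str:
    """Pick the slowest (cheapest) class whose guaranteed MB/s
    covers a recording bitrate given in Mbit/s."""
    required_mb_s = bitrate_mbps / 8  # bits -> bytes
    for name, mb_s in sorted(VIDEO_SPEED_CLASSES.items(),
                             key=lambda kv: kv[1]):
        if mb_s >= required_mb_s:
            return name
    raise ValueError("No SD Video Speed Class sustains this bitrate")

print(minimum_class(50))    # 50 Mbit/s HD codec -> V10
print(minimum_class(400))   # 400 Mbit/s 4K intra-frame codec -> V60
```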

As for storage capacity, we can find 32GB, 64GB, 128GB, 256GB and 512GB cards. Two pieces of information associated with these cards need clarification in order to better understand their performance. On the one hand, the UDMA (Ultra Direct Memory Access) acronym: this is a standard interface (in ATA mode) allowing higher transfer rates, and it is associated with the card version. Ranging from UDMA 0 (16MB/s) to UDMA 7 (167MB/s) -corresponding to version 6.0- the latter is now the state of the art in maximum transfer speed for CF cards. On the other hand, the second detail refers to the VPG (Video Performance Guarantee) specification, which ensures a minimum guaranteed writing speed for professional video capture (MB/s), included from version CF 5.0 onwards. Thus, we have VPG 20 (equivalent to 20MB/s), VPG 65 (65MB/s), and so on. Please note that a CFast 2.0 card achieves reading speeds of up to 525MB/s and writing speeds of 450MB/s, making it more suitable for high-performance recordings such as those required by 4K and 8K. For HD and FHD, a regular CF card reaching reading speeds up to 160MB/s and writing speeds up to 155MB/s will suffice.

 P2 (a trademark of "Professional Plug-in") cards: This is a professional digital solid-state recording medium. The storage format was introduced by Panasonic in 2004 and is especially tailored to electronic news gathering (ENG) applications. This kind of card is basically a RAID of Secure Digital (SD) memory cards packaged in a die-cast Type II PC card (formerly PCMCIA), in such a way that the data transfer rate increases as storage capacity does. There are three P2 card series: the R series and the F series (transfer rate up to 1.2GB/s), both providing storage capacities of 16GB, 32GB and 64GB.


And the latest version: the express series (50% thicker than conventional P2 cards; writing speed up to 1.2GB/s; reading speed up to 2.4GB/s), which is used for VariCam recording and features 256GB capacity. There is also a microP2 series, with a size nearly identical to that of an ordinary SD card.

 SxS cards: This is a type of memory card based on ExpressCard technology and especially created as a video medium by Sony in cooperation with SanDisk. It was introduced in 2007. There are two versions of the ExpressCard form factor: ExpressCard/54, measuring 54mm in width with a size similar to a PC Card; and the 34mm ExpressCard/34. Three models are available: SxS-1, the basic card; SxS PRO+, which works with the RAID PSZ-RA6T and PSZ-RA4T units in order to provide users with a high-speed data transfer solution, with 3.5Gbps reading speed and 3.2Gbps maximum writing speed, and storage capacities of 64GB, 128GB and 256GB; and, last, SxS PRO X cards, which make use of the advanced PCI Express 3.0 interface, featuring ultra-quick transfer rates up to 10Gbps and storage capacities of 64GB and 128GB.

 XQD cards: In November 2010, SanDisk, Sony and Nikon presented this medium, which supports neither CompactFlash nor CFast, under the PCI Express interface. In 2012, Sony marketed its first XQD card. After the N, H and S series, SONY presented the G series with capacities of 32GB, 64GB, 128GB and 256GB, allowing reading/writing speeds of 440/400MB/s. This meant the highest reliability and increased performance for 4K users.

 CFexpress cards: Their introduction was announced in 2016. In 2019, specifications were laid down for this kind of memory card,


which comes in three sizes: Type A, Type B and Type C. CFexpress 2.0 Type B is the new standard: it uses the same form factor and interface as XQD but works under PCI Express Gen3 x2 for higher speeds, lower latencies and lower energy consumption. Sony defines this as the natural evolution of the XQD and CFast standards, reaching reading transfer rates up to 1,700MB/s and writing transfer rates up to 1,480MB/s, which makes it a suitable solution for 4K. Three different storage capacities are available: 128GB, 256GB and 512GB.

 Removable hard disks: Metal disk units featuring rigid magnetic surfaces and mechanical reading/writing


elements placed within a case and equipped with an appropriate connector that determines access speed, size and storage capacity. They can be internal or external (portable).

 SSD hard disks: SSD is the acronym for Solid State Drive. These disks, as their name suggests, use memory comprising semiconductors, also known as solid-state memory. They can be internal or external (portable). This kind of disk is the most usual at present, as it has a number of advantages that perfectly cater to the needs and situations of the audiovisual sector, especially those found at a recording session or shoot: faster boot-up, greater writing speed and higher reading speed (up to ten times those of traditional hard disks), low reading and writing latency (hundreds of times faster than mechanical hard disks), lower power consumption and heat generation, no noise, improved security, resistance (they endure drops, hits and vibrations without breaking as they have no mechanical elements), and lower weight and size. The SSD manufacturers controlling this market are SAMSUNG, SEAGATE and WESTERN DIGITAL; there are also proprietary solutions from specific recorder or camera manufacturers such as RED (RED DIGITAL CINEMA


MINI-MAG 480GB), ARRI (Codex Compact Drive 1TB) or AJA (KiStor 1TB); and SSDs have even been developed for ATOMOS recorders: SONY AtomX (2TB), G-TECHNOLOGY Master Caddy 4K (1TB), ANGELBIRD (1TB).
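Whatever the medium, the figure that actually matters on a shoot is minutes of recording per card or drive (a quick sketch; the capacities and bitrates in the examples are illustrative, not manufacturer specs):

```python
def recording_minutes(capacity_gb: float, bitrate_mbps: float) -> float:
    """Approximate recording time: capacity in decimal GB
    (as marketed), codec bitrate in Mbit/s."""
    capacity_megabits = capacity_gb * 1000 * 8  # GB -> Mbit
    return capacity_megabits / bitrate_mbps / 60

# A 512GB SSD with a 100 Mbit/s UHD long-GOP codec:
print(round(recording_minutes(512, 100)))   # ~683 minutes
# A 64GB card with a 400 Mbit/s intra-frame 4K codec:
print(round(recording_minutes(64, 400)))    # ~21 minutes
```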

 RAID systems: RAID is the acronym for "Redundant Array of Independent Disks". This is a technology combining several rigid hard disks (HD) into a single logical unit in which data are stored across all disks (redundancy). It was born at the University of California, Berkeley (USA) in the late 80s.

There are 6 RAID levels, which have nothing to do with JBOD (Just a Bunch of Disks) or SPAN (N-RAID) storage:

- RAID 0 – This level is also known as Striping (even splitting of data)
- RAID 1 – Also known as Mirroring
- RAID 2 – Fault detection in rigid disks
- RAID 3 – In this level, data are spread across the disks of the array save for one, which is used for storing parity information
- RAID 4 – An improvement on RAID 3 that offers higher speed and supports larger files
- RAID 5 and 6 – Parity in the array

Some relevant manufacturers in this field are STARDOM (SohoRaid 4 Hdd model), TERRAMASTER (D5THB3 5 Hdd model), LACIE (6Big or 12Big models) and OWC (ThunderBay 4 RAID Dual model), among others.

 Shared Media Servers: These are solutions that make good use of the basic technologies implemented in the IT industry in general and are streamlined for


stricter requirements allowing for real time, high availability and high capacity in TV broadcast and 4K production environments. In order to get to know these servers in some more detail, the storage methods must be mentioned:

- DAS or Direct-Attached Storage is the traditional storage method and entails connecting the storage device directly to the computer or server. Its drawback is that this storage is not usually shared with other devices.
- NAS or Network-Attached Storage is storage accessed through a network, in which a computer acts as server and shares the drive with other equipment as required. The server acts as an intermediary and reads from and writes to the shared volume.
- Clustered NAS is an improved version of NAS based on the availability of several servers that share the same drives, thus enabling a better allocation of workloads, with the additional advantage of having more Ethernet communication interfaces.
- SAN or Storage Area Network is a storage system in which client equipment has the ability to read from and write directly to the shared volume as if it were local storage; either through the iSCSI protocol, which is more economical although it offers lower throughput, or through Fibre Channel, which provides much lower latency and a better sustained bandwidth average.
- Cloud is a kind of storage accessed through an Internet or IP connection to remote servers located outside the local network.

Manufacturers offering the best-implemented solutions are: AVID, PROMISE TECHNOLOGY, G-TECHNOLOGY, SM DATA, DELL EMC ISILON, QUANTUM, QNAP SYSTEMS and EDITSHARE.

 CPM tapes or cartridges: Based on the principles applied to magnetic tapes offering mass storage capacity, these devices have become an option for the challenges we face when archiving 4K -and shortly 8K- materials, with very high transfer rates at a low cost while providing excellent preservation times. The two most popular options are:


- LTO (Linear Tape-Open): originally developed in the late 1990s by HEWLETT-PACKARD, IBM and SEAGATE, the companies that set up the LTO Consortium. Other manufacturers such as QUANTUM and SONY also offer this solution. Since 2000, generations LTO-1 to LTO-8 have appeared, and the roadmap extends to LTO-11 and LTO-12, with announced native capacities of 96TB and 192TB, respectively. The figures for the three most recent generations on the roadmap are LTO-8 (12TB native), LTO-9 (24TB native) and LTO-10 (48TB native).
- ENTERPRISE: These are high-end tapes that cover storage needs exceeding 5 PB, the two most popular proposals being Oracle's and IBM's. IBM's 3592 series spans current generations from Gen 1 (300GB native) to Gen 6 (20TB native, 2018), with future Gen 8 (30TB native) and Gen 9 (40TB native) generations to come. Oracle's proposal is marketed as StorageTek; taking 1995 as a first reference date, with a native capacity of 10GB, the T10000 series now reaches up to 8,500GB native.

Tape-based technology excels at material preservation times, which depend directly on the tape coating. Barium Ferrite technology preserves data for much longer than the Metal Particle (MP) technology traditionally used in older generations of tapes such as LTO-4 and LTO-5. The first LTO tape manufactured with Barium Ferrite was LTO-6, and the coating is used in 100% of Enterprise tapes. As a result, Barium Ferrite is the only tape-based technology capable of preserving data for over 30 years; that is, seven times longer than a hard disk and 10 years longer than any other tape-coating technology.

Other solutions that exist at present, such as SmartMedia, Memory Stick or xD-Picture Cards, have not become as popular as those mentioned above; some because of their failure to increase performance, others due to a lack of adoption in the audiovisual market by the developers and manufacturers behind them. As for cassette tapes, a faithful partner in audiovisual productions for many of the most seasoned technicians, they are hardly ever used at present for recording and exchanging material.

Performance, quality, efficiency, capacity, speed, security and price are, and will be, the parameters considered by manufacturers and developers of recording, duplication and storage resources, now and in the future. What has been written in this article, although the present today, will sooner than we think become part of the history of audiovisual technology. That is why it is vital to identify signal, format and media when implementing solutions that cater to the needs and demands of the market, adapted to the times in which we are living.
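Returning to the RAID levels discussed earlier: the single-parity levels (RAID 3, 4 and 5) all rest on the same arithmetic. The parity block is the XOR of the data blocks, so any one lost block can be rebuilt from the survivors. A minimal sketch in Python, purely illustrative and nothing like a real controller's implementation:

```python
from functools import reduce

def parity(blocks):
    # XOR all data blocks column by column; this is the block a
    # RAID 3/4/5 array stores on the parity disk
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def rebuild(surviving_blocks, parity_block):
    # XOR the surviving blocks with the parity to reconstruct the lost one
    return parity(surviving_blocks + [parity_block])

# Three equally sized data "disks" and their parity
d0, d1, d2 = b"4K media", b"archive!", b"parityOK"
p = parity([d0, d1, d2])

# Lose d1, then rebuild it from the remaining blocks plus parity
assert rebuild([d0, d2], p) == d1
```

RAID 6 extends the same idea with a second, independently computed parity block so that any two simultaneous disk failures remain recoverable.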


Photo Credit Jason O'Rear



A state-of-the-art, future-proof venue

It can be said that it is simply spectacular. Chase Center, home to San Francisco's Golden State Warriors since September 2019, takes the latest technology to the highest level. From its impressive exterior screen, unique in the city, to its epic Samsung-powered video scoreboard, the largest in the NBA, the venue offers a one-of-a-kind experience lived by thousands of attendees every night. TM Broadcast had the opportunity to interview a representative of Chase Center, who shared with our readers everything the venue has to offer.



Does technology live in the core of Chase Center? The Warriors are looking to create the best fan experience possible at Chase Center, from the moment fans wake up to the moment they go to


sleep. As part of that experience, we'll utilize technology to enhance the fan experience, not just for technology's sake.

Chase Center is a venue ready to host from Warriors games to concerts of some of the

world's greatest artists. How does AV adapt to the different needs of each event? A truly unique feature of our building is the retractable center-hung video board and gantry system, which closes to


create a blank canvas for touring shows, whether they’re planning a standard end-stage concert or an in-the-round experience like our first concert event with Metallica and the San Francisco Symphony.

Even featuring some of the most advanced technology in sports + entertainment venues in the world, Chase Center still manages to be an environmentally friendly arena. Could you explain to our readers how your

technology adapts to protect the planet? In addition to energy efficient LED fixtures, our building uses a large quantity of occupancy and ambient light sensors. The centralized Lutron control system is programmed to

Chase Center Exterior Southeast. Photo credit Jason O'Rear and Chase Center.



minimize energy usage and even provides a live dashboard of energy usage and efficiency data.

Photo credit Jason O'Rear

The NBA is more than just a basketball game. It is a spectacle in which audio and video have an important role. Which audio system are you deploying? How many

devices are implemented all over the stadium? Which has been the biggest challenge of its implementation?


Our audio system leverages JBL speakers, Crown amplifiers, BSS Soundweb London DSP, Audinate Dante digital audio networking, and a

Yamaha CL5 mixing console. The biggest challenge was ensuring every seat has an exceptional audio experience, whether it is

courtside or the upper level. Thankfully, WJHW and Parsons Electric did a phenomenal job creating an acoustic and speaker coverage plan and then testing throughout the installation process to tweak, tune, and refine all components based on real-world conditions. The end result is a world-class audio experience which consistently impresses me every time I walk the building during an event.

Chase Center features the NBA's largest center-hung LED scoreboard. Could you provide us with more details of this breathtaking screen? Why did you decide to install such a massive display? It measures 82'9"L x 52'8"W x 47'8"H and is made up of 24,959,232 pixels. The size of the display allows the Warriors to provide a lot more information related to team performance, and just general content that the fans are hungry for. At a preseason game against the Los Angeles Lakers, the


Chase Center screens not only provided the regular kind of statistical information (shots made and missed) for players on the court, they also had some changing screens that could do tricks like show exactly where on the court a player just made a shot from – and what that player’s “heat check” stats were for all shots taken so far that game.

In addition, you've placed over 1,100 LED screens throughout the venue. What could you tell us about this? Are they used for digital signage purposes? Our LED displays in the arena bowl allow our Game Presentation team to create highly impactful digital video content which envelops the fan from all angles. Some displays are used for video while some are persistent stat displays which provide real-time data throughout Warriors' games. We also have 1,000+ television displays which we use as digital signage for wayfinding, menu boards,

advertisement, and even as endpoints to display television programming for premium spaces or private office spaces.

Last but not least, Chase Center has a large external screen. What are its highlights? It is the only outdoor video board in the city of San Francisco and measures 68’ x 38’. It has many purposes including being used for movie nights, game watch parties and displaying important consumer transit information.

I'm sure these AV resources must be exhaustively controlled. What technology do you use to manage all these systems? We invested in powerful IP-based tools such as SMPTE 2110, isolated DANTE networks for front-of-house and back-of-house audio, an extensible lighting network with DMX outputs throughout the catwalk, and much, much more. My personal favorite tool is the custom GUI for our Audio

Architect system which provides real-time diagnostic data such as temperature and CPU usage from every DSP unit and amplifier throughout the building. When there is an outage, we're able to identify the location and repair it very quickly.

Court View of Scoreboard. Photo Credit Jason O'Rear

Some say that IP technology will manage each AV component in venues and arenas in the future. Is Chase

Center ready for these future implementations? The future is already here! Our network infrastructure is incredibly robust and handles a massive amount of multicast traffic. Many of our A/V components are already available for remote management and diagnostics; however, my past experiences with live event audio are the reason I always insist on an analog backup for critical signal paths.

Finally, how does the venue adapt to the needs of those broadcasters who are in charge of the NBA games? Chase Center was created to be a world-class venue for basketball, so broadcast needs were a major consideration during the design phase. We have all the usual fiber backhaul services and cabling infrastructure for up to five simultaneous TV network broadcasts, which will be very helpful when we host our next NBA Finals game!




The cancellation of the massive (and essential) NAB Show, after a short period during which options such as postponing the event were on the table, was a major disappointment for the entire broadcast industry. A few weeks earlier, some agents had stepped forward to raise the alarm about the health and economic chaos that was about to come. It was a turning point for all companies, media outlets and organizations. Nonetheless, each and every one of them has continued to operate, renewing their technologies and reaffirming themselves as relevant entities for information and entertainment.

Manufacturers were the first to face the situation. While many people predicted a halt both in production and in the showcasing of new technology innovations, the remarkably rapid recovery of the Chinese industry and the bets placed in previous years on local manufacturing allowed them to continue presenting their novelties to the world. News like the ones we are about to bring you.

In the previous issue of our magazine we brought you an interesting summary of the latest market movements. By the closing date of this edition, many other companies had presented their news, whether as part of a previously introduced strategic plan or as alternatives for the new media paradigm in which we will find ourselves immersed in the coming months. We bring you a selection of the most interesting ones. We are aware that it is not entirely complete, since day after day numerous companies continue to show their latest bets. Remember that on our website you will find all the updated information. Now, let's get started! By Sergio Julián



Blackmagic Design releases Teranex Mini SDI to DisplayPort 8K HDR Blackmagic Design has announced Teranex Mini SDI to DisplayPort 8K HDR, an 8K DisplayPort monitoring solution for DisplayPort monitors such as the new Apple Pro Display XDR. With dual on-screen scope overlays, HDR, scaled and pixel-by-pixel modes, and 33-point 3D LUTs, Teranex Mini SDI to DisplayPort has been specifically designed for use in professional film and television markets. The rear panel has Quad Link 12G-SDI for HD and Ultra HD as well as up to 8K formats. There is a USB-C style connection for monitors such as the Pro Display XDR, and 2 full-size DisplayPort connections for regular computer monitors. The built-in scaler will ensure the video input standard is scaled to the native resolution of the connected DisplayPort monitor. Or, use the built-in pixel-by-pixel mode to view unscaled HD or 4K content. Customers can even connect both 2SI and Square Division inputs. Teranex Mini SDI to DisplayPort 8K HDR is also ready for the latest HDR workflows. Static metadata PQ formats in the VPID are handled according to the ST2108-1, ST2084 and ST425 standards. ST425 defines 2 new bits in the VPID to indicate a transfer characteristic of SDR or PQ, and ST2108-1 defines how to transport HDR static or dynamic metadata over SDI. There

Teranex Mini SDI to DisplayPort 8K HDR


is also support for ST2082-10 for 12G-SDI, as well as ST425 for 3G-SDI sources. Both the Rec.2020 and Rec.709 color spaces are supported, along with 100% of the DCI-P3 color space.
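A 33-point 3D LUT of the kind the Teranex applies is simply a 33×33×33 lattice of output colors indexed by input RGB; real hardware interpolates between lattice points, but a nearest-point lookup is enough to show the idea. A hypothetical sketch, not Blackmagic's implementation:

```python
def identity_lut(n=33):
    # lut[r][g][b] -> output color; here the identity transform
    step = 1.0 / (n - 1)
    return [[[(r * step, g * step, b * step)
              for b in range(n)] for g in range(n)] for r in range(n)]

def apply_lut(rgb, lut):
    # Snap each 0..1 channel to the nearest lattice point and look it up;
    # production code would interpolate (e.g. trilinearly) between points
    n = len(lut)
    i, j, k = (min(n - 1, round(c * (n - 1))) for c in rgb)
    return lut[i][j][k]

lut = identity_lut()
# Lattice points map to themselves exactly with the identity LUT
assert apply_lut((0.5, 0.25, 1.0), lut) == (0.5, 0.25, 1.0)
```

A color-transforming LUT would simply store different output triples at the lattice points; the lookup logic stays the same.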

Broadcast Pix launches StreamingPix Broadcast Pix™ announces the launch of StreamingPix, a live production and streaming solution designed for easy set up and use, yet powerful enough to produce compelling professional content. StreamingPix leverages many of the same acquisition, production, automation and streaming tools as Broadcast Pix’s professional products, but with easy to use control interfaces and a library of


clips and graphics, which give content “a polished and professional look”. It includes a RoboPix PTZ camera with integrated remote control; a lav microphone; SDI, HDMI, IP and NDI™ inputs, with the ability to input PowerPoint and social media; a library of “ready-to-use” clips and graphics; media-aware macros; and one-to-many streaming to popular streaming services such as Facebook Live, YouTube Live and Vimeo, and virtual meetings including Zoom, Skype and GoToMeeting.

ChyronHego introduces ChyronHego Academy

Offering online classes created by expert product and workflow specialists, ChyronHego Academy is a training and professional development resource for designers and operators. Users will be able to learn topics in a logical order, get hands-on practice, verify their knowledge with short quizzes and become “ChyronHego Certified”. ChyronHego has confirmed that “most courses” are completely free. Once you fill out the form, you will receive a welcome email with further instructions. Shortly after that, users can expect to receive (if requested) the software and a software license, which are not necessary to take the course, but “it's good to practice”, as the company says. Users can enroll in this initiative through this link: rvices/chyronhegoacademy/

The latest version of the Dalet Xtend module enables remote proxy editing for Adobe Premiere Pro

Dalet, provider of solutions and services for broadcasters and content professionals, has released the new version of its Dalet Xtend module, enabling remote proxy editing capabilities within Adobe® Premiere® Pro. Responding to feedback from Dalet customers, this new capability facilitates collaboration between distributed teams.

Dalet Xtend for Adobe Premiere Pro lets remote users edit “quickly and efficiently, anytime, anywhere in low resolution”, with the finished sequence automatically rendered in high resolution back at the production hub. These significant time and resource savings are “especially helpful” for fast-paced news workflows where journalists need to quickly edit stories on breaking news while it is happening, according to Dalet. Additionally, a key highlight of this update is the ability for teams to edit and collaborate remotely. The significantly improved Dalet Xtend browser provides users with the same user experience to search and select content, whether they are working within Dalet Galaxy five or within the Adobe tools. In addition, as part of the continued partnership with Adobe, work is underway to add improvements to the Adobe Panel within the Ooyala Flex Media Platform. The panel allows


Adobe Premiere Pro users to search for content managed by the Platform, as well as trigger workflows for publishing and syndication. New capabilities are being built into the panel following customer feedback.

Digigram unveils IQOYA Guest Preview In light of the coronavirus situation, digital broadcast company Digigram has decided to accelerate the release of its smart and equipment-free solution for home broadcasting: IQOYA Guest Preview. IQOYA Guest Preview is a light remote broadcasting solution that does not

require any equipment or software installation. It is a simple web-based solution for conducting remote interviews of guests, anywhere they may be. The solution turns the user's web browser into a 2-way codec, providing a connection link to journalists working remotely. From their PC or smartphone, journalists can get “immediately” connected with their studio with broadcast-quality audio. According to the press release, it is “simply a smart and reliable alternative to operating from remote locations while respecting lockdown orders”.

Disguise releases its latest software, r17.2 With its latest release, r17.2, disguise introduces improvements to help users optimise their workflow at home, such as Application Mode, and devise new concepts to connect with audiences online through Augmented Reality and 360 video. A key new addition in r17.2 is the ability to run disguise's Designer software in ‘Application Mode’. The new mode enables Designer to run alongside multiple applications so users can perform tasks concurrently and “effortlessly” switch between its software and other apps. r17.2 also introduces support for HTC VIVE tracking accessories, which make it possible to develop AR experiences at home without high-end tracking equipment. Furthermore, r17.2 allows users to create simple AR screens without the need for other tools so they can quickly test out ideas for AR experiences. In addition, with the new ‘Spherical Camera’ in r17.2, users can render 360 video content in disguise, creating engaging content for online audiences.

Fujinon products.

Fujifilm showcases a wide range of 4K and 8K-ready FUJINON products Fujifilm North America Corporation has unveiled the updated FUJINON UA107x8.4 AF (Advanced Focus) 4K broadcast lens, the “world's first 4K box lens that features the Advanced Focus function”. The UA107x8.4 AF incorporates a newly developed phase-detection autofocus sensor, achieving fast, tack-sharp

images with a response speed as fast as 0.45 seconds. The lens also features the company's proprietary image stabilization mechanism and a 107x ultra-magnification zoom that covers focal lengths from 8.4mm to 900mm (1800mm with the 2x extender). It is also compatible with High Dynamic Range (HDR) and vivid color reproduction. Furthermore, Fujifilm showcased the FUJINON UA125X8, a 4K-compatible broadcast lens with “the world's highest zoom ratio” of 125x. The UA125X8 box lens covers a focal length from the wide angle of 8mm to 1000mm, with an F1.7 aperture. It is now the longest and widest 4K field lens for Ultra HD applications in the FUJINON UA series, designed for 4K performance imaging with today's new 4K 2/3" cameras. In addition, the company announced the development of two additional FUJINON 8K broadcast zoom lenses. The FUJINON HP66X15.2ESM (HP66X15.2) box lens reaches the “world's longest” 8K focal


length of 1,000mm while also featuring the “world's highest” zoom magnification of 66x. The FUJINON HP12X7.6ERD (HP12X7.6) is a portable lens covering a range of 7.6mm to 91mm, with the “world's widest” 8K angle of view at 93.3 degrees. The lenses are due to be released in the summer and fall of 2020, respectively. Last but not least, Fujifilm told us that it is developing two advanced technologies for most FUJINON 4K and 8K lenses: ARIA (Automatic Restoration of Illumination Attenuation) and RBF (Remote Back Focus).

Grass Valley details its new cloud-based software Agile Media Processing Platform (GV AMPP) At its GV LIVE Presents – Innovate 2020 event, Grass Valley presented its brand-new cloud-based software-as-a-service (SaaS) Agile Media Processing Platform (GV AMPP). This solution unlocks the power of elastic compute for live sports, news and playout workflows, helping customers easily transition to future-ready public cloud, data center or hybrid infrastructures. The first application available for the platform, AMPP Master Control, has been on air with Blizzard since

the opening of the Overwatch League 2020 season in early February. With AMPP Master Control, Activision Blizzard Esports' production teams can create “highly configurable”, “virtual” master control rooms accessible from anywhere in the world. All monitoring and local program distribution processes take place entirely in the cloud. GV AMPP allows production teams to flexibly create customizable workflows with a variety of apps, such as multiviewers, router panels, test signal generators, switchers, graphics renderers, clip players and recorders. GV AMPP is the core technology powering the


newly announced GV Media Universe, a comprehensive ecosystem of cloud-based tools and services. Furthermore, Grass Valley has recently announced the new K-Frame XP video processing engine, which features true single-stream, full-raster 4K processing at 2160p; the 4K UHD-ready LiveTouch 1300, the latest member of its live highlights and replay production system; and the LDX 100 camera platform, a high-speed, native UHD camera built specifically for the rigors of IP-connected workflows.

Hitachi Kokusai tells us everything about the SK-UHD8060B, its new 8K camera Hitachi Kokusai Electric America, Ltd. (Hitachi Kokusai) has recently introduced its new SK-UHD8060B dockable 8K UHDTV camera. The company's fifth-generation 8K acquisition solution, the SK-UHD8060B features an innovative organic photoconductive film (OPF) CMOS image sensor to capture 8K video with expanded dynamic range. The SK-UHD8060B has been designed to maximize the effectiveness of its support for the Hybrid Log Gamma (HLG) HDR specification, and combines it with built-in signal-to-noise management to optimize visual performance. The SK-UHD8060B's Super 35mm OPF CMOS sensor delivers 8K video with 7680×4320 resolution. The new camera conforms to global standards including UHD-2, the ITU-R BT.2020 color specification, ITU-R BT.2100 for High Dynamic Range, and Japan's ARIB specifications. The SK-UHD8060B supports PL-mount lenses and can output multiple television standards including 8K, 4K/UHD, 1080p, 1080i and 720p. The new camera head can be paired with a dockable recorder that uses a visually lossless codec, avoiding the time-consuming, post-acquisition processing typically required with RAW 8K recording approaches. The SK-UHD8060B, as well as Hitachi Kokusai's new SK-UHD8240 240fps high-speed 8K camera, will be used at the major international sporting event in Tokyo next year.

The SK-UHD8060B's Super 35mm OPF CMOS sensor delivers 8K video with 7680×4320 resolution



The new Lawo Flow State Master functionality defines VSM as the master of the true flow routing status.

Lawo reveals VSM Release 2020-2 With Release 2020-2, Lawo introduces further control enhancements for VSM, Lawo's IP broadcast control system. Thanks to the new Flow State Master functionality, VSM ensures that an IP infrastructure is in a deterministic routing state at all times. In addition, it restores this state automatically after partial or full outages in the network infrastructure. Another add-on service for multi-studio facilities with frequent changes to the studio floor setup is the Wallboxing functionality,

which allows moving devices across an API-controlled IP infrastructure while maintaining flow connectivity. With Network Bridging, two or more independent VSM-controlled IP installs can share relevant sources. Whenever one network infrastructure needs to access selected sources of another network infrastructure, VSM facilitates this while preserving most of the functionality exposed to operational workflows. An operator working on the signal-providing system

marks sources as “shared” so that operators on the signal-consuming system can access these signals for local use. The tielines between systems are automatically and silently managed by VSM, based on actual signal consumption across the connected installs. Typical applications are nationwide facility connections, truck-to-facility connections, and increasingly also truck-to-truck connections. The new VSM Release 2020-2 will be available from summer 2020.


Lumix S1H users can now benefit from 5.9K Apple ProRes RAW video recording thanks to the latest update of Atomos Ninja V Atomos and Panasonic have announced updates to the award-winning Ninja V HDR monitor-recorder and Panasonic LUMIX S1H. Together, they now record 5.9K Apple ProRes RAW files direct from the camera's sensor. The Ninja V captures 12-bit RAW files from the S1H over HDMI at up to 5.9K/29.97p in Full-frame, or 4K/59.94p in Super35. These unprocessed files are extremely clean, preserving the maximum dynamic range, color accuracy and every detail from the S1H. The resulting ProRes RAW files allow for greater creativity in post-production, ideal for both HDR and SDR (Rec.709) workflows. In addition, more and more cinematographers are now choosing to shoot with anamorphic lenses. The Ninja V and S1H combination caters to them with the new 3.5K Super35 Anamorphic 4:3

RAW mode. The combination can be used as an A-camera or smaller B-camera on an anamorphic RAW production. Each frame recorded in ProRes RAW has metadata supplied by the S1H. Apple’s Final Cut Pro X and other NLEs will automatically recognise ProRes RAW files as coming from the S1H and set them up for editing and display in either SDR or HDR projects automatically. Additional information will also allow other software to perform extensive parameter adjustments. Ninja V and S1H owners will gain these new RAW features free of charge. The Ninja V will be able to update by downloading and

installing AtomOS 10.5 from the Atomos website. Panasonic will release a free firmware update on their LUMIX Global Customer Support website. Both updates will be available on 25 May, 2020.

Magewell boosts its Capture Express software with SRT streaming support Magewell has announced a new upgrade of its free Capture Express recording application that expands the software’s streaming capabilities. Version 3.2 adds new support for multiple streaming technologies including the Secure Reliable Transport (SRT) protocol developed and open-sourced by Haivision.

Magewell Capture Express



Version 3.0 of Capture Express initially offered easy recording and previewing functionality, and the subsequent version 3.1 added the ability to stream H.264-encoded video using the RTMP protocol. Now, version 3.2 further expands the software's streaming flexibility with support for the SRT protocol as well as MPEG-TS over UDP or RTP. Enabling secure, reliable, low-latency video delivery over unpredictable networks, SRT ensures high-quality streaming experiences even over the public internet. Magewell is a member of the SRT Alliance. Capture Express 3.2 automatically detects compatible graphics hardware in the host system and leverages GPU-accelerated H.264 encoding when possible, using the CPU for encoding only as necessary. Capture Express runs on the Windows operating system and is compatible with all current Magewell capture product lines — including USB Capture Gen 2 and USB Capture Plus external

devices; Pro Capture PCIe cards; and Eco Capture M.2 hardware — as well as Magewell's first-generation capture cards and boxes.
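Alongside SRT, the plain MPEG-TS over UDP option added in version 3.2 is easy to picture: the stream is a sequence of fixed 188-byte TS packets, conventionally grouped seven to a datagram (1,316 bytes) to stay under a typical Ethernet MTU. A minimal sketch of that transport framing, illustrative only and nothing to do with Magewell's code:

```python
import socket

TS_SIZE, SYNC = 188, 0x47  # every MPEG-TS packet is 188 bytes, starting 0x47

def null_packet():
    # A PID 0x1FFF "null" packet: 4-byte header, then stuffing bytes
    return bytes(bytearray([SYNC, 0x1F, 0xFF, 0x10]) + b"\xff" * (TS_SIZE - 4))

def send_ts(packets, addr, group=7):
    # Group 7 TS packets per UDP datagram (7 * 188 = 1316 bytes)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        for i in range(0, len(packets), group):
            sock.sendto(b"".join(packets[i:i + group]), addr)

# Loopback demo: receive one datagram and check the framing
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
send_ts([null_packet()] * 7, rx.getsockname())
data = rx.recv(2048)
assert len(data) == 1316 and data[0] == SYNC
rx.close()
```

SRT wraps essentially the same payloads in its own retransmission and encryption layer, which is what makes it usable over the public internet where raw UDP is not.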

MediaKind presents Aquila Broadcast, its “next-generation” broadcast solution MediaKind has launched Aquila Broadcast, a broadcast solution that enables broadcasters and operators to make a “seamless, step-by-step transition” from appliance-based platforms to all-IP media delivery. Powered by software-based video compression, control and stream processing, Aquila Broadcast features high picture quality with resolutions up to UHD with HDR support, while also offering wide codec support and significant bandwidth savings. The company has developed Aquila Broadcast to ensure compatibility across all-IP technologies and standards, to increase transponder density while retaining video quality, and to be deployable as an appliance or as pure

software, with the ability to support on-premise and cloud-based implementations. The solution supports the following video codecs: MPEG-2, MPEG-4 AVC and HEVC. To enable industry-leading picture quality and bandwidth savings with an extensive appliance, software and cloud support system, Aquila Broadcast combines a number of leading MediaKind consumer delivery products, including Encoding Live, Stream Processor and nCompass control.

Mobile Viewpoint makes available TrolleyLive Remote Pro In response to requests from customers whose journalists, reporters and presenters are being forced to work from home, Mobile Viewpoint introduces TrolleyLive Remote Pro, a solution designed as a self-contained camera unit with all the functionality required to present a professional live stream from a remote location.

TrolleyLive Remote Pro

TrolleyLive Remote Pro allows content creators to record, edit and stream live full HD-quality video back to a broadcast center, or stream directly to social media. The unit contains a PTZ camera, a screen and an encoder that can connect to the internet via a LAN port or WiFi, with an upgrade option to use multiple SIM cards. The encoder utilizes H.265 and can bond together SIM cards from different providers for reliability. The box has an XLR input port for connecting any compatible microphone; a microphone can also be included as an option.

The unit is designed to be mobile and, with this versatility, to be shipped by broadcast and media organizations anywhere in the world at a moment's notice for professional and reliable live streaming. And by accessing the complementary online LinkMatrix management portal, streams can be monitored, edited and published online easily and remotely.

MOG now supports ProRes formats MOG Technologies, a supplier of end-to-end solutions for broadcasters and professional media, announces full support of ProRes formats in all its Media Production product line. The release of products like MAM4PRO (4PRO mediaREC, 4PRO mediaMOVE and 4PRO mediaPLAY), mDECK and mxfSPEEDRAIL (mediaCARD, mediaMOVE, mediaPLAY, mediaREC and mediaNDI) allows for

content creators and producers to natively encode and decode high-quality ProRes content, with significantly reduced video file sizes while preserving full-frame 10-bit 4:2:2 quality. Products now support Apple ProRes 4444 XQ, ProRes 4444, ProRes 422 HQ, ProRes 422, ProRes 422 LT and ProRes 422 Proxy. These will also include the high dynamic range modes HLG and HDR10 (2084 PQ) for a better video experience, when available.

Nagra and Avid introduce a new forensic watermarking plug-in for editing and collaboration workflows NAGRA, a Kudelski Group (SIX:KUD.S) company and provider of content protection and multiscreen television solutions, has announced the launch of a new forensic watermarking plug-in for editing and collaboration workflows on Avid's Media Composer video editing software. The NexGuard Plug-in for


Editing Software allows studios, content owners and post-production houses to ensure protection of high-value assets when sharing pre-release content such as feature films or TV series with their editing department, reviewers or their creative agencies. The integration simplifies the watermarking process within the editing software while providing traceability and a strong deterrent against pre-release content leaks. Media Composer is the first non-linear editing software to be integrated with the NAGRA plug-in. The NexGuard Plug-in for Editing Software, which works as a video filter, “quickly and seamlessly” adds a unique, robust and imperceptible forensic watermark to each copy of the content exported from the non-linear

editing software, regardless of input and output file formats. The NexGuard Plug-in for Editing Software is supported by the NexGuard Detection Service, a cloud-based forensic watermark detection service that is fast, highly scalable and fully automated, meeting the needs of content owners who require detection in conjunction with anti-piracy services. NexGuard forensic watermarking solutions are a key component of NAGRA’s comprehensive line-up of solutions to guard against service and content piracy.

Panasonic showcases KAIROS, “the future of video production” Panasonic has announced KAIROS, a next-generation live production

Panasonic KAIROS


IT/IP video processing platform that offers an open architecture system for live video switching with input and output flexibility, resolution and format independence, “maximum” CPU/GPU processor utilization and “virtually unlimited” ME scalability. As a native IP ST 2110 system, KAIROS supports transitions to live IP workflows and “can eliminate dedicated hardware constraints”, according to the company. For television studios, KAIROS integrates into a facility’s ST 2110 infrastructure without requiring additional IP gateways. The platform features standardized IP connectivity in place of one-to-one video inputs and outputs. KAIROS also features uncompressed processing of SDI, ST 2110 as well as NDI streams of


any resolution, such as HD and UHD, and in any format, whether 16:9 or non-traditional formats such as 32:9 for an LED backdrop display. KAIROS processing latency can be as low as one frame, and the platform also supports Precision Time Protocol (PTP) synchronization. In addition, the KAIROS system centers around the Kairos Core main frame, which handles all video processing. The Version 1 main frame will manage video I/O through a Deltacast gateway card and/or a Mellanox 100 GbE NIC (Network Interface Card) connection to COTS IP devices and SDI and HDMI gateways. Control will all be managed on devices operating over a separate Gbit Ethernet network, including Kairos Creator, GUI software for setup and a software-based control panel, and Kairos Control, Panasonic’s 2ME-style hardware control panel. Panasonic also announced it has established a KAIROS Alliance Partners program which includes IP COTS hardware manufacturers and

SmartBoom® headsets

leading vendors of graphics, automation, and media servers. Furthermore, Panasonic has unveiled the AW-UE100W/K, a 4K PTZ camera that includes various output interfaces such as 12G-SDI and high-bandwidth NDI; and the AK-HC3900GJ/GSJ 1080p HDR studio camera system.

Pliant Technologies presents SmartBoom® headsets for intercom systems Pliant Technologies is showcasing the complete line of SmartBoom LITE and SmartBoom PRO headsets, which include

“revolutionary” SmartBoom technology, an accessible flip-up microphone feature that acts as an on/off switch for quick and convenient mic control. Featuring a single-ear lightweight design, the SmartBoom LITE Headset (PHS-SB11L) offers “enhanced” voice-optimized dynamic or electret microphone options, a low-distortion speaker, and a foam ear pad “for added stability and comfort”. Moreover, SmartBoom PRO headsets, offered in both single (PHS-SB110) and dual-ear (PHS-SB210) variations, are available in terminations for almost “any application” and


provide ambient noise reduction and high-quality, clear audio. SmartBoom PRO headsets also feature a collapsible earpiece design with field-replaceable cable, windscreens, and padding for the ear, headband, and temple. Available in 4-pin female, 5-pin male, unterminated, and dual 3.5mm connectors, SmartBoom LITE and PRO Headsets feature a non-reflective rubberized matte black finish. Additionally, the headsets’ design allows for either right- or left-side use and has a low clamping force, soft ear cup padding, and adjustable headband.

Polygon Labs makes available Creative Hub and Training Lab, two subscription-based graphics services Polygon Labs has launched two work-at-home service initiatives, Creative Hub and Training Lab. Polygon Labs’ Creative Hub provides a remote, on-demand creative and professional services subscription that specialises in real-time graphics, channel branding and virtual production. The service allows news publishers and broadcasters to book any size project for virtual studios, augmented reality, branding, real-time graphics packages, interactive storytelling graphics, video walls, multi-screen tickers and downstream channels, using a flexible utilization plan with no blackout dates. The subscriber simply purchases a number of creative service hours per month and uses them as and when desired. Creative Hub provides services for real-time production using Vizrt, Unreal Engine and other real-time graphics platforms, as well as post-production projects utilizing Adobe and Autodesk solutions. Projects are developed remotely and delivered using online access to project files. Furthermore, Training Lab is the company’s new expert remote training service, which specialises in Vizrt real-time graphics, workflow solutions and products for channel branding, template creation, virtual production, data visualisation and interactive storytelling. Users can either sign up to an on-demand session, which provides Vizrt training for a specific course, or a subscription session, which provides a continuous private or public training program. With a basic subscription, users can join any of the monthly public sessions announced in a shared training calendar. Using a premium subscription, users get additional hours of private Q&A sessions and project support.

Training Lab is Polygon Labs’ new expert remote training service

Qligent Foresight SQM Charts.

Qligent unveils Foresight, its second-generation, cloud-based service to mitigate content distribution issues Designed to help broadcasters, MVPDs, and OTT service providers understand and correlate factors that contribute to higher audience engagement, Qligent’s Foresight provides real-time 24/7 data analytics based on system performance and user behaviour.

Qligent Foresight aggregates data points from end user equipment – including set-top boxes, smart TVs, and iOS and Android devices – as well as CDN logs, stream monitoring data, CRMs, support ticketing systems, network monitoring systems, and other hardware monitoring systems. Using scalable cloud processing, its integrated AI and Machine Learning provide automated data collection, while its deep learning technology mines data from hundreds or


thousands of layers of data. Big Data technology then correlates and aggregates the data for real-time, cloud-based quality assurance, helping service providers quickly address distribution issues. Foresight creates a controlled data mining environment that is not compromised “by operator error, viewer disinterest, user hardware malfunction, or other variables”, according to the press release. As a result, service providers have a prevention-oriented toolset that can predict conditions negatively impacting audience engagement. Customizable reports summarize key performance indicators

(KPIs), key quality indicators (KQIs), and other criteria for multiplatform content distribution. All findings are presented on Qligent’s dashboard, accessible from a computer or mobile device.

Ross Video to launch XPression V10 in July Ross Video has shared that XPression V10 is scheduled for release in July 2020. The release includes some significant additions. Firstly, V10 includes a fully rewritten, multi-threaded video codec that supports UHD, HDR and Wide Colour Gamut. This new software (i.e. not hardware-dependent) codec adds

support for ten-bit video files and is backwards compatible with older XPression assets – no re-encoding is required. Next, the XPression Remote Sequencer has been enhanced to enable customers to create new rundowns, create new Take items, modify existing MOS objects, manage rundowns (import/export/save them) and build rundowns ‘offline’ without activating them. In addition to V10, Ross introduces a new XPression Sequencer Gateway – a special edition of the XPression Gateway for non-MOS users – and XPression ElectIt!, an elections solution


that can drive tickers, full screens, and lower-third results. Finally, Ross is announcing a new XPression MOS HTML5 plugin that will integrate with XPression Clips, UX (Ross Video’s Virtual Solutions control interface) and XPression Maps.

SGO pre-releases Mistika 10 “A color revolution”: this is how SGO has defined Mistika 10, the latest update of its software suite. Miguel Ángel Doncel, CEO at SGO, tells us more: “With a brand new color interface being in the core of this revolutionary upgrade, Mistika has started to shed its skin, initiating a new era of Mistika Technology. Mistika 10 offers its users a facilitated way to unlock the exceptional power of our technology, provide them with more control and boost the productivity of the color-based workflows.” In this widely anticipated upgrade, available for Mistika Boutique and Mistika Ultima, users will also find many other new

features and enhancements boosting their overall productivity, such as a completely redesigned visual editor, metadata embedding for RAW media, embedded scale parameters for all clips, quick access to shapes, new common shape presets for all FX and a reorganized menu for histories, among others. The complete list of all new features and enhancements made to Mistika 10 can be found here: ms/index.php?/topic/1377what-is-new-in-mistika-10/ To get as many reactions as possible from users and the creative community before the final release, Mistika 10 is currently in Open Beta and available to all Mistika Boutique and Mistika Ultima users in the Download section of their Online SGO Account. This pre-release is also available in the completely free-of-charge Eval version of Mistika Boutique, so everyone has the opportunity to try it and give their feedback on the SGO Forum before the formal release.

Woody Technologies launches Woody Ingest Live Woody Ingest Live is the latest product released by Woody Technologies, resulting from its experience in providing ingest software solutions to major players of the industry, and from a technology partnership with Libero Systems, an experienced player in the live playout and ingest area. Woody Ingest Live allows broadcasters to record SDI, NDI and web live sources straight to their production environment. It supports recording of Skype calls and YouTube Live feeds, among multiple streaming protocols. Woody Ingest Live integrates with the leading MAM and PAM solutions, enabling edit-while-capture for Avid® Media Composer and Adobe® Premiere Pro. In addition, Woody Technologies has unveiled Woody in2it Go, which allows remote ingest “from any device”; and Woody Social, a search engine powered by




Blackmagic Design

URSA Mini Pro 4.6K G2 Bridging the gap between cinema and TV



Lab test performed by CARLOS MEDINA, Audiovisual Technology and Camera Expert and Advisor

In 2002, the film Star Wars: Episode II – Attack of the Clones (a project by American filmmaker George Lucas), which was fully shot digitally, marked the first steps in the industrial convergence of the audiovisual sector towards digital technology. That date should be highlighted on many counts, and particularly because it crossed the line between two professional areas, cinema and television. Both were quite distinct at a time when image capture and handling processes were still analogue. With the presentation of the URSA Mini Pro 4.6K G2 camera to the industry (March 2019), manufacturer Blackmagic contributed its bit towards a more evident convergence and unification of results (in terms of images achieved) for both the world of cinema and television production environments.

Blackmagic has succeeded in interpreting the changes taking place in audiovisual production and the demands viewers themselves are now making. Thus, a great opportunity for the URSA Mini Pro 4.6K G2, a camera catering to the needs of today’s audiovisual market. This is the update of one of Blackmagic’s product ranges born at NAB 2014 under the URSA name, which saw its first variant in the successful URSA Mini Pro. Therefore, it is easy to see this manufacturer’s evolution towards a more compact, lightweight, professional camera featuring enhanced technical performance. The URSA Mini Pro 4.6K G2 is the second generation of the URSA Mini Pro. It is a digital, modular-body, CTV (cinema & television) camera that weighs 2.30 kg (about 5 lbs). It is robust and features interchangeable optics. It must be pointed out that it finds its natural place in cinema production processes (take-to-take

URSA Mini Pro 4.6K G2 with PL Mount.

shooting) and in TV (multicamera programs). But it is harder for this device to make headway in the field of street-side or news features, for two main reasons: number one being the weight of the body, to which a whole range of accessories must be attached in order for this camera to reach full operating and handling capabilities. And number two, the placement of access buttons on the camera does not offer immediate response in situations where an ENG


It’s a digital, modular-body, CTV (cinema & television) camera that weighs 2.30 kg (about 5 lbs). It is robust and features interchangeable optics

(electronic news gathering) recording is called for. This was first noticed when we received the camera (in an airtight case) to take a closer look at its performance and technical data. Included together with the camera’s body was a Sigma 18-35mm 1:1.8 DC lens with EF mount. But we must mention that the URSA Mini Pro 4.6K G2 also has versions for PL, B4 or F mounts, which definitely opens up countless optical possibilities and therefore allows inclusion of this

camera in cinema shooting and TV recording processes. The camera can control the optics either by means of the electronic connections existing between mount and lens (for EF type) or a 12-pin Hirose connector for supported lenses (mainly with B4 mounts). As it features modular construction and design, we must include several accessories that boost the camera’s performance and, therefore, provide better operating capabilities for recording.

First of all: viewfinders. They are connected through SDI 3G (FHD output) and the 12V (4-pin XLR) power supply offered on the camera’s own body. The operator’s side eyepiece (URSA Viewfinder) must be attached to the top handle. This is an FHD high-resolution viewfinder featuring an OLED display and an image frame rate of up to 60 fps, 24-bit RGB color precision, adjustable diopters (-4/+3), a built-in activation sensor that switches the camera on when the operator approaches to look, a tally that activates while recording is in progress, and buttons/shortcuts for adjustment modification. On the other hand -although we were not able to test this feature- there is a 7-inch studio viewfinder (URSA Studio Viewfinder) with large buttons and knobs that enable mobility and viewing angles catering to the needs of any operator, and very fast, accurate responses (tally light or sunscreen) for any professional live production. The studio viewfinder is directly


mounted on the camera’s top. Attaching a battery adaptor is also a requirement: with a V-lock base or Gold-Mount base, which provide full compatibility with the various power supply solutions offered by other manufacturers. An essential requirement with the URSA Mini Pro 4.6K G2 is placing the handle on the top area (on 1/4-inch holes) for an easier grasp and handling. Also found on the lower side of the camera's body are 6.35 mm threaded holes for direct mounting on a tripod, or even for placing a shoulder pad -by means of universal rosettes and rail mounts- enabling fast camera release from the tripod. On the right-hand side, a side rosette allows attachment of a handgrip or an extension bracket for on-shoulder camera operation. Once all the accessories to be attached to the camera's body have been arranged, we must point out that this model is not hard to understand or operate. On the front side we have the lens

mount for attaching the lens most appropriate to the task at hand, a button for auto W/B and, worth noting, an ND filter knob providing four options: clear, 2-stop, 4-stop and 6-stop (0.6, 1.2 and 1.8 densities, or 1/4, 1/16 and 1/64 transmission, respectively). These ND filters are of excellent quality and feature infrared compensation, being essential for solving adverse lighting situations such as daylight in outdoor environments. On the left-hand side, there is a large knob for camera menu access

It’s robust equipment built with sturdy materials -a magnesium alloy- capable of facing any shooting or recording situation

along with ISO, shutter and white balance shortcuts; two fast-access buttons; a further button for HFR mode activation and, of course, a side REC button for easier recording. This side’s upper area has the must-have On/Off button (the camera takes about 10 seconds to be ready). Blackmagic’s decision to give enhanced use to the customary foldaway display placed at the back of this left-hand side seems a smart move to us. In this model, the outward

URSA Mini Pro 4.6K G2 with B4 Mount.


side of this screen offers operators a back-lit status display (with brightness adjustment for daytime and night-time recordings) providing information on the camera and on the current recording (fps, ISO, iris, shutter angle, white balance, TC, storage card enabled, battery level, audio input level), as well as audio input control (CH1/3 and CH2/4) and a monaural loudspeaker. The foldaway screen itself is a 4-inch tilting, capacitive LCD touchscreen showing the object of our framing that

displays camera details or nothing (clean display), as well as camera menu navigation items, and offers the possibility of viewing material already recorded through very simple operations, either by touching or sliding our finger on the screen. Once the foldaway display is open we find inside the camera body another access button for the REC function, the sound input controls (MIC, LINE, AES, 48V power supply on input 1 and/or input 2), a menu access button and other buttons for playing, pausing and shifting through recorded media. A restrained, moderate design, conceived for quiet jobs that allow some time for configuring the camera. Accordingly, the back side of the camera features a spot for fitting the battery adaptor base, a connector for the main external supply (4-pin / 12-20V), three BNC connectors (SDI 12G Out, SDI 12G In and Ref In / TC In), as well as an audio jack for headphones (3.5 mm minijack connector). On the right-hand side the

camera has the already-mentioned universal rosette, an SDI Out connector and a 12V Out connector (additional power supply) for the camera’s viewfinder or a studio viewfinder, plus a LANC connector and a multipin connection for control of professional lenses. On the top side of the camera, there are 1/4" holes for handle mounting, a stereo built-in microphone on the front and two XLR3 connections for professional (24-bit 48 kHz) audio recording on the back. What we have seen so far is only a camera body showing the identifying traits of manufacturer Blackmagic: robust equipment built with sturdy materials -a magnesium alloy- capable of facing any shooting or recording situation: operating temperatures between 0ºC and 40ºC and relative humidity between 0% and 90% without condensation. And then, what else does the URSA Mini Pro 4.6K G2 have? We can clearly answer this question with two words: technology and quality.
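As a side note on the ND filter wheel mentioned earlier, the densities and light fractions quoted follow standard optical-density arithmetic (transmission = 10^-density), not anything specific to this camera. A quick sketch of the relationship:

```python
import math

def nd_transmission(density):
    """Fraction of light an ND filter of the given optical density lets through."""
    return 10 ** (-density)

def nd_stops(density):
    """Equivalent light loss in stops (powers of two)."""
    return math.log2(1 / nd_transmission(density))

# The filter wheel's densities and their nominal light fractions:
for d in (0.6, 1.2, 1.8):
    stops = nd_stops(d)
    print(f"ND {d}: {stops:.1f} stops -> about 1/{2 ** round(stops)} of the light")
```

This is why ND 0.6, 1.2 and 1.8 are marketed as 2-, 4- and 6-stop filters cutting light to roughly 1/4, 1/16 and 1/64.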


The URSA Mini Pro 4.6K G2 features a Super35 sensor (25.34 mm x 14.25 mm) that provides high resolution levels and leaves SD quality far behind. Thus, 4.6K (4608 x 2592 pixels), 4.6K 2.4:1 (4608 x 1920), 4K 16:9 (4096 x 2304), 4K DCI (4096 x 2160), UHD (3840 x 2160), 3K Anamorphic (3072 x 2560), 2K 16:9 (2048 x 1152), 2K DCI (2048 x 1080) and FHD (1920 x 1080) are achieved. It implements current innovations such as HDR and HFR which, in combination with the possibility offered by this model to store output in Blackmagic’s proprietary RAW codec (BRAW) and the Film (log curve) mode, enable us to achieve a 15-stop dynamic range when performing color grading and correction. A camera must target two issues: facilitating the operator’s work and achieving professional levels in technical parameters such as resolution, colorimetry, sensitivity, and video signal response level as obtained in various shooting and recording situations. This is why this

Bluetooth With iPad.

model has a native ISO value of 3200 and enables achieving a wider color range than the Rec. 2020 space. Blackmagic is firmly committed to providing both technology and quality at a very competitive price, in the knowledge that the audiovisual industry includes several stages -from capture to marketing of content- in a market where viewers expect (and pay for) the best story and the highest quality standards possible regarding both image and sound. This is the assumption under which Blackmagic has developed the URSA Mini Pro 4.6K G2 and attained nothing less than Netflix Post Technology

Alliance (PTA) certified quality. In the first place, RAW recording, more specifically support for Blackmagic’s proprietary RAW (BRAW) codec, showcased at IBC 2018. By now perfectly established in the audiovisual sector, this is a 12-bit multiplatform codec that succeeds in keeping both the quality and the metadata generated in a recording in a manageable, compatible, compact file for trouble-free editing and/or color grading, thus minimizing the processing times involved with RAW files. The URSA Mini Pro 4.6K G2 is capable of generating several types of BRAW codecs, depending on


whether we go for Constant Bitrate (3:1, 5:1, 8:1, 12:1) or Constant Quality (Q0 and Q5). And let us recall that Blackmagic generates another file named “.sidecar”, an editable text file stored in the same folder as the BRAW file and containing all metadata (color, ISO, gamma, optical data...) for the relevant recording. Secondly, being able to complete the image production process by using one of the most relevant, well-established software programs for editing, color correction and color grading: DaVinci Resolve Studio. But we can also generate Apple ProRes QuickTime files in various compression settings: XQ, 444, 422 HQ, 422, 422 LT and 422 Proxy, in combination with all image resolutions supported by this model, striving towards compatibility of these codecs with the most popular film and TV platforms. The URSA Mini Pro 4.6K G2 includes two recording slots for CFast 2.0 cards

plus an additional two for SD UHS-II cards, for increased flexibility when it comes to choosing storage media and allowing uninterrupted recording (on-the-fly swapping between full and empty cards without cutting off recordings). This decision to include such a high number of recording slots could be seen as a disproportionate -or even flawed- effort because of its impact on the camera’s body design or on the final price of the equipment; but in our opinion this is yet a further strength of this model, thought out to answer the needs of today’s audiovisual market. CFast 2.0 cards are for recordings made in 12-bit Blackmagic RAW format, which is more commonly used in the film industry; and SD UHS-II cards, which are somewhat cheaper, allow for storage of Blackmagic 5:1, 8:1 or 12:1 RAW files in order to meet the requirements of the highest-quality production in TV environments (series and/or commercials). And furthermore, this model provides us with a

high-speed USB-C expansion port enabling connection of external devices such as flash or SSD units in order to be able to perform longer recordings. This is of vital importance when shooting in 12-bit Blackmagic RAW format without information loss, or even at a 300 fps image frame rate. Additionally, this model offers the possibility of recording files in 12-bit Blackmagic RAW format (4K and 4.6K) on regular 2.5-inch SSD units, which are faster and more stable. The Blackmagic URSA Mini SSD Recorder is directly fitted on the rear of the camera, between the body and the battery, and works through an SDI connection supporting 6G. As for HFR recording possibilities (by means of a shortcut button), recording in BRAW 8:1 at 4.6K can be achieved using the sensor’s entire area at up to 120 fps, whereas if we decrease to 4K DCI it can reach up to 150 fps, or 300 fps in 1080 FHD. If the ProRes 422 HQ codec is used at 4.6K, taking up the sensor's whole area, 120 fps is reached, while at 4K DCI


frame rate can reach 120 fps; and with 1080 FHD, up to 240 fps. This model excels in offering images in varying types of dynamic range: Film mode enables shooting content using a 15-stop dynamic range log curve; Video mode follows the Rec. 709 standard for FHD images for TV, where quick delivery of content takes precedence over post-production processes; and the Extended Video mode is a well-balanced solution between the two above-mentioned modes. The camera allows the possibility of importing, exporting and applying LUT conversion tables to the signal conveyed by the camera. These are only used as a preview tool and do not modify the material recorded on storage media. Included are 4.6K Film to Extended Video, 4.6K Film to Video, 4.6K Film to REC 2020 Hybrid Log Gamma and 4.6K Film to REC 2020 PQ Gamma (recommended for efficient coding of HDR images). With such a huge amount of data, combinations of available

resolutions and codecs as well as metadata, we embarked on the recording of different scenarios -always outdoors, as we wanted to face variables that are less easy to control- to test the response levels of the URSA Mini Pro 4.6K G2. The material was stored on a 256 GB CFast 2.0 AV Pro CF card and viewed in DaVinci Resolve. Outcomes were highly positive in all possible combinations, but worth highlighting is the strongly favorable and committed response obtained through BRAW 8:1 and the visual show offered by a change up to 300 fps (FHD quality).
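To put those codec and resolution combinations in context, a back-of-envelope data-rate estimate helps when choosing cards. This sketch simply divides the 12-bit sensor readout by the BRAW compression ratio; it is our own approximation, not Blackmagic's published figures, which differ slightly per ratio:

```python
def braw_mb_per_second(width, height, fps, ratio, bit_depth=12):
    """Rough BRAW data rate: raw sensor readout divided by the compression ratio."""
    bits_per_second = width * height * bit_depth * fps
    return bits_per_second / ratio / 8 / 1e6  # decimal megabytes per second

def minutes_on_card(card_gb, mb_per_second):
    """Approximate recording time on a card of the given (decimal) gigabyte size."""
    return card_gb * 1000 / mb_per_second / 60

# 4.6K full sensor (4608 x 2592) at 24 fps on a 256 GB CFast 2.0 card:
for ratio in (3, 5, 8, 12):
    rate = braw_mb_per_second(4608, 2592, 24, ratio)
    print(f"BRAW {ratio}:1 ~ {rate:.0f} MB/s -> ~{minutes_on_card(256, rate):.0f} min")
```

Under this approximation, BRAW 8:1 at 4.6K/24p runs at roughly 54 MB/s, so a 256 GB card holds on the order of 80 minutes; at 12:1 it stretches to about two hours.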

Lastly, the connectivity provided by this Blackmagic model is also worth mentioning. The camera features a rear BNC output on a single 12G-SDI cable capable of transferring 2160p60 images, with support for SDI 6G, SDI 3G and SDI HD. On the right-hand side, an SDI 3G connection offers only FHD. It is equipped with Bluetooth technology that allows transmission and reception of commands from a distance of up to nine meters through a tablet or a smartphone. The URSA Mini Pro 4.6K G2 means technology and quality at the disposal of


URSA Mini Pro 4.6K G2 does not limit creativity and is no obstacle to undertaking film or TV projects. This is a camera ready for both work environments, where the difference lies only in shooting/recording dynamics and the professional profiles shaping the audiovisual content.

camera operators. We have already mentioned that the camera’s body comes with just the necessary elements for answering operation and recording needs. Along the same lines is the simplicity of the camera’s menu and navigation between the various items and parameters. The menu features six main items: RECORD (which in turn allows access to three configuration sections: CODEC AND QUALITY -BRAW and ProRes-, DYNAMIC RANGE and TIMELAPSE); MONITOR

(with two sections); AUDIO (two sections); SETUP (five sections); PRESETS; and LUTS (two sections). In our opinion, the possibility of activating by touch the parameters shown on the image on the foldaway screen is a truly fast and effective feature. Access to FPS, shutter, iris, ISO, WB, TINT and monitoring options, for example. Some parameters to consider when configuring the camera are: FRAME GUIDES: 4:3, 14:9, 16:9, 1.85:1, 2:1, 2.35:1, 2.39:1, 2.40:1; Focus Color:

Red/Green/Blue/White/Black; Zebra levels: 75%, 80%, 85%, 90%, 95%, 100%; WB (2500 K to 10,000 K); ISO (220 to 3200); SHUTTER (1/50 to 1/2000); timelapse (number of frames to capture and time interval for the recording). As for metadata, information is stored relating to the camera's technical configuration together with data shared with professional optics (automatic data in EF, B4 and PL lenses supporting i/Technology) and digital clapperboard data like project name, director, camera operator, and especially clip details such as reel, scene and take numbers or special notes (indoors/outdoors, day/night...). The URSA Mini Pro 4.6K G2 does not limit creativity and is no obstacle to undertaking film or TV projects. This is a camera ready for both work environments, where the difference lies only in shooting/recording dynamics and the professional profiles shaping the audiovisual content.