Welcome to the February 2025 issue of TV Tech.
Make quick, precise audio adjustments from anywhere, any time. Complement your news automation system with this virtual mixer.
Adjust the occasional audio level with the Virtual Strata mixer as an extension of your production automation system. Mix feeds and manage the entire audio production with all the mix-minus, automixing, control and routing features you need from your touchscreen monitor or tablet. Fits in any broadcast environment as an AES67/SMPTE 2110-compatible, WheatNet-IP audio networked mixer console surface.
Connect with your Wheatstone Sales Engineer: call +1 252-638-7000 or email sales@wheatstone.com
Just as our issue was going to press, the long-awaited report from the “Future of Television” initiative launched by the FCC in 2023 was released, days in advance of the new administration.
It’s important to note that the 35-page report is just that—a report, full of recommendations and talking points but with no requests for mandates (which was never the intention in the first place). When she announced the initiative at the 2023 NAB Show, then-FCC Chairwoman Jessica Rosenworcel described FOTV as a “road map” to give both public and private players in the transition to NextGen TV better direction toward a successful deployment.
In essence, the report spells out the current hurdles to several key goals: sunsetting ATSC 1.0, increasing consumer awareness and solving the technical issues involved in carrying 3.0 on cable, satellite and vMVPDs (YouTube TV, Hulu + Live TV, etc.).
“The report will provide the FCC with a better understanding of stakeholders’ outstanding issues and concerns as it moves forward with the rulemakings necessary to complete the transition and will help focus the efforts of industry as they continue to deploy ATSC 3.0,” NAB said in a statement.
The report represents a “moment in time” for the transition, but how the new FCC will react to it depends on where new Chairman Brendan Carr directs the commission’s attention during a busy first few months. (Something tells me it won’t be NextGen TV.)
How much it will impact our industry’s transition to 3.0 is yet to be known, but here’s hoping this road map will elicit a higher level of cooperation among the various players that can lead to a successful completion. We’ll have more to report on this document in the coming months. Stay tuned.
With this issue, we bid adieu to a long-time member of the TV Tech staff.
Terry Scutt, managing editor for TV Tech and editor of the NAB Show Daily, is leaving Future—our parent company—for a slower pace and new business endeavors.
Terry was already with IMAS, the company that launched TV Technology, when I joined in the summer of 2001. About 15 years ago, she joined TV Tech as our managing editor.
Terry is among the most patient colleagues I have ever worked with, and her ability to adapt to changing situations while facing publishing deadlines and fixing last-minute changes for a demanding editor such as myself is why she will be so sorely missed.
And that patience was also a key to Terry’s success in managing the NAB Show Daily for nearly 20 years. Despite the sometimes chaotic nature of our industry’s biggest event, Terry and her team always ensured that the Daily staff could work in a quiet and professional environment—and, just as important, that deadlines were met. This year marks the 30th year that Future has published the NAB Show Daily, and Terry’s ability to manage and elevate the publication during a period of rapid change in media consumption is a testament to her talent.
We’re also saying goodbye to a true friend who never hesitated to share ideas and offer guidance on how to improve our operations and our publications. It takes a great team to make a great publication and even though we’re losing a valued friend and colleague, Terry has helped ensure that her successor, Mike Demenchuk, has all the tools and support needed for a successful transition.
Congratulations, Terry!
FOLLOW US www.tvtech.com twitter.com/tvtechnology
CONTENT
Content Director
Tom Butts, tom.butts@futurenet.com
Content Managers
Michael Demenchuk, michael.demenchuk@futurenet.com
Terry Scutt, terry.scutt@futurenet.com
Senior Content Producer
George Winslow, george.winslow@futurenet.com
Contributors: Gary Arlen, James Careless, David Cohen, Fred Dawson, Kevin Hilton, Craig Johnston, and Mark R. Smith
Production Managers: Heather Tatrow, Nicole Schilling
Art Directors: Cliff Newman, Steven Mumby
ADVERTISING SALES
Managing Vice President of Sales, B2B Tech
Adam Goldstein, adam.goldstein@futurenet.com
Publisher, TV Tech/TVBEurope
Joe Palombo, joseph.palombo@futurenet.com
SUBSCRIBER CUSTOMER SERVICE
To subscribe, change your address, or check on your current account status, go to www.tvtechnology.com and click on About Us, email futureplc@computerfulfillment.com, call 888-266-5828, or write P.O. Box 8692, Lowell, MA 01853.
LICENSING/REPRINTS/PERMISSIONS
TV Technology is available for licensing. Contact the Licensing team to discuss partnership opportunities.
Head of Print Licensing: Rachel Shaw, licensing@futurenet.com
MANAGEMENT
SVP, MD, B2B: Amanda Darman-Allen
VP, Global Head of Content, B2B: Carmel King
MD, Content, Broadcast Tech: Paul McLane
VP, Head of US Sales, B2B: Tom Sikes
VP, Global Head of Strategy & Ops, B2B: Allison Markert
VP, Product & Marketing, B2B: Andrew Buchholz
Head of Production US & UK: Mark Constance
Head of Design, B2B: Nicole Cobban
FUTURE US, INC. 130 West 42nd Street, 7th Floor, New York, NY 10036
Registered office: Quay House, The Ambury, Bath BA1 1UA. All information contained in this publication is for information only and is, as far as we are aware, correct at the time of going to press. Future cannot accept any responsibility for errors or inaccuracies in such information. You are advised to contact manufacturers and retailers directly with regard to the price of products/services referred to in this publication. Apps and websites mentioned in this publication are not under our control. We are not responsible for their contents or any other changes or updates to them. This magazine is fully independent and not affiliated in any way with the companies mentioned herein.
If you submit material to us, you warrant that you own the material and/or have the necessary rights/permissions to supply the material and you automatically grant Future and its licensees a licence to publish your submission in whole or in part in any/all issues and/or editions of publications, in any format published worldwide and on associated websites, social media channels and associated products. Any material you submit is sent at your own risk and, although every care is taken, neither Future nor its employees, agents, subcontractors or licensees shall be liable for loss or damage. We assume all unsolicited material is for publication unless otherwise stated, and reserve the right to edit, amend or adapt all submissions.
Please Recycle. We are committed to only using magazine paper which is derived from responsibly managed, certified forestry and chlorine-free manufacture. The paper in this magazine was sourced and produced from sustainable managed forests, conforming to strict environmental and socioeconomic standards.
TV Technology (ISSN: 0887-1701) is published by Future US, Inc.
Tom Butts
Content Director
tom.butts@futurenet.com
Four of the country’s largest station groups have formed a joint venture to deliver wireless data via ATSC 3.0 transmission for businesses and industries throughout the nation.

The E.W. Scripps Co., Gray Media, Nexstar Media Group and Sinclair have formed EdgeBeam Wireless to provide expansive, reliable and secure data delivery services. Leveraging the one-to-many nature of broadcasting and ATSC 3.0’s inherent internet protocol-based transport, the new venture is taking aim at industries that need to send data to multiple customers, often in real time, EdgeBeam Wireless said.

“The launch of EdgeBeam Wireless is the culmination of many years of technological advancement, market development and, importantly, recognition by government regulators of the expanded services local broadcasters can provide through ATSC 3.0 technology,” Scripps President and CEO Adam Symson said. “Scripps is pleased to join forces with its peer companies to create not just a new company but a marketplace that will allow us to more deeply serve our local communities while providing a wide variety of industries with efficient data-transmission services nationwide.”

Wireless data delivery via 3.0 offers data customers a notably cost-effective solution that complements and enhances current wireless solutions. EdgeBeam will be able to deliver data across the country to any civilian, military or industrial device with an ATSC 3.0 receiver, such as cars and trucks, drones, marine vessels, phones, tablets or television sets, it said.

“With adoption of ATSC 3.0 receivers in television sets continuing to increase with more models available and sets sold every year, Gray Media is eager to join EdgeBeam Wireless to expand the user base for our broadcast signals to a new category of businesses and to spur the wider deployment of receiver chips in an expanded range of handheld units and vehicles,” Gray Media Executive Chairman and CEO Hilton Howell said.

In automotive applications, EdgeBeam could deliver software updates, infotainment, precision navigation and safety enhancements. An internal estimate pegs the annual value of the addressable market for automotive connectivity services at $3.7 billion, EdgeBeam said.

In enhanced GPS applications, ATSC 3.0 can improve location information accuracy from meters today to a few centimeters. EdgeBeam estimates the addressable market for enhanced GPS services at $220 million per year.
The FCC’s Media Bureau announced last month that manufacturers of covered apparatus and multichannel video programming distributors have until Aug. 1, 2026, to comply with the requirement to make closed captioning display settings readily accessible to individuals who are deaf and hard of hearing.
Last July 19, the commission released a Report and Order adopting the new “readily accessible” requirement.
The new rules are designed to make watching television easier for those with hearing impairments by giving them greater control over how closed captions are displayed. They are also meant to make it easier for viewers to find the controls.
The R&O, the third by the agency to make television more accessible to those with disabilities, puts in place a “readily accessible” requirement for the display of closed captions that makes it simpler for viewers to access the settings of many covered devices to adjust the captions’ font, size, color and other features.
Besides televisions and set-top boxes, covered devices include any device manufactured or used in the United States designed to receive or play back video transmitted with sound.
Ross Video has acquired EagleEye from sec.swiss (Swiss Electronic Creation), a strategic move that enhances its portfolio of camera motion systems, the company said.
“Cable cameras really bring the audience into the event with camera angles, shots and motion you just can’t achieve in any other way,” Ross Video CEO David Ross said. “Now having EagleEye as part of our portfolio, Ross will be able to offer this creative option to an even broader range of clients.”
The acquisition broadens the company’s set of creative solutions and brings immersive cable camera viewpoints to more productions, Ross said.
Acquiring EagleEye gives Ross customers and partners access to an expanded portfolio of cable-camera systems, complementing Ross’ Spidercam series and extending solutions to a broader range of applications, it said.
EagleEye will be manufactured in Canada at Ross’s global manufacturing facility and will benefit from the company’s robotic camera expertise as well as its scope and scale of operation, the company said.
What makes television spectrum valuable?
The minds of many commercial broadcasters might immediately go to the ad revenue they can earn with it. Others might think of retrans fees.
Public broadcasters, who don’t sell commercial spots (underwriting aside) nor receive retrans, would probably point to something that’s more mission-oriented, such as their spectrum being the means by which they can benefit their local communities and society at large.
While these are good answers, it’s becoming increasingly clear many broadcasters are tuned in to the fact that, at its core, TV spectrum is valuable for its ability to convey data over the air. That data can be video. It can be audio. And, at the risk of being highly redundant, it also can be data.

Since the launch of ATSC 3.0, broadcasters have had the means to deliver that data as IP packets, and over the years some have laid the groundwork to create a wireless data-distribution network and developed a business that exploits the one-to-many strength of broadcasting as well as the ubiquitous use of internet protocol packets to deliver data.

Now, E.W. Scripps, Gray Media, Nexstar Media Group and Sinclair are leveraging their combined footprint to give broadcast data delivery nationwide reach and have launched a joint venture called EdgeBeam Wireless to make that a business. They also are inviting other broadcasters to participate (see page 6).

If successful, EdgeBeam Wireless will create a significant revenue stream for its joint-venture partners and others joining in where there once was none. How much?

An ATSC 3.0-based datacasting scenario put together by BIA Advisory Services in November 2021 forecasted additional annual revenue for the industry ranging from $6.4 billion to $15 billion by 2030, depending upon the number of bits allocated. Beyond the obvious boon such newfound revenue would be to broadcasters and their shareholders, 3.0-based datacasting offers an achievable path towards building greater TV spectrum value—something any wireless company must account for in their bids if, one day, another incentive auction happens.
The Broadcasters Foundation of America will present the 2025 Golden Mic Award to Scripps Sports President Brian Lawlor at the group’s annual black-tie fundraiser at the Plaza Hotel in New York March 10.
Lawlor has been president of Scripps Sports since December 2022, spearheading E.W. Scripps Co.’s ambitious addition of national and local live sports content to its Ion Media network and its local stations. Before that, Lawlor led the Scripps Local Media division for 14 years, a period of rapid growth that saw the station group expand from 10 to 61 local TV stations.
The Broadcasters Foundation Golden Mic Award is presented annually to an individual for their excellence in and commitment to broadcasting, and their ongoing service to the community at large.
The BFOA is the only charity devoted exclusively to helping broadcasters in need due to illness or tragedy.
“I am honored at this recognition from the Broadcasters Foundation, a charity that brings much-needed help to our colleagues,” Lawlor said. “I have been a supporter of the Foundation and serve on their Board, and I know first-hand the heartbreaking stories of those who we help.”
Lawlor was named B+C Broadcaster of the Year in 2011, as well as one of the “80 Most Influential People in Television.” He was named “Broadcast Television’s Best Leader” by Radio & Television Business Report. He was the driving force
behind Scripps’ launches of such national series as “Right This Minute” and “The List.” He serves on the boards of the Broadcasters Foundation and the National Association of Broadcasters Leadership Foundation.
“Brian is a leader in broadcasting who guided Scripps to expand their portfolio of stations and become one of the largest television groups in America,” BFOA Chairman Scott Herman said. “His hard work at advancing the mission of the Broadcasters Foundation, as well as his contributions to the television industry, makes him a perfect candidate to receive the Golden Mic Award.”
By Fred Dawson
Any lingering resistance to virtual production involving next-generation elements like LED walls, in-camera visual effects (ICVFX) and mixed reality (MR) is rapidly disappearing across the media and entertainment industry as an increasingly viable hybrid approach to using the technology takes hold.
From broadcast programming to motion pictures and advertising, producers and directors are no longer faced with making an either-or choice between VP or traditional production modes. Instead, thanks to a spate of groundbreaking innovations that have made hybrid use-case flexibility a reality, virtual production (VP) in all of its permutations is now seen as an essential tool that can contribute time- and cost-saving efficiency to just about any project.
“Virtual production is an ever-evolving solution,” Jaime Raffone, senior manager of cinematic production at Sony Electronics, says. “The move to immersive and hybrid technologies has been extremely helpful for us.”

Even before the hybrid VP paradigm took hold, researchers predicted big things for VP. In one outlook typical of many findings, Research and Markets projects the global VP market will grow at a 19.98% compound annual growth rate (CAGR) from $1.99 billion in 2022 to $7.13 billion by the end of 2029. According to Grand View Research, the VP spend in the U.S. over that span will increase at a 15.9% CAGR, reaching $1.09 billion in 2029.

By all accounts, resistance within professional production ranks to VP is rapidly giving way to enthusiastic embrace as benefits accrue from the more versatile hybrid approaches to exploiting the technology. Indeed, says Bob Caniglia, director of sales operations for the Americas at Blackmagic Design, the shift to VP is more creative-driven than a top-down, front-office phenomenon like the industry’s shift to video streaming.

“As effects guys and directors are pulled in, they start to realize they can really improve the entire workflow,” Caniglia says. While some directors are resistant to making the changes in production planning and execution required by reliance on LED volumes and ICVFX, many more want to benefit from things like providing “actors on set with live environments” and doing “more in shorter periods of time.”
“As effects guys and directors are pulled in, they start to realize they can really improve the entire workflow.”
BOB CANIGLIA, BLACKMAGIC DESIGN
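As a quick sanity check, the Research and Markets projection cited earlier is internally consistent: compounding the $1.99 billion 2022 baseline at the stated 19.98% CAGR over the seven years to 2029 reproduces the forecast figure. A minimal sketch (the function name is ours, for illustration only):

```python
def project_market(base: float, cagr: float, years: int) -> float:
    """Compound a starting market size forward at a fixed annual growth rate."""
    return base * (1.0 + cagr) ** years

# Research and Markets: $1.99B in 2022, growing at a 19.98% CAGR through 2029
size_2029 = project_market(1.99, 0.1998, years=7)
print(f"${size_2029:.2f} billion")  # prints "$7.12 billion", in line with the cited $7.13 billion
```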
The rise in VP adoption is having a big impact on Blackmagic’s output of cameras, switchers, editing tools and other products, as it meets demand for things like the locational flexibility enabled by cloud-based workflows and the LED wall-display quality stemming from the use of 12K and, with a forthcoming release, 16K cameras. The trendline in VP is “definitely moving upwards,” Caniglia says.
Sony’s Raffone, too, says the shift in perspectives on VP has been a big force behind new product development at her company. Notably, Sony responded to customer demand for LED solutions that aren’t fixed components of big studios by introducing the Verona line of modular LED displays in 2023. These employ the ground-breaking Crystal LED technology Sony created to achieve life-like realism through independent illumination of each pixel.
Over the past year, Verona displays have been deployed in multiple broadcast, cinema and VP educational projects, Raffone notes. New VP engagements in the broadcast industry include hybrid applications of Verona displays in live sports programming at WWE facilities in Stamford, Connecticut, and at the studios of another major sports broadcaster she declined to name.
Sony has also strengthened its position in the production space with a Virtual Production toolset that supports ICVFX within the Epic Games Unreal Engine multidisplay rendering platform to meld the real and virtual domains and cut production time through virtualized renderings of camera settings ahead of actual production. Integration partnerships with Brompton Technology and Megapixel VR ensure out-of-the-box compatibility between Verona LED displays and leading production LED controllers, Raffone also notes.

Research and Markets projects the global virtual production market will grow at a 19.98% compound annual growth rate (CAGR) from $1.99 billion in 2022 to $7.13 billion by the end of 2029.
The upshot is a hybrid-optimized VP portfolio that’s gaining traction worldwide across the company’s traditional market base as well as in new areas of business, education and government. “We continue to evolve with the needs of the market,” she says.
Ross Video is another major player benefiting from the hybrid VP wave. “Control and integration across all elements is probably our forte and strong suit,” Mike Paquin, senior product manager for virtual solutions at Ross Video, says.
Ross, which in December announced its intention to sell the G3 LED wall-supply business it acquired in 2021, is seeing widespread use of LED walls in VP, but primarily in hybrid situations where broadcast productions can take place on stages without a VP component or be augmented with LED and/or MR elements to whatever degree is warranted in a given situation.
A big contributor to this versatility is Ross’s support for rendering graphic elements in whatever dimensions fit a given scenario, Paquin says. The company’s XPression Tessera platform employs a distributed workflow system that allows users to configure graphic elements for pixel-accurate rendering across multiple display environments with a dynamism that enables real-time changes in content in tandem with live events.
Such flexibility has created a multiuse studio environment that’s saving broadcasters a ton of money. “Building three or four studios for a TV station doesn’t work anymore,” he says, noting that many Ross customers, including smaller stations that “see what their network parents are doing,” are in the process of refreshing their studio environments. “Virtual is a part of almost every one of them,” he says.

Toward that end, Ross at NAB Show in April will demonstrate support for a flexible workspace where a hard set and broadcast desk with display wall can become a weather reporting studio with the kind of green-wall functionality weathercasters are accustomed to, and then be turned instantly into a news or sports reporting venue with LED wall support for rendering the setting and attendant graphics. “You can rotate people through that space for different segments of the same live show,” Paquin says.
ENCO Systems, a pioneer in automated broadcast production workflows, has taken its own approach to VP by leveraging decades-old chroma key video monitoring technology with the compositing tool supplied by Miami-based Qimera. ENCO, the exclusive North American sales outlet for Qimera, markets the high-powered computing appliance Qimera Chroma as a way to weave VP capabilities typically supported by LED volumes into existing live broadcast production workflows without actually using LED walls to provide a visual context for onscreen personalities to work in.

Relying on Unreal Engine to pull together all the elements of virtual sets and augmented reality (AR) graphics created by the Qimera compositing tool, ENCO allows newscasters to view on their monitors what’s happening with their placement in the virtualized output their audiences are seeing on their home screens. “Just like VP with LED volumes, using chroma key with Qimera, you can change the sets and images with the click of a mouse,” says Bill Bennet, media solutions and account manager for ENCO.

This requires processing delivered by Nvidia’s GeForce RTX GPU and a 13th-generation Intel Core along with other high-end hardware components in the Qimera appliance. Bennet says that’s a lot cheaper than investing in LED walls and all the processing they require. “The costs of LED volumes often start at $100,000 and go up from there,” he notes. “We built Qimera Chroma to avoid that. I love LED volumes, but we don’t always need them.”

One of the more dramatic manifestations of the new hybrid flexibility in VP was recently on display in the middle of nowhere at Disney’s Golden Oak Ranch filming lot, a sprawling property north of Los Angeles. That’s where Brian Nowac, founder and CEO of Magicbox LLC, was on hand for production of a series of Bush’s Baked Beans commercials. While the setting was appropriate for shooting the outdoor scenes, the rest of the filming was taking place in a tractor trailer outfitted by Magicbox as a complete self-contained LED studio.

Speaking with TV Tech, Nowac describes the rapidly expanding role the uniquely equipped Magicbox trailers have played up and down the West Coast and will soon be playing elsewhere in hybrid M&E productions. “We’re building a fleet of super-studio products we’ll be deploying all over the country,” he says.

The Magicbox trailers, measuring 52 feet long, 32 feet wide and 13½ feet high, can be used with an 8½-foot-high LED volume consuming 600 square feet of horizontal space or with individual LED walls. A motorized turntable floor makes it possible to film multidirectional car scenes that eliminate the need to shoot from a moving vehicle.

A recent Dodge Hornet commercial with a child version of an adult passenger at the wheel was shot entirely inside the Magicbox, Nowac notes, which would have been impossible with a child driving on city streets. The Bush’s Beans commercials will feature in-car scenes involving a driver talking with her dog, which resembles another in-car shoot involving a woman and her dog in a Subaru commercial that goes beyond what’s doable in the real world.

Along with the convenience of bringing LED studios to virtually any location, Nowac notes that Magicbox is making it possible for producers to save a lot of money by having another production space at hand when set-changing and other main studio downtimes would leave high-paid actors and staff standing idly by.

“Actors can walk right outside a sound stage into the Magicbox studio and do alternate takes or more of their key lines using the exact same lighting and other aspects as they’re using in the big studio,” Nowac says. ●
By Kevin Hilton
The physical function of audio mixing remains relatively unchanged today despite technology’s onward march. There are now many ways to mix studio and outside source signals for live broadcasts, but the familiar console with faders, meters and turnable knobs is still a reassuring presence in most sound suites. That’s not to say there have not been considerable changes in the background, with processing and routing racks now commonly situated either in a separate equipment room or, increasingly, replaced by software in the cloud.
As Lawo Senior Product Manager for Audio Infrastructure Christian Struck observes, the implementation of powerful IP networking has directly impacted soundboards. “The trend in audio mixing is shifting toward integration with broader IT-based infrastructures,” he says. “This includes the adoption of data-center architectures and the use of commodity, off-the-shelf server hardware. Essentially, audio mixing is no longer a standalone activity but part of an ecosystem emphasizing agility, scalability and flexibility.”
“All major live production mixer manufacturers” now produce consoles that run on standard CPU hardware and are more in line with IT environments than proprietary broadcast systems, Struck explains. On an operational level, he adds, there is “noticeable demand” for higher channel counts in tandem with support for Next Generation Audio (NGA) formats.
The configuration of an audio desk is often dictated by its application. When it comes to live news, Wheatstone Senior Sales Engineer Phil Owens says, the mission remains much the same as it has been—providing reliability and ease of use along with necessary support for various live in-studio functions, such as automation and remote contribution.
“Audio systems must be flexible enough to support different dayparts, from a newscast with two anchors, sports and weather to a full panel discussion or a single-shot news break-in,” Owens says. “New systems are able to recall the sources—remote or local—and settings needed for these and other possible workflows.”
Any discussion of audio console technology over the last two to three years has inevitably included the cloud and distributed production. Henry Goodman, director of product management at Calrec Audio, describes them as the two strands of the main trend in this area.
“Distributed production environments are where a lot of broadcasters can see added value for their businesses,” he says. “The extension of remote operation to wider distributed production and the ability to utilize resources at will— both in terms of hardware and equipment resources as well as people, in a more efficient way—to produce more content is clearly a process many broadcasters are having to go through for commercial reasons, rather than just because it’s a new technology.”
Calrec recently launched ImPulseV, its first mixer dedicated to the cloud, based on a virtual audio mix engine with cloud-based DSP software hosted in AWS. While Goodman says there are
“fairly forward-thinking broadcasters” now considering this way of working, he does not agree that the console or control surface has become secondary to the virtual processing and mixing setup.
“My view is almost the inverse of that,” he explains. “Once you’ve put your DSP in a cloud environment, a lot of people start to think about that processing as more generic.
The differentiator comes down to how the operator uses it and the surfaces and control systems they’re sitting in front of.”
Wheatstone is “seeing a small uptick in demand” for virtual consoles to control audio hardware, according to Owens. “Touchscreens do offer some advantages, such as fewer moving parts, making them easier to maintain or replace, plus lower cost,” he says. “But most audio operators still prefer the ‘fader in hand’ approach. Of course, that may change as more of the ‘iPhone generation’ step into board op roles.”
The transition toward cloud-based DSP and remote production workflows has
introduced more agile and distributed approaches to audio mixing, including the use of virtual control surfaces and computer-based mixing, according to Struck.
“However,” he adds, “for large-scale, high-profile events such as international sports broadcasts, consoles in the tradition of haptic faders and real rotaries and buttons remain essential. Controllers and in-the-box workflows have gained traction for less demanding or smaller-scale productions but they cannot replace the traditional console in high-pressure scenarios.”
Whether physical or virtual, all modern consoles—and their manufacturers—support immersive audio, which usually means Dolby Atmos. While the big streaming services—notably Netflix and Amazon Prime Video—specify Atmos for high-end drama, as Struck points out, it is still not a standard requirement for broadcasters and most streamers. “The interest in immersive audio workflows remains confined to a smaller portion of customers,” he says.
Goodman wonders if much of the viewing
public takes advantage of what immersive audio is available, adding that “the vast majority of distributed content” is still not in the format. Even less exploited, in Goodman’s opinion, is the use of object-based audio (OBA) to personalize broadcast audio.
“The trend in audio mixing is shifting toward integration with broader IT-based infrastructures.”
CHRISTIAN STRUCK, LAWO
“Alternative languages are the obvious application,” he says, “but in sports coverage, OBA can also offer a choice of commentary and a different mix relating to the team a viewer supports. It’s still not very commonplace, for a number of reasons—the main one being it’s not cheap to do, because you’re effectively creating another
mix. How you would commercialize it is another question.”
Some sports broadcasters have picked up features of OBA, but it is not a priority in other areas. “We haven’t seen [demand for] that,” Owens says. “But we deal primarily with live news. OBA requires more from audio systems in terms of an expanded number of sources and the ability to pan in new directions. I’m sure that will become a need at some point but I would put it in the scope of five to 10 years.”
Personalization is not the main reason to adopt NGA/OBA, Struck adds. “What we see is a growth of the channel count, from 5.1 to 5.1.4 or 7.1.4 and higher rather than broadcasters striving to achieve an OBA workflow with personalization,” he says. “It is an emerging trend, however, and the shift to OBA workflows has already redefined expectations for mixing consoles, particularly in terms of resource management and operator assistance.”
The mixing console has come a long way in a relatively short period of time. It will doubtlessly continue to evolve over the coming few years, while, based on recent developments, remaining very much itself. ●
By David Cohen
For media production companies, the drive for increased efficiency without extensive incremental costs or added complexity is always near the top of the priority list. This efficiency is being realized by leveraging various technologies such as remote production, cloud computing and the proliferation of open standards for content distribution over IP. Not to be overlooked, though, is the convenience and workflow simplification offered by KVM switches.
KVM (keyboard, video, mouse) enables a single user to control and switch between a variety of systems or edge devices from one workstation. For years, KVM was mostly applied in scenarios requiring an operator to have wide-ranging access to assets throughout a facility—a broadcast studio with multiple stages and control rooms, for example. As the technology has progressed, though, applying KVM to more complex use cases is reaping benefits.
Many global media production workflows have transitioned to IP, and the cost and flexibility benefits of that transition will continue to grow in the coming years. Introducing KVM-over-IP switches into the workflow enables users to further leverage the efficiency and convenience of KVM over long distances, with the ability to connect a nearly limitless number of devices.
Jon Litt, managing director for Guntermann & Drunck (G&D) North America, says this level of flexibility has paid huge dividends. “During a pandemic situation—when employees aren’t coming into the office as usual, but they still need to do things remotely—KVM-over-IP really would provide limitless flexibility to control and observe anything from anywhere,” he says.
The connectivity offered via IP enables the smoother scaling and configuration inherent to IP systems and provides KVM control over more complex operations, David Isola, director of global product marketing for Black Box, says.
“Legacy or traditional KVM systems are limited to physical systems, making them difficult to scale and often requiring additional infrastructure,” he says. “In contrast, advanced IP KVM solutions allow for versatile and extensive connectivity options over existing IP networks. Advanced IP KVM solutions, such as Black Box’s Emerald, provide a unified interface and overcome the challenges of hybrid environments where a mix of virtual and physical devices are part of the workflow—further reducing delays and providing a seamless experience to the user.”
Orchestrating workflows across long distances using any method introduces issues of latency and the quality of video encoded and decoded (perhaps multiple times) during transport, according to Thomas Tang, president of Apantac.
“Latency is a critical factor that directly impacts user experience,” he says. “Minimizing latency ensures real-time interactions, which is essential for industries such as production and postproduction that require precision and speed. Additionally, visually lossless compression is equally important to maintain the integrity of video signals while optimizing bandwidth usage. This ensures high-quality visuals without noticeable degradation.”
This attention to working with higher resolutions over IP is crucial in a KVM environment, particularly in M&E, according to Neil Hillier, senior vice president, global sales and marketing, at Adder Technology. “As this adoption evolves, advanced KVM solutions will need to continue to accommodate ultra-high
resolutions with minimal latency, making them indispensable for editing, post-production and broadcasting,” he said.
Without such accommodations, KVM over IP would be useless to most applications that require high-quality video. To achieve the required level of latency and lossless compression, advanced encoding technologies have been integrated into KVM switches and extenders.
A key innovation in this area is support for open standards such as SMPTE ST 2110 and the IPMX (Internet Protocol Media Experience) suite of open standards and specifications, Matrox Video Business Development Manager Caroline Injoyan says. “With these standards, KVM devices can natively send and receive video to and from other devices within the workflow,” she said. “And by eliminating the need for additional hardware layers, this approach reduces design complexity, simplifies integration and ultimately drives down cost.”
Advances in IP standards allow for more flexible workflows and options, according to Dan Holland, marketing manager for IHSE. “Many of our customers have turned to Display Management Systems [DMS] for KVM-over-IP,” Holland said. “With the increased need for computer-based automation, testing and simulation in these environments, it becomes more important to improve workflow, share resources and facilitate a more efficient way for employees to work. IHSE developed their DMS with JPEG-XS to help unburden the stress of workstation connectivity with the ability to remotely manage and maintain computer equipment without compromising on security, quality and performance.”
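Back-of-the-envelope arithmetic shows why visually lossless compression such as JPEG-XS matters for moving workstation video over IP. The sketch below is illustrative only: the 10:1 compression ratio is an assumption for the example, not a vendor specification, since actual ratios depend on profile and content.

```python
# Rough bandwidth math for KVM-over-IP video transport.
# The 10:1 "visually lossless" ratio is an illustrative assumption.

def raw_video_bps(width, height, fps, bits_per_pixel=24):
    """Uncompressed video bandwidth in bits per second (8-bit RGB by default)."""
    return width * height * fps * bits_per_pixel

def compressed_bps(raw_bps, ratio):
    """Bandwidth after applying a given compression ratio."""
    return raw_bps / ratio

raw = raw_video_bps(1920, 1080, 60)               # 1080p60 desktop video
print(f"Uncompressed: {raw / 1e9:.2f} Gbps")       # ~2.99 Gbps
print(f"At 10:1:      {compressed_bps(raw, 10) / 1e6:.0f} Mbps")
```

Uncompressed 1080p60 needs roughly 3 Gbps; at 10:1 it drops to about 300 Mbps, which fits comfortably on the existing gigabit networks the article describes.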
Mission-critical workflows that require the sharing of high-quality video are not exclusive to media and entertainment. Public safety, healthcare, transportation and other industries have a growing need to manage and secure the workflows that move their content. And, yes, they also use KVM to reap the flexibility and efficiency benefits. To these industries, security is just as important, if not more so.
“Encryption plays an important role in safeguarding sensitive information,” Injoyan explains. “Audio, video and USB signals transmitted over the network must be encrypted to protect the confidentiality and integrity of the data. So, KVM devices should also employ encrypted communication protocols.
“For example, unauthorized USB thumb drives could pose a security risk, so it is necessary to select a KVM device that gives you the
option to block their use or enable only preapproved makes/models,” she adds.
Tang agrees. “Implementing strong transport encryption, such as AES 256, ensures that data is securely transmitted across the network. Additionally, using LDAP [Lightweight Directory Access Protocol] for authentication enhances security by providing centralized access control and management. These measures protect sensitive information and prevent unauthorized access, which is crucial in sectors like health care, finance and government.”
Additional security measures that ensure the privacy and safety of KVM matrix networks include: Continuous logging to track all usage; complete system observation and detection, including during the particularly vulnerable boot-up stage; and two-factor authentication with rock-solid credentialing and permissions based on active directories.
Given the diverse nature of ecosystems that may benefit from KVM technologies, AI is expected to take part in this revolution as well.
Black Box’s David Isola agrees. “Right now, our centralized KVM management appliance is
monitored by human interaction,” he says. “I can see where AI could be used to streamline and replace tedious human tasks such as setting operator preferences, shortcuts, and machine or screen layouts, offering an intuitive and personalized user interface based on user preferences and needs—all things that are currently handled by human interaction.”
The potential for AI use in KVM-over-IP can get even more complex and provide greater functionality, Apantac’s Tang says. “AI can dynamically allocate bandwidth based on usage patterns and demand, ensuring optimal performance even under varying network conditions. This proactive management helps maintain low latency and high-quality video transmission, crucial for applications requiring consistent and reliable connectivity.”
Nevertheless, bringing AI into a shared environment has its own security issues as well, Adder’s Hillier says. “The industry must remain vigilant about data privacy and regulatory compliance, integrating AI in a way that prioritizes transparency and user control,” he said. “This balanced approach enables KVM to harness AI’s benefits while maintaining trust, reliability, ease of use, and long-term success.” ●
Some factors to keep in mind as your organization makes a migration move
There are a number of important steps to be taken before your organization moves full speed into the cloud. For example, senior leadership should aggressively focus on the company’s data-management principles, including analytics and security, in concert with how they approach their digital transformation goals.
Modernizing and optimizing your organization’s data management requires going beyond the basics of analytics and security. Criteria for selecting a cloud service provider should be created much as a project leader or project manager would do when structuring a series of steps to achieve success in any new project or endeavor.
First on the agenda should be setting and understanding the business needs of the organization. Are you a small business with 50 or fewer employees? If the organization is at a level of 250 workers or more, it would be considered an “enterprise” business and by now likely already has a serious infrastructure level as it relates to networking, internetworking, financial planning and structure, legal, technology and other mainstream solutions consistent with businesses of this scale.
Depending on the scale of the enterprise (or small business), you should thoroughly itemize and lay out how your current or future approach to IT support will be structured, as this is one area that will shift into cloud harmony as a solution provider is selected. Services should include managed and/or co-managed support and a help-desk solution to support employees’ desktops and remote workstation (or mobile) environments. On-site, automated or remote support levels should be defined. An ongoing IT “health check” and cost management/containment platform should be outlined for presentation to a cloud provider or solution’s management resource.
Solutions that should be evaluated when
looking at a cloud support and services platform involve the requirements (or not) for a security operations center, how to routinely use and apply concepts such as “penetration testing,” cyber awareness training, managed detection and response (MDR) services, extended detection and response (XDR) services and compliance as a service (CaaS), and include a “cyber risk assessment.”
Other deeper practices that may be available through a cloud service provider could include vulnerability management; identity and access management; mobile-device management; monitoring of dark web and credentials security (Fig. 1); password protection and malware download protection; advanced email protection and monitoring; cloud application (app) security and firewall as a service (FaaS); SaaS protections; and cyber essentials service through a certification program.
Cyber essentials certification procedures vary depending on the country, region or locale in which those practices are governed or offered. The scheme is a set of fundamental security measures designed to produce a robust foundation for protecting your organization’s sensitive data and systems.
Key areas of cybersecurity include the implementation of (a) boundary firewalls and internet gateways; (b) security practices and configuration; (c) access control (e.g., “ACLs”); (d) malware protection; and (e) software and application patch management.
When looking at cloud service providers, inquire how or if they offer extensions for any of the above practices (items a-e) and how they address them.
Fast and secure access to cloud and data center applications is essential to the organization, regardless of user location. For broadcasters, this became most relevant during the initial stages of the pandemic, when central equipment operations shifted to remote functionality.
Today, an ongoing common challenge is achieving centralized management of multiple locations, including branch offices. For media systems (broadcast news and production), high-speed, reliable bandwidth is necessary for voice, video, data and control/management of data centers and production centers. Support for unified communications has become essential—something broadcast news operations rarely required before the pandemic.
SD-WAN (Fig. 2) offers a relevant solution that optimizes WAN connectivity with centralized manageability at a fraction of the cost of WAN-only dedicated connectivity. Check that a potential cloud solution provider can support systems such as SD-WAN and where the limitations are.
Check if your candidate cloud provider has service level agreements (SLAs) in place before signing up.
A good cloud service provider will have considered such critical SLA components of a contract and will already have them in place. The contract should define (a) the level of service you can expect; (b) the uptime guarantees; and (c) compensation for service disruptions.
The contract can be refined to include response times and compensation policies in case something goes wrong. An SLA should outline measurable performance benchmarks and should detail response times for support requests and service restoration timelines.
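Those uptime guarantees translate directly into allowed downtime, which is worth computing before comparing SLAs. A minimal sketch of the arithmetic (the percentages below are illustrative, not any particular provider's terms):

```python
# Translate an SLA uptime guarantee into allowed downtime per year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 (non-leap year)

def allowed_downtime_minutes(uptime_pct):
    """Minutes of downtime per year permitted by a given uptime percentage."""
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {allowed_downtime_minutes(pct):.1f} min/year")
```

A 99% guarantee permits more than three and a half days of outage per year, while 99.99% permits under an hour—a difference worth pricing into any contract negotiation.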
How a provider manages downtime is a measure of how it manages planned maintenance and unplanned outages. Communication protocols and notification timelines for planned or unplanned service disruptions should be included in any cloud service provider contract.
Get a checklist that states how the provider deals with (a) performance reporting; (b) resource monitoring; (c) configuration management procedures; and (d) billing or accounting management—each is an important item to check and understand. A provider who cannot address these items to your satisfaction is not the provider for you.
In summary, the following are key takeaways that should be considered to make an informed decision:
(a) Security and Compliance: Itemize, review and understand that the provider will offer and support a set of robust security measures and that the cloud services will comply with relevant regulations. If you are expecting to work across clouds or international borders, be sure your provider will meet those regulations and report accordingly.
(b) Service Offerings: Evaluate each potential provider and its supported solutions. Ensure the selected provider will offer the services and technical capabilities that support your business needs. If using live A/V solutions, test and be certain the codecs you need are supported and that the data rates meet the needs of your services under heavy loads.
(c) Flexibility and Scalability: Assess how the provider allows you to scale resources as needed, and have them demonstrate it. Do they offer customizable solutions? How do they work? How do you assess feasibility and fluidity?
(d) Evaluate the Provider’s Pricing Models: Look at current costs, review expansion costs and look at any penalties and/or costs you will pay to make changes. Model any expected adjustments in advance of signing any contract to best understand the current and future pricing and cost models and avoid unexpected expenses.
(e) Preview SLAs and Reliability: Determine if the provider can offer reliable service with clear SLAs that you understand. Check the extreme ends of your usages (loads, data rates, peak demand) and understand how “overages” impact your cost model—where applicable.
Cloud consulting services are available from many companies throughout the industry. Most will start with a comprehensive cloud assessment: a thorough evaluation of your current needs and available technologies, with a recommendation (or provision) of the best cloud-based solution for your business.
Often these are fee-based services that can offer additional services—where desired— such as end-user monitoring and integration, and that will be there on a contract basis to support operations from installation through deployment and turn-up on your system.
Look for a third-party consulting or provisioning organization that specializes in your business’ operational needs (e.g., media operations, streaming, news, production). ●
Karl Paulsen recently retired as a CTO and is a longtime contributor to TV Tech. He can be reached at karl@ivideoserver.tv or at TV Tech.
If you’ve ever done single-camera production work, you already know how important lighting is. Taken together, lighting and camera skills comprise the foundation of visual storytelling. And, since you’re reading this column, you already know lighting is much more than just flooding an area with light.
While lighting someone for a one-camera shoot is relatively straightforward, the degree of complexity increases as more cameras and angles are added. As with juggling, the more balls you’ve got in the air, the trickier it gets. To help simplify the task, we’ll look at a method for dealing with this challenge that
doesn’t sacrifice portrait-quality lighting.
To narrow the scope of this topic, we’re only concerned with multicamera television shows. Cinema and episodic television lighting is very different from what we generally do. For starters, cinematic production uses “motivated lighting” as warranted by the story.
As visually stunning as that can be, it’s something rarely used on a news or talk show. Instead, we more commonly work with something best described as “nonmotivated lighting.” Motivated light originates from “natural” or “practical” light sources consistent with the filmed scene, whereas nonmotivated light originates from anywhere that suits our needs.
Our objective is to present on-camera talent in the best light possible, letting them, rather than the light, tell the story. The goal is to provide attractive lighting that supports a well-exposed camera image. When done right, people look good and the video engineer doesn’t have to chase the iris setting when the shot changes.
Lighting one person for a single camera angle isn’t complicated. The well-known “three-point lighting” method serves as a good starting point for attractive portrait lighting. This technique forms the fundamental building block for television lighting.
But the pillars of three-point lighting can get a little shaky when the number of cameras increases. An example of this would be adding a second angle to cover an anchor camera turn during a long read or an alternate shot with a graphic. The solution is not to just repeat the three-point pattern like a cookie cutter for each additional angle. This is where we need to adapt the fundamental building blocks of lighting to better suit our needs.
For any particular shot, there’s always a perfect key-light position. The problem arises when two camera angles on the same talent position must look equally good without further adjustment. Fortunately, there’s a way to accommodate this without sacrificing good lighting.
Before we discuss this approach, let’s cover some solutions that seem attractive, but don’t work as well.
It’s a fairly common practice to place the key light directly over the camera, so why not just add a second key over the second camera? That approach doesn’t work here, because both cameras would see the second key light. This would result in a confusion of shadows on the anchor’s face, rather than a single modeling shadow. So, “no” to a second key light.

Another possible solution is to flood the set with a mush of shadowless soft light. Although this “forgiving” approach would illuminate the subject for multiple camera angles, its very flatness makes everything look two-dimensional and, frankly, dull. So, another “no.”
Yet another possible approach involves using separate key lights that are alternately faded between as the anchor turns from one to the other camera. This is tricky and requires a lighting console operator working in perfect coordination with the camera shot change. This is notoriously difficult to pull off without detection and best avoided.
I suggest using the following simple solution to key-lighting an anchor with two cameras. Split the difference by placing a single key light between the two camera positions.
Normally, “split the difference” implies a compromise. In this case, it’s not. That slight offset of the key light helps provide beautiful modeling. Back when I was drafting lighting plots by hand, I would use a protractor to help guide light placement. As is the case here, challenging lighting problems often yield to simple geometry.
For any particular shot, there’s always a perfect key light position. The problem arises when two camera angles on the same talent position must look equally good without further adjustment.
Although some TV lighting designers opt for putting their key lights right over the camera (which is a zero-degree offset), my preference is for a side/horizontal offset somewhere between 10 and 20 degrees.
As you can see in Fig. 1, the two camera shots are relatively close to each other (roughly 28 degrees apart) with the key light placed precisely between the two cameras. That puts the key light at a 14-degree offset from both shot angles. This results in a good modeling shadow for both camera angles and makes visual sense on the shot change between them. And since only one key light is involved, as shown from behind the anchor in Fig. 2, the modeling remains consistent with no confusion of multiple shadows.
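The "split the difference" placement is simple geometry, and it can be sketched in a few lines. The angle values below mirror the article's example (two cameras roughly 28 degrees apart); the function names are just for illustration.

```python
# Splitting the difference: one key light placed midway between two cameras.
# Angles are measured in degrees around the talent position.

def key_light_angle(cam_a_deg, cam_b_deg):
    """Place the key light midway between two camera angles."""
    return (cam_a_deg + cam_b_deg) / 2

def key_offset(cam_deg, key_deg):
    """Horizontal offset between a camera's shot axis and the key light."""
    return abs(cam_deg - key_deg)

cam_a, cam_b = -14.0, 14.0            # two cameras ~28 degrees apart
key = key_light_angle(cam_a, cam_b)   # 0.0: centered between them
print(key_offset(cam_a, key), key_offset(cam_b, key))  # 14.0 from each shot axis
```

Both cameras end up with the same 14-degree offset, which sits inside the 10-to-20-degree horizontal range the column recommends.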
As for how steep the key light should be, there are multiple factors to take into account. Consider how the light angles will impact facial features (such as deep-set eyes), reflections of light fixtures in video displays and LED walls, or light washing out backlit scenic elements. These can sometimes present conflicting goals, influencing the lighting choices that must be made with every fixture placement decision.
According to one very successful master of network television event lighting, the ideal vertical angle is 22.5 degrees. That turns out to be exactly half of 45 degrees, which isn’t a coincidence.
There was a time when 45 degrees (both above and to the side of the subject) was considered the ideal lighting angle. But the
industry and tastes have changed since Stanley McCandless first suggested his method of stage lighting. Contemporary tastes in this age of screens lean towards less acute light angles for their more forgiving impact on close-up shots. And in our corner of the visual arts, it’s important to keep the drama off of the faces.
A few techniques that inform lighting remain timeless today. We still use the Renaissance painters’ invention of “chiaroscuro” to create a sense of depth. Where they used paint, we use light, and those highlights and shadows are determined by where we place the key light. ●
Bruce Aleksander invites comments from others interested in lighting. Contact him at TVLightingguy@hotmail.com
The phrase “content is king” was first coined in 1974 in the magazine industry and perhaps most famously repeated by Bill Gates in a 1996 essay.
Although it has been a long-accepted truism of our industry, I would argue that it really should be “curation is king,” given that it is quality, not quantity, that matters most. In this piece I will argue that the original phrase still holds regardless, and will be even more accurate in this era of generative artificial intelligence (Gen AI).
Fundamentally, my argument is that other aspects of the extended media business become commoditized cyclically. The only differentiator that endures, and the greatest predictor of a media business’s success, is the quality of its content. This has been and will remain true, from traditional spaces like television and film to the newest approaches in streaming, user-generated content sites, games and more.
This is because two things consistently become true no matter how rapid the pace of change gets. Both monetization approaches and technology commoditize and are not a long-lasting differentiator.
It is not that new business models do not arise or that there is no innovation. It is just that fast following is possible. It is easy to see that this happened with the rise of cable networks or even subscription-based streaming direct-to-consumer. When a model is successful, it is followed by others, and eventually all companies begin to have similar experiences and options for customers.
This has happened in each media era for all of media history. Currently there is a rise (again) in ad-supported approaches such as free ad-supported television (FAST) or advertising-based video on demand (AVOD), and we are quickly seeing most parties adopt these options (again) for their consumers.
Perhaps unsurprisingly, technology also commoditizes. Over time, any class of technology becomes similar. Because there is more than one way to approach any solution, this is not usually protectable by patents.
As an example, over time, nonlinear editing systems became broadly similar. This has been equally true of transcoding or content delivery networks (CDN). It has also been true recently about streaming platforms. They are all largely similar in recommendation engines, player capabilities, quality, etc. What differentiates them is the content itself.
This is why I argue that content companies hold far greater power than they may think with regards to the future of media. There are continued worries about the growing impact of tech companies relative to media, and even concerns about the growth of AI-focused companies or capabilities supplanting media. I think, conversely, that we are seeing the power of media as we watch AI companies start to license the content they train on.
In fact, this is what will be differentiating in the midterm. In the short term, there is plenty of innovation still happening in AI models, especially around architectures. But fundamentally these are algorithms and approaches that have existed for decades, and the core changes users are seeing are due mainly to sheer computing power and the scale of training data sets. There is already evidence of diminishing returns over the last year as new models are released, and plenty of evidence that systems like LLMs are much harder to scale than was thought just a few years ago.
What will make for success for an AI model in the future? In my opinion, it will come down to available data sets and the proper curation and quality control of that data. In many respects, this is what content companies have been doing for decades. And content companies are in the actual business of creating new data (content).
As I mentioned in my September column covering the concept of model collapse (“Could AI Become its Own Worst Enemy?”), there appears to be no clear way to successfully use AI outputs or other synthetic data to train models to be successful in real world media use cases. To avoid artifacts and increasing bias in the system, you need to continue training it with new and real data.

We are already seeing some licensing deals. What is most interesting about this is that what the models need is not just finished content; raw, unedited content is perhaps even more valuable for model training. A local news organization captures tens to hundreds of hours of raw video data each day that is reality-based and very valuable to train models. I would not be surprised at some point to see local TV stations make more money from licensing their content to others than direct revenue from shows.

I expect this trend in value to continue. It is likely the future will be more protective of content rightsholders. Whether at the individual creator level or major media companies, I expect there will be a series of legal precedents and regulations that protect against unlicensed access and other training that is not consented to.

What does all this mean now? It means that you should maintain confidence in the future of this industry, if you held any doubt. Secondly, it means that you should think carefully about curation approaches to content in whatever part of our space you are in. In a world where raw content may sometimes have more value than finished content, what do you choose to hold on to? This is a fascinating problem to think through and very situationally dependent.
Consider what you want to do to protect against bad actors who may access your data. Much of your finished content is visible in
I would not be surprised at some point to see local TV stations make more money from licensing their content to others than direct revenue from shows.
one way or another to the world and unlike direct piracy, using it for training is far less easy to detect. In addition to taking all the cyber and content protection steps you would normally want to take, consider the potential of “poisoning” your public-facing versions of content to hurt any unauthorized training while having a separate repository for training data authorized users can access.
AI-model poisoning is a fascinating topic worthy of its own column, but fundamentally
it involves using techniques akin to what we currently do with invisible watermarks to corrupt the data in ways that genuinely fool a model regarding what it is “seeing.”
The best example of this in our space is a program developed by the University of Chicago called “Nightshade.” It can corrupt your imaging data in ways that are invisible or very subtle to humans while being so powerful as to make the AI think it is seeing a dog and not a cat, thus confusing the model when asked to generate a “cat.”
Note that this can be used by bad actors to poison your models if you access poisoned training data. A recent paper showed how these techniques could be used to make AI respond with false medical information, so it would be wise to familiarize yourself with this potential vulnerability.
“Content is king” is certainly not the most original message, but with the advent of AI it is time to remind ourselves that this will always remain true. Focus on making great and engaging content and the rest will take care of itself. ●
John Footen is a managing director who leads Deloitte Consulting LLP’s media technology and operations practice. He can be reached via TV Tech.
From new backlights to wireless connections and fewer reflections, here’s what will shape the next generation of sets
By Matt Bolton, TechRadar
CES is always a major event for TV tech, and last month’s event in Las Vegas didn’t disappoint. Even though Sony didn’t show off its next TVs, we were still overwhelmed with new models and innovative tech demos from Samsung, LG, Hisense and TCL.
These are all makers of the best TVs around, and all like to push the envelope with cutting-edge TV tech at CES. Some of this tech will appear in models available this year, while some will just be shown as proof of concept, hopefully to be used in future sets. What we see at CES shapes not only the TVs that arrive in 2025, but those that will come in 2026 and beyond, as the companies battle to outdo each other and as high-end tech trickles down into more affordable models. Here’s the tech I saw at CES 2025 that I think will have the biggest effect on the best OLED TVs and best mini-LED TVs in the future.
Matt Bolton is managing editor for entertainment at TechRadar, a sister brand to TV Tech.
This tech is probably the big TV takeaway from CES 2025, and it was shown off by Samsung, Hisense and TCL, all at slightly different stages. It's a new way of doing LCD TVs, and at a basic level it works the same as current tech: a backlight of LEDs shines through a grid of color-filtering pixels. But in current versions of the tech, the backlight is a single color of LED (usually blue), so the color-filtering layer has to do a lot of work. This saps energy from the light, limiting brightness.
In RGB backlights, each LED has red, green and blue elements, meaning it can shine in the right color for what's on-screen before it goes through the color-filtering layer. This means the color filter can be much less aggressive—TCL even confirmed that it's dropping quantum dots from its version of the tech—and the light can shine through more efficiently.

The result is even brighter TVs, or TVs at the same level of brightness that use less power. At the same time, the color gamut is wider, meaning more vibrant and rich images in just about every way.

Samsung said its version of the tech (which it's calling RGB Micro LED Backlight) shouldn't really cost more than its mini-LED TVs. Samsung's version is likely to come in a 4K TV later this year, though the version it showed us at CES was an 8K TV. TCL said its version of the tech will arrive in 2026, but there was a prototype at the show. Hisense has the edge here—it unveiled a 116-inch TV that uses the tech, which it's calling "TriChroma RGB Backlight." We were very impressed by the Hisense 116UX upon seeing it in person.

There's a new type of OLED panel in town, and it's all about four layers. The latest OLED TV panel from LG Display, which makes the OLED screens used by every TV manufacturer that offers them, is a leap forward for the tech. It adds an extra layer of OLED pixels into the panel, on top of the three layers used in previous generations.

This adds a load of extra brightness and aids color reproduction, with the screen being up to 40% brighter than the previous highest-end OLED panel from LG—and that's without the micro lens array (MLA) tech that has been used to boost the brightness of the screens previously.

This panel is used in the LG G5 and Panasonic Z95B, and appears to be the panel used in the 83-inch version of the Samsung S95F. The four-layer panel will likely become the norm that future panels are built on, and hopefully it'll be possible for it to trickle down to the midrange panels—something that never happened with MLA.
LG has offered a TV with a wireless connection box for a couple of years now, but only on its highest-end OLED TV—inherently very niche. This year, LG has expanded it to its mini-LED QNED range as well.

Meanwhile, Samsung has introduced a wireless connection box of its own—the Wireless One Connect Box, which it showcased at CES—for its highest-end 8K TV and The Frame Pro art TV, as well as for its 8K Premiere projector. And Samsung's box is much smaller than the big beast of LG's unit, making it feel much more like tech a normal person might have in their home one day.

So suddenly, the wireless TV connection arms race is on. The idea of these boxes is that your TV can sit on the wall or on a stand with just a single power cable running to it, and the direction of that cable doesn't have to be determined by the positioning of your consoles and set-top boxes. You'll be able to hide all your gadgets away and have a clean TV area—total aesthetic and tech freedom.

Samsung has said that if the wireless boxes are well-received, it will roll them out to more TVs—it said the same about its OLED Glare Free tech last year, and that proved to be true, with the tech being added to mini-LED TVs in 2025. So we'll have to see if this year's wireless boxes work well and if people are happy paying a premium for them—and, if so, whether other TV companies follow suit, and whether LG starts offering it in more of its own OLED range.

TV companies have been talking about AI features for several years now, and it's been largely meaningless—it basically meant that machine-learning-based image processing had been applied.

But now that people tend to mean generative AI when they talk about AI, that's being added to TVs too—and in some actually smart ways. On its TVs last year, LG introduced the idea of a chatbot that you could speak to naturally, and the TV could find the setting to help you. So if you said to it, "The picture is too dark," it would bring up the brightness options for you. That proved to be a bit of a false start, but this year it's looking better. The updated version will be combined with AI-based voice recognition so that whenever you use the voice control of the TV, you get personalized responses, and even personalized picture settings.

Samsung, meanwhile, is putting an AI button on its remotes to act as a kind of voice control button. One of the best features it showed off is the ability for the TV to live-translate a show's native audio into subtitles in another language. It supports Korean, English, Spanish, French and Italian so far, and appears to work really well—a great boost for accessibility, and again, something I expect lots of brands to pick up.
Last year, Sony showed off a new backlighting tech for its mini-LED TVs. It used really smart lens designs to provide strong brightness that doesn't leak from bright areas into dark ones, offering amazing nuance in picture contrast. This year, we're seeing a backlight from TCL that looks like an even better version of this—and if RGB backlighting becomes the new high-end tech, this will be the future of the more midrange mini-LED TVs.

TCL's backlight is kind of incredible to see—the company had a demo where you could see just the backlights in motion, so you could compare the new 2025 backlight to the previous year's backlight and a more basic one.

You could basically just watch TV on the new backlight alone; it has so many LEDs, and such strong control of contrast and tones, that the image is actually detailed. TCL showed this tech off the best, but it's something other makers are absolutely working on too—expect to hear a lot about how well this year's mini-LED TVs handle blooming and contrast. ●
USER REPORT
Jeff Segal Director of Production
The Joyce Theater
NEW YORK—The Joyce Theater, a 472-seat dance and performance venue in the heart of Manhattan, has supported the dance community since 1982 by providing a home for more than 400 domestic and international companies, as well as by offering an annual 45-to-48-week season, allowing more than 150,000 audience members to experience diverse, popular and challenging performances.
As the theater’s director of production, I oversee numerous aspects of video production, including our practice of providing a video monitor feed to the backstage areas during performances. This can also be used for archival recordings of performances, with the Blackmagic Micro Studio Camera 4K G2 taking a wide shot of the full stage.
The dance companies that perform here use this footage for daily notes, as a record of a particular performance or for teaching purposes for new dancers or stage managers. In the past, we recorded to VHS and then to DVD. Until 2024, we used a consumer handycam, but as digital files are the prevailing format, we updated our own workflow and now use several Blackmagic Design products to achieve this digital archival format.
As a not-for-profit organization, we had to be mindful of the best way to go about this change. Due to funding concerns and
other projects at the theater, we couldn’t just rip out our video workflow and start anew. We had to leave some older equipment in place, which meant we had to consider new equipment that would work with our SDI cabling and distribution points, which are at least 15 years old. We upgraded our handycam to a Micro Studio Camera 4K G2 and implemented numerous Blackmagic Design converters to make it all work.
The Micro Studio Camera 4K G2 is positioned in the back of the theater above the last row of seats, allowing for a wide view of the stage. It is connected to a Blackmagic Mini Converter SDI
to Analog 4K with a single analog output to an SDI distribution box that sends an NTSC signal to monitors in several backstage locations. The SDI output on the converter is split to two locations that can handle a higher quality signal, currently set at 1080p29.97.
One branch goes to our stage manager’s monitor. The other branch runs to a Blackmagic Mini Converter SDI Distribution in the production office. This distribution point sends the 1080p signal to a monitor in the green room and to a Teranex Mini Audio to SDI 12G, which combines the audio signal from our sound console and the SDI feed from the
camera. From there, a HyperDeck Studio HD Plus broadcast deck provides the recording point, and a video signal is sent to a monitor located in the production office via HDMI.
The fact that all the Blackmagic equipment works so well together makes the setup easier, and the ability to switch between media types in the recording process is great. On any given day, the performing company may ask for an archival recording of a rehearsal or performance, and some companies also request archival recordings with the headset feed for teaching and/or training purposes.
We are set up to add the headset feed into the audio feed, either in stereo or mono, as we frequently pan the show feed to one channel and the headset feed to another so that the company can practice and train cue calling with or without the headset feed.
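As a sketch of that channel layout: in a digital file, the hard-panned archival mix is simply a two-column sample array—show feed in one channel, headset feed in the other—while the mono variant sums the two. The sample rate and sine-wave "feeds" below are hypothetical stand-ins, not the theater's actual signals:

```python
import numpy as np

# Hypothetical stand-ins for the two mono feeds (one second at 48 kHz).
sr = 48_000
t = np.arange(sr) / sr
show_feed = 0.5 * np.sin(2 * np.pi * 440 * t)     # program audio from the sound console
headset_feed = 0.5 * np.sin(2 * np.pi * 220 * t)  # stage manager's cue calling

# Hard-pan each feed to its own channel of a stereo pair:
# column 0 = left (show), column 1 = right (headset).
# A company can then mute either channel to rehearse cue calling
# with or without the headset feed.
stereo = np.stack([show_feed, headset_feed], axis=1)

# Mono alternative: sum both feeds into a single channel,
# halving the result to avoid clipping.
mono = 0.5 * (show_feed + headset_feed)

print(stereo.shape)  # (48000, 2)
```

Either array can then be written out with any audio file library; the panning decision is just which column each feed occupies.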
The Blackmagic gear, particularly the converters, allows us to better serve the companies that perform here with minimal cost and without having to endure a huge overhaul. It’s difficult for any small arts organization to upgrade equipment, but the companies gain added benefits by having the archival recordings balance the cost. We anticipate many more years of success in sustaining dance, sharing the art form with audiences and supporting the companies that come through our doors. ●
Jeff Segal has been the director of production at The Joyce Theater since 2008. He can be reached at JSegal@joyce.org. More information is available at www.blackmagicdesign.com.
The LiveU LU4000 ST 2110 allows users to benefit from the best of both LRT and ST 2110. The unit’s rackmount receiver is used to receive, decode, play out and distribute a single 4K video feed or four (Quad) HD feeds from LiveU units into a SMPTE ST 2110 broadcasting facility. The all-in-one receiver is designed to increase efficiency, shorten workflows and reduce complexity as part of a seamless production workflow. Ensuring uninterrupted connectivity, it receives IP-based LRT feeds as reliable, low latency ST 2110-compliant streams.
The LU4000 ST 2110 also supports automated routing, switching and processing of separate bonded video, audio and data streams. In addition, it adheres to the highest SMPTE ST 2022-7 path-redundancy class using two 25 GbE ports, and users benefit from stable stream transmission along with the consistency afforded by integrated hardware PTP. www.liveu.tv
The Cobalt 9905-MPx is a quad-channel openGear card that offers up/down/cross-conversion, frame sync, and audio embedding and de-embedding for baseband digital signals up to 3G, with four independent signal paths. The scalers are specifically designed for broadcast formats, with full ARC control suitable for conversions to or from 4:3 or 16:9 aspect ratios.
The 9905-MPx supports discrete AES and MADI audio embedding, routing, mixing and de-embedding on any of the four processing paths. A standard 3D LUT feature and available color correction accommodate SDR and HDR processing for downstream HDR systems on a per-channel basis. The INDIGO 2110-DC-01 option adds native SMPTE ST 2110 support with multiple 25G Ethernet interfaces. For 4K, customers can upgrade to the 9904-UDX-4K card. www.cobaltdigital.com
The Cerberus Tech Livelink platform for live IP video processing and transport provides on-demand signal conversion, enabling broadcasters and OTT services to dramatically reduce power consumption and cost when performing any conversion task.
Unlike conventional always-on, hardware-based solutions, the cloud-native platform performs processing only as needed. Operating on a pay-as-you-go model, and available as a self-service or managed service, Livelink handles tasks ranging from
frame rate and format conversion to high-end motion-compensated standards conversion, delivering broadcast-quality results with scalability and reliability.
Optimized for occasional use and 24/7 distribution for live sports and news, event production and streaming services, the platform offers flexible, cost-effective access to best-in-class signal conversion technology within a streamlined workflow for live video delivery. www.cerberus.tech
ENCO’s enCaption Sierra uses artificial intelligence and machine learning for live captioning and translation in broadcast. Sierra reaches new speed and accuracy benchmarks for automated conversion and delivery, powered by GPU processing and large language models. Sierra can be paired with ENCO’s new SDI captioning encoder to create an all-in-one solution for on-premises environments, with containerized deployment options for the cloud.
Sierra can be delivered on the Windows or Linux OS and is managed and monitored from a web browser. Its modern GUI features a simple calendar scheduler and various configuration settings, including custom dictionaries, word models, filtering and bilingual language options. The same improvements are present in Sierra’s integrated enTranslate module, which uses machine translation and grammatical structure analysis to deliver simultaneous captions in up to 37 languages. www.enco.com
Tom Pilla Director of Operations MATV
MILTON, Mass.—Milton Access TV (MATV) is the public-access TV station in Milton. We are a contractor for the municipality and have three cable stations in town. We livestream them on local cable as well as YouTube. We televise live municipal meetings as well as town-focused live programming, covering events like ceremonies and school plays that are relevant to our community. Our biggest focus is on live high school sports, where we broadcast everything from baseball and football to lacrosse.
Because our live sports coverage is so popular, we’d been trying to get Ethernet at our sports fields for years. But to make that happen, it was going to cost
$50,000 to dig up the parking lot and run cable—which was way more than we wanted to spend. We tried using cell networks, but coverage in our area is very spotty.
We were almost ready to give up until we spoke to Comrex. Following their suggestion, we began using Comrex LiveShot with unlicensed Wi-Fi repeater dishes from MikroTik. While a Wi-Fi router sends a signal out in all directions, these dishes are highly focused and can send a very strong signal directly between two points. Suddenly, we had a connection with close to 200 Mbps up and down, and with LiveShot we were able to broadcast HD over that connection.
We’re now using LiveShot in ways we didn’t think of before —we’re able to livestream sports events from the field in HD on YouTube and max out the quality available to us on our cable stations. People really love it, and because the system is so versatile we’re able to be
more responsive to our community.
LiveShot is easy to use—just plug in cables, press the power button and check the signal. Customizing profiles to change bit rates is more difficult, but I was able to learn how to do that without too much trouble. We’ve been able to take the factory settings and bump them up to account for the much higher bandwidth we have with our new setup.
Because our LiveShot kit is so mobile, we can also use it as a backup. Right before an important school committee meeting, our regular solution was acting up, and the situation was so dire that we thought we may need to announce that we wouldn’t have coverage that evening.
But a half-hour before we were to start, it occurred to me that we had the LiveShot Rack always on in our studio and the LiveShot Portable, which had always worked flawlessly. I plugged it into the Ethernet and connected our
SDI feed, and about 30 seconds before the meeting was supposed to start we were online.
Anytime we need anything, Comrex has been great at helping us and going above and beyond. For example, we’ve had situations where we’ve had to get our signals patched through complicated city and school firewalls, and they’ve always been willing to get on the phone with us to work with IT. Comrex tech support does more than just help us figure out LiveShot; they help us figure out how to set the environment up for success.
Our LiveShot is really versatile, reliable and needs no maintenance. There aren’t many pieces of equipment like that, especially something as delicate and tweaky as an encoder, so LiveShot’s been a big part of our workflow. It’s been a great investment. ●
Tom Pilla is director of operations at MATV. He can be reached at tom@miltonaccesstv.org
More information is available at www.comrex.com.
Jonathan Fortin Founder and CEO Rec4Box
QUEBEC—When I started Rec4Box back in 2009, my goal was to create a solution for midsized mobile production trucks, something that didn’t exist at the time. There were plenty of large OB trucks for big events, but for smaller shows—especially those in remote locations—the existing solutions just didn’t work.
We learned about Riedel via their intercom system, but the real game-changer came when we tested the Riedel MediorNet MicroN. I was looking for a way to connect two trucks for a large stadium event to seamlessly share video, audio and data. We tested the MicroN, and it was a revelation. With just one fiber strand and a few MicroN boxes, we doubled the power of our trucks. That moment marked a turning point for us.
Now, when we arrive at a site, the process is lightning fast. We deploy multiple cameras, lay down a single fiber cable and we’re ready to go—it’s incredible how much time we save. We used to spend hours laying multiple cables for intercoms, video feeds and routing. Now, we just unroll one fiber and place the MicroN boxes where needed—it’s that simple.
One of the most memorable moments for us was when a large Canadian broadcaster’s main production facility flooded. They needed to relocate everything in 24 hours and were scrambling for a solution when I suggested MediorNet. I lent them a couple of MicroNs to test, and they were sold on the idea—the flexibility, redundancy and speed of deployment with MediorNet was exactly what they needed. In just a few days, their operation was up and running again.

Rec4Box uses Riedel MediorNet MicroN units in its mobile production trucks, which greatly simplify live productions.
For us, MediorNet has become our secret weapon. I can’t even calculate how much money we’ve saved in labor costs alone. We save money and, more importantly, we save time. Redundancy is also crucial to us and it’s built into the MediorNet system—we know we can rely on it even for the biggest, most complex events. It’s designed to keep everything running smoothly even if a fiber cable breaks. And because we can prioritize certain signals, we ensure that the most important feeds always stay on, no matter what. We also love the flexibility. When we’re working on an event
such as a massive comedy festival or a national holiday celebration, we need to send signals to all corners of the venue. With a traditional system, you’d need a huge router and a lot of cables, but with the MicroN, we just run one fiber from the truck and then branch out to different parts of the venue. Whether it’s the front of house, backstage, or the VIP section, we can route feeds effortlessly. It’s a “spider network,” and it’s the way it should be.
Today, the MicroN and MediorNet are the heart of our operations. We use them on every production, and our crew wouldn’t even consider going out without them; the system has become so essential that if we don’t have it on a gig, they’ll refuse to go. The days of hauling around bulky equipment and dealing with complicated wiring are over. With MediorNet, we can focus on what we do best: creating incredible live experiences. And as we continue to grow, I know that MediorNet will remain at the core of everything we do. ●
Jonathan Fortin has worked with international production teams on major events like the America’s Cup and Volvo Ocean Race. He founded Rec4Box, a leading mobile rental company offering virtual reality, OB vans and transmission solutions. Fortin has played a key role in high-profile Canadian shows like “The Masked Singer” and advanced augmented reality integration, elevating industry standards. He can be reached at jonathan@rec4box.com. More information is available at www.riedel.net/en/.
Clear-Com’s Gen-IC Virtual Intercom system is a secure and flexible virtual intercom solution that allows users to easily integrate on-premises hardware with virtual intercom clients. End users can quickly ramp up multiple virtual clients as needed, with the unique capability of integrating with Clear-Com’s extensive hardware infrastructure over LAN, WAN and the internet.
The virtual intercom application can be deployed to selectable regional targets, minimizing inherent latency by allowing administrators to deploy Gen-IC Virtual Intercom closest to where their teams work. The system utilizes Clear-Com’s Agent-IC mobile app and Station-IC virtual desktop client, eliminating the need for additional user training. Connections from the virtual clients to hardware ecosystems are achieved through the existing Clear-Com LQ series of IP interfaces, with no dedicated interfacing requirements. www.clearcom.com
TVU MediaHub is designed to revolutionize live-media feed conversion and routing with its cloud-based platform. Supporting unlimited format conversions, it transforms IP signals like YouTube, Zoom, Twitch, SRT and NDI into SDI for seamless studio integration. With drag-and-drop simplicity and automation, broadcasters can instantly scale operations up or down, drastically reducing costs. From simple scan conversion to disaster recovery, it adapts to any broadcast environment, ensuring unmatched flexibility and reliability in live production workflows.
TVU MediaHub simplifies complex routing with real-time preview and drag-and-drop capabilities; eliminates hardware routing or cable adjustments; and provides automation that effortlessly manages encoding, scaling and decoding in real time. www.tvunetworks.com
AJA’s 12G-SDI/HDMI 2.0 converters include the HA5-12G and Hi5-12G. The HA5-12G converts an HDMI 2.0 input to 12G-SDI for 4K/UltraHD single-link output. It includes two 12G-SDI distribution-amplifier outputs with eight or two channels of embedded audio from an HDMI source, or two-channel analog audio via RCA input. The HA5-12G also supports Extended Display Identification Data (EDID) emulation, which ensures the connected source continuously supplies custom preferred video formats on output.
The HA5-12G-T and HA5-12G-T-ST fiber SFP-equipped models are single-channel transmitters that can extend HDMI 4K/UltraHD signals over long distances—up to 10 kilometers over a single LC or ST fiber link. These models ship with SFPs installed and include all the functionality of the HA5-12G, with the added benefit of further extending audio and video signals.
www.aja.com/products
The combination of Sony’s PDT-FP1 Portable Data Transmitter (camera sold separately), CBK-RPU7 4K/HD Remote Production Unit and NXL-ME80 LAN/WAN Media Edge Processor delivers a powerful 5G cellular live-production streaming solution. It ensures seamless data transmission, superior picture quality and ultra-low latency, making it ideal for live productions that demand reliability and performance.
The PDT-FP1 supports 5G mmWave and Sub-6 technologies, enabling high-speed transmission of video and still images from connected devices.
The NXL-ME80 serves as a decoder, receiving HEVC streams from the CBK-RPU7. This configuration is advantageous for broadcasting live events, remote production and real-time video-switching applications.
https://pro.sony/ue_US
USER REPORT
Victoria Roche Broadcast Manager TJC and Ideal World
LONDON—TJC, part of the Vaibhav Global Ltd. family, is a U.K.-based teleshopping channel, bringing a wide-ranging collection of lifestyle products to its audiences, from finely crafted jewelry and premium beauty products to home and garden decor.
Together with our sister TV shopping channels in the U.S. and Germany, we tirelessly search for unique and captivating pieces, directly sourcing from over 32 countries. We also pride ourselves on production value and like to engage our audience from different locations in ways that are fun and unique.
We were the pioneers of remote broadcasting in the live-teleshopping space, where we have successfully moved outside the studio to create content using Dejero’s EnGo mobile video transmitters and WayPoint receiver. Since we started using Dejero in 2020, it no longer takes a day and a half to set up a studio; we can go from nothing to being ready to broadcast from anywhere in less than 30 minutes.
Last year our U.S. sister company, Shop LC, found themselves broadcasting live from a haunted hotel in Tonopah, Nevada, an old mining town in the middle of the desert. They wanted to bring the viewers a rich historical experience about the town’s gold and silver deposits during their live buying journey.
They brought four Dejero EnGo mobile video transmitters and three Starlink satellite dishes on-site. Dejero’s Smart Blending Technology augmented Starlink’s capacity by blending its LEO satellite network with multiple wired (Ethernet) and wireless (cellular) IP connections. There was no way they could have gone live without Dejero, considering the fluctuating fiber and cellular service gaps in such a remote location.
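Smart Blending Technology is proprietary, but the general idea behind any connection-bonding scheme can be sketched as a weighted scheduler that spreads traffic across links in proportion to each link's estimated capacity. The link names and Mbps figures below are invented for illustration—a real system also measures capacity continuously and reorders packets at the receiver:

```python
# Hypothetical per-link capacity estimates in Mbps. In practice these
# fluctuate and must be re-estimated constantly, which is where the
# hard engineering lives.
links = {"starlink": 120.0, "ethernet": 40.0, "cell_a": 25.0, "cell_b": 15.0}

def schedule(n_packets: int, capacity: dict) -> dict:
    """Assign packets to links so each link's share tracks its capacity.

    Greedy weighted scheduling: each packet goes to the link whose
    sent/capacity ratio is currently lowest.
    """
    sent = {name: 0 for name in capacity}
    for _ in range(n_packets):
        target = min(capacity, key=lambda k: sent[k] / capacity[k])
        sent[target] += 1
    return sent

plan = schedule(200, links)
print(plan)  # roughly proportional: starlink carries the most, cell_b the least
```

The payoff of this proportional spreading is resilience: when one link's capacity estimate collapses, its share of new packets shrinks automatically while the others absorb the load.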
We also conducted a live broadcast from the desert in Tucson, Arizona, where the presenters were selling products while walking among giant cacti. Even in this remote location, connectivity was no issue for TJC because of Dejero’s Smart Blending Technology. In fact, the viewers assumed the background was a greenscreen because of how clear the transmission was. The presenter and guest had to walk through the landscape to prove it was real and were able to answer text messages live, in a different time zone.
EnGo travelled to Italy multiple times last year, broadcasting not only for TJC but also for our U.S. and German sister channels, using EnGo’s “sharing” feature. For the Vicenza Jewelry show, we had to set up on the rooftop of an Airbnb in Venice. The building’s internet connection was almost nonexistent, so we used a SIM card in the EnGo with roaming data, running a CuePoint return feed off a smartphone. EnGo performed flawlessly.
Our parent company, Vaibhav Global, saves money by using Dejero—we can send just one crew member to any location to meet a local presenter for a live broadcast. In Germany and the U.K., we even use an EnGo to mix on-site. In the pre-Dejero days,
because of the complexities and cost of RF and booking satellites and hiring production trucks, we simply couldn’t justify these on-location broadcasts.
Shop LC in the U.S. uses two EnGos and eight LivePlus App licenses to remote guests in from all over the world. We’ve made it as easy as possible for the guests, who can be set up and ready to go live in five minutes. Not only is the connectivity reliable, the units are robust: Despite hitting more than 100 degrees Fahrenheit in Italy and
in the deserts of Nevada and Arizona, the EnGo mobile transmitters were “bulletproof.” ●
Victoria Roche started at TJC as an engineer and rose to broadcast manager within four years. She is responsible for the technical broadcast operations of two channels that run 24/7, ensuring that TJC reaches its audiences on a variety of linear and OTT platforms. She can be reached at victoria.roche@tjc.co.uk. More information is available at www.dejero.com.
Matt Endicott Production Manager Cornerstone AV
SALT LAKE CITY—Cornerstone
AV is a wide-ranging event production company that covers many services, including scenic development, content management and live-event execution. In addition to delivering the in-house experience, our live-event work often includes a broadcast element for streaming to external audiences.

Many of us came to Cornerstone with a broadcast television background. Those experiences inspired us to build a broadcast-style truck to support our live-event work. We like to say that we brought everything we learned from the TV world into the live-event world, including the core technologies used in studio facilities today.
Those technologies include several products from MultiDyne—the company shares our vision of applying broadcast-quality performance to the live-event space, and its products exist comfortably in both worlds. MultiDyne also allows us to operate more efficiently by providing compact, problem-solving products that can be rapidly deployed, cutting our setup time by a significant margin.
MultiDyne’s FiberSaver is one product that consistently pays dividends. As a digital multiplexer, its main benefit is overcoming the challenge of moving a multitude of signals over a finite number of fibers. It also provides the convenience of wavelength remapping. This means the unit’s wavelength-agnostic inputs can accept and multiplex digital optical signals from a variety of sources, move them over a common fiber strand at different wavelengths and electronically reclock each signal to the 1,310-nanometer standard required at the output. As opposed to the passive transport that would be required without FiberSaver, its electronic reclocking reliably ensures robust and fresh signals at every output.
The FiberSaver’s one-rack-unit design fits comfortably in our truck and is positioned just below our Ross Ultrix router. The Ultrix moves up to 24 signals to FiberSaver over a 1-foot single-mode fiber jumper, which allows us to maintain 100% fiber purity through the chain. The inclusion of BNC connections allows us to move SDI signals over copper as warranted, alongside the optical signals moving over fiber. The unit’s bidirectionality supports 12 return feeds, which allows us to monitor graphics, prompter, scheduling and cueing from front of house.

MultiDyne can also customize solutions to support other needs at Cornerstone AV. That includes the VB Series “Build a Box” solution, which MultiDyne builds to specification to support any signal-transport combination over fiber. These are portable solutions that we can throw down anywhere to carry video and Ethernet signals through to the destination. We often use them in a last-mile scenario, taking signals in from FiberSaver and moving them to our projectors.
We are also fans of the SMPTE-HUT Series as it brings simplicity to those of us in a hybrid world. More specifically, the SMPTE-HUT is a universal camera transceiver that removes the limitations that come with hybrid fiber and copper cabling. The HUT device in our drive rack is powered through a TAC-12 single-mode fiber cable. A typical application will power long-lens cameras positioned at front of house or at each side of stage, which would place the drive rack behind the stage. In either scenario, there is a drastic reduction in cable length—often up to 50 feet—as we can feed the drive rack over fiber.
Consider the alternative of requiring separate 700-foot cable runs for each feed; the hours across setup and breakdown times can quickly add up. With the SMPTE-HUT, we reduce at least several hours of labor before and after the event. The same benefit applies to FiberSaver, which also solves the problems of limited space for, or availability of, more than one or two fiber runs. Collectively, all our MultiDyne products simplify the challenges of operating in a hybrid universe. ●
Matt Endicott is a production manager at Cornerstone AV with an emphasis on the live event business. He can be reached at me@cornerstoneav.com. More information is available at www.multidyne.com.
Teradek’s Prism Mobile 5G is a video encoder and cellular distribution solution for sports broadcasters, news teams and live production crews. It provides the best possible cellular connectivity over public and private networks. The lightweight, compact solution allows camera operators to remain agile for longer while capturing action, the company said. Since its introduction, Prism Mobile 5G has streamed content for NBA and NFL teams, NBCUniversal and Fox. The solution seamlessly streams across remote terrain without a single drop.
The solution offers latency as low as 250 milliseconds over cellular; bonded video with up to nine connections; multichannel audio with up to 16 channels; low bit rate with HEVC for bit-rate efficiency; and camera-to-cloud with Frame.io, Sony C3P, Sony Ci and PIX.
https://teradek.com
Wohler’s new MAVRIC remote-monitoring software suite is licensed to run with signal probes installed on any of its current iSeries or eSeries monitors or with its new openGear card. MAVRIC enables many users to simultaneously view and listen to information from signal probes, providing one-to-many remote monitoring capabilities for operators located across diverse global locations. The suite lets customers architect their own customized, scalable and integrated monitoring solutions spanning multiple sites worldwide.
A suite of three applications that may be purchased individually or as a complete solution, MAVRIC can be deployed on-premises, in the cloud or as a hybrid installation, with secure encrypted communications across all interfaces and signal sources. A service entirely hosted and managed by Wohler is also available. www.wohler.com
Version 5.4 for Digital Alert Systems’ DASDEC and One-Net emergency messaging platforms now includes provisions for the optional AES67/Livewire+ audio-over-IP functions that support the company’s “EAS at the Edge” architecture, which combines Telos Alliance and Nautel products in a full-AoIP air chain. To enhance security, v5.4 incorporates OpenID Connect for single-sign-on support, giving large enterprises a simple means to control and authenticate access to DASDEC or One-Net devices. Finally, the update includes the new Federal Communications Commission-mandated Missing and Endangered Persons (MEP) event code.
The Version 5 software series, with its new user interface and upgraded operating system, is available for older-but-compatible DASDEC-II and One-Net SE units in the field and can be readily installed on any DASDEC-III model.
www.digitalalertsystems.com
LiveLink is Vislink’s portable all-in-one live production encoder, tailored for broadcasters and content creators that need fast, reliable video transmission. Compact yet powerful, LiveLink combines Vislink’s advanced encoding and bonded cellular technology to deliver high-quality, low-latency video from virtually any location. Its intuitive interface allows for quick setup and operation, making it ideal for live news, sports and events.
The device supports multinetwork bonding, leveraging 4G/5G cellular, Wi-Fi, and Ethernet to maximize reliability and coverage in challenging environments. LiveLink is also equipped with robust error correction, ensuring video integrity even under fluctuating bandwidth conditions. Integration with cloud-based workflows enables seamless management and content delivery, while its lightweight design ensures portability.
www.vislink.com
For possible inclusion, send information to tvtechnology@futurenet.com with People News in the subject line.
America’s Public Television Stations has elevated Jennifer Kieley to vice president, government and public affairs. Formerly senior director of government relations, she is tasked with managing congressional relations, serving as state government liaison, working with federal departments and agencies, and overseeing communications and grassroots and grasstops advocacy. In her new role, she succeeds Kate Riley, who was promoted to president and CEO of APTS last September, following the retirement of Patrick Butler.
E.W. Scripps
E.W. Scripps named Adam Harman as senior vice president of programming, overseeing content acquisition strategy for a portfolio of networks that includes Ion, Court TV, Bounce, Laff, Grit and Scripps News. A 20-year media and entertainment veteran, Harman was vice president of strategy and acquisitions at cable programmer A+E Networks for 11 years. Prior to A+E, Harman held programming posts at NBCUniversal’s Style and Hallmark Channel.
Rich Welsh, senior vice president at Deluxe, has been elected president of the Society of Motion Picture and Television Engineers. He succeeds Renard Jenkins, who has come to the end of his two-year tenure in the role. Welsh has been on the SMPTE board for more than 10 years, most recently serving a two-year term as SMPTE executive vice president. Welsh has also served as the group’s vice president of education and as governor for EMEA, Central/South America.
EditShare
Collaborative video workflow solutions provider EditShare has named Brad Turner its new CEO, replacing Ramu Potarazu. Most recently the founder of Turner Advisors, a sales and marketing strategy consultancy for midmarket software firms, Turner is the former general manager of Harris Broadcast’s media software business. He has held leadership posts with digital analytics, digital document management and information services businesses.
The Society of Broadcast Engineers has tapped Mike Downs, a 27-year veteran of nonprofit management, as executive director. Downs, who served as the director of meetings and conventions at Kiwanis International since 2012, will lead the 4,000-member organization for TV and radio engineers and those in related fields. In addition to his run at Kiwanis, Downs worked for 10 years as the chief staff executive at Key Club International.
Imagine Communications
Imagine Communications has elevated Steve Reynolds to CEO from president. Reynolds succeeds current CEO Tom Cotney, who will retire on March 31. Cotney will work with Reynolds to ensure a smooth leadership transition, Imagine said, and will remain with the company as nonexecutive chairman of the board, where he will support critical customer commitments, advise the leadership team and serve as a performance coach across all levels of the organization.
Telestream
Dan Castles has returned to Telestream as CEO. He was the company’s founding CEO in 1998 and led it for more than 22 years until retiring in April 2023. Also returning to Telestream’s senior leadership team are Mark Wronski and Charlie Dunn (product); Anna Greco (customer experience); and Jennifer Fehrmann (finance). Other members of the senior management team remain in place, according to the company.
Wheatstone
Wheatstone has named Kelly Parker director of product development. Parker returns to the company from Townsquare Media, where he was vice president, broadcast operations, overseeing 349 radio stations in 74 markets. In his new role, he’ll direct development for Wheatstone’s audio-over-IP, streaming, virtualization, mic, on-air processing and software server lines. Parker played a lead role in the development of Wheatstone’s WheatNet-IP audio network in early 2005.