Welcome to the January 2025 issue of TV Tech.






For the media and entertainment industry, artificial intelligence has become a moving target. Early concerns over AI’s impact on eliminating jobs have given way to additional dilemmas over how to manage its effect on journalism, sales and marketing, HR and engineering; in essence, every sector of an organization’s structure is being affected.
It’s during times of rapid change that effective oversight of such groundbreaking technology can make the difference between success and failure—between responding to your competition and being proactive and ahead of the curve.
That’s why it was important to see the Scripps station group take the initiative to form an AI strategy team to improve operations and open up new business opportunities. The team is headed up by Kerry Oslund, who has been appointed vice president of AI strategy, and Christina Hartman, the new vice president of emerging technology operations. Keith St. Peter has been named director of newsroom artificial intelligence and will lead AI strategy for news.
I spoke with Kerry about his new duties and he emphasized that the company is taking a holistic approach towards the impact of AI, both internally and externally.
“This is about doing everything everywhere all at once, but at different speeds with different guardrails,” he said. “However, depending on what you do, your guardrails are probably going to be a little bit different and the speed of which we attack the opportunity is going to be different as well. Our biggest guardrails are going to be around our journalism and our trust and our brand.”
In an effort to familiarize employees with AI, Scripps developed what it calls the “Engine Room,” which serves as a hub for AI tools used internally. Engine Room is a basic general-use AI chat tool (like ChatGPT), but safer to use for work purposes. Prompts and any information entered in the Engine Room are encrypted and remain with Scripps, so no information entered via Engine Room is accessible externally nor is it used to train AI models. As a general use tool, Engine Room can be helpful for basic functions like copy editing, ideation and writing assistance, the company said.
“One of the first things we have to do is just give our employees the tools that they need and do it in a safe manner,” Kerry said. “The ‘Engine Room’ is a closed environment where an employee can go in and they can do prompts and get responses, and that is not going to go out to the LLMs or in any way be leaked to outside of our company. In a company like ours that has over 5,000 employees, most of them are really curious, and they’ve been out there banging on the tools already, right? And so we’d rather have to give them a safe environment to bang on the tools.”
Along with that curiosity comes a healthy skepticism, given the hype surrounding AI. Kerry thinks such hands-on experience will alleviate some of that.
“If I could do one thing in my first year on this job, it would be turning skepticism into curiosity, because I think that would be a huge win. And by the way, skeptics live everywhere you know—they live in the C-suite. But when you give people the tools, they start experimenting and they become better prompt engineers themselves. They have a light-bulb moment—and when that light-bulb moment happens, then the human curiosity is just unleashed. And so the goal is to take skeptics and turn them into novices.”
Let’s hope that 2025 will be marked by such efforts that help our industry move beyond the hype and embrace the reality and possibilities of AI while remaining vigilant to our core mission.
Tom Butts
Content Director
tom.butts@futurenet.com
Vol. 43 No. 01 | January 2025
FOLLOW US www.tvtech.com twitter.com/tvtechnology
CONTENT
Content Director
Tom Butts, tom.butts@futurenet.com
Content Managers
Michael Demenchuk, michael.demenchuk@futurenet.com
Terry Scutt, terry.scutt@futurenet.com
Senior Content Producer
George Winslow, george.winslow@futurenet.com
Contributors: Gary Arlen, James Careless, David Cohen, Fred Dawson, Kevin Hilton, Craig Johnston, and Mark R. Smith
Production Managers: Heather Tatrow, Nicole Schilling
Art Directors: Cliff Newman, Steven Mumby
ADVERTISING SALES
Managing Vice President of Sales, B2B Tech: Adam Goldstein, adam.goldstein@futurenet.com
Publisher, TV Tech/TVBEurope: Joe Palombo, joseph.palombo@futurenet.com
SUBSCRIBER CUSTOMER SERVICE
To subscribe, change your address, or check on your current account status, go to www.tvtechnology.com and click on About Us, email futureplc@computerfulfillment.com, call 888-266-5828, or write P.O. Box 8692, Lowell, MA 01853.
LICENSING/REPRINTS/PERMISSIONS
TV Technology is available for licensing. Contact the Licensing team to discuss partnership opportunities.
Head of Print Licensing: Rachel Shaw, licensing@futurenet.com
MANAGEMENT
SVP, MD, B2B: Amanda Darman-Allen
VP, Global Head of Content, B2B: Carmel King
MD, Content, Broadcast Tech: Paul McLane
VP, Head of US Sales, B2B: Tom Sikes
VP, Global Head of Strategy & Ops, B2B: Allison Markert
VP, Product & Marketing, B2B: Andrew Buchholz
Head of Production US & UK: Mark Constance
Head of Design, B2B: Nicole Cobban
FUTURE US, INC. 130 West 42nd Street, 7th Floor, New York, NY 10036
No part of this magazine may be used, stored, transmitted or reproduced in any way without the prior written permission of the publisher. Future Publishing Limited (company number 2008885) is registered in England and Wales. Registered office: Quay House, The Ambury, Bath BA1 1UA. All information contained in this publication is for information only and is, as far as we are aware, correct at the time of going to press. Future cannot accept any responsibility for errors or inaccuracies in such information. You are advised to contact manufacturers and retailers directly with regard to the price of products/services referred to in this publication. Apps and websites mentioned in this publication are not under our control. We are not responsible for their contents or any other changes or updates to them. This magazine is fully independent and not affiliated in any way with the companies mentioned herein.
If you submit material to us, you warrant that you own the material and/or have the necessary rights/permissions to supply the material and you automatically grant Future and its licensees a licence to publish your submission in whole or in part in any/all issues and/or editions of publications, in any format published worldwide and on associated websites, social media channels and associated products. Any material you submit is sent at your own risk and, although every care is taken, neither Future nor its employees, agents, subcontractors or licensees shall be liable for loss or damage. We assume all unsolicited material is for publication unless otherwise stated, and reserve the right to edit, amend, adapt all submissions.
Please Recycle. We are committed to only using magazine paper which is derived from responsibly managed, certified forestry and chlorine-free manufacture. The paper in this magazine was sourced and produced from sustainable managed forests, conforming to strict environmental and socioeconomic standards.
Public Media Venture Group (PMVG) has announced it is now providing real-time translation of closed captioning from English to Spanish on PMVG’s NextGen TV (ATSC 3.0) test bed station in Cookeville, Tenn.
The initiative, which illustrates the capabilities of NextGen TV broadcasters to offer new services and expand available audiences, is a collaboration among DigiCAP, XL8, PMVG, the Korean Radio Promotion Association (RAPA) and PBS station WCTE.
The translation solution detects English closed captions within a television program and translates them into a selected language using an AI-powered translation engine operated by XL8. The translated closed captions are then
multiplexed back to the program stream and broadcast over PMVG’s NextGen TV station in Cookeville. While the LiveCAP service offers translation to many different languages, the current initiative in Cookeville focuses on Spanish.
“It has always been the mission of public broadcasters to serve all members of their communities,”
Marc Hand, president and founder of PMVG, says. “Captioning has extended the reach of public broadcasters to hearing-impaired viewers but has—until now—only been available in English. This new service will extend the benefits of closed captions to households where English is not the primary language.”
The translation engine incorporated into LiveCAP is provided by XL8.
The Advanced Television Systems Committee has elected Ling Ling Sun, chief technology officer at Nebraska Public Media, and Ed Czarnecki, vice president of global and government affairs at Digital Alert Systems, to its board of directors. Fred Engel, principal at Fred Engel Technology Consulting, has been re-elected for a second term.
“Our newly elected directors will serve three-year terms beginning January 2025. They join a board that guides ATSC in its mission to develop next-generation broadcast standards and foster innovation in the industry,” ATSC President Madeleine Noland said. “This election cycle was very competitive, and we appreciate all the effort the candidates put forward to have a voice in guiding ATSC’s evolution and the high participation in the process by our members. This diverse and experienced group of executives will help us empower a new era of broadcasting.”
Noland also thanked Nexstar Media Group chief technology officer Brett Jenkins and Jim DeChant of LTN Global, whose board terms concluded at the end of last year.
“We thank them for their invaluable contributions to the advancement of ATSC and adoption of ATSC 3.0 internationally,” she said.
Ling Ling Sun (l.) and Ed Czarnecki
Public Media Co. has launched an initiative funded by a $1.5 million grant from the John S. and James L. Knight Foundation to help local public media organizations better understand and meet their communities’ information needs.
“We are pleased to support Public Media Company’s local news initiative,” says Marc Lavallee, director of technology product and strategy/journalism at the foundation. “The initiative will enable local leaders to bolster the local news that is so essential to their communities.”
The initiative is intended to support the growth of newsrooms and expand local coverage at public media groups throughout the country. Public Media Co. is working with several partners on the project to provide insight and support to assist local outlets in expanding news coverage and better connect with their communities, it said.
The grant is aimed at helping local public and independent media outlets, supporters and funders make informed decisions and to access assistance to strengthen local news ecosystems, it said.
“Public media is an important solution to the crisis in local news,” Public Media Co. CEO Tim Isgitt said. “Local public media organizations have earned the trust of their communities over decades and collectively reach 99% of the country. This initiative comes at a critical moment for ensuring communities continue to have access to essential local news and information.”
Public Media Co. managing director Stephen Holmes will lead the project with assistance from research, industry and funding partners. Findings will be made public, it said.
It was not a big surprise when word came in late November that John Lawson was stepping down as the executive director of the AWARN Alliance.
(The alliance’s steering committee selected Dave Arland for the role. An interview with Arland is available online.)
A year ago in an interview, Lawson compared his effort to help get ATSC 3.0-based advanced emergency alerting and information (AEI) off the ground as being like “pushing a rock up a hill” and described his decision to try a “for-profit” approach to get the ball rolling while also continuing in his AWARN leadership role.
Now John has given up the alliance’s reins, and he has a few thoughts about his greatest achievements and biggest disappointments while leading AWARN.
On the positive side of the ledger, the most significant thing was the role advanced emergency alerting played in getting the Federal Communications Commission to give ATSC 3.0 the thumbs up in November 2017. “We were definitely a factor in the commission’s approval of the ATSC 3.0 voluntary transition. It was a three-to-two vote. Every commissioner—even the two Democrats [Mignon Clyburn and future Chair Jessica Rosenworcel] who voted against it—talked about the public-service aspect of advanced alerting. Chairman [Ajit] Pai himself thanked me for giving him the ammunition to get the votes.”
Another important achievement was educating emergency managers about the problems 3.0-based AEI solves for them and building support for it among members of that community.
His biggest disappointment: “That AEI is not deployed widely in the United States and saving lives already.”
Lawson’s biggest hope for AEI is renewed interest among broadcasters. A year ago, station groups like Fox Television Stations and News-Press & Gazette had left the alliance, and Lawson was characterizing the broadcast industry—with a few notable exceptions—as having “dropped the ball” on AEI.
“Since the election, I’ve seen a new commitment by some broadcasters to really do something with 3.0. We could see a sea change there,” he says, cautioning, however, that when it comes to AEI what must be avoided is for “every station group to have to reinvent the wheel.”
That is the role he hopes his new America’s Emergency Network company will fill.
“I’m very happy that the AWARN steering committee, of which I am a member, decided unanimously to keep AWARN going, and I really think that its role in terms of education, advocacy and networking between alerting authorities and broadcasters is a necessary complement for whatever happens on the private sector side,” he says.
“I think we [the AWARN Alliance] kept the torch alive for using broadcasting to save lives. It’s still burning, and I am confident that Dave Arland and the AWARN steering committee will find new directions to expand AWARN.”
In a move that signals further dealmaking and consolidation in the media and entertainment industry, Warner Bros. Discovery has announced a new corporate structure that will split the company into two divisions.
One unit will consist of its global linear networks, which include Discovery, TNT and others; the second division will include its streaming and studio operations, which include Max, its Hollywood studios and other operations.
The company did not specify where HBO would fall in this mix but it is believed it will be included in the streaming/studio division, given its importance to the Max streaming service.
The move follows a decision by Comcast to spin off its linear cable networks amid rampant cord cutting and is part of what analysts believe will be another period of consolidation in the media and entertainment industry.
In announcing the new corporate structure, Warner Bros. Discovery said it will enhance “its strategic flexibility and create potential opportunities to unlock additional shareholder value.” The company’s stock price, which has been hurt by the ongoing decline of the pay TV business, soared on the news that the company will be better-positioned for dealmaking.
“Since the combination that created Warner Bros. Discovery, we have transformed our business and improved our financial position while providing world class entertainment to global audiences,” Warner Bros. Discovery president and CEO David Zaslav said. “We continue to prioritize ensuring our Global Linear Networks business is well-positioned to continue to drive free cash flow, while our Streaming & Studios business focuses on driving growth by telling the world’s most compelling stories.
“Our new corporate structure better aligns our organization and enhances our flexibility with potential future strategic opportunities across an evolving media landscape, help us build on our momentum and create opportunities as we evaluate all avenues to deliver significant shareholder value,” he added.
By Gary Arlen
The year-end frenzy of data, deals and digital divination about connected TV (CTV) has painted an outsized vision of how this internet-based delivery system is overhauling the video ecosystem. Undeniably, CTV—and its hardware and programming components— is already affecting advertising strategies and encouraging re-evaluations of content and equipment opportunities. The lure of customized/targeted video ads has been the Holy Grail for decades, and true believers are now putting their faith in CTV.
But as one of the myriad of new studies points out, nearly two-thirds of viewers think CTV ads are not relevant to them. That analysis amplifies the difficulty of pinpointing relevance in the emerging fragmented CTV world.
CTV has definitely been on a roll. Nearly 90% of U.S. homes have at least one internet-connected TV device— either a smart TV or a streaming access plug-in, such as an Amazon Fire Stick or Roku Streaming Stick. More than 200 providers or packagers of streaming content have groped their way into the CTV world, but only a familiar handful continue to dominate it: YouTube (which has been aggressively building its CTV presence), Hulu, Amazon (including its Prime Video and Freevee offerings), Netflix, Max (the relaunched HBO Max) and Disney+. Apple TV+ is also accelerating its role, including leveraging several sports deals.
Digital researcher eMarketer calls CTV “the fastest-growing major ad channel,” summing up its 2024 revenue tally at $28.75 billion, nearly 19% higher than the previous year. eMarketer predicts CTV ad spending will climb to $32.6 billion this year, then $36.6 billion in 2026, $40.5 billion in 2027 and $44.32 billion in 2028.
The role of CTV vs. other on-demand and linear platforms is triggering unending hopeful speculation. “More and more advertisers [are] shifting their focus away from linear TV and towards CTV and streaming platforms,” observes Geir Skaaden, chief products and services officer at Xperi.
“The market is poised for meaningful growth on a global scale,” Skaaden tells TV Tech, noting that the “migration away from traditional media players will continue in 2025.” He cites the need to leverage “the advertising potential of smart TVs [as] important [but] it should not come at the expense of the user experience.”
Xperi develops software for consumer-electronics devices and media platforms for broadband video service. In the CTV realm, its primary product is TiVo OS, which Xperi says will be used in about 2 million smart TVs by year-end 2025.
Viewers continue to have more options for CTV equipment. Parks Associates’ latest “Battle of the Platforms” research into CTV ecosystems found that Samsung’s Tizen is the
most widely used smart-TV operating system. Sarah Lee, a Parks research analyst, points out that hardware brands can use streaming and CTV content “as springboards to promote their own content channels and smart TV models,” which Samsung, LG and others are doing aggressively.
LG Ad Solutions, the cross-screen advertising offshoot of TV maker LG Electronics, and audience measurement firm Alphonso have also been studying the impact of CTV advertising. In a December report on “The Connected Gamer,” the venture identified the impact of CTV ads on driving video-game purchases. Among the findings:
• 81% of gamers have upgraded their TVs to enhance gameplay.
• 80% say TV ads influence their decisions to buy video games, and 81% are more likely to purchase games they see advertised on TV.
• 85% appreciate seeing video-game recommendations on their TV home screen.
“The largest screen in the home is an incredibly powerful way for brands to connect with gaming audiences,” Tony Marlow, chief marketing officer at LG Ad Solutions, says. “Gamers are among the most deeply engaged audiences on connected televisions,” he added, calling CTV “a platform that uniquely blends premium, big-screen experiences with actionable data.”
When Walmart, the world’s largest retailer, last month completed its $2.3 billion acquisition of Vizio, a leading maker of “value-priced” TV receivers, a key factor seemed to be its plans to use Vizio’s smart TV sets as a vehicle to expand Walmart’s targeted advertising operations. Analysts expect the retailer (on behalf of its merchandise suppliers) to integrate free, ad-supported streaming TV (FAST) content into the CTV features of Vizio sets.
Walmart’s $2.3 billion acquisition of Vizio, finalized last month, is one of the largest CTV plays in a busy season of partnerships and acquisitions.
Another December deal hinted at the cross-platform visions of potential CTV operators. Comcast signed a multiyear distribution agreement with Warner Bros. Discovery to carry all of its domestic linear networks, including streaming options for Max and Discovery+, further enlarging the CTV options via Comcast’s Xfinity broadband networks.
The shift toward streaming also raises prospects of the role ATSC 3.0 datacasting will play in IP delivery. Although broadcasters have not yet widely committed to plans to use NextGen TV for wireless internet delivery, some observers envision opportunities that would tie future CTV projects to signals transmitted via 3.0 spectrum. These visions come as the appeal of over-the-air viewing is waning. Nielsen’s latest analysis showed that nearly two-thirds of the under-35-year-old audience tune to streaming video, while only 7% of that age cohort watches over-the-air broadcast channels. While those figures are more extreme than the nation’s overall viewership preferences (41.6% for streaming, 23.7% for OTA and 25% for cable in Nielsen’s November tally), the data portends the future
of viewing in the emerging connected-TV world.
These viewership analyses come as the landscape is changing for hardware, advertising, regulatory and business factors in related industries. For example, The Trade Desk, one of the largest global ad-technology providers, recently disclosed it is building a CTV operating system called “Ventura” (named after its California hometown). CEO Jeff Green believes his company’s demand-side platform (DSP) will find traction because it is independent of content ownership, unlike the CTV platforms at Amazon, Google and Roku.
Wurl, a platform operator that has developed an AI that matches viewers’ emotions to CTV ad content, has issued a report on “Contextual Targeting on CTV” which encourages advertisers to analyze emotions “properly.”
“We have entered an era of AI that will soon enable us to better understand ourselves,” CEO Ron Gutman explained in his analysis of why CTV will become advertisers’ preferred format.
Similarly upbeat is a forecast from Teads, a global, cloud-based omnichannel platform operator that last month announced an exclusive international partnership with VIDAA USA, the smart-TV operating system company powering Hisense Smart TVs globally, to extend advertiser reach with CTV native display inventory on its smart TV OS. In a study released in December, Teads found 42% of media professionals identified CTV and omnichannel formats as the top theme for this year “poised to shape the future of advertising.”
Teads Global Chief Marketing Officer Natalie Bastian cites the concerns of advertising and publishing executives about the fragmented media landscape. She speculates they will seek “innovative ways to reach the consumer” and cites “shoppable CTV formats” and artificial intelligence as tools for “harnessing the full potential of the technology at our disposal.”
Like many emerging technologies, “connected TV” (CTV) is still searching for the right word, for itself and for the sphere of services and platforms in its orbit.
• Connected TV: Both a device and a service. CTV can refer to the video receiver/display equipped with internet access (such as smart TVs and plugins, e.g. Amazon Fire Stick and Roku Streaming Stick) that enables viewers to stream and see video content. It also describes the services (especially ad-related content) delivered through these devices.
• Smart TV: The digital receiver/display device that delivers streams of CTV content.
• FAST or FASST: Free ad-supported streaming TV, programming that is streamed without a paid subscription. The shows include targeted commercials.
• Over-the-Top (OTT): Video content that is streamed, usually directly to a smart TV receiver, via the Internet but bypassing traditional devices such as a set-top box.
Similarly, Kara Manatt, executive vice president of intelligence solutions at Magna Global—the media intelligence and investment firm which identified that 64% of viewers believe CTV ads are misdirected—offers a congruent but alternative perspective.
“The continued growth of CTV and streaming make it a valuable place for brands to reach their audiences,” she says, citing findings from the Magna/Nexxen report on “The Intersection of Audience Data + Creative Optimization.” She contends the study’s performance insights underscore the value of “pairing audience data and optimized creative.”
Program-rights owners are mining their vaults for streamable shows and developing alliances with CTV packagers for access to platforms. For example, Samsung TV Plus (the CTV offshoot of the electronics maker) last month unveiled a deal with studio Worldwide Pants to offer 4,000 hours of segments from CBS’s long-running “Late Show With David Letterman” as a FAST channel on Samsung smart TVs. The FAST bundle, called “Letterman TV,” is among the latest additions to the Samsung worldwide lineup of more than 3,000 channels.
David Letterman, who hosted the “Late Show” for 22 years, quipped: “I’m very excited about this. Now I can watch myself age without looking in the mirror!”
More significantly, sports are shifting toward streaming platforms, most visibly the dollop of NFL games on Netflix, Prime Video, Peacock and ESPN+. A YouTube tally found viewers accessed 30% more sports content via streaming platforms in 2024 than in the previous year. That digs into a sweet spot for broadcast (and cable’s) programming powerhouse category.
Jon Giegengack, founder and principal at Hub Entertainment Research, calls the current evolution of streaming platforms the “sixth wave of the ‘Battle Royale’ ” among competing providers. The year-end Hub Intel study found that the “average household is using 13 different sources of entertainment,” about half of which are for video (others are No. 1 Spotify and other audio content).
“TV is adapting to its new reality,” he said.
His Hub Entertainment colleague David Tice expects this year “major players in TV tech—especially those dominating TV set operating systems—to buy market share.” Tice predicts tech giants will “squeeze out smaller competitors by offering manufacturers irresistible deals to abandon homegrown solutions. By the end of 2025, the rise of FASTs will drive increased competition for viewers.” Tice also says he expects legacy media firms to exploit their best library titles for exclusive use on their owned-and-operated FASTs. Now the industry will have to wait to see which of the deals and prognostications come true. ●
The smart TV infrastructure, including connected and streaming video systems, is creating a surveillance system that can undermine privacy and consumer protections, according to the Center for Digital Democracy, a Washington advocacy group. Connected TV is a “privacy nightmare,” says CDD executive director Jeffrey Chester, who co-authored a report to the Federal Trade Commission and the Federal Communications Commission asking for regulatory intervention. The 48-page analysis “How TV Watches Us” was prepared in early autumn, before the November elections.
Although policymakers from the incoming Trump administration appear unlikely to pursue such curtailment, Chester tells TV Tech: “We’re going to keep the pressure on,” citing “a network of privacy groups in the U.S. and Europe that are working together to develop safeguards” against such digital snooping. Chester said his goal is to make “the public… more aware that their TVs are spying on them.”
“U.S. companies have pushed their connected TV services across the world,” Chester said. “We plan to ramp up the pressure on the Trump FTC to protect the privacy of Americans.”
In its report, CDD cited the “unprecedented tracking techniques” of connected-TV services and smart-TV devices that seek to please advertisers by creating a “Trojan horse” to infiltrate personal privacy. The group’s study found that Hispanic, African-American and Asian-American viewers are “being singled out by marketers as highly lucrative targets” and contends these ethnic groups are the primary advertising targets for FAST channels.
The CDD report also delves deeply into the role of generative artificial intelligence, which it calls the “centerpiece” of the CTV industry’s plans for targeted advertising.
CDD also sent its report to the California attorney general and the California Privacy Protection Agency, asking both regulators to investigate the CTV industry by focusing on antitrust, consumer protection and privacy issues.
The CDD report came on the heels of a separate FTC analysis, “A Look Behind the Screens,” which was highly critical of the privacy practices of social media companies and streaming video services. That staff report concluded that the digital services threaten consumers’ privacy by collecting a “staggering amount of data in order to serve behaviorally targeted ads.”
The FTC report recommended Congress enact new legislation to curb “commercial surveillance” and urged digital platforms to eschew use of “privacy-invasive tracking technologies.”
❚ Gary Arlen
Gary Arlen has monitored, analyzed and advised on the development of “new media” in countless manifestations for more than 40 years from his research/consultancy company, Arlen Communications, in Bethesda, Md.
By Fred Dawson
Intensifying demand for ultralow-latency streaming solutions in sportscasting and other live video scenarios is outpacing efforts to meet goals with conventional streaming, fueling prospects for greater reliance on alternative infrastructures.
The long-simmering debate over what to do about latency is coming to a boil, with the Super Bowl looming as an annual reminder of how bad the problem can be even after years of efforts to do better than the prevailing Hypertext Transport Protocol (HTTP) streaming infrastructure. Adding to the challenges confronting HTTP streaming are financially attractive use cases like microbetting and remote live sports production, which depend on ultralow latencies that are beyond the reach of the dominant HTTP-based HLS and MPEG DASH streaming modes with simultaneous viewing experiences sometimes scaling into the millions.
Often referred to as “real-time streaming,” with sub-half-second lag times falling below human perceptions of any delay between end points, such capabilities are supported by a dozen or more providers of cloud platforms that rely on the WebRTC (Real-Time Communication) protocol stack, which is incompatible with HTTP streaming. Most of these platforms far exceed the scaling capacities, app versatility and other capabilities of video conferencing systems but do support multidirectional
video communications, which can’t be duplicated with HTTP-based streaming.
These providers are bunched at one end of a debate spectrum that also includes many players arguing for far less ambitious outcomes and others who contend there are better, cheaper real-time alternatives to WebRTC in the offing. That last category includes a new protocol known as Media over QUIC (MoQ), under development by working groups at the Internet Engineering Task Force (IETF).
Where people stand in the debate depends on which use cases are being prioritized and how soon any real-time requirements must be satisfied. While MoQ looks to outperform any option now available, advocates say it will take a few years to finalize at the IETF and then commercialize across the vast content delivery network (CDN) ecosystem.
“If you need streaming at those latencies now, WebRTC meets your needs today,” said Will Law, a key participant in the IETF effort and the chief architect for cloud technology at CDN operator Akamai. But he advises those who don’t see an immediate need for real-time connectivity to bide their time, with the understanding the MoQ standard will one day lead to real-time multidirectional streaming with key advantages over WebRTC, including compatibility the latter lacks with the existing CDN system.
Many operators, at least for now, just want to do away with the “spoiler” effect experienced by streaming audiences when they’re in earshot of outbursts from TV viewers who are watching events unfold much closer to real time, sometimes by as much as a half-minute or more.
In the case of last year’s Super Bowl, streaming viewers—who spent a Super Bowl-record aggregate of 2.64 billion minutes watching the game on seven OTT services— experienced average lag times behind on-field action ranging from a low of 42.73 seconds for Paramount+ to a high of 86.75 seconds for Fubo TV, according to calculations conducted by WebRTC platform provider Phenix Real Time Solutions.
The Phenix study also highlighted the inconsistency of streaming performance, with a 128-second spread between the lowest and highest latencies recorded on the Fubo platform. As evidenced by similar results reported by Phenix from 2023 and previous Super Bowls, just lowering latency enough to get rid of the spoiler effect is no cakewalk.
That point was brought home in September at the IBC Show with a demonstration revealing the outcome of a monthslong IBC Accelerator project led by Comcast and the U.K.’s BT. The
team’s hard-wrought reductions in latency contributions from encoding, packaging, buffering and other processes produced an overall cut in latency to 1.8 seconds on 4K live sports streams delivered over MPEG DASH and HLS.
Scalable live sports streaming at 1.8-second end-to-end latencies would certainly be a huge improvement over the current state of affairs. But the Accelerator team acknowledged that implementing their approach in real-world operations at mass scales could take some time and, in any event, would not satisfy the need for what WebRTC platforms are delivering.
These are the capabilities required to support a wealth of next-generation consumer applications infused with video-enabled social interactions among participants, which, along with microbetting, include watch parties, game shows, live sales presentations, online casino gambling, multiplayer game playing, distributed esports competitions, photorealistic group engagement in extended reality (XR) environments, and much else. In all these cases, multidirectional—as well as
real-time—streaming is essential.
The problem with WebRTC is that more money must be spent to reach mass audiences than is the case with conventional one-way
streaming supported by CDNs. Or, as Comcast technology fellow Alex Giladi put it at IBC,
“The difference between what you can get out of adaptive streaming and out of WebRTC is
large but not large enough to justify wholesale change.”
That’s true when the goal is nothing more than eliminating the spoiler effect. If there’s a return to be obtained on putting a WebRTC platform to use, though, it can be well worth paying an incremental premium to deliver what amounts to a premium service.
WebRTC is a peer-to-peer communications protocol stack based on the Real-Time Transport Protocol (RTP) used in internet voice and video communications. Streaming platforms built on WebRTC require distributed deployments of intelligently orchestrated cloud resources operating outside the HTTP domain, which can involve a huge volume of servers to reach massive scales.
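To make the contrast with HTTP streaming concrete, here is a minimal sketch of the WebRTC setup handshake using aiortc, an open-source Python implementation of the protocol stack (the library choice and the channel name are illustrative assumptions on our part, not anything the vendors quoted here necessarily use). The peer connection generates an SDP offer that must be exchanged over a separate signaling path, a step with no equivalent in one-way, CDN-served HTTP streaming.

```python
# A minimal sketch of the WebRTC offer/answer handshake using aiortc, a
# third-party Python implementation of the protocol stack (an illustrative
# assumption; the platforms discussed here do not necessarily use it).
import asyncio
from aiortc import RTCPeerConnection

async def main():
    pc = RTCPeerConnection()
    # A data channel rides the same peer connection as audio/video tracks.
    pc.createDataChannel("latency-probe")           # hypothetical channel name
    offer = await pc.createOffer()                  # build the local SDP offer
    await pc.setLocalDescription(offer)
    # In a real deployment this SDP is sent to the remote peer over a
    # signaling service, and the reply is applied with setRemoteDescription().
    print(pc.localDescription.sdp.splitlines()[0])  # first SDP line, e.g. "v=0"
    await pc.close()

asyncio.run(main())
```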
Finding use cases where the returns are certain enough to overcome resistance to the spend is the challenge faced by all the WebRTC platform providers. But after several years of piecemeal progress, they’re starting to make some significant headway.
Microbetting is a hot use case that many advocates believe could get a lot hotter if
people could bet on what they’re seeing in real time. “Online betting as a whole is typically done through sports books like FanDuel and DraftKings based on data rather than what bettors are seeing on their video screens,” Phenix Chief Marketing Officer Jed Corenthal said. “Delivering video so it syncs up with microbetting options like what the next pitch is going to be in a baseball game would deliver a much better experience.”
Last year, just six-plus years after the U.S. Supreme Court cleared the way for states to legalize sports betting, microbetting accounted for about 20% of the $14 billion wagered digitally, according to Grandview Research. The advantage of increasing audience engagement generated by microbetting is obvious and well worth pursuing, said Kelly Pracht, CEO of nVenue, a leading enabler of the AI-driven data aggregation and analysis that goes into calculating odds and setting up microbetting options delivered
by sportsbooks.
“We’re excited about near-zero video streaming,” Pracht says. At this still-early stage in its market evolution, microbetting remains “a little disjointed,” she acknowledges. But with providers like Phenix available to support real-time streaming, she said she’s confident the industry will overcome obstacles to “bringing it together.”
There are formidable challenges to directly linking microbet bookmaking to what people are seeing on screen, especially if that entails going beyond handhelds to direct tie-ins with real-time streaming to smart TVs. But compared to what’s already been achieved by suppliers like nVenue, the climb doesn’t look all that steep.
Pracht notes the volume of real-time information nVenue processes to support its bookmaking clients’ microbetting options with NFL, NBA and MLB games has reached 480 trillion data points annually. “It takes a lot of technology to turn all that data into creating odds on what’s going to happen next,” she said. ●
On a track separate from consumer applications, many entities, especially in sports, are hoping to lower production costs with real-time streaming on the back end to enable centralized video processing that eliminates the need for expensive mobile production trucks and crews at event venues.
For example, Verizon’s business unit, which has taken a leading role in enabling advances in in-venue mobile viewing experiences for sports fans, is also working with sports producers to facilitate remote production through real-time connectivity between venues and distant production studios.
“You can’t have remote production unless there are instantaneous connections,” says Josh Arensberg, chief technology officer for Media & Entertainment at Verizon Business. “My goal is to actually enable the industry to do this and support it across a number of partners.”
Chris Allen, CEO of Red5, another WebRTC-based platform supplier, says, “WebRTC streaming is, I think, finally mature, and everybody is coming around to the fact that this is the right way to do this stuff.” Red5, which Allen says is undergoing remote production trials with multiple sports and news organizations, has partnered with Zixi to facilitate next-gen sports productions through direct tie-ins between the Zixi playout transport platform and remote production workflows that rely on RTIS supplied by Red5.
Putting real-time interactive connectivity to work immediately in live production scenarios has important implications for wide use in distribution down the road, Allen suggests.
“My theory is it’s going to help accelerate a lot of end-user experiences as well,” he says. When producers “start using this stuff in production, then it makes it a lot easier to transfer to the consumer and create interactive apps and everything else as we go.”
❚ Fred Dawson
TV Tech is delighted to reveal the winners of the 2024 Media & Entertainment: Best in Market Awards. The awards recognize standout products and solutions that have been brought to market over the past year and include three Future brand categories: TV Tech, TVBEurope and Radio World. Nominees paid a fee to participate.
This year’s winners are:
MCP - FLEX CHANNELS
Akta
ATELIERE LIVE
Ateliere Creative Technologies
X4 ULTRA
BirdDog Technology
BLACKMAGIC PYXIS 6K
Blackmagic Design
BG-8K-88MA | 8X8 8K UHD HDMI 2.1 MATRIX SWITCHER WITH AUDIO DE-EMBEDDER
BZBGEAR
CANON EOS C80
Canon
GEN-IC
Clear-Com
DEJERO GATEWAY 3220 NETWORK AGGREGATION SOLUTION
Dejero
AXISCTRL
Electric Friends
JKD SOLAR POWERED MARQUEE
Jungle Power
MAXON ONE
Maxon
MK.IO BEAM
MediaKind
QUICKLINK STUDIOEDGE
QuickLink
ARTIMO
Ross Video
BRC-AM7 FLAGSHIP PTZ CAMERA
Sony Electronics
A20-SUPERNEXUS
Sound Devices
TAG OPERATOR CONSOLE
TAG Video Systems
TX DARWIN
Techex
ZIXI MARKET SWITCHING
Zixi
An ability to make informed decisions is key in systems that handle dynamic situations
One of the lesser-realized but very important elements of artificial intelligence is real-time adaptation and decision-making. Where is this important, one might ask? The ability to process information as it arrives and then to make informed decisions without significant delay is an area where AI can be quite valuable.
Familiar applications of real-time adaptation include command-and-control environments, security situations, traffic control or monitoring environments and in autonomous driving (autopilots). Every one of these situations requires intelligent systems to be able to make adjustments in response to dynamic situations and—in most cases—in real time.
Any one of us can probably imagine the number of decisions that must be made in a self-driving vehicle solution. The ability for any system to “adapt in real time” is becoming essential in this fast-moving world—and AI is a primary element in those advancements.
Today, companies must be able to act quickly on data-driven insights to be more agile, proactive and to seize emerging opportunities or respond to sudden market shifts. Amazon is a good example of a business that must be able to move in a certain direction without being burdened by “legacy” components that bind it to restrictive methods that cannot react to sudden changes in marketplace demands.
For time-based analysis, the AI-driven environment might depend on the following: (1) continuous time analysis and (2) discrete time analysis. Each of these methodologies in mathematics, networks and analysis has subcomponents that become applicable to many elements in media, business/financial forecasting, signal and process management and system modeling
(both complexity and accuracy).
Signal processing involves elements such as signal data visualization techniques, preprocessing and filtering techniques, plus physical-based time-domain and frequency-domain analysis (especially in real time), and the use or application of the data derived from signal processing.
Broadly defined, signal processing is a fundamental discipline in data science that deals with the extraction, analysis and manipulation of signals and time-series data. The depth of this science can get very complex and highly dependent, which is why we may see “data scientist-engineer” as a profession grow rapidly in the workplace.
In data science, a signal is defined as a gesture, action, element or sound used to convey information or instructions. From a third-person perspective, signals “transmit” information (such as instructions) by such means—i.e., by gesture, action or element/component, including audio/visual elements such as sound, light or even temperature
changes in the environment, etc.
When placed into the context of signal processing, a “signal” can be any form of information that varies over time or space. Such signals may take many familiar forms, ranging from audio waveforms and temperature readings to financial market data and sensor-activity measurements. AI systems operate by categorizing these signal data forms and learning how they vary in response to stimuli from external sources, including humans, the climate and physical changes in the environment.
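As a concrete illustration of the time-domain and frequency-domain analysis mentioned above, here is a minimal sketch (our own toy example; the 5 Hz test tone and sample rate are arbitrary): a signal is sampled as values that vary over time, and a fast Fourier transform reveals which frequencies it contains.

```python
# A minimal sketch of time-domain vs. frequency-domain analysis; the 5 Hz
# test tone and the sample rate are arbitrary values chosen for illustration.
import numpy as np

fs = 100.0                                  # sample rate, in samples per second
t = np.arange(0, 1, 1 / fs)                 # one second of time stamps
signal = np.sin(2 * np.pi * 5 * t)          # time-domain signal: a 5 Hz tone

spectrum = np.abs(np.fft.rfft(signal))      # frequency-domain magnitudes
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
print(f"dominant frequency: {freqs[np.argmax(spectrum)]:.1f} Hz")   # -> 5.0 Hz
```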
According to IBM, a neural network “is a machine-learning program, or model, that makes decisions in a manner like the human brain.” In our case, these networks are computer systems modeled on the human brain and nervous system, i.e., the “ideal” AI environment. By using processes that mimic how biological neurons work together to identify phenomena, the model can weigh options and arrive at conclusions.
Conclusions are generally reached by using a series of training exercises which in turn “machine-learn” to improve their accuracy over time. As these successive training exercises are fine-tuned for accuracy (and application), they become powerful tools in data and computer science and, in turn, support artificial intelligence. The result is that tasks such as image or speech recognition can take seconds, compared with the hours a human might require using manual identification methods.
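A toy training loop makes the “improve accuracy over time” point concrete. In this sketch (the data, learning rate and target weight are invented purely for illustration), a single weight is nudged by gradient descent on each pass, and the error shrinks with every epoch.

```python
# A toy training loop: learn w so that prediction = w * x matches the target.
# The data, learning rate and "true" weight of 3.0 are made up for illustration.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 3.0 * x                                  # targets generated by the true weight
w = 0.0                                      # initial guess
lr = 0.02                                    # learning rate

for epoch in range(5):
    pred = w * x
    error = pred - y
    grad = 2 * np.mean(error * x)            # gradient of the mean squared error
    w -= lr * grad                           # one training update
    # mse reported here is measured before this epoch's update
    print(f"epoch {epoch}: w={w:.3f} mse={np.mean(error**2):.3f}")
```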
A familiar reference point is Google’s PageRank (PR) search algorithm, used to rank web pages in its search-engine results. PageRank itself is a link-analysis algorithm rather than a neural network, but Google has since added neural models to its search ranking. We note that PR is named after both the term web page and Google co-founder Larry (Lawrence) Page, who founded the company with Sergey Brin.
Fig. 1: An ANN training process uses a set of unit cells (or artificial neurons), depicted by the circles, arranged in an input layer, one or more hidden layers and an output layer. Each neuron is connected to those neurons in the neighboring layers via adaptive weights.
Neural networks are sometimes cataloged as artificial neural networks (ANNs) or simulated neural networks (SNNs). There are several types or forms of neural networks, two of which are discussed here.
Artificial neural networks (ANNs) are a type of machine-learning algorithm that employs artificial neurons—a network of interconnected nodes (see Fig. 1 for a conceptual node diagram). These nodes then attempt to model the human brain’s neural network. Each individual node acts like its own linear regression model—composed of weights, a bias (i.e., a “threshold”) and an output. Linear regression models predict the value of a variable based on the value of another variable (see Fig. 2 for the equation).
Linear regression fits a straight-line model (or surface) that minimizes discrepancies between predicted values and actual output values, which is the same approach used in the training models behind many artificial intelligence solutions, and it looks much like the slope equation from Algebra 1 (y = mx + b). For more information on the mathematics of these principles, follow up on the details of linear equations, least-squares methods and predictive coefficients.
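Here is a minimal sketch of that straight-line fit (the data points are made up for illustration): a least-squares routine returns the slope m and intercept b of y = mx + b, which can then be used to predict a value for a new input.

```python
# Least-squares linear regression: fit y = m*x + b to made-up sample data.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

m, b = np.polyfit(x, y, deg=1)               # least-squares slope and intercept
prediction = m * 6.0 + b                     # predict y for a new input x = 6
print(f"slope={m:.2f} intercept={b:.2f} prediction={prediction:.2f}")
```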
Typically, ANNs will be used to solve complex problems—for example, facial recognition or document summary processing. Essentially, the theory behind the ANN is the teaching of computers to process data in methodologies that mimic the human brain.
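To show what those interconnected nodes amount to in code, here is a minimal sketch of a single forward pass through the kind of network Fig. 1 describes; the layer sizes, weights and input values are arbitrary choices for illustration. Each neuron computes a weighted sum of its inputs plus a bias and passes the result through an activation function.

```python
# A tiny forward pass through a Fig. 1-style network: input -> hidden -> output.
# Layer sizes, weights and the input vector are arbitrary, for illustration only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = np.array([0.2, 0.7, 0.1])            # input layer: three features
W1 = rng.normal(size=(4, 3))             # adaptive weights, input to hidden
b1 = np.zeros(4)                         # hidden-layer biases ("thresholds")
W2 = rng.normal(size=(1, 4))             # adaptive weights, hidden to output
b2 = np.zeros(1)

hidden = sigmoid(W1 @ x + b1)            # each hidden neuron: weighted sum + bias
output = sigmoid(W2 @ hidden + b2)       # single output neuron
print(output)
```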
Disadvantages of ANNs may include: (1) they are computationally expensive and consume massive amounts of training cycles to reach acceptable accuracy; and (2) it can be difficult for them to perfect predictions or categorize data. At this point, generative AI (one of the more familiar AI activities) can be considered “experimental” at best, with improvements coming as applications persist and libraries of training models grow.
A simulated neural network (SNN) is simply another name for an artificial neural network (ANN), considered a subset of machine learning. In short, ANNs are made up of connected nodes, or artificial neurons, that are loosely based on the neurons in the brain.
In our AI category, “continuous time analysis” refers to studying systems where changes occur smoothly over an uninterrupted time interval. Contrary to “continuous” is “discrete time analysis,” which examines systems where changes are only observed at specific, discrete points in time, essentially treating time as a series of intervals rather than a continuous flow.
The nature of the problem to be solved and the degree (amount, type and frequency) of data being analyzed are determining factors when choosing between the discrete- and continuous-time analysis approaches.
Fig. 2: Linear regression in machine learning is a statistical method used to model the relationship between a dependent variable and one or more independent variables. The aim is to find a linear equation that best describes this relationship, allowing the system to make predictions based on new data.
Discrete-time models are often the preferred choice because of their computational ease; however, continuous-time models can provide a more accurate representation of certain real-world phenomena when applicable (e.g., in self-driving vehicles or other autonomous activities).
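A small sketch illustrates the distinction (the decay model, rate constant and step size are arbitrary choices of ours): the same first-order process modeled in continuous time via its closed-form solution and in discrete time via updates applied only at fixed intervals.

```python
# Continuous-time model dx/dt = -k*x (sampled closed-form solution) versus a
# discrete-time approximation updated once per fixed interval (Euler steps).
import numpy as np

k, x0, dt, steps = 0.5, 1.0, 0.1, 50

t = np.arange(steps + 1) * dt
x_continuous = x0 * np.exp(-k * t)             # continuous-time view

x_discrete = np.empty(steps + 1)               # discrete-time view
x_discrete[0] = x0
for n in range(steps):
    x_discrete[n + 1] = x_discrete[n] + dt * (-k * x_discrete[n])

print(f"largest gap between the two models: {np.abs(x_continuous - x_discrete).max():.4f}")
```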
AI allows leaders of organizations to make
better decisions by using the built-in methodologies employed in many conditioned AI approaches to problems (such as linear regression techniques).
Furthermore, better insights into the solution may be gained by uncovering patterns and relationships that others might have overlooked or only thought they understood. Fundamentally, this is how AI is utilized in the generation or modification of art and images—referred to as “AI art.”
AI art is “any kind of image, text, video, audio or other kind of digital artwork produced by generative AI tools.” Such tools leverage millions of written, visual or aural content samples in reference to the prompts or known images employed when creating AI-generated art. AI art is currently integrated into many, if not all, of the major products from companies including Adobe, Microsoft, Google and more. ●
Karl Paulsen recently retired as a chief technology officer and is a longtime contributor to TV Tech. He can be reached at karl@ivideoserver.tv
Immersive sound is the final journey to the complete auditory experience. Stereo began that journey with the ability to reproduce sounds that would align more with our hearing, but at the outset, audio practitioners seemed to define stereo as two channels of monophonic sound.
I will always remember my first records that separated the instruments into one channel or the other—this did not work for television. When you saw music on television, the instruments were not separated into left or right, but more of a mix in the middle. This may have been because early mixing decks did not have panorama controls.
Once audio technicians ventured into two channels of sound, the problems began immediately with phasing and even to the extreme of phase cancellation. With surround sound, this became even more problematic. Stereo and surround sound cracked the door open for the complete auditory
experience, while the next step, immersive sound, blew the door off its hinges with creative possibilities while solving many technical issues.
For the audio producer/ broadcaster, immersive sound adds an upper layer of sound over the basic surround-sound bottom layer. Technically, when you propagate the upper channels with sound, you have immersive sound.
Immersive sound is easy to produce—begin with atmospheric enhancements so the listener further believes the “being there” experience. The ambience at sports venues is fairly homogenous and omnidirectional, particularly in the upper stratus, so to capture a stable immersive sound base will require a minimum of four microphones spaced some distance apart and some distance from the source.
There is much discussion of the separation of “spaced pairs” of microphones. I happen to like a widely spaced group of microphones because I like a broad sense of dimension— my experience has been that precise placement is not absolute in sports.
With immersive sound, you do not want to over-mix the ambiance/atmosphere into the lower channel—use all of your channels to separate your sound elements and be careful not to drown out the commentators or other voices with ambiance and atmosphere.
Immersive sound will evolve as the practitioners and producers gain knowledge and experience. There are sports that are covered with the athlete in full frame from head to toe, which lends itself to specific sounds in the upper front channels. For example with basketball, the net microphones can be placed in the upper front channels, giving a natural soundscape, because the listener does expect that the sound of the basket and net are above their head.
It is often assumed that if you cannot properly hear the sound field, you cannot properly control the sound field. This is partially true, but consider that the entire immersive sound field should be verified somewhere, such as in a quality-control or transmission room; for simple atmospheric enhancement, visual metering is a valuable asset to the mixer.
Critical listening of a transmission mix may
dictate your level of sophistication in the immersive mix. For example, basic atmospheric enhancements can be metered and monitored in a QC listening room with a soundbar to determine proper levels of sound and balance to the immersive sound encoder.
In a space without overhead speakers, you can hang a couple of overhead speakers or—depending on budgets and time—assemble a temporary immersive mix area for the event. Specifically, at any event where “flight pack” equipment is set up, you should be able to configure speakers to accommodate immersive sound production.
Immersive sound was not possible before digitized audio. Additionally, digitizing the audio and video signal solved the problem of transporting the broadcast signal to the consumer. The introduction of ATSC 3.0 audio standards for immersive sound with Dolby Atmos and its competitor, Fraunhofer’s MPEG-H, along with the proliferation of modestly priced encoding “black boxes” from brands like Linear Acoustic, has allowed for immersive sound to be encoded
on virtually every electronic entertainment production.
Dolby Atmos and MPEG-H decode immersive sound into the home over speakers, soundbars and ear devices and either codec is capable of decoding virtually any listening configuration—not only immersive but surround and stereo, solving the decades-long concern over “the down mix.”
Entertainment formats and technologies
are consumer-driven—no matter how cool the industry thinks a format is, the consumer may think otherwise. The problem for multichannel sound has always been delivering the experience to the home. The early analog “matrixes” from Dolby sounded mediocre at best, but opened ears to the possibilities of the enhanced audio experience.
Fast-forward to today and soundbars with audio decoders are the “go to” audio device for the home consumer. The consumer/listener can tell the difference with even the most basic of sound reproduction devices, and enhanced sound clearly stands out with virtually all content—drama, music, variety and sports. Finally, soundbars are significantly easier to install and set up; ask my mom.
My advice has always been, it is never going to be perfect! It only needs to be entertaining, and immersive sound is entertaining. ●
Dennis Baxter is the author of “A Practical Guide to Television Sound Engineering” and the publication “Immersive Sound Production—A Practical Guide” on Focal Press. He can be reached at dbaxter@dennisbaxtersound.com or at dennisbaxtersound.com
Looking back, and ahead, at the TV industry’s technical leaps forward
While going through items I’d saved from my former apartment in Los Angeles, I came across a copy of TV Technology with my first article, dated November 1984. It included my user report on the Abekas 52 digital-effects unit I’d selected for KSCI, where I was working at the time. Looking through that issue, the articles and the ads showed how much broadcast technology has changed and how some things remain the same.
One thing that hasn’t changed in 40 years is the demand for spectrum. The article, “TV, Land Mobile Vie for UHF,” was about the Federal Communications Commission giving the Los Angeles Sheriff’s Department the use of Channel 19. As I was the chief engineer at KSCI, which was on Channel 18, I was concerned about that. As a consequence, not only was this issue the one with my first article, but I was also quoted in the spectrum article: “ ‘We are very concerned about it,’ said Doug Lung, CE at KSCI TV/Channel 18 San Bernardino, CA. ‘How are they defining ‘undue’ interference?’ ”
I added that when Channel 19 was used by the Los Angeles Olympic Coordinating Committee during the 1984 Summer Olympics, I noticed intermittent herring-bone interference on a high-quality demodulator and mild interference on a Sony receiver.
Other articles in that issue that may bring back memories include the front-page article “MTS Use Surveyed,” on stations’ stereo TV plans, and a guest editorial by George E. DeVault
Jr. (president and general manager of WKPT-TV and then-chairman of the NAB’s UHF Committee) titled “UHFers Fight for True Parity.”
The comments are interesting in that they point to issues with cable carriage of UHF stations and competition from out-of-market “superstations.” DeVault Jr. was also concerned that the presence of LPTV stations would limit full-power stations’ ability to improve existing service through the use of translators.
I recall that UHF TV stations didn’t get as much respect as the established VHF stations back then. Since the DTV transition, broadcasters have found UHF works better given the poor performance of indoor VHF antennas and the huge amount of noise in the VHF band coming from motors, switching power supplies, solar inverters, LED lights and other electric devices.
Greg Best’s article, “Improve Specs for MTS Xmtr,” described how to optimize analog-TV transmitter performance for stereo and how to test stereo performance. Hans Schmid’s article, “Nonlinear Waveform Distortion,” explained the different types of distortion that can occur when processing or transmitting analog video.
There were no ads for TV transmitters in that issue! However, Modulation Sciences had a two-page spread describing its TV-stereo generator. Broadcast Engineering had a much smaller ad with its TV-stereo generator.
Rummaging through items from his former apartment, our writer found his very first TV Tech article from November 1984. Check the online version of this column for a full-resolution view with readable text.
Looking back at the changes over the last four decades, two things stand out as contributors to the transformation in TV broadcasting—computers, including digital processing, and connectivity.
That Abekas 52 had circuit boards filled with ICs, including the large TRW A/D converter I mentioned in the article. Comparing that chip with today’s technology, it was bigger than a 2 TB SSD or a complete Raspberry Pi Zero 2 W computer. The Abekas 52 and other digital video-processing gear did amazing things in 1984, but finding a defective IC, bad capacitor or bad solder joint when these units started acting weird could be difficult.
Today, there is no need to convert analog to digital and back again in production. What used to take racks of equipment, patch bays and waveform monitors to maintain can be done on a few local servers or in the cloud. A hardware user interface is still required, but it likely has a generic processor under the hood. Reporters upload content to the cloud over the internet and editors and the production crew at the station can edit it in the cloud and ship it to master control, which increasingly is also operating in the cloud. From there it can go to the transmitter or final microwave link to the transmitter, wherever it is located.
I don’t recall seeing any digital components handling RF in transmitters or microwaves 40 years ago. Transmitter exciters had a large chassis filled with tunable inductors, variable capacitors and over a dozen potentiometers, in addition to transistors and crystal-controlled oscillators.
Fortunately, the inductors and capacitors didn’t require much if any routine adjustment, but the
corrections for differential gain and phase and other distortions often had to be adjusted as the transmitter tube aged or was replaced. There were no high-power solid-state TV transmitters 40 years ago.
Today, an entire digital TV exciter can fit on a circuit board the size of a sheet of paper. With perhaps one or two one-time settings, all adjustments and corrections are done in software, many automatically. A large FPGA chip generates the waveforms and handles the corrections. There is even code available to create an ATSC 3.0 transmitter using an off-the-shelf SDR, as I wrote about in my January 2023 article, “Learning About ATSC 3.0—on the Web or on the Bench.”
The other big change is connectivity. Forty years ago, content distribution had largely moved from terrestrial telco circuits and shipped (“bicycled”) video tapes to satellite. As fiber connections became more affordable, distribution moved from the sky back to the ground. Today, content is increasingly delivered over the internet, using protocols like SRT and RIST. The cost has dropped to the point where internet connectivity can replace microwave links, assuming the reduction in reliability is acceptable. For remote sites, space is an option, using Starlink. As other low-latency, low-earth-orbit satellite constellations are launched, the cost of these connections is likely to drop. An important caveat is that satellite internet bandwidth is likely to remain limited, so this option may not be available in densely populated areas.
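To make the SRT point concrete, here is a minimal sketch, not from the column, of pushing an existing MPEG-TS program over the public internet with ffmpeg driven from Python. It assumes an ffmpeg build that includes libsrt and a receiver already listening at the hypothetical address shown; the file name, host, port and passphrase are placeholders.

```python
# Minimal sketch (assumes ffmpeg built with libsrt is on the PATH and a
# receiver is listening in SRT "listener" mode at the placeholder address).
import subprocess

SRT_URL = (
    "srt://receiver.example.com:9000"
    "?mode=caller"                  # dial out to the listener at the far end
    "&latency=2000000"              # latency budget; ffmpeg expresses this in microseconds
    "&passphrase=change-me-please"  # SRT passphrases must be 10-79 characters
)

cmd = [
    "ffmpeg",
    "-re",               # pace reading at native rate, as a live feed would arrive
    "-i", "program.ts",  # placeholder source; could be any encoder or capture output
    "-c", "copy",        # pass the existing compression through untouched
    "-f", "mpegts",      # keep MPEG-TS framing inside the SRT payload
    SRT_URL,
]

subprocess.run(cmd, check=True)
```

A receiving site runs the mirror-image command with mode=listener, and builds of ffmpeg that include librist accept rist:// URLs in the same place, so the workflow carries over to RIST with little change.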
In 1984, I couldn’t have imagined the technology TV production and broadcasting is using today. I don’t expect to be around 40 years from now, but I expect broadcasting will change more in the next 40 years than it has in the last 40.
ATSC 3.0 allows broadcasting to merge with internet distribution platforms through virtual channels. At some point in the future, the user may not know whether the program they are watching is coming from a broadcast tower or the internet.
WHAT’S NEXT?
Of course, 40 years from now, will we still be using the internet to deliver video and audio or will it look as out-of-date as AOL, CompuServe and dial-up connections do now?
One thing that hasn’t changed in 40 years is TV stations still require a transmitter with sufficient power and an antenna at sufficient height to reach viewers. Forty years from now, will broadcasters still need their own transmitter and antenna?
I’m interested in hearing what those of you who do expect to be around 40 years from now expect TV broadcasting to look like in 40 years! Email me at dlung@transmitter.com.
The 28-70 mm F2 G Master is Sony’s first zoom lens with a constant F2 aperture and the 77th lens in the Sony E-Mount lineup. This full-frame lens offers a versatile focal range from 28 to 70 mm while delivering prime-like bokeh with its constant F2 aperture. Despite its wide aperture and zoom range, the 28-70 mm F2 G Master remains compact, lightweight and well-balanced, making it well-suited for photography and video applications. Its combination of zoom range, large aperture and compact design makes it an innovative and versatile lens for portrait, sports, wedding, event and video professionals.
The lens produces extremely sharp corner-to-corner results throughout its entire zoom range. The high-resolution output is made possible by the three XA (extreme aspherical) elements and three aspherical elements built in to minimize aberrations. The lens also features a floating focusing system that helps maintain internal stability. www.pro.sony/ue_US
The Alexa 265, ARRI’s next-generation 65-mm camera, combines a small form factor with a revised 65-mm sensor. Designed based on feedback from users of the Alexa 65, the 265 delivers improved image quality through 15 stops of dynamic range and enhanced low-light performance. The camera features the same LogC4 workflow, REVEAL Color Science and accessories as the ARRI Alexa 35 and offers a new filter system. The Alexa 265 makes 65 mm as easy to use as any other format.
The Alexa 265 camera body is based on the compact Alexa 35. Even though it contains a sensor three times as large as the Alexa 35’s, the new camera is only 4 mm longer and 11 mm wider. This body design means the Alexa 265 is less than one-third of the Alexa 65’s weight—7.27 pounds vs. 23.15 pounds (3.3 kilograms vs. 10.5 kilograms)—and employs ARRI’s latest cooling and power-management technologies. www.arri.com/en
Proton Camera Innovations’ new Proton 4K, which is being dubbed “the world’s smallest 4K broadcast-quality camera,” measures 1.1” x 1.1” x 1.3” (28 mm x 28 mm x 33 mm) and weighs 1.3 ounces, enabling 4K capture in spaces and setups previously believed to be inaccessible. The camera relies upon Proton Camera’s proprietary Polaris imaging chip and offers a 97-degree wide-angle view and the ability to use lenses from 35 degrees to 124 degrees to capture distortion-free visuals in various shooting conditions.
Its ultra-low 6 W power consumption and quarter-inch thread for easy mounting make the Proton 4K well-suited for drones, remote mounts and portable rigs, enhancing its versatility for cinematographers, directors and broadcasters. The camera also offers on-board stereo audio and a tally light.
www.proton-camera.com
Amazon Nova is a new generation of foundation models (FMs) available in Amazon Bedrock. The lineup includes Amazon Nova Micro (a very fast text-to-text model); Amazon Nova Lite, Pro and Premier (multimodal models that can process text, images and videos to generate text); Amazon Nova Canvas (which generates studio-quality images); and Amazon Nova Reel (which generates studio-quality videos).
Amazon Nova Reel is a state-of-the-art video-generation model that allows customers to easily create high-quality 6-second videos from text and images and will support the generation of videos up to 2 minutes long in the coming months, Amazon said. Nova Reel is designed for content creation in advertising, marketing or training, Amazon said. Customers can use natural language prompts to control visual style and pacing, including camera motion, rotation and zooming. www.aws.amazon.com/ai/generative-ai/nova
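Since Nova Reel is driven entirely by a prompt and an asynchronous Bedrock job, a minimal sketch of what that looks like from Python may be useful. The bucket, region and prompt below are placeholders, and the request schema follows AWS’s published Nova Reel examples as an assumption, so verify the exact field names and model identifier against the current Bedrock documentation.

```python
# Minimal sketch (assumptions: a Bedrock-enabled AWS account with Nova Reel
# access, boto3 installed, and an S3 bucket for the rendered clip). The
# modelInput schema mirrors AWS's published Nova Reel examples and should be
# checked against current documentation before use.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

job = bedrock.start_async_invoke(
    modelId="amazon.nova-reel-v1:0",  # assumed model identifier
    modelInput={
        "taskType": "TEXT_VIDEO",
        "textToVideoParams": {
            # Natural-language prompt controlling style, pacing and camera motion
            "text": "Slow dolly-in on a rain-soaked city street at dusk, cinematic lighting",
        },
        "videoGenerationConfig": {
            "durationSeconds": 6,   # clips are currently 6 seconds long
            "fps": 24,
            "dimension": "1280x720",
        },
    },
    outputDataConfig={
        "s3OutputDataConfig": {"s3Uri": "s3://example-bucket/nova-reel-output/"}
    },
)

# The call returns immediately; poll get_async_invoke() and pick up the MP4
# from the S3 prefix once the job reports completion.
print(job["invocationArn"])
```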
Version 10.12 of Lawo’s mc² audio production consoles features significant improvements in channel management, Waves integration, user experience and system security. One key update is the new Strip Assign page, which offers a modern, intuitive interface that makes assigning channels to fader strips quick and easy. Users can assign single or multiple channels in one step and effortlessly reassign, duplicate or swap channels between strips. The page also allows for direct customization of channel labels and fader strip colors.
Version 10.12 also brings substantial improvements to the integration between mc² consoles and Waves systems through the new ProLink protocol. This update replaces the older RFC protocol and enhances communication between mc² consoles and Waves SuperRack 14 systems. Label synchronization is now automatic, reducing manual entry errors. The new access channel linking feature allows engineers to open a connected Waves Rack with a single button press on the mc² console.
www.lawo.com
Virtual Placement 7.7 offers dynamic and automated graphics support with tools for field-goal targeting during live football games and several automated features for when teams enter the “red zone,” giving broadcasters and venue operators tools to enrich storytelling and deliver visual clarity. The new Field Goal Target Tool allows broadcasters to show precise, team-specific target lines. Storing predefined field goal ranges for home and away teams, the tool provides accurate placement and highlights each team’s range. Enhancements to the Red Zone Tool include a new outline option, with customizable textures, colors, sizes and opacities for visually compelling highlights of the red zone. Tailored to the NFL’s new kickoff rule, the feature optimizes the display for the landing zone, providing a fresh approach to on-field graphics. Besides boosting audience engagement, the enhancements allow broadcasters to adapt visuals to team branding or broadcast themes. www.chyron.com
Steve Jones Founder & CEO Corrivium
SYDNEY—Corrivium is a live over-the-top streaming services provider focused on delivering high-quality virtual and hybrid events across various industries. We specialize in crafting engaging and interactive experiences for audiences worldwide, combining technical expertise and a deep understanding of digital media with a focus on live OTT and streaming services and high-quality, low-latency broadcasts.
Our clients rely on us for seamless, engaging virtual experiences, and we continuously refine our workflows to meet the rapidly evolving demands of digital media. To meet those demands, Grass Valley’s AMPP emerged as the ideal solution to expand and optimize our capabilities.
We chose AMPP because of its flexibility and the fact that it supports complex live OTT broadcasts without compromising on speed or quality. The platform’s cloud-based infrastructure allows us to scale resources dynamically, which is crucial for projects with fluctuating demands. Previously, managing multiple live broadcasts globally, all with varying, unique requirements, was a challenge. With AMPP, we now handle these demands seamlessly within a unified platform.
Setup was straightforward and AMPP integrated smoothly with our existing workflows. Its cloud-native nature enables our team to access it from anywhere, which is essential for real-time collaboration during high-stakes events.
In terms of implementing and customizing workflows for our clients with AMPP, we have found that it has enabled innovative new workflows. During a recent virtual conference featuring 16 concurrent live rooms, we leveraged AMPP to design tailored workflows for each room. Additionally, we built a customizable master multiview to seamlessly monitor all ingests in real time. This streamlined approach not only minimized our onsite footprint, but also ensured high-quality adaptive bit rate streams, delivering an exceptional viewing experience for every audience member.
Another standout example is our collaboration with Racing and Wagering Western Australia (RWWA). They needed to produce multiple versions of their live racing content for various distribution partners while minimizing latency. After proving that AMPP could meet their requirements during trials, RWWA faced the challenge of deploying a production-ready system within two weeks.
Leveraging our extensive experience with AMPP, we successfully deployed a fully operational system ahead of schedule. Beyond the initial scope, the implementation allowed us to help RWWA optimize workflows. The resulting cost savings far exceeded their initial expectations, showcasing the long-term value AMPP offers.
In terms of driving excellence, efficiency and client satisfaction,
adopting AMPP has helped us significantly improve efficiency and allowed us to expand our ability to meet diverse client requests. AMPP empowers us to tackle ambitious ideas with tailored solutions, no matter the challenge. Its flexible encoding capabilities allow us to ingest and output virtually any video format, ensuring consistently high-quality content. By streamlining our workflows, AMPP lets us focus on delivering successful events—ultimately achieving 100% client satisfaction.
AMPP has also revolutionized our internal workflows, minimizing manual tasks and enabling our team to prioritize creativity. Its intuitive automation tools simplify managing complex transitions and graphics integrations, saving time and reducing errors. The result is a consistently polished product that not only meets but exceeds client expectations.
As live streaming continues to grow, we’re eager to explore AMPP’s advanced features, including Sport Producer X, opening new possibilities for accessible, real-time storytelling. These advanced production tools have the potential to further enhance our offerings and expand our business into new areas. ●
Steve Jones is the founder and CEO of Corrivium, where he oversees all aspects of live OTT broadcasts and streaming operations. With over 16 years of experience in media production, he has a keen eye for innovation and a commitment to enhancing viewer experiences through advanced broadcast technology. He can be reached at steve.jones@corrivium.com
More information is available at www.grassvalley.com.
Deployed globally, Broadpeak’s nanoCDN multicast adaptive bit rate (mABR) technology enables operators, service providers and content owners to achieve unparalleled scalability and quality of experience (QoE) for live, multiscreen video delivery.
NanoCDN tackles the challenges linked to peak traffic during popular live content—including video buffering and latency, which can severely impact viewing experiences—by sending only one stream through the network using the same bandwidth, no matter the number of viewers. Leveraging the multicast capabilities of an operator’s network and ABR streaming formats, Broadpeak’s nanoCDN technology enables operators to manage high-demand live traffic while providing outstanding QoE to end users. www.broadpeak.tv
Harmonic’s DMS X is a new software-as-a-service (SaaS)-based version of its Distribution Management System for the company’s XOS advanced media processor. Running on the public cloud, DMS X elevates primary distribution, enabling broadcasters and content providers to securely distribute video content over satellite, managed IP and open-internet delivery networks. With DMS X, content providers can remotely manage advanced playout workflows; download clips, graphics and playlists; monitor edge devices; and distribute high-quality video content to affiliates via internet, satellite and hybrid primary distribution.
Using DMS X, content providers can control 10,000-plus XOS media processors from a centralized user interface. With 24/7 monitoring by a dedicated DevOps team, the system guarantees high reliability. www.harmonicinc.com
OTTlink from Crispin, a Sony Group company, allows broadcasters to translate their over-the-air ad schedule for OTT distribution, prepare for the transition from OTA-centric ATSC 1.0 to OTT-centric ATSC 3.0 and ensure compliance with SCTE standards, all without disrupting their current ad insertions.
OTTlink allows users to maximize the value and reach of their content, empowering them to generate new revenues by translating their on-air advertising schedule into signals for downstream OTT systems. Users can send reliable, accurate and dynamic schedule information to downstream third-party devices for dynamic ad insertion (DAI). Crispin’s OTTlink does not require new hardware or learning new software. Once purchased, it provides back-end communication between Crispin and a third-party system, making it a direct point of entry into OTT commercial insertion. www.crispincorp.com
Dalet Flex is an agile media logistics and content supply chain solution that makes it easier and less costly to produce, prepare, package, distribute and monetize content with powerful metadata and asset management for a variety of media, including OTT and streaming.
The latest updates include new features designed to deliver operational efficiencies and help users expand their business opportunities. The updates allow users to work with content faster by accessing, editing and delivering content while it’s being ingested; offer increased format support, seamlessly managing the latest production and postproduction formats; provide tools for easier review and approval; and make it easier to keep tabs on system usage and related costs. With its cloud-native architecture built on microservices, the Dalet Flex platform is consistently updated to ensure compatibility, security, and the introduction of new features.
www.dalet.com
Brightcove AI Suite, consisting of six all-new AI solutions, leverages artificial intelligence to help customers create and optimize content, grow engagement and monetization and reduce the costs of creating, managing and distributing content. Related to cloud-based/OTT distribution, the AI Content Multiplier uses generative AI to automate repetitive tasks like reformatting content, auto-clipping, auto-summarization and creating highlight reels. The ability to repurpose a single piece of content can help customers effectively distribute additional content across different platforms.
AI Metadata Optimizer auto-generates descriptions and transforms content into searchable and AI-optimizable data sets to make it discoverable and monetizable. It is also designed to keep costs down and quality high with the AI Cost-to-Quality Optimizer solution that optimizes content ingestion and delivery via Brightcove’s Context Aware Encoding (CAE) technology. www.brightcove.com/en
24i Video Cloud platform provides a complete suite of solutions and all the key capabilities required to launch and grow a streaming service— from ingest and transcode to elegant white-label applications for all kinds of connected devices.
The suite includes Appstage, an engaging front-end solution providing everything needed to reach and engage with streaming audiences; Backstage, a real-time UX and integration manager that serves as a go-between for the apps and back-end systems; 24iQ, an engagement-boosting platform that is easy to deploy with many built-in models to optimize personalization for AVOD, SVOD, broadcast and FAST; and Videostage, a back end for services focused on VOD and live events that offers fully cloud-hosted ingest, transcoding, DRM packaging and CDN services or a hybrid deployment to handle the full complexity of multichannel TV with electronic program guide ingest, catch-up and network DVR. www.24i.com
Bernd Brunner Head of Linear Channel DAZN
MUNICH—DAZN is one of the largest, fastest-growing sports streaming services globally, serving over 300 million viewers across 200 markets and delivering 86,000 live events last year. We provide audiences across multiple continents, including North America, Europe, Asia and Oceania, with a wide variety of international live sports coverage—including soccer, boxing, motorsports, martial arts, golf, unmissable action from the NFL, UEFA Champions League and Europa League, and a range of premier women’s sports. Our mission is to deliver as much live sports as possible to global viewers, including one particularly fast-growing audience group: the sports betting community.
We’ve worked with LTN for several years, harnessing the LTN Lift solution to flexibly spin up and manage unique versions of our core linear channels to support days with many concurrent matches, delivered at low latency to third-party betting platforms via LTN’s managed, multicast global IP network.
Today, we use LTN to bring a broader range of live coverage of Europe’s most popular soccer competition to wagering audiences and partner platforms across Germany. With a suite of LTN’s automated linear channel creation and playout technologies, we create and deliver a flexible portfolio of 11 low-latency, livestreaming channel variants to over 1,000 betting locations.
Starting last fall, we began delivering an additional pop-up offering with up to eight concurrent European elite soccer competitions to distribution partners—along with a range of matches from top European soccer leagues. One of our core requirements was latency—the processing and channel-creation latency from when we acquire content until it hits our clients’ platforms needs to be as low as possible. This is especially important when channel-creation playlists include live elements, because in-play wagering involves incredibly demanding streaming requirements.
Fans enjoying the game from their local betting shops need to know their stream is as close to real time as possible. LTN satisfied that requirement by offering very low latency from ingest to playout processing and distribution. LTN Lift is a user-friendly, easy-to-use solution that helps us spin up new linear feeds on the fly and integrate both live and file-based content into a linear channel experience when required.
We also didn’t want to have to build an additional facility to manage these pop-up channels. We can operate these channels from the production suites in our facility, at home or anywhere from our laptops. As a hosted cloud solution, it’s super flexible and that means we don’t have to worry about additional hardware or maintenance requirements. Having a light-touch solution where we can spin up and operate channels only when required is key for us.
While we enjoy the agile software-as-a-service (SaaS) model, we also know that our workflows are backed by LTN’s ultra-reliable IP network, which is a big plus. The monitoring, transparency and support levels are really strong. We trust the underlying technology foundations and we know LTN’s team is there whenever we need them.
After creating our new playout instances and managing our game day live-channel portfolio, we deliver them at low latency to distribution platform provider iGameMedia, which acquires and delivers streams at ultra-low latency to betting outlets across the region.
Meanwhile, LTN helps us manage complex residential and commercial rights agreements and blackout requirements while delivering a number of concurrent live feeds to subscribers on one of the world’s leading streaming platforms.
Plugging into an intelligent IP ecosystem like LTN means we can scale and bring together complex signal acquisition, routing and distribution capabilities, helping us receive produced feeds from partner broadcasters and tailor distribution for multiple downstream partner platforms. We’re big believers in scalable, IP-first technology foundations—and we’re looking forward to driving future growth across new markets and specialized audience groups. ●
Bernd Brunner is the head of linear channel at DAZN. He can be reached at bernd.brunner@dazn.com.
For more information, visit www.ltnglobal.com.
Providing simplified, scalable media processing, CoralV by SipRadius is an advanced virtualized solution within the CoralOS ecosystem, offering flexible deployment for encoding, decoding and secure media transport. Designed for modern IP-based workflows, CoralV integrates the capabilities of the SipRadius CoralCoder, CoralPipe and CoralEdge into a versatile software platform.
Supporting scalable deployments, it delivers ultra-low-latency performance, high-quality encoding and seamless integration for REMI workflows, cloud delivery and real-time collaboration. Whether on prem, in the cloud or in hybrid environments, CoralV provides media professionals with reliable and efficient solutions to meet today’s production and distribution demands.
www.sipradius.com/solutions
The Prima platform for real-time integrated media applications supports a range of applications addressing the needs of modern media organizations, offering a unified suite of services designed for flexible deployment, robust security, economic scalability and centralized management.
Developed from the ground up, PRIMA addresses the industry’s constantly evolving requirements. The current product lineup includes: Playout (designed to streamline channel deployment and management processes for broadcasters, enhancing efficiency and reliability); Workflow (facilitates seamless integration between networked and cloud storage and an intuitive builder for simplified workflow design); and Control (optimizes IP connectivity to offer a crucial control system for seamless device management and signal routing). www.pebble.tv
Ross’s Graphite Cloud is an all-in-one production system with powerful switcher, 3D graphics, clip playback and audio mixing. Available for outright purchase or on a subscription model through the Ross Production Cloud, Graphite Cloud delivers flexible on-prem production power so users only pay for what they need. With a Carbonite production switcher, XPression graphics, a video clip server, and audio mixing with MixMinus, Graphite Cloud provides all the tools required to produce quality productions in the cloud with the same control interfaces and menu systems as any on-prem system from Ross. Those familiar with the Carbonite workflow with Dashboard and TouchDrive control interfaces can operate their systems with complete confidence. Additionally, Graphite CPC is a component of the rapidly expanding Ross Production Cloud ecosystem, which now includes I/O Management, Media Asset Management (MAM), Automated Production Control and more. www.rossvideo.com
Amagi DYNAMIC is an on-demand live UHD playout solution with automated orchestration and infrastructure management that allows users to upgrade live broadcasting systems with on-demand infrastructure, support for running parallel live events and enterprise-level live MCR features.
It’s designed as a cloud-based nonlinear solution for live event management and playout and can be easily scaled up or down without worrying about steep infrastructure costs. Features include: multi-AZ redundancy; real-time graphics; automated provisioning and configuration of infrastructure based on the live event schedule; REST API interfaces, to allow users to control the system from a third-party solution; and a simple-to-use web UI that simplifies live-event control management. It allows users to seamlessly stream live sporting or news events, delivering an immersive and real-time viewing experience to sports enthusiasts worldwide.
www.amagi.com/products/amagi-dynamic
Media Engine is a modern media management service built into the Signiant Platform. It allows users to easily do a federated search across all storage—on prem and in the cloud—and quickly preview and interact with media assets. Results are immediately actionable via the powerful services available on the platform, anchored by fast file transfer.
With Media Engine, Signiant technology indexes existing files on existing storage so users can easily start searching. Unlike legacy MAM systems, there’s no need to reingest media or align with a structured metadata schema. Proxies are accessible to third-party services via API, and the resulting metadata can be returned to the Signiant Platform via an API. File-transfer technology quickly moves those files regardless of where they’re physically located and makes them available via an integration with Media Shuttle, the Signiant SaaS product designed for person-initiated transfers.
www.signiant.com
As NDI becomes more commonly found in broadcast studios worldwide, NDI Bridge is transforming professional cloud productions over IP, enabling seamless collaboration and high-quality video workflows. A part of the expanding NDI Tools suite, this free software empowers users to connect remote production environments over WAN with ease, delivering results that rival traditional methods.
NDI Bridge supports all NDI features, including video, alpha channel, multichannel audio, metadata, KVM support and much more, and simplifies cloud editing by securely transmitting and converting multimedia streams, making it the technology of choice for broadcasters and production teams transitioning to cloud workflows. It integrates seamlessly with other NDI tools, ensuring adaptability for diverse production needs. www.ndi.video
USER REPORT
Andreas Göttl Managing Director Munich Media Operations
Thomas Mitschelen Managing Director of MiTM and Technical Lead of Munich Media
MUNICH—Broadcast technology is constantly evolving and staying ahead of the curve is one of the keys to our success at Munich Media Operations, a media-services company for sports, media and advertising; and MiTM, which provides consulting services, project management, system planning, integration and training to the broadcast industry.
Our mission has always been to embrace innovation and deliver cutting-edge solutions. One example of that innovation has been partnering with TAG Video Systems and deploying their Realtime Monitoring Platform. It has been a key differentiator, enabling seamless integration, unparalleled flexibility and efficiencies across all of our operations.
Before implementing TAG’s solution, we faced significant hurdles. Our previous system, while functional, required extensive manual configuration and often delivered unreliable results. It became clear we needed a more dependable and scalable solution, one that could adapt to the growing complexity of our workflows and provide actionable insights in real time. At the heart of our operations is the efficient monitoring and management of high volumes of transport streams. With up to
40 concurrent games on a busy weekend and streams crossing multiple touchpoints, our previous monitoring system couldn’t keep pace with the increasing demands. We needed a solution that would address our immediate challenges and future-proof our infrastructure.
Industry peers we consulted with vouched for TAG’s robust capabilities and scalability. Early demonstrations highlighted TAG’s ability to handle diverse formats, such as SMPTE ST 2110 and NDI, while providing a cloud-based infrastructure that aligned perfectly with our vision. The flexibility TAG offered in deploying virtual instances on Amazon Web Services (AWS) was a key deciding factor.
From the outset, TAG impressed us with their responsiveness and understanding of our unique requirements. During the evaluation phase, the company worked closely with us to address initial concerns, ensuring a seamless transition to its platform.
TAG’s monitoring platform quickly became integral to our operations. We initially deployed 50 licenses, focusing on monitoring incoming feeds for our primary client, a major German telecom company. As our client base and operations expanded, so did our reliance on TAG. Today, we manage over 500 licenses, monitoring hundreds of streams with unmatched reliability and efficiency.
One standout feature has been TAG’s ability to centralize monitoring and visualization in the cloud. This capability has transformed our workflows, enabling us to operate remotely with minimal on-site requirements. For instance, operators can now monitor streams from anywhere, drastically reducing the need for complex physical setups.
TAG’s flexibility has allowed Munich Media Operations to develop unique cloud-based workflows to monitor multiple remote productions.
TAG’s flexibility also extends to managing unique workflows. For Munich Media and MiTM, we developed a cloud-based solution where every signal crosses our system at least once. TAG’s platform allowed us to visualize, analyze, and alert operators of potential issues, ensuring efficiency in every part of the IP workflow.
Overall, the impact of TAG’s solution on our operations has been very positive because it grows with us: Whether monitoring 30 streams or 500, the system performs consistently, without bottlenecks. In addition, TAG’s cloud-based architecture enables rapid deployment and flexible remote operations—we can move from one production site to another seamlessly, using only monitors and an internet connection. It also provides operational efficiency with automated alerts and in-depth analysis allowing our operators to resolve issues quickly, minimizing downtime and improving service quality.
One noteworthy instance of TAG’s support came early in our partnership. A cultural difference in audio-channel configurations could have negatively impacted a critical live event. TAG’s team not only resolved the issue promptly but also provided detailed guidance to prevent future occurrences.
This level of dedication is just one of many examples of our close collaboration.
As the broadcast industry continues to evolve, we’re confident in our ability to adapt, innovate and lead. TAG solutions empower us to push boundaries, bringing flexibility, efficiency, and reliability to every project we undertake. ●
Andreas Göttl, managing director at Munich Media Operations, can be reached at andreas.goettl@mmo-media.com. Thomas Mitschelen, managing director at MiTM and technical lead for Munich Media, can be reached at thomas.mitschelen_extern@mmo-media.com.
More information is available at www.tagvs.com.
Chris Buchanan Vice President of Engineering Estrella Media
BURBANK, Calif.—Estrella Media operates under the umbrella of MediaCo and centers itself around content that connects with Hispanic communities, serving as a cultural and information bridge and reaching audiences across various broadcast and digital platforms. We are known for our diverse programming of news, entertainment, sports, reality shows and music content designed for our predominantly Hispanic audience and take pride in being one of the largest producers of Spanish-language original video and audio content in the United States, a lot of which is produced in our studios here in Burbank.
Our content is available nationwide, whether it’s through our O&O stations or our affiliate stations and partnerships with other broadcasters, and cable and satellite providers.
As part of our overall strategy for delivering high-quality broadcasting, we needed to enhance cost efficiency and operational flexibility. I was tasked with spearheading the search for a solution to transition our satellite broadcast delivery to an IP-based workflow. That’s when I discovered Zixi, and after finding out that it could leverage our existing hardware infrastructure and that of our affiliates and broadcast partners, it was immediately clear that they were the right choice for our operations.
Within 60 days, we migrated from satellite to IP for our programming distribution, and
within 90 days, we had fully transitioned across all our affiliates and owned-and-operated stations.
Our affiliates experienced minimal disruption as Zixi integrated with their existing operations so they could maintain continuity while benefiting from IP-based advantages. The integration of Zixi with Harmonic IRDs (Integrated Receiver Decoders 8130/8140 models) is ideal because it simplifies the workflow with minimal reconfiguration and no additional middleware.
The transition was simple as we could leverage existing IRDs around the country, and with a straightforward software update and internet connection, IRDs could receive our signals through Zixi, allowing us to deliver to over 100 locations.
Uninterrupted signal delivery is critical for maintaining the high quality of broadcasts. We operate a network connecting our Los Angeles and Dallas broadcast operating centers, and Zixi’s inter-cluster and failover capabilities automatically switch to backup streams or alternative paths to ensure broadcasts stay online even in the case of network disruptions, dual server or circuit failures.
In addition to our two broadcast centers, we’re using Zixi’s ZaaS (Zixi as a Service) to deliver live and linear content to certain target affiliates, including MVPDs and MSOs, without requiring extensive on-premises infrastructure.
The shift away from relying on satellite delivery and distribution has provided significant savings. For example, live soccer games originating in Mexico now incur negligible additional costs, with our IP delivery infrastructure paying for itself within just a few games.
We’ve been using the Zixi platform for about two years now, and the ease of maintenance means we experience minimal operational disruptions—in fact, simple code updates are completed in seconds. In addition, satellite distribution includes significant recurring expenses for leases and maintenance and transitioning to IP has eliminated these costs, fully covering our investment in the new system.
One of my favorite tools is Zixi’s
Zen Master, which gives us centralized access to all enabled devices across our locations. It’s comprehensive, giving us deep visibility into performance metrics for every stream and every endpoint on the network. I like to think of it as a holistic health view of our operations, one that helps us proactively manage our delivery network and quickly analyze and address any issues. It’s the glue that holds everything together.
Zixi’s expertise and guidance throughout have been nothing but professional and responsive. Our cost, operational and technical benefits have been transformative and have modernized our content distribution. Zixi’s technology and team have really set the standard for what IP video transport can achieve. ●
Chris Buchanan is vice president of engineering at Estrella Media, where he leads the technical operations behind the company’s broadcast activities. He can be reached at chrisb@estrellamedia.com.
More information is available at www.zixi.com.
Bitcentral’s Oasis Media Asset Manager (MAM) is a powerful media workflow solution designed to optimize news production for teams, whether in the newsroom or working remotely. Its intuitive, high-performance search engine provides centralized access to raw footage, works-in-progress, completed segments and archived content within a single platform. Oasis integrates effortlessly with top NRCS and NLE vendors and enables federated searches and content transfers across multiple station locations.
With steadfast reliability, Oasis offers versatile storage options, including on-premises, cloud-based, and hybrid storage with Bitcentral’s Fusion Hybrid Storage. It forms a key part of the Core News suite, which also includes Precis for ingest and playout, and Create for editing. www.bitcentral.com
Media Gateway is a live media delivery and distribution solution based on the PlayBox Neo Suite platform. Designed to maximize productivity, it simplifies daily tasks, enabling the entire playout routing and decoding process to be software managed.
With Media Gateway, users can avoid buying, accommodating, powering and maintaining auxiliary processing hardware. Media Gateway’s Screen input allows the capture of video and audio content from the desktop to be delivered as live feeds. Installed on-prem or in the cloud, it enables the reception, transmission and conversion of a range of broadcasting media and formats. Signals can be sourced from, and converted to, SDI, NDI, SRT, UDP or RTP. www.playboxneo.com
Cinegy Air is a cloud-native playout and automation platform enabling broadcast and streaming delivery across multiple channels. The software-based platform provides automation and playout for SD, HD, UHD and 8K in an integrated suite, supporting mixed formats, resolutions and unrendered edit sequences. It is available in several configurations, including Air PRO with integrated Cinegy Title and CG/branding and Air Ultimate with additional subtitling capabilities, plus two new bundles: AirPack and Air Infinity. Air Infinity combines playout with StreamSwitcher for near-instantaneous IP failover capabilities, while AirPack bundles Air PRO, Capture PRO and Multiviewer for complete channel-in-a-box functionality. The latest version now supports AI-powered real-time subtitling, comprehensive monitoring and GPU-accelerated processing for improved efficiency and low operating costs.
www.cinegy.com
SDVI’s Rally Access Workstation is a fully managed solution for editing in the cloud with Adobe Premiere Pro. This new addition to the company’s Rally media supply-chain management platform enables anywhere, anytime access to hosted edit workstations and content in the cloud, dynamically managing associated infrastructure deployment so that media organizations can scale their edit capacity easily and cost-effectively within an automated media supply chain.
Rally Access Workstation is a significant development for both creative and QC operations because it enables tight integration of Adobe Premiere Pro into the media supply chain without incurring data movement in or out of the cloud. www.sdvi.com
StreamMaster is Pixel Power’s highly modular and scalable media processing platform that supports SDI and IP standards. It runs on COTS hardware and can be deployed on-premises, in a data center or in the public cloud. Its powerful tool set enables authoring, transcoding, logo and graphics insertion, content rendering, real-time master control, real-time graphics and DVE.
It also includes several innovative features such as “Junction Preview” and a commercial minutage counter that enables broadcasters to more effectively manage schedules and commercial breaks. The software-defined architecture of the platform enables broadcasters to access the specific features and functional modules they need, and only pay for them when they are used.
www.pixelpower.com
For possible inclusion, send information to tvtechnology@futurenet.com with People News in the subject line.
Telstra Broadcast Services
Telstra Broadcast Services (TBS), a unit of Australian telecom and technology company Telstra, has named 25-year media, telecom and technology industry veteran Karen Clark as CEO. She most recently was chief revenue officer at TBS, leading pre-sales, sales and marketing teams across all regions. Telstra said Clark plans to advance TBS’s global agenda, launch market-ready 5G media solutions, sustain significant growth and showcase its value through a steadfast commitment to customers.
HPA
Joyce “JC” Cataldo has been named head of development and strategy at the Hollywood Professional Association. In her new role, she’ll lead a team tasked with expanding membership, enhancing opportunities and driving strategic growth for the trade group for entertainment industry creative and technical support professionals. In 2017, she was named director of business development at HPA and SMPTE, transitioning full time to HPA in 2020.
Riedel Communications has named Jan Eveleens CEO of its Product Division, succeeding Rik Hoerée, who decided to step back from that post after more than a decade with the communications equipment supplier. Eveleens was named Riedel’s director of business development in 2018, where he was a driving force behind such key initiatives as the restructuring of production and purchasing operations and helped steer the company through global supply-chain challenges.
Hearst Television
Hearst Television has promoted David Callahan to the corporate role of director, broadcast information technology. He had been IT manager at WXII-WCWG Winston-Salem, N.C., and will be based at the NBC-The CW duopoly. He reports to Vice President, Information Technology Kenneth Murphy. The son of a truck engineer at North Carolina’s UNC-TV, Callahan joined WXII-TV in 2002 as a maintenance technician. There, he played a key role in the station’s analog-to-digital transition.
SpectraRep
SpectraRep has named George Molnar chief technology officer. He’ll draw on his 25 years of experience in crisis and emergency management technologies to spearhead SpectraRep’s national public alerting datacast network for public safety, government and commercial communications, the company said. He most recently was senior director of technology at WTOP in Washington. Prior to that, Molnar was director of engineering and management, WARN Project at PBS.
Lynx Technik
Lynx Technik has named Vincent Noyer director of product marketing. Based in Weiterstadt, Germany, he will lead product strategy and product development efforts and deliver go-to-market plans for the company’s signal-processing solutions portfolio. Noyer comes to Lynx from Ross Video where, as director of sports analysis, he introduced Ross’ Piero Sports Graphics solution to the U.S. market, where it quickly became a go-to technology for football telecasts.
Nexstar Media Group has hired Bill Sammon as senior vice president of Washington editorial content for The Hill and NewsNation. In this new post, he is responsible for directing national news content in the nation’s capital, where he will be based. Sammon brings nearly 40 years of experience to the role, most recently as senior vice president and managing editor, Washington, at Fox News. He reports to NewsNation managing editor for news and politics Cherie Grzech.
Ikegami Electronics (USA)
Emilio Aleman has joined Ikegami Electronics (USA) as senior product manager, tasked with promoting the imaging and display equipment maker’s products to customers in the broadcast, security, medical and industrial imaging sectors. Most recently with Ross Video, Aleman is a 40-year industry veteran with experience in optics, electronic imaging, signal processing and transmission. He graduated from the New Jersey Institute of Technology in 1984. He’ll be based in Mahwah, N.J.